Specify where jobs run

Specify whether this job has any special requirements that affect where it can run. Be aware of any prerequisites or dependencies your job needs, and ensure the job hardware has the required software, operating system, and other resources installed before attempting to run the job.

  1. Select which hardware you want to run the job on.

    • Shared hardware: Shared hardware refers to job hardware nodes configured in the Continuous Delivery (CD) root console’s Hardware tab. These nodes are available to jobs across the entire application (all workspaces). Only the root user and super users can configure shared hardware. Shared job hardware uses a shared container image set in the root console; the default image for containerized jobs is gcr.io/platform-services-297419/puppet-dev-tools:puppet8. You can find details on the available commands in the image documentation.

      If you are running your job on shared hardware, you can skip the rest of these steps and go straight to entering your environment variables, if you need to use them.

    • Workspace hardware: Workspace hardware refers to job hardware that has been defined in your workspace in the Hardware tab. Running jobs on this hardware affords users more flexibility for configuration.

  2. If you are running on Workspace hardware, use the Hardware capabilities drop-down menu to select the capabilities that the hardware running this job must have. You can select multiple capabilities from the list. Choose Apply when you have finished selecting.

    A hardware capability is a user-configured grouping of hardware nodes that share a specific capability or commonality. For example, CD provides a built-in capability called "Containerized" that can be populated with nodes that have Docker or Podman installed. You can then set your container-based job templates to run on nodes with the Containerized capability to ensure that they always run on nodes with a container runtime installed. CD’s built-in jobs come pre-set to run on this capability.

  3. If you are running on Workspace hardware, select whether or not to run this job in a container. CD automatically detects and uses either Docker or Podman on the job hardware host if you elect to run the job in a container. If you decide to run the job in a container, select which image to use:

    • Default image: This image is configured in the root console’s Hardware tab. The default image used for containerized jobs is gcr.io/platform-services-297419/puppet-dev-tools:puppet8.

    • Custom image: Use a custom image by specifying its name. Keep in mind that image names that do not include a registry are resolved according to the container runtime’s default behavior (for example, Docker defaults to Docker Hub). For best results, include the registry you want your job’s hardware node to pull from.

      Note that the image you specify here must be available to the job hardware node that runs this job.

  4. Enter the container run arguments you would like to use for your selected container. These are optional arguments that are passed to the container run command when the job is executed.

  5. Optionally, enter a set of environment variables to be injected into the host environment at the time the job is run.

    • On Shared hardware, these environment variables are not passed into the job container.

    • On Workspace hardware, if you want these variables to be available inside a running container, you need to define them here and then add container run arguments that pass them through, e.g., --env VAR to forward the host’s value of VAR into the container.
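As a sketch of the registry advice in step 3, the snippet below shows a fully qualified image reference (the image path is the default from the root console; the docker invocation in the comment is illustrative, not something CD asks you to run):

```shell
# An unqualified name like "puppet-dev-tools:puppet8" is resolved against
# the runtime's default registry (Docker Hub for Docker), which may not be
# where your image lives. A fully qualified reference removes the ambiguity:
IMAGE="gcr.io/platform-services-297419/puppet-dev-tools:puppet8"

# The job hardware node must be able to pull this image, e.g.:
#   docker pull "$IMAGE"
echo "$IMAGE"
```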
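The run arguments from step 4 are appended to the container run command on the job node. A hypothetical sketch (the specific arguments shown are examples, not defaults):

```shell
# Arguments entered in the container run arguments field, for example
# a memory cap and host networking:
RUN_ARGS="--memory 2g --network host"
IMAGE="gcr.io/platform-services-297419/puppet-dev-tools:puppet8"

# The job node would then execute something along these lines:
#   docker run $RUN_ARGS "$IMAGE" <job command>
echo "docker run $RUN_ARGS $IMAGE"
```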
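For step 5 on Workspace hardware, a variable defined for the job is set in the host environment; to surface it inside the container you must also pass a matching run argument. A minimal sketch (the variable name is hypothetical):

```shell
# Defined in the job's environment variables section; set on the host:
export DEPLOY_TARGET=staging

# With Docker and Podman, "--env VAR" (no value) copies the host's current
# value into the container, while "--env VAR=value" sets an explicit value:
#   docker run --env DEPLOY_TARGET <image> ...
printenv DEPLOY_TARGET
```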