Troubleshooting Continuous Delivery

Use this guidance to troubleshoot issues you might encounter with your Continuous Delivery installation.

If your PE instance has a replica configured for disaster recovery, Continuous Delivery is not available if a partial failover occurs. Learn more at What happens during failovers in the PE documentation. To restore Continuous Delivery functionality, promote the replica to be the new primary server.

Update your resolvable hostname

The resolvable_hostname setting in the Continuous Delivery Hiera config controls the hostname that outside applications use to make requests to Continuous Delivery. These requests include OAuth and webhook traffic from source control providers, as well as calls from Puppet Enterprise's orchestrator. The hostname must be accessible to any external service that you need to integrate with Continuous Delivery.

When installing or migrating Continuous Delivery, you provide a resolvable hostname. To change this setting later:

  1. Edit the data/common.yaml Hiera file in your Bolt project and update the value of the resolvable_hostname setting to the new host.
  2. Copy the protocol, host, and port of your existing webhooks, for example https://my-cd4pe.net:8000. This can be found at the bottom of the Manage Pipelines window for any control repo or module. You need this information in a later step.
  3. Update certificates to use the new hostname.
    • If you are using the default auto-generated SSL certificates for NGINX, use bolt plan run cd4peadm::regen_certificates. This creates new certificates containing the new hostname and adds them to the Hiera configuration.
    • If you are using your own custom certificates, you need to issue a new leaf certificate using the new hostname and supply it in the Hiera data.
  4. Run bolt plan run cd4peadm::apply_configuration to update all relevant settings in the app and install the new certificates.
  5. Update webhooks in your VCS providers using one of the following methods:
    • Update webhooks manually for each repo in your VCS providers.
    • Attempt to update webhooks automatically. From the Webhooks tab of the Settings page of the root console, enter the protocol, host, and port that you copied earlier and click Update webhooks. Continuous Delivery attempts to find any webhooks containing the URL you provided and updates them to use the new resolvable hostname specified in your data/common.yaml file. This process may not be reliable, so updating your webhooks manually might be safer.
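The Hiera edit in step 1 can be sketched as follows. This is a minimal illustration that assumes resolvable_hostname is a top-level key in data/common.yaml and uses hypothetical hostnames; adjust it to your file's actual layout. The plans from steps 3 and 4 are shown as comments because they must run against a real installation.

```shell
# Minimal sketch: update resolvable_hostname in data/common.yaml.
# Assumes a top-level resolvable_hostname key; hostnames are hypothetical.
mkdir -p data
cat > data/common.yaml <<'EOF'
resolvable_hostname: "old-cd4pe.example.com"
EOF

NEW_HOST="new-cd4pe.example.com"
sed -i "s/^resolvable_hostname:.*/resolvable_hostname: \"${NEW_HOST}\"/" data/common.yaml
grep resolvable_hostname data/common.yaml

# Then, from the Bolt project (requires a live installation):
#   bolt plan run cd4peadm::regen_certificates    # only if using auto-generated certs
#   bolt plan run cd4peadm::apply_configuration
```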

Look up a source control webhook

Continuous Delivery creates a webhook and attempts to automatically deliver it to your source control system when you add a new control repo or module to your workspace. You can look up this webhook if you ever need to manually add (or re-add) it to your source control repository.

  1. In the Continuous Delivery web UI, click Control Repos or Modules, and select the control repo or module whose webhook you want to view.
  2. In the Pipelines section, click Manage pipelines.
  3. If you're using pipelines-as-code, click Manage in the web UI. This temporarily converts your pipeline code to the web UI format so you can copy the webhook. Don't save any changes, and make sure you switch back to Manage as code before exiting the page. Your pipeline code isn't affected as long as you don't save.
  4. In the Automation webhook section, copy the full webhook URL. This URL represents the unique webhook that connects this control repo or module in Continuous Delivery with its corresponding repository in your source control system.
What to do next
Add the webhook to the corresponding repository in your source control system, according to the source control provider's documentation. Usually, webhooks are managed in the repository's settings.

Manually configure a Puppet Enterprise integration

When you add credentials for a Puppet Enterprise (PE) instance, Continuous Delivery attempts to look up the endpoints for PuppetDB, Code Manager, orchestrator, and node classifier, and it attempts to access the primary SSL certificate generated during PE installation. If this information can't be located, such as in cases where your PE instance uses customized service ports or your PE infrastructure servers are running a custom environment instead of production, you must enter it manually.

This task assumes you have completed the steps in Add your Puppet Enterprise credentials and have been redirected to the manual configuration page.

  1. In the Name field, enter a unique friendly name for your PE installation.
    If you need to work with multiple PE installations within Continuous Delivery, the friendly names help you differentiate which installation's resources you're managing.
  2. In the API token field, paste a PE access token for your "Continuous Delivery" user. Generate this token using the puppet-access command or the RBAC v1 API.

    For instructions on generating an access token, see Token-based authentication in the PE documentation.

  3. In the five Service fields, enter the endpoints for your PuppetDB, Puppet Server, Code Manager, orchestrator, and node classifier services:
    1. In the PE console, go to Status and click Puppet Services status.
    2. Copy the endpoints from the Puppet Services status monitor and paste them into the appropriate fields in Continuous Delivery. Omit the https:// prefix for each endpoint, as shown in the table below:
      Service                 PE console format                 Continuous Delivery format
      PuppetDB service        https://sample.host.puppet:8081   sample.host.puppet:8081
      Puppet Server service   https://sample.host.puppet:8140   sample.host.puppet:8140
      Code Manager service    https://sample.host.puppet:8170   sample.host.puppet:8170
      Orchestrator service    https://sample.host.puppet:8143   sample.host.puppet:8143
      Classifier service      https://sample.host.puppet:4433   sample.host.puppet:4433

      Note: Use port 8170 for Code Manager in Continuous Delivery.
      The Puppet Server service is used for impact analysis, among other processes. For PE installations that include compilers or load balancers in their architecture, it is strongly recommended to run impact analysis tasks on a compiler or load balancer instead of the primary server. To do this, in the Puppet Server service field, enter the hostname of the compiler or load balancer followed by :8140, for example loadbalancer.example.com:8140.
  4. To retrieve the primary SSL certificate generated when you installed PE, run:
    curl https://<HOSTNAME>:8140/puppet-ca/v1/certificate/ca --insecure
    Replace <HOSTNAME> with your PE installation's DNS name.
  5. Copy the entire certificate (including the header and footer) and paste it into the CA certificate field in Continuous Delivery.
  6. Click Save Changes.
  7. Optional: After the main PE integration is configured, go to Configure impact analysis.
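Stripping the https:// prefix in step 3 is plain shell string manipulation; this sketch uses the sample hostnames from the table above.

```shell
# Convert PE console endpoints to the format Continuous Delivery expects
# by dropping the https:// prefix (sample hostnames from the table above).
for url in https://sample.host.puppet:8081 https://sample.host.puppet:8140 \
           https://sample.host.puppet:8170 https://sample.host.puppet:8143 \
           https://sample.host.puppet:4433; do
  echo "${url#https://}"
done
```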
What to do next

If you want code deployments to skip unavailable compilers, go to Enable compiler maintenance mode.

Restart Continuous Delivery

Restarting Continuous Delivery is an appropriate first step when troubleshooting.

  1. To restart Continuous Delivery, run:
    bolt plan run cd4peadm::ctl action=restart

Stop Continuous Delivery

In rare circumstances, you might need to shut down, or force stop, Continuous Delivery.

  1. To stop Continuous Delivery, run:
    bolt plan run cd4peadm::ctl action=stop
  2. To start Continuous Delivery after a force stop, run:
    bolt plan run cd4peadm::ctl action=start

Logs

Display the logs for Continuous Delivery.

Run the following plan on your Bolt runner to view Continuous Delivery logs:

bolt plan run cd4peadm::logs

Use bolt plan show cd4peadm::logs for information on gathering logs for other sub-components of Continuous Delivery.

Trace-level logging

To enable or disable trace-level logging, update the containers.pipelinesinfra.log_level setting in the data/common.yaml Hiera file, then run bolt plan run cd4peadm::apply_configuration to apply the change. For example:

containers:
  pipelinesinfra:
    log_level: "trace"

Installation or service restart fails with a default DROP or REJECT rule in iptables

Running Podman with a default DROP or REJECT rule in the FORWARD chain of iptables causes the installation of CD4PE, or a restart of its services, to fail. Either remove the default drop rule or add a rule that allows forwarding to the Podman network. To obtain the network interface name needed to construct such a rule, run:

podman network inspect cd4pe --format '{{.NetworkInterface}}'
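For illustration, an allow rule might look like the following iptables-save style fragment. This is a sketch, not a tested ruleset: the interface name podman1 is an assumption, so substitute the name returned by the inspect command above.

```
*filter
-A FORWARD -i podman1 -j ACCEPT
-A FORWARD -o podman1 -j ACCEPT
COMMIT
```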

Looking up information about Continuous Delivery

Use container runtime commands to access information about your Continuous Delivery installation.

Look up the environment variables in use

Follow the instructions in Generate a support bundle to create a file that includes the environment variables for Continuous Delivery.

To manually list the environment variables in use on your installation, run:

<podman/docker> inspect pipelinesinfra | jq ".[].Config.Env"

For information on using environment variables to tune your Continuous Delivery installation (such as adjusting HTTP and job timeout periods, changing the size of LDAP server searches, or enabling Git repository caching), refer to the Configuration reference.
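To illustrate what the jq filter selects, here is a simulated sketch. The JSON mimics the shape of container inspect output (an array whose entries carry Config.Env as a list of KEY=VALUE strings); the CD4PE_HTTP_TIMEOUT variable is purely hypothetical, and python3 stands in for jq in case jq isn't installed.

```shell
# Simulated inspect output showing the .[].Config.Env structure.
# The CD4PE_HTTP_TIMEOUT variable is hypothetical, for illustration only.
cat > /tmp/inspect.json <<'EOF'
[{"Config": {"Env": ["PATH=/usr/bin", "CD4PE_HTTP_TIMEOUT=120"]}}]
EOF
# Shell equivalent of the jq filter .[].Config.Env above:
python3 -c 'import json; print(json.load(open("/tmp/inspect.json"))[0]["Config"]["Env"])'
```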

Look up your Continuous Delivery version

To print the version of Continuous Delivery running in your installation, run:

<podman/docker> inspect pipelinesinfra | jq ".[].ImageName"

Generate a support bundle

When seeking support, you might be asked to generate and provide a support bundle. This bundle collects a large amount of logs, system information, and application diagnostics.

To create a support bundle, run the following from the root of your Continuous Delivery Bolt project:

bolt plan run cd4peadm::support_bundle

The plan may take several minutes to run, depending on the size of the environment and configured log retention policies. Once the plan is complete, it prints the path to the generated support bundle, which you can send to your support representative.

If you are unable to generate a support bundle because the plan failed, give the bolt-debug.log file in your Bolt project to your support representative, along with any errors that were printed to the console.