Advanced configuration
Advanced configuration settings for Continuous Delivery help you fine-tune aspects of the software that can impact runtime and operation speed.
Improve job performance by caching Git repositories
If you have large Git repositories, you can enable Git repository caching to improve job performance. By default, repository caching is disabled.
The cached repository's files and data are stored on the container running Continuous Delivery at /<DEFAULT_ROOT_STORAGE_DIRECTORY>/repos. The entire repository is cloned from source control, including branches; therefore, caching requires space equivalent to the size of the uncompressed repository.
Cached repositories are not automatically deleted. However, if Continuous Delivery attempts to read from a cached repository and finds that it is missing object ID references, or that the previous caching attempt failed, the cached version is deleted and the repository is re-cloned.
Enable repository caching, as outlined in the Configuration reference, using:
repo_caching: true
Including the .git directory in cached repositories
The .git directory is automatically omitted when copying cached Git repositories to job hardware. This means that the job cannot perform Git actions on the code. If needed, you can adjust this setting so that the .git directory is included in the cached repository.
Include the .git directory in copies of cached Git repositories sent to job hardware, as outlined in the Configuration reference, using:
include_git_history_for_jobs: true
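For example, both caching settings might appear together in your configuration. This is a minimal sketch assuming the keys live in data/common.yaml alongside the other settings described in the Configuration reference; check that reference for the exact file and nesting:
repo_caching: true
include_git_history_for_jobs: true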
Configure outbound connections through a proxy server
Configure Continuous Delivery to use a proxy server for outbound connections (such as those to VCS providers) by setting the JVM's proxy system properties:
http.proxyHost
http.proxyPort
https.proxyHost
https.proxyPort
You can set these using the java_args setting in common.yaml. For example:
java_args: "-Dhttps.proxyHost=application-proxy.example.com -Dhttps.proxyPort=12345 -Dhttp.nonProxyHosts=localhost"
Add these properties alongside any Java arguments already present for that key, such as memory tuning flags.
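For example, if the key already carries memory tuning flags, the proxy properties sit alongside them (the -Xmx value here is only illustrative):
java_args: "-Xmx4g -Dhttps.proxyHost=application-proxy.example.com -Dhttps.proxyPort=12345 -Dhttp.nonProxyHosts=localhost"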
Add trusted external CA certificates
If you need to provide a trusted CA certificate to external services, use the following workflow to add them to Continuous Delivery’s trust stores. For example, if you have a proxy in front of your VCS provider that presents its own certificate, you can use this method to supply the CA certificate that it was signed by.
- Create a directory in your Bolt project: <project root>/files/trusted_certs.
- Add any number of CA certificate files to this directory. They can be either individual certs or chains. If the CA you need to trust has multiple CA certs in its chain, be sure to provide all of them.
- Stop the application with bolt plan run cd4peadm::ctl action=stop.
- Upload the certificates with bolt plan run cd4peadm::upload_trusted_certs.
- Start the application again with bolt plan run cd4peadm::ctl action=start.
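A condensed shell walkthrough of the workflow above, run from the root of the Bolt project (the certificate file name and source path are illustrative):
# Create the trusted_certs directory and add your CA certificate files
mkdir -p files/trusted_certs
cp /path/to/proxy-ca.pem files/trusted_certs/
# Stop the application, upload the certificates, then start it again
bolt plan run cd4peadm::ctl action=stop
bolt plan run cd4peadm::upload_trusted_certs
bolt plan run cd4peadm::ctl action=start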
If you place the certificate files in <project root>/files/trusted_certs before installing or upgrading, they are uploaded automatically. The cd4peadm::install_from_v4 plan, used to migrate from a 4.x instance, extracts any custom certificates from 4.x and automatically installs them into your new 5.x instance.
Use custom TLS certificates
By default, Continuous Delivery uses automatically generated certificates. Your organization's security policies might require using custom certificates or adding additional certificates. Use these steps to configure custom TLS certificates for the Continuous Delivery web UI connection.
It is recommended to do this after you Install Continuous Delivery, but you could do this during initial setup.
- Obtain a custom certificate and accompanying key pair. You need the entire certificate, including the header and footer, and the private key. Most configurations also need a CA certificate chain. Make sure you have configured the DNS names you want to use for Continuous Delivery. When you generate and sign the CSR, make sure it includes subject alternative names for all DNS names used to connect to the Continuous Delivery host.
- Edit the Hiera section in the data/common.yaml file.
- Combine your certificate and CA to create a certificate chain and add it to Hiera under the ssl_cert_chain key (a sketch of these Hiera settings follows this list).
- Add the CRL associated with the provided CA to the ssl_crl setting in Hiera.
- Copy your private key to a file called key.txt and run:
bolt secret encrypt -- "$(<key.txt)"
This generates an encrypted string.
- Copy the encrypted string from the previous step to the ssl_private_key setting in Hiera.
- Update the configuration with the Hiera changes:
bolt plan run cd4peadm::apply_configuration
- Optional: Use OpenSSL or curl commands to verify your certificates.
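Here is a sketch of how these Hiera settings might look in data/common.yaml; the certificate and CRL contents and the encrypted key string are placeholders, and the exact placement should follow the Configuration reference:
ssl_cert_chain: |
  -----BEGIN CERTIFICATE-----
  ...your server certificate...
  -----END CERTIFICATE-----
  -----BEGIN CERTIFICATE-----
  ...intermediate and root CA certificates...
  -----END CERTIFICATE-----
ssl_crl: |
  -----BEGIN X509 CRL-----
  ...CRL issued by your CA...
  -----END X509 CRL-----
ssl_private_key: "<encrypted string from bolt secret encrypt>"
To verify the certificate presented by the web UI after applying the configuration, you can use a standard OpenSSL check such as the following, replacing the hostname with your Continuous Delivery host:
openssl s_client -connect cd.example.com:443 -showcerts </dev/null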
If you want to go back to using the automatically generated certificates, run:
bolt plan run cd4peadm::regen_certificates
To use a custom certificate for your Continuous Delivery SAML SSO configuration, refer to Configure SAML.
Enable compiler maintenance mode
You can tell Continuous Delivery to skip offline or unavailable compilers and replicas when deploying code.
You must manually monitor the status of your compilers and replicas to ensure they're in sync with the primary server. If a compiler or replica is out of sync, you'll need to manually deploy code to that compiler or replica.
To enable this setting:
- In the Continuous Delivery web UI, navigate to Settings > Puppet Enterprise.
- Locate the PE instance you want to configure and click More actions > Edit integration.
- In the Compiler maintenance mode section, enable Ignore unavailable compilers or replicas when deploying code.
- Click Save changes.
Use the Code Manager API GET /v1/deploys/status endpoint to make sure your compilers and replicas are in sync with the primary server. The file-sync-client-status portion of the response contains all servers with code synced. In the deployed array for each server, compare the deploy-signature and date for each deployment. The deploy-signature is the hash of the Git commit that was last synced to the server. If a compiler or replica has a different hash than the primary, you must Deploy code manually to the desynchronized compiler or replica.
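For example, you can query the endpoint with curl from the primary server. This sketch assumes a typical PE Code Manager setup (port 8170, an RBAC token from puppet-access); <PE_PRIMARY_HOSTNAME> is a placeholder, so adjust the details for your environment:
curl --cacert "$(puppet config print localcacert)" \
  --header "X-Authentication: $(puppet-access show)" \
  "https://<PE_PRIMARY_HOSTNAME>:8170/code-manager/v1/deploys/status"
Compare the deploy-signature values reported for each server in the response against the primary server's value.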
Configure a custom firewalld zone
If you are using firewalld alongside Continuous Delivery 5.x, it should work out of the box. However, if you are using a custom firewalld zone, you must make sure that masquerading is enabled for that zone and that you have added the interface created by your chosen container runtime, like so:
firewall-cmd --zone myZone --add-interface docker0
The runtime interfaces, by default, are cni-podman0, podman0, or docker0. You can check the name of the Podman interface using:
podman network inspect podman --format "{{.NetworkInterface}}"
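If masquerading is not already enabled for the zone, the standard firewall-cmd option enables it (using the same illustrative zone name as above; add --permanent to make the change persist across firewalld reloads):
firewall-cmd --zone myZone --add-masquerade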