For use with 4.4.8+
These deployment strategies assume you're installing v4.4.8 or later. Earlier versions of Codecov Enterprise require different deployment considerations.
Please contact [email protected] if you are deploying an earlier version of Codecov Enterprise.
Being a containerized service, there are numerous strategies that can be utilized to deploy Codecov Enterprise successfully. How Codecov Enterprise is deployed ultimately depends on a combination of available resources, uptime needs, and planned usage.
This documentation is not intended to be an all-encompassing guide for deploying Codecov Enterprise. It should, however, serve to outline a few popular deployment strategies that can meet many common requirements of modern on-premises deployments.
Generally, deployment strategies fall along a spectrum, from very easy to deploy but offering low availability, to more challenging to deploy but offering high availability. Knowing where a particular use case falls along this spectrum can be helpful in determining which deployment strategy should be used.
Unlike other reference pages in this documentation, which can be fairly easily skimmed, it is recommended to read this page in its entirety if you're considering a non-trivial deployment of Codecov Enterprise.
This document will outline three deployment strategies, one of each of the following types:
- Minimal Configuration, minimal availability -- This is great for trial and proof of concept style deployments where an organization is still attempting to determine if Codecov is right for the organization's needs.
- Moderate Configuration, high availability -- This is great for small to midsize deployments that prioritize high availability now, but not necessarily incredibly large scale. This deployment is right for an organization that has decided to use Codecov, wants to ensure high uptime to end users, and desires the ability to move to a larger scale deployment in the future with no data loss and minimal additional effort.
- High Configuration, high availability, high scale -- This is great for deployments that have to scale to meet the needs of many users within extremely large organizations. This deployment strategy is the one employed by Codecov's SaaS offering: codecov.io.
The remainder of this document will outline each of these deployment types in detail, as well as provide any reference materials necessary for configuration and setup.
Trial Purposes Only
For all but the smallest of organizations, it is recommended to use this Deployment Strategy for trial purposes only.
The most straightforward deployment is to deploy Codecov Enterprise on a single server using docker-compose with no external services. If your goal is simply to evaluate Codecov Enterprise, you have no internal requirement to preserve the data generated by Codecov, and you plan to start fresh with a more robust deployment if you purchase Codecov, then this is the recommended Deployment Strategy.
The greatest benefit to this deployment is that a single engineer should be able to produce a usable version of Codecov Enterprise in two hours or less. Therefore, it's great for testing out the product.
This Deployment Strategy utilizes Docker volumes for persistence of the database, archive file/object storage, and caching. Therefore, loss of the underlying server would result in total data loss. For this reason, it is recommended to use this deployment only for the trial and proof of concept stage of using Codecov Enterprise. However, included with this documentation are recommended modifications to make the data generated by this deployment persistent. It is recommended to follow those instructions if you wish to upgrade your deployment in the future without any potential data loss.
Using Cloud Services does not Guarantee Uptime
The below modification only ensures that data generated by Codecov Enterprise will persist past the lifetime of the server on which Codecov Enterprise is deployed.
If you require high availability / guarantees on uptime, please do not use this deployment.
This Deployment Strategy can be modified to support smaller deployments and/or to provide data persistence. This is useful if you do not want to allocate a large amount of resources to the installation, or if you would prefer to retain data generated during a trial/proof of concept period in the event that your organization decides to deploy Codecov permanently.
The following modifications will guarantee data persistence in the event that Codecov Enterprise's underlying server fails. However, this deployment provides no high availability guarantee, and it is recommended to use a different deployment to support that use case.
It is possible to use this deployment with cloud services in order to guarantee data persistence in the event of a loss of the underlying server running Codecov Enterprise. To do this the implementer must:
- Create a Postgres 10 database in their cloud provider (e.g., RDS, Google Cloud SQL)
- Create a cloud storage bucket (e.g., using S3 or Google Cloud Storage) for uploaded report storage.
- Create a redis server for dedicated caching (e.g., using Elasticache)
- Update the codecov.yml to use these services.
- Remove the postgres and redis blocks from the docker-compose.yml
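The cloud-service creation steps above can be sketched with the AWS CLI. This is an illustration only: the resource identifiers, instance classes, storage size, and password below are placeholders, not recommendations, and should be adjusted to your environment and security policies.

```shell
# Postgres 10 database (RDS); identifier, class, and storage are placeholders
aws rds create-db-instance \
  --db-instance-identifier codecov-db \
  --engine postgres \
  --engine-version 10.6 \
  --db-instance-class db.t3.medium \
  --allocated-storage 20 \
  --master-username codecov \
  --master-user-password "<choose-a-password>"

# S3 bucket for uploaded report storage; bucket name is a placeholder
aws s3 mb s3://codecov-archive-bucket

# Redis cache cluster (ElastiCache); id and node type are placeholders
aws elasticache create-cache-cluster \
  --cache-cluster-id codecov-redis \
  --engine redis \
  --cache-node-type cache.t3.medium \
  --num-cache-nodes 1
```

The resulting endpoints and credentials are what you will reference in the codecov.yml services block.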
Example configuration files are provided below. These files assume the user is deploying on a single EC2 instance in Amazon Web Services using GitHub as the repository provider:
```yaml
setup:
  # Replace with the http location of your Codecov
  # https://docs.codecov.io/docs/configuration#section-codecov-url
  codecov_url: "<codecov-url>"
  # Replace with your Codecov Enterprise License key
  # https://docs.codecov.io/docs/configuration#section-enterprise-license
  enterprise_license: "<license-key>"
  # Replace with a random string
  # https://docs.codecov.io/docs/configuration#section-cookie-secret
  http:
    cookie_secret: "<your-cookie-secret>"

github:
  client_id: "<client-id>"
  client_secret: "<client-secret>"
  global_upload_token: "<upload-token>"

services:
  database_url: "<postgres-rds-url>"
  redis_url: "<redis-elasticache-url>"
  minio:
    host: s3.amazonaws.com
    bucket: <bucket-name>
    region: <bucket-region>
    verify_ssl: true
    access_key_id: <aws-iam-access-key>
    secret_access_key: <aws-iam-secret>
```
Note that if you're using GCP with Google Cloud Storage, your minio block should read as follows:
```yaml
# for GCS
services:
  minio:
    host: storage.googleapis.com
    verify_ssl: true
    bucket: <bucket-name>
    region: <bucket-region>
    access_key_id: <gcs-hmac-key>
    secret_access_key: <gcs-hmac-secret>
```
You will need to do some additional work to set up Google Cloud Storage. You can find those instructions in the v4.4.8 changelog.
With this configuration you can completely remove minio from your docker-compose.yml:
version"3" services traefik imagetraefik v1.7-alpine command --api --docker --docker.watch --docker.constraints=tag==web --metrics --metrics.prometheus # NOTE: Toggle the lines below to enable SSL --entryPoints=Name:http Address::80 Compress::true --defaultEntryPoints=http # - --entrypoints=Name:http Address::80 Redirect.EntryPoint:https Compress::true # - --entryPoints=Name:https Address::443 TLS:/ssl/ssl.crt,/ssl/ssl.key Compress::true # - --defaultentrypoints=http,https volumes /var/run/docker.sock:/var/run/docker.sock:rw /dev/null:/traefik.toml:rw # NOTE: Provide the SSL certs in the ./config folder below # - ./config:/ssl ports "80:80" "8080:8080" "443:443" networks codecov depends_on web web imagecodecov/enterprise v4.4.8 #or newer commandweb volumes ./codecov.yml:/config/codecov.yml:ro ports "5000" labels "traefik.tags=web" "traefik.backend=web" "traefik.port=5000" "traefik.frontend.rule=PathPrefix: /" environment STATSD_HOST=statsd DATADOG_TRACE_ENABLED=false networks codecov depends_on statsd worker imagecodecov/enterprise v4.4.8 #or newer commandworker volumes ./codecov.yml:/config/codecov.yml:ro environment STATSD_HOST=statsd DATADOG_TRACE_ENABLED=false networks codecov depends_on statsd statsd imageprom/statsd-exporter v0.6.0 command-statsd.listen-udp= 8125 -statsd.listen-tcp=8125 ports "8125" "9102" networks codecov networks codecov driverbridge
Note that the AWS Access Key and AWS Secret Key must be the credentials of an IAM user that can access the S3 bucket specified in the minio block of the codecov.yml above.
Thanks to the use of Traefik, it is possible to scale both of Codecov's services, the web and worker containers, on a single server. You should consider scaling this way if:
- There is sufficient headroom on the underlying server
- Report processing is lengthier than desired (e.g., status checks are taking longer than usual to complete on Pull Requests)
- Web requests are taking longer than desired to complete (e.g., individual pages are slow to load when viewing Codecov's web front end in the browser).
Scaling is straightforward with `docker-compose scale <service-name>=<container-count>`. For example, to add more workers: `docker-compose scale worker=3`. Since Traefik is used as a reverse proxy in this deployment, new web containers are automatically detected and used. Worker containers communicate via Redis and automatically begin processing tasks after startup.
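Putting that together, a typical scaling session on the server running this deployment might look like the following (container counts are illustrative):

```shell
# Run three worker containers to speed up report processing
docker-compose scale worker=3

# Run two web containers; Traefik detects and load balances them automatically
docker-compose scale web=2

# Verify that the scaled services are up
docker-compose ps
```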
The Following Configuration Changes Should be Avoided
Despite outlining the modifications in detail, it is highly recommended to avoid performing the below steps. They are described here because they are often attempted by users who are new to Codecov Enterprise, and they are guaranteed not to work without an unknown quantity of additional effort.
Given the simplicity of this deployment, it is tempting to do the following:
- Modify the docker-compose.yml and codecov.yml to use cloud services
- Spin up multiple EC2 instances using this configuration
- Put a load balancer in front of all the instances
- Proclaim that you have a high availability deployment.
This approach will not work. This Deployment Strategy handles its own proxying between http/https based services using Traefik, which provides enormous convenience when deploying scalable/multiple services on a single server. However, if you place an external load balancer in front of multiple servers each running Traefik, the reliability of the underlying services is not guaranteed. Unless you're comfortable modifying or removing Traefik and supplying your own reverse proxying solution for the web and minio containers, it is not recommended to attempt the above modification.
Suitable for Most Enterprise Users
This Deployment Strategy is the one generally recommended by Codecov. It strikes an acceptable balance between implementation difficulty, scalability, and availability.
This Deployment Strategy is very similar to Deployment Strategy I with the cloud services modification. However, it also provides an additional high availability / uptime guarantee by running multiple copies of each Codecov Enterprise service and load balancing them.
Use this Deployment Strategy if you require scalability, moderate ease of implementation and maintenance, and high availability to end users. This is the Deployment Strategy that is suitable for most Codecov Enterprise users.
The basic approach is the following:
- Split the services into two groups (web and worker; minio) and deploy each group on its own instance
- Place a load balancer in front of the instances running web and worker containers
- Optionally place a load balancer in front of the instances running minio. This is required if you intend to horizontally scale minio and/or require high availability of minio.
- Create the required cloud services, and update the codecov.yml to use those services.
- Modify docker-compose.yml.
In practical terms, you will need to create a single codecov.yml and two to three different docker-compose.yml files to support this deployment.
An example deployment in AWS would do the following:
- Create at least three EC2 instances: two for web and worker containers and one for minio; more can be created if desired.
- Create an Application Load Balancer for the web and worker containers. Optionally, one can be created for minio.
- Create RDS (postgres 10), Elasticache (redis), and an S3 bucket for the deployment.
- Split the standard docker-compose.yml file into a web+worker file and a minio file and deploy them to the appropriate instances.
Example configuration files are shown assuming a deployment into Amazon Web Services:
```yaml
setup:
  # Replace with the http location of your Codecov
  # https://docs.codecov.io/docs/configuration#section-codecov-url
  codecov_url: "<load balancer path>"
  # Replace with your Codecov Enterprise License key
  # https://docs.codecov.io/docs/configuration#section-enterprise-license
  enterprise_license: "<license-key>"
  # Replace with a random string
  # https://docs.codecov.io/docs/configuration#section-cookie-secret
  http:
    cookie_secret: "<your-cookie-secret>"

github:
  client_id: "<client-id>"
  client_secret: "<client-secret>"
  global_upload_token: "<upload-token>"

services:
  database_url: "<postgres-rds-url>"
  redis_url: "<redis-elasticache-url>"
  minio:
    host: s3.amazonaws.com
    bucket: <bucket-name>
    region: <bucket-region>
    verify_ssl: true
    access_key_id: <aws-iam-access-key>
    secret_access_key: <aws-iam-secret>
```
The web and worker containers can be merged into a single docker-compose.yml and deployed. Most importantly, note that Traefik has been removed:
version"3" services web imagecodecov/enterprise-private v4.4.8 #or newer commandweb volumes ./codecov.yml:/config/codecov.yml:ro ports "8000:5000" "80:5000" environment STATSD_HOST=statsd depends_on statsd restartalways worker imagecodecov/enterprise v4.4.8 #or newer commandworker volumes ./codecov.yml:/config/codecov.yml:ro environment STATSD_HOST=statsd DATADOG_TRACE_ENABLED=false depends_on statsd statsd imageprom/statsd-exporter v0.6.0 command-statsd.listen-udp= 8125 -statsd.listen-tcp=8125 ports "8125" "9102"
If desired, you can split the web and worker docker-compose into separate docker-compose.yml files and deploy them separately. The worker does not require any sort of HTTP access to function properly, as it only communicates via Redis.
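A standalone docker-compose.yml for the minio instance is not shown above. A minimal sketch, assuming the stock minio/minio image with placeholder credentials and a local volume, might look like the following; the exact command depends on whether minio is serving local storage or proxying your cloud bucket, and it is shown here in standalone server mode purely as an illustration:

```yaml
version: "3"
services:
  minio:
    image: minio/minio
    # Standalone server mode with a local export directory (illustrative)
    command: server /export
    ports:
      - "9000:9000"
    environment:
      # Placeholders: use the same credentials referenced in codecov.yml
      - MINIO_ACCESS_KEY=<access-key>
      - MINIO_SECRET_KEY=<secret-key>
    volumes:
      - ./minio-data:/export
    restart: always
```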
Recommended for High Scale
The following deployment is a near replica of what is used for Codecov Enterprise's SaaS offering: codecov.io. It is recommended when high availability and high scale are required, and maintainers don't mind taking on the additional burden of Kubernetes and Terraform.
This Deployment Strategy is recommended for the most demanding of Codecov Enterprise deployments. It is managed using Terraform and requires Kubernetes. Codecov has provided Terraform configuration files supporting this deployment for all three major cloud providers: Azure, AWS, and Google Cloud Platform. Those resources and supporting documentation can be found on GitHub.
There are two main components to consider when determining the performance of the underlying server(s) that will support Codecov Enterprise: web traffic to the Codecov Enterprise frontend, and report processing demands.
Since every organization is different, it's impossible to make hard and fast recommendations that will meet every deployment's needs.
As a general building block, Codecov uses Google's n1-standard-4 for deployment of web containers and the n1-standard-8 for deployment of worker containers (see: Google Machine Types).
By far the biggest performance bottleneck in Codecov is coverage report processing time. It is therefore helpful to consider resource needs in terms of the number of reports processed per minute. If your install will initially process 25 or fewer reports per minute (i.e., 25 or fewer commits running through a Codecov-enabled CI/CD per minute), it is recommended to deploy the equivalent of at least one n1-standard-8 for all infrastructure when using Deployment Strategy I; or two n1-standard-8's for web+worker (to provide high availability) and one n1-standard-4 for minio when using Deployment Strategy II. Once deployed, the performance of each instance can be monitored and scaled up or down as needed.
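The sizing guidance above can be expressed as a rough, illustrative heuristic. This is not an official Codecov sizing formula: the 25-reports-per-minute-per-instance figure is taken from the paragraph above, and the two-instance floor mirrors the high-availability recommendation for Deployment Strategy II.

```python
import math

# Assumption from the guidance above: one n1-standard-8 equivalent
# handles roughly 25 report-processing jobs per minute.
REPORTS_PER_INSTANCE_PER_MIN = 25


def worker_instances_needed(reports_per_minute, min_instances=2):
    """Estimate n1-standard-8 worker instances for Deployment Strategy II.

    min_instances defaults to 2 so that a single instance failure does
    not take down report processing (the high-availability baseline).
    """
    needed = math.ceil(reports_per_minute / REPORTS_PER_INSTANCE_PER_MIN)
    return max(needed, min_instances)


print(worker_instances_needed(25))   # 2 (HA floor)
print(worker_instances_needed(100))  # 4
```

Treat the output as a starting point only; real capacity depends on report size and CI traffic patterns, so monitor and adjust after deployment.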
This is generally a good starting point, and is straightforward to scale horizontally when using Deployment Strategy II. If larger deployments are needed, the Codecov team is happy to provide more detailed recommendations depending on specific needs.