- Folder Structure
- Install/Initial Setup
- Local Application Development
- AppConfig Services
- Fargate Services
- GitHub Branch Flow
- AWS CodeBuild/CodePipeline Infrastructure
- CodePipeline Testing Stages
- Version Management
- Frequently Asked Questions (F.A.Q.)
NOTE
This repository should never be used directly; the "Use this template" button should always be used to create your own copy of this repository.
## Folder Structure

This section describes the layout of the `v1` version of this project (a directory sketch follows the list below).
`build`
: Everything that build processes need in order to execute successfully.
  - For AWS CodeBuild, the needed BuildSpec files and related shell scripts are all in this folder.

`env`
: Any environment-related override files.
  - A number of JSON files in this folder are used to override parameters in the various CloudFormation templates (via CodePipeline CloudFormation stages). This is very useful for making environment-specific changes.
  - This folder could also contain other environment configuration files for the application itself.

`git-hook`
: Any Git hooks that are needed for local development.
  - If the main setup script is used, it will help set the hooks up.

`iac`
: Any infrastructure-as-code (IaC) templates.
  - Currently this only contains CloudFormation YAML templates.
  - The templates are categorized by AWS service to make finding and updating infrastructure easy.
  - This folder could also contain templates for other IaC solutions.

`script`
: General scripts that are needed for this project.
  - This folder includes the main infrastructure setup script, which is the starting point for getting things running.
  - This folder could contain any other general scripts needed for this project (except the CodeBuild-related scripts, which always live in the `build` folder).

`src`
: The source files needed for the application.
  - This folder should contain any files that the `Dockerfile` needs to build the application.

`test`
: Any resources needed for testing the application.
  - This project supports Cucumber.js Behavior-Driven Development (BDD) testing by default.
  - Cucumber.js is a very versatile testing solution which integrates well with CodeBuild reporting.
  - Cucumber tests can be written in many different programming languages, including Java, Node.js, Ruby, C++, Go, PHP, and Python.

version root
: All of the files that are generally expected to be at the base of the project.
  - Build-related files, such as the `github.json` placeholder file.
  - Docker-related files, such as the main `Dockerfile` and the `docker-compose.yml` file (if Docker Compose is being used).
  - Node.js-related files, such as the `package.json` file.
  - Miscellaneous files, such as the `.shellcheckrc` and `README.md` files.
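Putting it together, the layout looks roughly like this (an illustrative sketch based on the folders described above, not an exhaustive listing):

```
v1/
├── build/                   # BuildSpec files and related CodeBuild shell scripts
├── env/
│   └── cfn/                 # JSON parameter overrides for the CloudFormation templates
├── git-hook/                # Git hooks for local development
├── iac/
│   └── cfn/                 # CloudFormation templates, categorized by AWS service
├── script/
│   └── cfn/setup/           # Main infrastructure setup script (main.sh, .setuprc)
├── src/                     # Application source used by the Dockerfile
├── test/                    # Cucumber.js BDD tests
├── Dockerfile
├── docker-compose.yml
├── docker-compose-test.yml
├── docker-compose.env
├── github.json
├── package.json
├── .shellcheckrc
└── README.md
```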
## Install/Initial Setup

NOTE
These setup instructions assume you are working with a non-prod/prod AWS account split and that you have already set up the base infrastructure: boilerplate-aws-account-setup

The boilerplate-aws-account-setup repository gets cloned (using the "Use this template" button) for each set of accounts; this allows for base infrastructure changes that are specific to those accounts, without impacting the original boilerplate repository.
- This repository is meant to be used as a starting template, so please use the "Use this template" button in the GitHub console to create your own copy of this repository.
- Since you have the base infrastructure in place, there should be an SSM parameter in your primary region named: `/account/main/github/service/username`
  - This should be the name of the GitHub service account that was created for use with your AWS account.
- You will want to add this service account user to your new repository (since the repository is likely private, this is required).
- During the initial setup, the GitHub service account will need to be given admin access so that it can create the needed webhook (no other access level can create webhooks at this time).
- Once you try to add the service account user to your repository, someone who is on the mailing list for that service account should approve the request.
If you want to use the helper script (recommended), you will need AWS credentials for both your non-prod and production AWS accounts, and you will need the AWS CLI installed.

- Instructions for installing the AWS CLI: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html
- WarnerMedia currently only allows local credentials to be generated from your SSO access.
- You will need to install and configure the gimme-aws-creds tool.
- You will then use that tool to generate the needed AWS CLI credentials for both the non-prod and production AWS accounts.
- You will need to keep track of your AWS profile names for use with the script (see the sketch below).
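As a sketch (assuming your AWS CLI profiles are named `nonprod` and `prod`; the exact `gimme-aws-creds` invocation depends on your configuration):

```bash
# Generate temporary AWS CLI credentials via your SSO access.
gimme-aws-creds

# Verify that each profile resolves to the expected account.
aws sts get-caller-identity --profile nonprod
aws sts get-caller-identity --profile prod
```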
NOTE
You will need either the DevOps or Admin SSO role in order to have enough permissions to set things up.
This can be done via the helper script or manually through CloudFormation. The script takes more setup but then handles more things for you. The manual CloudFormation upload requires more clicking around but less setup since you don't have to locally configure the AWS CLI, etc.
Helper script steps:

- Make sure that you know the names of your AWS CLI account profiles for the non-production and production accounts.
- Retrieve a fresh set of AWS CLI credentials for your non-prod and prod AWS CLI account profiles (using the `gimme-aws-creds` tool). These credentials generally expire every 12 hours.
- Make a local checkout of the repository that you created.
- Once you have the local checkout, switch to the `v1` folder.
- Change to this directory: `./script/cfn/setup/`
- Locate the following file: `.setuprc`
- Open this file for editing; you will see the initial values that the boilerplate was making use of.
- Modify all of these values to line up with the values related to your accounts. It may be useful to look at the `.setuprc` of an existing repository for your account if you are not familiar with the values that you need to fill in.
- Run the following script: `main.sh` (an example run is sketched after this list)
  - The `main.sh` script will use the `.setuprc` file to set its default values.
  - The script will ask you a number of questions, at which time you will have the option to change some of the default values.
- Once you have run through all of the questions in the script, it will kick off the CloudFormation process in both the non-prod and production accounts.
- At this point, CloudFormation will create a setup CodePipeline and an infrastructure CodePipeline. These CodePipelines will set up everything else that your CI/CD process needs. The following environments will be set up: `dev`, `int`, `qa`, `stage`, and `prod`.
- If you need to make changes to any of the infrastructure, you can do so via the CloudFormation templates located in this folder: `v1/iac/cfn`
- You can make your changes and merge them into the `main` branch via a pull request. From this point forward, the CodePipelines will make sure the appropriate resources are updated.
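For reference, a typical helper-script run from a fresh checkout looks something like this (a sketch; the prompts and values will vary by account):

```bash
cd v1/script/cfn/setup/

# Review and update the default values before running the script.
$EDITOR .setuprc

# Answer the prompts; the script then kicks off CloudFormation
# in both the non-prod and production accounts.
./main.sh
```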
If you don't want to go through the work of setting up the AWS CLI locally, you can manually upload the main setup CloudFormation template.
You may want to look at the helper script to make sure you set all the parameters correctly.
NOTE
This method is not recommended as the potential for human error or confusion is higher.
Manual CloudFormation steps:

- Make a local checkout of the repository that you created.
- Once you have the local checkout, switch to the `v1` folder.
- Find the following CloudFormation template: `iac/cfn/setup/main.yaml`
- Log into the AWS Console for your production account.
- Go to the CloudFormation console.
- Upload this template and then fill in all of the needed parameter values.
- Go through all the other CloudFormation screens and then launch the stack.
- Monitor the stack and make sure that it completes successfully.
- Switch back to the `v1` folder and find the same CloudFormation template: `iac/cfn/setup/main.yaml`
- Log into the AWS Console for your non-prod account.
- Go to the CloudFormation console.
- Upload this template and then fill in all of the needed parameter values.
- Go through all the other CloudFormation screens and then launch the stack.
- Monitor the stack and make sure that it completes successfully.
- Once the stacks have created everything successfully, you will need to kick off the CodeBuild orchestrator. This can be done in one of two ways (a CLI sketch follows this section):
  - Make a commit to the `main` branch of your repository.
  - In the primary region of each account, locate the CodeBuild project named `(your project name)-orchestrator` and then run a build (with no overrides).
- At this point, CloudFormation will create a setup CodePipeline and an infrastructure CodePipeline. These CodePipelines will set up everything else that your CI/CD process needs. The following environments will be set up: `dev`, `int`, `qa`, `stage`, and `prod`.
- If you need to make changes to any of the infrastructure, you can do so via the CloudFormation templates located in this folder: `iac/cfn/`
- You can make your changes and merge them into the `main` branch via a pull request.
- From this point forward, the CodePipelines will make sure the appropriate resources are updated.
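Instead of making a commit, the orchestrator can also be started from the AWS CLI. A sketch, assuming a project named `my-project`, a primary region of `us-east-1`, and profiles named `nonprod` and `prod` (substitute your own values):

```bash
# Trigger the orchestrator build in the non-prod account (no overrides).
aws codebuild start-build \
  --project-name my-project-orchestrator \
  --region us-east-1 \
  --profile nonprod

# Repeat with --profile prod for the production account.
```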
## Local Application Development

Since ECS works with Docker, it is recommended to use Docker locally to develop your application. A tool named Docker Compose can make this process easier.

Your local system will need to have the following installed:

- Docker
- Docker Compose
NOTE
By default, the `docker-compose.yml` file in this folder is set up for local development. It works with the local `Dockerfile` when you are doing local development.
The `docker-compose-test.yml` file in this folder is set up for CodeBuild to use for application testing. It works with images pulled from ECR.
- Make any needed changes to the application in the `src` folder.
- Make any needed changes to the `Dockerfile` in this folder.
- If you need to update any of the environment variables, you can update them in the `docker-compose.env` file.
- Make any needed changes to the `docker-compose.yml` file in this folder.
- To build the image locally, run the following command from the same folder as the `docker-compose.yml` file: `docker-compose build`
- Once the image has finished building, it can be spun up by running the following command: `docker-compose up`
- You should then be able to pull up your application in a web browser (e.g., http://localhost:8080/hc/).
- When you are done testing, you can stop the application by pressing `ctrl+c`.
NOTE
When working with Docker, if you make changes to any files that are used in the `Dockerfile`, you need to rebuild the local image using this command: `docker-compose build` (a full iteration loop is sketched below).
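A typical local iteration loop, using the commands above and the default health-check route:

```bash
# Rebuild after changing any files referenced by the Dockerfile.
docker-compose build

# Start the application in the foreground (press ctrl+c to stop).
docker-compose up

# In another terminal, confirm the health check responds.
curl -i http://localhost:8080/hc/
```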
## AppConfig Services

AppConfig provides a standard way to host and distribute versioned configurations for your application. It allows for slow rollout of new versions and can even roll back changes automatically (if a specified error rate is reached).

Many simple applications use environment variables for application configuration. Though this does work, it has the following drawbacks:

- In order to update an environment variable, you need to roll out a new version of the application; this is time-consuming, and the response simply may not be fast enough.
- Things get messy as these variables mix in with the other environment variables (which might not be part of the application configuration).
- Environment variables are basically just strings; though you can put a JSON string into an environment variable, that string can get annoying to manage.
- Only someone who has the access and know-how to deploy the application can make an update to the configuration.
- You cannot use an environment variable to activate a feature after a new version of an application has been deployed; the feature would be activated at the time of deployment.
AppConfig helps solve the above issues by getting all of your application configuration in one place and making it available via AWS SDKs and CLI.
AWS AppConfig currently supports two different types of configuration profiles:
- Freeform: This version allows you to create freeform JSON, Text, or YAML configurations.
- Feature Flags: This version allows you to create a JSON configuration which has flags you can toggle on and off. These flags can also have attributes.
It is nice to have both the Freeform and Feature Flag AppConfig configuration profiles, as they can serve different purposes.
Application configurations and feature flags are very useful tools for modern trunk-based deployment flows. They reduce the need for deployment rollbacks and allow you to fully deploy a new version of an application before you enable a new feature. AppConfig covers all the basics that an application would need for configuration and feature flags.
For this implementation, there are two types of AppConfig configuration profiles for each environment:
- YAML Freeform configuration:
- Used for basic application configuration.
- Things like the application name, API URLs, CDN image root, etc. can be configured using this Freeform configuration.
- Though the example uses YAML, JSON and basic text are also supported.
- We chose YAML for this example to highlight the fact that AppConfig can support different configuration formats.
- Feature Flag configuration:
- This configuration can be used to enable and disable features of the application.
- Unlike the Freeform configuration, what is sent to the application is always JSON.
- When in the console, the flags will appear as toggles with attributes; the raw JSON will not be visible.
- When a feature is activated, all of the attributes associated with that feature will be sent as part of the JSON configuration (an example of fetching a configuration follows this list).
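To see exactly what your application receives, you can fetch a deployed configuration with the AWS CLI's AppConfig Data API. A sketch with placeholder identifiers (substitute your own application, environment, and configuration profile names):

```bash
# Start a configuration session for one application/environment/profile.
TOKEN=$(aws appconfigdata start-configuration-session \
  --application-identifier my-app \
  --environment-identifier int \
  --configuration-profile-identifier my-feature-flags \
  --query InitialConfigurationToken --output text)

# Retrieve the latest configuration; the payload is written to config.json.
aws appconfigdata get-latest-configuration \
  --configuration-token "$TOKEN" config.json

cat config.json
```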
There are a lot of application configuration and feature flag services out there; here are some pros and cons of using AppConfig as your solution.

Pros:
- If you are already working primarily with AWS services, this is another service you can just add to your project. You are already familiar with AWS as a vendor and know how to work with the permissions, etc.
- AppConfig can be deployed using Infrastructure as Code (IaC), so you can deploy and maintain your configurations using GitOps.
- You have the option to do instant deployments or roll them out slowly (to reduce the impact of a potentially bad configuration).
- Rollbacks can be done automatically based on metrics that you configure.
- Having both Freeform and Feature Flag profiles gives you a lot of flexibility in your implementation.
- Since AppConfig works with AWS SDKs and CLI, you could manage AppConfig from your existing CMS by creating an integration.
- Since configurations are versioned, rolling back to a previous revision is easy and it is also easy to look back at changes over time.
- You can create your own custom deployment strategies for getting your configurations deployed.
- Freeform configurations can be hosted either by AppConfig directly or in S3.
- Attributes of Feature Flag configuration profiles can have value validators and can be of different types (strings, numbers, arrays, and booleans).

Cons:

- By default, the best way to make quick changes is via the AWS Console. This means that people who need access to update configurations or toggle flags must have AWS Console access.
- The AWS Console experience for Freeform configuration profiles doesn't do validation, so confirming your changes are valid is up to you.
- The AWS Console experience for Feature Flag configuration profiles does do validation, but depending on what you are validating, some checks happen client-side and others (like regex) happen server-side.
- The AWS Console experience is just a bit odd in general. It doesn't take long to figure out the quirks and get used to them, but this experience needs improvement.
- At the time of writing, code examples and documentation could be improved.
- A lot of feature flag vendors have built-in features to support things like A/B testing. Unless you get creative with AppConfig, you cannot use it for this purpose at this time.
- Though you can set up the same configurations in different regions, unless you want to update the flag in multiple regions each time, you will have to pick a primary region and stick with it. You could have a backup version in another region, but if there was an outage in the primary region, your code would have to be smart enough to switch over to the backup region or keep the last known good version.
- AppConfig uses a consumption-based pricing model. It doesn't seem too expensive, but you will want to make sure your application has some level of caching built in if it is being used at a large scale.
## Fargate Services

This demonstration repository makes use of a single-page Node.js application (Docker) container running on AWS Fargate.
NOTE
The purpose of this application is to demonstrate how to make use of AppConfig with a basic web application. The application should be used as a reference for how to use the Node.js AWS SDK with AppConfig and is not optimized to be used in a production setting.
Some useful details about the sample Node.js application:

- By default, the repository deploys the application to two regions; this helps demonstrate that AppConfig from one region can be used to configure applications in multiple regions.
- The Fargate services in each region are behind a load balancer, and the load balancer is where HTTPS is terminated.
- Route53 is used to split requests between both regions, so traffic is divided roughly evenly between the regions.
- The application supports Basic Auth so that even if the application is made public, the content is not immediately visible to everyone (see the curl sketch after this list).
- You have the option to configure the load balancer to only allow requests from specific CIDR blocks. This allows you to have applications that are only available inside your VPN, etc.
- This is a single-page application, with the exception of one route designated for the simple health check.
- The application pulls in both a Freeform and a Feature Flag AppConfig configuration, since they generally serve different purposes.
- If for some reason the application cannot reach AppConfig, it falls back to a default configuration.
- There is a simple cache built into the application so that each page request does not cause a call back to AppConfig. Though simple, even a basic caching solution can make a huge impact on the cost of using a service such as AppConfig.
- Each environment where the application is running has its own configurations, so features can be enabled and disabled per environment.
- Fargate gets a copy of the image from the region it is running in, so there is region isolation in that regard.
- With the default settings, this repository assumes that production is running in its own AWS account.
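For example, checking a deployed environment's health-check route through the load balancer might look like this (hypothetical hostname and credentials):

```bash
# The -u flag sends the Basic Auth credentials; without them, expect a 401.
curl -i -u "$APP_USER:$APP_PASS" https://app.example.com/hc/
```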
## GitHub Branch Flow

NOTE
Direct commits to the `main` branch are discouraged. Pull requests should always be used to help give visibility to all changes that are being made.
This repository uses a trunk-based development flow. You can read up on trunk-based flows on this website:
https://trunkbaseddevelopment.com
The use of "Conventional Commits" is encouraged in order to help make commit message more meaningful. Details can be found on the following website:
https://www.conventionalcommits.org
`main`:

- This branch is the primary branch that all bug and feature branches will be created from and merged into.
- For the purposes of this flow, it is the "trunk" branch.

`dev`:

- This is a pseudo-primary branch which should always be considered unstable.
- It will be automatically created once the first production deployment happens and will be based off of the release that was just deployed.
- This branch supports an "off-to-the-side" environment for testing changes that may be too risky to deploy into the main flow; for instance, interactions with a new AWS service that are hard to test locally.
- All `feature` or `bugfix` branches can be merged into this branch for testing before being merged into the `main` branch.
- NOTE: The `dev` branch should never be merged into the `main` branch; only `feature` or `bugfix` branches should ever be merged into `main`.
`.*hotfix.*`:

- Branches with the keyword `hotfix` anywhere in the name will temporarily override the main flow, allowing a specific hotfix to get pushed through the flow.
- Once a `hotfix` branch has been deployed to fix the problem, the changes can be copied/cherry-picked back into the `main` branch for deployment with the next full release (sketched after this list).
- All `hotfix` branches should be considered temporary and can be deleted once merged into the `main` branch.
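A sketch of porting a deployed hotfix back into `main` via a regular pull request (branch name and commit SHA are placeholders):

```bash
# Create a branch from the current main to receive the hotfix changes.
git checkout main && git pull
git checkout -b f/ABC-123/port-hotfix

# Apply the hotfix commit(s).
git cherry-pick <hotfix-commit-sha>

# Push the branch and open a pull request into main.
git push -u origin f/ABC-123/port-hotfix
```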
`feature`/`bugfix` branches:

- These branches will be created from the `main` branch.
- Engineers will use their `feature`/`bugfix` branch for local development.
- Feature branch names typically take the form of `f/(ticket number)/(short description)`. For example: `f/ABC-123/update-service-resources` (a checkout sketch follows this list).
- Bug fix branch names typically take the form of `b/(ticket number)/(short description)`. For example: `b/ABC-123/correct-service-variable`
- Once a change is deemed ready locally, a pull request should be used to get it merged into the `main` branch.
- An optional step would be to merge your feature branch into the `dev` branch first to test in the `dev` environment.
- If you do merge your `feature`/`bugfix` branch into the `dev` branch for testing, then once you verify things are good, you would merge your `feature`/`bugfix` branch into the `main` branch via a pull request.
- NOTE: The `dev` branch should never be merged into the `main` branch; only `feature`/`bugfix` branches should ever be merged into `main`.
- All `feature`/`bugfix` branches should be considered temporary and can be deleted once merged into the `main` branch. The pull request will keep the details of what was merged.
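A sketch of starting a feature branch per the naming convention above (ticket number and description are placeholders):

```bash
git checkout main
git pull
git checkout -b f/ABC-123/update-service-resources
```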
- An engineer creates a `feature`/`bugfix` branch from a local checkout of the current `main` branch.
- The engineer then makes their changes and does in-depth local testing.
- The engineer should write any needed application/infrastructure tests related to their changes.
- If the feature isn't fully functional, it is good practice to wrap it in a feature flag so that it can be disabled until it is ready.
- Once things look good locally, the engineer pushes the branch to GitHub.
- In GitHub, a pull request is created against the `main` branch.
- A peer review of the pull request should be done by at least one other engineer.
- Once the pull request is approved, it is merged into the `main` branch.
- This triggers a CodePipeline which builds the Docker image and deploys it to ECR (normally to both a non-prod and a prod account repository for each region).
- Once the Docker image is deployed to all required ECR repositories, the image produced is tagged for the `int` environment.
- The `int` ECR tagging causes the changes to be deployed to the initial integration (`int`) environment.
- If things are approved in the `int` environment, a manual approval in the CodePipeline promotes the changes to the Quality Assurance (`qa`) environment.
- At this point, a GitHub pre-release tag is created and a link to a GitHub changelog is added to the notes of the pre-release.
- Once things are approved in the `qa` environment, a manual approval in the CodePipeline promotes the changes to the Staging (`stage`) environment.
- The `stage` environment allows for one last review before things go to the production (`prod`) environment.
- Now that things are ready for a production deployment, a time and date should be set for the deployment, and all deployment documentation processes and notifications should be completed.
- At the desired time, a manual approval in the `stage` CodePipeline promotes the changes to the `prod` environment in the production AWS account.
- The changes are then deployed to the production account.
- Once things have been successfully deployed to production, the `dev` branch is overwritten with the release that was just deployed to production.
In the above flow, there is one additional environment that changes can be pushed to if they are high-risk. Here are the details:
- Once an engineer is ready to get their changes merged into the `main` branch, they can optionally first choose to create a pull request into the `dev` branch.
- Once their branch is merged into the `dev` branch, a development build CodePipeline will be triggered to build and deploy the `dev` Docker image to ECR.
- The Docker image will get deployed to the same ECR repositories as the main flow, but the `version` tag will have `-dev` added to the end (to help indicate that this image should never be deployed to production). The image will also get tagged for the `dev` environment.
- Since the image is now tagged for the `dev` environment, the changes will be deployed to an "off-to-the-side" environment where they can be tested and verified without blocking the main flow.
- Once the changes look good in the `dev` environment, the engineer can create a separate pull request for their branch into the `main` branch.
- The `dev` branch is wiped out and replaced whenever there is a production deployment. It is replaced with the release SHA that was just deployed to the production environment. This prevents the `dev` environment from becoming a "junk drawer" of failed test branches.
- The `dev` branch and environment should never be considered a "stable" environment.
- NOTE: The `dev` branch should never be merged into the `main` branch; only `feature`/`bugfix` branches should ever be merged into the `main` branch.
## AWS CodeBuild/CodePipeline Infrastructure

This project uses AWS CodeBuild and AWS CodePipeline to get your application deployed. Here we will outline the different components of the deployment flow.
The orchestrator:

- The orchestrator is a CodeBuild project which is triggered by a GitHub webhook.
- This CodeBuild project can be found in the primary region where you set up the infrastructure and has a name that follows this pattern: `(your project name)-orchestrator`
- The orchestrator will examine the changes that were just committed and determine the type of change that was made.
- The changes will be packaged into different ZIP archives and then deployed to the archive S3 bucket.
- The appropriate CodePipelines will then be triggered based on the type of change that was committed.
- The orchestrator creates different ZIP archives; the contents of those archives are managed by `*.list` files which are located here: `env/cfn/codebuild/orchestrator/`
There are two project infrastructure CodePipelines: the setup CodePipeline and the infrastructure CodePipeline.

The setup CodePipeline:

- When the initial setup CloudFormation template runs, it creates a setup CodePipeline.
- This CodePipeline will get triggered within a minute of the first successful CodeBuild orchestrator run.
- This CodePipeline is very simple; its only purpose is to create and maintain the infrastructure CodePipeline.
- This CodePipeline may feel like an extra step, but it is there so that project infrastructure changes can be made easily.
- Updates to the setup CodePipeline should be rare.
- NOTE: If changes need to be made to the setup CodePipeline, then the main setup template will need to be edited and the changes manually run from the AWS CloudFormation console.
The infrastructure CodePipeline:

- This CodePipeline is initially created and maintained by the setup CodePipeline.
- The template that manages this CodePipeline is located here: `iac/cfn/codepipeline/infrastructure.yaml`
- Any environment parameter overrides can be set in the JSON files located in this folder: `env/cfn/codepipeline/infrastructure`
- This CodePipeline will create/maintain all of the base infrastructure for this project. Some examples are:
  - General IAM roles, such as roles for the testing CodeBuild projects, deployment CodePipelines, etc.
  - General CodeBuild projects, such as the CodeBuild projects that the deployment CodePipelines use for testing stages.
  - Deployment CodePipelines for all of the different environments: `dev`, `int`, `qa`, `stage`, and `prod`.
- You can review the CloudFormation parameters in the infrastructure CodePipeline template to see what options are available.
- For example, there is a parameter to turn on a manual approval step for the infrastructure CodePipeline; this is useful for approving changes to the production infrastructure after they have been verified in non-prod.
- You would use this CodePipeline to set up things that are shared by all deployment CodePipelines, or things that will rarely change.
- There is an optional CodeBuild stage that can be activated which allows you to run custom commands, such as triggering a different IaC solution or running an AWS CLI command after things are all updated.
- This CodePipeline is configured to work with up to three different regions and deploys to two regions as standard functionality.
- It is good to have your infrastructure and application deployed in two regions for spreading load, redundancy, and disaster recovery.
- The CodePipeline could do deployments in up to three different regions; however, with each region you add, you also add complexity.
- Though there is a good case to be made for running in two regions for things like disaster recovery, the case for going into more regions gets weaker (as costs will rise for maintenance, data synchronization, etc.).
- The main reason for a third region is that you could switch to a different region if one of your normal regions is going to be down for a prolonged period of time.
- Running in two regions is fairly complex, and one should make sure that is working well before adding the complexity of any additional regions.
The build CodePipelines:

- The build CodePipelines have one purpose: to build and deploy the Docker images to ECR.
- There are two build CodePipelines: the `dev` branch has an isolated build CodePipeline, while the `main` and `*hotfix*` branches use the primary build CodePipeline.
- The build CodePipelines are triggered by the orchestrator.
- Once a build CodePipeline successfully runs, it will trigger the required deployment CodePipeline.
The deployment CodePipelines:

- There is an individual CodePipeline for each environment.
- Because each environment gets a CodePipeline, environments can be added or removed in a modular fashion.
- In the default flow, there are five environments:
  - `dev`: This is an optional, "off-to-the-side" non-prod environment which can be used for testing risky changes (without blocking the main flow). It is triggered when the `dev` build CodePipeline successfully completes.
  - `int`: This is the first non-prod environment in the main deployment flow; it is triggered when the primary build CodePipeline successfully completes.
  - `qa`: This is the second non-prod environment in the main deployment flow; it is triggered by a manual approval in the `int` CodePipeline.
  - `stage`: This is the third and final non-prod environment in the main deployment flow; it is triggered by a manual approval in the `qa` CodePipeline.
  - `prod`: This is the only environment in the production account and the final step in the main deployment flow; it is triggered by a manual approval in the `stage` CodePipeline.
- Each environment has the option to enable application and infrastructure testing stages. These are controlled by parameters in the CloudFormation template.
- Just like the infrastructure CodePipeline, each deployment CodePipeline is configured to work with up to three different regions and deploys to two regions as standard functionality.
- Please see the details for the infrastructure CodePipeline to understand the reasoning for this.
## CodePipeline Testing Stages

There are three application deployment CodePipeline testing stages that can be enabled for both regions. These testing stages run using AWS CodeBuild, which allows for a lot of flexibility in testing.
The three testing stages are:
- Security
- Application
- Infrastructure
By default, all three stages use Cucumber.js to run the tests. Other testing frameworks can be used, but this is the one that is part of this boilerplate.
The Security stage is intended for running any needed security scans of your code. By default, there is a Cucumber.js skeleton testing script in place, as well as a corresponding feature file.
Currently the default test script doesn't do anything; you need to set up the logic for any custom scans that you want to do.
If enabled, this is the first testing stage that runs in the application deployment CodePipeline.
The Application stage is intended for testing the application Docker image and running tests to ensure the image is healthy. It leverages Docker Compose to run the container inside of CodeBuild.
If enabled, this stage would run after the security stage, but before the application is deployed.
The Infrastructure stage is intended for testing the application once it has been fully deployed to the AWS infrastructure. It can test things like making sure the TLS certificate is valid and working, and that you are getting the intended responses, etc.
If enabled, this stage would run after the other two testing stages and after the application has been deployed.
The testing stages are all part of the application deployment CodePipeline template. This template has parameters that can be switched between `Yes` and `No` values for each environment.
The CloudFormation environment JSON files can be used to switch these on or off per environment. These JSON files are located here: `env/cfn/codepipeline/deploy/`
For example, if you wanted to disable security scanning for the `int` environment, you would modify the following file...
`env/cfn/codepipeline/deploy/int.json`
...and then change the `CodeBuildRunSecurityTests` parameter value to `No`. The JSON would look something like this:
```json
{
  "Parameters": {
    "ActionMode": "REPLACE_ON_FAILURE",
    "ApprovalEnvironment": "qa",
    "CodeBuildRunAppTests": "Yes",
    "CodeBuildRunInfrastructureTests": "Yes",
    "CodeBuildRunSecurityTests": "No",
    "ServiceSourceFile": "service.zip",
    "ServiceEnvSourceFile": "service-env.zip",
    "TestSourceFile": "test.zip"
  }
}
```
Once these changes are merged into the `main` branch, the application deployment CodePipeline for `int` will be updated to no longer include this testing stage.
This same process can be used for all three testing stage types in all of the environments, so you can mix and match as needed.
## Version Management

The version of the application can be managed manually or automatically (via the deployment flow).
NOTE
Semantic versioning should be used for maintaining the version of this application. Using the same versioning method across applications also helps with consistency.
Given a version number `MAJOR.MINOR.PATCH`, increment the:

- `MAJOR` version when you make incompatible API changes,
- `MINOR` version when you add functionality in a backwards-compatible manner, and
- `PATCH` version when you make backwards-compatible bug fixes.
- Additional labels for pre-release and build metadata are available as extensions to the `MAJOR.MINOR.PATCH` format.
Since this is a Node.js application, we will leverage the `package.json` file for managing the version number.
- Once you have your changes ready to commit to your `bugfix`/`feature` branch, review the changes and see which semantic versioning level they match up with.
- Open the `package.json` file.
- Update the `version` property in the JSON object to the needed value (see the sketch after this list).
- Commit this change with the rest of your changes.
- Merge your pull request into either the `main` or `dev` branch.
- The build process will notice that you manually updated the version number and will use your version number.
- When your changes are promoted to the Quality Assurance (QA) environment, this version number will be used as the pre-release tag in GitHub: `https://github.com/<organization>/<repository>/releases`
- Once the application is promoted to production, the GitHub pre-release will get promoted to a full release.
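One way to bump the version, assuming npm is installed (`--no-git-tag-version` keeps npm from creating a git tag, since the deployment flow manages release tags):

```bash
# Bump the MINOR level in package.json without creating a git tag.
npm version minor --no-git-tag-version

git add package.json
git commit -m "chore: bump version for upcoming release"
```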
NOTE: It is recommended to manually update the `package.json` file to ensure that thought is given to which semantic versioning level should be updated.
If you merge a `bugfix`/`feature` branch into the `main` or `dev` branch but did not manually update the `package.json` file, the following will happen:
- The build process will notice that you did not update the version number and will automatically increment the `PATCH` level of the semantic version (e.g., `1.1.2` to `1.1.3`).
- When your changes are promoted to the Quality Assurance (QA) environment, this version number will be used as the pre-release tag in GitHub: `https://github.com/<organization>/<repository>/releases`
- Once the application is promoted to production, the GitHub pre-release will get promoted to a full release.
## Frequently Asked Questions (F.A.Q.)

- I don't like all these environments; are they all required?
  - No, not all of the environments are required. That is why each environment has its own deployment CodePipeline; the idea is that you can remove environments.
  - If you look through the CloudFormation templates, you will notice that there are configuration options for environment setup, such as environment names, which one is the initial environment, etc.
  - At minimum, you would need three environments: one primary non-prod environment, one prod environment, and one off-to-the-side environment for testing risky changes.
  - NOTE: In-depth testing of removing environments has not been done; if you try to remove/refactor environments and run into major bugs, please report them so that the boilerplate can be improved.
- Why are there all these testing stages, and why are they failing for my project?
  - Each CodePipeline has the option to test your security, application, and AWS infrastructure using a CodeBuild testing stage (which, by default, uses Cucumber.js).
  - If your test phases are failing (most likely because the boilerplate code has been replaced with your actual code), then you have the following options:
    - Update the Cucumber.js tests to ones that will work with your application and AWS infrastructure (recommended).
    - Replace the Cucumber.js tests with your own test suite/testing solution (this is also fine/recommended).
    - Shut off the testing stages in all environments (which is supported by CloudFormation parameters) and figure out the whole testing thing later (not recommended).
  - Though you can use a product other than Cucumber/Cucumber.js, please note that reporting will break unless your testing suite outputs one of the supported formats.
- Can I turn on testing stages for only certain environments?
  - Yes, the application and infrastructure testing stages can be turned on or off in any environment via parameters in the CloudFormation templates. You can use the environment JSON files for this purpose.
- When making changes to the infrastructure CodePipeline, I don't like the fact that the changes are deployed to the production account immediately; I would like to approve changes first. Is there a way to do that?
  - Yes, you can activate a manual approval step for the infrastructure CodePipeline by using the environment JSON files to override the default value of the `EnableManualApproval` CloudFormation parameter.
  - When things are first set up, you want all the infrastructure to get established in both the non-prod and prod accounts. But as your product matures, you may want to approve infrastructure changes in production (or even non-prod) first, so this feature was added.
- I have added or removed files, but the CodePipelines cannot find them. How do I fix this?
  - The orchestrator CodeBuild uses `*.list` files to know which files it should include in and exclude from the various artifact ZIP archives (allowing you to control when, say, the infrastructure CodePipeline is triggered).
  - Make sure that your `*.list` files are up-to-date with your latest changes, and then get those changes merged in via a pull request; this will trigger a new orchestrator run, and the ZIP archive files will be updated appropriately.
  - If you look at the build logs from the orchestrator CodeBuild, you will see which files are being included in the different ZIP archives.
- When the automatic GitHub patch occurs, it triggers another build, which starts a build loop; how do I fix this?
  - When you first set up the project, the orchestrator is given the ID of a GitHub user that it adds to the GitHub webhook; any changes coming from this user are ignored.
  - If the wrong GitHub ID is provided for the GitHub webhook, then it will get triggered when you do not want it to be triggered.
  - The GitHub ID needs to match the ID of the GitHub service account that is being used by the deployment flow. This username should be in the following SSM parameter: `/account/main/github/service/username`
  - You can find the GitHub ID by using this HTTPS URL: `https://api.github.com/users/<your github user name>` (see the sketch after this list).
  - NOTE: If you notice a build loop happening, make sure to disable the CodePipeline that is in the loop; otherwise, if left to run, you could run up a significant CodeBuild/CodePipeline bill.
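For example, fetching the numeric ID for a (hypothetical) service account (`jq` is optional):

```bash
curl -s https://api.github.com/users/some-service-account | jq .id
```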
- How can I determine if a feature of the deployment flow is configurable?
  - There are many configurable aspects of the CloudFormation templates; the best thing to do is open the template you are interested in and see what parameters it already has available.