Developers often fear database changes because a mistake by anyone on the team can lead to a major outage or even data loss. The stakes are higher when changes are not backwards compatible, cannot be rolled back, or impact system performance. This can erode confidence and slow your team's velocity. As a result, database changes are a common failure point in agile and DevOps.
Databases are often created manually and too often evolve through manual changes, informal processes, and even testing in production. Manual changes often lack documentation and are harder to review, test, and coordinate with software releases. This makes your system more fragile and raises the risk of failure.
The solution is to include your database in your source control and CI/CD pipeline. This lets your team document each change, follow the code review process, test it thoroughly before release, make rollbacks easier, and coordinate with software releases.
Let’s look at an example of how you can include your database in your CI/CD process and how to push a non-backwards-compatible database change successfully. We'll also look at testing your changes, progressive deployments, dealing with rollbacks, issues to look out for, and a few helpful tools.
CI/CD is a cornerstone of modern development and DevOps.
CI—or Continuous Integration—is the practice of merging all working developer code into a shared repository throughout the day. Its purpose is to prevent integration problems by integrating often and early. Commonly, this integration kicks off an automated build and test.
CD—or Continuous Delivery—is the practice of building, testing, and releasing software in short cycles, with the aim of ensuring that a working version of the software can be released at any time.
There are several key requirements to having your database ready for CI/CD. First, the database must be reproducible from scratch using one or more SQL scripts. This means that in addition to a script that creates the initial version of your database, you must also maintain scripts that make all required schema updates to your database.
When you create these scripts, you have two options:
- Create one script per schema object, then update the corresponding script (state based) when making changes to the object.
- Create one original script that creates the entire database schema. Then, create a series of individual change scripts (migration based) for changes.
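With the migration-based option, a common convention is to give each change script a sortable prefix so that execution order is unambiguous. A minimal sketch of the bookkeeping involved might look like this (the file names and function are illustrative, not a real migration library; real tools record applied migrations in a table such as schema_migrations):

```javascript
// Sketch of migration-based bookkeeping. Each change script is a SQL file
// with a sortable prefix, e.g.:
//   migrations/001_create_users.sql
//   migrations/002_drop_enrolled.sql
// The runner compares files on disk against the ones already applied and
// returns what still needs to run, in order.

function pendingMigrations(filesOnDisk, appliedNames) {
  const applied = new Set(appliedNames);
  return filesOnDisk
    .filter((name) => name.endsWith('.sql') && !applied.has(name))
    .sort(); // sortable prefixes guarantee execution order
}

// Example: two migrations exist on disk, one has already been applied.
const files = ['002_drop_enrolled.sql', '001_create_users.sql'];
console.log(pendingMigrations(files, ['001_create_users.sql']));
// → ['002_drop_enrolled.sql']
```

The key property is that the same function, run against any environment, yields exactly the scripts that environment is missing, which is what makes the database reproducible from scratch.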
The second requirement for CI/CD is that the database schema (meaning, those scripts we just mentioned), just like your source code, must live in source control. You must treat your database schema changes as a controlled process just as you do with code.
Lastly, changes that involve removing a database object, such as deleting a column as shown below, can be more difficult to deal with due to the loss of data. Many organizations develop strategies to deal with this, such as only allowing additive changes (e.g. adding a column), or having a team of DBAs that deals with such changes.
Perhaps the best process for database changes and database CI/CD is ensuring you have a collaborative effort between DevOps and DBAs. Make sure your DBAs are part of the code review cycle; they can help to identify issues that only they may know about. DBAs have knowledge of the databases in each specific environment, including database specific dependencies such as ETL load jobs, database maintenance tasks, and more.
Be sure to consult a database SME when setting up your database for CI/CD, and in any migration process, whenever possible. Also follow sensible DevOps practices, such as testing your changes in a test environment, performing backups, mitigating risks, and being prepared for rollbacks.
When you create or update these scripts, and push them to source control, your CI tool (such as Jenkins or Heroku CI) will pull the changes and then:
- Rebuild your database to the newest version of the scripts in a test or staging environment. Since the database is being rebuilt, be sure to export the lookup/reference data, then import it back into the new schema. Although it is possible to export and import transactional data, transactional data is out of scope for this article. You can read more about best practices here if interested.
- Run your tests. For testing your database changes, one possible time saver is to have two sets of tests. The first set is a quick test that verifies your build scripts and runs a few basic functional tests (such as referential integrity, stored procedures unit tests, triggers, and so on). The second set includes migration of transactional data (possibly scrubbed production data) to run a more realistic full set of tests.
- Deploy your database changes to your production environment or another selected environment. (Depending on your migration strategy, the CI tool should also simultaneously deploy and test any code changes dependent on the database change.)
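The two-tier test split described above can be sketched as a simple suite selector in Node.js (the suite names here are illustrative; plug in whatever your test framework uses):

```javascript
// Sketch of the two-tier test strategy: a quick tier on every commit,
// and a full tier that first migrates a larger (scrubbed) data set.
// Suite names are placeholders for real test commands.

const suites = {
  quick: ['build-scripts', 'referential-integrity', 'stored-procedure-units', 'triggers'],
  full:  ['migrate-scrubbed-production-data', 'end-to-end'],
};

function selectSuites(tier) {
  // The full tier is a superset: everything in quick, plus the slow tests.
  if (tier === 'full') return [...suites.quick, ...suites.full];
  return suites.quick;
}

// The CI tool could set TEST_TIER=full on nightly or pre-release builds.
console.log(selectSuites(process.env.TEST_TIER || 'quick'));
```

Running the quick tier on every commit keeps feedback fast, while the full tier catches data-dependent problems before they reach production.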
In many cases, when you're making a simple schema addition with bidirectionally compatible code, then you can push code and database changes at the same time. This shouldn't be an issue, as rollbacks in our case will be easy and predictable. This is often true when we are dealing with microservices with simple database components.
However, in many scenarios, serious problems can happen with this simplistic approach:
- Production data may be different from test/stage data and cause unforeseen issues.
- A large number of changes in both code and database schema may be in the pipeline and need to be deployed simultaneously.
- CI/CD processes may not be consistent through every environment.
- You may be under a zero-downtime mandate.
- Even with tools that help you achieve zero downtime (such as Heroku preboot), you can end up with two versions of the code running simultaneously.
There are several strategies for addressing the above issues. Some popular solutions include:
- If your changes are backwards compatible, use a tick-tock release pattern. This approach involves releasing the new database column first, then releasing the new code. You can identify problems early in this manner, with minimal production changes. Additionally, the rollback remains small and manageable, and can be accomplished with tools such as Heroku Postgres Rollback.
- If your provider supports it, use a blue/green rollout. In this pattern, an entirely new set of production servers is created side-by-side with the current production servers. Enable database synchronization and use DNS or a proxy to cut over to the new servers/database. You can roll back by simply pointing the proxy back at the original servers.
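The "code tolerates both schemas" half of a tick-tock release can be sketched in Node.js. The idea is to build the INSERT statement from the columns that actually exist, so the same build runs against either schema version; in a real app you would read the column list once from information_schema rather than pass it in, and the function below is an illustration, not a library API:

```javascript
// Sketch: build an INSERT from only the columns present in the live schema,
// so one code version works before and after a column is dropped.

function buildInsert(table, row, existingColumns) {
  const cols = Object.keys(row).filter((c) => existingColumns.includes(c));
  const placeholders = cols.map((_, i) => `$${i + 1}`).join(',');
  return {
    text: `INSERT INTO ${table} (${cols.join(',')}) VALUES (${placeholders})`,
    values: cols.map((c) => row[c]),
  };
}

// Old schema still has `enrolled`; the new schema does not.
const row = { id: 2, firstname: 'Becky', lastname: 'Smith', enrolled: 'y' };
console.log(buildInsert('users', row, ['id', 'firstname', 'lastname']).text);
// → INSERT INTO users (id,firstname,lastname) VALUES ($1,$2,$3)
```

Once every running code version ignores the doomed column, the schema change itself becomes a small, low-risk release of its own.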
Let’s run through an example based on the migration scripting option explained above. Note that some frameworks (Rails, Django, ORM tools, and so on) abstract away or handle schema creation and migration for you. While the details may differ according to the framework you are using, the example below should still help you understand these core concepts. For example, you may have a schema configuration file to include in your CI/CD process.
Here's the pertinent code for our example. We're going to create a simple database with a single table, and a Node.js file that writes to that database table on load.
Database creation SQL (we have just one simple table):
CREATE TABLE users (
  id integer PRIMARY KEY,
  firstname varchar(40) NOT NULL,
  lastname varchar(40) NOT NULL,
  enrolled char(1) NOT NULL,
  created_at date NOT NULL
);
const result = await client.query('INSERT INTO users (id,firstname,lastname,enrolled,created_at) values ($1,$2,$3,$4,$5) ',[1,'Becky','Smith','y',new Date()]);
In our Heroku Procfile, we define a release phase task that runs our migration scripts on each deploy:

release: bash ./release-tasks.sh
The content of the release-tasks file includes a list of SQL scripts to run. That list is updated with each release to include the needed schema modifications. For this very simple example, it will point to just one script:
psql -h <hostname> -d <database> -U <user> -w -f database/migrate.sql
(The database password can be supplied as a Heroku environment variable.)
Now we update the Node.js code so that it no longer references the enrolled column:

const result = await client.query('INSERT INTO users (id,firstname,lastname,created_at) values ($1,$2,$3,$4) ',[2,'Becky','Smith',new Date()]);
Next, we create a migrate.sql file (referenced in the release-tasks.sh script above) to alter the table and remove the column:
ALTER TABLE users DROP COLUMN enrolled;
Now, we commit the code change and SQL file, and watch the CI magic. First, the integration tests run. If you are using a common testing framework, the Heroku CI tool probably works with your test suite.
And now the CI tool creates a new release and deploys the app, which kicks off the migrate.sql file. (See the middle of the image below.)
We can check to see that the column was removed by inspecting the database through the Heroku CLI tool:
It worked! There is no longer a column named 'enrolled'. Our CI tool ran our script and deleted the column.
Some tools, like Liquibase, keep a detailed list of database changes. These tools allow you to easily see the last set of changes in cases like the above.
Now, any time that code or an updated migrate.sql is committed in the future, the CI tool will kick off the tests. If the tests pass, this creates a new release and pushes it to staging. When there is a new release, the migrate.sql file runs against the staging database.
Even with the best planning and forethought, there will be times when you need to roll back your database. There are many approaches to rolling back failed deployments:
- Create a SQL file that rolls back the changes quickly. (For example, while you are in staging, use a compare utility to generate the script.) This file should be part of the deployment package so that you can quickly run the rollback if there is an error.
- Roll forward (quickly push a new build that fixes the issue).
- Rely on source control and labels or branches to recreate and deploy the previous version.
- Restore a full backup of your database. (Use a tool that ships with your database, such as pg_restore in Postgres.)
- Use a tool provided by your platform, such as Heroku Postgres Rollback for the database and Heroku Release Rollback for code. As the name implies, Heroku Postgres Rollback allows you to roll back your database to a previous point in time, quickly and confidently returning it to a working release.
Be aware that all these solutions come with their own challenges, such as potential loss of new data (restoring a backup or redeploying) and introducing new bugs.
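The pre-generated rollback script option can be sketched by pairing each forward migration with a matching down script and running the down scripts in reverse order back to a target version. The file names and structure below are illustrative; note that a down script can restore a dropped column's structure, but not the data it held:

```javascript
// Sketch: each migration ships with a matching down script. Rolling back to a
// target version means running the down scripts newest-first.

const migrations = [
  { version: 1, up: '001_create_users.sql',  down: '001_drop_users.sql' },
  { version: 2, up: '002_drop_enrolled.sql', down: '002_readd_enrolled.sql' },
];

function rollbackPlan(applied, targetVersion) {
  return applied
    .filter((m) => m.version > targetVersion)
    .sort((a, b) => b.version - a.version) // newest first
    .map((m) => m.down);
}

// Roll back from version 2 to version 1:
console.log(rollbackPlan(migrations, 1));
// → ['002_readd_enrolled.sql']
```

Generating and committing these down scripts while the change is still in staging means the rollback path is tested before you ever need it in production.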