Database DevOps
Date: Friday, January 12, 2018
DevOps is a well-established practice when it comes to application development, but migrating databases poses additional problems. There could be differences in the definitions between the test and live environments that need to be maintained – perhaps the indexing is different or the synonyms map to different databases. The volume of data on the live system could be vastly greater, so any alteration will run slower and lock tables for longer. The full live dataset might include edge cases that weren't covered in the test dataset. Any change to that production data needs to be made with great care. And all this should be done while maintaining high availability and data integrity, and in a manner that can easily be retried or rolled back if things go wrong.
Of course, tooling does exist to make this easier. Projects using Microsoft's Entity Framework can take advantage of a migrations system that tracks changes in the models and creates alteration scripts accordingly. Those scripts can be deployed with a single PowerShell command, so they integrate nicely with CI/CD pipelines. However, the only interface for manipulating the actual data is hand-scripting SQL, which can be intimidating for developers who spend all their time writing C#. The migrations system also gets very confused if the target database changes relative to the local models. In practice, this works acceptably for our application development projects, where database changes are driven by changes at the app level and so can be implemented in a code-first manner, but it doesn't suit BI use cases where we build systems on top of existing databases for third-party programs over which we have little control.
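To make that concrete, here is a minimal sketch of the kind of hand-scripted SQL a migration ends up wrapping when a model change also implies a data change. The table and column names are hypothetical, purely for illustration.

    -- Hypothetical example: the model splits a single FullName column into two,
    -- so the migration must move the existing data by hand before dropping the old column.
    ALTER TABLE dbo.Customer ADD FirstName nvarchar(100) NULL, LastName nvarchar(100) NULL;
    GO

    -- Split on the first space; names without a space end up entirely in FirstName.
    UPDATE dbo.Customer
    SET FirstName = LEFT(FullName, CHARINDEX(' ', FullName + ' ') - 1),
        LastName  = LTRIM(SUBSTRING(FullName, CHARINDEX(' ', FullName + ' '), LEN(FullName)));
    GO

    ALTER TABLE dbo.Customer DROP COLUMN FullName;
    GO

None of this is generated for you – it has to be written, reviewed and sequenced by hand alongside the generated schema changes.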
Alternatively, SQL Server Data Tools comes with a tool to compare the schema specified within the project with the schema deployed on the target database. This is a slightly more flexible system that can be pointed at an arbitrary database and will, after a short pause for thought, determine what schema changes are required. It integrates nicely with the deployment procedure, and after the initial setup it's a two-click process to choose the database and set it going. It's still not perfect: if it sees that data manipulation is required, for example because a column has been dropped, the generated script will intentionally stop with an appropriate message. This is a useful safety feature, as it highlights data changes that might otherwise have been missed, but it makes the tool less suitable for automated deployment.
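For illustration, when possible data loss is detected, the generated deployment script guards the risky step with something along these lines (the exact wording varies by version, and the table name here is hypothetical):

    IF EXISTS (SELECT TOP 1 1 FROM [dbo].[Orders])
        RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127)
            WITH NOWAIT;

A human has to read that message, decide what should happen to the data, and script it themselves before the deployment can proceed.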
Frustratingly, the output from neither tool is atomic, and neither facilitates rolling back changes. If the alterations fall over partway through, it's unclear what they've done and what is still to do. Nor can either really prepare for the difference in a script's runtime between live and test. Both of these issues can be mitigated by good process. Ideally, a new feature can be broken down into incremental changes that can be tested and released individually. This means that table locks are held for a much shorter time, that database drift is less likely when multiple developers are working simultaneously, and that those developers can react much more smoothly if unexpected edge cases crop up.
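One way to claw back at least script-level atomicity is to wrap each hand-written alteration in an explicit transaction, so a partial failure leaves nothing behind. A minimal sketch, again with hypothetical object names:

    SET XACT_ABORT ON;  -- any runtime error aborts and rolls back the whole batch
    BEGIN TRY
        BEGIN TRANSACTION;

        ALTER TABLE dbo.Orders ADD Currency char(3) NOT NULL
            CONSTRAINT DF_Orders_Currency DEFAULT 'GBP';
        EXEC sp_rename 'dbo.Orders.Total', 'TotalNet', 'COLUMN';

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        THROW;  -- re-raise so the deployment pipeline sees the failure
    END CATCH;

It doesn't solve the runtime problem – the transaction holds its locks for as long as the alteration takes – which is another argument for keeping each released change small.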
Life is rarely that simple. I've recently been involved in a project that required two major features to be developed simultaneously, both demanding sweeping changes to the schema and careful data changes, and both touching some of the same code. It took a lot of speculative development before we settled on a final solution, so it was never in a state to deploy partway through. We ended up hand-maintaining alteration scripts to keep track of what needed to change, then ran those scripts against a clean copy of the live database to prove the roll-out. It was laborious and required careful tracking, but none of the tools would have done a better job.
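What made those hand-maintained scripts workable was writing each step so it could be run again safely, so a failed or partial deployment could simply be retried from the top. A sketch of the pattern, with hypothetical names:

    -- Only add the column if a previous run hasn't already done so.
    IF COL_LENGTH('dbo.Invoice', 'Region') IS NULL
        ALTER TABLE dbo.Invoice ADD Region nvarchar(50) NULL;
    GO

    -- Backfill only the rows that still need it, so re-running is harmless.
    UPDATE i
    SET    i.Region = c.Region
    FROM   dbo.Invoice AS i
    JOIN   dbo.Customer AS c ON c.CustomerId = i.CustomerId
    WHERE  i.Region IS NULL;
    GO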
Some companies rebuild their databases from the schema in source control every day. While this might annoy developers the first time they lose progress, people quickly learn to make a change, test it and check it in. This is probably the closest database development gets to proper CI/CD, but it's quite the nuclear option. It's embarrassing to ship a script with a bug that breaks an application, but that change can be quickly retracted; it's a much greater problem if you need to restore the live database after accidentally deleting important production data.
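As a rough sketch of that approach (the database and script names are hypothetical), the nightly job simply drops the development copy and replays the checked-in schema:

    -- Drop and recreate the development copy; run nightly from the build server.
    IF DB_ID(N'SalesDev') IS NOT NULL
    BEGIN
        ALTER DATABASE SalesDev SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        DROP DATABASE SalesDev;
    END;
    CREATE DATABASE SalesDev;
    GO
    -- ...then replay the schema held in source control, e.g. with
    -- sqlcmd -d SalesDev -i schema.sql

Pointed at a development database this is harmless; the whole risk lies in making sure nothing like it can ever be pointed at production.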
Ultimately, every solution has drawbacks in one way or another. As an agile company we want to keep our development cycles tight, but as a BI company we often end up tearing things apart to rebuild them more efficiently or to extract a bit of extra data that we can use for insight. Every new project, every new use case has different priorities and gives us an opportunity to try things a bit differently or incorporate the latest ideas and approaches. But regardless of the latest trend, there’s no substitute for experience, and that’s what we at Village have in spades.
If you want to get more out of your databases, perhaps this venerable 30-year-old company can help.