Blue Green Updates With Schema Changes in Oracle

Introduction

Blue-green deployment is a pattern that reduces downtime during production releases by maintaining two production environments ("blue" and "green"). If we call the current live production environment "blue", the technique consists of bringing up a parallel "green" environment with the new version of the software. Once everything has been tested and is ready to go live, all production transactions are simply switched to the "green" environment, leaving the "blue" environment idle; it is eventually dropped once the stability of the "green" environment has been confirmed.

This paper provides an overview of the blue-green deployment methodology and describes techniques that can be implemented using Amazon Web Services (AWS) services and tools. It focuses in particular on synchronisation between the databases of the two environments when their schema structures differ.

Techniques

Pre-cutover activities

The following diagram illustrates the implementation steps that need to be performed prior to the production cutover.

  1. Create a new "green" stack using the CloudFormation template. The new infrastructure includes Amazon Route 53 records for test routing, but not the database tier; the latter will be created by restoring a database snapshot.
  2. Create a manual snapshot of the production DB instance. Taking a snapshot briefly suspends I/O on Single-AZ instances, but Multi-AZ DB instances are not affected since the backup is taken from the standby database.
  3. Create a separate Multi-AZ DB instance on the new stack by restoring the snapshot captured from step 2.
  4. Code deployments on the "green" environment include database schema changes for new version.
  5. Create a CDC-only migration task on Database Migration Service (DMS) to replicate database transactions from the source ("blue") to the target ("green") database. The ongoing replication task needs to be started from the native start point on which the snapshot is based, such as the SCN for Oracle. Table mapping and transformation rules may be required, since the schema structure changes with the upgrade.
  6. Start up the applications.
  7. Testers perform application testing using a separate URL from production.
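The CDC task in step 5 can be sketched as a parameter set for the DMS `CreateReplicationTask` API (here built as a Python dict in boto3 style, without actually calling AWS). The schema, table and column names, ARNs and the SCN value are hypothetical placeholders; the rule syntax follows the DMS table-mapping JSON format.

```python
import json

# Table mapping for the blue -> green task: include the business schema,
# and drop a column that exists only in the source ("blue") schema.
# Schema/table/column names here are hypothetical.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-business-tables",
            "object-locator": {"schema-name": "APP", "table-name": "%"},
            "rule-action": "include",
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "drop-legacy-column",
            "rule-target": "column",
            "object-locator": {
                "schema-name": "APP",
                "table-name": "LEDGER_ENTRY",
                "column-name": "LEGACY_FLAG",
            },
            "rule-action": "remove-column",
        },
    ]
}

# CDC-only task starting from the Oracle SCN the snapshot is based on.
task_kwargs = {
    "ReplicationTaskIdentifier": "blue-to-green-cdc",
    "SourceEndpointArn": "arn:aws:dms:region:account:endpoint:blue",    # placeholder
    "TargetEndpointArn": "arn:aws:dms:region:account:endpoint:green",   # placeholder
    "ReplicationInstanceArn": "arn:aws:dms:region:account:rep:instance",
    "MigrationType": "cdc",
    "TableMappings": json.dumps(table_mappings),
    "CdcStartPosition": "1234567890",  # SCN captured when the snapshot was taken
}

# With real ARNs, this would be submitted via boto3:
#   import boto3
#   boto3.client("dms").create_replication_task(**task_kwargs)
```

The `remove-column` rule is one of the simple transformations DMS supports out of the box; more involved data reshaping is discussed under "Alternative solution for data replication" below.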

Considerations for data replication

During the testing phase there will be two "live" systems working on the same logical data set, so precautions must be taken when updating the databases to preserve data integrity. Here are some of the considerations:

  • Sequences - each environment should work on an exclusive set of sequence values so that no conflicts arise when data is replicated to the other side through DMS. This can be achieved either by using different ranges of sequence values or by an even/odd split between the two sides.
  • Table Exclusion - some tables need to be excluded from replication while the two systems run in parallel. This not only reduces the workload on DMS but is also critical for data integrity. For example, workflow records may be processed twice by the back-end engines on both sides if the corresponding table is replicated across. Generally, only business data is included in the replication; repository, system, back-end processing, log and audit-trail tables are excluded.
  • Column Exclusion - transformation rules are required for any table columns that exist in the source database but not on the target side.
  • Reference Data - normally, new reference data is introduced with a version upgrade. Data mapping is required for such references when we reverse the replication direction during the cutover phase, so that the "blue" database is not updated with new reference codes. For example, an "adjustment" entry type introduced in the ledger system may need to be converted back to either a "credit" or a "debit" entry type, depending on the sign of the adjustment amount.

Alternative solution for data replication

The solution presented in this paper uses AWS DMS as the replication tool for database transactions, but DMS currently supports only basic transformation rules. It would be good if DMS could integrate with Lambda for serverless data transformation, as Amazon Kinesis Data Firehose does, in future releases.

If complex transformation of data is required, Oracle GoldenGate can be considered as an alternative solution. You can either use Oracle GoldenGate Cloud Service (GGCS) for database replication or set up an Oracle GoldenGate hub on EC2.

Cutover activities

Once system testing has completed successfully, the following tasks are executed to switch customer transactions from the "blue" to the "green" environment.

  1. Stop the service for customers and shut down the applications on the "blue" environment.
  2. Reverse the database replication so that "green" becomes the source and "blue" the target. This ensures there will be no data loss on the "blue" database if a rollback is required later. Again, table mapping and transformation rules may be required for the reverse replication.
  3. Update the production Route 53 routing records to direct end-user traffic to the "green" environment; the system is now available to customers again.
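Step 2 above amounts to creating a new CDC task with the endpoints swapped and its own table-mapping rules, since the reverse direction has different transformation requirements (for example, mapping new reference codes back to values the "blue" schema understands). A minimal sketch, with hypothetical ARNs and task identifiers:

```python
def reversed_task(forward_task: dict, reverse_mappings: str) -> dict:
    """Derive the green -> blue CDC task definition from the forward one.

    The reverse task gets a fresh identifier and its own table-mapping
    rules; only the endpoint roles are carried over, swapped.
    """
    task = dict(forward_task)  # shallow copy; the forward task is untouched
    task["SourceEndpointArn"] = forward_task["TargetEndpointArn"]
    task["TargetEndpointArn"] = forward_task["SourceEndpointArn"]
    task["ReplicationTaskIdentifier"] = "green-to-blue-cdc"
    task["TableMappings"] = reverse_mappings
    return task

# Hypothetical forward (blue -> green) task definition:
forward = {
    "ReplicationTaskIdentifier": "blue-to-green-cdc",
    "SourceEndpointArn": "arn:blue-endpoint",
    "TargetEndpointArn": "arn:green-endpoint",
    "MigrationType": "cdc",
    "TableMappings": '{"rules": []}',
}
reverse = reversed_task(forward, '{"rules": []}')
```

As with the forward task, the resulting parameters would be submitted through the DMS `CreateReplicationTask` API; the reverse task should be started before traffic is switched in step 3, so no "green" transaction is missed.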

The system outage for end users is the duration between steps 1 and 3; it does not depend on the size of the database, the complexity of the changes, or the duration of testing.

Post-cutover activities

Once the stability of the "green" environment has been verified under production traffic, we can terminate the DMS tasks and decommission the "blue" stack.

Conclusion

Traditionally, with an in-place upgrade, any major software deployment causes a notable outage for the platform. This has a significant impact on the business and a negative experience for customers. With the blue-green deployment methodology, the duration of the outage is greatly reduced and, more importantly, does not depend on the complexity of the changes or the volume of data.

In AWS, it is easy to build a new and consistent infrastructure stack from CloudFormation templates. The process can be automated with a Continuous Integration and Delivery pipeline, eliminating human error in repetitive work. With most cloud service providers, customers pay only for the services they use, so it is economical to run two application stacks in parallel for a short period of time. In addition, the deployment method provides an easy rollback path if the new deployment fails to perform under production transaction volumes and patterns. Customers experience only a short interruption of service while the production Route 53 routing records are reverted to the "blue" environment and the applications are restarted.

The only additional development tasks for such a deployment framework are setting up the table mappings and transformation rules on DMS. These will differ from deployment to deployment, depending on the nature of the database schema changes. It is important to ensure that data integrity is maintained while two running systems work on the same logical data set.


Source: https://www.linkedin.com/pulse/blue-green-deployment-aws-database-synchronisation-james-chan
