Pro tips on migrating databases to the cloud: Q&A with Craig Silveira


Q1: Please describe the Database Migration Service Solution Architects team and what they are responsible for.
The Database Migration Service Solution Architects (DMS SAs) are a team of data migration specialists with a combined 60 years of data migration experience across Oracle, SQL Server, MySQL, and other relational (RDBMS) and non-relational (NoSQL) databases. We offer complimentary services to help customers at any point in their AWS database and analytics migration projects, from assessing and architecting the migration process to diagnosing and troubleshooting migration issues. Our primary focus is helping customers successfully complete their data migrations using DMS, and we also have extensive experience with other data migration methods, so we can help customers choose the best solution for their situation.

Q2: What are some of the common concerns you have heard that prevent customers from migrating to the cloud?
For many customers that are new to the cloud, the lack of in-house personnel with knowledge of migration and cloud architecture, and limited experience with these tools and processes, can be a large stumbling block. Our team dedicates a significant amount of time to knowledge transfer and education on the functionality of AWS products for prospective customers via enablement sessions, demos, hands-on training, and collaborative working sessions. We perform architecture reviews and proofs of concept (POCs) to show customers the capabilities of DMS. We also offer our services to help optimize the performance of their environments before and after they have moved to the cloud.

Another common concern of first-time cloud users is data security. We work with customers in industries such as finance, government, healthcare, and telecommunications, where there is unease about moving data over the internet. Our team engages with these customers and educates them on options to mitigate those risks, such as SSL/TLS encryption, AWS Direct Connect, and VPN connections.
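For illustration, here is a minimal sketch of enabling in-transit encryption on a DMS endpoint using boto3, the AWS SDK for Python. The identifiers, hostname, credentials, and ARN are placeholder assumptions, not values from a real engagement, and the appropriate SslMode depends on the database engine.

```python
import boto3

dms = boto3.client("dms")

# Create a source endpoint that requires SSL and verifies the server
# certificate; all names below are illustrative placeholders.
response = dms.create_endpoint(
    EndpointIdentifier="postgres-source-ssl",   # hypothetical identifier
    EndpointType="source",
    EngineName="postgres",
    ServerName="db.example.internal",            # placeholder host
    Port=5432,
    DatabaseName="appdb",
    Username="dms_user",
    Password="example-password",                 # prefer AWS Secrets Manager in practice
    SslMode="verify-full",                       # encrypt traffic and verify the server cert
    CertificateArn="arn:aws:dms:us-east-1:123456789012:cert:EXAMPLE",
)
print(response["Endpoint"]["EndpointArn"])
```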

Network access and speed are other common concerns. Some environments have network restrictions that we may need to work around, or network speed issues that the customer can resolve by increasing their network capacity. Some customers have no access to their source instance from outside their network; in those cases, we work with their networking teams to configure a jump server or some other access option.

Q3: What is the main customer misconception related to database migrations that your team has observed? Any tips on how to address this?
Customers have a misconception that moving the code base is the hardest part of a migration and that moving data is easy. While data migration tools such as DMS do a lot of the heavy lifting for you, customers still need to plan and test to verify their migration strategy. It is usually an iterative process: you plan the process, create the migration tasks, test, make changes, and then test again until you reach the optimal performance and configuration. A common problem is that most customers do not have a Dev/Test/QA instance with the same volume of data as the production system, so these tasks need to be performed in production before the actual migration and cutover. This presents another obstacle, since customers are wary of testing in production environments. Therefore, time and effort should be dedicated to creating a proper test environment, both to alleviate those concerns and to establish a better standard operating procedure.
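As one concrete way to support that iterative plan-test loop, the snippet below sketches how test-run progress can be checked programmatically with boto3; the task ARN is a placeholder assumption.

```python
import boto3

dms = boto3.client("dms")

# Look up one task by ARN and print its status and load statistics,
# which is useful when comparing test iterations.
tasks = dms.describe_replication_tasks(
    Filters=[{"Name": "replication-task-arn",
              "Values": ["arn:aws:dms:us-east-1:123456789012:task:EXAMPLE"]}]
)
for task in tasks["ReplicationTasks"]:
    stats = task.get("ReplicationTaskStats", {})
    print(task["ReplicationTaskIdentifier"], task["Status"],
          f'{stats.get("FullLoadProgressPercent", 0)}% loaded,',
          f'{stats.get("TablesErrored", 0)} tables errored')
```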

Q4: Can you share some best practices to help customers successfully migrate their databases?
The number one best practice a customer can implement is to take adequate time to plan the migration before starting. Bringing in the DMS SA team, whose services are complimentary, early in the process can address many commonly encountered concerns and issues before they happen. The DMS SA team helps the customer perform a thorough discovery and assessment of their systems and can therefore provide the customer with the proper migration architecture and plan.

Other best practices include creating a proper testing environment, parallelizing data movement within and across replication tasks, and using range partitioning to multi-thread the data loads for large tables (sketched below). In addition, ensure that sufficient resources are available, that proper monitoring of the end-to-end replication process is set up, and that configuration changes for specific source and target environments are made.
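Here is a minimal sketch of what that range-partitioned parallel load can look like as a DMS table-mapping rule, submitted via boto3. The schema, table, column, segment boundaries, and ARNs are illustrative assumptions.

```python
import json
import boto3

# A table-settings rule that splits one large table into segments that
# DMS loads in parallel; three boundaries yield four segments.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-orders",
            "object-locator": {"schema-name": "SALES", "table-name": "ORDERS"},
            "rule-action": "include",
        },
        {
            "rule-type": "table-settings",
            "rule-id": "2",
            "rule-name": "parallel-load-orders",
            "object-locator": {"schema-name": "SALES", "table-name": "ORDERS"},
            "parallel-load": {
                "type": "ranges",
                "columns": ["ORDER_ID"],
                # Segments: <= 1M, <= 2M, <= 3M, and everything above.
                "boundaries": [["1000000"], ["2000000"], ["3000000"]],
            },
        },
    ]
}

dms = boto3.client("dms")
dms.create_replication_task(
    ReplicationTaskIdentifier="orders-full-load",       # hypothetical name
    SourceEndpointArn="arn:aws:dms:...:endpoint:SRC",   # placeholder ARNs
    TargetEndpointArn="arn:aws:dms:...:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:...:rep:INST",
    MigrationType="full-load",
    TableMappings=json.dumps(table_mappings),
)
```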

Q5: What are the common database migration issues your team helps resolve for customers looking to migrate to the cloud?
Some common migration issues that our team resolves for customers are:

  1. Performance issues: These include, but are not limited to, achieving the data transfer rate needed to complete the migration within the desired timeframe, and addressing bottlenecks caused by bandwidth/throughput constraints and sub-optimal architecture or configurations. Examples of sub-optimal configurations include insufficient or no parallelism, tasks not split based on the different types of tables and their needs, and inefficient handling of large objects (LOBs).
  2. Error handling: During the course of replication, there can be errors or anomalies, and customers need help understanding why they happened and how to resolve them. This requires investigating and debugging the replication process, leveraging data transformation rules or various DMS task and endpoint settings, to determine and address the root cause of the issue (see the logging sketch after this list).
  3. Network issues: Issues range from general access and networking knowledge to bandwidth. As the cloud is new to many customers, understanding networking in the cloud is a much-needed starting point. Once that is understood, the next step is understanding how their own network is configured and whether changes are necessary to allow connectivity to the cloud. Many customers have closed networks that will not allow their databases, which hold sensitive data, access to the outside internet, so this needs to be addressed early in the process. Networks can also be finicky, so customers may experience intermittent connectivity issues during the migration process. Overcoming these issues requires troubleshooting end to end, from the source databases through the network and the DMS engine to the target endpoint.
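For the error-handling work described in item 2, a common first step is to raise DMS task logging so the failing component can be identified. The sketch below shows one way to do that with boto3; the task ARN is a placeholder.

```python
import json
import boto3

# Turn up logging on the source-capture and target-apply components,
# which are the usual suspects when replication errors appear.
task_settings = {
    "Logging": {
        "EnableLogging": True,
        "LogComponents": [
            {"Id": "SOURCE_CAPTURE", "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"},
            {"Id": "TARGET_APPLY", "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"},
        ],
    }
}

dms = boto3.client("dms")
dms.modify_replication_task(
    ReplicationTaskArn="arn:aws:dms:...:task:EXAMPLE",  # placeholder ARN
    ReplicationTaskSettings=json.dumps(task_settings),
)
```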


Q6: Can you walk me through how your team was able to help remediate some challenging problems?
We had one particularly challenging engagement with a customer in which there were multiple issues, each revealing itself only after the previous one was resolved.

The engagement started when the customer reached out requesting our help with some specific data type migration challenges during their proof of concept (POC) for migrating on-premises Oracle to Amazon Aurora PostgreSQL-Compatible Edition. Our team jumped in, gathered the details, set up a lab to reproduce the issue, diagnosed the root cause, and recommended possible solutions. During this conversation, we learned of a greater issue: the customer was trying to migrate around 10 TB of data, and the full load was taking far too long, around 13 days. While this was satisfactory for the customer, it seemed too long to our team, so we did some additional investigation. We discovered the customer had a few large partitioned tables whose indexes were in place on the target during the full load. When indexes are in place during a load, there is extra overhead because the indexes have to be maintained. This was regularly driving the CPU on the target r5.24xlarge (96 vCPU) instance to 100%.

Our team recommended leveraging parallel loads to migrate the large tables and, more importantly, removing the indexes and constraints on the target database prior to the load. That caused CPU utilization to drop significantly and brought the full load time down to under 1.5 days. After the load, the indexes were created on the target database, which took another 1.5 days, for a total of 3 days versus the previous 13. We then resumed change data capture (CDC) on the DMS task so their QA and test environment could catch up, without any issues.
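For readers who want to reproduce this pattern, the sketch below shows DMS full-load settings that support it: the task leaves the pre-created, index-free target tables alone and pauses after the bulk load so indexes can be rebuilt before cached changes are applied. The ARN and sub-task count are illustrative assumptions.

```python
import json
import boto3

# Full-load settings for the "load first, index later" sequence.
task_settings = {
    "FullLoadSettings": {
        "TargetTablePrepMode": "DO_NOTHING",       # tables pre-created without indexes
        "StopTaskCachedChangesNotApplied": True,   # pause after the full load...
        "StopTaskCachedChangesApplied": False,     # ...so indexes can be rebuilt first
        "MaxFullLoadSubTasks": 16,                 # tables/segments loaded in parallel
    }
}

dms = boto3.client("dms")
dms.modify_replication_task(
    ReplicationTaskArn="arn:aws:dms:...:task:EXAMPLE",  # placeholder ARN
    ReplicationTaskSettings=json.dumps(task_settings),
)
```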

Next, an even bigger issue surfaced in their production environment, which was generating an extreme volume of archived redo logs, a challenge for any data replication solution. Their system generated about 4 TB of archived logs per day, far beyond typical volumes. The customer had configured DMS to use a single task to replicate those data changes and was therefore running into high target latency. DMS applies transactions on the target sequentially to maintain data consistency, so the single task could not keep up with the data change volume on the source. At this point, after all optimizations on the task were exhausted, the only solution was to split the tables across multiple tasks. We advised the customer to split into a few CDC-only tasks first, test, and then slowly add more as needed to avoid running into source database latency (a sketch of this pattern follows).
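A minimal sketch of that task-splitting pattern, assuming hypothetical schemas, tables, and ARNs: each CDC-only task replicates a subset of tables and starts from a common source position.

```python
import json
import boto3

def cdc_task_for(schema: str, tables: list[str], suffix: str) -> None:
    # Build one selection rule per table in this task's subset.
    mappings = {"rules": [
        {"rule-type": "selection", "rule-id": str(i + 1), "rule-name": t,
         "object-locator": {"schema-name": schema, "table-name": t},
         "rule-action": "include"}
        for i, t in enumerate(tables)
    ]}
    boto3.client("dms").create_replication_task(
        ReplicationTaskIdentifier=f"cdc-{suffix}",
        SourceEndpointArn="arn:aws:dms:...:endpoint:SRC",   # placeholder ARNs
        TargetEndpointArn="arn:aws:dms:...:endpoint:TGT",
        ReplicationInstanceArn="arn:aws:dms:...:rep:INST",
        MigrationType="cdc",
        CdcStartPosition="2023-04-01T00:00:00",  # common start point for all tasks
        TableMappings=json.dumps(mappings),
    )

# The busiest tables get their own task; the long tail shares another.
cdc_task_for("SALES", ["ORDERS", "ORDER_ITEMS"], "hot-tables")
cdc_task_for("SALES", ["CUSTOMERS", "PRODUCTS", "SHIPMENTS"], "warm-tables")
```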

Because the customer was using the Binary Reader configuration of DMS, source read latency could occur: Binary Reader has to read all of the archived logs over the network for every task, which consumes network bandwidth. Additionally, the customer’s network was only a 1-gigabit pipe shared by multiple applications. After several iterations with varying numbers of tasks, an appropriate configuration was found. The customer also upgraded their network to a 10-gigabit pipe, resulting in minimal to no replication latency.
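For reference, Binary Reader is typically selected on an Oracle source endpoint through extra connection attributes; the sketch below shows the documented flags, with a placeholder endpoint ARN.

```python
import boto3

# Switch an Oracle source endpoint from LogMiner to Binary Reader.
boto3.client("dms").modify_endpoint(
    EndpointArn="arn:aws:dms:...:endpoint:SRC",  # placeholder ARN
    ExtraConnectionAttributes="useLogMinerReader=N;useBfile=Y",
)
```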


Q7: Tell me about a migration problem for which your team had to develop an innovative solution.
With the exponential growth of data being collected, customers are looking for innovative solutions to analyze data at scale. One recent request was to help build a new data lake that would give the customer the flexibility to use a variety of data visualization and analysis tools. Our team was engaged because the movement of data was a key element of the solution: it had to be easy, scalable, and efficient to meet their needs and allow for future growth.

First, we performed a discovery session with the customer to determine the full scope and requirements of the new system. The discussions showed that they were looking to migrate data from different sources into a single target and develop analytics against the combined data. Then, we designed a solution to meet the scoped business needs using several AWS services and tools: Amazon S3, AWS DMS, Amazon Redshift Spectrum, which is an effective tool for querying large datasets in parallel on Amazon S3, and Amazon QuickSight, our cloud-native business intelligence (BI) service. The solution started with using DMS to connect to the various source systems and replicate the data to S3. From there, the team showed the customer how to query the data directly using Amazon Redshift Spectrum and how to use Amazon QuickSight to build data visualizations. The solution required initial data loads and then ongoing change data capture (CDC) so the analysis would always have access to the most current data. We delivered a solution and showed them the value of AWS offerings and capabilities in helping their business achieve its goals.
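As an illustration of the DMS-to-S3 leg of that pipeline, the sketch below creates an S3 target endpoint that writes Parquet files, the columnar format that Redshift Spectrum queries efficiently. The bucket, folder, and role names are assumptions.

```python
import boto3

# An S3 target endpoint for landing replicated data in the data lake.
boto3.client("dms").create_endpoint(
    EndpointIdentifier="datalake-s3-target",   # hypothetical identifier
    EndpointType="target",
    EngineName="s3",
    S3Settings={
        "BucketName": "example-datalake-bucket",
        "BucketFolder": "raw",
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-s3-role",
        "DataFormat": "parquet",               # columnar output for Spectrum queries
    },
)
```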

Q8: Do all database migrations require help from your team or other teams at AWS?
No, our team is engaged in only a small fraction of database migrations to AWS. Many customers are able to migrate their databases successfully without our assistance, using one or more AWS self-service tools or native tools from their current source databases.

Q9: How can customers engage with your team?
For prospective customers, please submit the following form to get started. For existing customers, reach out to your account team, who can submit a special request on your behalf, or contact us directly if you have previously worked with us.

Q10: Any last closing remarks or advice to share with the readers? 
Our mission is to make migrations easier. With over 650,000 databases migrated to AWS over the last decade, we have been a trusted resource for ensuring customers achieve their goals. Migrations do require meticulous planning and execution, and a good understanding of the source and target infrastructure is crucial: it sets the foundation for the migration process. By engaging our team early, we can guide you through the process, help you avoid pitfalls and issues, and ensure a successful migration.

As a next step, I suggest getting acquainted with how DMS works as a service. We have great public documentation to help you get started, including the resources listed below. We also have tools, such as premigration assessments, that evaluate a migration task and identify any problems that might prevent it from running as expected.
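For example, a premigration assessment run can be started with a single API call; in the boto3 sketch below, the ARNs, bucket, and run name are placeholders.

```python
import boto3

# Kick off a premigration assessment for an existing task; results land
# in the specified S3 bucket for review before the task is started.
boto3.client("dms").start_replication_task_assessment_run(
    ReplicationTaskArn="arn:aws:dms:...:task:EXAMPLE",
    ServiceAccessRoleArn="arn:aws:iam::123456789012:role/dms-assessment-role",
    ResultLocationBucket="example-assessment-results",
    AssessmentRunName="pre-cutover-check",
)
```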

Documentation:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html
https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.html
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.html
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Troubleshooting.html

Blogs:
https://aws.amazon.com/blogs/database/tag/dms/
https://aws.amazon.com/blogs/database/category/database/aws-database-migration-service/
https://aws.amazon.com/blogs/database/debugging-your-aws-dms-migrations-what-to-do-when-things-go-wrong-part-1/
https://aws.amazon.com/blogs/database/debugging-your-aws-dms-migrations-what-to-do-when-things-go-wrong-part-2/
https://aws.amazon.com/blogs/database/debugging-your-aws-dms-migrations-what-to-do-when-things-go-wrong-part-3/

……………………………………..

Craig Silveira, Senior Manager, DMA Advisor, DBS Migrations Programs, at AWS
Craig is a migration specialist team lead at AWS with over 25 years of technical expertise helping customers perform successful migrations. Prior to joining AWS, Craig was a partner and SVP of Sales and Sales Engineering at OpenSCG, a consulting firm focused on helping customers migrate to and utilize PostgreSQL. AWS acquired OpenSCG for its expertise in migrations, which is how Craig became part of AWS. Earlier in his career, Craig held various positions at EnterpriseDB, including Field CTO, VP of Sales Engineering, and Product Manager, where he helped bring PostgreSQL offerings to customers looking to leave their costly commercial databases behind. Craig also spent several years at Oracle, where he helped financial institutions break free from legacy Sybase systems and was a driving force in improving Oracle’s migration tooling and services.

Sponsored by AWS
