The future of databases and application development

Microservices have finally reached the database, easing scalability and other app development challenges for Amazon.com, The Pokémon Company, and others.

Microservices have emerged as the favored development approach for cloud computing platforms. Importantly, microservices architectures have finally reached the database, unlocking new opportunities for innovation and business growth.

Why? First, a little history. Applications and code have grown increasingly distributed over the years as development platforms changed. Monolithic mainframes of the 1960s and ’70s gave way to client/server computing in the 1980s, which transitioned to the Internet’s three-tier application architectures of the 1990s and 2000s. Yet all the while, underlying data models and databases remained largely structured and monolithic, creating scalability, performance, and single-point-of-failure challenges.

“If you grew up with databases in the ’80s, ’90s, or early 2000s, your world had limited database choices,” says Shawn Bice, Vice President, Databases, with Amazon Web Services (AWS). “That’s not the world we live in now. No longer does one size fit all.”

Now, the cloud is established as a primary platform for hosting workloads and applications, and microservices architectures are well suited to building cloud applications and their databases. In this world, developers can break large applications into smaller services and pick the right tool for each component. This approach frees teams from having to use a single, overburdened database with one storage engine and one compute engine to handle every access pattern, dramatically improving scalability and performance.

“IT shops are being asked to innovate faster with fewer resources and tighter timelines,” says Bice. “And a microservices architecture lets you do exactly that: you can achieve the highest levels of performance and scale, which lets you innovate faster, at lower cost and faster time to market, because you’re not limited by what one thing can do.”

A big mindset shift

Moving a legacy database to a modern, cloud-based environment is as much a mindset shift as a technology decision for CIOs and their IT teams, who may be hesitant to disrupt business-critical infrastructure.

“Don’t let familiarity become a blind spot that stifles future innovation,” Bice advises.

A cloud-native application architecture represents a big change from the days of the monolithic database, when developers used one enterprise solution to solve a variety of business problems. Purpose-built databases and microservices allow you to flip that scenario: understand the individual use cases or problems you’re trying to address, then use a set of highly distributed, loosely coupled, fully managed APIs that are optimized for specific needs.
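
As a concrete illustration, here is a minimal Python sketch of that pattern for a hypothetical retail application: a session service keeps shopper state in a key-value store (Amazon DynamoDB via boto3), while an archival service writes completed orders to object storage (Amazon S3). The table and bucket names are invented for this example.

    # Minimal sketch: two microservices, each backed by a purpose-built store.
    # Assumes AWS credentials are configured; the "sessions" table and
    # "order-archive" bucket are hypothetical names for illustration.
    import json
    import boto3

    dynamodb = boto3.resource("dynamodb")
    s3 = boto3.client("s3")

    def save_session(session_id: str, cart: dict) -> None:
        """Session service: low-latency key-value writes go to DynamoDB."""
        table = dynamodb.Table("sessions")
        table.put_item(Item={"session_id": session_id, "cart": json.dumps(cart)})

    def archive_order(order_id: str, order: dict) -> None:
        """Archival service: durable, inexpensive blob storage goes to S3."""
        s3.put_object(
            Bucket="order-archive",
            Key=f"orders/{order_id}.json",
            Body=json.dumps(order).encode("utf-8"),
        )

Because each service owns its own datastore, scaling one access pattern never forces re-architecting the other.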

This approach eliminates the traditional trade-off between functionality and performance or scale. It also lets developers move away from what Bice calls “undifferentiated heavy lifting” and move up the stack to focus on solving problems to grow the business.

The good news for organizations that have been hesitant to make the leap: many early adopters have been innovating in the cloud for a decade or more, and they can offer the industry a deep set of lessons learned and best practices to follow.

What early adopters say

For example, Amazon.com once maintained one of the biggest Oracle data warehouses in the world. It made the strategic decision to move off the legacy data warehouse to a cloud-native AWS data lake architecture comprising a variety of AWS services, including Amazon DynamoDB, Amazon Redshift, and Amazon S3. The online retailer’s migration involved moving 50 petabytes of data, more than 75,000 data warehouse tables, and 7,500 OLTP databases supporting the company’s business-critical ordering, processing, and fulfillment systems.

Jeff Carter, VP of Data, Analytics, and Machine Learning at Amazon, admits to some initial concerns about such a massive migration causing disruptions. “We didn’t want to be the team that shut down the business,” he says. “But what we found was, in pretty much every instance, availability was better.”

Another benefit has been cost efficiency: Carter says the new environment costs about 30% to 50% less to maintain than the previous architecture.

“Five years ago, our ability to grow and analyze our business was limited by technology choice,” Carter says. “By migrating to the AWS technologies and implementing the data lake, we have been able to scale to meet our business needs.”

Another early adopter is The Pokémon Company International (TPCi). The company was using a third-party NoSQL document database that contained profile data and changelogs for more than 300 million users of its Pokémon GO mobile game. “Achieving the speed we wanted required maintaining many indices and managing many complex nodes—more than 300 at one point,” says Jeff Webb, TPCi Development Manager.

To address the massive influx of Pokémon GO users, along with a variety of database-management issues, TPCi decided to migrate to fully managed AWS database services, including Amazon Aurora PostgreSQL to host its user profile data. “We went from 300 nodes to 30, and we are no longer paying for database licenses. Our monthly database cost has dropped by tens of thousands of dollars,” says Webb.
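
One reason such a move is less disruptive than teams fear: Aurora is wire-compatible with PostgreSQL, so standard client libraries work unchanged. Below is a minimal sketch of a profile lookup using psycopg2; the cluster endpoint, credentials, and user_profiles table are placeholders for illustration, not TPCi’s actual schema.

    # Minimal sketch: reading a user profile from Aurora PostgreSQL
    # with the standard psycopg2 driver. Endpoint, credentials, and
    # the user_profiles table are hypothetical.
    import psycopg2

    conn = psycopg2.connect(
        host="example-cluster.cluster-xyz.us-west-2.rds.amazonaws.com",
        dbname="profiles",
        user="app_user",
        password="...",  # in practice, fetch from a secrets manager
    )

    def get_profile(user_id):
        """Look up one user profile by primary key."""
        with conn.cursor() as cur:
            cur.execute(
                "SELECT user_id, display_name, updated_at"
                " FROM user_profiles WHERE user_id = %s",
                (user_id,),
            )
            return cur.fetchone()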

Learn more about how to break free from legacy databases and get more value from your data.

To hear more insights from Shawn Bice about modern database architectures, check out the Ahead of the Pack podcast.


Copyright © 2020 IDG Communications, Inc.