The current database landscape can be confusing, even for experienced technology professionals. There was a time when a one-size-fits-all database system was an adequate answer to any database question, but that’s no longer true. Decisions about database systems now involve a dizzying array of application requirements, products, features, buying criteria, and vendor claims.
This environment has left application architects and strategists unsure how to approach database technology going forward. Meanwhile, the recent explosion in database choices has fragmented the market and made it harder to sort through the options. The question for many is: Is this a permanent state of affairs or a transitional phase? What can we expect next, and how can application architects plan for it?
I think we are starting to see the answers. As we move toward a new industry view of data management (more on that later), we are also moving towards a world in which smarter databases will enable simpler applications.
Let’s take a look at where we’ve been and where we’re going.
Breaking the mold: the introduction of NoSQL
In recent years the NoSQL battle cry has served as a welcome exhortation to broaden our thinking about database systems, and as an impetus to invest in core innovations designed to address 21st century challenges.
The central observation, which is undoubtedly correct, was that the traditional SQL RDBMS is ill-suited to many modern requirements: SQL databases are not universally appropriate for every kind of data management problem, and there are better approaches for certain workloads and data center architectures. A 21st century approach to database management should therefore embrace non-SQL approaches alongside SQL, because the orthodox SQL RDBMS has its limits.
This has had a positive effect on the industry. The NoSQL movement has led to major open-source initiatives, a renaissance of academic research into database systems, and very substantial investment by venture investors. It is all good news, because the challenges of modern data management are very real.
Ask “Why NoSQL?” and you will usually get answers along the lines of “SQL can’t do X, Y and Z.” Google the same question and you will find pages of commentary making these points repeatedly.
It’s a conversation about:
- Unstructured data,
- Developer productivity,
- Schema fragility,
- Disaster recovery,
- Network partition models, etc.
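To make the schema-fragility point concrete, here is a minimal sketch in Python (the table and record shapes are hypothetical, chosen only for illustration). A rigid relational table must be migrated before rows can carry a new attribute, while schemaless documents simply carry whatever structure each record needs:

```python
import json
import sqlite3

# Rigid relational schema: every row must fit the declared columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")

# Adding a new attribute requires a schema migration first:
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")
conn.execute("INSERT INTO users VALUES (2, 'Grace', 'grace@example.com')")

# Schemaless documents: each record carries its own structure,
# so a new field appears without any migration step.
docs = [
    json.dumps({"id": 1, "name": "Ada"}),
    json.dumps({"id": 2, "name": "Grace", "email": "grace@example.com"}),
]
loaded = [json.loads(d) for d in docs]
print(loaded[1]["email"])
```

The trade-off, of course, is that the document approach pushes schema enforcement from the database into the application.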
These are clear requirements for many modern applications, and we absolutely need database systems that can deliver them.
It is also quite true that the traditional RDBMS products are poorly suited to these needs. For context, consider that IBM shipped the first gigabyte-capacity disk drive in 1980, around the time these RDBMS systems were designed, and that drive was the size of a refrigerator. The IBM PC/XT shipped with a hard drive in 1983, and it held just 10MB. Clearly the 1980s RDBMS was designed for a very different kind of data management! The products have evolved beyond recognition in the intervening decades, but they retain serious limitations in a world of cloud, mobile and IoT.
Note that these observations hold for traditional SQL RDBMS products, but we should be careful to separate the limitations of old product designs from the data model (relational), the data manipulation language (SQL), and the data guarantees (ACID transactions). It is one thing to say that a client-server RDBMS does not autoscale in a modern commodity data center, and quite another to say that the SQL language cannot support autoscaling. In fact, there are SQL database products today that do exactly that; we will come back to this part of the discussion.
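To underline that separation, here is a small sketch using Python’s standard DB-API style, with the built-in sqlite3 module standing in for whatever engine is actually deployed (the table and query are invented for illustration). Nothing in the SQL text or the application code ties it to a particular storage architecture or scaling model:

```python
import sqlite3

# The SQL text is engine-agnostic: it says nothing about how the
# engine stores data, replicates it, or scales out.
QUERY = "SELECT name FROM users WHERE id = ?"

def lookup_user(conn, user_id):
    # Works against any DB-API-style connection; sqlite3 is just a
    # convenient single-node stand-in for a larger SQL engine.
    cur = conn.execute(QUERY, (user_id,))
    row = cur.fetchone()
    return row[0] if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")
print(lookup_user(conn, 1))
```

Swap the connection for one to a distributed, autoscaling SQL engine and the query and application logic are unchanged; the scaling question lives in the engine, not in the language.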
In summary the NoSQL movement made a compelling case for change: It is true that the traditional RDBMS is unable to address the full spectrum of modern data management needs.
Does NoSQL address the problem?
And yet, arguably, the NoSQL movement hasn’t gone far enough. The question that remains is: Have we, as an industry, come to understand the issues well enough to move forward with a coherent view of how to manage data? Does the CIO have a way to frame the challenges and develop a robust data management strategy? Few would argue that she does.
One view is that “whatever-works” is the new status quo and we should simply get used to it. In this vision, the CIO will have many different database systems, each specialized for different application requirements, each with its own storage models, SLAs, security systems, backup mechanisms, operational best practices, tool chains and programmer/DBA skillsets. Unfortunately that idea runs contrary to the self-evident economics of the software industry – every CIO wants fewer solutions, not more.
“Whatever-works” is unaffordable and unmanageable, and as a result, the industry will not remain fragmented. The database landscape will inevitably consolidate into an evolved form that supports the economic imperatives of operational simplicity, vendor rationalization, workforce skills management, strategic data warehousing, and so on.
So how and when will it consolidate? And how do we, as technology professionals, go about looking for answers in the meantime?
In my blog next week, I’ll discuss my views on this subject, but in the meantime, let me know yours!