By George Trujillo, Principal Data Strategist, DataStax

Think about your favorite recipe. You might have all the ingredients for an apple pie, but there's no guarantee the elements will come together to produce a delicious dessert. Similarly, many organizations have built data architectures to remain competitive, but have instead ended up with a complex web of disparate systems that may be slowing them down.

In an earlier article, I discussed three proven ingredients for a holistic data platform approach to managing and harnessing data – cloud-native technologies, real-time data, and open source software (OSS) – to drive business value. Here, I'll dive into the recipe for bringing these elements together to help enterprises take full advantage of the real-time data that's critical to being a competitive business.

The challenge of data silos

Think of how frustrated you get when you have to wait 15 seconds for a response from a web browser. Then imagine how business users, analysts, and data scientists feel when they have to wait weeks or even months for the new datasets they've requested. This is the reality faced by many organizations that have cobbled together an array of siloed data management technologies.

It isn't uncommon for an organization to operate as many as five messaging systems and a different database technology for every day of the week. Strategies intended to solve specific problems have in many cases created technology stacks resembling the Tower of Babel.

Too often, strategy focuses on success within the confines of a single team. Teams that take a myopic view of cloud, analytics, database, and streaming technologies might create some measurable success, but viewed holistically their impact is limited.
Even organizations that understand the importance of a cohesive data strategy can find it exceedingly difficult to execute without getting bogged down by cross-functional team barriers and business friction that slow time to delivery.

Aligning data

A real-time data architecture should be designed around a set of aligned data streams that flow easily throughout the data ecosystem. An enterprise data management strategy has to align applications, streams, and databases to create a unified real-time data platform. Data has to keep getting easier to work with to enable creativity and innovation.

As Einstein may or may not have said: "Insanity is doing the same thing over and over and expecting different results." Likewise, data challenges must be addressed at a strategic level, not just at the project, use case, or line of business (LOB) level. Otherwise, enterprises are doomed to keep repeating the same mistakes. By creating a flexible and adaptable data architecture and ecosystem, organizations can drive business value.

The real-time data platform is the heart of an organization's data ecosystem. Like a heart, the real-time platform pumps data streams into the enterprise data ecosystem. And just as a human brain suffers from insufficient blood flow, a poor flow of data streams impairs real-time decision-making, machine learning, and AI. A strong real-time platform makes the entire data ecosystem healthier.

As I detailed in my previous article, the three keys to success for a data-driven business are cloud-native technologies, real-time data, and OSS. These converge to create an optimum data management strategy (see the figure below).

Using OSS helps enterprises avoid vendor lock-in, manage unit cost economics, and boost innovation. When organizations consider the cloud, they see the potential for innovation, transformation, new capabilities, market disruption with new services, data democratization, and self-service.
This presents an opportunity to take a fresh look at which technology stack is the right one to drive the business forward.

It's important to consider the alignment of applications, streaming (messaging and queuing) technologies, and databases. Data streams from applications, external sources, and databases often need to be correlated, aggregated, and refined downstream. LOBs should be empowered with easy access to data streams. Leveraging the data in these streams is easier when all three core pieces of the data ecosystem work together. Let's look at how to do this.

A unified real-time data platform

Kubernetes, the open source container orchestration system that automates software deployment, scaling, and management, is a key enabler here. It is the glue that allows applications to scale and expand easily across different environments.

Data needs to move easily with applications. Aligning Kubernetes with streaming technologies (such as Apache Kafka or Apache Pulsar) increases the speed of delivering new applications and machine learning models.

Real-time business needs are transforming databases into sources of streaming data, to be processed on demand. Having data flow from a database to a data warehouse or cloud storage and then back into memory for real-time decision-making takes too long. Databases must ingest and generate streams that work with applications and external streaming data easily, with low unit costs, and at scale.

Pulsar and Apache Cassandra®, the NoSQL, high-throughput, open source database, are excellent examples of the role OSS can play in a unified data architecture. Pulsar and Cassandra are highly scalable and have built-in capabilities that enable data to move easily across private, hybrid, and multi-cloud environments — and the applications that operate in them.
Kubernetes, Pulsar, and Cassandra can align as a platform that enables applications and data to work together, as shown in the diagram below.

This helps organizations accelerate (or decelerate) their move to a hybrid or multi-cloud strategy. Complexity and cross-team barriers break down when data streams from applications, external sources, and databases can flow together easily across on-premises, cloud, and multi-cloud environments. There is complete freedom of choice to run Kubernetes, Pulsar, and Cassandra on-premises or across multiple clouds.

When these components work together, they enable a focus on digital transformation.

Digital transformation is high on every organization's agenda as a way to accelerate business innovation and increase customer satisfaction. This requires aligning the organization to a common vision that creates business value. A data operating model helps enable business value as the data ecosystem evolves, but it also has to reduce the complexity that's so common in today's enterprise data ecosystems. Leveraging the execution patterns of cloud-native technologies, real-time data, and OSS supports consistency across the organization for the data operating model. Simply put, for businesses to move faster, data has to be easier to work with — as easy as apple pie.

Learn more about DataStax here.

About George Trujillo:

George Trujillo is principal data strategist at DataStax. Previously, he built high-performance teams for data-value-driven initiatives at organizations including Charles Schwab, Overstock, and VMware. George works with CDOs and data executives on the continual evolution of real-time data strategies for their enterprise data ecosystems.