By George Trujillo, Principal Data Strategist, DataStax
I’ve been a data practitioner responsible for delivering data management strategies in financial services, online retail, and just about everything in between. Across these roles, I’ve come across patterns that enable organizations to generate business insights faster and innovate with data.
These patterns encompass a way to deliver value to the business with data; I refer to them collectively as the “data operating model.” The model aligns people, processes, and technology around a common vision and objective. Outcomes such as becoming data-driven, data democratization, automation, self-service, greater developer velocity, faster insights, and increased revenue can all result from the efficiency that a data operating model engenders.
These outcomes are attractive, but for practitioners like you, execution is where the rubber meets the road. In this article, I’ll explore the three execution patterns I’ve seen drive success with data: cloud-native technologies, real-time data, and open source software.
Execution patterns in an operating model
If, as Gartner puts it, an operating model brings the broader business model to life, then execution patterns are an important part of breathing life into an operating model. Patterns maintain consistency when executing on the operating model. Mike Tyson is often quoted as saying, “Everyone has a plan until they get punched in the mouth.”
Similarly, an operating model can be challenged when there are changes in leadership, architects, technical leaders, developers, product managers, or new additions to a technology stack. But established execution patterns help the operating model, strategy, and vision stay on track. They’re also an excellent aid to bringing new team members up to speed quickly.
1) The cloud-native pattern
The first execution pattern is cloud-native. Gartner predicts that cloud-native platforms will serve as the foundation for more than 95% of new digital initiatives by 2025, up from less than 40% in 2021. Why are enterprises shifting to the cloud? They’re trying to capture the benefits of the private, hybrid, or public cloud: lower total cost of ownership, scalable unit economics, multi-region reliability, digital transformation, and faster delivery of applications and machine learning models. These are all business benefits of cloud-native adoption.
Communicating the business value of cloud-native adoption is an important part of this pattern. Cloud-native is much more than cloud, Kubernetes, services, CI/CD, and automation. In the context of applications and data, creating and maintaining a cloud-native strategy provides portability, resilience, fault tolerance, scalability, and flexibility. A cloud-native pattern helps manage the costs and resources of the technology stack for the business in a consistent way.
Speed helps drive innovation. The faster applications can be deployed, data can be integrated and refined, and different algorithms and data sets can be tested for new models, the faster the business can make new decisions. A cloud-native pattern helps reduce barriers to innovation, supports frictionless change, and enables innovation with data to happen faster.
2) The real-time data pattern
The ability to assess data in real time is set to be one of the biggest data analysis trends of 2022. According to Gartner, more than 50% of new business systems will use real-time data to improve decision-making by 2022. Making decisions quickly on trusted, real-time data is a competitive advantage.
Real-time data flows through a data ecosystem. The more easily the right data can flow to the right people at the right time, the healthier the ecosystem is for generating business outcomes. These data flows are generated by applications, streaming and messaging technologies, and databases. As the business seeks different ways of looking at data, or additional data sources for business insights, the speed of the business is determined by how easily these flows can be correlated, integrated, and refined in new ways.
A real-time data pattern guides architects, data engineers, and developers in change management. Reducing barriers to data access and complexity facilitates innovation with data. Complexity is the nemesis of data quality, trust, and business speed. Sticking with recognized, proven patterns helps minimize changes that would create barriers and complexity (data swamps) in future cycles.
It is increasingly common for lines of business to ask for different data sets to be integrated and refined downstream for real-time processing. I can’t remember the last time a business executive asked for more batched data. What I hear instead is: “We need to make decisions on data faster, in real time.”
For digital applications, streaming and messaging technologies, and the databases that support them, data has to flow easily through the ecosystem. Establishing a pattern for this kind of real-time data flow helps everyone in the data ecosystem understand and support the direction the organization must move in to meet its business objectives.
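The article names no specific technologies for this pattern, so as an illustration only, here is a minimal in-memory publish/subscribe sketch in Python. It shows the shape of the flow described above: producers publish events to a topic, and multiple downstream consumers each receive the event the moment it arrives, rather than waiting on a batch job. The `MiniBroker` class and the topic and consumer names are hypothetical; in a real deployment this role is played by a streaming platform such as Apache Pulsar or Apache Kafka.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class MiniBroker:
    """Toy in-memory message broker (illustrative only): producers publish
    events to named topics; consumers subscribe with callbacks so each
    event reaches the right people at the right time."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Push-based delivery: every subscriber sees the event immediately,
        # instead of waiting for a nightly batch load.
        for handler in self._subscribers[topic]:
            handler(event)

# Usage: two downstream consumers (analytics and fraud detection) both
# receive the same order event as soon as it is published.
broker = MiniBroker()
seen_by_analytics: list = []
seen_by_fraud: list = []
broker.subscribe("orders", seen_by_analytics.append)
broker.subscribe("orders", seen_by_fraud.append)
broker.publish("orders", {"order_id": 1, "amount": 42.50})
```

The design choice worth noting is fan-out: one event, many independent consumers. That is what lets a business add a new downstream use of the same data without touching the producer.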
3) The open source software pattern
Finally, there’s open source software. OSS drives a great deal of technology innovation for business. For one thing, OSS enables teams to experiment or build proofs of concept with fully featured software, essentially for free. That moves technology decisions away from endless debate and toward quick success (or failure with learning). It also reduces the risks that come with getting locked in to a particular vendor.
Leveraging open source has become an important part of application and data management strategies. In the CDO community, a consistent theme among data leaders is the importance of data culture. Open source is about more than innovation, scalable unit economics, and ease of use. It is also about culture: it shapes how a group thinks and what it values and believes. When cloud-native developers and real-time data engineers look at data innovation for digital transformation, they naturally gravitate toward open source. It’s a pattern that helps nurture innovation in a data culture.
As a data practitioner, I consistently see businesses following the execution patterns of cloud-native adoption, an increased focus on real-time data, and leveraging open source. So, what’s the key to putting them all together?
It’s creating congruence with an operating model: a vision that aligns cloud-native adoption, a real-time data management strategy, and open source. These three execution patterns have to work together and complement one another. Too often, though, cloud strategy, data strategy, and open source decisions are led by different business units with separate, unaligned goals. All three patterns should be part of a unifying vision and a single data operating model.
In a future article, I’ll share a data journey with execution patterns that create business outcomes with a real-time data operating model, and show you a real-time data platform to help data consumers innovate.
Learn more about DataStax.
About George Trujillo:
George is principal data strategist at DataStax. Previously, he built high-performance teams for data-value-driven initiatives at organizations including Charles Schwab, Overstock, and VMware. He works with CDOs and data executives on the continual evolution of real-time data strategies for their enterprise data ecosystems.