The importance of good customer data stewardship was made abundantly clear by the Facebook-Cambridge Analytica scandal earlier this year. And shepherding data into insights is only going to become a more demanding task with the implementation of the General Data Protection Regulation (GDPR), through which the European Parliament, the Council of the European Union and the European Commission intend to strengthen and unify data protection in Europe – and, by extension, in many countries around the world. But what about the processes that surround that data, and the ever-growing customer feedback loops enterprises are building to deliver new sources of value and enhance their bottom line?
In my last post, I put forward a model for thinking about digital transformation and growth that centered on three pillars: customer intimacy, technology evolution and culture. I also described the first pillar, which centers on building the feedback loop. In this post, I'd like to focus on the second pillar: architecting, scaling and evolving that feedback loop. As we are learning every day, it's not just technology itself that changes rapidly, but the environments in which we deploy it – be they around regulation and trust, or user experience and business models. The need for rapid adaptation may be fueling practices such as Agile and DevOps, but the ability to evolve at today's market speeds is ultimately won or lost through architecture.
What exactly is modern architecture anyway?
In the “old days” when technology was insulated from the outside world by the hard boundaries at the edges of the enterprise, architecture was a very simple concept. In the previous era, application architecture generally reflected infrastructure architecture and vice versa. Large, monolithic applications ran on virtualized “big iron” machines. Virtualization created some resource utilization efficiency, but the basic unit of work was still “the machine.”
But today, architecture is a moving target. It typically includes components of SaaS and Cloud, but tomorrow’s platforms may just as easily come to be defined by machine learning and distributed APIs. The truth is, we just don’t know exactly what the future will hold, which is why we need a very elastic approach to how we conceive of, build and improve technology and value delivery going forward.
At its core, modern architecture needs to be "Built to Change." It must be a system for incorporating, organizing and structuring ongoing change rather than some permanent structure or framework. In fact, modern architecture must be quite dynamic – it's like a glacier in some ways: it may seem to move almost imperceptibly, a little bit at a time, but it needs to be constantly moving to reshape itself.
This is partly to stay ahead of technical debt – that sum of software development decisions made to satisfy short-term requirements that over the long term will limit the architecture’s ability to accommodate new needs. Unless technical debt is continuously paid down through architectural evolution and refactoring, even the most robust software architecture will devolve over time to a fragile and poorly understood patchwork quilt of changes and modifications increasingly unable to support new demands.
Why containers and microservices are no panacea
You might say: "But that's exactly what containers and microservices are for!" After all, microservice-based architectural patterns aim to abstract and bound discrete chunks of functionality to the point where each chunk can be easily modified and even reorganized in response to new needs. Containers certainly turn the notion of large, fixed units of work on its head. A containerized unit of work can be as large or as small as the task requires, and unlike virtual machines, can be created, shut down, or moved almost instantaneously.
Containers also allow clean partitioning of compute and data, and in fact, a microservice-based approach enforces a certain level of architectural rigor due to the insulation between units of functionality. And the need to orchestrate all of these units of functionality to create a complete system also requires designing and optimizing for how all of the different pieces fit together.
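To make the idea of a bounded, independently deployable unit of functionality concrete, here is a minimal sketch in Python using only the standard library. The "inventory" service, its stock data and its URL scheme are all hypothetical, invented for illustration; the point is that the service exposes one narrow, well-defined contract that other components can depend on without knowing anything about its internals.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical "inventory" microservice: one bounded piece of
# functionality behind a small, well-defined HTTP contract.
STOCK = {"widget": 3, "gadget": 0}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        item = self.path.strip("/")
        if item in STOCK:
            body = json.dumps({"item": item, "in_stock": STOCK[item]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "unknown item"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

# Port 0 asks the OS for any free port; a real deployment would use
# container orchestration to wire services together instead.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

reply = json.loads(urlopen(f"http://127.0.0.1:{server.server_port}/widget").read())
print(reply["in_stock"])
server.shutdown()
```

Because the service's entire surface is that one small contract, it can be rewritten, rescaled or replaced without touching its consumers – which is exactly the insulation between units of functionality described above.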
But even a microservice-based architecture requires ongoing evolution and re-architecture to be able to evolve the system over time. Obsolete pieces of functionality need to be carefully removed, new functionality requires related components to be redesigned, and the interactions between components and orchestration of the system as a whole must be carefully maintained to prevent inefficiencies, unintended consequences, and the creep of unnecessary complexity.
Enter “serverless computing”
Containers are a helpful tool for building highly efficient and scalable service architectures; it's no accident that Google's massive search engine is container-based. But if containers shift the focus to smaller, more separable units of functionality instead of machines, will next-generation infrastructure architecture become invisible to developers and architects over time? Such increased abstraction away from machines and the nitty-gritty details of physical infrastructure is part of the lure of serverless computing, which lets developers focus almost entirely on code.
Given historical patterns and the emergence of serverless capabilities (e.g. FaaS – Functions as a Service), there’s little question that infrastructure will continue to be increasingly abstracted, with more focus on the work than the underlying mechanisms needed to accomplish it. How many developers today know – or need to know – about the underlying machine instructions their code eventually produces?
The serverless world is appealing, and containers are already pushing us in that direction. With containers, developers take a greater role in deploying their code, and operations teams need a clearer understanding of what's in the containers and how they interoperate. DevOps itself will morph as new architectures take shape. With a serverless architecture, who is responsible for operations? Is it the cloud provider? Or some new hybrid function? Regardless, the need for attention to designing and evolving the architecture of a FaaS-based system will be as important as with previous technologies – if not more so – given the highly distributed and interconnected nature of such compute environments.
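The FaaS idea above can be sketched in a few lines of Python. This is not any particular provider's API – the event names, handler signature and toy "platform" below are all hypothetical – but it captures the essential shape: code becomes a pure function of an incoming event, with no knowledge of the machine, process or scaling mechanism that runs it.

```python
import json

# Hypothetical FaaS-style handler: a pure function of (event, context),
# with no knowledge of the infrastructure that invokes it.
def handle_order_created(event, context):
    order = event["order"]
    total = sum(item["price"] * item["qty"] for item in order["items"])
    return {"status": 200,
            "body": json.dumps({"order_id": order["id"], "total": total})}

# A toy stand-in for the platform: it routes events to handlers on demand,
# the way a FaaS runtime invokes functions. Real platforms add scaling,
# retries, cold starts and per-invocation billing on top of this.
ROUTES = {"order.created": handle_order_created}

def invoke(event_type, payload):
    return ROUTES[event_type]({"order": payload}, context={})

response = invoke("order.created",
                  {"id": "A-17",
                   "items": [{"price": 4.0, "qty": 2},
                             {"price": 1.5, "qty": 4}]})
print(response["body"])
```

Note what is absent: no server process, no port, no deployment unit. That absence is precisely the abstraction – and it is also why the architecture of how these functions compose and interact becomes the remaining design problem.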
New “front end”: instrumentation and telemetry
If the advent of serverless architectures with abstracted and deconstructed infrastructure is on the horizon, then perhaps our focus on "back end" infrastructure can shift to a deeper understanding of how software is actually being experienced rather than how it's being operationalized. We're currently just scratching the surface of understanding user needs through instrumentation, telemetry and analysis. The future isn't about operational dashboards, it's about deep insights.
The truth is, no matter how good a handle you have on your software architecture and your development processes, your feedback loop simply cannot evolve without deeply understanding the users of your software through data-driven insights. While the health of the architecture of your software will always remain a critical enabler for evolving your software, the architecture of your customer feedback loops will now be on the critical path to your ability to evolve the user experience itself.
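A minimal sketch of what this instrumentation can look like in practice, assuming a hypothetical in-process event tracker (the event names and the "abandoned cart" analysis are invented for illustration): each user interaction is recorded as a structured event, and the analysis asks a behavioral question rather than an operational one.

```python
from datetime import datetime, timezone

# Hypothetical in-process telemetry: every user interaction is captured
# as a structured event, so analysis can move past "is it up?" to
# "how is it actually being used?"
EVENTS = []

def track(user, action, **props):
    EVENTS.append({"ts": datetime.now(timezone.utc).isoformat(),
                   "user": user, "action": action, **props})

# Simulated sessions: the instrumentation rides along with normal usage.
track("u1", "search", query="red shoes")
track("u1", "view_item", item="widget")
track("u2", "search", query="blue hats")
track("u2", "abandon_cart", items=3)
track("u3", "search", query="red shoes")
track("u3", "abandon_cart", items=1)

# A first "insight" rather than a dashboard count: of users who
# searched, what share ended up abandoning a cart?
searched = {e["user"] for e in EVENTS if e["action"] == "search"}
abandoned = {e["user"] for e in EVENTS if e["action"] == "abandon_cart"}
abandon_rate = len(searched & abandoned) / len(searched)
print(round(abandon_rate, 2))
```

The operational question ("is the service healthy?") says nothing here; the behavioral question ("where do users give up?") is what feeds the feedback loop described above.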
The bottom line is that even as technology evolves, the role of architecture will remain central while its scope and purpose evolves. As data-driven insights and feedback loops become essential for your business, architecture will become more directly connected to your business model. The question goes from “Is your software working as intended?” to “What is your software telling us about our business?” How you architect and evolve the feedback loops for your business will determine your ability to effectively engage with your customers and to sense and respond faster than your competition.