For a few years now, really since the advent of x86 server virtualization, I’ve been hearing about “abstraction.” The myth is that, thanks to virtualization, application developers and IT operations no longer need to worry about the underlying hardware, because virtualization can run their applications on any “commodity” server. Of course, they did need to worry about that commodity server being x86, or a “standard high volume” server. Oh, and they really wanted it to be “Nehalem” generation (or later) to minimize virtualization performance overhead. Oh, and if they used encryption (or decryption) they really wanted it to be “Westmere” generation (or later), thanks to Intel® AES-NI. Oh…you get my point.
The reality is that you can run your application on any server, but if you care about performance, efficiency (i.e. being able to deploy as much application capacity as possible on your given infrastructure), or security, you really want to be able to control where and how your application runs. I started with servers, but you can of course extend this to storage and networking equipment.
This is all well and good if you’re running one application at a time and have the time to fine-tune everything. Few application developers build a simple app anymore, though. We live in a service-oriented IT world now. Application developers typically build rich services that tap into diverse data sources and employ complex algorithms to serve intelligence to large numbers of users. And IT operations need to deploy a growing number of these services quickly, continuously, and efficiently to create business value for their organizations.
This is where Intel’s Software Defined Infrastructure (SDI) is aiming. I know you’ve heard of Software Defined XX and may be quite wary of the phrase. So let me point out that SDI is distinct in that it doesn’t focus on the mechanisms of programmability of the infrastructure, but on how to utilize it in a service-driven way! At the heart of SDI is the Orchestrator, whose role is to deploy services and run them in an automated, efficient, and secure fashion based on a specific Service Level Agreement (SLA). To do so, the orchestrator needs to Watch the infrastructure, the application, and the service, so it can Decide on the best deployment layout for the service (or on improvements to its deployment to deal with exceptions), and Act on those decisions. For example, the orchestrator could observe a bottleneck in an application’s interface and identify the root cause as competition for the I/O capacity of one of the instances. It could then decide to migrate that instance to another server.
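The Watch/Decide/Act loop above can be sketched in a few lines. This is purely illustrative: the metric names, the I/O-wait threshold, and the "migration" step are my own assumptions, not any real orchestrator or Intel API.

```python
# Hypothetical sketch of an orchestrator's Watch / Decide / Act loop.
# All names, metrics, and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class InstanceMetrics:
    instance_id: str
    host: str
    io_wait_pct: float  # share of time the instance spends waiting on I/O

def watch(fleet):
    """Collect telemetry for every instance (here: static sample data)."""
    return fleet

def decide(metrics, io_threshold=40.0):
    """Flag instances whose I/O wait suggests contention on their host."""
    return [m for m in metrics if m.io_wait_pct > io_threshold]

def act(congested, spare_host="host-b"):
    """'Migrate' congested instances by reassigning them to a spare host."""
    for m in congested:
        m.host = spare_host
    return congested

fleet = [
    InstanceMetrics("web-1", "host-a", 12.0),
    InstanceMetrics("db-1", "host-a", 55.0),  # I/O-bound noisy neighbor
]
moved = act(decide(watch(fleet)))
print([m.instance_id for m in moved])  # only db-1 crosses the threshold
```

In a real system the Watch step would stream telemetry continuously and the Act step would drive a live-migration API, but the control-loop shape is the same.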
One of the critical tools that facilitates the Watch function is telemetry. Telemetry is all the real-time information that can be derived from infrastructure elements: temperature and power consumption, security posture, resource utilization, performance, and so on. There is so much data hidden in the hardware that, most of the time, some level of processing of the raw data is required to make sense of it. In the last few years we, at Intel, have made significant progress in making this data more consumable. With Intel® Node Manager we’ve simplified power and thermal telemetry. (Node Manager can also help with the “Act” phase with functions like power/thermal capping.) With Intel® TXT and attestation we’ve made trust posture accessible, and we’re in the process of making much more telemetry accessible for performance and efficiency optimizations.
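To make the "raw data needs processing" point concrete: hardware often exposes telemetry as monotonically increasing counters (e.g. cumulative energy in joules), which only become meaningful once you compute a rate over an interval. The sketch below uses made-up sample values, not the output of any real interface.

```python
# Illustrative only: converting raw cumulative-energy counter samples into
# average power per interval. Sample values are invented for the example.

def power_watts(samples):
    """Turn (timestamp_s, energy_joules) counter samples into average
    power in watts over each successive interval."""
    readings = []
    for (t0, e0), (t1, e1) in zip(samples, samples[1:]):
        readings.append((e1 - e0) / (t1 - t0))
    return readings

samples = [(0.0, 1000.0), (1.0, 1085.0), (2.0, 1175.0)]
print(power_watts(samples))  # [85.0, 90.0]
```

This is the kind of small derivation an orchestrator (or a tool like Node Manager) performs constantly so that a consumer sees "this node is drawing ~90 W" rather than an opaque counter value.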
For more on this topic, check out the Intel Chip Chat Podcasts.
What kind of telemetry would you like to be able to use in your environment?