Back in February 2017, an Amazon S3 service disruption in AWS' oldest region (US-EAST-1) shut down several major websites and services, including Slack, Trello, Quora, Business Insider, Coursera and Time Inc.
Other users reported that they were unable to control devices connected via the Internet of Things, since IFTTT was also down.
These kinds of disruptions are becoming more and more business-critical for today's digital economy. To prevent such situations, cloud users should always consider the shared responsibility model of the public cloud. However, artificial intelligence (AI) can also help. An AI-defined infrastructure – specifically, an AI-powered IT management system – can help avoid the impact of public cloud service disruptions.
A typo crashed the AWS-powered Internet!
After every service disruption, AWS publishes a summary of what happened during the incident. This is what happened on the morning of February 28, 2017:
"The Amazon Simple Storage Service (S3) team was debugging an issue causing the S3 billing system to progress more slowly than expected. At 9:37AM PST, an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended. The servers that were inadvertently removed supported two other S3 subsystems. One of these subsystems, the index subsystem, manages the metadata and location information of all S3 objects in the region. This subsystem is necessary to serve all GET, LIST, PUT, and DELETE requests. The second subsystem, the placement subsystem, manages allocation of new storage and requires the index subsystem to be functioning properly to correctly operate.
The placement subsystem is used during PUT requests to allocate storage for new objects. Removing a significant portion of the capacity caused each of these systems to require a full restart. While these subsystems were being restarted, S3 was unable to service requests. Other AWS services in the US-EAST-1 Region that rely on S3 for storage, including the S3 console, Amazon Elastic Compute Cloud (EC2) new instance launches, Amazon Elastic Block Store (EBS) volumes (when data was needed from a S3 snapshot), and AWS Lambda were also impacted while the S3 APIs were unavailable."
You can read the full text of Amazon's outage message if you want to learn more.
AWS outages aren't anything new, and the more AWS customers run their web infrastructure on the cloud giant, the more end users will be affected by future incidents. According to SimilarTech, Amazon S3 is already used by 152,123 websites and 124,577 unique domains.
However, the philosophy of "everything fails all the time" (Werner Vogels, CTO of Amazon.com) means that if you are using AWS, you must "design for failure" – something cloud role model and video-on-demand provider Netflix is doing to perfection. To this end, Netflix has developed its Simian Army, an open source toolset everyone can use to run a highly available cloud infrastructure on AWS.
Netflix "simply" uses the two levels of redundancy AWS offers: multiple regions and multiple availability zones (AZs). Multiple regions are the master class of using AWS – very complex and sophisticated, since you must build and manage entirely separate infrastructure environments within AWS' worldwide distributed cloud infrastructure.
Multiple AZs are the preferred and "easiest" way to achieve high availability (HA) on AWS. In this case, the infrastructure is built across more than one data center (AZ).
In doing so, a single-region HA architecture is deployed across at least two AZs, with a load balancer in front of them controlling the data traffic.
However, even when "typos" don't happen, this incident shows that human error is still the biggest risk in running IT systems. In addition, you can blame AWS only to a certain extent, since the public cloud is about shared responsibility.
Shared responsibility in the public cloud
An important detail of the public cloud is its self-service nature. Depending on its DNA, the provider takes responsibility only for specific areas; the customer is responsible for the rest. The public cloud is based on the shared responsibility model: the provider and its customers divide the field of duties among themselves, and the customer's self-responsibility plays a major role.
In the context of IaaS usage, the provider is responsible for the operations and security of the physical environment. The provider takes care of:

Setup and maintenance of the entire data center infrastructure.
Deployment of compute power, storage, networking and managed services (like databases) and other microservices.
Provisioning of the virtualization layer customers use to request virtual resources on demand.
Deployment of services and tools customers can use to manage their areas of responsibility.

The customer is responsible for the operations and security of the logical environment. This includes:

Setup of the virtual infrastructure.
Installation of operating systems.
Configuration of networks and firewall settings.
Operation of their own applications and self-developed (micro)services.

Thus, the customer is responsible for the operations and security of their own infrastructure environment and of the systems, applications, services and data running on top of it. However, providers like Amazon Web Services or Microsoft Azure offer comprehensive tools and services customers can use, e.g.
to encrypt their data and to ensure identity and access controls. In addition, enablement services (microservices) exist that customers can adopt to develop their own applications more quickly and easily.
Within its own area of responsibility, however, the customer is on its own and must take self-responsibility. This part of the shared responsibility can be handled by an AI-defined IT management system – in other words, an AI-defined infrastructure.
An AI-defined infrastructure can help to avoid service disruptions
An AI-defined infrastructure can help to avoid service disruptions in the public cloud. The basis of this kind of infrastructure is a General AI that combines three major human abilities, enabling enterprises to tackle IT and business process challenges:

Understanding: By creating a semantic data map, the General AI understands the world of the company in which its IT and business exist.
Learning: By creating Knowledge Items, the General AI learns best practices and reasoning from experts. Knowledge is taught in atomic pieces of information (Knowledge Items) that represent separate steps of a process.
Solving: With machine reasoning, problems are solved in ambiguous and changing environments. The General AI dynamically reacts to the ever-changing context, selecting the best course of action. Based on machine learning, the results are optimized through experiments.

To put this into the context of an AWS service disruption:

Understanding: The General AI creates a semantic map of the AWS environment as part of the world in which the company exists.
Learning: IT experts create Knowledge Items while they are configuring and working with AWS, from which the General AI learns best practices.
Thus, the experts teach the General AI contextual knowledge that includes what, when, where and why something needs to be done – for example, when a specific AWS service is not responding.
Solving: The General AI dynamically reacts to incidents based on the learned knowledge. Thus, the AI (probably) knows what to do at that very moment – even if no high-availability setup was considered from the beginning.

Frankly speaking, none of this is magic. Like every newborn organism, an AI-defined infrastructure needs to be trained, but afterwards it can work autonomously, detecting anomalies and service disruptions in the public cloud and solving them. To achieve this, you need the knowledge of experts who have a deep understanding of AWS and of how the cloud works in general. These experts need to teach the General AI their contextual knowledge, which includes not only what, when and where, but also why. They teach the AI in atomic pieces (Knowledge Items, KIs) that can be indexed and prioritized by the AI. Context and indexing enable these KIs to be combined into many different solutions.
KIs created by various IT experts form pooled expertise that is further optimized by machine selection of the best knowledge combinations for problem resolution. This type of collaborative learning improves process time task by task. However, the number of possible permutations grows exponentially as knowledge is added. Connected to a knowledge core, the General AI continuously optimizes performance by eliminating unnecessary steps and even changing routes based on other contextual learning.
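To make the Knowledge Item idea more concrete, here is a minimal sketch of such a reasoning loop, assuming a toy representation in which each KI carries an applicability condition, one atomic action and a priority. All names here (KnowledgeItem, resolve, the S3 scenario values) are illustrative assumptions, not a real product API:

```python
# Hypothetical sketch: a reasoner repeatedly picks the highest-priority
# applicable Knowledge Item (KI) and executes its atomic step against a
# context, until no KI applies anymore.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class KnowledgeItem:
    name: str
    condition: Callable[[Dict[str, str]], bool]   # when does this KI apply?
    action: Callable[[Dict[str, str]], Dict[str, str]]  # one atomic step
    priority: int = 0                             # higher = preferred

def resolve(context: Dict[str, str],
            kis: List[KnowledgeItem],
            max_steps: int = 10) -> Dict[str, str]:
    """Chain applicable KIs until the incident is resolved or knowledge runs out."""
    for _ in range(max_steps):
        applicable = [ki for ki in kis if ki.condition(context)]
        if not applicable:
            break  # nothing left to do: resolved, or a knowledge gap
        best = max(applicable, key=lambda ki: ki.priority)
        context = best.action(context)
    return context

# Two toy KIs for an unresponsive S3-backed service:
kis = [
    KnowledgeItem(
        name="check-endpoint",
        condition=lambda ctx: ctx["s3_status"] == "unknown",
        action=lambda ctx: {**ctx, "s3_status": "down"},  # pretend the probe failed
        priority=10,
    ),
    KnowledgeItem(
        name="fail-over-to-secondary-region",
        condition=lambda ctx: ctx["s3_status"] == "down",
        action=lambda ctx: {**ctx, "s3_status": "recovered", "region": "us-west-2"},
        priority=5,
    ),
]

result = resolve({"s3_status": "unknown", "region": "us-east-1"}, kis)
```

Because each KI is atomic and condition-driven, adding a new KI extends the set of reachable solutions without rewriting existing ones – which is the property the pooled-expertise argument above relies on.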
And the bigger the semantic graph and knowledge core get, the better and more dynamically the infrastructure can react to service disruptions.
On a final note, do not underestimate the "power of we"! Our research at Arago revealed that with an overlap of 33 percent in basic knowledge, this knowledge can be and is used outside a specific organizational environment, i.e., across different client environments. The reuse of knowledge within a single client is up to 80 percent. Thus, exchanging basic knowledge within a community becomes imperative from an efficiency perspective and improves the abilities of the General AI.
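As a closing toy illustration of that community effect, the overlap and reuse figures can be thought of as simple set arithmetic over two clients' KI collections. The KI names and resulting percentages below are made up for illustration and do not reproduce the Arago research numbers:

```python
# Toy illustration of basic-knowledge overlap between two client
# environments; all KI names are hypothetical.
client_a = {"restart-service", "check-dns", "rotate-credentials",
            "scale-out", "clear-cache", "failover-az"}
client_b = {"restart-service", "check-dns", "clear-cache",
            "verify-backup", "patch-os", "failover-az"}

shared = client_a & client_b                       # reusable basic knowledge
overlap = len(shared) / len(client_a | client_b)   # share of all KIs both have
reuse_for_b = len(shared) / len(client_b)          # what client_b gains from the pool
```

The larger the shared set, the less each new client environment has to be taught from scratch – the efficiency argument behind community knowledge exchange.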