The cloud market has been a picture of maturity of late.

The pecking order for cloud infrastructure has been relatively stable, with AWS at around 33% market share, Microsoft Azure second at 22%, and Google Cloud a distant third at 11%. (IBM, Oracle, and Salesforce are in the 2-3% range.)

Revenue growth remains solid across the industry, though it is slowing somewhat, and none of the Big 3 is outperforming the others enough to materially alter the balance of power. That overall stability has extended to prices, which, with some exceptions, have remained relatively flat. And at this point, the market has evolved to the point where the major players all have similar offerings.

But the emergence of generative AI changes everything.

The frenzy created by the public release of OpenAI’s ChatGPT has triggered an arms race among hyperscalers to differentiate themselves by developing their own large language models (LLMs), building platforms that enable enterprises to create generative AI applications, and integrating generative AI throughout their portfolios of service offerings.

As cloud computing expert David Linthicum explains, “What’s occurring is that the cloud providers are approaching feature saturation in terms of the services they can provide versus their peers. Thus, these services will begin to commoditize, and with the popularity of multicloud, core services such as storage and computing will be pretty much the same from cloud to cloud.”

He adds, “This is behind the drive to generative AI by the cloud providers.
It’s a race to determine who owns this space and the ability to de-commoditize their services with this new technology layered on top of more traditional cloud services.” At this early stage in the gen AI race, there’s no clear leader, but all the players are pouring resources into the fray.

Microsoft, which bankrolled OpenAI to the tune of $10 billion, has embedded ChatGPT features into everything from productivity apps such as Word and Excel, to its Edge browser, to a cloud offering aimed at enterprises, the Azure OpenAI Service.

Google is racing to build out its gen AI platform; co-founders Sergey Brin and Larry Page have even come out of semi-retirement to jumpstart the Google AI initiative. Google has its own large language model, PaLM, is building its own AI chips (Tensor Processing Units), and is launching new industry-specific AI-based services under the Vertex AI banner. Most recently, the company launched gen AI-based services aimed at healthcare and life science organizations.

And AWS recently announced Bedrock, a fully managed service that enables enterprise software developers to embed gen AI functionality into their programs. AWS also builds its own low-cost AI chips (Inferentia and Trainium) in limited volumes; the company uses the chips internally to power its gen AI capabilities, and it is making them available to customers.

While generative AI is certainly the hottest trend in the cloud market, there are others that CIOs need to be aware of. Here are the top cloud market trends and how they are impacting CIOs’ cloud strategies.

The gen AI gold rush — with little clarity on cost

“It’s the year of AI,” declares Forrester Research. “Every hyperscaler, SaaS provider, and startup hopes to use the buzz around AI to their advantage. Cloud providers are pushing AI services to break out of sluggish revenue growth and differentiate themselves from rivals.
Enterprise cloud customers are eager to use AI wherever they can for strategic initiatives, but without busting IT budgets already under pressure from multicloud complexity and sprawl.”

The Big 3 hyperscalers aren’t the only players offering generative AI-based cloud services to enterprise IT. IBM is stepping up with its watsonx AI platform. And Nvidia, which supplies the vast majority of everyone’s generative AI chips (GPUs), has built its own full-stack cloud platform, DGX Cloud, an AI service that lives inside the Oracle cloud and will soon be available on both Azure and Google Cloud.

For CIOs, this means there will be multiple cloud-based options for building generative AI functionality into existing business processes, as well as for creating new AI-based applications.

The challenge, says Bernard Golden, executive technical advisor at VMware, is making sure sensitive corporate data is protected and kept out of the general pool of data on which LLMs are trained.

Linthicum adds that generative AI-based apps will be “costly to run,” so “CIOs need to find the proper use cases for this technology.”

And for CIOs looking to make the most of gen AI capabilities built into the cloud offerings they depend on, initial explanations of how pricing will work have been rather opaque.

Cloud price creep — with leaps thanks to AI

IBM caused quite a stir when it announced price increases for storage services of as much as 26%, as well as smaller price hikes for IaaS and PaaS offerings.

Generally speaking, however, cloud providers have held the line on price increases in order to remain competitive. But the slowdown in growth across the industry will likely put pressure on all cloud vendors to raise prices going forward. As Linthicum says, “We’re entering the phase of technology when they need to harvest value from their investments.
I would suspect that prices will creep up over the next several years.”

Of course, one benefit of using cloud services is that customers can select whatever infrastructure configuration suits their needs. If they choose a first-generation processor, there are values to be had. But for organizations that need high-performance computing, or those looking to reap the benefits of AI, selecting a newer-model chip comes at a premium.

For example, choosing to run your workload on an Nvidia H100 chip rather than an earlier-model A100 will result in a price increase of more than 220%, says Drew Bixby, head of operations and product at Liftr Insights.

And as the hyperscalers add more GPUs (which are far more expensive than traditional CPUs) to the mix in their own data centers, those costs will likely be passed on to enterprise customers.

Industry clouds ripe for gen AI edge

Industry clouds are on the rise — and will benefit from the emergence of generative AI, says Brian Campbell, principal at Deloitte Consulting, who explains that industry clouds “tend to be at the forefront of both business and technology executives’ agendas.”

Tech execs like the speed, flexibility, and efficiency that industry-specific clouds provide, and business leaders appreciate the ability to focus scarce internal resources on areas that enable them to differentiate their business.
Early adopters of industry clouds were healthcare, banking, and tech companies, but adoption has since expanded to energy, manufacturing, the public sector, and media.

Campbell adds, “With the recent explosion of gen AI, executives are increasingly looking at how to use gen AI beyond proofs-of-concept, thus turning to the major providers of industry clouds, hyperscalers, independent software vendors, and systems integrators who have been quickly embedding gen AI alongside other technologies in their offerings.”

Line between cloud, on-prem blurs

The old paradigm of a clear line of demarcation between cloud and on-prem no longer exists. Many terms apply to this phenomenon of cloud-style services being deployed in a variety of scenarios at once: hybrid cloud, private cloud, multicloud, edge computing, or, as IDC defines it, dedicated cloud infrastructure as a service (DCIaaS).

IDC analyst Chris Kanaracus says, “We increasingly see the cloud as not about a particular location, but rather a general operating model for IT. You can have the cloud anywhere in terms of attributes such as scalability, elasticity, consumption-based pricing, and so on. The challenge moving forward for CIOs is to stitch it all together in a mixed-vendor environment.”

For example, AWS offers Outposts, a managed service that enables customers to run AWS services on-premises or at the edge. Microsoft offers a similar service called Azure Stack. Traditional hardware vendors also have as-a-service offerings that can run in data centers or at the edge: Dell APEX and HPE GreenLake.

Increased interoperability as lock-in loses some luster

Competing cloud vendors aren’t particularly incentivized to enable interoperation.
The business model for cloud providers is to lock in a customer, get them accustomed to that particular vendor’s tools, processes, marketplaces, software development platforms, and so on, and keep encouraging that customer to move more resources to their cloud.

But enterprise customers have overwhelmingly adopted a multicloud approach, and cloud vendors have been forced to deal with that reality.

For example, Microsoft and Oracle recently launched Oracle Database@Azure, which allows customers to run Oracle’s database services on Oracle Cloud Infrastructure (OCI) hardware deployed inside Microsoft Azure datacenters.

And storage leader NetApp recently announced a fully managed service that enables customers to bring business-critical workloads across both Windows and Linux environments to Google Cloud without refactoring code or redesigning processes.

As these barriers to interoperability come down, enterprises will benefit by being able to move storage volumes and applications to the most appropriate cloud platform.

Rise of the citizen developer

There has always been tension between traditional IT and so-called shadow IT. The emergence of low-code and no-code solutions has made it easier for non-IT staffers to build simple applications. For example, Microsoft’s Power Platform enables the creation of mobile and web apps that can interact with business tools.

But ChatGPT has blown any technical constraints out of the water. For example, with Microsoft’s Copilot, end users can write content and create code with a simple prompt. For IT leaders, this can be a double-edged sword.
It’s beneficial to the organization if employees can boost their productivity by creating new tools and software programs.

But Golden points out that tools like Copilot are “great until they’re not great.” In other words, the simple, one-off applications created by citizen developers can create security risks, aren’t built to scale, and don’t necessarily interoperate with complex business processes.

FinOps gains traction and tools

During the pandemic, there was a “mad dash” among enterprises to shift workloads to the cloud in order to make them more easily accessible to remote workers. “Now they are getting the big bills,” Linthicum says.

As a result, organizations are adopting FinOps technology to manage and optimize cloud costs. Linthicum says FinOps enables organizations to reduce technical debt and “drive more cost savings by normalizing the use of cloud resources. In essence, it fixes mistakes that were made in the past, such as the use of the wrong cloud services, too much data movement, etc.”

Forrester researchers concur, noting that “whenever economic headwinds hit, IT cost optimization gains momentum. For cloud cost management, high interest hit in 2018 and once again this year.” The good news for IT is that all of the major cloud providers offer FinOps services, and there is a slew of third-party software vendors that offer cloud cost management tools.
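The arithmetic behind both the GPU price premium and a FinOps cost rollup is simple, and a small sketch makes it concrete. The Python below is a minimal, illustrative example only: the record fields, SKU names, and hourly rates are hypothetical (chosen so the GPU premium lands near the 220%-plus figure cited above), not any provider's actual billing schema or list prices. Real FinOps tools do this same aggregation against actual billing exports.

```python
from collections import defaultdict

# Hypothetical usage records; field names, SKUs, and rates are illustrative,
# not a real provider's billing schema or prices.
records = [
    {"service": "compute", "sku": "gpu-new-gen", "hours": 100, "rate": 6.50},
    {"service": "compute", "sku": "gpu-old-gen", "hours": 100, "rate": 2.00},
    {"service": "storage", "sku": "object-std", "hours": 720, "rate": 0.03},
]

def cost_by_service(records):
    """Aggregate spend per service, largest first, so cost drivers surface."""
    totals = defaultdict(float)
    for r in records:
        totals[r["service"]] += r["hours"] * r["rate"]
    return dict(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))

def premium(new_rate, old_rate):
    """Percent price increase when moving a workload to a pricier SKU."""
    return (new_rate - old_rate) / old_rate * 100

print(cost_by_service(records))               # compute dominates the bill
print(f"{premium(6.50, 2.00):.0f}% premium")  # newer GPU SKU vs. older one
```

Even this toy rollup shows why Linthicum's "normalizing the use of cloud resources" matters: a handful of premium SKUs can dwarf everything else on the bill, and ranking spend by service is the first step toward finding them.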