Fans of HBO’s “Silicon Valley” may recall the plotline earlier this season in which Erlich Bachman secures $200,000 in VC funding for See Food, a camera app that recognizes various kinds of food and instantly surfaces useful information, such as nutritional data.

Bachman is 5% technologist and 95% charlatan, give or take, so naturally there’s a hitch: See Food doesn’t exist. The funding is the result of a misunderstanding that Bachman quickly compounded into a lie. Antics ensue as Bachman, determined to keep the money, attempts to transmute his vaporware into a working prototype.

Here’s what struck me: Many of these antics, such as Bachman’s attempt to con a class of Stanford undergrads into training a machine learning model, are predictably hilarious—but from a technical standpoint, virtually none of them is implausible.

Indeed, whereas just a few years ago an app like See Food would have been essentially impossible to build, today it’s not a stretch that a few Bachman-like misfits could actually cobble together the pieces. “Silicon Valley” is TV fantasy, but Bachman and company rely on the same resources developers would use in the real world: cloud infrastructure, neural nets, and so on. That these resources have become so accessible—accessible enough to be casually mentioned in a mainstream TV show—is a testament to how much and how quickly things have changed.

Crucially, this shift is about more than camera apps; it’s about machine intelligence (MI) moving from niche applications to ubiquity.
As we’ve written previously, thanks to APIs and cloud services, we’re entering the first years in which virtually any enterprise that wants to harness machine intelligence will be able to—which means that enterprises that don’t will risk being left behind.

The prospect of digital ecosystems that widely incorporate MI raises a critical question for CIOs: When it comes to machine intelligence, how does one assess when to build a system from the ground up versus when to invest in third-party solutions?

Rent vs. build: Machine intelligence’s core ingredients

Machine intelligence requires three ingredients: computing muscle, algorithms, and data. The degree to which a company is strong in any one area informs when that company should go proprietary and when it should fill gaps with third-party as-a-service offerings. Strength should be assessed not only in terms of technologies and budgets but also in terms of human talent and the ability to execute.

Compute

Prior to the cloud, harnessing the computing power necessary for machine learning (ML) typically required building one’s own supercomputer—a spectacularly forbidding prospect in terms of cost, time, and requisite expertise. Cloud infrastructure has changed that, rapidly diminishing the marginal cost of additional computing power and enabling companies to rent where they previously would have been forced to build or to enter prohibitively expensive partnerships.

Consequently, fewer scenarios exist in which building one’s own ML compute infrastructure confers a competitive advantage. The expense and effort might be justified if you’re building a unique service or require specialized hardware—but even then, the benefit is debatable.
If a custom system performs 5% better than as-a-service offerings but costs 10 times as much and takes 10 times longer to develop and deploy, it can still easily lose money in the end, despite its superior performance.

That’s not to say all cloud infrastructure is equal, of course. Top providers such as Google (our employer) and Microsoft don’t just scale up spare computing cores, for example; they build custom chips specifically for MI. This sort of specialty hardware reinforces the challenge companies face in building proprietary systems that can compete with top cloud services—but it also means that companies must carefully vet cloud infrastructure providers.

Algorithms

Proprietary algorithms can be incredibly lucrative. If your company’s code can do something better than anyone else’s—such as extracting profit signals from noisy datasets—the IP may be valuable enough to justify the cost of development.

But that cost can be prohibitively high, especially given that many, if not most, companies don’t already possess the requisite in-house expertise. Moreover, in many cases what’s important isn’t owning algorithms—it’s quickly delivering improved experiences to the market and staying ahead of the competition. Many apps include a search function, for example, but most app makers don’t build search engines themselves—they use Google or Bing. Whatever advantage a company might gain from a proprietary search solution is usually negated by the opportunity cost it incurs during development. Machine intelligence is likely to function similarly.

It’s likely that legions of future apps will include machine intelligence technologies being pioneered today, such as natural language processing, speech-to-text, and image recognition.
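The opportunity-cost argument here, like the 5%-better-at-10x-cost compute tradeoff earlier, comes down to simple arithmetic. A minimal sketch with hypothetical figures (every number below is invented for illustration, not drawn from any real deployment):

```python
# Hypothetical build-vs-rent comparison: a custom system that performs
# somewhat better can still come out behind once its higher cost and
# longer time-to-market are priced in.

def net_value(annual_benefit, build_cost, years_to_deploy, horizon_years=5):
    """Total value realized over a planning horizon, ignoring discounting.

    Benefit only accrues after deployment, so slow builds forfeit
    revenue-earning years within the horizon.
    """
    productive_years = max(0.0, horizon_years - years_to_deploy)
    return annual_benefit * productive_years - build_cost

# As-a-service: baseline benefit, modest cost, live in about a quarter.
rented = net_value(annual_benefit=1_000_000, build_cost=100_000,
                   years_to_deploy=0.25)

# Custom build: 5% more benefit, 10x the cost, 10x the deployment time.
built = net_value(annual_benefit=1_050_000, build_cost=1_000_000,
                  years_to_deploy=2.5)

print(f"rented: {rented:,.0f}  built: {built:,.0f}")
```

Under these assumptions the rented option nets roughly three times the value of the custom build, even though the custom system performs better; the exact figures matter far less than the structure of the comparison.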
These capabilities are increasingly packaged as pretrained models exposed via APIs—meaning that for many companies, it may be more important to develop API management skills than to invest heavily in in-house ML algorithms.

Data

Whereas relatively few businesses organically possess the compute and algorithm ingredients, data strength is somewhat more common. There may eventually be a degree of commoditization in the infrastructure and algorithms many services use, but the data those services analyze will often vary from company to company. Take the same neural net technology and feed it drastically different training sets, and its conclusions over time may well be quite different.

Indeed, companies have been pining for years for ways to activate their data—that’s why “big data” became a mainstream phrase and “data scientist” remains one of today’s hottest job titles. We’re starting to see more companies find creative ways to turn their data into revenue opportunities. Vivanda, a company that matches flavor preferences to food, for example, was spun out of spice company McCormick’s FlavorPrint technology, which leveraged the company’s expertise and data to create a “digital fingerprint” for any food or recipe. Imagine how many other data-to-product transitions are possible now that ML algorithms and compute infrastructure have become so accessible.

Machine learning will be defined by ecosystems: You don’t have to do it all yourself

In the last decade, we’ve seen innumerable examples of platforms and ecosystems that redefine the way products and services are built. Google Maps, for example, is both an aggregator of demand, because it can be infinitely replicated for consumers and developers, and a fulfiller of demand, because it can be integrated into the services developers build, such as ride-hailing apps.
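To ground the as-a-service theme, here is a minimal sketch of what consuming a pretrained image-recognition model through a vendor API might look like. The endpoint URL, request format, and response schema below are all hypothetical placeholders; real providers each define their own.

```python
# Hypothetical sketch of renting machine intelligence: package an image,
# send it to a vendor's pretrained model, parse the labels that come back.
# The endpoint and schema are illustrative, not any real provider's API.
import base64
import json

API_URL = "https://vision.example.com/v1/annotate"  # hypothetical endpoint

def build_request(image_bytes, features=("LABEL_DETECTION",)):
    """Package raw image bytes as a JSON request body for the hypothetical API."""
    return json.dumps({
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "features": list(features),
    })

def top_label(response_body):
    """Pull the highest-confidence label out of a hypothetical API response."""
    labels = json.loads(response_body).get("labels", [])
    if not labels:
        return None
    return max(labels, key=lambda label: label["score"])["description"]

# In production, build_request's output would be POSTed to API_URL with an
# auth token; here we parse a canned response to show the full round trip.
sample_response = json.dumps({
    "labels": [
        {"description": "hot dog", "score": 0.97},
        {"description": "sandwich", "score": 0.41},
    ]
})
print(top_label(sample_response))
```

The ML itself reduces to a few lines of request-and-response plumbing, which is exactly why developing API management skills can matter more than owning the underlying algorithms.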
The takeaway is that opportunities continue to explode for companies to leverage one another’s technology, creating end-user experiences defined by multiple parties and distributing the work of demand generation and value creation across ecosystems of participants.

This phenomenon is extending to machine learning and artificial intelligence. When you have strengths in core ML ingredients, leverage them. When you have blank spots, look for partners with adjacent needs and skills, and look to the growing as-a-service market to move quickly without spending years and millions of dollars on development. Our intelligent future is waiting to be assembled—but CIOs don’t need to do it alone.