2017 saw an explosion of machine learning in production use, with even deep learning and artificial intelligence (AI) being leveraged for practical applications.
“Basic analytics are out; machine learning (and beyond) are in,” says Kenneth Sanford, U.S. lead analytics architect for collaborative data science platform Dataiku, as he looks back on 2017.
Sanford says practical applications of machine learning, deep learning, and AI are “everywhere and out in the open these days,” pointing to the “super billboards” in London’s Piccadilly Circus that leverage hidden cameras gathering data on foot and road traffic (including the make and model of passing cars) to deliver targeted advertisements.
So where will these frameworks and tools take us in 2018? We spoke with a number of IT leaders and industry experts about what to expect in the coming year.
Enterprises will operationalize AI
AI is already here, whether we recognize it or not.
“Many organizations are using AI already, but they may not refer to it as ‘AI,’” says Scott Gnau, CTO of Hortonworks. “For example, any organization using a chatbot feature to engage with customers is using artificial intelligence.”
But many of the deployments leveraging AI technologies and tools have been small-scale. Expect organizations to ramp up in a big way in 2018.
“Enterprises have spent the past few years educating themselves on various AI frameworks and tools,” says Nima Negahban, CTO and co-founder of Kinetica, a specialist in GPU-accelerated databases for high-performance analytics. “But as AI goes mainstream, it will move beyond small-scale experiments to being automated and operationalized. As enterprises move forward with operationalizing AI, they will look for products and tools to automate, manage, and streamline the entire machine learning and deep learning life cycle.”
Negahban predicts 2018 will see an increase in investments in AI life cycle management, and technologies that house the data and supervise the process will mature.
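To make the shift from experiment to operation concrete, here is a minimal, purely illustrative sketch of what "operationalizing" the life cycle can mean in code: each stage (ingest, train, evaluate) becomes an automated, repeatable step that can be re-run whenever data or code changes, with a quality gate before anything ships. The `Pipeline` class, stage names, and toy data are all invented for this example, not taken from any vendor's product.

```python
# Hypothetical sketch: an ML life cycle as a chain of automated stages.
class Pipeline:
    def __init__(self):
        self.stages = []

    def stage(self, fn):
        self.stages.append(fn)
        return fn

    def run(self, artifact):
        # Each stage receives the previous stage's output, so the whole
        # life cycle re-runs end to end instead of as one-off experiments.
        for fn in self.stages:
            artifact = fn(artifact)
        return artifact

pipeline = Pipeline()

@pipeline.stage
def ingest(_):
    # Toy labeled data: y = 2x + 1.
    return [(x, 2 * x + 1) for x in range(10)]

@pipeline.stage
def train(data):
    # "Training" here is just a least-squares line fit on the toy data.
    n = len(data)
    mean_x = sum(x for x, _ in data) / n
    mean_y = sum(y for _, y in data) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in data)
             / sum((x - mean_x) ** 2 for x, _ in data))
    return {"slope": slope, "intercept": mean_y - slope * mean_x}

@pipeline.stage
def evaluate(model):
    # Gate deployment on a quality check, as a production system would.
    assert abs(model["slope"] - 2) < 1e-6, "model failed validation"
    return model

model = pipeline.run(None)
print(model)
```

The point of the structure is that adding monitoring, versioning, or retraining triggers becomes a matter of inserting stages, which is roughly what the life cycle management tools Negahban describes automate at scale.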
AI reality will lag the hype once again
Ramon Chen, chief product officer of master data management specialist Reltio, is less sanguine. Chen notes that for several years, predictions have touted potential breakthroughs in the use of AI and machine learning, but the reality is that most enterprises have yet to see quantifiable benefits from their investments in these areas.
Chen says the hype to date has been overblown, and most enterprises are reluctant to get started due to a combination of skepticism, lack of expertise, and most important of all, a lack of confidence in the reliability of their data sets.
“In fact, while the headlines will be mostly about AI, most enterprises will need to first focus on IA (information augmentation): getting their data organized in a manner that ensures it can be reconciled, refined, and related, to uncover relevant insights that support efficient business execution across all departments, while addressing the burden of regulatory compliance,” Chen says.
Chad Meley, vice president of marketing at Teradata, agrees that 2018 will see a backlash against AI hype, but believes a more balanced approach that applies both deep learning and shallow learning to business opportunities will emerge as a result.
While there may be a backlash against the hype, it won’t stop large enterprises from investing in AI and related technologies.
“AI is the new big data: Companies race to do it whether they know they need it or not,” says Monte Zweben, CEO of Splice Machine.
Meley points to Teradata’s recently released 2017 State of Artificial Intelligence for Enterprises report, which identified a lack of IT infrastructure as the greatest barrier to realizing benefits from AI, surpassing issues like access to talent, lack of budget, and weak or unknown business cases.
“Companies will respond in 2018 with enterprise-grade AI products and supporting offerings that overcome the growing pains associated with AI adoption,” Meley says.
Bias in training data sets will continue to trouble AI
Reltio’s Chen isn’t alone in his conviction that enterprises need to get their data in order. Tomer Shiran, CEO and co-founder of analytics startup Dremio, a driving force behind the open source Apache Arrow project, believes a debate about data sets will take center stage in 2018.
“Everywhere you turn, companies are adding AI to their products to make them smarter, more efficient, and even autonomous,” Shiran says. “In 2017, we heard competing arguments for whether AI would create jobs or eliminate them, with some even proposing the end of the human race. What has started to emerge as a key part of the conversation is how training data sets shape the behavior of these models.”
It turns out, Shiran says, that models are only as good as the training data they use, and developing a representative, effective training data set is very challenging.
“As a trivial example, consider the example tweeted by a Facebook engineer of a soap dispenser that works for white people but not those with darker skin,” Shiran says. “Humans are hopelessly biased, and the question for AI will become whether we can do better in terms of bias or will we do worse. This debate will center around data ownership — what data we own about ourselves, and the companies like Google, Facebook, Amazon, Uber, etc. — who have amassed enormous data sets that will feed our models.”
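The soap-dispenser failure Shiran cites can be sketched in a few lines: a presence detector that thresholds a sensor reading works perfectly for the group it was calibrated on and fails for a group absent from the training data. All of the numbers, group labels, and the threshold-fitting rule below are invented for illustration; this is not the dispenser's actual mechanism.

```python
import random

random.seed(0)

# Hypothetical reflectance readings (invented numbers for illustration).
group_a = [random.gauss(0.8, 0.05) for _ in range(500)]     # lighter skin
group_b = [random.gauss(0.3, 0.05) for _ in range(500)]     # darker skin
background = [random.gauss(0.1, 0.05) for _ in range(500)]  # no hand present

def fit_threshold(positives, negatives):
    # A stand-in for training: split the gap between the two classes.
    return (min(positives) + max(negatives)) / 2

def recall(samples, threshold):
    # Fraction of true positives the detector actually fires on.
    return sum(s > threshold for s in samples) / len(samples)

# "Train" only on group A: the model never sees group B's signal range.
skewed = fit_threshold(group_a, background)
print(f"group A recall: {recall(group_a, skewed):.2f}")  # near 1.0
print(f"group B recall: {recall(group_b, skewed):.2f}")  # near 0.0

# A representative training set closes the blind spot.
balanced = fit_threshold(group_a + group_b, background)
print(f"group B recall: {recall(group_b, balanced):.2f}")
```

Nothing in the "model" is malicious; the bias lives entirely in which examples were collected, which is exactly why the debate Shiran describes centers on the data sets rather than the algorithms.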
AI must solve the ‘black box’ problem with audit trails
One of the big barriers to the adoption of AI, particularly in regulated industries, is the difficulty in showing exactly how an AI reached a decision. Kinetica’s Negahban says creating AI audit trails will be essential.
“AI is increasingly getting used for applications like drug discovery or the connected car, and these applications can have a detrimental impact on human life if an incorrect decision is made,” Negahban says. “Detecting exactly what caused the final incorrect decision leading to a serious problem is something enterprises will start to look at in 2018. Auditing and tracking every input and every score that a framework produces will help with detecting the human-written code that ultimately caused the problem.”
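The auditing Negahban describes, recording every input and every score a model produces, can be sketched as a thin wrapper around prediction. The names here (`audit_log`, `audited_predict`, the toy scoring function, the version string) are all hypothetical; the point is only that each decision is stored with enough context to trace it back later.

```python
import hashlib
import json
import time

audit_log = []

def audited_predict(score_fn, features, model_version):
    """Score an input and append a traceable record to the audit trail."""
    score = score_fn(features)
    audit_log.append({
        "timestamp": time.time(),
        "model_version": model_version,
        # Hash the canonicalized input so records are tamper-evident.
        "input_sha256": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "features": features,
        "score": score,
        "decision": score >= 0.5,
    })
    return score

# Toy scoring function standing in for a trained model.
toy_model = lambda f: min(1.0, 0.1 * f["sensor_reading"])

audited_predict(toy_model, {"sensor_reading": 7}, model_version="v1.2.0")
audited_predict(toy_model, {"sensor_reading": 2}, model_version="v1.2.0")

# Replay the trail to find which inputs drove positive decisions.
flagged = [r for r in audit_log if r["decision"]]
print(len(flagged), "positive decision(s) recorded")
```

Pinning each record to a model version is what lets an investigator connect a bad decision back to the specific code and inputs that produced it, which is the "black box" gap regulated industries need closed.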
Cloud adoption will accelerate to support AI innovation
Horia Margarit, principal data scientist for big-data-as-a-service provider Qubole, agrees that enterprises in 2018 will seek to improve their infrastructure and processes for supporting their machine learning and AI efforts.
“As companies look to innovate and improve with machine learning and artificial intelligence, more specialized tooling and infrastructure will be adopted in the cloud to support specific use cases, like solutions for merging multi-modal sensory inputs for human interaction (think sound, touch, and vision) or solutions for merging satellite imagery with financial data to catapult algorithmic trading capabilities,” Margarit says.
“We expect to see an explosion in cloud-based solutions that accelerate the current pace of data collection and further demonstrate the need for frictionless, on-demand compute and storage from managed cloud providers,” he adds.