Cracking the code: solving for 3 key challenges in generative AI

BrandPost By Chet Kapoor, Chairman & CEO of DataStax
Jul 20, 2023 · 7 mins
Artificial Intelligence | Machine Learning

A few key AI challenges, how leaders are thinking about them, and what we can do to address them.

Credit: iStock/PeopleImages


Generative AI is on everyone’s mind. It will revolutionize how we work, share knowledge, and function as a society. Simply put, it will be the biggest innovation we will see in our lifetime.

One of the biggest areas of opportunity is productivity. Think about where we are right now: we're facing workforce shortages, debt, inflation, and more. If we don't improve society's productivity, the economic consequences will continue.

With AI, we will see the compounding effects of productivity throughout society. In fact, McKinsey has referred to generative AI as the next productivity frontier. But while technology is certainly a catalyst for productivity, it doesn't drive transformation on its own. That starts with us, as leaders and enterprises: when companies bring AI into the enterprise and deploy it to increase productivity around the world, that in turn drives society forward.

As with any powerful new technology (think: the internet, the printing press, nuclear power), there are great risks to consider. Many leaders have expressed a need for caution, and some have even called for a pause in AI development.

Below, I’ll share a few key AI challenges, how leaders are thinking about them, and what we can do to address them.

Overcoming bias

AI systems draw data from limited sources. The vast majority of the data these systems rely on is produced by a segment of the population in North America and Europe, so AI systems (including GPT) reflect that worldview. Meanwhile, some 3 billion people still lack regular access to the internet and have not created any data of their own. And bias doesn't come only from data; it also comes from the humans working on these technologies.

Implementing AI will bring these biases to the forefront and make them transparent. The question is: how can we address, manage, or mitigate inherent bias as we build and use AI systems? A few things:

  • Tackle bias not just in your data; be aware it can also arise from how data is interpreted, used, and interacted with by users.
  • Lean into open source tools and data science. Open source can lower the technical barriers to fighting AI bias through collaboration, trust, and transparency.
  • Most importantly, build diverse AI teams that bring multiple perspectives to detecting and fighting bias. As Reid Hoffman and Maelle Gavet discussed in a recent Masters of Scale Strategy Session, we should "also incorporate a diversity of mindsets towards AI, including skeptics and optimists."

Policy and regulations

The pace of AI advancement is lightning-fast; new innovations seem to happen every day. With important ethical and societal questions around bias, safety, and privacy, smart policy and regulations around AI development are crucial.

Policymakers need a more agile process for learning and understanding the nuances of AI. I have always said that, over time, markets are more mature than any single mind. The same can be said of policy, except that given the rate of change in the AI world, we will have to shrink time. There needs to be a public-private partnership, and private institutions will play a strong role.

Cisco’s EVP and GM of Security and Collaboration, Jeetu Patel, shared his perspective in our recent discussion:

“We have to make sure that there’s policy, regulation, government- and private-sector assistance in ensuring that that displacement does not create human suffering beyond a certain point so that there’s not a concentration of wealth that gets even more exacerbated as a result of this.”

‘Machines taking over’

People are really afraid of machines replacing humans. Their concerns are valid, considering the human-like nature of AI tools and systems like GPT. But machines aren't going to replace humans; humans with machines will replace humans without machines. Think of AI as a co-pilot. It's the user's responsibility to keep the co-pilot in check and to understand its powers and limitations.

Shankar Arumugavelu, SVP and Global CIO at Verizon, says we should start by educating our teams. He calls it an AI literacy campaign.

“We’ve been spending time internally within the company on raising the awareness of what generative AI is, and also drawing a distinction between traditional ML and generative AI. There is a risk if we don’t clarify machine learning, deep learning, and generative AI – plus when you would use one versus the other.”

Then the question is: what more can you do if something that previously took you two weeks now takes you two hours? Some leaders will get super efficient and talk about reducing headcount and the like. Others will think: I've got all these people — what can I do with them? The smart move is to figure out how to channel the benefits of AI into more knowledge, innovation, and productivity.

As Goldman Sachs CIO Marco Argenti said, the interaction between humans and AI will completely redefine how we learn, co-create, and spread knowledge.

“AI has the ability to explain itself based on the reader. In fact, with the prompt, the reader almost becomes the writer. The reader and the writer are, for the very first time, on equal footing. Now we can extract relevant information from a corpus of knowledge in a way that actually follows your understanding.”

Working together

We’ve seen leaders calling for a pause on the development of AI, and their concerns are well-founded. It would be negligent and harmful not to consider the risks and limitations around the technology, and we need to take governance very seriously.

However, I don’t believe the answer is to stop innovating. If we can get the brilliant people working on these technologies to come together, and partner with government institutions, we’ll be able to balance the risks and opportunities to drive more value than we ever thought possible.

The outcome? A world where productivity is abundant, knowledge is accessible to everyone, and innovation is used for good.

Learn about vector search and how DataStax leverages it to unlock AI capabilities and apps for enterprises.

About Chet Kapoor:

Chet is Chairman and CEO of DataStax. He is a proven leader and innovator in the tech industry, with more than 20 years in leadership roles at innovative software and cloud companies, including Google, IBM, BEA Systems, WebMethods, and NeXT. As Chairman and CEO of Apigee, he led company-wide initiatives to build Apigee into a leading technology provider for digital business; Apigee, now part of Google, is a cross-cloud API management platform for multi- and hybrid-cloud environments. Chet took Apigee public before the company was acquired by Google in 2016. He earned his B.S. in engineering from Arizona State University.