Navigating the AI Landscape: Where to Invest to Realize the Greatest Gains


The key to unlocking value using AI is to ensure that you are using the right tools to solve particular problems. This session will cover examples of how to make the most of AI’s potential and will identify the key roles that need to be involved for successful implementation. You will also hear strategies for moving more quickly from pilot programs to integrating AI into business strategies across your organization.



00:00 [This transcript was auto-generated.]
Thank you for that introduction, and good afternoon to all of you. It's such a pleasure to be here and be part of this meeting, and I'm hoping you'll have some great takeaways from this discussion. What I'm going to be discussing today is navigating the artificial intelligence landscape: where to invest to realize the greatest gains. Just as a full disclaimer, I am an employee at NYU, and I serve on the boards of [inaudible] and Global Medical Response. The views expressed here are my own personal views and do not represent any of the institutions with which I'm affiliated. We can also make these slides widely distributed after the meeting if you so request, but I do ask that you make a formal request so that we know the intended uses. All right, let's jump right into it. I'm always fascinated with the World Economic Forum; they produce a Future of Jobs report annually, where they talk about the fastest growing jobs and the top 10 fastest declining jobs. You can see that the number one fastest growing role is AI and machine learning specialists. That is an indication of where the greatest need is, and also where the lowest supply is. But if you were to take a quick look at the top 10 fastest declining roles, you can see that many of these roles are being impacted by technology and, in some cases, artificial intelligence. So it prompted me to think: what should I do, and how do I familiarize myself with the AI technologies that are out there? So on March 7, earlier this year, I did my very first prompt with OpenAI, on ChatGPT specifically, which was: can you tell me who is Chris Boone from AbbVie?
And what it actually stated, which I thought to be relatively accurate, was: Chris Boone is the Vice President and Global Head of Health Economics and Outcomes Research at AbbVie, a pharmaceutical company that focuses on developing drugs for various health conditions. As the head of HEOR, Chris is responsible for leading a team that conducts research to demonstrate the value of AbbVie's drugs to patients, healthcare providers, and payers. It goes on and on, but it essentially concludes with: prior to joining AbbVie, Chris held several leadership positions at other pharmaceutical companies and academic institutions. I read this, and the first thing I thought was, wow, that's pretty accurate, pretty spot on. So it prompted me to go to my second official prompt, which was: where did Chris Boone work prior to AbbVie? I only asked that because it said I had worked at other pharmaceutical companies and academic institutions before. And what it gave back to me was: I am not aware of which Chris Boone you're referring to; there may be multiple people with that name. However, if you're referring to the Chris Boone who is the Vice President and Global Head of HEOR at AbbVie, he previously worked at Pfizer, where he held the position of Vice President and Head of Real World Data and Analytics. Now, that part is true. But then it says: prior to that, he worked at United BioSource Corporation as a Senior Vice President of Real World Evidence, and at PharMetrics as the Vice President of Research and Data Operations. That, unfortunately, is not true. So what you're seeing is a clear indication, or an opportunity, if you will, to see that data quality and data accuracy are important as we feed into many of these models. The output is only as good as the data, and so it's truly a garbage in, garbage out situation.
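The garbage in, garbage out point above can be made concrete with a small sketch: before trusting a generated biography, cross-check its specific claims against a record you actually trust. This is a minimal illustration, not a production fact-checking pipeline; the trusted record and the generated claims here are hypothetical examples.

```python
# Minimal sketch: flag generated claims that are not backed by a trusted record.
# Both "trusted_record" and "generated_claims" are hypothetical examples.

def verify_claims(generated_claims, trusted_record):
    """Split claims into those supported by the trusted record and those not."""
    verified, unverified = [], []
    for claim in generated_claims:
        (verified if claim in trusted_record else unverified).append(claim)
    return verified, unverified

# Facts confirmed by an authoritative source (e.g. an HR system).
trusted_record = {"Pfizer", "AbbVie"}

# Employers asserted by a generative model -- some real, some hallucinated.
generated_claims = ["Pfizer", "United BioSource Corporation", "AbbVie"]

verified, unverified = verify_claims(generated_claims, trusted_record)
print("Verified:", verified)
print("Needs review:", unverified)
```

The point of the sketch is simply that model output should be treated as a draft to be validated against ground truth, not as ground truth itself.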
What I want to do now, speaking along the lines of data quality, is share a quote I'm fascinated by. Joshua Smith, who's over at the IBM T.J. Watson Research Center, was quoted saying: anywhere there is data and the need for deeper insight, AI has a role to play in accelerating innovation at scale. My personal belief is that, whether companies want to accept it or not, every single company in every single industry is a data company. So if you think about what Joshua Smith is saying, you can see how apropos it is for every company to be thinking about the utility of AI in its business operations, to be either more efficient or more effective in certain activities. There is also an annual survey produced by Forbes, and this year they decided to focus on AI. So I'm going to give you a little bit of the feedback we're hearing from the CEO's vantage point on how they think about AI. The first question was: which emerging AI technologies will create the most opportunities for business? When asked this question, 58% of them responded with predictive AI, 12% responded with generative AI, and a smaller percentage responded with robotic process automation. Then you see 5G, the Internet of Things, and virtual reality. But when asked about companies that have actually used generative AI — and keep in mind this is from the perspective of the CEO — 59% of those companies are using or experimenting with generative AI as part of their business processes, 29% plan to use it but have not yet begun, and 12% literally have no plans to use it, which I think is very concerning, considering I just said that every company is a data company, whether they want to accept it or not.
When asked what the expected impact of AI on headcount is, in 2024, 22% of them said fewer workers are needed, but by 2028, 74% of the respondents anticipate that fewer workers will be needed. So, as you can see from the eyes of the CEO, there is an impetus to really think about how to utilize artificial intelligence, ultimately in lieu of actual headcount or human intervention. The questions I hope to address today, in light of some of the data I just provided, are: number one, what valuable use cases are we seeing across industries utilizing generative AI? Number two, what are some leading practices and near-term actions learned from the pioneering companies advancing the use of generative AI? And third, what are the three ways corporate executives can prepare their organizations for generative AI? Those questions will essentially be my outline for the rest of this discussion. We'll talk about the potential value, we'll talk about public opinion on generative AI, we'll talk about the need for corporate oversight, and then I'm going to leave you with my three-point game plan. And of course, we'll have our Q&A thereafter. First and foremost, let's talk about what generative AI is. I know many people have heard the term, and most folks are repeating it, but just to ground ourselves for this conversation, I'm going to use this definition: unlike traditional AI systems that are typically designed to solve specific tasks or problems, generative AI is more focused on creativity and on generating new and original content, comparable to what a human might create, without human input. It works by using deep learning algorithms to analyze and learn from large amounts of data, such as images or text.
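That "learn from data, then generate new content" loop can be illustrated with a toy sketch. Real generative AI uses deep neural networks trained on enormous corpora; the word-bigram model below is only a stand-in, chosen because it makes the two phases — learning statistical patterns from data, then sampling new sequences from them — concrete in a few lines. The corpus here is a made-up example.

```python
import random
from collections import defaultdict

def train(corpus):
    """Learn which word tends to follow which in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Produce new text by repeatedly sampling a learned next-word transition."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "every company is a data company and every data company needs a strategy"
model = train(corpus)
print(generate(model, "every"))
```

Even at this toy scale, the output is shaped entirely by the training data, which is exactly why data quality dominates the quality of what gets generated.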
When you think about some of the key industries utilizing generative AI at this point: you have healthcare and life sciences, which is where I work. You have manufacturing and engineering, with many of the robotics technologies. You're seeing robotics and autonomous systems, as with many of the electric vehicles and driverless cars. You're definitely seeing it in financial services, with much of the stock data reporting. You're seeing it in creative industries — hence the reason you actually have a strike now with the Writers Guild. And you're also seeing it very much in transportation and logistics, with a lot of the route planning and tracking of the demand for drivers for many of those deliveries. But some industries have already seen a transformative impact from using generative AI. In biopharma, which is exactly where I am, as has been reported by BCG, generative AI identified a novel drug candidate for the treatment of idiopathic pulmonary fibrosis in 21 days. Typically, that would take years using many of the traditional methods. So you can see where generative AI is being used in the discovery of novel drugs or molecules to treat certain diseases. In the technology industry, 88% of software developers reported higher productivity when using a generative AI code assistant. In the insurance industry, 30% of insurtech platforms leveraging generative AI were able to reduce approximately 30% of customer service costs. So there are measurable impacts of how generative AI is being used in many of these industries, and how it is essentially making these
organizations much more efficient. When you think about global corporate investment in AI: in 2013, you saw roughly $14.57 billion. Now you've seen a 13x increase by 2021. There was a slight dip in 2022, which I think was a factor of many companies assessing where they were in their investments, but you can still see a pretty substantial increase from 2013. Even from a public sector perspective, you're seeing a significant increase in the federal budget for AI R&D from FY2018 to where we are now in 2023. Now, one of the things you've probably heard about is the significant cost it takes to build your own large language model. If you look at the training costs for some of these — and granted, this is in millions of dollars — you can see AlphaCode at 0.09, GPT-NeoX at 0.24, PaLM at roughly $8 million, and BLOOM at a little over $2 million. So there is a significant cost to training many of these large models, and I want you to keep that in mind as we continue through this conversation. Training a custom large language model will offer greater flexibility, but that comes with high cost, which is essentially what I was just describing. So you really have three sorts of opportunities to think about as you're considering your models. You can develop a new cutting-edge foundation model, which will pretty much run you $50 to $90 million or more to build out. You can enhance an existing foundation model, which is certainly much more reasonable, at an estimated cost of $1 to $10 million. Or you can fine-tune an existing foundation model that is out there — you can essentially take something such as ChatGPT and tune it for a specific task — and that's anywhere from $10,000 to $100,000 of estimated cost.
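To make the cheapest of those three options tangible: fine-tuning an existing foundation model typically starts with preparing your own task-specific examples. Many hosted fine-tuning services accept training data as JSONL, one chat example per line (the OpenAI-style chat format is shown here as one common convention); the example content below is hypothetical, and this sketch covers only the data-preparation step, not the fine-tuning job itself.

```python
import json
import os
import tempfile

# Hypothetical training examples in the common chat fine-tuning shape:
# each example is a list of system / user / assistant messages.
examples = [
    {"messages": [
        {"role": "system", "content": "You answer HEOR questions concisely."},
        {"role": "user", "content": "What does HEOR stand for?"},
        {"role": "assistant", "content": "Health Economics and Outcomes Research."},
    ]},
]

def write_training_file(examples, path):
    """Write one JSON object per line (JSONL), as fine-tuning APIs expect."""
    with open(path, "w", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")

path = os.path.join(tempfile.gettempdir(), "finetune_train.jsonl")
write_training_file(examples, path)
print(f"Wrote {len(examples)} training example(s) to {path}")
```

The $10,000–$100,000 range quoted above is largely a function of how many such examples you curate and how many tuning runs you pay for, which is why this option is so much cheaper than training a foundation model from scratch.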
Now we're going to jump into a little bit of how the public is reacting to generative AI. I'm always fascinated with this, because back in 2015 there was an open letter published by some very prominent individuals, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter." This was essentially the genesis of what we now know as OpenAI, the organization. Many of the prominent signatories were individuals like Stephen Hawking, Elon Musk, Steve Wozniak, Yoshua Bengio, Stuart Russell, and many, many others — I just highlighted these few individuals for a reason. Then, when you fast forward to where we are, in March of 2023 there was an open letter to pause all giant AI experiments, and again the prominent signatories were Elon Musk, Steve Wozniak, Yoshua Bengio, Stuart Russell — pretty much the very same individuals who were part of the push for AI. And I bring this up simply to say that there is a recognized concern about the potential of AI and what its impact is. Honestly, we still don't fully understand all the unintended consequences associated with the technology. Just from a corporate perspective, earlier this year you saw Elon Musk quoted saying there's a chance that AI goes wrong and destroys humanity. Apple's co-founder says AI will make it easier for bad actors to get away with more convincing scams. And then you have Warren Buffett, who actually said that it's like unleashing the atomic bomb — that is what he felt AI was analogous to. So there is some genuine concern from folks who have very big platforms about how they're thinking about generative AI. But you also saw a number of companies that actually banned employees from using ChatGPT at work.
As you think about the uptick in AI controversies from 2017 to 2021 — this is from the AI Index report, which is published annually — you can see from 2017 to 2021 a 440% increase in the number of AI incidents and controversies. And I'm sure that once the data is reported for 2023, that number will be significantly higher than it was in 2021. But there's also been an increase in the number of ethics-related publications pertaining to the use of AI, which I think is a good thing. Many of these ethics papers provide ethical frameworks for how we can think about the use of AI, while some express ethical concerns to bring greater awareness to the public about how AI is and can be used — or, as Elon Musk would say, its potential impact on humanity broadly. Neil deGrasse Tyson actually put out a tweet back in May saying: if artificial intelligence fouls up society, then how intelligent was it, really? I actually thought that quote was funny, but in many respects accurate. When you look at much of the data and many of the publications out there, there are many risks being identified, but there are six that really stood out to me as they pertain to generative AI. The first is shadow AI. Then you've got biased outputs. You have enhanced fraud and phishing. You have the lack of a truth function — meaning that's where the garbage in, garbage out comes in: most people, even as they start to build models, start to utilize the outputs of prior models.
So if those prior models actually had inaccurate data, then you have an even worse situation. Then you have copyright infringement, where many artists have expressed concern about their voices, music, or lyrics being used to create new music, all developed by AI. And then, of course, one of the big things for many companies is proprietary data leaks. Hence the reason so many of these companies, and even the US government, have actually taken action on AI risk. The Biden-Harris administration has convened a number of the leading companies driving much of what we're seeing in generative AI to provide input into what they now call the AI Bill of Rights. Much of it is based on five dimensions: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback. I would encourage you to go check it out and read it in detail — it's actually pretty fascinating. You'll see they have much of the meeting minutes and notes from the discussions available as well, and you can see how they got to this point. Let's jump into corporate oversight of AI. This is some data produced by Baker McKenzie, who did an AI survey to better understand the level of corporate oversight we're seeing in the US. When they did the survey, only 4% of respondents considered the risks associated with using AI to be significant — only 4%. 38% believe their company's board of directors is not fully aware of how AI is being used across the organization. 36% of respondents currently have a chief AI officer in place.
So only 36% of the companies represented in the survey. 24% of those respondents say their corporate policies for managing AI risks are undocumented or do not exist at all. So you have roughly a quarter of those companies still trying to find their way in developing policies around what the AI risks are and how they should be managed. Now, let's talk a little bit about not having C-level representation on AI. As I stated earlier, every company is a data company; therefore, every company needs to think about how AI impacts its business. So this particular question, as part of that survey, was: does your company have a dedicated chief AI officer? 36% of them said yes and 64% said no, but of that 64%, 43% of those companies plan to hire one in the next two years, 15% plan to hire in the future, and 6% have no plans to hire or build this function at all. And just so you know, there were 500 companies represented in this survey. They were then asked about bias and reputational issues as top risks of AI. More formally, the question was:
what are the biggest current AI-related risks to your organization? 69% said cybersecurity, 65% said data privacy, and 57% said legal liability. But it was fascinating to me that many of these companies overlooked reputation, recruitment, and algorithmic bias as top risks — they're not at the top of the concerns for many of these organizations, whereas I would have actually put organizational reputation, recruitment fairness, and things like that at the top if it were me. The next question was around which department, if any, is currently responsible for the oversight and management of AI-related or AI-enabled tools and technology at the enterprise level. As you would expect, 83% of those respondents felt it was an IT issue, and 70% felt it was an information security issue — a lot of that links back to the cybersecurity concerns. And then you have human resources, risk management, operations, and even legal being named as responsible for the oversight and management of AI tools. As you start to think about what type of org structure you should have for an AI function, there are many models being created. You've seen the federated or decentralized model, where you may have a team of folks embedded in a particular business unit, responsible for the data and AI strategy for that business unit. You have the centralized approach, where all AI capabilities are centralized into what we'll call a center of excellence. The great thing about that is the economies of scale: you have a concentrated group, and they typically learn from each other, whereas under the federated or decentralized model you can feel like you're on an island a little bit.
But then you have a hybrid model, where you may still have that center of excellence serving as a hub in a hub-and-spoke arrangement. The hub will set governance, set best practices, and also do much of your knowledge sharing, but you also get the benefit of having dedicated teams within those business units who know those particular business units and can provide better guidance in that way. That brings us now to our three-point game plan. I call this the GPT game plan — sort of a double entendre, but in my case it means governance, people, and technology. First, as you think about the governance of AI or generative AI, what you're really seeking to do is realize the gains by protecting your business with clear policies that address the critical risks. To achieve that, the first thing is that you must conduct a risk assessment to understand if and where any AI capabilities exist, are being used, or are even being considered. The second thing is to develop and disseminate governance policies — we highlighted in those survey results that many companies do not have these in place. There are many consulting firms, law firms, and others providing guidance on how you should be thinking about oversight and governance policies, and to be quite frank, not to have that capability or that level of understanding, even at the board of directors level, I would think is concerning. The second dimension of this game plan is people. This is where you realize the gains by utilizing strategic workforce planning and transforming your operating models. The reality is this: most people, and most organizations, really don't want to change.
But to utilize generative AI, or AI in general, more effectively, it's important that you establish a senior C-suite role — for example, the chief AI officer — so that you can elevate the importance and the dialogue to the C-suite and to the board, and signal to the organization how important it is to be organized around this particular technology. The second thing to think about is how you assemble a cross-functional team. It's not just data scientists; it's AI experts, ethics specialists, domain specialists in those particular business units, and your IT personnel as part of this group as well. It will take an interdisciplinary approach to effectively develop and execute a generative AI strategy. The third thing is to consider centralizing those functions, where you have your AI R&D function and IT supplying LLM and data engineers. The reason that is important: as we just discussed, you have three different approaches to building, or thinking about, your LLM strategy. You can certainly build one yourself; you can take one that's existing and customize it; or you can take one that's already there and gets you pretty much 80% of the way, and tinker with it for your specific needs. It's all about the time and the budget you have to dedicate to it. The last dimension is technology, where you realize the gains by investing in infrastructure and experimentation. That really requires building a culture of innovation and experimentation — you have to raise your own risk tolerance for that. The second thing is that you want to plan for long-term advantage through investment in talent and infrastructure.
The third thing is to start small and really focus on very targeted use cases. Then you want to consider which existing LLMs are out there and customize them with your own data sources, and you can create private LLMs so that you don't expose your proprietary data to the public. And the last thing is that you have to be in a sort of perpetual learning state, where you're continuously learning and staying up to date on emerging developments. So the key takeaways from this talk: number one, start small and focus on specific use cases. Number two, involve stakeholders from across the organization, because that's critically important. Invest in the right talent and infrastructure. Please maintain a focus on privacy, transparency, algorithmic bias, and ethics. Consider using existing LLMs and customizing them with your own data sources. And lastly, continue to learn and stay up to date on emerging developments. I'm going to leave you with this — my call to action as you think about building a culture of innovation and experimentation. Your first option is that you can just do nothing and say, "It is impossible." Or you can just do it, because nothing is impossible. Hopefully you take that as your mantra as you move forward. Best of luck to all of you, and I look forward to connecting with you and having further discussion. You can connect with me on LinkedIn, through my web page, wherever you choose — I'm always available. Thank you very much.