Artificial intelligence (AI) is cropping up everywhere in business, as organizations deploy the technology to gain insights into customers, markets and competitors, and to automate processes in almost every facet of operations.
But AI presents a wide range of hidden dangers for companies, especially in areas such as regulatory compliance, law, privacy and ethics. There is little visibility into how AI and machine learning technologies reach their conclusions when solving problems or addressing a need, leaving practitioners in a variety of industries flying blind into significant business risks.
The concerns are especially relevant for companies in industries such as healthcare and financial services, which have to comply with a number of government and industry regulations.
“Context, ethics, and data quality are issues that affect the value and reliability of AI, particularly in highly regulated industries,” says Dan Farris, co-chairman of the technology practice at law firm Fox Rothschild and a former software engineer who focuses his legal practice on technology, privacy, data security, and infrastructure matters. “Deploying AI in any highly regulated industry may create regulatory compliance problems.”
Risky AI business
Financial technology companies are investing heavily in AI, but the losses and/or administrative actions that might result are potentially catastrophic for financial services companies, Farris says. “If an algorithm malfunctions, or even functions properly but in the wrong context, for example, there is a risk of significant losses to a trading company or investors,” he says.
Healthcare also provides particularly compelling examples of where things can get troublesome with AI. “Recognition technology that can help identify patterns or even diagnose conditions in medical imaging, for example, is one way that AI is being deployed in the healthcare industry,” Farris says. “While image scanning may be more accurate when done by computers versus the human eye, it also tends to be a discrete task.”
Unlike a physician, who can draw on other contextual information about a patient, or on intuition developed over years of practice, AI and machine learning programs can produce results that are narrow and incomplete. “Reliance on such results without the benefit of medical judgment can actually cause worse patient outcomes,” Farris says.
And like humans, machines will make mistakes, “but they could be different from the kinds of mistakes humans make, such as those arising from fatigue, anger, emotion, or tunnel vision,” says Vasant Dhar, professor of information systems at New York University and an expert on AI and machine learning.
“So, what are the roles and responsibilities of humans and machines in the new world of AI, where machines make decisions and learn autonomously to get better?” Dhar says. “If you view AI as the ‘factory’ where outputs [or] decisions are learned and made based on the inputs, the role of humans is to design the factory so that it produces acceptable levels of costs associated with its errors.”
When machines learn to improve on their own, humans are responsible for ensuring the quality of this learning process, Dhar says. “We should not trust machines with decisions when the costs of error are too high,” he says.
The first question for regulators, Dhar says, is whether state-of-the-art AI systems, regardless of application domain, result in acceptable error costs. For example, transportation regulators might determine that because autonomous vehicles would save 20,000 lives a year, the technology is worthwhile for society. “But for insurance markets to emerge, we might need to consider regulation that would cap damages for errors,” he says.
In the healthcare arena, the regulatory challenges will depend on the application. Certain procedures, such as cataract surgery, are already performed by machines that tend to outperform humans, Dhar says, and recent studies are finding that machines can similarly outperform radiologists and pathologists.
“But machines will still make mistakes, and the costs of these need to be accounted for in making the decision to deploy AI,” Dhar says. “It is largely an expected value calculation, but with a stress on ‘worst case’ as opposed to average case outcomes.”
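To make Dhar’s framing concrete, the decision to automate can be sketched as a simple cost comparison: the expected annual cost of the machine’s errors versus the single worst loss it could cause. The short Python sketch below illustrates the idea; the error rates, dollar figures and budget thresholds are invented for the example, not drawn from any real deployment.

```python
# Hypothetical sketch of weighing expected error cost against the worst case
# before trusting a machine with a decision. All figures are invented.

def expected_error_cost(error_rate, avg_cost_per_error, decisions_per_year):
    """Average annual cost if errors occur at the given rate."""
    return error_rate * avg_cost_per_error * decisions_per_year

def safe_to_automate(error_rate, avg_cost, worst_single_loss, volume,
                     expected_budget, worst_case_budget):
    """Automate only if both the average case and the worst case are tolerable."""
    expected = expected_error_cost(error_rate, avg_cost, volume)
    return expected <= expected_budget and worst_single_loss <= worst_case_budget

# A diagnostic-imaging model that is wrong 2 percent of the time:
print(safe_to_automate(error_rate=0.02, avg_cost=5_000,
                       worst_single_loss=2_000_000, volume=10_000,
                       expected_budget=1_500_000, worst_case_budget=500_000))
# False: the expected cost is acceptable, but the worst case is not
```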
In the future, as machines get better through access to genomic and fine-grained individual data and are capable of making decisions on their own, “we would similarly need to consider what kinds of mistakes they make and their consequences in order to design the appropriate regulation,” Dhar says.
Legal issues to consider
In addition to regulatory considerations, there are legal ramifications for the use of AI.
“The main issue is who will be held responsible if the machine reaches the ‘wrong’ conclusion or recommends a course of action that proves harmful,” says Matt Scherer, an associate with the international labor and employment law firm Littler Mendelson P.C., where he is a member of the robotics, AI, and automation industry group.
For example, in the case of a healthcare-related issue, is it the doctor or healthcare center that’s using the technology, or the designer or programmer of the application, who’s responsible? “What if the patient specifically requests that the AI system determine the course of treatment?” Scherer says. “To me, the biggest fear is that humans tend to believe that machines are inherently better at making decisions than humans, and will blindly trust the decision of an AI system that is specifically designed for the purpose.”
Someone at the organization using AI will need to take accountability, says Duc Chu, technology innovation officer at law firm Holland & Hart. “The first issues that come to mind when artificial intelligence or machine learning reach conclusions and make decisions are evidence, authentication, attestation, and responsibility,” he says.
In the financial industry, for instance, if an organization uses AI to help pull together information for financial reports, a human is required to sign and attest that the information presented is accurate and what it purports to be, and that there are appropriate controls in place, operating effectively, to ensure the information is reliable, Chu says.
“We then know who the human is who makes that statement and that they are the person authorized to do so,” Chu says. “In the healthcare arena, a provider may use [AI] to analyze a list of symptoms against known diseases and trends to assist in diagnosis and to develop a treatment plan. In both cases, a human makes the final decision, signs off on the final answer, and most importantly, is responsible for the ramifications of a mistake.”
Because AI systems, and neural networks in particular, are not predictable, “it raises significant challenges to traditional [tort law], because it is difficult to link cause and effect in a traditional sense, since many AI programs do not permit a third party to determine how the conclusion is used,” says Mark Radcliffe, a partner at DLA Piper, a global law firm that specializes in helping clients understand the impact of emerging and disruptive technologies.
“The traditional tort theory requires ‘proximate cause’ for liability,” Radcliffe says. “The tort ‘negligence’ regime applies a reasonable man standard, which is very unclear in the context of software design. Another issue is whether the AI algorithms introduce ‘bias’ into results based on the programming.”
Best practices for safe AI
Organizations can do a number of things to guard against the legal and compliance risks related to AI.
One key requirement is to have a thorough grasp of how machines make their decisions. That means understanding that legislatures, courts and juries are likely to frown on the creation and deployment of systems whose decisions cannot properly be understood even by their designers, Scherer says.
“I tend to think that the black box issue can be addressed by making sure that systems are extensively tested before deployment, as we do with other technologies, such as certain pharmaceuticals, that we don't fully understand,” Scherer says. “I think in practice, and on a macro-scale, it will be a process of trial and error. We will figure out over time which decisions are better left to humans and which are better made by computers.”
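One way to read Scherer’s point about extensive testing is as a formal acceptance gate: before a model touches production, it must clear explicit performance thresholds on data it has never seen. The sketch below shows the idea in Python for a binary classifier; the 95 percent accuracy and 90 percent recall thresholds are hypothetical placeholders that, in practice, would come from regulators or internal risk policy.

```python
# A minimal pre-deployment gate for a binary classifier, assuming predictions
# on a held-out test set are already available. Thresholds are hypothetical.

def passes_acceptance_test(y_true, y_pred, min_accuracy=0.95, min_recall=0.90):
    """Return True only if the model clears both thresholds on held-out data."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)

    positives = [i for i, t in enumerate(y_true) if t == 1]
    found = sum(1 for i in positives if y_pred[i] == 1)
    recall = found / len(positives) if positives else 1.0

    return accuracy >= min_accuracy and recall >= min_recall

# Example: a toy screening model that misses one positive case.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
print(passes_acceptance_test(y_true, y_pred))
# False: accuracy is 0.875 and recall is 0.75, both below the thresholds
```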
Companies need to consider whether they can design a system to “track” the reasoning at a level which would satisfy regulators and legal thresholds, Radcliffe says. “Regulators may encourage such transparency through rules of liability and other approaches,” he says.
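What that kind of tracking could look like in practice is still an open question, but one common pattern is to record, for every automated decision, the model version, the inputs it saw, the output it produced and the factors that weighed most heavily, so a reviewer can reconstruct the decision later. The Python sketch below is a hypothetical audit-record structure, not a regulatory standard; the field names and the loan-screening example are invented.

```python
# Hypothetical audit record for a single automated decision, so a reviewer or
# regulator can later reconstruct what the model saw and why it decided as it
# did. The structure and field names are illustrative only.

import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    model_name: str
    model_version: str
    inputs: dict          # the features the model actually received
    output: str           # the decision or recommendation produced
    top_factors: list     # e.g., feature importances or rules that fired
    human_reviewer: Optional[str] = None   # who signed off, if anyone
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: DecisionRecord, path: str = "decisions.log") -> None:
    """Append the record as one JSON line to an audit log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    model_name="loan_screening", model_version="2024.1",
    inputs={"income": 52000, "debt_ratio": 0.31},
    output="refer_to_underwriter",
    top_factors=["debt_ratio", "payment_history"],
    human_reviewer="j.doe"))
```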
Enterprises should also participate in rulemaking with the relevant regulatory agencies that are developing the rules governing their operations, to help ensure those rules are realistic. “Government agencies cannot make practical rules without real-world input from the industry,” Radcliffe says.
Involvement should also extend to participation in industry organizations that are developing industry-specific rules for AI. “Government will be reactive and may make impractical rules based on a single incident or series of incidents,” Radcliffe says. “Companies need to work with industry organizations and government regulatory agencies to avoid those knee-jerk responses.”
Organizations should also be well versed in knowing when it’s safest to rely on AI conclusions vs. human decision making when liabilities are a factor. “This concern will vary by industry and even within industries,” Radcliffe says.
For example, the use of AI by internists to assist in patient diagnosis is a much lower risk than the use of AI in powering robotic surgeons, Radcliffe says. “For high-risk activities such as medicine, which also has a mature legal regulatory regime, companies will need to work with regulators to update the rules to apply to this new approach.”
In addition, companies need to consider how to allocate liability between themselves and their customers and business partners for the use of AI.
“For example, if a company develops an AI application for a bank, the parties would consider who would be liable if the AI program creates regulatory problems, such as ‘redlining,’ or makes mistakes [such as miscalculating payments], since such issues have no precedent and the parties can design the allocation of liability between them in the contract,” Radcliffe says.
AI is improving, particularly in the area of neural networks, but companies that want to control regulatory and legal risk should continue to rely on AI as one factor among many within a human decision-making process, Farris says.
“AI is a tool, one that humans control and must continue to deploy in thoughtful ways,” Farris says. “Companies that want to successfully deploy AI should first invest in data sets and data quality. Evaluating the quality and fitness of data for AI applications is an important first step. Parsing the tasks and decisions that are ripe for machine learning, and those which continue to require human input, is the next major hurdle.”
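Farris’s first step, evaluating the quality and fitness of data, can begin with something as simple as profiling a data set for missing values, duplicates and stale records before any model is trained on it. The Python sketch below (using pandas) shows one such check; the column names and thresholds are invented for illustration and would need to be set per application.

```python
# A minimal data-fitness check: profile a data set for missing values,
# duplicates, and recency before using it to train or drive an AI application.
# Column names and thresholds are invented for illustration.

import pandas as pd

def data_quality_report(df, date_column=None, max_missing=0.05, max_duplicates=0.01):
    report = {
        "rows": len(df),
        "missing_share_by_column": df.isna().mean().to_dict(),
        "duplicate_share": float(df.duplicated().mean()),
    }
    report["fit_for_use"] = (
        max(report["missing_share_by_column"].values(), default=0.0) <= max_missing
        and report["duplicate_share"] <= max_duplicates
    )
    if date_column is not None:
        report["newest_record"] = str(df[date_column].max())
    return report

# A toy patient data set with one missing reading.
df = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "blood_pressure": [120, None, 135, 128],
    "visit_date": pd.to_datetime(["2024-01-03", "2024-02-11",
                                  "2024-02-11", "2024-03-20"]),
})
print(data_quality_report(df, date_column="visit_date"))
# fit_for_use is False: 25 percent of blood_pressure values are missing
```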