Machine learning algorithms are everywhere. It is not just Facebook and Google. Companies are using them to provide personalized education services and advanced business intelligence services, to fight cancer and to detect counterfeit goods. From farming to pharmaceuticals. From AI-controlled autonomous vehicles to clinical decision support software.
The technology will make us collectively wealthier and more capable of providing for human welfare, human rights, human justice and the fostering of the virtues we need to live well in communities. We should welcome it and do all that we can to promote it.
As with any new technology, there are ethical challenges. Will the new technologies be fair and transparent? Will the benefits be distributed to all? Will they reinforce existing inequalities?
Organizations that develop and use AI systems need ethical principles to guide them through the challenges that are already upon us and those that lie ahead.
Last year, my trade association, the Software & Information Industry Association (SIIA), released an issue brief on Ethical Principles for AI and Data Analytics that addresses these challenges. It draws on the classical ethical traditions of rights, welfare, and virtue to enable organizations to examine their data practices carefully.
Companies need to recover their ability to think in ethical terms in business, particularly in their institutional decisions regarding the collection and use of information. These principles are a practical, actionable guide.
SIIA is not the only entity seeking to bring ethical considerations into the world of AI and data analysis. The computer science group Fairness, Accountability and Transparency in Machine Learning (FAT/ML) has drafted its own principles. Another group of computer scientists meeting at Asilomar drafted broader principles.
IEEE has proposed principles relating to ethical values in design. ACM recently released a set of principles designed to ensure fairness in the use of AI algorithms. And the Information Accountability Foundation has formulated a very useful set of principles in its report on Artificial Intelligence, Ethics and Enhanced Data Stewardship.
These efforts on AI ethics are also intergovernmental in character.
Some of the different ethical approaches to AI were aired at a session of the October 2017 OECD conference AI: Intelligent Machines, Smart Policies. The need for ethical rules for AI was raised by the Japanese at the 2016 G7 meeting and by the Italians at the 2017 G7 meeting. The most recent G7 meeting concluded on March 28, 2018 with a Statement on Artificial Intelligence encouraging research “examining ethical considerations of AI.” The U.S. Administration stepped into the field with its recent announcement that it is “working with our allies” to “promote trust in” artificial intelligence technologies.
In its recently released Communication on Artificial Intelligence for Europe, the European Commission proposes to develop “AI ethics guidelines” within the AI Alliance that “build on” the statement published by the European Group on Ethics in Science and New Technologies.
These are all positive developments. But a couple of cautions are needed. Abstract ethical statements will get us only so far. Actionable ethical principles need to consider how AI is used in a particular context. The ethical issues involved in autonomous weapons, for instance, are very different from the ethical issues involved in the use of AI for recidivism scores or employment screening.
That’s why SIIA provided specific recommendations on how to apply the general principles of rights, justice, welfare and virtue to the specific case of ensuring algorithmic fairness through the use of disparate impact analyses.
In addition, there are no special ethical principles that apply uniquely to AI but not to other modes of data analysis and prediction. The ethical demands to respect rights, promote welfare and cultivate human virtues need to be applied and interpreted in the development and implementation of AI applications, and there is plenty of hard conceptual and empirical work needed to do this properly. But that is not the same as seeking out unique normative guidelines for AI.
Some, such as Elon Musk, have suggested going beyond ethical standards to a regulatory response.
There’s a place for some of that – in specific areas where problems are urgent and must be addressed in order to deploy the technology at all. Think of the need to understand liability for autonomous cars or to set a regulatory framework at the Food and Drug Administration for clinical decision support systems.
But just as there are no special ethical principles for AI, there need not be any special regulations or laws applying to AI as such. AI encompasses an indefinitely large range of analytical techniques; it is not a substantive enterprise at all. A general AI regulation implemented by a national agency would be like having a regulatory agency for statistical analysis!
The 2016 report of the One Hundred Year Study on Artificial Intelligence concluded that “attempts to regulate ‘AI’ in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains.”
This does not mean AI is, or should be, unregulated.
Current law and regulation still apply. There’s no get-out-of-jail-free card for using AI; it is not a defense for violating the law. Companies cannot escape liability under the fair lending or fair housing laws, for example, by explaining that they were using AI technology to discriminate.
Regardless of the state of regulation, organizations need guidance to adapt to the many ethical challenges they will face in bringing this technology to fruition. The principles of beneficence, respect for persons, justice and the fostering of virtues can provide a roadmap and some important guardrails for AI and advanced data analytics.