by Mark MacCarthy

Ethical principles for algorithms

Opinion
Oct 13, 2017
Analytics | Artificial Intelligence | IT Governance

As big data analytics continues to transform the economic and social landscape, is it time to ask questions about the ethical nature of the algorithms employed by various organizations?


Over the summer, a software engineer was sentenced to 40 months in prison for his role in helping Volkswagen evade pollution control rules. Earlier this month, the Pew Research Center reported that 85% of the public would support regulation to restrict the use of AI by businesses and organizations. These stories suggest powerful reasons for tech companies to do the right thing. But beyond the fear of legal action or adverse public opinion, companies are also well aware of the ethical implications of the AI algorithms they create.

Tech companies are actively seeking principles to guide them through these ethical challenges. Last year, Google, Facebook, Microsoft, and IBM set up the Partnership on AI, which is dedicated to promoting AI for social good and to addressing ethical issues in AI. In October, DeepMind set up a research initiative on Ethics and Society to ensure that the development and use of AI are “held to the highest ethical standards.”

Such industry developments are based on the realization that algorithms are not just neutral tools without any intrinsic ethical character.

When an algorithm is designed to accomplish a specific purpose, its ethical character depends on the evaluation of that purpose. For example, is it right to build facial analysis that purports to identify sexual orientation, autonomous weapons systems, subprime credit models, sex robots, or targeted advertising designed to exploit the psychological weaknesses of vulnerable populations? The answers to these questions are all part of an algorithm’s character. Since the overwhelming likelihood is that an algorithm will be put to some specific purpose, it cannot avoid having an ethical character.

Ethical issues can also be an unavoidable part of an algorithmic model’s construction. Some researchers note that choosing among several competing algorithms requires taking ethical concerns into account. For instance, setting the threshold at which a cell counts as diseased depends on the relative value placed on avoiding false positives versus false negatives. Other researchers restrict the functional form of an algorithm so that it can be easily explained to those affected by it. The algorithm itself then embodies the value judgment that some sacrifice of accuracy is worth the gain in intelligibility.
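
That trade-off can be made concrete with a toy example. The sketch below, in Python with entirely hypothetical scores, labels, and threshold values, shows how moving a single classification threshold shifts errors from missed diagnoses to false alarms; deciding which mix of errors is acceptable is a value judgment, not a purely statistical one.

```python
# Minimal illustration (hypothetical data): the choice of classification
# threshold trades false positives against false negatives.

def classify(scores, threshold):
    """Label a cell 'diseased' when its predicted probability meets the threshold."""
    return [score >= threshold for score in scores]

def error_counts(predictions, actual):
    """Count false positives (healthy cells flagged) and false negatives (diseased cells missed)."""
    fp = sum(1 for p, a in zip(predictions, actual) if p and not a)
    fn = sum(1 for p, a in zip(predictions, actual) if not p and a)
    return fp, fn

# Hypothetical model scores and ground-truth labels for ten cells.
scores = [0.15, 0.35, 0.45, 0.55, 0.60, 0.70, 0.80, 0.85, 0.90, 0.95]
actual = [False, False, False, True, False, True, True, True, True, True]

for threshold in (0.4, 0.6, 0.8):
    fp, fn = error_counts(classify(scores, threshold), actual)
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```

Lowering the threshold misses fewer diseased cells but wrongly flags more healthy ones; raising it does the reverse. Nothing in the data itself says which balance is right.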

To address disparate impact concerns, standards of fairness can be built into algorithms that guide eligibility decisions in areas like credit, insurance, employment, school admissions, and parole, but the standards can be in conflict. An algorithm that maximizes accuracy regardless of statistical parity embodies the moral principle that fairness is accuracy in classification. An algorithm that has been adjusted to achieve statistical parity embodies the normative judgment that some loss of accuracy is needed to avoid further subordination of historically disadvantaged groups. Only ethical reflection can determine which of these fairness standards to embed in the algorithm.
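
The conflict between these standards is also easy to see in miniature. The sketch below, again in Python with hypothetical scores, groups, and thresholds, contrasts a single score cut-off with group-specific cut-offs tuned to equalize approval rates; the same data supports either rule, and only ethical reflection decides between them.

```python
# Minimal illustration (hypothetical data): two fairness standards for an
# eligibility decision -- a single accuracy-oriented threshold versus
# group-specific thresholds adjusted to achieve statistical parity.

from dataclasses import dataclass

@dataclass
class Applicant:
    score: float  # model-predicted probability of repayment (hypothetical)
    group: str    # protected-group membership, "A" or "B"

applicants = [
    Applicant(0.9, "A"), Applicant(0.8, "A"), Applicant(0.7, "A"), Applicant(0.4, "A"),
    Applicant(0.7, "B"), Applicant(0.5, "B"), Applicant(0.4, "B"), Applicant(0.3, "B"),
]

def approval_rates(decisions):
    """Share of each group approved under a given set of yes/no decisions."""
    rates = {}
    for grp in sorted({a.group for a in applicants}):
        outcomes = [d for a, d in zip(applicants, decisions) if a.group == grp]
        rates[grp] = sum(outcomes) / len(outcomes)
    return rates

# Rule 1: one threshold for everyone -- fairness as equal treatment of equal scores.
single_threshold = [a.score >= 0.6 for a in applicants]

# Rule 2: per-group thresholds chosen so that approval rates match -- statistical
# parity, accepted at some cost in score-based accuracy.
parity_thresholds = {"A": 0.7, "B": 0.4}
parity_adjusted = [a.score >= parity_thresholds[a.group] for a in applicants]

print("single threshold:", approval_rates(single_threshold))  # {'A': 0.75, 'B': 0.25}
print("parity adjusted: ", approval_rates(parity_adjusted))   # {'A': 0.75, 'B': 0.75}
```

In this toy data, the single threshold approves 75% of group A but only 25% of group B; the adjusted rule approves 75% of both, but only by holding applicants from different groups to different score cut-offs. Choosing between the two rules is not a technical question.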

So how should organizations approach the ethical evaluation of the algorithms they develop and use?  SIIA recently released an issue brief reviewing principles for ethical data use, ranging from the Belmont principles that guide research on human subjects to the UN principles on business and human rights. It recommends that organizations consider principles based on rights, welfare, justice and virtue.

Organizations should limit their data practices to those that respect internationally recognized principles of human rights. This requires that they respect the equal dignity and autonomy of individuals by ensuring that their algorithms conform to the rights to life, privacy, religion, property, freedom of thought, and due process of law.

Organizations should also aim to achieve social justice in their development and use of algorithms. This means favoring algorithms whose benefits can be equitably distributed and avoiding algorithms that disproportionately disadvantage vulnerable groups. Organizations should not be indifferent to how the models they develop are used, by whom they are used, and how the benefits of their new analytical services are distributed.

Organizations should aim to create algorithms that provide the greatest possible benefit to people around the world. This means creating and using algorithms that increase human welfare through improvements in health care, workplace opportunities, insurance, credit granting, authentication, fraud prevention, marketing, personalized learning, recommendation engines, and online advertising, to name just a few. In short, companies have a responsibility to use algorithms for social good.

Finally, organizations should engage in data practices that encourage virtues that contribute to human flourishing. They should design and implement algorithms that enable affected people to develop and maintain virtuous character traits such as honesty, courage, moderation, self-control, humility, empathy, civility, care, and patience.  For example, would the use of robot caregivers reduce the need for the virtue of personally caring for the needs of loved ones?  Would the use of automated weapons systems reduce the need for human courage?

Following these principles is not a recipe that can be followed automatically. Ethical judgments will be needed as companies struggle to do the right thing in specific contexts. Not everyone will agree all the time with these ethical judgments, but together these principles provide general guides to the development of ethical data practices.
