At the heart of AI systems are statistical models that have no concept of social inequality, fairness, or hardship. In her book Weapons of Math Destruction (WMD), Cathy O’Neil points out that big data discriminates at nearly every juncture of our society, pummeling the poor at each opportunity. How is this happening? Her book points to many avenues of data misuse, but the most offensive is the use of proxies. Proxies are statistical correlations: data designed for one purpose but repurposed for the sake of economics or convenience. There are a number of examples of this; the most profound is the use of your FICO score as a proxy.
Garbage in, garbage out
The FICO score was initially designed to evaluate the risk that an individual would default on a bank loan. However, the FICO score is now being used for everything from evaluating your risk for automobile insurance to hiring practices, and even for finding the perfect mate. This is all fine if you have a high FICO score. However, will a hiring manager dig deeper to understand the circumstances behind your less-than-stratospheric score, or simply discard your application? How about selecting a mate? Women are more often the victims of abusive relationships and bear the financial burden of deadbeat dads who default on child support payments. How will this be taken into consideration when it comes to dating?
Another example of a proxy is your zip code. On most applications or online purchase forms, it is illegal to ask for your race. There are exceptions where the government wants to capture statistics to ensure minorities are getting equal access to services such as home mortgages. However, by using your zip code, an AI system can infer your race, your religion, or your economic standing in your community from where you live. This is a very powerful proxy, and it is used more often than not for predatory advertising.
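The mechanics of a zip-code proxy are simple enough to sketch in a few lines. The following is a minimal, hypothetical illustration with entirely synthetic data: even though the protected attribute is never given to the model as an input, a lookup keyed on zip code recovers it through correlation alone.

```python
# Hypothetical sketch of a zip-code proxy. All records below are
# synthetic and invented purely for illustration.
from collections import Counter, defaultdict

# Synthetic applicant records: (zip_code, protected_group)
records = [
    ("10001", "A"), ("10001", "A"), ("10001", "B"),
    ("20002", "B"), ("20002", "B"), ("20002", "B"),
    ("30003", "A"), ("30003", "A"), ("30003", "A"),
]

# "Train": tally which group is most common in each zip code.
by_zip = defaultdict(Counter)
for zip_code, group in records:
    by_zip[zip_code][group] += 1

def infer_group(zip_code):
    """Predict the protected attribute from the zip code alone."""
    return by_zip[zip_code].most_common(1)[0][0]

# The protected attribute was never a direct input, yet the
# zip-code proxy recovers it for most applicants.
correct = sum(infer_group(z) == g for z, g in records)
print(f"{correct}/{len(records)} recovered via the proxy")  # prints "8/9 recovered via the proxy"
```

In a real system the "model" would be a far more elaborate statistical fit, but the effect is the same: any feature strongly correlated with a protected attribute smuggles that attribute back into the decision.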
Mitigating risks at all costs
Will an AI system take a person’s socioeconomic condition into consideration as part of using FICO scores or zip codes in its statistical model? If there are no laws or regulations, why would any company incur this overhead in building their AI system? It defeats the whole purpose of developing quick, cheap solutions to weed out bad candidates.
AI is providing a powerful way for companies to evaluate risk and cut unnecessary expenses. Employers are incentivizing their employees to participate in health risk assessments and wellness programs with cheaper insurance premiums. The author points out that employees who want to keep their personal health data private are being punished with higher insurance premiums. How long will it be before your employer requests that you submit to a genome analysis, and at what cost are you willing to keep that information private?
Most AI systems’ statistical models lack any transparency. Addressing a bad AI system through litigation is a feeble solution for the poor and middle classes. This approach can be long and arduous, causing catastrophic and irreversible damage to those in society who have the fewest resources to pursue such grievances.
This is a very dangerous situation. One could argue that accountants, lawyers, doctors, engineers, etc. all have to pass some type of certified examination to determine their competency to practice their profession. There needs to be some comprehensive examination to determine that an AI system is well designed and will not inflict social harm. After all, “Would you like to play global thermonuclear war?”
Mark Skilton, a professor at the Warwick Business School in the UK, proposes an artificial intelligence test that goes beyond the imitation test pioneered by mathematician Alan Turing. He envisions a future integration of perception, action, language, and cognition capabilities into a holistic system that can process general unstructured data. Such a system would need to interact with free-form objects and interpret and infer its way to higher forms of contextual and informational awareness.
Finding political will to fix the problem
In our society, where money buys influence and our current political orientation is laissez-faire toward business, the middle class is quickly becoming politically disenfranchised. The U.S. Constitution provides Americans the presumption of innocence, the right of free speech, and the right to own guns, but provides no guarantees of protection from malevolent AI systems.
In Europe, however, the European Commission is proposing reform to the European Union’s data protection framework, known as the EU Data Protection Directive (Directive 95/46/EC). The directive’s intention is to respect the rights of privacy in personal and family life, as well as in the home and in personal correspondence.
It is designed to protect all personal data collected for or about citizens of the EU, especially as it relates to processing, using, or exchanging such data. The directive is founded on seven principles, with two particular clauses providing concrete protection. The consent clause states: “Personal data should not be disclosed or shared with third parties without consent from its subject(s),” while the accountability clause closes loopholes with: “Subjects should be able to hold personal data collectors accountable for adhering to all seven of these principles.”
One can argue that we have already entered an “Orwellian” state. We are witnessing the implications of WMDs in attitudes that accept policies driven by propaganda, misinformation, and the denial of truth. If we cannot muster the will to rein in WMDs, then we have begun the deconstruction of our most cherished democratic ideal: a free and open society.