Do big data algorithms treat people differently based on characteristics like race, religion, and gender? Cathy O'Neil in her new book Weapons of Math Destruction and Frank Pasquale in The Black Box Society both look closely and critically at concerns over discrimination, the challenge of knowing whether algorithms are treating people unfairly, and the role of public policy in addressing these questions.

Tech leaders must take the debate over data usage seriously -- both because discrimination in any form has to be addressed, and because a failure to do so could invite misguided measures such as mandated disclosure of algorithmic source code.

What's not in question is that the benefits of the latest computational tools are all around us. Machine learning helps doctors diagnose cancer; speech recognition software simplifies our everyday interactions and helps those with disabilities; educational software improves learning and prepares children for the challenges of a global economy; and new analytics and data sources are extending credit to previously excluded groups. Autonomous cars promise to reduce accidents by 90 percent.

Jason Furman, the Chairman of the Council of Economic Advisers, got it right when he said in a recent speech that his biggest worry about artificial intelligence is that we do not have enough of it.

Of course, any technology, new or old, can further illegal or harmful activities, and the latest computational tools are no exception.
But, by the same token, there is no exception for big data analysis in the existing laws that protect consumers and citizens from harm and discrimination.

The Fair Credit Reporting Act protects the public against the use of inaccurate or incomplete information in decisions regarding credit, employment, and insurance. While passed in the 1970s, this law has been effectively applied to business ventures that use advanced techniques of data analysis, including the scraping of personal data from social media to create profiles of job applicants.

Further, no enterprise can legally use computational techniques to evade statutory prohibitions against discrimination on the basis of race, color, religion, gender, and national origin in employment, credit, and housing. In a 2014 report on big data, the Obama Administration emphasized this point and told regulatory agencies to "identify practices and outcomes facilitated by big data analytics that have a discriminatory impact on protected classes, and develop a plan for investigating and resolving violations...."

Even with these legal protections, there is a push for greater public scrutiny -- including a call for public disclosure of all source code used in decision-making algorithms. Full algorithmic transparency would be harmful. It would reveal selection criteria in areas such as tax audits and terrorist screening that must be kept opaque to prevent people from gaming the system. And by allowing business competitors to use a company's proprietary algorithms, it would reduce incentives to create better algorithms.

Moreover, it would not actually contribute to the responsible use of algorithms. Source code is understandable only by experts, and even for them it is hard to say definitively what a program will do based solely on its source code.
This is especially true of many of today's programs, which update themselves frequently as they ingest new data.

To respond to public concern about algorithmic fairness, businesses, government, academics, and public interest groups need to come together to establish a clear operational framework for the responsible use of big data analytics. Current rules already require some validation of the predictive accuracy of statistical models used in credit, housing, and employment. But technology industry leaders can and should do more.

FTC Commissioner McSweeny has the right idea with her call for a framework of "responsibility by design." This would incorporate fairness into algorithms by testing them -- at the development stage -- for potential bias. It should be supplemented by audits after the fact to ensure that algorithms are not only properly designed but properly operated.

An important part of this cooperative stakeholder effort would be deciding which areas of economic life to include -- starting with sensitive areas like credit, housing, and employment. The framework would also need to address the proper roles of the public, developers and users of algorithms, regulators, independent researchers, and subject matter experts, including ethics experts.

It is important to begin developing this framework now, and to ensure that the uses of the new technology are, and are perceived to be, fair to all. The public must be confident in the fairness of algorithms, or a backlash will threaten their very real and substantial benefits.
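To make the idea of development-stage bias testing concrete, here is a minimal sketch of one common check: comparing a model's selection rates across demographic groups against the "four-fifths rule" used in US employment-discrimination guidance. The group labels, data, and threshold application below are purely illustrative assumptions, not a description of any actual company's or regulator's audit procedure.

```python
def selection_rates(decisions):
    """Fraction of positive outcomes per group.

    decisions: list of (group, approved) pairs, with approved in {0, 1}.
    """
    totals, positives = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + approved
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest.

    By the conventional four-fifths rule, a ratio below 0.8 is a
    red flag warranting closer review -- not proof of illegality.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Illustrative audit data: group A approved 8 of 10, group B 4 of 10.
sample = ([("A", 1)] * 8 + [("A", 0)] * 2 +
          [("B", 1)] * 4 + [("B", 0)] * 6)
ratio = disparate_impact_ratio(sample)
print(round(ratio, 2))  # 0.5 -- well below the 0.8 threshold
```

A real audit would go much further -- controlling for legitimate predictive factors, testing statistical significance, and repeating the check after deployment -- but even a screen this simple shows that bias testing can be done without disclosing any source code, which is the op-ed's central point.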