
CIOs Have to Learn the New Math of Analytics

Today's data-driven business runs on the almighty algorithm. But if you're not careful, those geeky formulas can stir up legal and ethical trouble.


In another scenario, a company could open itself up to discrimination claims if it amasses too much data and too many insights about its employees, Pasquale says. Someone might be able to prove the company knew about, say, a health condition before letting him go.

Or if a car insurance company discovers there's a higher chance a customer will get into a crash after driving a certain number of miles, it may find itself in a "duty to warn" situation, Pasquale says. That's when a party is legally obligated to warn others of a potential hazard that they otherwise couldn't know about. It usually applies to manufacturers in product liability cases, or to mental health professionals in situations involving dangerous patients. And as the use of revelation-producing algorithms spreads, Pasquale says, people in other sectors could be subject to a similar standard--at least ethically, if not legally.

"At what point will things be a liability for you by knowing too much about your customers?" he asks.

Sometimes companies don't set out to uncover uncomfortable truths. They just happen upon them.

Insurance company executives, for example, should think carefully about results that could emerge from algorithms that help with policy decisions, says Croll, the consultant and author. That's true even when a formula looks at metadata -- descriptions of customer data, not the data itself. For example, an algorithm could find that families of customers who had changed their first names were more likely to file claims for suicide, he speculates. Further analysis could conclude that it is likely those customers were transgender people who couldn't cope with their changes.

An algorithm that identified that pattern would have uncovered a financially valuable piece of information. But if the insurer then acted on it -- turning down applicants who had changed their first names, or charging them higher premiums -- the company could appear guilty of discrimination, Croll says.

The CIO's Best Role

The best way a CIO can support data science is to choose technologies and processes that keep data clean, current and available, says Chris Pouliot, vice president of data science at Lyft, a competitor of Uber. Before joining Lyft in 2013, Pouliot was director of algorithms and analytics at Netflix for five years and a statistician at Google.

CIOs should also create systems to monitor changes in how data is handled or defined that could throw off an algorithm, he says. Another key: CIOs should understand how best to use algorithms, even if they can't build the algorithms themselves.
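One concrete form that monitoring can take is a statistical check that compares incoming data against a recent baseline and raises an alert when a feature's distribution shifts. The sketch below is illustrative rather than anything Pouliot prescribes: the population stability index (PSI) is one widely used drift measure, and the data, bin count and threshold are all invented for the example.

```python
# A minimal drift check: compare a feature's current distribution
# against a baseline using the population stability index (PSI).
# All data, bin counts and thresholds here are illustrative.
import math
import random

def psi(baseline, current, bins=10):
    """Population stability index between two samples of one feature."""
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Smooth empty bins so the log term below stays defined.
        return [max(c, 0.5) / len(sample) for c in counts]

    b, c = proportions(baseline), proportions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

random.seed(0)
last_month = [random.gauss(100, 15) for _ in range(1000)]  # baseline window
today = [random.gauss(110, 15) for _ in range(1000)]       # an upstream change shifted the mean

# A common rule of thumb treats PSI above ~0.2 as a shift big enough
# to throw off a model that was trained on the baseline.
if psi(last_month, today) > 0.2:
    print("Feature drift detected -- review the algorithm's inputs")
```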

Pouliot offers an example of that second point: a payment service that needs to figure out whether pending transactions could be fraudulent might hard-code the algorithm into its payment software, scoring each transaction before it's authorized. Or the algorithm could run offline, with the results applied after a transaction goes through -- too late to stop that payment, but in time to block future ones. The CIO has to understand enough about what the service is and how the algorithm works to make such decisions, he says.
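Here is a minimal sketch of those two deployment paths, with every name, score and threshold invented for illustration: the in-line path scores a payment before authorizing it, while the offline path scores settled payments in a batch job and can affect only the account's future activity.

```python
# Sketch of the two deployment choices described above; all names
# and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Transaction:
    account: str
    amount: float

def fraud_score(txn: Transaction) -> float:
    # Stand-in for a real model: treat unusually large payments as risky.
    return min(txn.amount / 10_000, 1.0)

BLOCKED_ACCOUNTS: set[str] = set()

def authorize(txn: Transaction) -> bool:
    """In-line path: score the payment before any money moves."""
    if txn.account in BLOCKED_ACCOUNTS:
        return False
    return fraud_score(txn) < 0.8

def nightly_review(settled: list[Transaction]) -> None:
    """Offline path: score settled payments; block accounts going forward."""
    for txn in settled:
        if fraud_score(txn) >= 0.8:
            BLOCKED_ACCOUNTS.add(txn.account)

# The in-line check stops a risky payment outright ...
print(authorize(Transaction("acct-1", 9_500)))   # False
# ... while the offline path lets it settle, then blocks the account.
nightly_review([Transaction("acct-2", 9_500)])
print(authorize(Transaction("acct-2", 50)))      # False
```

The trade-off is latency versus reach: the in-line check has to run within the payment flow's time budget, while the offline job can afford heavier models but never stops the first bad transaction.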

CIOs should, of course, provide the technology infrastructure to run corporate algorithms, and the data they require, says Mark Katz, CIO of the American Society of Composers, Authors and Publishers, which licenses, tracks and distributes royalties to songwriters, composers and music publishers.

Katz meets regularly with ASCAP's legal department to make sure the results of the algorithms comply with the organization's charter and pertinent regulations.

"We're all information brokers at the end of the day," he says.

CIOs can expect increasing scrutiny of analytics programs. The Federal Trade Commission, in particular, is watching how banks, retailers and other companies use algorithms that may inadvertently discriminate against poor people. An algorithm that advises a bank on home loans, for example, might unfairly predict that an applicant will default because certain characteristics place him in a group of consumers among whom defaults are high.
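Screening for that kind of inadvertent bias doesn't require opening up the model. One common screen -- sketched below with invented data -- is the "four-fifths rule" from U.S. employment-discrimination analysis: compare favorable-outcome rates across groups and flag the model when the lowest rate falls below 80 percent of the highest.

```python
# Disparate-impact screen using the four-fifths rule. The groups,
# decisions and counts below are invented for illustration.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok
    return {g: approved[g] / total[g] for g in total}

decisions = (
    [("A", True)] * 80 + [("A", False)] * 20    # group A: 80% approved
    + [("B", True)] * 50 + [("B", False)] * 50  # group B: 50% approved
)

rates = approval_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}")   # 0.62 -- below the 0.8 screen
```

A ratio that low would warrant a closer look at the loan model's outcomes even if no protected attribute is used directly as an input.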

Or online shoppers might be shown different prices based on criteria such as the devices they use to access an e-commerce site, as has happened with Home Depot, Orbitz and Travelocity. While companies may think of it as personalization, customers may see it as an unfair practice, Luca says.

The Consumer Federation of America recently expressed concern that, in the auto insurance industry, pricing optimization algorithms could violate state insurance regulations that require premiums to be based solely on risk factors, not profit considerations.

Consumers, regulators and judges might start asking exactly what's in your algorithm, and that's why algorithms need to be defensible. In a paper published last year in the Boston College Law Review, researchers Kate Crawford and Jason Schultz proposed a system of due process that would give consumers affected by data analytics the legal right to review and contest what algorithms decide.

The Obama administration recently called on civil rights and consumer protection agencies to expand their technical expertise so that they'll be able to identify "digital redlining" and go after it. In January, President Obama asked Congress to pass the Consumer Privacy Bill of Rights, which would give people more control over what companies can do with their personal data. The president proposed the same idea in 2012, but it hasn't moved forward.

Meanwhile, unrest among some consumers grows. "Customers don't like to think they are locked in some type of strategic game with stores," Pasquale says. CIOs should be wary when an algorithm suddenly produces outliers or patterns that deviate from the norm, he warns. Results that seem to disadvantage one group of people, he says, are also cause for concern. Even if regulators don't swoop in to audit the algorithms, customers may start to feel uneasy.

As Harvard's Luca puts it, "Almost every type of algorithm someone puts in place will have an ethical dimension to it. CIOs need to have those uncomfortable conversations."

Copyright © 2015 IDG Communications, Inc.
