by Tarun Gangwani

Building a strong web of trust in the machine learning age

Opinion
Mar 23, 2017
Artificial Intelligence | Privacy | Security

Businesses must reconcile the promise of machine learning with user privacy.

[Image: facial recognition. Credit: Thinkstock]

Every day, we hand data to companies in exchange for compelling experiences powered by machine learning (ML). Facebook’s ability to tag friends in a photo now feels obvious. Gmail’s ability to prioritize messages offers an intuitive way to triage conversations. Our user data makes these ML-based experiences possible, and we provide it on the trust-based assumption that it won’t fall into the wrong hands.

Businesses that wish to provide similar experiences face two challenges: they need ample amounts of data, and they need ML algorithms that can deliver the desired end-user experience. The first challenge is overcome by leveraging social platforms as the data source. Twitter, for example, offers an API that exposes the firehose of data from its millions of users so businesses can build on top of it. Every time you see a “sign in with” button, the app is using the social platform as both an authorization layer and a tool to extract your data. Once a business has access to your data, it can pick up a machine learning algorithm off the shelf to build out a feature. Thanks to these two practices, it is easier than ever to build an ML-powered application.
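To make the mechanics concrete, here is a minimal sketch of that “sign in with” flow, assuming a generic OAuth-style platform. The endpoint URLs and field names are placeholders for illustration, not any real vendor’s API:

```python
# Sketch of a generic "sign in with" flow: the platform first acts as an
# authorization layer, then as the data source. URLs below are hypothetical.
import requests

TOKEN_URL = "https://social-platform.example/oauth/token"    # placeholder
PROFILE_URL = "https://social-platform.example/api/me"       # placeholder

def exchange_code_for_token(auth_code: str, client_id: str,
                            client_secret: str, redirect_uri: str) -> str:
    """Step 1 (authorization layer): trade the sign-in code for a token."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": auth_code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def fetch_user_data(access_token: str) -> dict:
    """Step 2 (data extraction): pull whatever the granted scopes allow."""
    resp = requests.get(PROFILE_URL,
                        headers={"Authorization": f"Bearer {access_token}"})
    resp.raise_for_status()
    return resp.json()  # profile fields, photos, contacts -- whatever was scoped
```

The key point for the trust web is the second function: once the token is granted, the app can pull every field the user’s permissions allow, whether or not the user thought that through at sign-in.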

While people may trust established businesses with their data, newer businesses and services built on social platforms carry an important responsibility to maintain that trust. Companies creating solutions from social platforms and open-source algorithms inherit the trust users placed in the original platform, and they must honor it. When people opt into a social sign-on feature, they open a window onto intimate details, often without realizing it. Most people are equally unaware of the algorithms and tools used to process their data. Together, these relationships weave what I call the trust web. At any given time, a user photo may reside within a social network, a third-party business, a machine learning company, and a cloud data center. Each party in this trust web shares the permission users granted to leverage their data for those users’ ultimate benefit.

The founders of NTechLab, a facial recognition software provider, appreciate their role in the trust web. In 2015, NTechLab secured its place as a market leader by winning the University of Washington’s MegaFace challenge, beating 90 teams, including Google’s FaceNet. Alexander Kabakov, co-founder of NTechLab, provided me with some insight into the technology’s capabilities. He stressed the role NTechLab plays for businesses creating compelling user experiences. “When our algorithm processes a photo, it creates a vector, which describes a face using just 80 numbers. This is basically a hash, and we use only that vector to compare it with [other face vectors] when the algorithm tries to identify a person,” Kabakov said. He went on to note, “The facial recognition data belongs directly to our clients and only to them…. According to our company’s policy, only our clients can manage and use their facial recognition data.” NTechLab’s technology not only obfuscates the data, but also makes clear who owns it.
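To illustrate Kabakov’s point, here is a hedged sketch of vector-based matching: faces are compared only through their numeric vectors, never through the photos themselves. The 80-number vector size follows his description, but the distance metric, the threshold, and the stand-in vectors are illustrative assumptions, not NTechLab’s actual model:

```python
# Sketch of hash-like face matching: only short numeric vectors are stored
# and compared. A real system would produce these vectors with a trained
# embedding model; here we simulate them with random data.
import numpy as np

EMBEDDING_SIZE = 80    # per Kabakov: a face described "using just 80 numbers"
MATCH_THRESHOLD = 0.6  # illustrative cutoff, not NTechLab's actual value

def is_same_person(vec_a: np.ndarray, vec_b: np.ndarray) -> bool:
    """Compare two face vectors; the original photos are never needed."""
    vec_a = vec_a / np.linalg.norm(vec_a)  # normalize to unit length
    vec_b = vec_b / np.linalg.norm(vec_b)
    return float(np.linalg.norm(vec_a - vec_b)) < MATCH_THRESHOLD

# Usage with stand-in vectors (a real system would get these from a model):
rng = np.random.default_rng(0)
enrolled = rng.normal(size=EMBEDDING_SIZE)
probe = enrolled + rng.normal(scale=0.05, size=EMBEDDING_SIZE)  # same face, new photo
stranger = rng.normal(size=EMBEDDING_SIZE)                      # unrelated face

print(is_same_person(enrolled, probe))     # True: vectors are close
print(is_same_person(enrolled, stranger))  # False: vectors are far apart
```

This is the obfuscation the quote describes: the parties in the trust web exchange and store these compact vectors, while the photos themselves stay with whoever the user gave them to.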

Platforms can work with algorithm providers to strengthen the trust web. NTechLab, for example, takes great strides to ensure data integrity, and it works with app makers to reinforce a shared commitment to user privacy.

“NTechLab thoroughly monitors how their partners are using their platform to make sure it will be used appropriately,” said a spokesperson from the company. The company has worked with Russian citizens and VK, Russia’s equivalent of Facebook: NTechLab’s analysis platform, FindFace.ru, identified criminals using face data from VK. In this case, NTechLab saw a civic use for its technology in catching wrongdoers. In the United States, Facebook leverages ML algorithms to identify people at risk of suicide. As these algorithms are applied to more humanitarian purposes, users will begin to trust their use in everyday life.

As ML plays a growing role in our lives, app makers, platforms, and algorithm providers must work together to weave a strong web of trust. Many consumers make buying decisions based on a vendor’s commitment to privacy and security. Businesses solving user problems with machine learning can work with algorithm providers to create safe, meaningful experiences. Platform providers can enforce least-privilege access for app makers, and they can write their terms so that data ownership stays with the consumers themselves.

Each party that stands up for the end user will help the adoption of ML continue without friction and for the benefit of society as a whole.