by Paul Rubens

Beware technology that delivers less than its name promises

Feature
Aug 29, 2016
Innovation | IT Strategy | Technology Industry

Don't be lured by what the words imply. Instead consider what the technology actually does.

Invisibility cloaks sound like a lot of fun, and the good news is that they really do exist. But the reality, for now at least, is not nearly as fantastical as you might imagine.

That’s because they’ve been designed for military purposes. It turns out that typical cloaking technology can be used to make a vehicle’s infrared signature match the landscape behind it, or to absorb or deflect radio waves, rendering it near-invisible to heat detectors and radar. It’s very effective technology, but if you were expecting some sort of Harry Potter invisibility cloak then you’re going to be disappointed.

[ Related: Don’t look now, but Harry Potter’s invisibility cloak just got a big step closer ]

More topically, there’s been a huge amount of publicity recently surrounding technology that supports autonomous driving, much of that fueled by Tesla’s decision to release its Autopilot software as an over-the-air update to its Model S and Model X vehicles on October 14 last year.

A car capable of autonomous driving sounds like it should be capable of driving itself to where the owner needs it to go – regardless of whether anyone happens to be inside it. That would be what the National Highway Traffic Safety Administration defined in 2013 as Level 4 Full Self-Driving Automation:

The vehicle is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip. Such a design anticipates that the driver will provide destination or navigation input, but is not expected to be available for control at any time during the trip. This includes both occupied and unoccupied vehicles.

But the truth is that systems such as Tesla’s Autopilot fall far short of providing this type of autonomous driving. Tesla “requires drivers to remain engaged and aware” when Autopilot is activated, and drivers “must keep their hands on the steering wheel.” Rather than autonomous driving, the technology available in Teslas, and in vehicles from other brands including BMW and Mercedes, is more accurately described as a high level of driver assistance. If you fancy relaxing in the back with a book while the car whisks you to your destination, then once again you’re likely to be disappointed.

[ Related: US probes Tesla on autopilot system failures after fatal crash ]

These two examples send an important message to CIOs: New technology may provide valuable benefits for your business, but also might not deliver quite what you expect.

Here’s a look at what that implies for emerging technologies that are generating a lot of buzz: artificial intelligence and machine learning.

Reality bites

In science fiction books, AIs are connected to computer and sensor networks so they can process vast amounts of data and outwit mere humans anytime they choose. But the current reality is rather more prosaic. While relatively sophisticated artificial intelligence software does exist, some experts believe it is unlikely to make a huge impact on the IT department in the near term.

“We see AI as another piece of software, like an ERP system,” Marc Carrel-Billiard, managing director of Global Technology Research & Development at Accenture Technology, told CIO.com. “It will be another tool in the CIO’s toolbox, and it will need to be integrated into the IT landscape and connected to legacy environments.”

[ Related: IBM’s Watson just landed a new job: helping Macy’s shoppers ]

The same may also be true of a field closely related to artificial intelligence: machine learning. The concept of a system that is given a goal and figures out for itself the best way of achieving it is a striking, and possibly frightening, one. But, inevitably, there’s a catch. Rather than cogitating for a while and then producing the best approach to a problem, like an academic poring over journals and writing on whiteboards before a eureka moment, machine learning systems are more hands-on. They learn by doing, and to improve they need a stream of new data to learn from.
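To make that learn-by-doing loop concrete, here is a minimal sketch using a toy perceptron written for this article rather than any vendor’s system: the model can only improve while fresh labelled examples keep arriving.

```python
# A toy illustration of "learning by doing": an online classifier that
# improves only while new labelled examples keep arriving. The data and
# names here are invented for illustration.

def predict(weights, features):
    """Classify as +1 or -1 based on the sign of the dot product."""
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score >= 0 else -1

def update(weights, features, label, lr=0.1):
    """Perceptron rule: adjust the weights only on a mistake."""
    if predict(weights, features) != label:
        return [w + lr * label * x for w, x in zip(weights, features)]
    return weights  # no mistake, nothing learned

# A stream of (features, label) pairs stands in for the flow of new data.
stream = [([1.0, 0.5], 1), ([0.2, -1.0], -1), ([0.9, 0.8], 1)]
weights = [0.0, 0.0]
for features, label in stream:
    weights = update(weights, features, label)
# When the stream dries up, the weights stop changing, and so does the model.
```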

A good example is provided by the researchers at DeepMind, Google’s artificial intelligence subsidiary and creators of AlphaGo, a system designed to play the formidably complex game of Go. (Last year AlphaGo beat Fan Hui, the European Go champion.)

AlphaGo is good at playing Go, but to get better it learns what works and what doesn’t from the games it plays, and because it doesn’t get tired like a human it could, in theory, play millions of games every day. So you’d think that it would improve very quickly. But there’s just one problem: its learning is limited by the number of games humans can play against it.

The implication of this for the business world is that machine learning systems may be able to get better at their tasks, but only in fields where large amounts of new data are constantly being generated.

How learning works

If a five-year-old human can learn a language, then surely a computer system can learn three or four and translate between them? But attempts over the last 40 years to codify and teach computers the rules of grammar, and to provide them with sufficient vocabulary to translate languages, have failed.

As a result, companies like Google have switched to a kind of brute-force approach known as statistical machine translation. This works by making use of “parallel corpora” — bodies of text that have already been translated from one language to another. The system analyzes the texts, spots co-occurrences of phrases in one language and their equivalents in the other, and stores these pairings in a “phrase table.”
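As a rough sketch of how a phrase table comes together, consider the toy Python example below. The corpus and the scoring heuristic are invented for this article; real systems align multi-word phrases and store probabilities rather than simple co-occurrence rates.

```python
from collections import Counter

# Hypothetical, pre-aligned sentence pairs standing in for parallel corpora.
parallel_corpus = [
    ("the dog sleeps", "le chien dort"),
    ("the cat sleeps", "le chat dort"),
    ("the dog eats", "le chien mange"),
    ("the cat eats", "le chat mange"),
]

cooc = Counter()      # how often source word s appears alongside target word t
tgt_freq = Counter()  # how often target word t appears overall
for src, tgt in parallel_corpus:
    tgt_words = tgt.split()
    tgt_freq.update(tgt_words)
    for s in src.split():
        for t in tgt_words:
            cooc[(s, t)] += 1

def best_translation(source_word):
    # Score candidates by co-occurrence rate rather than raw count, so a
    # ubiquitous word like "le" doesn't win every pairing.
    candidates = [(t, cooc[(source_word, t)] / tgt_freq[t])
                  for t in tgt_freq if cooc[(source_word, t)]]
    return max(candidates, key=lambda pair: pair[1])[0]

print(best_translation("dog"))  # 'chien': scores 2/2, while 'le' manages only 2/4
print(best_translation("cat"))  # 'chat'
```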

When the system translates a new piece of text, it breaks the text down into phrases and looks them up in the phrase table; when there are several possible choices, or questions about word order, it uses statistics to decide which option is most likely to be correct. “Essentially we are translating using probabilities to find the best solution,” explained Phil Blunsom, a lecturer and machine translation researcher at the University of Oxford. “The computer doesn’t understand the languages or know any grammar, but it might use statistics to determine that ‘dog the’ is not as likely as ‘the dog’.”
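Blunsom’s “dog the” example can be reproduced with a toy bigram model. The training text below is invented, but the principle is the one he describes: pure counting, no grammar.

```python
from collections import Counter

# A tiny invented corpus; a real system would train on millions of sentences.
corpus = "the dog barks . the dog sleeps . a dog barks .".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def bigram_prob(w1, w2):
    """Estimate P(w2 | w1) from counts, with add-one smoothing."""
    vocab = len(unigrams)
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab)

print(bigram_prob("the", "dog"))  # 0.375: "the dog" was seen twice
print(bigram_prob("dog", "the"))  # ~0.111: "dog the" was never seen
```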

To get a reasonable translation you need about a million sentences, but the system can learn to do better – as long as it has a stream of new parallel corpora to learn from. No new data, no machine learning.

A cautionary tale

There’s more than one way to fall short of expectations. Sometimes a technology delivers quite a lot more than its maker intended, and something far different.

And older technologies are not immune.

Tavis Ormandy, a security researcher working for Google’s zero-day exploit-hunting Project Zero team, recently discovered that many of security software vendor Symantec’s enterprise and consumer products were riddled with security problems that leave any machine running the software highly vulnerable to attack. “These vulnerabilities are as bad as it gets,” Ormandy explained in a blog post. “They don’t require any user interaction, they affect the default configuration, and the software runs at the highest privilege levels possible.”

Some of these vulnerabilities were due to the inclusion of code derived from open source libraries that hadn’t been updated for seven years, and for a number of them exploits were already publicly available.

To be fair to Symantec, the company has now fixed the problems. But it’s worth considering that Symantec is almost certainly not the only vendor with serious vulnerabilities in its security software. And because antivirus software is usually integrated deeply into the workings of a computer’s operating system, a vulnerability in this type of software can have devastating consequences.

Does antivirus software actually make your computer more secure? Ormandy is not convinced. “Network administrators should keep scenarios like this in mind when deciding to deploy antivirus,” he concluded. “It’s a significant trade-off in terms of increasing (the) attack surface.”

In other words, security software could save you from a security breach, but then again it may be the cause of one. Put like that, it’s perhaps the perfect example of technology that may deliver something different from what it sounds like it should.