For as long as anyone can remember, the US has had a game show called Jeopardy. The basic premise of the show is that an answer is displayed and the contestants have to ‘answer’ by giving the question. The information is given in a particularly oblique manner, not dissimilar in form to a crossword clue.
Recently, Jeopardy saw two human champions take on a computer called Watson. To cut to the chase, Watson slaughtered its competition.
This was a major step forward. Not that long ago, Watson made the kind of idiotic mistakes that we humans love to snigger about when an artificial intelligence system is rash enough to take us on.
Understanding us humans is not easy: “She’s a star” does not mean that she is a cosmic gas ball — the meaning depends on context.
Then there is linguistic ambiguity — “time flies like an arrow” — or the fact that meaning can oscillate: in any 16-year-old’s tweet, ‘wicked’ can mean good or bad, even within the same 140 characters.
While any human could have used a traditional search engine to come up with the facts, Watson’s ability was to interpret the bizarre Jeopardy-speak and output the answer itself, not just find a document detailing it.
Why does this matter?
In the late sixties, when computers were first being properly applied to business, it was clear that they could never cope with the real way in which business was done.
People sent letters in prose or had telephone conversations; rich, human, subtle, flexible information. However, it was realised that some tasks could be simply defined and were highly repetitive.
The information could be defined and laid out so that where a value sat solved the computer’s problem of what it meant.
The number in row 34, column 15 was the inventory of widgets and the computer could do a wonderful thing: it could watch that number and if it got near zero, it could order more widgets.
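The widget-watching rule described above can be sketched in a few lines. The table layout, column positions and reorder threshold here are illustrative assumptions, not any real ledger format:

```python
# A minimal sketch of the kind of rule the article describes: the program
# knows what a value means purely because of where it sits in a fixed layout.
# The items, quantities and threshold below are invented for illustration.

inventory_table = [
    ["widgets", 3],    # each row: item name, quantity on hand
    ["gadgets", 120],
]

REORDER_THRESHOLD = 5  # assumed trigger level

def check_stock(table):
    """Watch the quantity column; return the items that need reordering."""
    return [name for name, quantity in table if quantity <= REORDER_THRESHOLD]

print(check_stock(inventory_table))  # -> ['widgets']
```

The computer never interprets anything: the second column *is* the stock level by convention, which is exactly the rigidity the rest of the article argues against.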
The IT industry was born, a purchase ledger clerk was automated out of existence and we never looked back. The format of laid-out information became structured information and the thing processing it evolved into the relational database. Almost every software application now has a database powering it.
Now, after 50 years, the computers are catching up and the technologies are becoming available to let computers understand human-friendly information.
Not just spotting keywords, but actually extracting meaning from emails, phone calls and video. We no longer need to slavishly fit ourselves to what the computer needs; rather, computers fit to us — we are in the era of Meaning-Based Computing.
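A toy sketch (not Watson’s method, and with invented documents) shows why mere keyword spotting falls short of extracting meaning:

```python
# A naive keyword match treats every occurrence of "star" identically,
# with no sense of what the word means in context. Documents are invented.

documents = [
    "The star collapsed into a white dwarf.",            # astronomy sense
    "She's a star, and her film tops the box office.",   # celebrity sense
]

def keyword_search(docs, term):
    """Return every document containing the term, blind to its meaning."""
    return [d for d in docs if term.lower() in d.lower()]

matches = keyword_search(documents, "star")
print(len(matches))  # both documents match, though only one is about a cosmic gas ball
```

A meaning-based system would have to tell these two senses apart from context, which is precisely the step a keyword index never takes.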
This is important as 85 per cent of all information in a company is human-friendly, or unstructured. And while transaction-type data is growing at 22 per cent a year, human-friendly information is growing at 62 per cent a year.
If we need to process it all, either humans need to breed faster or computers have to get a handle on meaning.
Our legal and regulatory systems all function on meaning, and customers don’t send you database tables — they email, tweet and call you. We all carry little human-friendly information processors with us in the form of smartphones.
Humans like audio, video and prose but not rows of numbers.
The IT industry is facing perhaps its biggest change yet. All previous changes have been about technology — that’s the T in IT: client-server, cloud, virtualisation.
This change is about the I in IT — the information itself is taking a more human-compatible form.
In a few years’ time, all software will have to understand the content of emails and phone calls, and be able to make decisions based on this information.
This will deal with the volume of work and free up the humans to do the creative bits.
Watson is impressive. It’s easy to assume its skills can be applied to tasks other than trivia, but this may not be strictly true.
But it is an illustration of the hard-fought progress in the area. The territory of artificial intelligence has been won a foot at a time, with modern self-learning systems far exceeding the old rules-based approaches.
In biological terms, current computing may be on a par with a sea slug, so Terminator-type fears of these machines becoming self-aware and destroying the human race are still fanciful.
On the other hand, everyday artificial intelligence is processing petabytes of our information in mission-critical tasks across all industries and software sectors.
Let’s not get too species-ist about this. After all, sitting on a train out of London on a Friday evening and suffering the disruption of loud, drunken City types, one might be forgiven for asking not when will machines become self-aware but rather when some humans will.
At least Watson has a power switch.
Mike Lynch is the founder and CEO of UK software company Autonomy