Forecasting the future isn’t easy, but some things are predictable. It is predictable, for example, that every now and then everyone will get excited about the imminent arrival of machines that think like people and will therefore destroy people’s jobs. This triggers a mix of enthusiasm and paranoia. But after the humanoid machines fail to materialize as predicted, everyone calms down and gets back to work.
In 2016, we are approaching the end of one of those periodic bouts of excitement — the one triggered by the 2013 report from Frey and Osborne at Oxford University, who analysed hundreds of U.S. occupations and estimated the likelihood of each being replaced by computers. The resultant paranoia led Google’s Eric Schmidt to warn about it at Davos in January 2014, and The Economist weighed in too, saying “The effect of today’s technology on tomorrow’s jobs will be immense – and no country is ready for it.”
IT jobs in the gunsight
This general angst gave a platform to those who believe that IT jobs such as computer programming and coding could be automated. Here’s Guillaume Bouchard, a London University academic: “If you say to your computer ‘write me a computer game in which a shark chases a man’ the computer should know what you want and create the game before your eyes. Then maybe you say ‘make the shark fiercer and faster’ and the computer will revise the code.”
The Oxford University study gave this view some support by estimating the probability that programmers’ jobs could be automated at 48 percent. I suspect very few coders will worry about such predictions because they know just how complex their jobs are. But others might find it useful to reflect on previous outbreaks of enthusiasm about automation.
To understand the present, know about the past
Looking back, what have other generations said about AI, machine learning and machines that write software? To underline how wide the range of views has always been, I contrast the opinions of two experts, expressed way back in 1964 in the New Scientist. Both were talking about what computers would be able to do in 1984. Here is Dr. Arthur L. Samuel of IBM’s Thomas J. Watson Research Center, New York:
“This problem [of machine learning] should certainly have been solved well within the next twenty years, and the computer will then have become a very much more useful device … programming as we now know it will have ceased to exist and the computer will then be a truly ‘intelligent’ machine.”
And this less optimistic picture of intelligence and learning was painted by Professor Maurice Wilkes of Cambridge University:
“We read in science fiction of computers acquiring superhuman reasoning powers and beginning to exert a tyranny over man. I do not have any fear of this happening, and certainly not by 1984. It would mean a breakthrough in the direction of programming computers so that they can learn, and this would, it seems to me, be such a stupendous breakthrough that it is unlikely to happen for a very long time. There is interesting work going on in AI, but the term is misleading and what is really being studied is new ways of programming computers to solve problems.”
Fifty-two years later, Wilkes’ comments are still amazingly apt. But Samuel was in excellent company: most past commentators on AI have been proven to have been wildly, hopelessly over-optimistic, including — and perhaps even especially — those working in the field.
For example, in a study published in the Science Journal in October 1967, a panel of experts was asked to give a date for “[The] availability of a machine which ‘comprehends’ standard IQ tests and scores above 150.” Their median answer was the year 1990. Despite the occasional press report to the contrary, even in 2016 we still do not have a computer anywhere that can perform at that level in intelligence tests. It seems those that perform even moderately well need the tests to be pre-digested and formatted if they are to have a chance of doing them at all.
And a final example: the sensational 2016 achievement of Google’s AlphaGo machine in beating one of the top two Go champions takes on a different hue when one reads these words, again written by Samuel in 1964: “The world’s checkers, chess and Go champions will, of course, have met defeat at the hands of the computer [by 1984].”
From these — and there are many others that could be cited — we can say that predictions about AI tend to be at least 20 years “out.”
AI and IT jobs: the real worry
To summarize, IT jobs are safe from AI. While more and better software tools will appear in the years ahead, that is no different from what has been happening for many years. It started with COBOL, a huge step forward but one that failed in its objective of allowing clerks to automate their own clerical processes. Readers will be able to identify dozens more, all serving to make IT people more productive, and to make new solutions possible. The only IT jobs eliminated have been punch card operators and the like. Today’s tech teams are not afraid of clever technical advances: they are relying on them to help deliver all the new stuff that will be demanded in coming years.
The same job data used by Frey and Osborne (available from the U.S. Bureau of Labor Statistics) illustrates the point. The U.S. computer job count has been rising at around 3.9 percent compound growth for the past five years. More and better tools — including AI tools — are badly needed if this unsustainable increase is to be brought under control. Better resourcing options like cloud will also be needed, and will in my view have more short-term impact than AI.
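To see why growth at that rate is hard to sustain, it helps to work through what compound growth actually implies. The sketch below is illustrative only: the starting headcount is a hypothetical round number, not an actual BLS figure, and the 3.9 percent rate is simply the figure quoted above.

```python
# Illustrative sketch of compound growth, not actual BLS data.
def project(headcount: float, rate: float, years: int) -> float:
    """Return headcount after `years` of compound annual growth at `rate`."""
    return headcount * (1 + rate) ** years

start = 4_000_000  # hypothetical baseline headcount

for y in (5, 10, 20):
    print(f"after {y:2d} years: {project(start, 0.039, y):,.0f}")
```

At 3.9 percent compound growth the job count roughly doubles every 18 years, which is the sense in which the trend cannot simply continue unchecked.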
The real worry is that CEOs and CFOs reading these fanciful accounts of what AI can do will put CIOs under pressure to cut headcount at a time when they perhaps ought to be hiring more people.
This article is published as part of the IDG Contributor Network.