Forecasting the future isn't easy, but some things are predictable. It is predictable, for example, that every now and then everyone will get excited about the imminent arrival of machines that think like people and will therefore destroy people's jobs. This triggers a mix of enthusiasm and paranoia. But after the humanoid machines fail to materialize as predicted, everyone calms down and gets back to work.
In 2016, we are approaching the end of one of those periodic bouts of excitement. This one was triggered by the 2013 report from Frey and Osborne at Oxford University, who analysed some 700 U.S. occupations and estimated the likelihood of each being replaced by computers. The resultant paranoia led Google's Eric Schmidt to warn everyone about it at Davos in January 2014. And The Economist weighed in, too, saying "The effect of today's technology on tomorrow's jobs will be immense – and no country is ready for it."
IT jobs in the gunsight
This general angst gave a platform to those who believe that IT jobs such as computer programming and coding could be automated. Here's Guillaume Bouchard, a London University academic: "If you say to your computer 'write me a computer game in which a shark chases a man' the computer should know what you want and create the game before your eyes. Then maybe you say 'make the shark fiercer and faster' and the computer will revise the code."
The Oxford University study gave this view some support by estimating the probability that programmers' jobs could be automated at 48 percent. I suspect very few coders will worry about such predictions, because they know just how complex their jobs are. But others might find it useful to reflect on previous outbreaks of enthusiasm about automation.
To understand the present, know about the past
Looking back, what have other generations said about AI, machine learning and machines that write software? To underline how wide the range of views on this has always been, I contrast two experts' opinions published in the New Scientist back in 1964. Both were talking about what computers would be able to do in 1984. Here is Dr. Arthur L. Samuel of IBM's Thomas J. Watson Research Center, New York:
"This problem [of machine learning] should certainly have been solved well within the next twenty years, and the computer will then have become a very much more useful device … programming as we now know it will have ceased to exist and the computer will then be a truly 'intelligent' machine."
And this less optimistic picture of intelligence and learning was painted by Professor Maurice Wilkes of Cambridge University:
"We read in science fiction of computers acquiring superhuman reasoning powers and beginning to exert a tyranny over man. I do not have any fear of this happening, and certainly not by 1984. It would mean a breakthrough in the direction of programming computers so that they can learn, and this would, it seems to me, be such a stupendous breakthrough that it is unlikely to happen for a very long time. There is interesting work going on in AI, but the term is misleading and what is really being studied is new ways of programming computers to solve problems."
Fifty-two years later, Wilkes' comments are still amazingly apt.
But Samuel was in excellent company: most past commentators on AI have proved wildly, hopelessly over-optimistic, including – and perhaps especially – those working in the field.
For example, in a study published in the Science Journal in October 1967, a panel of experts was asked to give a date for "[The] availability of a machine which 'comprehends' standard IQ tests and scores above 150." Their median answer was the year 1990. Despite the occasional press report to the contrary, even in 2016 we still do not have a computer anywhere that can perform at that level in intelligence tests. It seems the systems that perform even moderately well need the tests pre-digested and formatted if they are to have a chance of attempting them at all.
And a final example: Google AlphaGo's sensational 2016 achievement in beating one of the top two Go champions takes on a different hue when one reads these words, again written by Samuel in 1964: "The world's checkers, chess and Go champions will, of course, have met defeat at the hands of the computer [by 1984]."
From these examples – and there are many others that could be cited – we can say that predictions about AI tend to be at least 20 years "out."
AI and IT jobs: the real worry
To summarize, IT jobs are safe from AI. While more and better software tools will appear in the years ahead, that is no different from what has been happening for many years. It started with COBOL, a huge step forward but one that failed in its objective of allowing clerks to automate their own clerical processes. Readers will be able to identify dozens more, all serving to make IT people more productive and to make new solutions possible. The only IT jobs eliminated have been punch-card operators and the like. Today's tech teams are not afraid of clever technical advances: they are relying on them to help deliver all the new work that will be demanded in coming years.
The same job data used by Frey and Osborne (available from the U.S. Bureau of Labor Statistics) illustrates the point. The U.S. computer job count has been rising at around 3.9 percent compound annual growth for the past five years, a rate that adds more than a fifth to the headcount over the period (the short sketch at the end of this piece shows the arithmetic). More and better tools – including AI tools – are badly needed if this unsustainable increase is to be brought under control. Better resourcing options such as cloud will also be needed, and will in my view have more short-term impact than AI.
The real worry is that CEOs and CFOs reading these fanciful accounts of what AI can do will put CIOs under pressure to cut headcount at a time when they perhaps ought to be hiring more people.
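For readers who want to check that arithmetic, here is a minimal Python sketch of what 3.9 percent compound growth implies over five years. The starting job count below is an illustrative placeholder, not an actual Bureau of Labor Statistics figure; only the growth rate and the five-year window come from the paragraph above.
```python
# Illustrative sketch: what 3.9% compound annual growth implies over 5 years.
# The starting count is a placeholder, not actual BLS data.

RATE = 0.039   # compound annual growth rate cited above
YEARS = 5

start_jobs = 3_400_000                       # hypothetical starting job count
end_jobs = start_jobs * (1 + RATE) ** YEARS  # count after 5 years of growth

total_growth = end_jobs / start_jobs - 1     # cumulative increase
recovered_rate = (end_jobs / start_jobs) ** (1 / YEARS) - 1

print(f"Implied count after {YEARS} years: {end_jobs:,.0f}")  # about 4.12 million
print(f"Cumulative increase: {total_growth:.1%}")             # 21.1%
print(f"Recovered annual rate: {recovered_rate:.1%}")         # 3.9%
```
Compounding at under 4 percent a year may sound modest, but, as the sketch shows, it adds more than a fifth to the headcount in just five years.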