Invisibility cloaks sound like a lot of fun, and the good news is that they really do exist. But the reality, for now at least, is not nearly as fantastical as what you might imagine.

That's because they've been designed for military purposes. Typical cloaking technology can make a vehicle's infrared signature match the terrain behind it, or absorb or deflect radio waves to render it nearly invisible to heat detectors and radar. It's very effective technology, but if you were expecting some sort of Harry Potter invisibility cloak, then you're going to be disappointed.

[ Related: Don't look now, but Harry Potter's invisibility cloak just got a big step closer ]

More topically, there's been a huge amount of publicity recently surrounding technology that supports autonomous driving, much of it fueled by Tesla's decision to release its Autopilot software as an over-the-air update to its Model S and Model X vehicles on October 14 last year.

A car capable of autonomous driving sounds like it should be able to drive itself to wherever the owner needs it to go – regardless of whether anyone happens to be inside it. That would be what the National Highway Traffic Safety Administration defined in 2013 as Level 4 Full Self-Driving Automation:

The vehicle is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip. Such a design anticipates that the driver will provide destination or navigation input, but is not expected to be available for control at any time during the trip. This includes both occupied and unoccupied vehicles.

But the truth is that systems such as Tesla's Autopilot fall far short of providing this type of autonomous driving. Tesla "requires drivers to remain engaged and aware" when Autopilot is activated, and drivers "must keep their hands on the steering wheel."
Rather than autonomous driving, the technology available in Teslas – as well as in other brands, including BMW and Mercedes – is more accurately described as a high level of driver assistance. If you fancy relaxing in the back with a book while the car whisks you to your destination, then once again you're likely to be disappointed.

[ Related: US probes Tesla on autopilot system failures after fatal crash ]

These two examples send an important message to CIOs: New technology may provide valuable benefits for your business, but it also might not deliver quite what you expect.

Here's a look at what that implies for two emerging technologies that are generating a lot of buzz: artificial intelligence and machine learning.

Reality bites

In science fiction books, AIs are connected to computer and sensor networks so they can process vast amounts of data and outwit mere humans anytime they choose. But the current reality is rather more prosaic. While relatively sophisticated artificial intelligence software does exist, some experts believe its arrival in the IT department is unlikely to make a huge impact in the near term.

"We see AI as another piece of software, like an ERP system," Marc Carrel-Billiard, managing director of Global Technology Research & Development at Accenture Technology, told CIO.com. "It will be another tool in the CIO's toolbox, and it will need to be integrated into the IT landscape and connected to legacy environments."

[ Related: IBM's Watson just landed a new job: helping Macy's shoppers ]

The same may also be true of a field closely related to artificial intelligence: machine learning. The concept of a system that is given a goal and figures out for itself the best way of achieving it is a striking, and possibly frightening, one. But, inevitably, there's a catch.
Rather than sitting and cogitating before coming up with the best approach to a problem – like an academic poring over journals and scribbling on whiteboards before a eureka moment – machine learning systems are more hands-on. They learn by doing, and to improve they need a stream of new data to learn from.

A good example is provided by the researchers at Google's DeepMind artificial intelligence lab, creators of AlphaGo, a system designed to play the formidably complex game of Go. (Last year AlphaGo beat Fan Hui, the European Go champion.)

AlphaGo is good at playing Go, but to get better it learns what works and what doesn't from the games it plays, and because it doesn't get tired like a human it could, in theory, play millions of games every day. So you'd think that it would improve very quickly. But there's just one problem: its learning is limited by the number of games humans can play with it.

The implication for the business world is that machine learning systems may be able to get better at their tasks, but only in fields where large amounts of new data are constantly being generated.

How learning works

If a five-year-old human can learn a language, then surely a computer system can learn three or four and translate between them? But attempts over the last 40 years to codify and teach computers the rules of grammar, and to provide them with sufficient vocabulary to translate languages, have failed.

As a result, companies like Google have switched to a kind of brute-force approach to machine translation known as the statistical method. It works by making use of "parallel corpora" – bodies of text that have already been translated from one language to another.
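To make the idea concrete, here is a heavily simplified sketch of phrase-table translation in Python. The phrase table, the example phrases and the probabilities are all invented for illustration; real systems learn millions of entries from parallel corpora and score candidates with a statistical language model.

```python
# Toy sketch of phrase-table translation (all data invented for illustration).
# A real statistical MT system estimates this table from aligned bilingual
# text; here we hard-code a tiny French->English table.

# Phrase table: source phrase -> list of (translation, probability) pairs,
# as if estimated from co-occurrence counts in parallel corpora.
PHRASE_TABLE = {
    "le chien": [("the dog", 0.9), ("dog the", 0.1)],
    "dort": [("sleeps", 0.8), ("is sleeping", 0.2)],
}

def translate(source: str) -> str:
    """Greedily translate by looking up known phrases, longest match first."""
    words = source.split()
    output = []
    i = 0
    while i < len(words):
        # Try the longest phrase starting at position i that is in the table.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in PHRASE_TABLE:
                # Pick the most probable translation, as the statistics suggest.
                best, _prob = max(PHRASE_TABLE[phrase], key=lambda t: t[1])
                output.append(best)
                i = j
                break
        else:
            output.append(words[i])  # unknown word: pass it through unchanged
            i += 1
    return " ".join(output)

print(translate("le chien dort"))  # -> "the dog sleeps"
```

Note that "dog the" loses to "the dog" purely because of its lower probability – the statistics, not any knowledge of grammar, drive the choice.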
The system analyzes the texts, spots co-occurrences of phrases in one language and their equivalents in the other, and stores these pairings in a "phrase table."

When it translates a new piece of text, it breaks the text down into phrases and looks them up in the phrase table; when there are several possible choices, or questions about word order, it uses statistics to decide what is most likely to be correct. "Essentially we are translating using probabilities to find the best solution," explained Phil Blunsom, a lecturer and machine translation researcher at the University of Oxford. "The computer doesn't understand the languages or know any grammar, but it might use statistics to determine that 'dog the' is not as likely as 'the dog'."

To get a reasonable translation you need about a million sentences, and the system can learn to do better – as long as it has a stream of new parallel corpora to learn from. No new data, no machine learning.

A cautionary tale

There's more than one way to fall short of expectations. Sometimes it means delivering quite a lot more – and something far different – from what the maker intended.

And older technologies are not immune.

Tavis Ormandy, a security researcher on Google's zero-day exploit-hunting Project Zero team, recently discovered that many of security software vendor Symantec's enterprise and consumer products were riddled with security problems that make any machine running Symantec's software highly vulnerable to attack. "These vulnerabilities are as bad as it gets," Ormandy explained in a blog post.
"They don't require any user interaction, they affect the default configuration, and the software runs at the highest privilege levels possible."

Some of these vulnerabilities were due to the inclusion of code derived from open source libraries that hadn't been updated for seven years, and for a number of them exploits were publicly available.

To be fair to Symantec, the company has now fixed the problems. But it's worth considering that Symantec is almost certainly not the only vendor with serious vulnerabilities in its security software. And because antivirus software is usually integrated deep into the workings of a computer's operating system, a vulnerability in this type of software can have devastating consequences.

Does antivirus software actually make your computer more secure? Ormandy is not convinced. "Network administrators should keep scenarios like this in mind when deciding to deploy antivirus," he concluded. "It's a significant trade-off in terms of increasing (the) attack surface."

In other words, security software could save you from a security breach – but then again, it may be the cause of one. Put like that, it's perhaps the perfect example of technology that may deliver something different from what it sounds like it should.