Famous mock musician David St. Hubbins once said, "There's a fine line between stupid and clever." On one side of the line is an endless celebration of genius. On the other: failure and ignominy.

The tech industry has no choice but to embrace innovation and risk-taking. As such, some innovations start out looking crazy but end up being brilliant. Others start out looking just as crazy and implode under the weight of their own insanity.

In that light, here are seven next-horizon ideas that ride that fine line between amazing and amazingly stupid. The developers of these innovations might prove to be crackpots, or they could turn out to be insanely great. The technology could end up being a black hole for venture cash or a savvy play for business value emerging along the fringe. It all depends on your perspective.

Quantum computers

Of all the out-there technologies, nothing gets more press than quantum computers, and nothing is spookier. The work is done by a mixture of physicists and computer scientists fiddling with strange devices at super-cold temperatures. If it requires liquid nitrogen and lab coats, well, it's got to be innovation.

The potential is huge, at least in theory. The machines can work through bazillions of combinations in an instant, delivering exactly the right answer to a mathematical version of Tetris. It would take millions of years of cloud computing time to find the same combination.

Cynics, though, point out that 99% of the work we need to do can be accomplished by standard databases with good indices.
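The scale problem both sides are arguing about is easy to see in miniature: a classical brute-force search over n independent yes/no options must examine 2^n combinations, so it runs out of road long before n reaches the hundreds. A toy sketch in Python (the knapsack-style scoring here is just an illustration, not any particular quantum benchmark):

```python
from itertools import product

def best_combination(weights, values, capacity):
    """Exhaustively try every subset: 2**len(weights) candidates."""
    best_value, best_pick = 0, ()
    for pick in product([0, 1], repeat=len(weights)):
        total_weight = sum(p * w for p, w in zip(pick, weights))
        total_value = sum(p * v for p, v in zip(pick, values))
        if total_weight <= capacity and total_value > best_value:
            best_value, best_pick = total_value, pick
    return best_value, best_pick

# 20 options is about a million combinations; 300 options is more
# combinations than there are atoms in the observable universe.
value, pick = best_combination([3, 5, 7, 9], [4, 6, 7, 8], 12)  # value 13
```

Double the option count and the loop's work squares; that cliff is what quantum hardware promises to sidestep.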
There are few real needs to look for strange combinations, and when there are, we can often find perfectly acceptable approximations in a reasonable amount of time.

The cynics, though, are still looking at life through old glasses. We haven't begun to ask the questions that quantum computing might answer. Once the machines are readily available, we might begin to think of asking the new questions. This is one reason why IBM is offering quantum computing toolkits, along with certification for those who want to explore the outer boundaries of what the machines might do.

Potential first adopters: Domains where the answer lies in a search through an exponentially growing combination of hundreds of different options.

Chance of happening in the next five years: Low. Google and IBM are warring with press releases. Your team will spend many millions just to get to the press release stage.

Heating with computing

Every decision made by a CPU sends a few electrons down the grounding wire, and whatever energy they carried is turned into heat. Traditionally, these joules have been treated as waste, and finding a way to get rid of this heat has been a headache for circuit designers, computer case builders, and colo architects.

Why not use it to heat buildings in winter? Why not replace the boilers and heat pumps of the world with miniature racks of servers pumping out heat? The people living upstairs would be grateful and welcoming. The compute jobs could migrate from north to south and back again with the seasons, just like the arctic terns that spend half of the year in the northern hemisphere and half in the southern.

There would be a few challenges. If a warm front came through Vermont in January, the inhabitants would turn off the "heaters," decreasing the cycles available for AI researchers, data scientists, and everyone else buying spot instances.
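The scheduling half of the idea, at least, is mundane software: a dispatcher just weights regions by heating demand. A hypothetical sketch (the region names and temperatures are made up for illustration):

```python
def pick_region(job_name, regions):
    """Route a deferrable batch job to the region that most needs heat.

    regions: dict mapping region name -> outdoor temperature in Celsius.
    The coldest region has the highest demand for waste heat. In a real
    dispatcher, job_name's metadata would also factor into priority.
    """
    return min(regions, key=regions.get)

# Made-up temperatures for illustration.
regions = {"vermont": -8, "ontario": -12, "texas": 18}
destination = pick_region("nightly-model-training", regions)  # "ontario"
```

The hard parts are the ones the paragraph above names: demand that evaporates with a warm front, and hardware stranded in the wrong hemisphere.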
It could also mean installing twice as many servers, unless shipping servers around turns out to be cheap enough.

Right now the cloud companies keep their servers in huge racks in central locations where the electricity is cheap. If they move them into residences, they can reuse the heat.

Potential first adopters: Cold climates like Canada's.

Chance of happening in the next five years: High. Pilot projects are already being tested around the globe.

Green AI

If the buzzwords "green" and "artificial intelligence" are good on their own, why not join the two and double the fun? The reality is a bit simpler than doubling the hype might suggest. AI algorithms require computational power, and at some point computational power is proportional to electrical power. The ratio keeps getting better, but AIs can be expensive to run, and generating the electricity produces tons of carbon dioxide.

There are two strategies for solving this. One is to buy power from renewable energy sources, a solution that works in parts of the world with easy access to hydroelectric dams, solar farms, or wind turbines.

The other approach is to just use less electricity, a strategy that can work if questions arise about the green power. (Are the windmills killing birds? Are the dams killing fish?) Instead of asking the algorithm designers to find the most awesome algorithms, ask them to find the simplest functions that come close enough. Then ask them to optimize this approximation to put the smallest load on the most basic computers. In other words, stop dreaming of a million-layer model trained on a dataset with billions of examples and start constructing solutions that use less electricity.

The real secret force behind this drive is alignment between the bean counters and the environmentalists.
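The "simplest function that comes close enough" rule can even be automated: rank candidate models by cost and take the cheapest one whose error clears the bar. A minimal sketch, with invented models and tolerance (a real shop would measure energy or FLOPs rather than the made-up cost numbers here):

```python
def simplest_good_enough(candidates, inputs, target, tolerance):
    """candidates: list of (cost, model_fn) pairs.

    Returns the lowest-cost model whose mean absolute error against
    target stays within tolerance, or None if none qualifies.
    """
    for cost, model in sorted(candidates, key=lambda c: c[0]):
        error = sum(abs(model(x) - target(x)) for x in inputs) / len(inputs)
        if error <= tolerance:
            return model
    return None

# Approximating x**2 on a narrow range: a straight line is close enough
# there, so the "expensive" exact model never gets picked.
target = lambda x: x * x
candidates = [
    (100, lambda x: x * x),       # exact, but costly to run
    (1, lambda x: 10 * x - 25),   # cheap tangent line at x = 5
]
model = simplest_good_enough(candidates, [4.8, 5.0, 5.2], target, 0.1)
```

Here `model(6)` returns 35, not 36, confirming the cheap approximation won.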
Simpler computations cost less money and use less electricity, which means less stress on the environment.

Potential early adopters: Casual AI applications that may not support expensive algorithms.

Potential for success in five years: High. Saving money is an easy incentive to understand.

Build your own cloud clusters

Yes, they're little computers that cost less than $50. Yes, some 4th graders are wiring them up for science fairs. But just because they're cheap toys doesn't mean they can't be quite useful for real work. That's why some shops are building out Raspberry Pi clusters: racks filled with little Linux nodes sporting four-core ARM chips that sip electricity rather than chug it.

There are plenty of reasons to avoid the idea. Big, fat machines can be much more efficient. They can offer dozens of cores running dozens of threads and sharing big blocks of RAM and disk packs. When the loads get heavy, these can power through the work.

But working with smaller, separate machines offers redundancy precisely because they're separate. You might think your cloud instance is separate from the other virtual machines, but they usually share the same CPU, and there may be dozens or even hundreds of them. Separate machines with separate circuit boards offer security and redundancy. The biggest win, though, may be price. These clusters can be much, much cheaper than some of the instances in the major clouds. Sure, some cloud machines are just $5 per month, but after a year the Raspberry Pi can start to be cheaper.

Clusters like these allow massively parallel algorithms to run free. Many of the most intriguing problems require churning through huge collections of data, and often the tasks don't need to be done in order.
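Order-independence is the whole trick. The shape of such a workload can be sketched on a single machine with Python's standard library; on a real Pi cluster the worker pool would be nodes behind a job queue rather than local threads, and the word-counting task here is an arbitrary stand-in:

```python
from concurrent.futures import ThreadPoolExecutor

def count_words(chunk):
    """Each chunk is independent, so chunks can be processed in any order."""
    return len(chunk.split())

chunks = ["the quick brown fox", "jumps over", "the lazy dog"]

with ThreadPoolExecutor(max_workers=3) as pool:
    # map() hands chunks to workers concurrently but still returns
    # results in input order, so the reduce step stays simple.
    totals = list(pool.map(count_words, chunks))

total_words = sum(totals)  # 4 + 2 + 3 = 9
```

Swap the executor for a distributed queue and the same map-then-reduce shape spreads across as many $50 nodes as you care to rack up.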
These machines make it possible for programmers not just to think about inherently parallel algorithms but to start building and deploying them.

The trend also follows the way some major clouds are embracing hybrid options for moving data back on premises. Some want to save money. Some want security. Some want assurance.

Potential first adopters: Shops with big data sets that need parallel analysis.

Potential for success in five years: High. Clusters are already being deployed.

Homomorphic encryption

The weak spot in the world of encryption has been using the data. Keeping information locked up with a reasonably secure encryption algorithm has been simple. The standard algorithms (AES, SHA, DH) have withstood sustained assault from mathematicians and hackers for years. The trouble is that if you want to do something with the data, you need to unscramble it, and that leaves it sitting in memory where it's prey to anyone who can sneak through any garden-variety hole.

The idea behind homomorphic encryption is to redesign computational algorithms so they work with encrypted values. If the data is never unscrambled, it can't leak out. There's plenty of active research that's produced algorithms with varying degrees of utility. Some basic algorithms can accomplish simple tasks such as looking up records in a table. More complicated general arithmetic is trickier, and the algorithms are so complex they can take years to perform simple addition and subtraction.
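The simple tasks that already work well are mostly additive. The Paillier cryptosystem, for instance, lets anyone add two encrypted numbers without ever holding the private key: multiplying the ciphertexts adds the hidden plaintexts. A toy sketch with insecurely small primes (a real deployment would use 2048-bit moduli and a vetted library, not this):

```python
import math
import random

# Toy Paillier keypair: primes far too small for real use.
p, q = 1009, 1013
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)  # valid because g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def add_encrypted(c1, c2):
    """Multiplying ciphertexts adds the underlying plaintexts."""
    return (c1 * c2) % n2

total = decrypt(add_encrypted(encrypt(17), encrypt(25)))  # 42
```

Addition like this is cheap; it's general multiplication and comparison on encrypted data that drags fully homomorphic schemes into impractical territory.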
If your computation is simple, you might find that it's safer and simpler to work with encrypted data.

IBM, one of the leaders in the field, has been nurturing exploration by offering toolkits for Linux, iOS, and macOS developers who want to include the functionality in their applications.

Potential first adopters: Medical researchers, financial institutions, and other data-rich industries that must guard privacy.

Potential for success in five years: Varies. Some basic algorithms are already commonly used to shield data. Elaborate computations are still too slow.

Tricorders everywhere

Most of the technology in Star Trek remains a distant dream, but we've already grown accustomed to putting one of its "communicators" in our pockets. If anything, the current generation of mobile phones is much slicker than the flip phones that Kirk and Spock would use.

The next target for our society may be the tricorder, the box the medical teams would wave around in Star Trek to diagnose disease and look at our hidden guts. The good news is that the script writers were never specific about what a tricorder does. We know a phaser could kill or be set to stun, but the tricorder was essentially a prop to occupy Dr. McCoy's hands before he said, "He's dead, Jim."

Some researchers are already tossing around the word. One group is working on a "DNA tricorder" that will decode DNA sequences and fit in your pocket. Others have put together a digital stethoscope, EKG sensor, lung sensor, and a blood sampler that pricks your finger. Qualcomm awarded $10 million in prizes and defined a tricorder as a device that could capture five vital signs and diagnose 13 possible conditions.

But we can do more. Right now CT and MRI scanners are large and expensive, requiring elaborate radiation emitters and super-cooled sensors. But point sources of radiation are everywhere in the form of cell phone towers.
If a sensor for this radiation could be made with even a fraction of the sensitivity and resolution of the digital cameras tuned to the visible spectrum, the computational power of GPUs could start making sense of the insides of our bodies. The signals from nearby cell towers or television stations could act as point sources attenuated by the various bodily tissues.

Potential first adopters: Everyone from doctors in surgery to first responders at the scene of an accident. Home users with chronic diseases, along with hypochondriacs, will be big fans.

Chance of happening in the next five years: Low. It depends on what you think a tricorder can do. Some basics like measuring blood oxygen are simple and already on the market. Spotting tumors buried inside your pancreas, though, will take longer.
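The physics that tower-based imaging would lean on is ordinary attenuation. The Beer-Lambert law says intensity falls off as exp(-mu * x) through each layer of tissue, and tomography is the inverse problem of recovering the mu values from many such measurements taken along different paths. A sketch of the forward direction only, with illustrative (not clinical) coefficients:

```python
import math

def transmitted_fraction(layers):
    """Beer-Lambert attenuation through stacked tissue layers.

    layers: list of (attenuation_coefficient_per_cm, thickness_cm).
    Returns I/I0, the fraction of the signal that survives the path.
    """
    return math.exp(-sum(mu * x for mu, x in layers))

# Hypothetical path through skin, muscle, and bone; the numbers are
# made up to show the shape of the calculation, not real tissue data.
path = [(0.2, 0.5), (0.15, 3.0), (0.5, 1.0)]
fraction = transmitted_fraction(path)  # exp(-1.05), roughly 0.35
```

A CT scanner solves this equation backwards from thousands of such ray measurements; doing the same with ambient cell-tower signals is the speculative leap.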