Kaspersky Lab this week scared the living hell out of a lot of us with the disclosure that the Equation Group has developed one of the nastiest malware attacks ever discovered. And, if that weren’t enough, the security software maker identified another group, called Carbanak, which has drained mostly Russian banks of around $1 billion.
I ended the week talking about the predictions of Bill Gates, Elon Musk, and Stephen Hawking that future artificial intelligence machines represent the biggest threat to our existence -- kind of putting the whole hacking thing in better perspective. Yes, theft is bad. Death is generally considered worse.
What if one problem could be the cure for the other?
Breaking Down the Exposure
In the Equation Group attack report, the conclusion seemed to be that while the group's old tools are well beyond what anyone else has, its new tools are undetectable, making them far worse.
In the Carbanak case, the conclusion was that the “circle the wagons” approach banks typically take is exactly what makes huge thefts possible: Because there is no public notice of an attack, banks that haven't yet been hit can't put up a defense that would stop it from happening to them. Had banks shared information early on, the total loss would have been much smaller, and the criminal organization might even have been caught.
The only way to find and eliminate the new set of undetectable tools the Equation Group is using is likely through massive computer behavioral analysis: forensically examining every machine for a set of unusual behaviors so that, through the resulting profile, the malware can be identified and the infected machine quarantined. The malware is then removed using a linked management platform, or the system is reimaged, and the machine is tested to confirm that either step actually eliminated the software.
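The scan-profile-quarantine loop described above can be sketched in a few lines. This is purely illustrative -- the baseline metrics, thresholds, and triage labels are my own hypothetical stand-ins, not anything from Kaspersky's report:

```python
from dataclasses import dataclass, field

# Hypothetical behavioral baseline: values a healthy machine shouldn't exceed.
# Metric names and limits are illustrative, not from any real product.
BASELINE = {
    "outbound_connections_per_hour": 50,
    "firmware_write_attempts": 0,
    "unsigned_driver_loads": 0,
}

@dataclass
class MachineProfile:
    host: str
    metrics: dict
    anomalies: list = field(default_factory=list)

def profile_machine(host, metrics):
    """Compare observed behavior against the baseline and flag deviations."""
    profile = MachineProfile(host=host, metrics=metrics)
    for name, limit in BASELINE.items():
        if metrics.get(name, 0) > limit:
            profile.anomalies.append(name)
    return profile

def triage(profile):
    """Quarantine anything anomalous; firmware writes suggest reimaging won't help."""
    if not profile.anomalies:
        return "clean"
    if "firmware_write_attempts" in profile.anomalies:
        # Drive control software may have been rewritten -- reimaging the OS
        # won't reach it, which is why destroying the machine can be cheaper.
        return "quarantine-and-destroy"
    return "quarantine-and-reimage"

print(triage(profile_machine("host-1", {"outbound_connections_per_hour": 10})))
print(triage(profile_machine("host-2", {"firmware_write_attempts": 3})))
```

The point of the firmware branch is the same one made below: once the malware lives beneath the operating system, remediation by reimaging can no longer be verified.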
Given how invasive some of the tools are (actually rewriting the control software on hard drives), it might be more cost-effective to destroy the infected machine. This is the kind of militarized software that can come only from governments, and some believe that it likely came from the U.S. However, now that the destructive software is known, it can be found, reverse-engineered and given new targets in the U.S. Lucky us.
For the Carbanak kind of exposure, no bank is going to want to broadcast that it has been successfully hacked, making timely notification problematic. What is needed is some kind of intermediary that captures the nature of the attack, comes up with a defense, and communicates both in real time to other companies without identifying the source of the information.
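Such an intermediary amounts to an anonymizing clearinghouse: strip the reporter's identity, keep the attack indicators, and rebroadcast to everyone else. A minimal sketch, with all names and fields hypothetical (a real system would need far stronger anonymization than a truncated hash):

```python
import hashlib
import json

class Clearinghouse:
    """Hypothetical intermediary: banks report attacks, subscribers get
    advisories that never name the victim."""

    def __init__(self):
        self.subscribers = []
        self.known_signatures = set()

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def report(self, bank_id, attack_details):
        # Hash the reporter's identity away; only the attack itself is shared.
        token = hashlib.sha256(bank_id.encode()).hexdigest()[:8]
        signature = json.dumps(attack_details, sort_keys=True)
        if signature in self.known_signatures:
            return  # already broadcast; avoid duplicate advisories
        self.known_signatures.add(signature)
        advisory = {"reporter": "anon-" + token, "indicators": attack_details}
        for notify in self.subscribers:
            notify(advisory)

received = []
hub = Clearinghouse()
hub.subscribe(received.append)
hub.report("First National", {"malware": "Carbanak", "c2": "198.51.100.7"})
print(received[0]["indicators"]["malware"])  # the indicators arrive
print("First National" in json.dumps(received))  # the victim's name does not
```

The anonymous-but-certified token is what lets the un-attacked banks trust the advisory without the victim ever admitting publicly that it was hit.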
Applying Artificial Intelligence
Both problems would seem to be obvious candidates for a new kind of security solution -- one with AI at its heart. In the first case, an AI could adapt based on what it sees, both to better identify and to more quickly eliminate a threat like the one the Equation Group represents. It could also be missioned to report any attack -- anonymously, yet still certified as a valid source -- to a pool of AIs that would then be prepared to stop any similar attack on their own companies. Because this is AI and not a person, the scanning can be far more comprehensive, the response far faster, and the result itself far more secure.
However, the development cost to create such an AI likely falls more into the range of national defense funding than anything a security firm could bear. That in and of itself might be poetic, given that the most advanced threats appear to be coming from governments. Perhaps these governments should also fund the technology that mitigates them. It would certainly be embarrassing should a broad U.S. market collapse be connected to something like the Equation Group, and through it back to the U.S. government. That tool set is entirely capable of producing such an outcome.
Wrapping Up: AI Cyber War
Given that governments are funding malware, that the technology the Equation Group uses is partially intelligent, and that it can evolve into a more advanced offering if it finds something interesting on an infected computer, an AI-powered malware package is likely already in development. That could very well evolve into just what Gates, Hawking, and Musk are worried about: hostile AI with the power to destroy connected and disconnected computers or turn them against us.
Right now, we are clearly in a cyber-arms race, and what is most frightening is that far more development is going into offensive than defensive tools. If that doesn’t change, we are likely going to have some future problems that make what we’ve seen so far seem like a walk in the park. Put another way: While I’m not quite ready to replace Hawking, Gates, and Musk’s view with Eric Horvitz’s far friendlier outcome, I do think that if we don’t start working hard to ensure Horvitz’s vision, the world we don’t want becomes unavoidable. Something to noodle on this weekend.