A machine doesn’t have to be that advanced in its artificial intelligence capabilities to be dangerous; it just needs a high level of autonomy, according to Toby Walsh, AI researcher at Data61 (formerly NICTA).
Speaking at the recent 28th Australasian Joint Conference on Artificial Intelligence in Canberra, Walsh talked about autonomous weapons being the main cause of concern for humanity, rather than AI per se.
“It’s actually not The Terminator that we need to worry about, it’s actually the more simpler technology. You can have very intelligent programs with very little consequences, so it’s not AI that is typically the threat,” Walsh said.
Walsh gave the example of drones, which are not known for advanced AI capabilities but can be almost fully autonomous. A dumb drone that has the power to decide who lives and dies in a war setting is of great concern, Walsh said.
Some argue that war drones remotely controlled by a soldier at a ground base station keep humans in the loop. But that doesn’t mean fully autonomous war drones are not technically feasible, Walsh said.
“It wouldn’t take much, and most of us understand it’s a piece of engineering more than anything, to take that human out of the loop. The UK’s Ministry of Defence said that it’s technically feasible today.”
For Walsh, it’s obvious: a fairly dumb machine with a high degree of autonomy and destructive power is a recipe for havoc.
“The drone papers that were recently leaked out of the Pentagon showed that nine out of 10 people being killed by drones in Somalia and elsewhere were not the intended target.”
He added that AI researchers, scientists and developers have not yet cracked how to program human ethics into computers, so there is no way to ensure that drones hit only legitimate targets and never innocent civilians.
Another concern with highly autonomous machines is robots built for the battlefield. It could be argued that robots fighting robots would keep humans out of harm’s way. But Walsh said that is not really the case, as wars are usually fought in cities and towns, where civilians live.
“There is not a separate part of the world or battlefield, a sign ahead that says ‘wars over here, please’. Civilians are going to be caught up in this in our cities and towns, as wars happen around us,” Walsh said.
Autonomous weapons exist and are used today, but Walsh and about 20,000 others in the AI field want to ban this technology. Some have argued that because they already exist, there’s no point in banning them. However, just because the technology can be developed doesn’t mean it should be, he said.
“Just because we have these technologies that exist, doesn’t mean you can’t ban them. Blinding lasers is one example. If you go to the battlefields of Syria today, you don’t find blinding lasers being used. And yet, prior to 1998 they were being developed by one US and one Chinese company, and they had announced they were going to start selling them.
“You have to remember that whenever we introduce a technology, it will be used against us. It could very quickly fall into the [wrong] hands, get sold on the black market. The nature of warfare today is it’s a war against civilian populations, a war of terror. So it will be used against us, [not soldiers].”
Walsh said he has often been told that “bans won’t work”, but said history contradicts that.
“Anti-personnel mines still exist today, but 40 million of them have been banned as a result of the anti-personnel mine ban convention. And the world’s a safer place; fewer children are having their limbs blown off in minefields. And we are on target with the convention. So weapon bans do work.”