Humans should get ‘out of the loop’ of artificial intelligence systems, UTS roboticist Professor Mary-Anne Williams argued last week at an Australian Human Rights Commission technology conference in Sydney.

AI needn’t consult a flesh-and-blood individual even when making life-or-death decisions, said Williams, director of The Magic Lab at the university’s Centre of Artificial Intelligence.

Campaigners around the world are calling for limits on autonomous weapons systems that operate without human control. “States must draw the line now against unchecked autonomy in weapon systems by ensuring that the decision to take human life is never delegated to a machine,” the Campaign to Stop Killer Robots states.

In Australia, 122 AI experts last year signed a letter to Prime Minister Malcolm Turnbull urging him to “take a firm global stand” against weapons systems that remove “meaningful human control” when selecting targets and deploying lethal force.

But such ‘golden rules’ were dangerous, Williams argued.

“This golden rule – very, very dangerous. Golden rules like, really? Didn’t golden rules go out with the Greeks? I think it’s very, very worrisome that people hold on to this, like a security blanket, like a teddy bear. And it’s not going to work,” she said.

“Who is going to challenge an AI? AI is already outperforming people. People have plateaued; you’re not going to challenge an AI. If an AI said shoot to kill – you’re not going to say don’t shoot. And vice versa. The sooner we throw it out, the more opportunity we will have to build a future worth living in.”

Effort would be better spent on making sure such AI systems operate as intended, Williams added.

“Let’s monitor and be sure that the AI is actually competent, that’s what we should be doing. Not putting us in the loop. Because putting humans with our own frail intellect and cognitive bias in the loop – what are you doing? – we need to be out of the loop,” Williams said.

“The idea that somehow we bring accountability is I think just nonsense,” she added.

What do you think? Do deadly AI systems need a human in the loop? Let us know in the comments below.