The AI Safety Summit, convened by the UK government, is the latest in a series of regional and global political initiatives to shape the role AI will play in society.

Prime Minister Rishi Sunak sees the summit as an opportunity for the UK, sidelined since its departure from the European Union, to create a role for itself alongside the US, China, and the EU in defining the future of AI.

The summit, on November 1-2, is to consider the risks posed by AI, especially "frontier" AI models such as the more advanced examples of generative AI. Its goals are to convince people of the need to take action to reduce risks; to identify measures organizations should take to increase AI safety; and to agree on processes for international collaboration on AI safety, including on research and governance standards.

If Sunak's ambitions are realized, the summit could lead to requirements for enterprises to take more precautions in their deployment of advanced AI technologies, and to limitations on the development of such tools by software vendors.

At the same time, many regulations already exist — guaranteeing privacy, for example, or prohibiting discrimination — that implicitly limit what enterprises can or should do with AI or any other technology.

What is frontier AI?

Frontier AI, as defined by the UK government, refers to highly capable general-purpose AI models that can perform a wide variety of tasks at a level that matches or exceeds the most powerful technologies available today.

Today's frontier AI includes foundation models built on transformer architectures, such as GPT-4, its rivals, and their successors — although as the technology advances, views on what constitutes the frontier are likely to move, too.

Enterprises such as Unilever are already using GPT to deliver business value, although rarely in business-critical situations and almost always only to recommend courses of action for an employee to review and approve — the so-called "human in the loop" approach.

Why is frontier AI considered unsafe?

Frontier AI models may take significant computing and financial resources to train, but once that's done, they can be deployed to, or accessed from, almost anywhere for relatively little cost.

All new technologies come with a range of risks and benefits, but people are particularly concerned about the safety of frontier AI because of the speed and scale at which its impact could be felt, especially if such systems are left to function autonomously, without human supervision or intervention.

The potential risks identified by the UK government include threats to biosecurity, cybersecurity, and election fairness, as well as the potential loss of control over the development and operation of the foundation AI models themselves. There's also the possibility of "unknown unknowns" arising from unpredictable leaps in the capabilities of frontier AI models as they develop.

How might the AI Safety Summit change things?

Behind the references to biosecurity and cybersecurity on the summit agenda are fears that super-powered AI could facilitate or accelerate the development of lethal bioweapons, or of cyberattacks that bring down the global internet, posing an existential risk to humanity as a whole, or to modern civilization.

There's also the alignment problem to contend with: whether an AI system will pursue its programmers' intended goals, or instead follow its instructions to the letter, ignoring implicit moral considerations such as the need not to harm humans.
A classic thought experiment illustrating this asks how far an AI system might go if given a narrow goal — optimizing the output of a paper-clip factory, for example — and left to pursue it to the exclusion of all else.

The threat of something like this happening has prompted much letter-writing and hand-wringing — and even a few street protests around the world, such as those by Pause AI, which is calling for a global halt to the training of general AI systems more powerful than GPT-4 until the alignment problem is provably solved.

While creating such things is probably not something AI developers intend, unconstrained enhancement of AI capabilities could make it possible for bad actors to misuse them or, if the alignment problem isn't solved, for the use of AI systems to have unintended side effects. That's why learning to better forecast unpredictable leaps in AI capability, and keeping AI under human control and oversight, are also on the summit agenda.

But there's a danger, say some observers, that by focusing on the unlikely but existential risks frontier AI may pose to civilization, the summit will push longstanding concerns about algorithmic bias, fairness, transparency, and accountability to the fringe.

What to do about those risks, both existential and everyday, is less clear.

The UK government's first suggestion is "responsible capability scaling" — asking industry to set its own risk thresholds, assess the threat its models pose, choose to follow less risky paths, and specify in advance what it will do if something goes wrong.

At a national level, the UK government suggests that it and other countries monitor what enterprises are up to, and perhaps require enterprises to obtain a license for some AI activities.

As for international collaboration and regulation, more research is needed, the UK government says.
It's inviting other countries to discuss how they can work together to identify the most urgent areas for research, and where promising ideas are already emerging.

Who is attending the AI Safety Summit?

When the UK government first announced the summit, its intention was to include "country leaders" from the world's largest economies, alongside academics and representatives of the tech companies leading AI development, with a view to setting a new global regulatory agenda.

A week or two before the summit, though, reports emerged that the leaders of several countries with strong AI industries were unlikely to attend, raising doubts about how effective the summit will be.

French President Emmanuel Macron will not be there, and German Chancellor Olaf Scholz is unlikely to show up either, European political news site Politico.eu reported. US President Joe Biden will not attend either, although Vice President Kamala Harris may.

While some of the European Union's biggest member states are disengaging from the summit, the bloc as a whole will be well represented. European Commission President Ursula von der Leyen will be there and, according to her official engagement calendar, plans to meet Secretary-General of the United Nations António Guterres at the event.

Meanwhile, European Commission Vice-President Věra Jourová's calendar indicates she'll meet South Korean Minister of Science and ICT Lee Jong-ho there.

Google DeepMind CEO Demis Hassabis is expected to be among the 100 or so attendees — a safe bet, since the company was founded in London and maintains its headquarters there.

The UK government has been playing up the recent decisions of a number of other AI companies to open offices in London, including ChatGPT developer OpenAI and Anthropic, whose CEO Dario Amodei is reportedly also attending.
Palantir Technologies, too, has announced plans to move its European headquarters to the UK, and is said to be sending a representative to the event. A Microsoft representative will also reportedly attend, although not its CEO.

Where else are AI directions being set?

The UK's AI Safety Summit is far from the only place where governments and enterprises are attempting to influence AI policy and development.

One of the first big commitments to ethical AI in the enterprise was the Rome Call. In 2020, Microsoft and IBM signed on to the non-denominational Vatican initiative, which promotes six principles of AI development: transparency, inclusion, responsibility, impartiality, reliability, and security/privacy.

Since then, legislative, regulatory, industry, and civil society initiatives have multiplied. The European Union's all-encompassing Artificial Intelligence Act seemed ahead of its time and full of good intentions, but it has drawn criticism and calls for stronger action from civil society groups, including Statewatch and service workers' union Uni Europa.

Elsewhere, the White House has secured voluntary commitments to AI safety standards from seven of the largest AI developers, the Cyberspace Administration of China has issued regulations on generative AI training, and New York City has set rules on the use of AI in hiring.

Even the United Nations Security Council has been debating the issue.

Software developers are joining in, too.
The Frontier Model Forum is the industry's attempt to get ahead of national or international controls by demonstrating that its members — including Microsoft, Google, Anthropic, and OpenAI — can be good global citizens through self-regulation.

All this activity puts the UK AI Safety Summit in a highly competitive environment. Legislators are trying, on the one hand, to create a safe environment for their citizens, free from the menace of opaque automated discrimination or even — if the most alarmist critics are to be believed — global extinction, while on the other hand allowing businesses to innovate and benefit from the productivity gains AI may enable.

Who gets to set those regulations, and who will have to abide by them, is unlikely to be decided any time soon, much less this week.