The good news is that European policymakers are responding to the challenges posed by artificial intelligence. In March, the European Political Strategy Centre (EPSC), the European Commission's in-house think tank, released a note urging the development of an artificial intelligence (AI) strategy for "human-centric machines." In early April, French President Emmanuel Macron issued the French AI strategy, which he described in a detailed interview with Wired magazine.

The impressive depth, sophistication and detail of these strategy notes is a tribute to the energy and care that went into their production. And there is much to welcome in them, including Macron's emphasis on ethics and the need for humans to maintain meaningful control of consequential decisions, such as those involved in the use of autonomous weapons systems. Other thoughtful elements include incentives for investment in AI technologies, the building up of AI research centers, and measures to address the shortage of AI expertise and talent. In a tribute to these forward-looking policies, Facebook, Google, Samsung, IBM, DeepMind and Fujitsu have all chosen Paris as the site of new AI labs and research centers.

But some policy recommendations go in the wrong direction, as if pre-existing concerns were simply grafted onto the topic of artificial intelligence as a convenient place to recycle them.

European policymakers are concerned about the ability of terrorists and extremists to spread recruitment material and hate speech on Internet platforms. They also believe that Internet disinformation campaigns directed by hostile foreign powers pose an existential threat to the integrity of liberal democratic political institutions.
To counter these threats, they are considering modifying the principle, embodied in the E-Commerce Directive, that internet platforms are not the publishers of material posted by third parties. The EPSC report endorses these concerns, calling for an end to the "exceptionalism" that made users of internet platforms, rather than the platforms themselves, responsible for the content they posted.

Internet companies do need to work with governments to address common Internet challenges, including stepping up their efforts to limit harmful content on their systems. But this is an unrelated topic that has nothing to do with fostering AI. Moreover, it is bad policy. Making Internet users, not the platforms, responsible for the material they post is not just an infant-industries measure that can be safely repealed once the platforms have gotten big. It is a policy that continues to be necessary to allow these platforms to distribute information provided by individuals and companies outside their editorial control. If platforms had to exercise the same control over content that traditional broadcasters and newspapers do, only material approved by their editors would see the light of day. The vast increase in voices and information made possible by the new technology would prove a brief interlude, not a permanent improvement in the marketplace of ideas.

Here's another worrisome turn in European AI strategy proposals. Many critics have called for breaking up the big tech companies. A revived populist movement wants to reduce their control over various economic sectors, including search, social media and online retailing. Much of this is coming from Europe, where Google is appealing a €2.4 billion fine for allegedly favoring its own affiliates in search results, an allegation rejected by the U.S. Federal Trade Commission.
Unfortunately, President Macron echoes these calls, suggesting that the very companies he is welcoming to Paris to create new AI research centers are "too big to be governed" and should be "dismantled." The EPSC report likewise urges competition authorities to stop mergers that "might allow merged companies to use AI technologies to discriminate against their users."

The larger tech companies are now the largest sources of cutting-edge AI research outside of China. Google's DeepMind astonished the world with software that taught itself to play Go better than any human, starting from nothing but a knowledge of the rules of the game. But China is a major rival. It uses an intricate system of subsidies, incentives and below-cost loans from state-owned banks to direct research and development toward AI, and it is now leading the way on frontier AI research such as deep learning.

Engaging in an antitrust crusade to break up tech companies is not a strategy to promote AI. Instead, it will undermine these companies' ability to finance the basic research that will allow us to compete in the AI race against countries with fundamentally different economic and political structures.

In one other area, the E.U. AI reports raise concerns. They suggest that the new General Data Protection Regulation (GDPR) might be a long-term comparative advantage for Europe. But privacy is not a fundamental divide between the U.S. and Europe. Privacy is a shared human rights objective of both the U.S. and the E.U., a common commitment that puts both of us on the right side of a crucial line dividing authoritarian systems from liberal democratic ones. GDPR is here to stay, but sensible implementation should be a priority, especially if we want to focus resources on AI research.
The bigger tech companies are spending millions of dollars and hiring substantial additional staff to defuse the ticking time bomb of privacy violations that could trigger GDPR fines of up to 4 percent of global revenue. One indication of this compliance burden is the growth in the number of privacy officers. The International Association of Privacy Professionals has a worldwide membership of 35,000, an increase of 10,000 in the last year alone, due almost entirely to new personnel hired in Europe for GDPR compliance.

By all means, as President Macron says, AI must be designed "within ethical and philosophical boundaries." Moreover, strategic plans in the U.S. and E.U. to foster AI research and development are crucial, especially given China's determination to establish and maintain dominance in this sector. But waging an unrelated war on tech does not promote the development of AI within ethical boundaries. It just guarantees that, together, we will finish the AI race in second place.