The other day, I was staring at my phone, happily ignoring my family, when it hit me like a ton of bricks. Oh my God, I realized. We desperately need an AI chatbot law! Like NOW. I immediately called my lawyer (and oldest friend), David Neumann, and explained the situation. The following is a transcript of our conversation:

Neum: Hey, hey, Brykman. How's it going?

Bryk: No time for small talk! We need to propose a new law ASAP, but everything I know about lawmaking comes from Schoolhouse Rock.

Neum: Hit me.

Bryk: We need a law that requires chatbots to identify themselves as such in robo-calls.

Neum [singing]: I'm just a bot. Oh yes, I'm only a bot. And I'm sitting here pretending I'm not...

Bryk: I'm telling you, man, we are on the verge of commercially available AI chatbots being able to easily pass the Turing Test…

Neum: Pass the what now?

Bryk: That's the test of whether a computer can convincingly pass itself off as a human. If a person can't tell they're talking to a chatbot, the bot has passed the test. Folks have claimed their bots have passed, but that's not really true. Still, it's all just a matter of time. Some predict we're about ten years away, but my gut tells me it's more like five.

Neum: So, five, ten years. What's the problem?

Bryk: The problem is, once commercially available AI-driven chatbots are indistinguishable from humans, scammers will come up with innovative new ways to use them to scam people. At least with humans calling humans, there's still a limiting factor: a human can only make so many calls per hour, per day. Humans have to be paid, take breaks, and so forth. But an AI chatbot could literally make an unlimited number of calls to an unlimited number of people in an unlimited variety of ways!

Neum: AI as in Artificial Intelligence?

Bryk: Right. AI, neural networks, machine learning. Don't you listen to my podcast?

Neum: Sure I do!
Anyway, humans also have a conscience and emotions that betray them. They can be thrown off their game. Plus (and this may be the worst thing of all), philosophically speaking, cutting out the human factor removes culpability, guilt, right? I didn't create that scam, the bot did!

Bryk: The bots could even call the same person over and over, using a different voice and a different sales pitch each time. Pitches could even be generated automatically by the AI itself, patching together snippets of phrases it finds online. Would a lot of it be gibberish? Sure. But who cares? Odds are a chatbot behaving randomly will still invent some convincing messages now and then, particularly if its rules are good and the elderly are targeted. Worse yet, a chatbot could present itself as anything: any agency or institution.

Neum: Sure! Law enforcement, utility rep, government agency.

Bryk: Exactly. Everyone I know has gotten the "IRS" call claiming they owe the government money. But nobody falls for it, because the dude making the call is clearly not a native speaker and his patter is ridiculous. But imagine if a bot were making the call! Suddenly, the English is perfect and the details are legitimately convincing. Hell, AI can even be used to replicate real people's voices! Even voices of people we know, by grabbing snippets of their voices from videos on Facebook. Talk about fake news! The AI could even pull actual information about the call recipient from big data (their social media accounts, say) and then incorporate that info into the conversation to help 'prove' its identity. And it would all seem totally natural.
Just like how magicians pretend to read the minds of audience members by using information the theater already collected from them when they bought their tickets.

Neum: Is that how they do that?

Bryk: Scammers could even create scams that involve sequences of calls: the first calls extract a bit or two of info from the victim, and then the follow-up calls consolidate all that info and use it in a completely different context, so the calls don't seem remotely related. The sky's the limit!

Neum: Oh my God. You weren't kidding.

Bryk: So, what's needed is some obligatory statement up front that identifies the call as coming from an AI chatbot, and not from a real person. Can we do that? Is that a First Amendment issue?

Neum: But it's not a person. It's a bot. Ha! Now you're getting confused.

Bryk: Right! Or maybe a law that says if you ask a chatbot whether it's a chatbot, the chatbot has to say, "Yes. I'm a chatbot." Right?

Neum: Maybe. Though of course that relies on the call recipient even knowing they can do this... and being suspicious of the bot in the first place, which they may not be if the bot is that convincing.

Bryk: Good point. Something must be done!

Neum: Hmm. There are jurisdictional issues, of course. Enforcement issues. Identity blockers.

Bryk: But at least we can set consequences…

Neum: I'm drafting the letter now.

Bryk: Who're we writing to?

Neum: Richard Cordray, former head of the Consumer Financial Protection Bureau, and Ohio Governor John Kasich. [Neumann's in Cleveland.] He's big into these sorts of issues.

Bryk: Okay. Exciting! Let's make history, my friend.

Stay tuned, folks. Progress updates to follow. This is real. My lawyer is real. I am real. And soon enough the bots will be real. But fear not, dear reader! We're going to make this happen.