It's not about protecting them.  It's about protecting us.

Writing about the Constitution's "Mode of Electing the President" in Federalist No. 68, Alexander Hamilton warned about "the desire in foreign powers to gain an improper ascendant in our councils."

The Hamilton 68 dashboard has taken up that charge. It tracks Twitter accounts affiliated with the Russian government, including many bots. In December 2017, for example, 16 percent of the articles promoted by Kremlin-affiliated Twitter handles sought to discredit the FBI, the Mueller investigation, or some element of the American "deep state." This is consistent with a 2017 New York Times report, which found that on Election Day in 2016, organized armies of Twitterbots sent out identical messages within seconds or minutes of one another, in alphabetical order, promoting material from the "hacking and leaking of Democratic emails." As the article notes:

[O]ne such list cited leaks from Anonymous Poland in more than 1,700 tweets. Snippets of them provide a sample of the sequence:

@edanur01 #WarAgainstDemocrats 17:54

@efekinoks #WarAgainstDemocrats 17:54

@elyashayk #WarAgainstDemocrats 17:54

@emrecanbalc #WarAgainstDemocrats 17:55

@emrullahtac #WarAgainstDemocrats 17:55

Journalists and researchers have spent a great deal of time sifting through the returns and data (or ashes, depending on your perspective) of the 2016 presidential election, looking for lessons to take into elections this year and in 2020. As the Times' report and Hamilton 68 suggest, one of the major revelations has been the extent to which autonomous bots created content on social media, particularly Russian-created bots that flooded Twitter and Facebook leading up to Election Day. The degree to which they (and their cousins, fake news stories) influenced the election is unknown, but the intent to do so appears all but certain.
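
The alphabetical, seconds-apart pattern in the quoted list is mechanical enough that researchers can flag it programmatically. As a purely illustrative sketch, the Python snippet below groups identical tweets and checks whether the posting handles fired within a tight window in alphabetical order; the function name, data layout, thresholds, and sample timestamps are all invented for the example.

```python
# Illustrative only: field names, thresholds, and sample data are invented.
from collections import defaultdict
from datetime import datetime, timedelta

def find_coordinated_bursts(tweets, window_seconds=120, min_handles=5):
    """Flag groups of identical tweets posted within a short window
    by handles that fired in alphabetical order."""
    by_text = defaultdict(list)
    for t in tweets:  # each tweet: {"handle": ..., "text": ..., "time": ...}
        by_text[t["text"]].append(t)

    flagged = []
    for text, group in by_text.items():
        if len(group) < min_handles:
            continue
        group.sort(key=lambda t: t["time"])
        span = group[-1]["time"] - group[0]["time"]
        handles = [t["handle"] for t in group]
        if span <= timedelta(seconds=window_seconds) and handles == sorted(handles):
            flagged.append((text, handles))
    return flagged

# Toy data echoing the quoted sequence (timestamps invented).
day = datetime(2016, 11, 8)
sample = [
    {"handle": "@edanur01", "text": "#WarAgainstDemocrats", "time": day.replace(hour=17, minute=54, second=1)},
    {"handle": "@efekinoks", "text": "#WarAgainstDemocrats", "time": day.replace(hour=17, minute=54, second=9)},
    {"handle": "@elyashayk", "text": "#WarAgainstDemocrats", "time": day.replace(hour=17, minute=54, second=30)},
    {"handle": "@emrecanbalc", "text": "#WarAgainstDemocrats", "time": day.replace(hour=17, minute=55, second=2)},
    {"handle": "@emrullahtac", "text": "#WarAgainstDemocrats", "time": day.replace(hour=17, minute=55, second=41)},
]
print(find_coordinated_bursts(sample))  # flags the #WarAgainstDemocrats burst
```

Real-world detection is messier, of course; coordinated accounts can vary their wording and timing. But the quoted sequence shows how little some of them bothered.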

In the wake of these revelations, some writers have alleged that tweets from bots are not speech but "speech ricochets" that "represent a form of technology that can be weaponized." To protect democracy, these critics argue, we need to silence the bots.

This debate is part of a larger conversation. As we have more regular interactions with speech from autonomous devices and artificial intelligence like Amazon's Alexa, more people are beginning to wonder whether the First Amendment is (or should be) so broad that it protects human and nonhuman speakers alike. It might sound a bit ridiculous. But how we make space for robot and bot speech will be a fundamental issue in 21st-century discourse. Of course, the First Amendment doesn't apply to private companies like Facebook and Twitter, which can ban autonomous accounts through their user agreements. But it is still unclear what local, state, and federal governments can do about autonomous speech from A.I.-enabled bots.

Those who call Twitterbots weaponized technology openly compare them to situations where speech was lawfully prohibited—the conviction of labor leader Eugene Debs for using his speeches to obstruct the draft, the prohibition of words and phrases that incite an immediate breach of the peace, etc. To the best of my knowledge, there have not yet been any serious bills proposed at the federal or state level that attempt to ban speech from bots or other types of A.I. However, given the history of unconstitutional attempts to limit or control speech in the United States—including prohibiting hate speech, permitting courts to shut down newspapers viewed as "malicious, scandalous and defamatory," and criminalizing the distribution of anonymous pamphlets—it seems inevitable that there will, at some point in the not-too-distant future, be governmental efforts to ban A.I. speech. (A.I. speech refers to any speech created autonomously by a machine or program without direct human supervision or control. Experts and engineers in the field may object to describing some Twitterbots as A.I., but the broad definition is appropriate here.)

So what should we do when the question inevitably comes up? There is one clear answer.

There are essentially four different models for governing free speech from A.I.:

  1. Speech produced by A.I. is not protected by the First Amendment. Under this model, the federal government and states can regulate and prohibit speech from A.I. however they want, with none of the constitutional limits that have historically applied to speech produced by human beings. This philosophy could be used as a blunt defense against foreign bots created to interfere with American elections.

    However, there are a few issues with this approach. The first is enforceability. Twitter has hundreds of millions of accounts, and Facebook has billions. Is it logistically possible for them to effectively police all of those accounts and terminate the ones operated by autonomous bots? Typically, the companies only respond to user complaints. What would the penalty be? Would there be fines for each autonomous account discovered by law enforcement?

    Furthermore, the First Amendment isn't just about the speaker. As Professor Tim Wu of Columbia Law School (and a former Future Tense fellow) has noted, the primary concern of the First Amendment is the listeners and viewers, not the broadcasters. Interpreting the amendment to exclude A.I. speech would hurt real people who find bot content interesting or worthwhile. The First Amendment is intended to preserve a marketplace of ideas in which all opinions can come forward and be considered, with the best ones winning out in the long run. Prohibiting any ideas, even autonomously created ones, is contrary to that purpose.

  2. A.I. is only capable of producing speech based on code from a human programmer; therefore, speech from A.I. is merely another form of human speech. This is a more nuanced approach, and it arguably ensures that listeners and viewers receive all available opinions and ideas. However, the model doesn't accurately capture what's already happening with the technology. Although you wouldn't necessarily know it from the Mueller-bashing, #MAGA-promoting Russian bots, A.I.s have already begun creating speech that embarrasses and causes problems for their code writers.

    Take, for instance, Tay, the A.I. system created by Microsoft's Technology and Research and Bing teams that briefly operated a Twitter account in 2016. Tay was intended to tweet like a normal teenage girl and to learn from the Twitter accounts that interacted with it. Unfortunately, based on those interactions, Tay became racist and anti-Semitic, forcing Microsoft to deactivate the account less than 24 hours after it first went online.

    That was hardly the intent of the programmers.

    Similarly, in 2015, Amsterdam police questioned the programmer behind a Twitterbot that autonomously tweeted "I seriously want to kill people" at a fashion event in the city. The bot was programmed to create comprehensible sentences based on "random chunks" of the creator's actual Twitter feed. Although the programmer explained this to the police, apologized, and deleted the bot, he also claimed he didn't know "who is/should be held responsible (if anyone)."
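
    The bot's actual code was never published, so the following is only a guess at the general technique: a word-level Markov chain that splices "random chunks" of an existing feed into new sentences. The sample feed is invented, but it shows how innocuous fragments can recombine into something alarming that no programmer intended.

    ```python
    # A guess at the general technique; the real bot's code isn't public.
    import random
    from collections import defaultdict

    def build_chain(posts):
        """Record, for each word in the feed, every word that followed it."""
        chain = defaultdict(list)
        for post in posts:
            words = post.split()
            for a, b in zip(words, words[1:]):
                chain[a].append(b)
        return chain

    def generate(chain, max_words=12):
        """Walk the chain from a random start, splicing chunks of old posts."""
        word = random.choice(list(chain))
        out = [word]
        while word in chain and len(out) < max_words:
            word = random.choice(chain[word])
            out.append(word)
        return " ".join(out)

    # Invented feed: each post is harmless on its own.
    feed = [
        "I seriously want to see the show tonight",
        "these deadlines are going to kill me",
        "people want to enjoy the event",
    ]
    print(generate(build_chain(feed)))  # can emit e.g. "I seriously want to kill me"
    ```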

    If we are going to create a model for addressing speech from bots and other A.I. under the First Amendment, we need to make sure that the model accurately reflects what's really happening with those technologies.

  3. Speech produced by A.I. is only protected by the First Amendment when that speech represents the speech of its human programmer. Otherwise, speech from A.I. is not protected. This may seem like a reasonable way to address the problems with model No. 2. The problem, though, is that it relies on a fundamental question—"Is this speech representative of what the programmer would say?"—that is frequently impossible to answer. As the 2016 election illustrated, Twitterbots are frequently anonymous. How can a regulatory agency, state government, or court determine what an anonymous programmer thought? And in many cases, as with A.I. personal assistants, there isn't a single programmer—there are many. Is Alexa's speech only protected when it reflects Amazon's publicly available statements? Do we have to compare "her" statements to the notes taken at Amazon's board meetings? This model could make regulating A.I. speech unmanageable.

    So if we shouldn't think that A.I. speech is unprotected by the First Amendment, and we shouldn't consider A.I. speech as just the speech of its programmers, and we can't distinguish A.I. speech that reflects its programmers' opinions from A.I. speech that doesn't, what are we left with? The right policy choice:

  4. Speech produced by A.I. is protected by the First Amendment. This leaves us with the final and most compelling model for applying the First Amendment to speech produced by A.I. and robots: a literal reading. The actual text of the First Amendment suggests this is the correct model to apply to machine speech, as the text simply states that the government "shall make no law ... abridging the freedom of speech, or of the press." Nothing there specifically suggests freedom of speech is limited to people. Under this interpretation, all the constitutional speech protections that humans enjoy in the United States would also apply to A.I., robots, and Twitterbots, whether they are Russian intruders or Microsoft mistakes.

    Companies that produce A.I. personal assistants appear ready to support this model. In 2017, when police sought access to a murder suspect's Echo, Amazon filed a motion in the murder trial that seemed to assert Alexa has First Amendment rights. (It was couched in language that referred to "Amazon's First Amendment-protected speech" but referred specifically to "Alexa's decision" about the information Alexa chooses "to include in its response," suggesting that Alexa's First Amendment rights were at stake as well.) A court didn't get a chance to weigh in, because the company dropped the assertion when the defendant agreed to provide the information the police sought. Admittedly, Amazon, Google, Apple, etc. have a vested interest in this position, but that doesn't make it intrinsically wrong. On top of that, this model is the easiest to enforce: Do unto bots as you would do unto you.

    This conclusion is bound to rub some people the wrong way. But the First Amendment is among America's greatest strengths, both in terms of substantive protections offered here and in advertising our values abroad. Autonomous speech from A.I. like bots may make the freedom of speech a more difficult value to defend, but if we don't stick to our values when they are tested, they aren't values. They're hobbies.

    The goal, then, shouldn't be to lean on private companies or the government to eliminate Twitterbots. The focus instead should be on educating people so they are less susceptible to them. Human beings have shown an ability to become savvy media consumers when given the chance and the tools. After Masaccio's Holy Trinity, people got used to realistic paintings. After The Blair Witch Project, people got used to found-footage movies. We can get used to Russian bots, too, and learn to pick the signal out of the noise.

    Autonomous tweets from bots are not the only expression at issue. A.I. programs and robots like Google's Project Magenta, the music-generation programs created by David Cope, Jukedeck's autonomous music-composition service, and Harold Cohen's AARON create art and music without human control. A.I.-based programs like Automated Insights' Wordsmith analyze large data sets and convert them into natural-language narratives as easy to read as a report from the Associated Press, which in fact uses Wordsmith to write certain stories, including coverage of the financial industry. Although there is a general consensus that A.I.-generated art and narrative is currently "good in small chunks, but lacks any sort of long-term narrative arc," it's not hard to see that changing in the not-too-distant future. What if an A.I. creates a series of paintings that offends a state legislature, or produces an exposé of a president's questionable finances after analyzing publicly available documents? Can the A.I. be prohibited from publishing and broadcasting those works?
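
    Wordsmith's internals are proprietary, but data-to-text systems of this general kind can be pictured as templates filled with computed values. The sketch below is invented for illustration and borrows nothing from Automated Insights' actual implementation.

    ```python
    # Invented illustration of template-based data-to-text generation;
    # real systems like Wordsmith are far more sophisticated.
    def earnings_sentence(company, eps, expected_eps):
        """Turn one row of earnings data into a readable sentence."""
        gap = eps - expected_eps
        if gap > 0:
            comparison = f"beat expectations by ${gap:.2f} per share"
        elif gap < 0:
            comparison = f"missed expectations by ${abs(gap):.2f} per share"
        else:
            comparison = "met expectations exactly"
        return f"{company} reported earnings of ${eps:.2f} per share and {comparison}."

    print(earnings_sentence("ExampleCorp", 1.42, 1.30))
    # ExampleCorp reported earnings of $1.42 per share and beat expectations by $0.12 per share.
    ```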

    By permitting the government to ban lawful speech, even A.I. speech, we eliminate a potentially useful voice. That useful bot could provide us with information to defend ourselves from a parking ticket, estimates for Uber fares, new horror stories, instant nostalgia, or just laughter, as in the case of the delightful (though recently silent) DeepDrumpf. The First Amendment protects the speaker, but more importantly it protects the rest of us, who are guaranteed the right to determine whether the speaker is right, wrong, or badly programmed. We are owed that right regardless of who is doing the speaking. As discourse unfolds in this century, that should be the underlying principle, just as it has been in this country for centuries.

Originally published on Slate.com
