The most terrifying thing about Mark Zuckerberg’s appearance on Tuesday before a special joint hearing of the Senate Judiciary and Commerce committees wasn’t anything he had to say about privacy, or the mishandling of millions of users’ personal data, or creepy Russian advertising in the 2016 presidential election.
In fact, on those questions, the Facebook CEO was often vague to the point of frustration. He apologized for his company and claimed to be open to regulation, as long as it’s the “right kind.” Whatever that means.
No, the most frightening part of the hearing was Zuckerberg’s utopian exuberance for artificial intelligence to solve a vexing, age-old problem.
Sen. John Thune, R-S.D., asked the 33-year-old billionaire about the challenges his company faces evaluating “hate speech” versus “legitimate political discourse.”
That’s a thorny technical problem, Zuckerberg replied. But he thinks it’s only a matter of time before an algorithm could catch and block “hate speech” before it’s even posted.
“Hate speech, I am optimistic that over a five to 10-year period we’ll have AI tools that can get into some of the nuances, the linguistic nuances of different types of content to be more accurate in flagging things for our systems, but today is just not there on that,” he said.
Oh, no. No, no, no.
Here’s the problem: Nobody can offer a clear, objective definition of what “hate speech” is. Sen. Ben Sasse, R-Neb., asked Zuckerberg directly. He couldn’t give a straight answer. The best he could do was sputter and reply, “This is a really hard question.”
Let’s leave aside calls to violence. The U.S. Supreme Court has ruled incitement is not protected by the First Amendment. We run into trouble when we get into what Sasse called “the psychological categories” of speech – speech that might trigger negative thoughts or make people feel bad.
Sasse pointed out that 40 percent of Americans under 35 tell pollsters that “the First Amendment is dangerous because you might use your freedom to say something that hurts somebody else’s feelings.”
A Brookings Institution survey of 1,500 undergraduates last year found that 44 percent of students surveyed – Democrats, Republicans, and independents – believe the First Amendment doesn’t protect “hate speech.” What’s more, about a fifth of respondents said it would be appropriate to use force to prevent a speaker from making “offensive or hurtful statements.”
We’re well past the old standard of falsely shouting “fire” in a crowded theater or inciting imminent violence.
What about a hot-button topic like abortion? “It might really be unsettling to people who’ve had an abortion to have an open debate about that, wouldn’t it?” Sasse asked.
“It might be,” Zuckerberg replied. “But I do generally agree with the point that you’re making.” He conceded that as AI “proactively” looks at content, companies – and countries – will have to wrestle with big questions.
Here are a couple: Who, exactly, is going to set the parameters of speech that AI will police? Where will programmers turn to sort through the “linguistic nuances” Zuckerberg mentioned?
The fact is, the AI and the algorithms that have made Zuckerberg the seventh-richest man on earth are the products of human ingenuity. The promise and the peril of AI is that it will one day expand beyond the capacity of humans to comprehend and control.
Before Zuckerberg’s artificially intelligent censor bots are loosed upon the world, they’ll need to learn a few little things, like two or three millennia worth of literature on freedom of thought. Will our AI protectors know Aristotle, John Milton, and John Stuart Mill? Or will they be programmed to emulate the thinking of a growing cohort of students that believes “speech is violence”?
Americans have been having this fight since the Alien and Sedition Acts of 1798. Yet Zuckerberg is confident all of this can be sorted out algorithmically in five years – 10, tops. Heaven help us.
Ben Boychuk is managing editor of American Greatness. He can be contacted at email@example.com or on Twitter @benboychuk.