‘Deepfake’ videos threaten our privacy and politics. Here’s how to guard against them

Stephen Engelberg, Editor-in-Chief at ProPublica, discussed the issue of “fake news” and media distrust on Tuesday, Feb. 26, 2019, during a Roger Tatarian Symposium held at Fresno State’s Satellite Student Union.

The California Legislature should pass pending bills that address the significant problem of “deepfakes,” images that appear real but are entirely fabricated by a computer.

Artificial intelligence now allows for the creation of video and audio that appear genuine but are complete fakes. “Deepfakes” undermine the very essence of freedom of speech. They also harm those falsely depicted and deceived by the images.

“Deepfakes” are relatively new, but they already have been used to create false celebrity pornography and revenge pornography. Similarly, deepfakes have been created in the political realm to show a person saying something that he or she never uttered. The threats to privacy and to our political process are enormous.

Assembly Bill 602 would prohibit the creation of fake digital imagery and sexually explicit audiovisual works that, without consent, depict an identifiable person engaged in sexual activity. This legislation addresses a problem of major importance to performers whose personas are exploited without their consent through increasingly sophisticated digital imagery. Revenge pornography is created in the same way. No one should be depicted as engaging in sexual activity without his or her consent.

Assembly Bill 730 would prohibit a person or entity, within 60 days of an election, from knowingly or recklessly distributing deceptive audio or visual media of a candidate with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate, unless the media includes a disclosure stating that the media has been manipulated. The bill would define “deceptive audio or visual media” to mean an image or audio or video recording that has been intentionally manipulated in such a manner that it would falsely appear to a reasonable observer to be authentic.

The Supreme Court has often explained that freedom of speech is protected as a fundamental right so as to further the marketplace of ideas. Rather than have the government determine what ideas can be said, the alternative is for all views to be capable of expression. But “deepfakes” add nothing to the marketplace of ideas and, indeed, detract from it. Depicting a person engaged in sexual activity that never occurred or showing a political candidate saying things that never were uttered offers nothing useful to public discourse.

Erwin Chemerinsky

Denials cannot cure the problem. A visual depiction of a person naked and engaging in sexual activity is a profound affront to dignity even if the person announces that he or she was not actually filmed. Similarly, having a candidate appear to say words he or she never uttered risks deceiving voters and deciding elections on the basis of misinformation. There is no assurance that those who heard the false statements will hear the denials, or that denials will be enough to cure the harms created by the “deepfakes.”



Although these two bills would regulate speech, they would not violate the First Amendment. False speech, at times, is protected, but often the government is allowed to prohibit it without running afoul of the Constitution. For example, lying in court under oath – perjury – is not protected by the First Amendment even though it is speech. The Supreme Court has been clear that false and deceptive advertising has no constitutional protection.

Most importantly, the Supreme Court has said that speech defamatory of public officials and public figures has no First Amendment protection if the speaker knows the statements are false or acts with reckless disregard of the truth. The court has explained that the importance of preventing wrongful harm to reputation and of protecting the marketplace of ideas justifies liability for the false speech.

These pending bills use exactly this legal standard. For example, AB 730, which prohibits deepfakes in the political realm, applies only where the false images were knowingly or recklessly created and disseminated. AB 602 has exceptions for a matter of “legitimate public concern” or a work of “political or newsworthy value, or similar work.”

The technology for deepfakes is advancing quickly, and the law must keep pace with this serious problem. By enacting these bills, the California Legislature would provide a model for Congress and for state legislatures across the country.

Erwin Chemerinsky is dean and professor of law at the UC Berkeley School of Law. He can be contacted at echemerinsky@law.berkeley.edu.
