By William C. Little

Voice Cloning and “Deepfake” Evidence Are Coming to a Courtroom Near You!



On a recent trip to San Francisco, while strolling along the city streets, I saw several driverless cars pass by. That was the closest I have ever been to a next-gen driverless vehicle, and I will admit that it was a bit disconcerting to see a robot vehicle take to the streets with no human behind the wheel. It certainly made me think twice about jaywalking!


A few weeks later, I saw a news report about a Cruise driverless vehicle in San Francisco that got stuck in freshly poured concrete. Something like that was bound to happen sooner or later with a fleet of autonomous, self-driving vehicles sharing our city streets.

As artificial intelligence (AI) applications continue to be integrated into the automotive sector and beyond, and cities struggle to keep up with how to regulate AI technologies, there are bound to be setbacks along the way. For example, in 2018 an Apple engineer named Walter Huang was killed when his Tesla vehicle crashed in Mountain View, California. Huang’s family filed a wrongful death lawsuit against Tesla, asserting product liability claims that the vehicle’s Autopilot feature was defective and that Tesla failed to provide adequate warnings.

During discovery, the Huang family sought to take Elon Musk’s deposition and question him about comments he allegedly made at a 2016 conference, where he stated, “A Model S and Model X, at this point, can drive autonomously with greater safety than a person. Right now.” A recording of the conference is available on YouTube if you are interested.


According to Reuters, Tesla’s attorneys took steps to block Musk’s deposition because he could not recall details about the statements in question and because Musk, “like many public figures, is the subject of many ‘deepfake’ videos and audio recordings that purport to show him saying and doing things he never actually said or did.”


AI “Deepfakes” Pose New Challenges for the Administration of Justice


The proliferation of AI-generated “deepfake” photographs, videos, voice recordings, and even written work product creates a host of new challenges for courts, attorneys, and litigants alike. A deepfake is defined as “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.” Deepfakes are easily made today using generative AI technology available to the masses.


The potential for abuse of deepfake evidence in legal proceedings is obvious. According to an article by Stanford Law Professor Riana Pfefferkorn, “In pre-trial and trial practice, deepfakes will touch every role in the courtroom: lawyers attempting to introduce or exclude videos as evidence; judges determining whether a video is admissible; expert and lay witnesses asked to testify about the video; and, finally, jurors weighing the evidence in order to reach a verdict.”


Just as fake evidence may be offered at trial, an opportunistic witness could take advantage of growing public skepticism about the veracity of digital exhibits by denying the authenticity of evidence that is actually legitimate. This situation, referred to as the “Liar’s Dividend,” is a growing concern. Hany Farid, a professor at the UC Berkeley School of Information, said in an interview, “When we enter this world where anything can be fake – any image, any audio, any video, any piece of text, nothing has to be real – we have what’s called the liar’s dividend, which is anybody can deny reality.”



“Is it Live, or is it Memorex?”


If you recognize the “Is it live, or is it Memorex?” slogan, you are probably a child of the 1970s and 80s. That was the Memorex company’s slogan touting the superior quality of its audio cassettes. (If you don’t know what an audio cassette is, then none of this makes sense. Google it!) The slogan conveyed the idea that the audio quality of Memorex cassette tapes was so good that it was difficult to discern whether the audio was live or recorded.


Fast forward to the present day, and we are having another “Memorex moment” when it comes to discerning the authenticity of AI-generated online content. On a scale of 1 to 10, how good are you at detecting fake online content? You probably gave yourself a pretty high score. But according to a 2021 study, people are prone to overestimate their own abilities and are generally poor truth detectors when it comes to spotting deepfakes. Here is a summary of the study’s conclusions:

(1) people cannot reliably detect deepfakes and (2) neither raising awareness nor introducing financial incentives improves their detection accuracy. Zeroing in on the underlying cognitive processes, we find that (3) people are biased toward mistaking deepfakes as authentic videos (rather than vice versa) and (4) they overestimate their own detection abilities. Together, these results suggest that people adopt a “seeing-is-believing” heuristic for deepfake detection while being overconfident in their (low) detection abilities. The combination renders people particularly susceptible to be influenced by deepfake content.


To test your ability to spot a deepfake, watch this video entitled “In Event of Moon Disaster.”


The 1869 Trial of William Mumler


Courts have a long history of dealing with the authentication of photographic evidence. Back in 1869, William Mumler – a so-called “spirit photographer” in New York – was charged with larceny and fraud. Mumler claimed that he could capture the spirits of a person’s deceased loved ones in his photographs. When he photographed Mary Todd Lincoln, the spirit of the late President Abraham Lincoln was supposedly standing in the background. Here is a copy of the photo, so judge for yourself. Is this really the ghost of Abraham Lincoln or a 19th-century deepfake?



Mumler was brought up on charges for selling these kinds of photographs. The trial took on a religious dimension, and the charges were ultimately dismissed because the prosecution could not prove fraud.

Whether dealing with a 19th-century spirit photograph or a 21st-century deepfake, courts have long applied the rules of evidence to ensure that only properly authenticated and relevant evidence is admitted at trial.


Authenticating Evidence Under Rule 901


Photographic and video evidence is authenticated under Rule 901 of the Federal Rules of Evidence. Rule 901(a) states, “To satisfy the requirement of authenticating or identifying an item of evidence, the proponent must produce evidence sufficient to support a finding that the item is what the proponent claims it is.” A video or photo can be admitted under Rule 901(b)(1) through the testimony of a witness with knowledge that the item “is what it is claimed to be.”


What about deepfake audio of a person’s voice? Rule 901(b)(5) permits authentication by “[a]n opinion identifying a person’s voice — whether heard firsthand or through mechanical or electronic transmission or recording — based on hearing the voice at any time under circumstances that connect it with the alleged speaker.” That is all well and good in most circumstances, but AI is now capable of producing astonishingly realistic voice deepfakes. The technology has advanced to the point where a machine can mimic voice patterns, inflections, and other characteristics so accurately that it can be hard to distinguish the real from the fake.


Given the advancements in AI technology, it is no surprise that the “deepfake defense” is already here, and not just in the Tesla case. For example, in one of the January 6th cases, a defendant expressed concern about deepfake videos of the U.S. Capitol riots circulating on the internet and refused to stipulate to the authenticity of YouTube videos the prosecution intended to use at trial, according to a brief filed in that case.

The burden of truth-detecting and spotting deepfakes will not fall solely on judges. Attorneys, as officers of the court, share in the responsibility to ensure that only legitimate evidence is used. Many of these questions will be decided in pre-trial hearings on motions to exclude evidence, and I expect those hearings to center on chain-of-custody issues, expert testimony on metadata analysis, and specialized deepfake detection tools and software.
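
To make the chain-of-custody piece concrete: one basic way a party can show a digital exhibit has not been altered since collection is by comparing cryptographic hashes taken at intake and again before trial. Below is a minimal Python sketch of that idea; the file name exhibit_video.mp4 is a hypothetical placeholder, and a real forensic workflow would add signed audit logs, full metadata extraction, and purpose-built tools.

```python
import hashlib
import json
import os
from datetime import datetime, timezone

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_intake(path: str) -> dict:
    """Record a simple intake entry: file name, hash, size, and timestamp."""
    stat = os.stat(path)
    return {
        "file": os.path.basename(path),
        "sha256": sha256_of_file(path),
        "size_bytes": stat.st_size,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

def verify_integrity(path: str, intake_entry: dict) -> bool:
    """Re-hash the file and compare against the intake record."""
    return sha256_of_file(path) == intake_entry["sha256"]

if __name__ == "__main__":
    # Hypothetical exhibit file used for illustration only.
    entry = record_intake("exhibit_video.mp4")
    print(json.dumps(entry, indent=2))
    print("Unaltered since intake:", verify_integrity("exhibit_video.mp4", entry))
```

A matching hash shows only that the bits are unchanged since intake; it says nothing about whether the recording was genuine when it was made. That second question is where metadata analysis and specialized deepfake detection tools come in.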


We live in interesting times, and we all have a part to play in protecting the integrity of our justice system. Whether you provide training for other legal professionals or help raise public awareness about deepfake technology and detection methods, your contribution is important, so keep moving forward!
