By: Talia Cherry
Volume X – Issue II – Spring 2025
I. INTRODUCTION
From the way we communicate with others to how we search for information, modern society surrounds itself with technology that allows each of us to be individually identified. Whether we want to acknowledge it or not, most people have their face and likeness publicly accessible in some way. Recently, individual markers have come to include our biometrics, that is, characteristics that carry biological information about ourselves and our likeness. [1] A controversial example of how biometrics are being used in society today is the “deepfake.” Deepfakes can be audio or visual, taking the form of recordings, videos, or still images. They are made with AI software that compiles human biometrics, what some would consider a person’s “likeness,” and assembles audiovisual content, sometimes in the likeness of real people, that appears completely genuine and depicts something that never actually happened. [2] As more biometric data is fed to AI software, its products become more accurate and realistic. Such software is freely available for anyone to use, and its output is often nearly impossible to distinguish from real videos, audio, and images. Deepfakes not only pose a threat to the perceived validity of evidence, given their ability to alter existing works or fabricate new ones, but also threaten the rule of law in the United States, which will require updates to, and higher thresholds within, the Federal Rules of Evidence.
II. FEDERAL RULE OF EVIDENCE 901
Rule 901 of the Federal Rules of Evidence governs the authentication of evidence. To satisfy the rule, the proponent of an item of evidence must produce proof sufficient to support a finding that the item is what the proponent claims it is. [3] Common methods of authentication include eyewitness testimony, expert and nonexpert opinions (each applicable under certain circumstances), distinctive characteristics such as timestamps, and public records and stored data. [4]
In 2024, the Advisory Committee on Evidence Rules held a meeting, run by experts, with the ultimate goal of raising the threshold for verifying evidence, particularly evidence produced by AI. [5] Beyond requiring accuracy, the proposed additions to FRE 901 would require that evidence created by AI be both “reliable” and “valid.” The suggested updated version of FRE 901(b)(9) states that in order to admit AI-generated evidence, the proponent must “(i) describe the software or program that was used; and (ii) show that it produced valid and reliable results in this instance.” [6]
To target deepfakes specifically, experts proposed another addition to FRE 901 that would raise the standard for courts to accept possibly AI-generated evidence. The proposed Rule 901(c) states: “If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that it is more likely than not either fabricated or altered in whole or in part, the evidence is admissible only if the proponent [of evidence] demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence.” [7] Under this provision, the proponent of evidence challenged as potentially AI-generated must show that, despite its contested origins, the evidence’s utility outweighs its potential to prejudice the jury.
III. FEDERAL RULE OF EVIDENCE 702
Federal Rule of Evidence 702 concerns expert testimony and its admissibility. [8] The rule requires that experts be qualified by some notable measure, whether education, training, or technical experience. Under Rule 702, the expert’s testimony must actually help elucidate a fact or promote comprehension of the evidence. Furthermore, expert witnesses must ground their conclusions in trustworthy information and methods, and must apply those methods reliably to the facts of the case. [9] Additionally, the party offering the expert witness must show that the testimony more likely than not meets the standards of Rule 702. [10]
Under the FRE, the judge is given the role of gatekeeper of evidence, meaning they determine the admissibility of evidence submitted to the court. A judge’s role as gatekeeper stems from Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993), in which the Supreme Court shifted the standard for expert testimony from broad recognition in the field to a stringent judicial assessment of reliability. [11] The judge is thus responsible for gauging whether appropriate scientific methods were used to reach the conclusion presented and whether those findings were correctly applied to the case. For example, it is important to determine whether the expert’s methods have been peer-reviewed and, if so, what the method’s rate of error is. It is also necessary to ask whether the methods are generally accepted in the scientific community. [12]
However, the judge’s role as gatekeeper is sometimes contested, especially in light of developments in AI technology and deepfakes. [13] Some legal experts maintain that judges draw on their legal discipline and professional experience and are therefore less vulnerable than the average layperson to common cognitive biases. [14] While there is some evidence that judges are more resistant to heuristics and biases, the differences between judges and the average layperson appear minimal. [15] Others take a more nuanced view: in specific, niche areas that require legal training, such as patent law, judges may hold genuine expertise and be better equipped than the average person to interpret claims. In areas where they have little or no familiarity, however, judges may overestimate their own fact-finding ability rather than defer to the jury, based on the unfounded assumption that laypersons have less ability or objectivity than judges. [16]
Currently, most judges, like most laypeople, do not have the software literacy to accurately determine whether a piece of evidence is a deepfake, and thus inadmissible. Nevertheless, under the current rules, many judges bring a degree of skepticism to evidence as a result of formal legal training, which serves as a stopgap until more rules and regulations are developed as the controversy over deepfakes unfolds. With this in mind, courts should begin to turn to a wider range of sources to support the authenticity of evidence, moving beyond eyewitnesses to those trained in software forensics. The prospect of “self-authenticating software” that can detect deepfakes would likely restore much of the confidence in the judge’s role as gatekeeper in the digital age. [17] Such technology would, in the case of deepfakes, theoretically be able to confirm AI usage, providing through technology a verification of authenticity that would otherwise require human judgment.
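One concrete form such self-authentication could take is cryptographic signing at the point of capture: if a camera or recorder signs a file’s bytes when it is created, any subsequent edit breaks the signature. The Python sketch below illustrates the idea; the file name, and the premise that the recording device holds a signing key, are assumptions made for illustration.

```python
# A minimal sketch of "self-authenticating" media, assuming a capture
# device signs a recording's bytes at creation time with an Ed25519 key.
# Any later alteration of the file invalidates the signature. Requires
# the third-party `cryptography` package; the file name is hypothetical.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# --- At capture time (in practice, done inside the recording device) ---
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
with open("exhibit_video.mp4", "rb") as f:          # hypothetical exhibit
    media_bytes = f.read()
signature = private_key.sign(media_bytes)

# --- At trial: verify the file against the published signature ---
def is_authentic(path: str, sig: bytes, pub: Ed25519PublicKey) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    try:
        pub.verify(sig, data)   # raises InvalidSignature if bytes changed
        return True
    except InvalidSignature:
        return False

print(is_authentic("exhibit_video.mp4", signature, public_key))  # True if untouched
```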
IV. FEDERAL RULE OF CIVIL PROCEDURE 34(b)(1)(C)
Federal Rule of Civil Procedure 34(b)(1)(C) states that a party’s request “may specify the form or forms in which electronically stored information is to be produced.” [18] In other words, the rule regulates the formats in which electronically stored information is produced and viewed, giving a party the right to request that evidence be submitted in a particular file format. [19] Certain formats provide more access to details about a file’s creation and editing. The producing party is thus responsible for translating the evidence into the requested form, notwithstanding the ever-changing formats of technology. This rule may help prevent the manipulation of evidence where one form provides more transparency or opens up the possibility of analyzing a file’s metadata.
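As a concrete illustration of why the requested form matters, metadata-bearing formats can be inspected directly. The following minimal Python sketch, using the Pillow library, reads the EXIF metadata embedded in a photograph; the file name is hypothetical, and a flattened export such as a screenshot would typically carry none of these fields.

```python
# Minimal sketch: reading embedded EXIF metadata from a photograph with
# the Pillow library. Fields such as DateTime or Software can corroborate
# (or undercut) an account of when and how an image was made. The file
# name is hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("exhibit_photo.jpg")
exif = img.getexif()

for tag_id, value in exif.items():
    tag_name = TAGS.get(tag_id, tag_id)  # map numeric tag IDs to names
    print(f"{tag_name}: {value}")
```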
V. APPLICATION OF FEDERAL RULES OF EVIDENCE TO TECHNOLOGY
Federal Rule of Evidence 901 requires that evidence derived from technology (e.g., photographs, screenshots, etc.) be verified with metadata, the history and information embedded in a file, to prove its authenticity. [20] The evidence must be an accurate representation of what it purports to be and must provide a full picture of the data, neither omitting, distorting, nor adding any details. According to FRE 901, if evidence stems from a website or social media page, it must be accompanied by the original source code and metadata, including details like timestamps and digital signatures. All of this data needs to be provided in EDRM-XML formatting compatible with the systems used for modern eDiscovery. [21]
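In practice, one common way to support such a showing is with a cryptographic hash computed at collection time, which can later demonstrate that the file offered at trial is bit-for-bit identical to the one collected. Below is a minimal Python sketch; the file name is hypothetical.

```python
# Minimal sketch: fingerprinting collected evidence with SHA-256. A hash
# recorded at collection can later show the produced file is unaltered.
# The file name is hypothetical.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream large files
            h.update(chunk)
    return h.hexdigest()

hash_at_collection = sha256_of("capture_2024-06-01.png")
# ...time passes; the file is produced in discovery...
hash_at_trial = sha256_of("capture_2024-06-01.png")
print(hash_at_collection == hash_at_trial)  # True only if bit-for-bit identical
```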
There are various examples of technological innovations shaping how the Federal Rules of Evidence are applied in practice. In Griffin v. State, 419 Md. 343 (2011), the State sought to use MySpace evidence to prove that the defendant’s significant other had threatened witnesses. [22] Although the threats were sent from a profile that matched her personal information (profile picture, name, birthday, hometown, etc.), a lack of metadata meant there was no proof that she had sent them. Information that could have corroborated the evidence includes an IP address or other proof that the messages were sent from her computer. [23] Because misleading social media accounts and posts are easy to create, the messages could well have been fake, and they were therefore held inadmissible. Similarly, with the rise of Adobe Photoshop and other photo-editing applications, photographs must now be authenticated to serve as evidence. In People v. Lenihan, 30 Misc. 3d 289 (Sup. Ct. Queens County 2010), the defense sought to cross-examine witnesses about images from an online platform that indicated possible criminal affiliations. [24] With the photographs alone, however, there was no way to verify their validity or confirm that they had not been edited or were otherwise misrepresentative. The photographs, and the cross-examination based on them, were therefore excluded. [25] This illustrates how, in the past, when new technology threatened to undercut the legitimacy of evidence in the criminal justice system, the threshold for admitting evidence rose whenever the evidence might not fully represent the truth. People v. Lenihan will thus likely serve as precedent for how to respond to the threat of deepfake evidence.
VI. DEEPFAKES AND DEEPFAKE DEFENSE
One of the more widely known examples of a realistic deepfake, from when the technology was in its infancy, was a video of former President Barack Obama. The video was incredibly realistic, which can be attributed to the fact that the most convincing deepfakes are often of people for whom the software has the most biometric data, such as a president. [26] The example illustrates just how simple it is to create a seemingly real video of someone doing or saying something, even though the subject may never have met the creator of the video. That someone on the other side of the world can make a deepfake of a complete stranger from biometrics available online shows just how far-reaching the implications of deepfakes are for our society.
While President Obama’s deepfake was not tied to a trial, that type of situation has presented itself in recent years. The “deepfake defense” describes a defendant’s claim that incriminating audiovisual evidence offered against them in court is a fabrication so realistic that it must have been created with artificial intelligence. Lawyers for those charged in the insurrection of January 6, 2021, argued that videos of the defendants storming the Capitol should not be trusted because the footage could have been fabricated or tampered with through AI, and because the defense had no way at the time to verify the videos’ authenticity. [27] As of now, an indisputably unfounded deepfake defense has yet to succeed in any case.
One example of how deepfakes can be used to manipulate the courts first presented itself in a family court in the United Kingdom. During a routine child custody case, the plaintiff introduced a deepfake audio recording to insinuate that her former partner should not have custody. Despite the recording’s surface-level plausibility, the defendant was able to discredit it by analyzing its metadata and showing that he had never recorded the audio. [28] The scenario demonstrates society’s vulnerability to AI-generated evidence penetrating our justice system: a legal landscape manipulated in this way no longer serves to create fairness and equity.
While there have been few documented cases of deepfake evidence so far, as AI continues to grow, there are concerns about the potential implications of AI-generated evidence. Consider a hypothetical: a client presents his attorney with printed photographs of an accident scene, which the attorney uses as a foundation for the case. The images, however, could be heavily doctored, adding or omitting key details of the scene, or could be entirely AI-generated, and the prints could feasibly bear the correct timestamps. [29] The attorney may have no hesitation in accepting the images, after all, they are printed; yet the technology is so accessible and inexpensive that even those who are not tech-savvy can create misleading narratives in their favor. The situation highlights how, since the dawn of deepfakes, confidence that our evidence, beyond just digital evidence, has not been tampered with has decreased, and how the rise of deepfakes will only further complicate verifying the authenticity of evidence where metadata is unavailable, as with printed, doctored photographs.
Even when evidence is determined to be credible, juries may begin to doubt audiovisual evidence because of the possible influence of AI, even when told that the information has been vetted and is true, effectively deteriorating trust in the rule of law. Where juries fail to fairly evaluate evidence out of concern about AI, even after a judge has admitted it, fabricated claims of AI interference, such as the deepfake defense, gain weight. Some scholars argue for “…prohibiting the production, offering, use, or possession of defamed technology.” [30] While this would head off many of the harmful scenarios, it raises First Amendment concerns. The First Amendment protects not only speech but also expression in forms other than speech. Banning deepfake software to prevent criminal activity could therefore be deemed a prior restraint on First Amendment protections under the United States Constitution.
Unfortunately, many criminal acts can result from the use of deepfakes. A staggering share of deepfakes are non-consensual pornographic material, some of which involves blackmail or child pornography. Deepfakes are also often used to commit financial crimes. This fraud frequently manifests as “grandma call” scams, also known as family emergency scams, which prey on the elderly and use audio deepfakes to claim that a loved one needs money because of an emergency, and which can lead to identity theft. [31] The prevalence of this scam highlights the complicated reality that biometrics are widely available on the internet.
VII. FUTURE OF EVIDENCE AND CONCLUSION
With the ever-evolving nature of artificial intelligence technology, the deepfake defense will undoubtedly be invoked more often, as it creates a sense of plausible deniability, lengthening trials and decreasing trust in the evidence presented to the court. Ultimately, the deepfake defense serves as a reminder that our criminal justice system is vulnerable to a loss of accountability moving forward. Fabricated claims of innocence, or alternatively fabricated evidence of guilt, break down the trust of the general public and undermine the rule of law. With this erosion of institutional trust, our systems become vulnerable to manipulation by people with malicious intentions.
The Federal Rules of Evidence stipulate a need for verification of multimedia evidence. While metadata can serve this function in part, it describes attributes of a file, not necessarily the contents of the file itself. Deeper forensic analysis, however, may compensate for the gaps in what metadata provides. Given the new concern of deepfakes, courts will need to rely more seriously on forensic scholars and on the development of technology that can ensure authenticity.
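As one illustration of content-level analysis that goes beyond metadata, the Python sketch below performs a simple error level analysis (ELA): the image is re-saved as a JPEG at a known quality, and the difference from the re-saved copy is measured, since regions edited after the original save often recompress differently. This is an illustrative sketch under those assumptions, not a court-ready forensic tool, and the file name is hypothetical.

```python
# Minimal sketch of error level analysis (ELA), one content-level forensic
# technique: re-save a JPEG at a known quality and inspect where the image
# differs from its re-saved copy. Edited regions often recompress
# differently and stand out. The file name is hypothetical.
import io
from PIL import Image, ImageChops

original = Image.open("exhibit_photo.jpg").convert("RGB")

buffer = io.BytesIO()
original.save(buffer, format="JPEG", quality=90)  # recompress at known quality
buffer.seek(0)
resaved = Image.open(buffer)

ela = ImageChops.difference(original, resaved)     # per-pixel error levels
max_error = max(hi for _, hi in ela.getextrema())  # largest channel difference
print(f"Maximum error level: {max_error}")         # uneven levels suggest edits
```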
Endnotes
[1] Yvonne Apolo and Katina Michael, “Beyond a Reasonable Doubt? Audiovisual Evidence, AI Manipulation, Deepfakes, and the Law,” IEEE Transactions on Technology and Society 5, no. 2 (2024): 156–68, https://doi.org/10.1109/tts.2024.3427816.
[2] Rebecca A. Delfino, “Deepfakes on Trial: A Call to Expand the Trial Judge’s Gatekeeping Role to Protect Legal Proceedings from Technological Fakery,” Hastings Law Journal 74, no. 2 (2022): 293–348, https://doi.org/10.2139/ssrn.4032094.
[3] Cornell Law School, “Rule 901. Authenticating or Identifying Evidence,” Legal Information Institute, https://www.law.cornell.edu/rules/fre/rule_901.
[4] Cornell Law School, “Rule 901.”
[5] Herbert B. Dixon, “The ‘Deepfake Defense’: An Evidentiary Conundrum,” American Bar Association, June 11, 2024, https://www.americanbar.org/groups/judicial/publications/judges_journal/2024/spring/deepfake-defense-evidentiary-conundrum/.
[6] “Addressing Challenges of Deepfakes and AI-Generated Evidence,” Taft Law, 2024, https://www.taftlaw.com/news-events/law-bulletins/addressing-challenges-of-deepfakes-ai-generated-evidence/.
[7] Dixon, “The ‘Deepfake Defense.’”
[8] E. Richard Webber and Dana Malkus, "Detailing Daubert," The St. Louis Bar Journal (Spring 2006), Saint Louis U. Legal Studies Research Paper Forthcoming, available at SSRN: https://ssrn.com/abstract=4025527.
[9] Webber and Malkus, “Detailing Daubert.”
[10] Webber and Malkus, “Detailing Daubert.”
[11] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993), https://supreme.justia.com/cases/federal/us/509/579/
[12] Webber and Malkus, “Detailing Daubert.”
[13] Delfino, “Deepfakes on Trial.”
[14] Delfino, “Deepfakes on Trial.”
[15] Delfino, “Deepfakes on Trial.”
[16] Delfino, “Deepfakes on Trial.”
[17] Delfino, “Deepfakes on Trial.”
[18] Legal Information Institute, “Rule 34. Producing Documents, Electronically Stored Information, and Tangible Things, or Entering onto Land, for Inspection and Other Purposes,” https://www.law.cornell.edu/rules/frcp/rule_34.
[19] Legal Information Institute, “Rule 34.”
[20] “Can Social Media Be Used in Court? 23 Court Cases That Prove Social Media Evidence Can Make or Break a Case,” Pagefreezer, 2024, https://blog.pagefreezer.com/social-media-digital-evidence-forensics-court-cases.
[21] Pagefreezer, “Can Social Media Be Used in Court?”
[22] Griffin v. State, 19 A.3d 415 (2011), https://caselaw.findlaw.com/court/md-court-of-appeals/1565367.html
[23] Pagefreezer, “Can Social Media Be Used in Court?”
[24] People v. Lenihan, 30 Misc. 3d 289 (Sup. Ct. Queens County 2010), https://law.justia.com/cases/new-york/other-courts/2010/2010-20462.html.
[25] Pagefreezer, “Can Social Media Be Used in Court?”
[26] Shannon Bond, “People Are Trying to Claim Real Videos Are Deepfakes. The Courts Are Not Amused,” NPR, May 8, 2023, https://www.npr.org/2023/05/08/1174132413/people-are-trying-to-claim-real-videos-are-deepfakes-the-courts-are-not-amused.
[27] Bond, “People Are Trying.”
[28] Apolo and Michael, “Beyond a Reasonable Doubt?”
[29] Apolo and Michael, “Beyond a Reasonable Doubt?”
[30] Maria-Paz Sandoval, Maria De Almeida Vau, John Solaas, and Luano Rodrigues, “Threat of Deepfakes to the Criminal Justice System: A Systematic Review,” Crime Science 13, no. 1 (2024): 1–16, https://doi.org/10.1186/s40163-024-00239-1.
[31] Sandoval, De Almeida Vau, Solaas, and Rodrigues, “Threat of Deepfakes.”