By Andrew Cohen
Over the past six months, Audrey Mitchell ’26 has seen her budding legal career rise in step with the subject of her growing fascination: artificial intelligence.
Last summer she gained valuable tech law experience as a patent litigation associate at Desmarais LLP in San Francisco and added to her skill set by working with UC Berkeley Law’s Samuelson Law, Technology & Public Policy Clinic. In June, Mitchell was chosen to join the AI Policy Hub — an interdisciplinary campus center focused on translating scientific research into governance and policy frameworks to shape AI’s future — and began work as a student fellow in August.
Six graduate students from various disciplines across the university are conducting innovative research to help reduce the harmful effects of AI and amplify its benefits. They’ll share their findings to inform policymakers through symposia, policy briefings, papers, and other resources.
Mitchell is exploring whether current legal rules provide adequate safeguards for AI use during legal proceedings. Her work probes the Federal Rules of Evidence, Federal Rules of Civil Procedure, and judicial standing orders to analyze how they’ve been creatively used to respond to the new challenges AI brings — and identifies their shortcomings in practice.
Below, Mitchell describes her ascending arc with AI, her research, and her concerns about ensuring the integrity of legal proceedings.
What sparked your interest in AI, and how has your time here fueled that?
My undergraduate experience at Stanford, where I was immersed in the Silicon Valley world and majored in engineering, left me with a lasting interest in new technology. When I came to Berkeley, I wasn’t sure how that would fit with my legal career. I’ve been very lucky to combine the two in a number of ways: my job in patent litigation last summer, this project, and the Samuelson Clinic.
My two 1L spring electives last year, Intellectual Property and Evidence, also featured lots of talk about how AI intersects with patent, copyright, and evidentiary doctrine. Since then, I’ve continued to think about the myriad ways AI will impact the world, and how the legal field can and must respond.
How do you foresee this fellowship enriching your career path?
This experience has been a wonderful opportunity to speak with judges and other prominent members of the legal community, and I look forward to maintaining those connections down the road. I’m also working on a paper that I hope to publish in a law journal.
There aren’t many opportunities in law school to lead yearlong research projects, so this experience has allowed me to build a new set of skills in defining a research project, creating a methodology, and working with academic mentors to set myself up for success. This has been a great way to expand my understanding of policymaking and begin to build my research and publications portfolio.
Why is it important to keep a close eye on how AI affects legal proceedings?
The legal field doesn’t have much control over how AI technologies are developed — instead, it plays a reactive role. The capabilities of AI to generate writing, images, audio, and video impact all stages of litigation: discovery, court filings, expert opinions, and evidence presented at trial itself. It is crucial that the legal field evaluate whether its existing rules systems, such as the Federal Rules of Evidence and Federal Rules of Civil and Criminal Procedure, are sufficient to preserve the integrity of the litigation process in light of AI and, if not, how those rules systems need to change.
What has your research revealed?
My current research focuses on the Federal Rules of Evidence (FRE) and judicial standing orders as tools to ensure that AI use is not interfering with the litigation process. So far, I’ve interviewed judges and the reporter for the Advisory Committee for the FRE, reviewed AI-related standing orders that judges have promulgated, and searched for case law (including motions in limine, trial transcripts, and orders) reflecting how judges are handling AI’s use during litigation.
It’s clear from even the limited number of cases where judges have had to rule on allegations of AI-generated evidence that generative AI will stretch the current evidentiary rules on authenticity, potentially to their breaking point. It’s also clear that the field will need some top-down guidance on how to handle AI during litigation given varying levels of technological understanding. The legal field is moving in what I think is the right direction — for example, the Advisory Committee is tentatively considering a new measure to respond to the authentication difficulties with gen AI — but there’s more work to be done to ensure consistency, fairness, and efficacy.
How is your time with the AI Policy Hub structured and what deliverables are involved?
We meet weekly with our two supervisors; meetings have included presentations on our research, guest speakers from the AI world, and workshops on how to make a policy impact. We also attended the Berkeley Law AI Institute in September. This was an amazing opportunity to hear from AI innovators and researchers, and would not have been possible without the AI Policy Hub’s funding. Each fellow’s main deliverable for this semester was a draft paper, which we’ll submit for publication in a variety of venues in the spring. Next semester, we’ll be writing op-eds or memos more directly focused on policy impact.
Why is interdisciplinary work vital for this initiative, and for effectively shaping AI policy?
AI is an interdisciplinary issue! The AI Policy Hub’s first two cohorts were largely Ph.D. students in Berkeley’s Department of Electrical Engineering and Computer Sciences (EECS) and School of Information. Our current cohort includes three EECS Ph.D. students, a master’s student at the Goldman School of Public Policy, a Ph.D. student in Social Welfare, and me. I think it’s mutually beneficial: As a law student, it’s very helpful to hear from EECS students about the technological side of the AI discussion.
It’s also been helpful for all six of us to hear about each other’s projects and widen the range of AI use cases we’re familiar with. Policy decisions in this area can’t be siloed — the technical decisions that AI developers make will impact all sectors and industries, and the expertise that specific sectors have can be broadly beneficial when figuring out how to regulate and deploy AI tools out in the world.
What’s your biggest macro concern about AI, and how will the group address it?
I don’t know that there’s one right answer. For me, I think it’s AI’s ability to facilitate and justify bias. AI models are built on training data that often have inherent bias. These biases are reflected in AI output, which, because it comes from a machine, may be viewed as neutral or unimpeachable. Depending on where AI tools are deployed, this can impact everything from the healthcare a person receives, to the prison sentence a person gets, to the jobs a person is considered qualified for.
I want to give a huge shout-out to two of my AI Policy Hub peers here: Ezinne Nwankwo is using her expertise from the EECS Ph.D. program to create best practices for the use of AI in allocating homelessness services, and Laura Pathak is creating policy recommendations to ensure that gen AI systems are held publicly accountable in health and human services. I think their projects speak directly to this macro concern, and all of us need to continue thinking about how AI can be regulated from both ends — on the development side and on the deployment side — to prevent this bias from causing negative impacts.