
When Professor Sonia Katyal went to law school, she aspired to be a civil rights lawyer.
Then the internet happened.
“I realized that the very same things I cared about in civil rights—equal protection, privacy, due process—had become concerns in technology,” she says. “Questions about who had access to technology, who was being targeted by surveillance, and how privacy and creativity are protected became central concerns in my work.”
Katyal focuses mainly on trademark law, but she also studies artificial intelligence (AI) and copyright and writes about the relationship between new media and public institutions. Gender and sexuality is another area of interest.
An article published last year drew many of those threads together. In it, Katyal noted parallels between pushing for the law to be fairer to all—the foundation of the civil rights movement—and debating how to ensure that algorithms aren’t replicating existing biases, or baking in new ones.
She proposed that the AI community rely more on tools like algorithmic impact statements, loosely modeled on the environmental impact statements that federal law requires to detail the benefits and costs of a proposed project or rule.
The industry also needs to set up codes of conduct, she argues. But first, companies must acknowledge the ways big databases—and programs written to produce, for example, a decision about whether a defendant is likely to commit another crime—can be skewed against vulnerable groups. Trade secret law, she adds, could also incorporate whistleblower protections for engineers who explain how their algorithms work, not merely disclose that they are in use.
“By encouraging researchers to delineate the impact of AI techniques on vulnerable communities,” Katyal says, “we can encourage them to be more thoughtful about how people of color, women and trans/nonbinary communities, and the disabled, among others, can be negatively affected by greater reliance on AI.”
Her projects related to information access and social justice address how AI and machine learning techniques impact classifications based on race and gender, the relationship between trademarks and AI, and (with colleague Erik Stallman ’03) protecting access to open data.
Another paper, “The Paradox of Source Code Secrecy,” examines the lack of transparency in AI governance, tracing it to shifting forms of legal protection for software. Selected as one of the best intellectual property law review articles published in the past year, it will appear in an upcoming anthology from Thomson Reuters (West).
“It’s an exciting time to be drawing these kinds of connections,” Katyal says. “My colleagues and students are simply the best, and every day brings new opportunities to link technology and civil rights in our work.”