UC Berkeley Law Explores AI’s Impact on Racial Equity

Experts Converge at UC Berkeley to Examine AI and Racial Justice

Two thought-provoking events at UC Berkeley Law recently addressed the intersection of artificial intelligence (AI) and racial justice. Drawing together renowned scholars, legal professionals, and policymakers, the discussions highlighted the growing concerns about AI’s impact on equity and civil rights. These gatherings emphasized the urgent need for inclusive and ethical AI governance frameworks.

The first event, the annual Race & Tech Symposium, centered on integrating racial justice into AI policy. Hosted by the Berkeley Center for Law & Technology and the Berkeley Technology Law Journal, the symposium featured panels moderated by UC Berkeley Law professors Daniel Farber, Andrea Roth, Colleen Chien, and Osagie Obasogie. The discussions examined how AI affects racial equity in areas such as environmental policy, criminal justice, labor rights, and healthcare.

Criminal Justice and AI: A Troubling Intersection

Juliana DeVries, a staff attorney at the Samuelson Law, Technology & Public Policy Clinic, shed light on the risks of using AI-generated evidence in criminal cases. Noting that nearly 25% of U.S. prisoners are incarcerated for violating parole or probation, DeVries emphasized the dangers of relying on technologies like electronic monitoring and facial recognition. These systems often demonstrate significant bias, particularly against Black women, and lack transparency.

“These are complex technologies used against people who have little chance to defend against these allegations,” DeVries explained. She underscored the need for greater transparency and for technical expertise in public defense offices, though she acknowledged that both remain far from current realities.

Nicole Ozer, a prominent civil liberties advocate and executive director of UC Law San Francisco’s new Center for Constitutional Democracy, discussed her two-decade career defending digital rights. Ozer presented cases against facial surveillance companies that collected billions of faceprints worldwide, detailing a settlement that restricts the use of such data by private entities.

“At the core of all this, really, is power,” Ozer stated. She emphasized the need for strategic, collaborative advocacy to ensure AI technologies serve the public and uphold democratic values.

AI in the Labor Market: Uncertainty and Opportunity

The labor-focused panel explored the complex relationship between AI and employment. Professor Diana S. Reddy of UC Berkeley Law highlighted that while many American workers fear being replaced by AI, a similar number are open to AI improving their work lives. She warned, however, that AI’s adaptability could lead to widespread job displacement and a deeper concentration of wealth among corporations.

Reddy argued that the evolving landscape could spur a resurgence in labor union activity, as workers seek a voice in how AI is implemented. She also criticized current labor laws that allow companies to classify workers as contractors, a practice that bypasses workplace protections and disproportionately harms marginalized communities.

“If AI dramatically displaces human workers, it’s not just about short-term job loss—it’s potentially a permanent replacement,” said Reddy. She also cautioned that decentralized, state-level AI regulation could create a patchwork of protections, further disadvantaging vulnerable workers.

Democracy, Diversity, and Algorithmic Bias

The event “Tech Policy for a Just Future: AI, Racial Equity, and Democracy,” moderated by Catherine E. Lhamon of the Edley Center on Law & Democracy, featured George Washington Law Professor Spencer Overton and Lawrence Norden from the Brennan Center for Justice.

Overton, drawing from his forthcoming article “Ethnonationalism by Algorithm,” argued that AI is often designed to benefit dominant racial or cultural groups while marginalizing others. He cited disparities in healthcare, criminal justice, and mortgage lending as outcomes of biased algorithms.

“Racial diversity is no longer considered a public good,” Overton warned, pointing to a global surge in nativist sentiment. He criticized federal rollbacks on anti-discrimination policies in AI and expressed concern over incentives that discourage bias mitigation in algorithm development.

Norden emphasized the importance of transparency and data privacy in AI governance. He noted that AI has the potential to support democratic goals, such as enabling fairer redistricting or improving polling access. However, he acknowledged that the growing influence of tech companies and executive overreach poses significant challenges.

“There’s a lot of potential for AI to be a great equalizer,” Norden said. “But it’s not going to just happen if companies don’t have incentives to make that a priority.”

Forging Inclusive AI Governance

The UC Berkeley Law events reflected a growing consensus: AI’s transformative power must be harnessed responsibly to avoid reinforcing systemic inequalities. Experts called for more inclusive policymaking, robust legal protections, and increased collaboration between technologists, legal advocates, and affected communities.

These discussions serve as a vital reminder that technology is not neutral. Its development and deployment must be guided by ethical principles that prioritize equity and justice for all.

