Co-hosted by Ranjit Singh, Jillian Powers, Gina Helfrich, and Borhane Blili-Hamelin.
Courts are a vital organ of algorithmic & AI accountability. They are the site where legal accountability becomes a reality, where existing law is turned into actionable legal protections against, and legal redress for, decisions that lead to algorithmic harms. However, the diffuse, complex nature of algorithmic harms makes it immensely challenging to turn specific instances of harm into actionable legal claims. What are the barriers to legal accountability for decisions around algorithms? How can we better empower communities to navigate the hurdles to actionable legal protections against decisions that lead to algorithmic harms? Our session aimed to raise awareness about pathways to legal accountability for algorithmic harms, and to empower participants to help their own communities take algorithms to court.
Our session drew on Accountability Case Labs’ approach to case-study-based AI accountability workshops. We began with a short presentation sharing insights from Science and Technology Studies into algorithmic accountability and the mechanisms through which adversarial courts negotiate the legal standing of algorithmic harms, such as court decisions about standing, admissible expert testimony, and precedents. We then considered a Frye motion about ShotSpotter evidence, and invited participants to examine stakeholders and identify pathways and barriers to legal accountability. From there, participants were invited to collaborate on identifying case studies that speak to the barriers their own communities face, and on identifying opportunities to support their own communities in realizing legal accountability for decisions around algorithms.
For folks interested in the role of courts in algorithmic accountability, we highly recommend this article by Ranjit and his colleagues.
Posted on June 10, 2022