Shiran Dudy
Hi, I’m Shiran. I research the risks of AI systems through auditing, and I also build AI tools to empower users.
I’m a Research Scientist at Northeastern University’s Institute for Experiential AI (EAI), where I specialize in Responsible AI: I audit AI systems as socio-technical systems, with their societal impact in mind, and propose guardrails to protect end users. I also build and experiment with AI tools that strengthen and empower users to enhance our democracy.
What I do
Guardrailing AI: I audit AI systems to uncover what they get wrong—and who gets left out. That means examining how generative AI represents (or misrepresents) non-dominant cultures, evaluating how AI-powered search shapes access to real-world opportunities, and measuring the community impact of AI-driven services like Uber in collaboration with Eticas.
Strengthening Democracy: I also lead the Tech Policy Tracker, making federal and state AI policy accessible to everyday people—because these decisions affect all of us, and everyone deserves a seat at the table. In addition, I have employed participatory approaches using tools such as Pol.is to promote more equitable governance in communities.
As the technical lead for Responsible AI consulting at EAI, I help companies build AI that’s safe, robust, and equitable.
Email: shirdu2 at gmail dot com
news
| Feb 6, 2026 | It was a pleasure to speak at the Future of Science Seminar, where I led a conversation on “Navigating Risk - From Awareness to Accountability.” We examined the multi-tiered harms of AI systems across individuals, communities, and society at large, along with inspiring accountability initiatives from Consumer Reports, Eticas, Radical Exchange, and DataMined, sparking important discussions on new mechanisms to hold AI companies accountable 🛡️⚖️ |
|---|---|
| Oct 7, 2025 | I had a really great time at the Notre Dame RISE AI summit. I gave a talk about how many of our epistemic systems promote a single-view approach, and why promoting plurality in search systems (as well as LLMs) may offer an antidote to this concern. The world is complex, and current personalization techniques are narrowing our understanding of it 🔍👁️👁️ |
| Aug 21, 2025 | It was a pleasure to take part in the Responsible AI workshop series at EAI, where I led a discussion on how well commercial LLMs connect us with real-world opportunities. I presented a recent study we conducted showing that LLM responses across state-of-the-art models skew toward wealthier and more educated populations, making them less relevant for other life experiences 📊🌍 |
| Jul 11, 2025 | I had a great time at FAccT this year! I joined a CRAFT panel on community-led audits and shared a study (with Eticas) auditing Uber’s services with the Roma community. I also co-hosted a participatory design session using Pol.is to explore how we imagine the future of the FAccT community together. |
| Jun 1, 2025 | I was invited to join the Advisory Council of the Massachusetts Science and Technology Policy Fellowship (MASTPF), whose goal is to place Ph.D. scientists in the State House to bridge science and public service 🏛️📜⚖️🎓 |
latest posts
| Jan 4, 2026 | The Broader Context of Algorithmic Fairness: The Two-Tiered Society |
|---|---|
| Apr 24, 2025 | Is this AI made for me? (part 2) |
| Apr 24, 2025 | Is this AI made for me? (part 1) |