Reframing "AI for Good"
Dominant narratives claim AI can make the world better, but in reality, “AI for Good” is an oxymoron: models are built through surveillance, exploitation, and hubris. Alix talks with Abeba Birhane about reframing "AI for Good" and how nation-states could address systemic problems through aid and wealth redistribution that account for colonial histories.
ABEBA BIRHANE – AI FOR GOOD
Dr Abeba Birhane founded and leads the TCD AI Accountability Lab (AIAL) and is an assistant professor of AI at the School of Computer Science and Statistics at Trinity College Dublin. Her research focuses on AI accountability. She served on the United Nations Secretary-General's AI Advisory Body and currently serves on the AI Advisory Council in Ireland.
In this conversation, Abeba Birhane critiques the "AI for Good" framing, embraced by industry and international organizations, for its technocratic treatment of complex political and social issues. She argues that the framing puts a shiny veneer on bad data, extractive and exploitative practices, and weak evidence. Instead of pouring resources into big tech-driven models, she says we should support smaller, community-based efforts that actually deliver: work that serves communities without the same grand claims. She proposes that governments move away from uncritical adoption of AI via "AI for Social Good" initiatives and instead demand sound, empirical evidence for such claims.

"If you look at major companies and corporations like Microsoft and Google they are the very entities that are powering genocide, powering war. You start to realize that a lot of the claims around AI for good start to crumble."


