Lesson Summary:
Who writes an algorithm shapes who wins and who loses. In this lesson we examine how the absence, or the embrace, of diversity in AI design ripples outward, touching everything from hiring tools to policing software. Learners will analyze real-world case studies, hear marginalized voices in the AI-ethics movement, and leave with a critical lens for evaluating who benefits and who bears the risks.
Why “Who Builds” Matters
Algorithms are trained on data that inevitably reflects the biases present in human society and decision-making. When machine learning models learn from historical data, they absorb and amplify existing patterns of discrimination and prejudice. For example, if hiring data shows that companies historically favored certain demographics, an algorithm trained on this data will perpetuate those same biases in its recommendations. When only a narrow slice of humanity writes the code, the software sees the world through a keyhole.
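To make that mechanism concrete, here is a minimal sketch using scikit-learn and synthetic data. It trains a simple classifier on fabricated "historical" hiring records in which two equally skilled groups were hired at different rates; every group label, threshold, and number below is an illustrative assumption, not drawn from any real dataset.

```python
# A minimal sketch of bias absorption from historical data (synthetic example).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups, A and B, with identical underlying skill distributions.
group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)     # same skill distribution for both groups

# Historical decisions favored group A: group B candidates needed a
# noticeably higher skill score to be hired (an assumed bias of 0.8).
hired = (skill > np.where(group == 1, 0.8, 0.0)).astype(int)

# Train a model on those historical labels, with group as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Score two equally skilled candidates who differ only in group membership.
print("P(hire | group A):", model.predict_proba([[1.0, 0]])[0, 1])
print("P(hire | group B):", model.predict_proba([[1.0, 1]])[0, 1])
# The model reproduces the historical gap even though skill is identical:
# it has learned the bias, not the qualification.
```

Even if the group feature were dropped, correlated proxy features (zip code, school name, word choice in a resume) can let a model reconstruct the same pattern, which is why removing a sensitive column alone rarely fixes the problem.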
Hidden Outcomes
Facial recognition systems trained mostly on lighter-skinned faces misidentify darker-skinned people.
Language models trained on internet text repeat harmful stereotypes.
Hiring tools trained on biased data disadvantage underrepresented groups.
Ultimately, the people who build AI systems make countless decisions throughout the development process—from what data to collect and how to label it, to what metrics define success and what trade-offs are acceptable. Each of these decisions embeds values and assumptions into the technology. When development teams are diverse and inclusive, they bring different perspectives to these decisions, resulting in systems that are more likely to be fair, effective, and beneficial across different populations. This isn't just about avoiding harm—it's about building technology that truly serves the full spectrum of humanity rather than perpetuating existing inequalities.
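One of those decisions, which metric defines "success", can be made concrete with a short sketch. The evaluation data below is fabricated purely for illustration: it shows how a single aggregate accuracy number can mask a large per-group gap, the kind of disparity that disaggregated evaluation (as in the Gender Shades study discussed below) is designed to surface.

```python
# A sketch of how metric choice embeds values (fabricated data for illustration).
import numpy as np

# 100 examples: 80 from a majority group (0), 20 from a minority group (1).
y_true = np.tile([1, 0, 1, 1, 0, 1, 0, 0, 1, 0], 10)
group = np.array([0] * 80 + [1] * 20)

# A hypothetical model: perfect on the majority group, coin-flip-level on
# the minority group (half of its minority-group predictions flipped).
y_pred = y_true.copy()
flip = (group == 1) & (np.arange(100) % 2 == 0)
y_pred[flip] = 1 - y_pred[flip]

print(f"overall accuracy: {(y_pred == y_true).mean():.2f}")  # 0.90, looks fine
for g in (0, 1):
    acc = (y_pred[group == g] == y_true[group == g]).mean()
    print(f"group {g} accuracy: {acc:.2f}")  # 1.00 vs 0.50, a stark gap
```

A team that reports only the top-line number would ship this model; a team that disaggregates by group would not. The metric is a value judgment.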
Milestones in AI-Ethics Activism
2016 – Black in AI was established by Timnit Gebru and Rediet Abebe to counter the stark underrepresentation of Black professionals in AI and the lack of attention to algorithmic bias affecting Black communities. The organization created a global community for Black researchers to collaborate, share findings, and advocate for diversity and equity within AI.
2018 – The “Gender Shades” study by Joy Buolamwini and Timnit Gebru systematically evaluated commercial facial recognition systems. The research found those systems were markedly less accurate when classifying the gender of darker-skinned and female faces, exposing intersectional bias in widely used AI products.
2019 – The European Union published its Ethics Guidelines for Trustworthy AI in April, following an extensive consultation process that began in December 2018. This represented one of the first major governmental efforts to establish a comprehensive ethical framework for AI development and deployment.
2020 – Timnit Gebru, co-leader of Google’s Ethical AI research team and a pioneering researcher on racial bias in AI, was abruptly ousted following a dispute over a research paper she co-authored critiquing biases in large language models. The event triggered outrage and international debate over ethics in tech, researcher independence, and whistleblower protection.
2023 – The draft EU AI Act was finalized, becoming a “global template” for how nations could regulate AI in ways that explicitly foreground rights, ethics, and democratic control. Driven by concerns surfaced by events and activism such as those above, it set concrete legal standards around fairness, discrimination, and oversight in AI development and deployment.
Profiles in Resistance
Timnit Gebru
Founder of the Distributed AI Research Institute (DAIR)
“At the end of the day, this needs to be about institutional and structural change. If we had the opportunity to pursue this work from scratch, how would we want to build these institutions?”
Karen Hao
New York Times bestselling author of Empire of AI
“The empires of AI are not engaged in the same overt violence and brutality that marked this history. But they, too, seize and extract precious resources to feed their vision of artificial intelligence: the work of artists and writers; the data of countless individuals posting about their experiences and observations online; the land, energy, and water required to house and run massive data centers and supercomputers.”
Joy Buolamwini
Founder of the Algorithmic Justice League
“As tempting as it may be, we cannot use AI to sidestep the hard work of organizing society so that where you are born, the resources of your community, and the labels placed upon you are not the primary determinants of your destiny. We cannot use AI to sidestep conversations about patriarchy, white supremacy, ableism, or who holds power and who doesn’t.”
Further Reading
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 77–91.
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
Raji, I. D., et al. (2020). Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAT* ’20).