Summaries
To most of us, it seems like recent developments in artificial intelligence emerged out of nowhere to pose unprecedented threats to humankind. But to Dr. Joy Buolamwini, who has been at the forefront of AI research, this moment has been a long time in the making.
After tinkering with robotics as a high school student in Memphis and then developing mobile apps in Zambia as a Fulbright fellow, Buolamwini followed her lifelong passion for computer science, engineering, and art to MIT in 2015. As a graduate student at the “Future Factory,” she did groundbreaking research that exposed widespread racial and gender bias in AI services from tech giants across the world.
Unmasking AI goes beyond the headlines about existential risks produced by Big Tech. It is the remarkable story of how Buolamwini uncovered what she calls “the coded gaze”—the evidence of encoded discrimination and exclusion in tech products—and how she galvanized the movement to prevent AI harms by founding the Algorithmic Justice League. Applying an intersectional lens to both the tech industry and the research sector, she shows how racism, sexism, colorism, and ableism can overlap and render broad swaths of humanity “excoded” and therefore vulnerable in a world rapidly adopting AI tools. Computers, she reminds us, are reflections of both the aspirations and the limitations of the people who create them.
Encouraging experts and non-experts alike to join this fight, Buolamwini writes, “The rising frontier for civil rights will require algorithmic justice. AI should be for the people and by the people, not just the privileged few.”
From the blurb of Unmasking AI (2023).
This book mentions ...
People: Meredith Broussard
Statements: Machine learning can reinforce and perpetuate existing prejudices and injustices
Terms: Alexa, Amazon, ChatGPT, digitalization, gender, face recognition, IBM, Large-scale Artificial Intelligence Open Network (LAION), machine learning, Watson
This book presumably does not mention ...
Terms not mentioned: Generative machine-learning systems (GMLS), GMLS & education, artificial intelligence (KI / AI), Siri, voice assistants
1 mention
- Alles überall auf einmal - Wie Künstliche Intelligenz unsere Welt verändert und was wir dabei gewinnen können (Miriam Meckel, Léa Steinacker) (2024)
- 9. Das ethische Spiegelkabinett - Wenn KI Werte nachahmt
Full text of this document
Unmasking AI: complete book as full text (1956 kByte)
Unmasking AI: complete book as full text (3144 kByte)
Search elsewhere
Beat and this book
Beat added this book to the Biblionetz during his time at the Institut für Medien und Schule (IMS). He does not own a physical copy, but he does own a digital one (which, for copyright reasons, he may not simply pass on). So far, only a few objects in the Biblionetz cite this work.