
Summaries

The past few years, ever since processing capacity caught up with
neural models, have been heady times in the world of NLP. Neural
approaches in general, and large, Transformer LMs in particular,
have rapidly overtaken the leaderboards on a wide variety of benchmarks
and once again the adage “there’s no data like more data”
seems to be true. It may seem like progress in the field, in fact, depends
on the creation of ever larger language models (and research
into how to deploy them to various ends).
In this paper, we have invited readers to take a step back and
ask: Are ever larger LMs inevitable or necessary? What costs are
associated with this research direction and what should we consider
before pursuing it? Do the field of NLP or the public that it serves
in fact need larger LMs? If so, how can we pursue this research
direction while mitigating its associated risks? If not, what do we
need instead?
By Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell in the text On the Dangers of Stochastic Parrots (2021)

The past 3 years of work in NLP have been characterized by the development and deployment of ever larger language models, especially for English. BERT, its variants, GPT-2/3, and others, most recently Switch-C, have pushed the boundaries of the possible both through architectural innovations and through sheer size. Using these pretrained models and the methodology of fine-tuning them for specific tasks, researchers have extended the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks for English. In this paper, we take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We provide recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models.
By Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell in the text On the Dangers of Stochastic Parrots (2021)
Remarks
“On the Dangers of Stochastic Parrots” is not a write-up of original research. It’s a synthesis of LLM critiques that Bender and others have made: of the biases encoded in the models; the near impossibility of studying what’s in the training data, given the fact they can contain billions of words; the costs to the climate; the problems with building technology that freezes language in time and thus locks in the problems of the past. Google initially approved the paper, a requirement for publications by staff. Then it rescinded approval and told the Google co-authors to take their names off it. Several did, but Google AI ethicist Timnit Gebru refused. Her colleague (and Bender’s former student) Margaret Mitchell changed her name on the paper to Shmargaret Shmitchell, a move intended, she said, to “index an event and a group of authors who got erased.” Gebru lost her job in December 2020, Mitchell in February 2021. Both women believe this was retaliation and brought their stories to the press. The stochastic-parrot paper went viral, at least by academic standards. The phrase stochastic parrot entered the tech lexicon.
By Elizabeth Weil in the text You Are Not a Parrot (2023)
This scholarly journal article mentions ...
People: Sandhini Agarwal, Dario Amodei, Amanda Askell, Christopher Berner, Tom B. Brown, Mark Chen, Benjamin Chess, Rewon Child, Jack Clark, Kewal Dhariwal, Prafulla Dhariwal, Aidan N. Gomez, Scott Gray, Tom Henighan, Ariel Herbert-Voss, Christopher Hesse, Geoffrey Hinton, Llion Jones, Lukasz Kaiser, Jared Kaplan, Gretchen Krueger, Mateusz Litwin, Benjamin Mann, Sam McCandlish, Arvind Neelakantan, Safiya Umoja Noble, Niki Parmar, Illia Polosukhin, Alec Radford, Aditya Ramesh, Nick Ryder, Girish Sastry, Claude Shannon, Noam Shazeer, Pranav Shyam, Eric Sigler, Melanie Subbiah, Ilya Sutskever, Jakob Uszkoreit, Ashish Vaswani, Warren Weaver, Clemens Winter, Jeffrey Wu, Daniel M. Ziegler
Statements: Machine learning requires data; machine learning can reinforce and perpetuate existing biases and injustices; text generators make it easier to generate bullshit; text generators massively facilitate the generation of fake news
This scholarly journal article presumably does not mention ...
Concepts not mentioned: Chat-GPT, facebook, Generative Pretrained Transformer 4 (GPT-4), negative feedback, positive feedback / vicious circle
Tag cloud
Citation graph
Timeline
17 mentions
- Digital Warriors (Roberta Fischli) (2022)
- 4. Rassistische Software - sie zieht Google dafür zur Verantwortung (2022)
- What do NLP researchers believe? (Julian Michael, Ari Holtzman, Alicia Parrish, Aaron Mueller, Alex Wang, Angelica Chen, Divyam Madaan, Nikita Nangia, Richard Yuanzhe Pang, Jason Phang, Samuel R. Bowman) (2022)
- AlphaCode and «data-driven» programming - Is ignoring everything that is known about code the best way to write programs? (J. Zico Kolter) (2022)
- Against automated plagiarism (Iris van Rooij) (2022)
- Do not feed the Google - Republik-Serie (2023)
- 4. Wenn ethische Werte nur ein Feigenblatt sind (Daniel Ryser, Ramona Sprenger) (2023)
- Übersicht zu ChatGPT im Kontext Hochschullehre (Gunda Mohr, Gabi Reinmann, Nadia Blüthmann, Eileen Lübcke, Moritz Kreinsen) (2023)
- Learning, Media and Technology, Volume 48, Issue 1 (2023)
- Wozu sind wir hier? - Eine wertebasierte Reflexion und Diskussion zu ChatGPT in der Hochschullehre (Gabi Reinmann) (2023)
- Large language models will change programming... a little (Amy J. Ko) (2023)
- You Are Not a Parrot (Elizabeth Weil) (2023)
- Didaktische und rechtliche Perspektiven auf KI-gestütztes Schreiben in der Hochschulbildung (Peter Salden, Jonas Leschke) (2023)
- Pause Giant AI Experiments - An Open Letter (Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, Yuval Noah Harari) (2023)
- Modern language models refute Chomsky’s approach to language (Steven T. Piantadosi) (2023)
- ChatGPT – wenn die künstliche Intelligenz schreibt wie ein Mensch - Und was es dabei zu beachten gilt (TA SWISS Zentrum für Technikfolgen-Abschätzung, Laetitia Ramelet) (2023)
- Schreiben nach KI – artifizielle und postartifizielle Texte (Hannes Bajohr) (2023)
- Die Angst vor KI ist übertrieben – und hier ist der Grund dafür (Bappa Sinha) (2023)
- ChatGPT und andere Computermodelle zur Sprachverarbeitung - Grundlagen, Anwendungspotenziale und mögliche Auswirkungen (Steffen Albrecht) (2023)
Find elsewhere
Full text of this document
Search elsewhere
Beat and this scholarly journal article
Beat added this scholarly journal article to the Biblionetz only within the last six months. Beat does not own a physical copy, but does own a digital one. A digital version is available on the internet (see above).