GPT-2
Remarks
In 2019 came GPT-2. Its designers trained a Transformer network on text from millions of web pages shared on the Reddit internet forum. They demonstrated its prowess by showing how it could continue web articles on topics ranging from cooking to computing, translate from English to French (and the reverse), and answer difficult questions about the content of news stories. Newspapers such as USA Today and the New York Post focused on the “perfectly convincing narrative” of a fake news story it wrote about scientists discovering a herd of unicorns living in a remote valley.
Appreciating the potential for abuse, OpenAI waited six months to release the full trained network for GPT-2. During that time, the company carried out a survey in which it generated news stories from different versions of GPT-2 – with small, medium and large networks – and asked people to rate them for credibility (there was not much difference between the medium and large networks). Along with researchers at Cornell University, the company looked at bias in the generated stories (for example, GPT-2 tended to continue “The criminal was” with male words, and to continue “God is” with words relating to Christianity rather than other religions).
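The bias probe described above amounts to sampling many continuations of a fixed prompt (such as “The criminal was”) and tallying the gendered words that appear. A minimal sketch of the tallying step, assuming illustrative word lists and stand-in continuations rather than real GPT-2 samples:

```python
# Bookkeeping for a prompt-continuation bias probe: given sampled
# continuations of a prompt, count male- vs. female-associated words.
# The word lists and sample texts are illustrative assumptions only.
from collections import Counter

MALE_WORDS = {"he", "him", "his", "man", "male"}
FEMALE_WORDS = {"she", "her", "hers", "woman", "female"}

def tally_gendered_words(continuations):
    """Count gendered words across a list of sampled continuations."""
    counts = Counter()
    for text in continuations:
        for word in text.lower().split():
            word = word.strip(".,;:!?\"'")  # drop trailing punctuation
            if word in MALE_WORDS:
                counts["male"] += 1
            elif word in FEMALE_WORDS:
                counts["female"] += 1
    return counts

# Stand-in samples for the prompt "The criminal was ..."
samples = [
    "a man who had escaped from prison.",
    "described by witnesses as male, in his thirties.",
    "caught after she fled the scene.",
]
print(tally_gendered_words(samples))  # → Counter({'male': 3, 'female': 1})
```

In a real probe the samples would come from a language model's sampling loop; the counting logic stays the same.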
Related objects
Related terms (co-word occurrence) | Generative Pretrained Transformer 3 (GPT-3) (0.09) |
Frequently co-cited persons
Ilya Sutskever
Ariel Herbert-Voss
Kewal Dhariwal
Mark Chen
Christopher Hesse
Clemens Winter
Jeffrey Wu
Daniel M. Ziegler
Aditya Ramesh
Rewon Child
Mateusz Litwin
Gretchen Krueger
Scott Gray
Sandhini Agarwal
Amanda Askell
Girish Sastry
Pranav Shyam
Arvind Neelakantan
Jared Kaplan
Melanie Subbiah
Nick Ryder
Benjamin Mann
Tom Henighan
Benjamin Chess
Jack Clark
Christopher Berner
Sam McCandlish
Alec Radford
Tom B. Brown
Prafulla Dhariwal
Eric Sigler
Statistical term network
Citation graph
Citation graph (beta test with vis.js)
Timeline
31 mentions
- Addressing Global Challenges and Quality Education - 15th European Conference on Technology Enhanced Learning, EC-TEL 2020, Heidelberg, Germany, September 14-18, 2020, Proceedings (Carlos Alario-Hoyos. María Jesús Rodríguez-Triana, Maren Scheffel, Inmaculada Arnedillo-Sánchez, Sebastian Maximilian Dennerlein) (2020)
- All the News that’s Fit to Fabricate - AI-Generated Text as a Tool of Media Misinformation (Sarah Kreps, Miles McCain, Miles Brundage) (2020)
- Original oder Plagiat? - Der schnelle Weg zur wissenschaftlichen Arbeit im Zeitalter künstlicher Intelligenz (Doris Weßels, Eike Meyer) (2021)
- On the Dangers of Stochastic Parrots - Can Language Models Be Too Big? (Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell) (2021)
- Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases (Ryan Steed, Aylin Caliskan) (2021)
- Gewissenloser Autor - GPT-3 generiert Texte ganz nach Bedarf – auch Fake News (Wolfgang Stieler) (2021)
- Should you believe Wikipedia? - Online Communities and the Construction of Knowledge (Amy Bruckman) (2022)
- The Robots Are Coming - Exploring the Implications of OpenAI Codex on Introductory Programming (James Finnie-Ansley, Paul Denny, Brett A. Becker, Andrew Luxton-Reilly, James Prather) (2022)
- Aufmerksamkeit reicht - So funktionieren Sprach-KIs vom Typ „Transformer“ (Pina Merkert) (2022)
- Story Machines - How Computers Have Become Creative Writers (Mike Sharples, Rafael Pérez y Pérez) (2022)
- KI, schreib meine Thesis! - Welchen Einfluss ChatGPT auf die Bildung haben könnte (Wolfgang Stieler) (2022)
- How to spot AI-generated text (Melissa Heikkilä) (2022)
- The End of Programming (Matt Welsh) (2023)
- Wie funktioniert eigentlich ChatGPT? (Marcel Waldvogel) (2023)
- ChatGPT for Good? - On Opportunities and Challenges of Large Language Models for Education (Enkelejda Kasneci, Kathrin Sessler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, Stephan Krusche, Gitta Kutyniok, Tilman Michaeli, Claudia Nerdel, Jürgen Pfeffer, Oleksandra Poquet, Michael Sailer, Albrecht Schmidt, Tina Seidel, Matthias Stadler, Jochen Weller, Jochen Kuhn, Gjergji Kasneci) (2023)
- What Is ChatGPT Doing … and Why Does It Work? (Stephen Wolfram) (2023)
- Unlocking the Power of Generative AI Models and Systems such as GPT-4 and ChatGPT for Higher Education - A Guide for Students and Lecturers (Henner Gimpel, Kristina Hall, Stefan Decker, Torsten Eymann, Luis Lämmermann, Alexander Mädche, Maximilian Röglinger, Caroline Ruiner, Manfred Schoch, Mareike Schoop, Nils Urbach, Steffen Vandirk) (2023)
- Modern language models refute Chomsky’s approach to language (Steven T. Piantadosi) (2023)
- Die große Bonanza mit Künstlicher Intelligenz (Maximilian Sachse) (2023)
- Speak, Memory - An Archaeology of Books Known to ChatGPT/GPT-4 (Kent K. Chang, Mackenzie Cramer, Sandeep Soni, David Bamman) (2023)
- The Curse of Recursion - Training on Generated Data Makes Models Forget (Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, Ross Anderson) (2023)
- Sollen Frauen Karriere machen? (Joachim Laukenmann) (2023)
- The Coming Wave - Technology, Power, and the Twenty-first Century's Greatest Dilemma (Mustafa Suleyman, Michael Bhaskar) (2023)
- Künstliche Intelligenz - Mehr als nur ein Hype? - Bildungsbeilage der NZZ vom 22.11.2023 (2023)
- Darf der Computer die Seminararbeit schreiben? (Reto U. Schneider)
- Talking about Large Language Models (Murray Shanahan) (2024)
- Alles überall auf einmal - Wie Künstliche Intelligenz unsere Welt verändert und was wir dabei gewinnen können (Miriam Meckel, Léa Steinacker) (2024)
- 9. Das ethische Spiegelkabinett - Wenn KI Werte nachahmt
- Challenging systematic prejudices - An Investigation into Bias Against Women and Girls in Large Language Models (UNESCO United Nations Educational, Scientific and Cultural Org.) (2024)
- The Singularity is nearer (Ray Kurzweil) (2024)
- AI models collapse when trained on recursively generated data (Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Nicolas Papernot, Ross Anderson, Yarin Gal) (2024)
- Denkbar echt simuliert - KI versus Gehirn: so ähnlich, so unterschiedlich, so undurchschaubar (Andrea Trinkwalder) (2025)
Generative Pretrained Transformer 3 (GPT-3)
Biblionetz history