GMLS-Detektor
BiblioMap
Synonyms
GMLS-Detektor, AI Text Detection Tools, GMLS-Erkennung, Detection Tools for AI-Generated Text
Remarks
An IT professional I know grins: "Let them buy that stuff — it's from yesterday anyway."
From Susanne Bach, Doris Weßels in the text Das Ende der Hausarbeit (2022)
The demise of the classic ghostwriting scene and of the providers of plagiarism-detection software has probably already been set in motion.
From Susanne Bach, Doris Weßels in the text Das Ende der Hausarbeit (2022)
Although LLMs cannot yet perfectly imitate an elaborate writing style, it is therefore to be expected that natural and artificial texts will become indistinguishable and that hybrid text will become the norm.
Do AI detectors work? In short, no, not in our experience. Our research into detectors didn't show them to be reliable enough given that educators could be making judgments about students with potentially lasting consequences.
From OpenAI in the text How can educators respond to students presenting AI-generated content as their own? (2023)
GPT-4 certainly could not check itself. Nor can a machine decide whether a text was formulated by an AI or by a human. OpenAI has apparently recognized this as well: the company shut down its detector for AI texts a few weeks ago. It simply did not deliver reliable results.
From Hartmut Gieselmann in the journal c't 21/2023 in the text Die 80-Prozent-Maschinen (2023), page 30
There is a wide range of software available which has been designed to classify whether text is machine or human generated, with providers claiming high levels of accuracy in being able to identify whether text is written by a human or by a GenAI tool (GPTZero, n.d.; Turnitin, 2023). While some of these tools are free and others require either registration or payment, research by Walters (2023) has identified that the accuracy of paid-for tools is only slightly higher than that of free versions. However, claims of accuracy are contradicted by studies which demonstrate the varied levels of the detectors' ability to distinguish accurately between AI and human-generated content (Chaka, 2023a; Gao et al., 2022; Krishna et al., 2023; Orenstrakh et al., 2023; Perkins, Roe, et al., 2023; Walters, 2023; Weber-Wulff et al., 2023).
From Mike Perkins, Jasper Roe, Binh H. Vu, Darius Postma, Don Hickerson, James McGaughran, Huy Q. Khuat in the text GenAI Detection Tools: Adversarial Techniques and Implications for Inclusivity in Higher Education (2024)
Overall, our results demonstrate the challenges of current AI text detection tools being able to accurately determine whether a given piece of text was created by a human or a GenAI tool. This ability is further reduced when adversarial techniques are used to obscure the nature of a sample. If the goal of any given HEI was to use AI text detectors solely to determine whether a student has breached academic integrity guidelines, we would caution that the accuracy levels we have identified, coupled with the risks inherent in false accusations, means that we cannot recommend them for this purpose. This is not because of the demonstrated abilities of any one tool tested, as we recognise that developers are continuously updating these tools, and the detection of AI-generated content when subject to adversarial techniques is likely to improve. However, simultaneously, advances are being made in the development of more capable FMs that can produce more human-like content, resulting in a constant arms race between FMs and AI text detectors, with student inclusivity paying the price.
From Mike Perkins, Jasper Roe, Binh H. Vu, Darius Postma, Don Hickerson, James McGaughran, Huy Q. Khuat in the text GenAI Detection Tools: Adversarial Techniques and Implications for Inclusivity in Higher Education (2024)
Do AI detectors work?
From OpenAI in the text How can educators respond to students presenting AI-generated content as their own? (2023)
- In short, no, not in our experience. Our research into detectors didn't show them to be reliable enough given that educators could be making judgments about students with potentially lasting consequences. While other developers have released detection tools, we cannot comment on their utility.
- Additionally, ChatGPT has no “knowledge” of what content could be AI-generated. It will sometimes make up responses to questions like “did you write this [essay]?” or “could this have been written by AI?” These responses are random and have no basis in fact.
- To elaborate on our research into the shortcomings of detectors, one of our key findings was that these tools sometimes suggest that human-written content was generated by AI.
- When we at OpenAI tried to train an AI-generated content detector, we found that it labeled human-written text like Shakespeare and the Declaration of Independence as AI-generated.
- There were also indications that it could disproportionately impact students who had learned or were learning English as a second language and students whose writing was particularly formulaic or concise.
- Even if these tools could accurately identify AI-generated content (which they cannot yet), students can make small edits to evade detection.
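The failure modes listed above follow from how many detectors work under the hood: they score statistical regularity, such as perplexity under a language model, which is why formulaic human prose (including writing by second-language learners) triggers false positives and why small edits can push a text across the decision threshold. A purely illustrative sketch of that idea — a toy character-bigram model with an arbitrary threshold, not the actual algorithm of GPTZero, Turnitin, or OpenAI's withdrawn classifier:

```python
# Toy perplexity-based "AI detector" sketch. All names, the bigram model,
# and the threshold are hypothetical illustrations, not any vendor's method.
import math
from collections import Counter


def train_bigram_model(corpus: str):
    """Count character bigrams and unigrams in a reference corpus."""
    return Counter(zip(corpus, corpus[1:])), Counter(corpus)


def perplexity(text: str, model, vocab_size: int = 128) -> float:
    """Per-character perplexity of `text` under the bigram model."""
    bigrams, unigrams = model
    log_prob = 0.0
    for pair in zip(text, text[1:]):
        # Add-one (Laplace) smoothing so unseen bigrams keep nonzero mass.
        p = (bigrams[pair] + 1) / (unigrams[pair[0]] + vocab_size)
        log_prob += math.log(p)
    n = max(len(text) - 1, 1)
    return math.exp(-log_prob / n)


def classify(text: str, model, threshold: float = 40.0) -> str:
    # Low perplexity = "too predictable" = flagged as possibly AI-generated.
    # The arbitrary threshold is exactly the weakness: formulaic human prose
    # scores low too, and light paraphrasing raises the score past it.
    return "flagged-as-AI" if perplexity(text, model) < threshold else "likely-human"
```

Because the decision reduces to a single score against a cutoff, both failure modes in the quotes above fall out directly: concise, formulaic human writing lands below the threshold, and a student making small edits to AI output raises its perplexity just enough to evade the flag.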
In our baseline testing protocol of both non-manipulated AI-generated samples tested alongside the human-written control samples, we see an initially lower than expected average accuracy rating for the detection of AI-generated content, coupled with a substantial rate of false accusations in the human-written control samples. When the AI-generated samples were subjected to manipulation, significant vulnerabilities in accurately detecting text were observed. If the goal of implementing AI detection tools as part of an overall academic integrity strategy is to support academic staff in identifying where machine-generated content has been used and has not been declared, these inaccuracies may lead to a false sense of security and a broader reduction in assessment security. As assessment security is a key component in ensuring inclusive, equitable, and fair opportunities for learners, this is problematic. The varying degrees of reduction in accuracy following the application of adversarial techniques also point to the broader issue of inconsistency and unpredictability in the current AI detection capabilities. The effectiveness of these techniques varies dramatically across detectors, suggesting that the internal algorithms and heuristics of these detectors are tuned differently and react distinctively to similar inputs. Therefore, the results even within an institution may differ depending on the tool being employed and how it is being used.
From Mike Perkins, Jasper Roe, Binh H. Vu, Darius Postma, Don Hickerson, James McGaughran, Huy Q. Khuat in the text GenAI Detection Tools: Adversarial Techniques and Implications for Inclusivity in Higher Education (2024)
Related objects
Related terms (co-word occurrence) | GPT Zero (0.08), Textgeneratoren-Verbot (0.03) |
Statistical concept network
Citation graph
Citation graph (beta test with vis.js)
Timeline
21 mentions
- Das Ende der Hausarbeit (Susanne Bach, Doris Weßels) (2022)
- ChatGPT & Schule - Einschätzungen der Professur „Digitalisierung und Bildung“ der Pädagogischen Hochschule Schwyz (Beat Döbeli Honegger) (2023)
- Hinweise zu textgenerierenden KI-Systemen im Kontext von Lehre und Lernen (Beatrix Busse, Ingo Kleiber, Franziska C. Eickhoff, Kathrin Andree) (2023)
- Der universelle Texter - Warum ChatGPT so fasziniert (Themen-Special von c't 05/23) (2023)
- Wer soll das alles lesen? - KI-Textgeneratoren überschwemmen das Internet (Hartmut Gieselmann)
- Wie ChatGPT die Schule verändern wird (Jochen Zenthöfer) (2023)
- info7 1/2023 - Das Magazin für Medien, Archive und Information (2023)
- Yes, We Are in a (ChatGPT) Crisis (Inara Scott) (2023)
- Jede Lehrkraft muss ChatGPT kennen (Lisa Becker) (2023)
- ChatGPT und andere Computermodelle zur Sprachverarbeitung - Grundlagen, Anwendungspotenziale und mögliche Auswirkungen (Steffen Albrecht) (2023)
- Hausaufgaben machen mit ChatGPT? (Heike Schmoll) (2023)
- ChatGPT & Co. - Mit KI-Tools effektiv arbeiten (2023)
- Testing of Detection Tools for AI-Generated Text (Debora Weber-Wulff, Alla Anohina-Naumeca, Sonja Bjelobaba, Tomáš Foltýnek, Jean Guerrero-Dib, Olumide Popoola, Petr Šigut, Lorna Waddington) (2023)
- Forschung & Lehre 7/23 (2023)
- Vom Akkordarbeiter zum Gutachter (Dirk Siepmann) (2023)
- Robuste Erkennung von KI-generierten Texten in deutscher Sprache (Tom Tlok) (2023)
- c't 21/2023 (2023)
- Die 80-Prozent-Maschinen - Warum KI-Sprachmodelle weiterhin Fehler machen und was das für den produktiven Einsatz bedeutet (Hartmut Gieselmann) (2023)
- Künstliche Intelligenz, Large Language Models, ChatGPT und die Arbeitswelt der Zukunft (Michael Seemann) (2023)
- How can educators respond to students presenting AI-generated content as their own? (OpenAI) (2023)
- KI-Tools für den Unterricht (Inez De Florio-Hansen) (2023)
- Künstliche Intelligenz - Mehr als nur ein Hype? - Bildungsbeilage der NZZ vom 22.11.2023 (2023)
- Darf der Computer die Seminararbeit schreiben? (Reto U. Schneider)
- ChatGPT: Student aus Wedel entlarvt künstliche Intelligenz (Johannes Tran) (2024)
- GenAI Detection Tools Adversarial Techniques and Implications for Inclusivity in Higher Education (Mike Perkins, Jasper Roe, Binh H. Vu, Darius Postma, Don Hickerson, James McGaughran, Huy Q. Khuat) (2024)