Publication bias (Publikationsbias)
Synonyms
Publikationsbias, publication bias
Definitions
Among researchers, mostly positive results get cited. As a consequence, the empirical basis of an entire field of research can be very thin, while the theory appears well confirmed simply because of the number of citations.
From James Evans in the text «Massenmeinungen gefährden die Wissenschaft» (2014)
A publication bias arises when editors and reviewers exhibit a preference for publishing statistically significant results in contrast with methodologically sound studies reporting nonsignificant results. To test whether such a bias exists, Atkinson et al. (1982) submitted bogus manuscripts to 101 consulting editors of APA journals. The submitted manuscripts were identical in every respect except that some results were statistically significant and others were nonsignificant. Editors received only one version of the manuscript and were asked to rate the manuscripts in terms of their suitability for publication. Atkinson et al. found that manuscripts reporting statistically nonsignificant findings were three times more likely to be recommended for rejection than manuscripts reporting statistically significant results. A similar conclusion was reached by Coursol and Wagner (1986) in their survey of APA members. These authors found that 80% of submitted manuscripts reporting positive outcome studies were accepted for publication in contrast with a 50% acceptance rate for neutral or negative outcome studies (see Part B of Table 6.1).
From Paul D. Ellis in the book The Essential Guide to Effect Sizes (2010), on page 119
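The acceptance rates quoted above are enough to skew a literature on their own. A minimal simulation sketch (not taken from any of the cited sources): it reuses the 80% / 50% acceptance rates reported by Coursol and Wagner as quoted above, while the 50/50 split of submitted outcomes and all names are illustrative assumptions.

```python
# Minimal sketch: how differential acceptance rates alone skew the published record.
# The 0.80 / 0.50 acceptance probabilities are the Coursol & Wagner figures quoted above;
# the 50/50 split of submitted outcomes is an illustrative assumption.
import random

random.seed(1)

N_SUBMITTED = 10_000
P_ACCEPT_POSITIVE = 0.80  # acceptance rate for positive-outcome studies (quoted above)
P_ACCEPT_NEUTRAL = 0.50   # acceptance rate for neutral/negative-outcome studies

published_positive = published_other = 0
for _ in range(N_SUBMITTED):
    positive = random.random() < 0.5  # assume half of submissions report positive outcomes
    p_accept = P_ACCEPT_POSITIVE if positive else P_ACCEPT_NEUTRAL
    if random.random() < p_accept:
        if positive:
            published_positive += 1
        else:
            published_other += 1

share = published_positive / (published_positive + published_other)
print("Positive studies among submissions: 50%")
print(f"Positive studies among publications: {share:.0%}")  # ~62%, purely from the acceptance gap
```

Under these assumptions roughly 62% of the published record reports positive outcomes, even though only 50% of the submitted studies did.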
Remarks
Because the first articles were cited thousands of times, they have become immune to any form of questioning. Even global access to this knowledge does not change that.
From James Evans in the text «Massenmeinungen gefährden die Wissenschaft» (2014)
In their review, Lipsey and Wilson (1993) found that published studies reported effect sizes that were on average 0.14 standard deviations larger than unpublished studies. Knowing the difference between published and unpublished effect sizes, reviewers can make informed judgments about the threat of publication bias and adjust their conclusions accordingly.
From Paul D. Ellis in the book The Essential Guide to Effect Sizes (2010), in the text «Minimizing bias in meta-analysis», on page 120
Remember Goodhart’s law? “When a measure becomes a target, it ceases to be a good measure.” In a sense this is what has happened with p-values. Because a p-value lower than 0.05 has become essential for publication, p-values no longer serve as a good measure of statistical support. If scientific papers were published irrespective of p-values, these values would remain useful measures of the degree of statistical support for rejecting a null hypothesis. But since journals have a strong preference for papers with p-values below 0.05, p-values no longer serve their original purpose.
From Carl T. Bergstrom and Jevin D. West in the book Calling Bullshit (2020), in the text «The Susceptibility of Science»
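The filtering effect described in the Bergstrom and West quote above can be made concrete with a minimal simulation sketch (not from any of the cited sources; the true effect of 0.2 standard deviations, 25 participants per group, and the number of studies are illustrative assumptions): many small studies of the same modest effect are run, and only those reaching p < 0.05 are treated as publishable.

```python
# Minimal sketch of the mechanism above: once p < 0.05 acts as a publication filter,
# published effect sizes are no longer representative of the studies actually run.
# True effect size, group size, and study count are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_D = 0.2       # modest true standardized effect
N_PER_GROUP = 25   # small samples -> low power
N_STUDIES = 5_000

observed_d, p_values = [], []
for _ in range(N_STUDIES):
    treat = rng.normal(TRUE_D, 1.0, N_PER_GROUP)
    control = rng.normal(0.0, 1.0, N_PER_GROUP)
    p = stats.ttest_ind(treat, control).pvalue
    pooled_sd = np.sqrt((treat.var(ddof=1) + control.var(ddof=1)) / 2)
    observed_d.append((treat.mean() - control.mean()) / pooled_sd)
    p_values.append(p)

observed_d, p_values = np.array(observed_d), np.array(p_values)
significant = p_values < 0.05
print(f"Mean effect size, all studies:        {observed_d.mean():.2f}")               # close to the true 0.2
print(f"Mean effect size, 'publishable' only: {observed_d[significant].mean():.2f}")  # noticeably larger
```

In this setup, conditioning on p < 0.05 roughly triples the apparent effect size, which is the same mechanism behind the published-versus-unpublished gap reported by Lipsey and Wilson above.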
The existence of a publication bias is a logical consequence of null hypothesis significance testing. Under this model the ability to draw conclusions is essentially determined by the results of statistical tests. As we saw in Chapter 3, the shortcoming of this approach is that p values say as much about the size of a sample as they do about the size of an effect. This means that important results are sometimes missed because samples were too small. A nonsignificant result is an inconclusive result. A nonsignificant p tells us that there is either no effect or there is an effect but we missed it because of insufficient power. Given this uncertainty it is not unreasonable for editors and reviewers to exhibit a preference for statistically significant conclusions. Neither should we be surprised that researchers are reluctant to write up and report the results of those tests that do not bear fruit. Not only will they find it difficult to draw a conclusion (leading to the awful temptation to do a post hoc power analysis), but the odds of getting their result published are stacked against them. Combine these two perfectly rational tendencies - selective reporting and selective publication - and you end up with a substantial availability bias.
From Paul D. Ellis in the book The Essential Guide to Effect Sizes (2010), in the text «Minimizing bias in meta-analysis», on page 119
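Ellis's point that p values reflect sample size as much as effect size can be shown with a short computation. A minimal sketch (not from the book; the observed effect of 0.3 and the group sizes are illustrative assumptions) that holds the observed standardized difference fixed and varies only the sample size:

```python
# Minimal sketch: the same observed standardized difference is "nonsignificant" in a
# small study and "significant" in a large one. Observed effect and group sizes are
# illustrative assumptions.
import numpy as np
from scipy import stats

OBSERVED_D = 0.3  # same observed standardized mean difference in every study

for n_per_group in (20, 50, 200):
    # two-sample t statistic for a standardized difference of OBSERVED_D with equal group sizes
    t = OBSERVED_D * np.sqrt(n_per_group / 2)
    df = 2 * n_per_group - 2
    p = 2 * stats.t.sf(t, df)  # two-sided p-value
    print(f"n per group = {n_per_group:3d}:  t = {t:.2f},  p = {p:.3f}")
```

The same observed difference is far from significance with 20 per group and comfortably significant with 200 per group, which is why a nonsignificant p on its own cannot distinguish "no effect" from "too little power".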
Related objects
Related terms (co-word occurrence) | Schubladenproblem / file drawer problem (0.1), GIGO-Argument / garbage-in-garbage-out argument (0.07), Kritik an Metaanalysen / criticism of meta-analyses (0.06), apple-and-oranges problem (0.06), Replikationskrise / replication crisis (0.04), type I error (0.04) |
Frequently co-cited persons
John P. A. Ioannidis
12 mentions
- Allgemeine Didaktik (Karl Frey, Angela Frey-Eiling)
- Forschungsmethoden und Evaluation - für Human- und Sozialwissenschaftler (Jürgen Bortz, Nicola Döring) (2001)
- Visible Learning - A Synthesis of Over 800 Meta-Analyses Relating to Achievement (John Hattie) (2009)
- 2. The nature of evidence - a synthesis of meta-analysis
- Ausgewählte Methoden der Didaktik (Karl Frey, Angela Frey-Eiling) (2009)
- The Essential Guide to Effect Sizes - Statistical Power, Meta-Analysis, and the Interpretation of Research Results (Paul D. Ellis) (2010)
- The Chrysalis Effect - How Ugly Initial Results Metamorphosize Into Beautiful Articles (2014)
- «Massenmeinungen gefährden die Wissenschaft» (James Evans, Max Neufeind, Haluka Maier-Borst) (2014)
- Mangelhafte Studien mit Patienten (Hanno Böck) (2019)
- Calling Bullshit - The Art of Skepticism in a Data-Driven World (Carl T. Bergstrom, Jevin D. West) (2020)
- Bernoulli's Fallacy - Statistical Illogic and the Crisis of Modern Science (Aubrey Clayton) (2021)
- Launching Registered Report Replications in Computer Science Education Research (Neil Brown, Eva Marinus, Aleata Hubbard Cheuoua) (2022)