
Beats Biblionetz - Concepts

Publikationsbias (publication bias)

The content of this page has not been updated for 2 years. It may no longer be current.

BiblioMap  This is an attempt to display certain relationships in the Biblionetz graphically. It could still be improved, but it is a start!


Synonyms

Publikationsbias, publication bias

Definitions

Among researchers, mostly only positive results get cited. As a result, the empirical basis of an entire research field can be very small, while the theory, given the number of citations, appears well confirmed.
By James Evans in the text «Massenmeinungen gefährden die Wissenschaft» (2014)
A publication bias arises when editors and reviewers exhibit a preference for publishing statistically significant results in contrast with methodologically sound studies reporting nonsignificant results. To test whether such a bias exists, Atkinson et al. (1982) submitted bogus manuscripts to 101 consulting editors of APA journals. The submitted manuscripts were identical in every respect except that some results were statistically significant and others were nonsignificant. Editors received only one version of the manuscript and were asked to rate the manuscripts in terms of their suitability for publication. Atkinson et al. found that manuscripts reporting statistically nonsignificant findings were three times more likely to be recommended for rejection than manuscripts reporting statistically significant results. A similar conclusion was reached by Coursol and Wagner (1986) in their survey of APA members. These authors found that 80% of submitted manuscripts reporting positive outcome studies were accepted for publication in contrast with a 50% acceptance rate for neutral or negative outcome studies (see Part B of Table 6.1).
By Paul D. Ellis in the book The Essential Guide to Effect Sizes (2010) on page 119
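The acceptance rates Ellis quotes can be condensed into a single summary number. The following sketch (my own illustration, using only the Coursol and Wagner figures given above) computes the odds ratio of acceptance for positive-outcome versus neutral- or negative-outcome studies:

```python
# Acceptance rates reported by Coursol and Wagner (1986), as quoted above.
p_positive = 0.80  # positive-outcome manuscripts accepted
p_other = 0.50     # neutral or negative-outcome manuscripts accepted

# Odds ratio: how much higher the odds of acceptance are for positive outcomes.
odds_ratio = (p_positive / (1 - p_positive)) / (p_other / (1 - p_other))
print(f"odds ratio of acceptance: {odds_ratio:.1f}")  # → 4.0
```

So a positive-outcome study has four times the odds of acceptance, even though the difference in raw acceptance rates (80% vs. 50%) may look less dramatic.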

Remarks

Because the first articles have been cited thousands of times, they have become immune to any form of questioning. Even global access to this knowledge does not change that.
By James Evans in the text «Massenmeinungen gefährden die Wissenschaft» (2014)
In their review Lipsey and Wilson (1993) found that published studies reported effect sizes that were on average 0.14 standard deviations larger than unpublished studies. Knowing the difference between published and unpublished effect sizes, reviewers can make informed judgments about the threat of publication bias and adjust their conclusions accordingly.
By Paul D. Ellis in the book The Essential Guide to Effect Sizes (2010) in the text Minimizing bias in meta-analysis on page 120
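The gap between published and unpublished effect sizes that Lipsey and Wilson observed can be reproduced in a toy simulation. The sketch below is my own illustration, not Ellis's; all parameters (a true effect of 0.2, group size of 30, a z-test with known unit variance instead of a t-test) are arbitrary simplifying assumptions. It runs many small two-group studies, "publishes" only those reaching p < .05, and compares the mean effect in each set:

```python
import math
import random
import statistics

random.seed(42)

def run_study(true_d=0.2, n=30):
    """One two-group study: returns the estimated effect (mean difference)
    and whether it reaches p < .05 under a two-sided z-test (sigma = 1 assumed)."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treatment = [random.gauss(true_d, 1) for _ in range(n)]
    d_hat = statistics.mean(treatment) - statistics.mean(control)
    se = math.sqrt(2 / n)  # standard error of a mean difference with sigma = 1
    return d_hat, abs(d_hat / se) > 1.96

studies = [run_study() for _ in range(5000)]
all_effects = [d for d, _ in studies]
published = [d for d, significant in studies if significant]

print(f"mean effect, all {len(studies)} studies: {statistics.mean(all_effects):.2f}")
print(f"mean effect, published subset only: {statistics.mean(published):.2f}")
```

With these settings the published subset overstates the true effect several-fold: because the studies are underpowered, only unusually large sample effects clear the significance threshold, which is the availability bias the surrounding quotes describe in miniature.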
Remember Goodhart’s law? “When a measure becomes a target, it ceases to be a good measure.” In a sense this is what has happened with p-values. Because a p-value lower than 0.05 has become essential for publication, p-values no longer serve as a good measure of statistical support. If scientific papers were published irrespective of p-values, these values would remain useful measures of the degree of statistical support for rejecting a null hypothesis. But since journals have a strong preference for papers with p-values below 0.05, p-values no longer serve their original purpose.
By Carl T. Bergstrom and Jevin D. West in the book Calling Bullshit (2020) in the text The Susceptibility of Science
The existence of a publication bias is a logical consequence of null hypothesis significance testing. Under this model the ability to draw conclusions is essentially determined by the results of statistical tests. As we saw in Chapter 3, the shortcoming of this approach is that p values say as much about the size of a sample as they do about the size of an effect. This means that important results are sometimes missed because samples were too small. A nonsignificant result is an inconclusive result. A nonsignificant p tells us that there is either no effect or there is an effect but we missed it because of insufficient power. Given this uncertainty it is not unreasonable for editors and reviewers to exhibit a preference for statistically significant conclusions. Neither should we be surprised that researchers are reluctant to write up and report the results of those tests that do not bear fruit. Not only will they find it difficult to draw a conclusion (leading to the awful temptation to do a post hoc power analysis), but the odds of getting their result published are stacked against them. Combine these two perfectly rational tendencies - selective reporting and selective publication - and you end up with a substantial availability bias.
By Paul D. Ellis in the book The Essential Guide to Effect Sizes (2010) in the text Minimizing bias in meta-analysis on page 119

Related objects

Related concepts
(co-word occurrence)
Schubladenproblem (file drawer problem) (0.1), GIGO-Argument (garbage in, garbage out argument) (0.07), Kritik an Metaanalysen (criticism of meta-analyses) (0.06), apple-and-oranges-Problem (apples-and-oranges problem) (0.06), type I error (0.04), Replikationskrise (replication crisis) (0.03)

Frequently co-cited persons

John P. A. Ioannidis

Statistical concept network  This is a graphical representation of those concepts that are frequently mentioned together with the main concept (co-citation).

Citation graph


Citation graph (beta test with vis.js)

Timeline

12 mentions  This is a list, ordered by year of publication, of all works in the Biblionetz that deal with the selected topic.

Search elsewhere  Even in the Biblionetz you will not find everything. For this reason, the Biblionetz provides pre-filled search forms for various search services. Biblionetz results are excluded from these searches.

Biblionetz history  This is a graphical representation of when and how many links to and from this object were entered into the Biblionetz, and how often the page has been accessed.