The Clickbayes Factor

Among many other applications, this hypothesis (from the most recent xkcd) may finally offer a quantitative explanation for the generally poor quality of language-related articles in Science and Nature:

[xkcd comic image]

Mouseover title: "When comparing hypotheses with Bayesian methods, the similar 'clickbayes factor' can account for some harder-to-quantify priors."

(Though in the case of Science and Nature, the test subjects would need to be limited to something like "readers of interest to biomedical device and reagent advertisers".)
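
For readers who want the reference spelled out: the Bayes factor that the mouseover title puns on is the ratio of how well two competing hypotheses predict the observed data. In the standard textbook notation (not anything from the comic itself), with D the data and H1, H2 the hypotheses,

\[
K \;=\; \frac{P(D \mid H_1)}{P(D \mid H_2)}
\]

where a K much greater than 1 counts as evidence favoring H1 over H2. The comic's "clickbayes factor" is, presumably, an analogous multiplier driven by how clickable a result would be.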



2 Comments

  1. Jonathan Badger said,

    June 2, 2018 @ 6:56 pm

    "generally poor quality of language-related articles in Science and Nature"

    You can probably leave out "language". As the example suggests, biomedical science is particularly prone to this problem.

  2. Brett said,

    June 3, 2018 @ 3:32 pm

    @Jonathan Badger: No, there is a real difference in quality between the biomedical sciences articles published in "top" journals and the articles in other fields. In biochemistry (and other such fields), the issue is that Science and Nature are unduly likely to publish results that are wrong, because the tendency is to publish articles with unexpected conclusions. However, in these fields the journals have a stable of referees who are genuinely able to distinguish results that are striking but not totally unreasonable from ones that are total nonsense. Results that turn out to be wrong may simply be products of three-sigma deviations.

    That is not the case even in hard science fields like physics; aside from some results in condensed matter physics, these journals are worthless to physicists. For linguistics, the problems are even more severe; there is no one on the editorial staff who can distinguish reasonable results from total nonsense, and they do not have relationships with referees to carry out the task for them.
