Finally, after Fermat’s last theorem, someone (Sharon McGrayne) has written a book on Bayes’ theorem. It’s been reviewed by Andrew Robinson in Nature. From the review:

Considering the widespread effectiveness of Bayesian inference in physics and astronomy, genetics, imaging and robotics, Internet communication, finance and commerce, it is surprising that it has remained controversial for so long… McGrayne explains [users’] reticence [to admit to using Bayes] in her impressively researched history of Bayes’ theorem, *The Theory That Would Not Die*. The statistical method runs counter to the conviction that science requires objectivity and precision, she writes. Bayes’ theorem “is a measure of belief. And it says that we can learn even from missing and inadequate data, from approximations, and from ignorance.”

The reviews on Amazon are mixed, and venom seems present in some of them…

Thanks to Román for the good find!

RStudio is add-on software for R that gives it a more user-friendly interface and some very useful additional features (e.g. colour coding of code, and options to manipulate graphs interactively with the manipulate package). Definitely worth trying out, for anyone relatively new to R as well as for routine users!

More info here:

http://www.rstudio.org/

- Step 1: send your paper to the *Journal of Universal Rejection*
- Step 2: send your paper to *Rejecta Mathematica: caveat emptor*

Unfortunately, although JUR guarantees rejection of your paper, RMce doesn’t guarantee acceptance of all previously rejected articles. But I bet the odds are good!

(Nothing lewd follows.)

Just stumbled upon a fascinating short article by Rothman (1990) in the first volume of *Epidemiology* (1:43–6). The title of the paper says it all: no adjustments are needed for multiple comparisons. An excerpt from the end:

Suppose the drug C differs considerably in its effect from drug B. Will this difference be less worthy of attention when, sometime in the future, information on drug D comes along as part of the same research programme? Should an investigator estimate on the first day of data analysis how many contrasts ultimately will come along before making adjustments for multiple comparisons? Where do the boundaries of a specific study lie…?

What I’ve taken to doing when I have multiple nil-hypothesis significance tests to perform is to report, in the figure/table caption or the methods, the expected number of spurious positive findings under the incredibly pessimistic premise that all the nil hypotheses really are true (which is probably impossible for observational studies). Maybe I should cease even this?
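The arithmetic behind that caption note is simple: under the all-nulls premise, each of m tests at level α rejects with probability α, so the count of spurious positives (for independent tests) is Binomial(m, α) with expectation m·α. A minimal sketch in Python — the value of m here is illustrative, not from any particular analysis:

```python
# Spurious positives when all nil hypotheses are true:
# each of m independent tests at level alpha rejects with
# probability alpha, so the false-positive count is Binomial(m, alpha).

def expected_spurious(m, alpha=0.05):
    """Expected number of false positives under the all-nulls premise."""
    return m * alpha

def prob_at_least_one(m, alpha=0.05):
    """Chance of one or more spurious positives (independent tests)."""
    return 1 - (1 - alpha) ** m

m = 20  # e.g. 20 contrasts reported in one table (hypothetical)
print(expected_spurious(m))             # 1.0
print(round(prob_at_least_one(m), 3))   # 0.642
```

So a table of 20 contrasts tested at the 5% level carries about one expected false positive even before any real effect exists — which is exactly the number the caption would report.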

This paper should be compulsory reading for everyone interested in statistics (and able to meet the prerequisites).

Co-author it with primary school children.

It’s actually quite an interesting study, but methinks the reviewers perhaps went easy on the eight-year-olds. There are more than two figures (expressly forbidden by the journal), the spelling doesn’t seem to abide by the OED (e.g. “duh duh duuuuhhh”: duh is in, but not duuuuhhh), the introduction starts with “Once upon a time” despite there being no historical aspect to the work, and one author appears to be made up (a certain “P S Blackawton”, whose affiliation is Blackawton Primary School). It reminds me of those letters to national newspapers by “Bobby, 7” and his friends.

The article has attracted quite a bit of attention in the press. More so than any of mine, so perhaps my words are tinged with jealousy…

© 2019 statistics and applied probability
