Online Resources & Articles

Faculty & Staff from the BERD Unit both curate and contribute to the articles found below. If you would like to receive notifications when we post new articles, please click here. If you have any suggestions for new topics or content, please contact us here.

Statistical tests

  • Anatomy of a Diagnostic Test – an R Shiny Example by Saunak Sen- The positive and negative predictive values of a diagnostic test depend not only on its sensitivity and specificity, but also on the prevalence of the disease (or the pre-test probability of disease). This interactive display illustrates that interdependence. Move the sensitivity, specificity, and disease prevalence sliders, and watch the positive and negative predictive value sliders change… Read More
  • Which test to use in what situation [outside article]- Having trouble deciding which statistical test to use for your data? Use this handy flowchart from Penn State to decide. It includes a review of all the statistical techniques covered, as well as a table covering inferences, parameters, statistics, types of data, examples, analyses, Minitab commands, and conditions.
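The dependence of predictive values on prevalence described above follows directly from Bayes' theorem. As a rough sketch (separate from the Shiny app itself; the function name and the example numbers here are illustrative, not taken from the article):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Compute (PPV, NPV) of a diagnostic test via Bayes' theorem.

    PPV = P(disease | positive test), NPV = P(no disease | negative test).
    """
    true_pos = sensitivity * prevalence            # diseased, test positive
    false_pos = (1 - specificity) * (1 - prevalence)  # healthy, test positive
    false_neg = (1 - sensitivity) * prevalence     # diseased, test negative
    true_neg = specificity * (1 - prevalence)      # healthy, test negative
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# The same test (90% sensitive, 90% specific) at two prevalences:
print(predictive_values(0.90, 0.90, 0.50))  # PPV = 0.90 at 50% prevalence
print(predictive_values(0.90, 0.90, 0.01))  # PPV ≈ 0.083 at 1% prevalence
```

Moving the prevalence slider in the interactive display corresponds to varying the third argument: at low prevalence even a quite accurate test yields a low positive predictive value.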

P-values

  • American Statistical Association’s statement on p-values [outside article]- The American Statistical Association (ASA) has released a “Statement on Statistical Significance and P-Values” with six principles underlying the proper use and interpretation of the p-value. The ASA releases this guidance on p-values to improve the conduct and interpretation of quantitative science and inform the growing emphasis on reproducibility of science research. The statement also… Read More
  • Why Frank Harrell does not like p-values by Frank Harrell [outside article]- With the many problems that p-values have, and the temptation to “bless” research when the p-value falls below an arbitrary threshold such as 0.05 or 0.005, researchers using p-values should at least be fully aware of what they are getting. They need to know exactly what a p-value means and what assumptions are required… Read More
  • Note on small p-value hacking by Thomas Lumley [outside article]- The proposal to change p-value thresholds from 0.05 to 0.005 won’t die. I think it’s targeting the wrong question: many studies are too weak in various ways to provide the sort of reliable evidence they want to claim, and the choices available in the analysis and publication process eat up too much of that limited information… Read More
  • Comment on proposal to lower the p-value threshold to 0.005 by John Ioannidis [outside article]- P values and accompanying methods of statistical significance testing are creating challenges in biomedical science and other disciplines. The vast majority (96%) of articles that report P values in the abstract, full text, or both include some values of .05 or less. However, many of the claims that these reports highlight are likely false. Recognizing the major importance of the… Read More
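A common thread in the articles above is that a small p-value alone does not tell you how likely a "significant" finding is to be true. As an illustrative sketch (not taken from any of the articles; the function name, the 10% prior, and the 80% power are assumed for illustration, and bias and multiplicity are ignored):

```python
def prob_true_given_significant(prior, power, alpha):
    """P(hypothesis true | p < alpha), by Bayes' theorem.

    prior: fraction of tested hypotheses that are actually true
    power: P(p < alpha | hypothesis true)
    alpha: significance threshold, i.e. P(p < alpha | hypothesis false)
    """
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# If 10% of tested hypotheses are true and studies have 80% power:
print(prob_true_given_significant(0.10, 0.80, 0.05))   # = 0.64 at alpha 0.05
print(prob_true_given_significant(0.10, 0.80, 0.005))  # ≈ 0.95 at alpha 0.005
```

Under these assumed numbers, roughly a third of results significant at 0.05 would be false positives, which is one way to motivate the 0.005 proposal; Lumley's note argues the threshold is not the main source of the problem.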

Julia

  • Julia Debugging Basics by Gregory Farage- This is a practical how-to guide on best practices for debugging code in Julia using the Gallium package. We explore two methods: the REPL and Juno-Atom. Installation: To use the debugger Gallium in Julia 0.6+, the following packages should be installed: Gallium and ASTInterpreter2. julia> Pkg.add("Gallium") julia> Pkg.clone("") julia> Pkg.clone("") There are two possible ways to debug with Gallium,… Read More
  • Missing values in Julia by Milan Bouchet-Valat [outside article]- Starting from Julia 0.7, missing values are represented using the new missing object. The result of intense design discussions, experimentation, and language improvements developed over several years, it is the heir of the NA value implemented in the DataArrays package, which used to be the standard way of representing missing data in Julia.

Computation, statistical learning, and optimization