Ron Sandland recently wrote about the new phenomenon of 'big data', weighing up the benefits and concerns. Terry Speed reflected on the same issue in a talk earlier this year in Gothenburg, Sweden, noting that this is nothing new to statisticians. So what's all the fuss about? Here's another take on the 'big data' bandwagon.
Professors Murray Aitkin and David Fox are invited speakers at the Australian Applied Statistics Conference (AASC) 2014.
This talk describes a model-based framework for assessing response to toxins in laboratory studies with very small numbers of animals, with the aim of correctly assessing the variability in estimates of the standard measures of toxicity. Such studies are the basis for establishing water quality limits on toxic pollutants in ecotoxicology. With very small n we cannot rely on asymptotic arguments or results, and there are no general frequentist procedures with exact sampling distributions, so Bayesian methods are essential. We give a well-known example of a small animal dose-response study and compare the available frequentist procedures with the Bayesian analysis.
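To give a flavour of the kind of analysis the abstract describes, here is a minimal sketch of a Bayesian dose-response fit with very small n. The data, the logistic model, and the flat priors are all illustrative assumptions (they are not taken from the talk): the posterior over the two logistic parameters is computed on a grid, and an exact finite-sample credible interval for the LC50 is read off, with no asymptotic approximation.

```python
import numpy as np

# Hypothetical small-n dose-response data (illustrative only, not from the talk):
# number of test animals and deaths observed at each log10-dose.
log_dose = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
n_animals = np.array([5, 5, 5, 5, 5])
n_dead = np.array([0, 1, 2, 4, 5])

# Logistic dose-response model: P(death) = 1 / (1 + exp(-(a + b*x))).
# Grid approximation to the posterior under flat priors on (a, b), b > 0.
a_grid = np.linspace(-10.0, 5.0, 300)
b_grid = np.linspace(0.01, 10.0, 300)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")

log_post = np.zeros_like(A)
for x, n, y in zip(log_dose, n_animals, n_dead):
    p = 1.0 / (1.0 + np.exp(-(A + B * x)))          # binomial likelihood term
    log_post += y * np.log(p) + (n - y) * np.log1p(-p)

post = np.exp(log_post - log_post.max())
post /= post.sum()                                   # normalised posterior on the grid

# Posterior of the LC50 (log dose at which P(death) = 0.5): x50 = -a/b.
x50 = -A / B
mean_lc50 = np.sum(post * x50)

# 95% credible interval via weighted quantiles of the grid posterior.
order = np.argsort(x50.ravel())
cdf = np.cumsum(post.ravel()[order])
lo = x50.ravel()[order][np.searchsorted(cdf, 0.025)]
hi = x50.ravel()[order][np.searchsorted(cdf, 0.975)]
print(f"Posterior mean LC50 (log dose): {mean_lc50:.2f}, 95% CrI: [{lo:.2f}, {hi:.2f}]")
```

With only 25 animals in total, a frequentist confidence interval for the LC50 would lean on large-sample approximations; the grid posterior above delivers an exact finite-sample interval, which is the point the abstract makes.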