M.S. Statistical Theory Exam Syllabus

This is a three-hour exam on statistical theory. Candidates are assumed to have a background corresponding to Statistics 512 and 513. The exam will typically consist of 4-7 questions on the topics listed in the study guide below.

A study guide and references for each topic are given below.


This exam is given once a year, currently in mid-June.

Study Guide and References

BASIC PROBABILITY THEORY: Basic facts of probability, conditional probability, independence, and random variables; means, variances; transformations; discrete distributions that include Bernoulli, binomial, geometric and negative binomial; the Poisson process and Poisson, exponential, and gamma distributions; continuous distributions that include uniform, normal, beta, Cauchy, and double exponential; moment generating functions.
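As a concrete illustration of the last item, the binomial MGF M(t) = E[e^(tX)] has the well-known closed form (1 - p + p e^t)^n, and M'(0) recovers the mean np. The following Python sketch (illustrative only, not part of the syllabus) verifies both facts numerically from the pmf:

```python
import math

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def mgf_numeric(t, n, p):
    """E[exp(tX)] computed directly from the pmf."""
    return sum(math.exp(t * k) * binom_pmf(k, n, p) for k in range(n + 1))

def mgf_closed_form(t, n, p):
    """Known closed form of the binomial MGF: (1 - p + p e^t)^n."""
    return (1 - p + p * math.exp(t)) ** n

def mean_from_mgf(n, p, h=1e-5):
    """Recover E[X] = M'(0) by a central difference at t = 0."""
    return (mgf_numeric(h, n, p) - mgf_numeric(-h, n, p)) / (2 * h)
```

For n = 10 and p = 0.3 the numeric MGF matches the closed form, and the derivative at zero recovers np = 3.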

MULTIVARIABLE MODELS: Joint, marginal, and conditional distributions; conditional mean and variance, independence, factorizing joint densities; expectation, moments, covariance and correlation, methods of covariance calculation including conditioning; location and scale families, exponential families; multivariate transformations, orthogonal transformations; the bivariate normal distribution, linear regression, jointly normal random variables and orthogonal transformations; the multinomial distribution.
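A classic multinomial fact worth knowing cold is Cov(N_i, N_j) = -n p_i p_j for i ≠ j. For small n this can be checked by brute-force enumeration of all count vectors, as in this illustrative Python sketch (not part of the syllabus):

```python
import math
from itertools import product

def multinomial_pmf(counts, probs):
    """P(N_1 = c_1, ..., N_k = c_k) for a Multinomial(n, probs) vector."""
    n = sum(counts)
    coef = math.factorial(n)
    for c in counts:
        coef //= math.factorial(c)
    p = float(coef)
    for c, q in zip(counts, probs):
        p *= q**c
    return p

def covariance_N1_N2(n, probs):
    """Brute-force Cov(N_1, N_2) = E[N_1 N_2] - E[N_1] E[N_2]."""
    k = len(probs)
    e1 = e2 = e12 = 0.0
    for counts in product(range(n + 1), repeat=k):
        if sum(counts) != n:
            continue
        p = multinomial_pmf(counts, probs)
        e1 += counts[0] * p
        e2 += counts[1] * p
        e12 += counts[0] * counts[1] * p
    return e12 - e1 * e2
```

With n = 4 and probs = (0.2, 0.3, 0.5), the enumeration reproduces -n p_1 p_2 = -0.24.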

SAMPLING RESULTS AND ASYMPTOTIC THEORY: Law of large numbers and central limit theorem; basic properties of the sample mean X-bar and the sample variance; the Student-t and Snedecor-F distributions and the distribution of the sample range; sampling from finite populations. Asymptotics: propagation of error (the delta method), variance-stabilizing transformations, asymptotic distributions of sample quantiles.
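The delta method ties the last two asymptotic topics together: for X ~ Poisson(lambda) with lambda large, g(x) = sqrt(x) gives Var(sqrt(X)) ≈ g'(lambda)^2 Var(X) = 1/4, independent of lambda, which is why the square root is the variance-stabilizing transformation for the Poisson. A numerical check by direct summation of the pmf (illustrative sketch, not part of the syllabus):

```python
import math

def poisson_logpmf(k, lam):
    """log P(X = k) for X ~ Poisson(lam), computed in log space to avoid overflow."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def var_of_sqrt_poisson(lam, kmax=400):
    """Var(sqrt(X)) = E[X] - (E[sqrt(X)])^2 by summing the pmf up to kmax."""
    e_sqrt = e_x = 0.0
    for k in range(kmax + 1):
        p = math.exp(poisson_logpmf(k, lam))
        e_sqrt += math.sqrt(k) * p
        e_x += k * p
    return e_x - e_sqrt ** 2
```

At lambda = 50 the exact variance is already close to the delta-method value 1/4.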

ESTIMATION: Mean square error, method of moments, maximum likelihood estimators (MLEs), MLEs for the Uniform(0,y) and Gamma(r,s) families, other examples of MLEs; exponential families and sufficiency, ancillary statistics and Basu's theorem, completeness; the Cramér-Rao inequality, uniform minimum variance unbiased estimation, consistency and asymptotic distributions.
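The Uniform(0,y) example is the standard case where the MLE is not found by differentiation: the likelihood is y^(-n) on {y >= max x_i} and zero otherwise, so it is maximized at the smallest feasible value, the sample maximum. A short sketch (illustrative, writing theta for the syllabus's y):

```python
import math

def uniform_loglik(theta, xs):
    """Log-likelihood for an i.i.d. Uniform(0, theta) sample xs:
    -n log(theta) when theta >= max(xs), and -inf otherwise."""
    if theta < max(xs):
        return float("-inf")
    return -len(xs) * math.log(theta)

xs = [0.2, 0.9, 0.5]
candidates = [0.5, 0.7, 0.9, 1.1, 1.5]
# The log-likelihood is decreasing in theta on the feasible region, so the
# maximizer over any grid containing max(xs) is the sample maximum itself.
mle = max(candidates, key=lambda t: uniform_loglik(t, xs))
```

Values of theta below the sample maximum have likelihood zero and are never selected.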

TESTING AND CONFIDENCE INTERVALS: Hypothesis tests and power functions, likelihood ratio tests, the Neyman-Pearson lemma and uniformly most powerful tests, asymptotic properties; confidence intervals, the relationship between hypothesis tests and confidence intervals; one-sample, two-sample, and linear regression models. Pearson's chi-square goodness-of-fit statistic for multinomial data.
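Pearson's statistic is the sum of (O_i - E_i)^2 / E_i with expected counts E_i = n p_i under the multinomial null, and is compared to a chi-square distribution with k - 1 degrees of freedom. A minimal sketch (illustrative only):

```python
def pearson_chi2(observed, probs):
    """Pearson chi-square statistic sum((O - E)^2 / E) with E = n * p."""
    n = sum(observed)
    return sum((o - n * p) ** 2 / (n * p) for o, p in zip(observed, probs))

# Under H0: p = (1/3, 1/3, 1/3), compare the statistic to a
# chi-square distribution with k - 1 = 2 degrees of freedom.
stat = pearson_chi2([30, 30, 40], [1 / 3, 1 / 3, 1 / 3])
```

For these counts the statistic works out to exactly 2.0, well below the usual 5% critical value of about 5.99 for 2 degrees of freedom.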

BAYESIAN APPROACHES: Bayes estimators and conjugate priors, Bayesian tests, Bayesian intervals, Bayesian estimators and decision theory.
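The standard first example of conjugacy: a Beta(a, b) prior on a binomial success probability p updates to a Beta(a + x, b + n - x) posterior after observing x successes in n trials, and the posterior mean is the Bayes estimator under squared-error loss. Sketch (illustrative only):

```python
def beta_binomial_update(a, b, x, n):
    """Conjugate update: Beta(a, b) prior + x successes in n Bernoulli(p) trials
    gives a Beta(a + x, b + n - x) posterior."""
    return a + x, b + (n - x)

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution; here the Bayes estimator of p."""
    return a / (a + b)

# Uniform Beta(1, 1) prior, then 7 successes observed in 10 trials.
post_a, post_b = beta_binomial_update(1, 1, 7, 10)
```

With the uniform prior the posterior is Beta(8, 4), whose mean 2/3 shrinks the sample proportion 0.7 slightly toward the prior mean 1/2.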