Adaptive experimental design (AED), or active learning, leverages already-collected data to guide future measurements in a closed loop, aiming to collect the most informative data for the learning problem at hand. In both theory and practice, AED can extract considerably richer insights than any measurement plan fixed in advance, using the same statistical budget. Unfortunately, the same feedback mechanism that helps an algorithm collect data can also mislead it: a data-collection heuristic can become overconfident in an incorrect belief, then collect data based on that belief, yet give the practitioner little indication that anything went wrong. Consequently, it is critical that AED algorithms be provably robust, with transparent guarantees. In this talk I will present my recent work on near-optimal approaches to adaptive testing with false discovery control and to the best-arm identification problem for linear bandits, and show how these approaches relate to, and leverage, ideas from non-adaptive optimal linear experimental design.
Kevin G. Jamieson