Update 4/25/2019: Location of this seminar has been moved to SMI 211.
Bayesian hierarchical modeling is a powerful tool for demography and climate science. In this talk we will focus on its use in accounting for uncertainty about past demographic quantities in population projections. Since the 1940s, population projections have mostly been produced using the deterministic cohort component method. However, in 2015, in a major advance, the United Nations for the first time issued official probabilistic population projections for all countries, based on Bayesian hierarchical models for total fertility and life expectancy.
A common challenge in estimating parameters of probability density functions is the intractability of the normalizing constant. While in such cases maximum likelihood estimation (MLE) may be implemented using numerical integration, the approach becomes computationally intensive. In contrast, the score matching method of Hyvärinen (2005) avoids direct calculation of the normalizing constant and yields closed-form estimates for exponential families of continuous distributions on the m-dimensional Euclidean space R^m.
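As a concrete illustration of that closed form (a minimal sketch, not drawn from the talk: the zero-mean Gaussian family and all parameter values are assumptions for illustration), score matching for p(x; θ) ∝ exp(−θx²/2) minimizes Hyvärinen's objective J(θ) = E[(1/2)(∂ₓ log p)² + ∂ₓ² log p] = (θ²/2)E[x²] − θ, giving the closed-form estimate θ̂ = 1/E[x²] with no normalizing constant ever computed:

```python
import numpy as np

# Illustrative example (assumed model): zero-mean Gaussian in natural
# parameterization, p(x; theta) ∝ exp(-theta * x^2 / 2), theta = 1/sigma^2.
# Here d/dx log p = -theta * x and d^2/dx^2 log p = -theta, so Hyvarinen's
# objective reduces to J(theta) = (theta^2 / 2) * E[x^2] - theta,
# minimized in closed form at theta_hat = 1 / E[x^2].

rng = np.random.default_rng(0)
sigma = 2.0
x = rng.normal(0.0, sigma, size=100_000)

theta_hat = 1.0 / np.mean(x**2)  # closed-form score matching estimate
print(theta_hat)                 # close to the true 1/sigma^2 = 0.25
```

The same recipe applies to any exponential family on R^m: the objective is quadratic in the natural parameter, so the estimate is a linear solve.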
Green Dot is a movement, a program, and an action. The aim of Green Dot is to prevent and reduce sexual assault & relationship violence at UW by engaging students as leaders and active bystanders who step in, speak up, and interrupt potential acts of violence. The Green Dot movement is about gaining a critical mass of students, staff and faculty who are willing to do their small part to actively and visibly reduce power-based personal violence at UW.
Hawkes processes have been popular point process models for capturing mutual excitation among discrete events. In the network setting, they can capture the mutual influence between nodes, which has a wide range of applications in neuroscience, social networks, and crime data analysis. In this talk, I will present a statistical change-point detection framework for detecting, in real time, a change in the influence using streaming discrete events.
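To make the self-excitation concrete (a minimal sketch, not the speaker's method: the exponential kernel, Ogata thinning, and all parameter values are illustrative assumptions), a univariate Hawkes process with intensity λ(t) = μ + α Σ_{tᵢ<t} exp(−β(t − tᵢ)) can be simulated as follows:

```python
import numpy as np

# Illustrative sketch (assumed kernel and parameters): univariate Hawkes
# process with intensity lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta*(t - t_i)),
# simulated with Ogata's thinning algorithm. Stability requires alpha/beta < 1.

def simulate_hawkes(mu, alpha, beta, T, rng):
    events = []
    t = 0.0
    while t < T:
        # Between events the intensity only decays, so lambda(t+) is a
        # valid upper bound for proposing the next candidate time.
        lam_bar = mu + alpha * sum(np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        lam_t = mu + alpha * sum(np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:
            events.append(t)  # accept the candidate as a real event
    return np.array(events)

rng = np.random.default_rng(1)
ts = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=200.0, rng=rng)
# Excitation inflates the rate: the expected count is roughly
# mu * T / (1 - alpha/beta), well above the baseline mu * T.
```

In the multivariate (network) version, each node pair gets its own excitation coefficient, and it is a change in those coefficients that the detection framework targets.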
The celebrated Grenander (1956) estimator is the maximum likelihood estimator of a decreasing density function. In contrast to alternative nonparametric density estimators, the Grenander estimator does not require any smoothing parameters and is often viewed as a fully automatic procedure. However, the monotone density assumption itself may be questionable. While testing qualitative constraints such as monotonicity is difficult in general, we show that a likelihood ratio test statistic Kₙ has a remarkably simple asymptotic null distribution.
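For readers unfamiliar with the estimator itself (a minimal sketch under standard assumptions, not material from the talk: the Grenander estimator is the left derivative of the least concave majorant of the empirical CDF, computed here with an upper-hull scan; the exponential test data are illustrative):

```python
import numpy as np

def grenander(x):
    """Grenander estimator of a decreasing density on [0, inf):
    slopes of the least concave majorant (LCM) of the empirical CDF.
    Assumes continuous data (no ties), so spacings are positive."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    pts_x = np.concatenate([[0.0], x])
    pts_y = np.arange(n + 1) / n  # ECDF values at the sorted points
    # Upper-hull scan (monotone chain): pop a vertex whenever it falls
    # on or below the chord joining its neighbors, leaving the LCM.
    hull = [0]
    for i in range(1, n + 1):
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            cross = (pts_x[b] - pts_x[a]) * (pts_y[i] - pts_y[a]) \
                  - (pts_y[b] - pts_y[a]) * (pts_x[i] - pts_x[a])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    knots = pts_x[hull]
    # The density estimate is piecewise constant: the LCM's slope on
    # each hull segment, which is decreasing by construction.
    f_hat = np.diff(pts_y[hull]) / np.diff(knots)
    return knots, f_hat

rng = np.random.default_rng(3)
knots, f_hat = grenander(rng.exponential(size=500))
# f_hat is a decreasing step function integrating to 1 over [0, knots[-1]].
```

No bandwidth or tuning parameter appears anywhere, which is exactly the "fully automatic" property the abstract refers to.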
Randomization is a basis for inferring treatment effects with minimal additional assumptions. Appropriately using covariates in randomized experiments will further yield more precise estimators. In his seminal work Design of Experiments, R. A. Fisher suggested blocking on discrete covariates in the design stage and conducting the analysis of covariance (ANCOVA) in the analysis stage. In fact, blocking can be embedded into a wider class of experimental design called rerandomization, and the classical ANCOVA can be extended to more general regression-adjusted estimators.
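As a concrete illustration of regression adjustment (a minimal sketch, not the speaker's estimator: the data-generating process, sample size, and the interacted OLS adjustment in the style of Lin (2013) are illustrative assumptions), compare the unadjusted difference in means with an ANCOVA-style adjusted estimator in a completely randomized experiment:

```python
import numpy as np

# Illustrative simulation (assumed DGP): completely randomized experiment
# with one baseline covariate and a constant treatment effect tau = 1.
rng = np.random.default_rng(42)
n = 2000
x = rng.normal(size=n)                             # baseline covariate
z = rng.permutation(np.repeat([0, 1], n // 2))     # randomized assignment
tau = 1.0
y = tau * z + 2.0 * x + rng.normal(size=n)

# Unadjusted estimator: simple difference in means.
diff_in_means = y[z == 1].mean() - y[z == 0].mean()

# Regression-adjusted estimator: OLS of y on treatment, the centered
# covariate, and their interaction; the treatment coefficient estimates
# the average treatment effect with reduced variance.
xc = x - x.mean()
X = np.column_stack([np.ones(n), z, xc, z * xc])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
adjusted = beta[1]
# Both estimators are consistent for tau; the adjusted one is typically
# much closer because the covariate explains most of the outcome variance.
```

Blocking and rerandomization play the analogous role at the design stage: they remove covariate imbalance before treatment is ever applied, rather than adjusting for it afterward.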
At Amazon’s Inventory Planning and Control Laboratory (IPC Lab) we run randomized controlled trials (RCTs) that evaluate the efficacy of in-production buying and supply chain policies on important business metrics. Our customers are leading supply chain researchers and business managers within Amazon, and our mission is to help them best answer the question, ‘Should I roll out my policy?’ In this talk we discuss how we navigate multiple obstacles to fulfilling our mission.
Deep neural nets have become in recent years a widespread practical technology, with impressive performance in computer vision, speech recognition, natural language processing, and many other applications. Deploying deep nets in mobile phones, robots, sensors, and IoT devices is of great interest. However, state-of-the-art deep nets for tasks such as object recognition are too large to be deployed on these devices, which impose tight limits on CPU speed, memory, bandwidth, battery life, and energy consumption.
Causal inference is a challenging problem because causation cannot be established from observational data alone. Researchers typically rely on additional sources of information to infer causation from association. Such information may come from powerful designs such as randomization, or from background knowledge such as information on all confounders. However, the perfect designs or background knowledge required for establishing causality may not always be available in practice.