In recent years, new technologies in neuroscience have made it possible to measure the activities of large numbers of neurons in behaving animals. For each neuron, a fluorescence trace is measured; this can be seen as a first-order approximation of the neuron's activity over time. Determining the exact times at which a neuron spikes on the basis of its fluorescence trace is an important open problem in the field of computational neuroscience. Recently, a convex optimization problem involving an $\ell_1$ penalty was proposed for this task. In this paper, we slightly modify that recent proposal by replacing the $\ell_1$ penalty with an $\ell_0$ penalty. In stark contrast to the conventional wisdom that $\ell_0$ optimization problems are computationally intractable, we show that the resulting problem can be solved for the global optimum using a remarkably simple and efficient dynamic programming algorithm.
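The key structural fact, that an $\ell_0$-penalized problem of this form admits an exact dynamic program, can be illustrated with a simplified sketch. The sketch below ignores the calcium-decay dynamics of the actual model and treats spike detection as plain $\ell_0$-penalized changepoint detection with a per-segment squared-error cost; the function name and the $O(n^2)$ recursion are illustrative only, not the paper's algorithm.

```python
import numpy as np

def l0_segment(y, lam):
    """Exact O(n^2) dynamic program for l0-penalized changepoint detection:
    partition y into constant segments, paying the squared-error fit cost of
    each segment plus a penalty lam per segment. Returns the changepoints."""
    n = len(y)
    # Prefix sums give O(1) segment costs.
    s = np.concatenate([[0.0], np.cumsum(y)])
    s2 = np.concatenate([[0.0], np.cumsum(y ** 2)])

    def cost(i, j):
        # Squared error of fitting a single constant (the mean) to y[i:j].
        m = j - i
        return s2[j] - s2[i] - (s[j] - s[i]) ** 2 / m

    F = np.zeros(n + 1)          # F[j] = optimal objective for y[:j]
    prev = np.zeros(n + 1, int)  # prev[j] = start of the last segment
    for j in range(1, n + 1):
        cands = [F[i] + cost(i, j) + lam for i in range(j)]
        i_star = int(np.argmin(cands))
        F[j], prev[j] = cands[i_star], i_star
    # Backtrack to recover the changepoints (candidate spike times).
    cps, j = [], n
    while j > 0:
        j = prev[j]
        if j > 0:
            cps.append(j)
    return sorted(cps)
```

Despite the combinatorial appearance of the $\ell_0$ penalty, the recursion is exact: the global optimum over all $2^{n-1}$ partitions is found in polynomial time.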
The emergence of compact GPS systems and the establishment of open data initiatives have resulted in widespread availability of spatial data for many urban centres. These data can be leveraged to develop data-driven intelligent resource allocation systems for urban issues such as policing, sanitation, and transportation. We employ techniques from Bayesian non-parametric statistics to develop a process which captures a common characteristic of urban spatial datasets. Specifically, our new spatial process framework models events which occur repeatedly at discrete spatial points, the number and locations of which are unknown a priori. We develop a representation of our spatial process which facilitates posterior simulation, resulting in an interpretable and computationally tractable model. The framework's superiority over both empirical grid-based models and Dirichlet process mixture models is demonstrated by fitting, interpreting, and comparing models of graffiti prevalence for both downtown Vancouver and Manhattan.
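The repeated-events-at-discrete-points behaviour described above is the hallmark of Dirichlet-process-style discreteness. As a toy generative sketch (not the paper's actual process), a Chinese restaurant process with a spatial base measure produces exactly this pattern: each new event either recurs at an existing discrete location or opens a new one, with the number of locations unknown in advance.

```python
import numpy as np

def sample_crp_events(n, alpha, base_sampler, rng):
    """Toy generative sketch: n events either recur at an existing discrete
    location (probability proportional to its event count) or appear at a
    brand-new location drawn from the base measure (proportional to alpha)."""
    locations, counts, events = [], [], []
    for _ in range(n):
        probs = np.array(counts + [alpha], dtype=float)
        k = rng.choice(len(probs), p=probs / probs.sum())
        if k == len(locations):      # open a new discrete spatial point
            locations.append(base_sampler(rng))
            counts.append(1)
        else:                        # repeat event at an existing point
            counts[k] += 1
        events.append(locations[k])
    return events, locations
```

The rich-get-richer weighting concentrates events at a modest number of hotspots, qualitatively matching phenomena such as graffiti recurring at the same walls.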
To examine the variance reduction from portfolios containing both primary and derivative assets, we develop a mean–variance Markowitz portfolio management problem. By invoking the delta–gamma approximation we reduce the problem to a well-posed quadratic programming problem. From a practitioner's perspective, the primary goal is to understand the benefits of adding derivative securities to portfolios of primary assets. Our numerical experiments quantify the variance reduction achieved by moving from sample equity portfolios to mixed portfolios containing both equities and equity derivatives.
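The quadratic programming structure of the mean–variance problem can be sketched as follows. This is a generic Markowitz solve with equality constraints via the KKT linear system, not the paper's formulation: in the actual problem, the covariance matrix for a mixed portfolio would be the one implied by the delta–gamma approximation of the derivatives' P&L, whereas here `Sigma` is simply taken as given.

```python
import numpy as np

def min_variance_weights(Sigma, mu, target):
    """Minimize w' Sigma w subject to mu'w = target and sum(w) = 1,
    by solving the KKT system of this equality-constrained QP.
    Sigma: asset return covariance (in the paper's setting, the
    delta-gamma-implied covariance for a mixed portfolio)."""
    n = len(mu)
    ones = np.ones(n)
    A = np.vstack([mu, ones])                # 2 x n constraint matrix
    K = np.block([[2.0 * Sigma, A.T],
                  [A, np.zeros((2, 2))]])    # KKT matrix
    rhs = np.concatenate([np.zeros(n), [target, 1.0]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                           # drop the Lagrange multipliers
```

Adding derivative assets enlarges the feasible set of this QP, so the minimized variance can only decrease; the numerical experiments in the paper quantify by how much.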
Reconstructing evolutionary histories has recently become computationally challenging due to the increased availability of genetic sequencing data and relaxations of classical modelling assumptions. This thesis specializes a Divide and Conquer Sequential Monte Carlo (DCSMC) inference algorithm to phylogenetics to address these challenges. In phylogenetics, the tree structure used to represent evolutionary histories provides a model decomposition used for DCSMC. In particular, speciation events are used to recursively decompose the model into subproblems. Each subproblem is approximated by an independent population of weighted particles, which are merged and propagated to create an ancestral population. This approach provides the flexibility to relax classical assumptions on large trees by parallelizing these recursions.
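The merge step at the heart of this recursion can be sketched generically. The function below is a schematic, not the thesis's algorithm: it resamples a particle from each child population, combines their states, and returns a uniformly weighted ancestral population. The combine step and any importance-weight correction are model-specific (in phylogenetics they would involve the likelihood and prior at the speciation event); here state combination is a placeholder sum.

```python
import numpy as np

def merge_populations(pop_a, pop_b, rng, n_particles=1000):
    """DCSMC-style merge sketch: draw particle indices from each child
    population in proportion to their weights, combine the paired states
    into ancestral states, and return a uniformly weighted population.
    The '+' combine step is a placeholder for the model-specific merge."""
    xa, wa = pop_a
    xb, wb = pop_b
    ia = rng.choice(len(xa), n_particles, p=wa / wa.sum())
    ib = rng.choice(len(xb), n_particles, p=wb / wb.sum())
    merged = xa[ia] + xb[ib]          # placeholder "combine" step
    w = np.ones(n_particles)          # uniform weights after resampling
    return merged, w
```

Because each subtree's population is built independently, the recursive merges over disjoint subtrees can run in parallel, which is the source of the scalability claimed above.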
Phylogenetic Inference by Divide and Conquer Sequential Monte Carlo (DCSMC), Statistical Society of Canada Annual Meeting, Dalhousie University (June 2015)
Bayesian Non-Parametric Model for a class of Spatial Point Processes, Statistical Society of Canada Annual Meeting, University of Toronto (May 2014)
Stochastic pairs trading through cointegration, The First 3-C Risk Forum & 2011 International Conference on Engineering and Risk Management (ERM), Fields Institute (October 2011)