University of Washington - Department of Statistics
HME (Hierarchical Mixture of Experts) is a tree-structured architecture for supervised learning. It is characterized by soft multi-way probabilistic splits, generally based on linear functions of the input values, and by linear or logistic fits at the terminal nodes (called Experts in the HME literature) rather than the constant fits used in CART. The statistical model underlying HME is a hierarchical mixture model, which allows maximum likelihood estimation of the parameters via the EM algorithm. HMEs have been applied successfully to the analysis of correlated data, Cox regression, predictive modeling, and classification. I will present an application of this technique to classification problems.
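To make the architecture concrete, here is a minimal sketch of a single-level mixture of experts in Python: a softmax gating network produces the soft probabilistic split, and each expert is a linear function of the input. All names, shapes, and parameter values below are illustrative assumptions, not details from the talk.

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_predict(x, gate_W, expert_Ws):
    """Prediction from a single-level mixture of experts (illustrative sketch).

    x         : (d,) input vector
    gate_W    : (k, d) gating-network weights -> soft k-way split
    expert_Ws : (k, d) one linear expert per row
    Returns the gate-weighted mixture prediction and the gating probabilities.
    """
    g = softmax(gate_W @ x)   # gating probabilities; nonnegative, sum to 1
    mu = expert_Ws @ x        # each expert's linear prediction for x
    return float(g @ mu), g

# Tiny usage example with random parameters (hypothetical values).
rng = np.random.default_rng(0)
x = rng.normal(size=4)
gate_W = rng.normal(size=(3, 4))
expert_Ws = rng.normal(size=(3, 4))
y, g = moe_predict(x, gate_W, expert_Ws)
```

A hierarchical MoE nests this construction: each "expert" at one level is itself a gated mixture, giving the tree structure described above; the parameters of the gates and experts are then fit jointly by EM.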
This presentation is based on a paper by Michael I. Jordan and Robert A. Jacobs, "Hierarchical Mixtures of Experts and the EM Algorithm," published in Neural Computation in 1994. The applications to classification are drawn from Titsias and Likas (2002) and Gilardi and Bengio (2000).