Cornell University - Computing and Information Science
This talk will present a line of work on probabilistic hashing methods, which typically transform a challenging (or infeasible) massive-data computational problem into a probability and statistical estimation problem. For example, fitting a logistic regression (or SVM) model on a dataset with a billion observations and a billion (or a billion squared) variables would be difficult. Searching for similar documents (or images) in a repository of a billion web pages (or images) is another challenging example. In certain important applications in the search industry, a web page is often represented as a binary (0/1) vector in 2^64 dimensions. For such data, both data reduction (i.e., reducing the number of nonzero entries) and dimensionality reduction are crucial for achieving efficient search and statistical learning.
This talk will present two closely related probabilistic methods: (1)
b-bit minwise hashing and (2) one permutation hashing, which
simultaneously perform effective data reduction and dimensionality
reduction on massive, high-dimensional, binary data. For example,
training an SVM for classification on a text dataset of size 24GB took
only 3 seconds after reducing the dataset to merely 70MB using our methods.
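To illustrate the flavor of the approach, here is a minimal sketch of b-bit minwise hashing in Python. It is not the speaker's implementation: it uses random linear hash functions as a stand-in for random permutations (an assumption), and the bias correction uses the simplified collision probability 2^(-b), which only approximates the estimator for very high-dimensional sparse data.

```python
import random

def minhash_bbit(s, num_hashes=128, b=2, prime=(1 << 61) - 1, seed=0):
    """Return num_hashes b-bit minwise hash values for a set of integers.

    Random linear hashes h(x) = (a*x + c) mod prime approximate random
    permutations (an assumption for this sketch). Only the lowest b bits
    of each minimum are stored, giving b/64 of the storage of full hashes.
    """
    rng = random.Random(seed)
    params = [(rng.randrange(1, prime), rng.randrange(prime))
              for _ in range(num_hashes)]
    mask = (1 << b) - 1
    sig = []
    for a, c in params:
        m = min((a * x + c) % prime for x in s)
        sig.append(m & mask)  # keep only the lowest b bits
    return sig

def estimate_resemblance(sig1, sig2, b=2):
    """Estimate Jaccard similarity (resemblance R) from two b-bit signatures.

    Two b-bit values match with probability roughly C + (1 - C) * R,
    where C = 2^(-b) is the chance of an accidental collision of the
    truncated bits; invert this to recover R.
    """
    matches = sum(x == y for x, y in zip(sig1, sig2))
    p_hat = matches / len(sig1)
    c = 2.0 ** (-b)
    return max(0.0, (p_hat - c) / (1.0 - c))
```

With two sets of Jaccard similarity 1/3 (e.g., `set(range(100))` and `set(range(50, 150))`), 128 hashes at b=2 bits each (32 bytes total per set) already give an estimate in the right range, which is the data-reduction effect the abstract refers to.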
Bio: Ping Li is an assistant professor in the Faculty of Computing
and Information Science (CIS) at Cornell University. His research
interests include big data and statistical learning. He received the
ONR Young Investigator Award in 2009 and the AFOSR Young Investigator
Award in 2013. Ping Li's research has been supported by Google,
Microsoft, NSF, and DoD. He also won a prize in the 2010 Yahoo!
Learning to Rank Grand Challenge using his own boosting and tree
algorithm/code.