Seminar Details


Nov 14

3:30 pm

Towards Understanding Behavior and Intent

Tomas Singliar


Boeing Research - Department of Autonomous Systems

We consider the problem of “understanding,” on a massively automated scale, the behavior of people. To a first approximation, people are rational agents in the sense that they optimize a utility function. If that utility function can be estimated, we can define “understanding” as identifying the likely incentives underlying the observed actions, and “intent” as the best course of action available to an agent optimizing such a function. Inverse reinforcement learning (IRL) techniques provide the foundation for estimating an agent’s utility function and predicting the agent’s intent. Applied IRL algorithms suffer from the high dimensionality of the reward function space, but many applications that can benefit from an IRL-based assessment of agent intent involve a domain expert or analyst. We describe a procedure for scaling up the estimation by eliciting good IRL basis functions from the domain expert, and discuss issues and experiments in modeling the perceptual and decision space for the analysis of vehicle movements and the detection of anomalous driving behavior.