Learning Maps

Speaker: Dana Wilkinson

Given a collection of data points annotated with action labels (obtained, for example, from a robot equipped with a web camera or a laser range finder), we wish to learn a representation of the data that can be used as a map (for planning, etc.). We argue that creating maps involves a high degree of compression and that the actions must correspond to simple operations on the map. In other words, the problem will be cast as a variation of a common machine learning task---dimensionality reduction---that additionally respects the action labels.

Here, a solution is proposed in which the actions correspond to distance-preserving transformations in a representation learned with a semidefinite program. Additionally, the transformations in the learned representation that correspond to the actions can be recovered and used. Some resulting maps (and their uses) will be demonstrated in a robot-inspired image domain.
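To give a flavor of the two ingredients mentioned above, here is a minimal sketch. It is not the talk's actual method: classical MDS (a simple eigendecomposition) stands in for the semidefinite program, and an orthogonal Procrustes fit stands in for recovering an action's distance-preserving transformation; all function names are illustrative.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed points from a matrix D of squared pairwise distances.

    Classical MDS via eigendecomposition -- a simple stand-in for the
    semidefinite program described in the talk.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ D @ J                     # recovered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # top `dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def recover_action(X_before, X_after):
    """Fit a distance-preserving map (rotation R plus translation t)
    taking the rows of X_before to the rows of X_after, via orthogonal
    Procrustes.  X_before and X_after are paired (point i before and
    after the same action is applied)."""
    mu_b, mu_a = X_before.mean(axis=0), X_after.mean(axis=0)
    U, _, Vt = np.linalg.svd((X_before - mu_b).T @ (X_after - mu_a))
    R = U @ Vt                               # orthogonal part
    t = mu_a - mu_b @ R                      # translation part
    return R, t

# Usage: recover a known rigid motion from before/after point pairs.
rng = np.random.RandomState(0)
X = rng.randn(10, 2)
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
Y = X @ R_true + np.array([1.0, 2.0])
R, t = recover_action(X, Y)
print(np.allclose(X @ R + t, Y))             # the fitted map reproduces Y
```

In this toy setting the recovered (R, t) plays the role of the "simple operation on the map": once learned, it can be applied to any embedded point to predict the effect of the action.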