Estimating Motion Codes from Demonstration Videos

TL;DR – This work uses features from the motion taxonomy to improve action recognition on egocentric videos from the EPIC-KITCHENS dataset. This is done by integrating motion code detection into the recognition of action sequences; a rough sketch of one such fusion scheme follows below.

January 2021 · Maxat Alibayev, David Paulius, Yu Sun
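
One way to picture the integration: predict a motion code from visual features, then feed it back into verb classification. The PyTorch sketch below is a hypothetical illustration, not the paper's architecture; the layer sizes, bit count, verb count, and concatenation-based fusion are all assumptions.

```python
# Hypothetical sketch: fusing a predicted motion code with visual features
# for verb classification. Dimensions and the fusion scheme are assumptions.
import torch
import torch.nn as nn

class MotionCodeActionRecognizer(nn.Module):
    def __init__(self, visual_dim=2048, code_bits=16, num_verbs=125):
        super().__init__()
        # Branch that predicts each bit of the motion code from visual features.
        self.code_head = nn.Sequential(
            nn.Linear(visual_dim, 512), nn.ReLU(),
            nn.Linear(512, code_bits), nn.Sigmoid(),
        )
        # Verb classifier over visual features fused with the predicted code.
        self.verb_head = nn.Linear(visual_dim + code_bits, num_verbs)

    def forward(self, visual_features):
        code = self.code_head(visual_features)         # (B, code_bits)
        fused = torch.cat([visual_features, code], 1)  # simple concatenation
        return self.verb_head(fused), code

logits, code = MotionCodeActionRecognizer()(torch.randn(4, 2048))
```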

Estimating Motion Codes from Demonstration Videos

TL;DR – In this work, we show how motion codes (constructed using the motion taxonomy proposed in our RSS 2020 paper) can be used to improve action recognition with deep neural networks; a sketch of per-attribute motion code prediction follows below.

October 2020 · Maxat Alibayev, David Paulius, Yu Sun
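
Since a motion code decomposes into taxonomy attributes, one plausible way to estimate it from video is a classifier per attribute. The attribute names, class counts, and feature dimension below are illustrative assumptions, not the taxonomy's exact layout.

```python
# Hypothetical sketch: estimating a motion code as per-attribute
# classifications over clip features. Attribute names and class counts
# are made up for illustration.
import torch
import torch.nn as nn

ATTRIBUTES = {           # assumed taxonomy components -> number of choices
    "contact_type": 3,   # e.g. none / discontinuous / continuous
    "trajectory": 4,     # e.g. none / prismatic / revolute / both
    "recurrence": 2,     # e.g. acyclic / cyclic
}

class MotionCodePredictor(nn.Module):
    def __init__(self, feature_dim=2048):
        super().__init__()
        self.heads = nn.ModuleDict(
            {name: nn.Linear(feature_dim, n) for name, n in ATTRIBUTES.items()}
        )

    def forward(self, clip_features):
        # One classification per attribute; the concatenated predictions
        # form the estimated motion code.
        return {name: head(clip_features) for name, head in self.heads.items()}

preds = MotionCodePredictor()(torch.randn(4, 2048))
```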

A Motion Taxonomy for Manipulation Embedding

TL;DR – In this work, we introduce changes to the features of the motion taxonomy and show that encoding action verbs as motion codes captures the differences between them better than conventional word embeddings (such as Word2Vec); a toy comparison is sketched below.

July 2020 · David Paulius, Nicholas Eales, Yu Sun
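
A toy version of that comparison: with binary motion codes, verb similarity reduces to Hamming distance. The codes below are invented for illustration and are not the paper's actual assignments.

```python
# Hypothetical binary motion codes; the bit strings are invented.
codes = {
    "cut":   "110100110",
    "slice": "110100100",  # mechanically close to "cut"
    "pour":  "001011001",  # a very different manipulation
}

def hamming(a, b):
    """Number of differing bits between two motion codes."""
    return sum(x != y for x, y in zip(a, b))

print(hamming(codes["cut"], codes["slice"]))  # small -> similar motions
print(hamming(codes["cut"], codes["pour"]))   # large -> dissimilar motions

# A text-trained word embedding may instead rate "cut" and "pour" as
# similar simply because they co-occur in recipes, ignoring mechanics.
```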

Manipulation Motion Taxonomy and Coding for Robots

TL;DR – This paper introduces the motion taxonomy, a collection of robot-relevant features better suited for embedding verbs or actions than conventional word embeddings. Motion codes are constructed per verb using the taxonomy (a sketch of the construction follows below). In this work, we show with force and trajectory data that verbs assigned similar motion codes are closely related to one another.

November 2019 · David Paulius, Yongqiang Huang, Jason Meloncon, Yu Sun
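
Constructing a code amounts to concatenating the bits chosen for each taxonomy attribute. The attribute names, orderings, and bit widths below are simplified stand-ins for the taxonomy's actual components.

```python
# Hypothetical sketch of assembling a motion code from taxonomy attributes.
CONTACT = {"non-contact": "0", "contact": "1"}
TRAJECTORY = {"none": "00", "prismatic": "01", "revolute": "10", "both": "11"}
RECURRENCE = {"acyclic": "0", "cyclic": "1"}

def motion_code(contact, trajectory, recurrence):
    """Concatenate per-attribute bits into a single binary motion code."""
    return CONTACT[contact] + TRAJECTORY[trajectory] + RECURRENCE[recurrence]

# A sawing-like cut: in contact, prismatic trajectory, repeated strokes.
print(motion_code("contact", "prismatic", "cyclic"))  # -> '1011'
```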

Long Activity Video Understanding using Functional Object-Oriented Network

TL;DR – This work leverages functional object-oriented networks (FOONs) and deep learning for video understanding. With the deep network framework, we jointly recognize object and action types, which can then be used to construct new FOON structures; a minimal example of such a structure is sketched below.

December 2018 · Ahmad Babaeian Jelodar, David Paulius, Yu Sun
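
For a rough sense of what gets constructed: a FOON connects object nodes and motion nodes through functional units. The class and field names below are a simplified illustration, not the reference implementation.

```python
# Hypothetical minimal FOON-style structure: a functional unit links input
# object nodes and output object nodes through a motion node.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ObjectNode:
    name: str
    state: str  # e.g. "whole", "sliced"

@dataclass
class FunctionalUnit:
    motion: str                                  # motion node, e.g. "slice"
    inputs: list = field(default_factory=list)   # input object nodes
    outputs: list = field(default_factory=list)  # output object nodes

# One functional unit recognized from video: slicing a tomato with a knife.
unit = FunctionalUnit(
    motion="slice",
    inputs=[ObjectNode("tomato", "whole"), ObjectNode("knife", "clean")],
    outputs=[ObjectNode("tomato", "sliced")],
)
```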