
MaLGa's MLV Unit presents the MoCA Dataset

02/03/2022


MaLGa’s MLV Unit presents MoCA, a dataset created in collaboration with the Contact and RBCS Units of IIT.

MoCA is a bi-modal dataset with Motion Capture data and video sequences acquired from multiple views. The focus is on upper body actions in a cooking scenario. A specific goal is to investigate view-invariant action properties in both biological and artificial systems; in this sense, the dataset may be of interest to multiple research communities in the cognitive and computational domains.


The dataset includes 20 cooking actions, involving either one or both arms of the volunteer, some of them involving tools that may require different forces. Acquisitions were made from three different viewpoints: lateral, egocentric, and frontal. For each action, a training and a test sequence are available, each containing on average 25 repetitions of the action. Furthermore, acquisitions of more structured activities are included, in which the actions are performed in sequence toward a more complex final goal, as sketched below.
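As a quick illustration of this organisation, a minimal sketch in Python follows; the action names and path layout are placeholders, not the dataset's actual naming.

from itertools import product

# Hypothetical sketch of the dataset's organisation: 20 actions x 3 viewpoints,
# each with a training and a test sequence. Action names are stand-ins; the
# actual naming is defined by the MoCA release.
viewpoints = ["lateral", "egocentric", "frontal"]
splits = ["training", "test"]
actions = [f"action_{i:02d}" for i in range(20)]  # placeholders for the 20 cooking actions

for action, view, split in product(actions, viewpoints, splits):
    print(f"{action}/{view}/{split}")  # 20 * 3 * 2 = 120 sequences in total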

An annotation is available, which segments the single action instances in terms of time instants in the MoCap reference frame. A function then maps these time instants to the corresponding frames in the video sequences. In addition, functionalities to load, segment, and visualize the data are provided in Python and Matlab.
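To make the idea concrete, here is a minimal sketch of such segmentation and time-to-frame mapping, assuming a fixed MoCap sampling rate and video frame rate; the rates, function names, and array shapes below are illustrative assumptions, not the toolkit's actual API.

import numpy as np

MOCAP_RATE_HZ = 100.0   # assumed MoCap sampling rate (placeholder value)
VIDEO_FPS = 30.0        # assumed video frame rate (placeholder value)

def mocap_instant_to_frame(t_mocap: int) -> int:
    """Map a MoCap sample index to the closest video frame index,
    assuming both streams start at the same instant."""
    t_seconds = t_mocap / MOCAP_RATE_HZ
    return int(round(t_seconds * VIDEO_FPS))

def segment_instances(mocap_data: np.ndarray, annotation: list) -> list:
    """Cut a MoCap sequence into single action instances using
    (start, end) time instants taken from the annotation."""
    return [mocap_data[start:end] for start, end in annotation]

# Example: three annotated repetitions of one action
annotation = [(0, 250), (260, 510), (520, 770)]
mocap_data = np.zeros((800, 60))  # dummy sequence: 800 samples, 60 channels
instances = segment_instances(mocap_data, annotation)
print(len(instances), "instances; first starts at video frame",
      mocap_instant_to_frame(annotation[0][0]))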


Visit the website at the link below.



Authors using this code in their publications should cite this paper:

E. Nicora, G. Goyal, N. Noceti, A. Vignolo, A. Sciutti, F. Odone. "The MoCA dataset, kinematic and multi-view visual streams of fine-grained cooking actions." Scientific Data 7 (1), 1-15 (2020).