AROS: Affordance Recognition with One-Shot Human Stances

Pacheco-Ortega, Abel and Mayol-Cuevas, Walterio (2023) AROS: Affordance Recognition with One-Shot Human Stances. Frontiers in Robotics and AI, 10. ISSN 2296-9144

Text: pubmed-zip/versions/1/package-entries/frobt-10-1076780/frobt-10-1076780.pdf (Published Version, 32 MB)

Abstract

We present Affordance Recognition with One-Shot Human Stances (AROS), a one-shot learning approach that uses an explicit representation of interactions between highly articulated human poses and 3D scenes. The approach is one-shot in that it requires no iterative training or retraining to add new affordance instances; only one, or a small handful, of examples of the target pose are needed to describe an interaction. Given a 3D mesh of a previously unseen scene, our method predicts affordance locations that support the interactions and generates corresponding articulated 3D human bodies around them. We evaluate the approach on three public datasets of scanned real environments with varying degrees of noise. Through rigorous statistical analysis of crowdsourced evaluations, our results show that the one-shot approach is preferred up to 80% of the time over data-intensive baselines.

Item Type: Article
Subjects: Universal Eprints > Mathematical Science
Depositing User: Managing Editor
Date Deposited: 17 Jun 2023 04:36
Last Modified: 01 Nov 2023 03:40
URI: http://journal.article2publish.com/id/eprint/2170
