22/11/2021

Human-object Interaction Detection without Alignment Supervision

Mert Kilickaya, Arnold W.M. Smeulders

Keywords: human-object interactions, visual relationship detection, weakly supervised learning, visual transformers

Abstract: The goal of this paper is Human-object Interaction (HO-I) detection. HO-I detection aims to localize interacting human and object regions in an image and to classify their interaction. Recent methods have achieved significant improvements by relying on strong HO-I alignment supervision, which pairs humans with the objects they interact with and then aligns each human-object pair with its interaction category. Since collecting such annotation is expensive, in this paper we propose to detect HO-I without alignment supervision. We instead rely on image-level supervision, which only enumerates the interactions present in the image without indicating where they occur. Our paper makes three contributions: 1. We propose Align-Former, a visual-transformer based CNN that can detect HO-I with only image-level supervision. 2. Align-Former is equipped with an HO-I align layer that learns to select appropriate targets to supervise the detector. 3. We evaluate Align-Former on HICO-DET and V-COCO, and show that it outperforms existing image-level supervised HO-I detectors by a large margin (4.71 mAP improvement, from 16.14 to 20.85 on HICO-DET).
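The abstract describes an align layer that selects targets for detector supervision when only image-level interaction labels are available. A minimal sketch of one common way to do this (multiple-instance-style selection: for each interaction category listed at image level, pick the candidate human-object pair scoring highest for that category) is below. This is an illustrative assumption, not the paper's actual layer; the function name and interfaces are hypothetical.

```python
import numpy as np

def align_pairs_to_labels(pair_scores, image_labels):
    """Hypothetical MIL-style target selection (not Align-Former's exact layer).

    For each interaction category present at image level, select the
    human-object pair with the highest predicted score for that category,
    so the detector can be supervised on that pair as a pseudo-target.

    pair_scores: (num_pairs, num_classes) array of interaction scores.
    image_labels: iterable of class indices present in the image.
    Returns a dict mapping class index -> selected pair index.
    """
    return {c: int(np.argmax(pair_scores[:, c])) for c in image_labels}

# Three candidate human-object pairs, two interaction categories.
scores = np.array([
    [0.1, 0.8],  # pair 0
    [0.7, 0.2],  # pair 1
    [0.3, 0.4],  # pair 2
])
print(align_pairs_to_labels(scores, [0, 1]))  # → {0: 1, 1: 0}
```

Under this reading, the image-level labels act as a weak constraint: supervision flows only to the pairs the selection step picks, which is what makes end-to-end training possible without human-object alignment annotation.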

The talk and the respective paper are published at the BMVC 2021 virtual conference.

