22/11/2021

Exemplar-Based Early Event Prediction in Video

Zekun Zhang, Farrukh M. Koraishy, Minh Hoai

Keywords: early prediction, video prediction, exemplar

Abstract: Observing a video stream and being able to predict target events of interest before they occur is an important but challenging task due to the stochastic nature of visual events. This task requires a classifier that can separate the precursory signals that lead to the events from those that do not. However, a naïve approach to training this classifier would require seeing many examples of the target events before a model with high precision can be obtained. In this paper, we propose a method for early prediction of visual events based on an ensemble of exemplar predictors. Each exemplar predictor is associated with an instance of the target event and is trained to separate that event from negative samples. The exemplar predictors can be calibrated and integrated to create a stronger predictor. Experiments on several datasets show that the proposed exemplar-based framework outperforms other methods, yielding higher precision with fewer training samples. Our code and datasets can be found at https://github.com/cvlab-stonybrook/EnEx.
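The ensemble-of-exemplars idea in the abstract can be illustrated with a minimal sketch: each exemplar predictor is a binary classifier fit around a single positive event instance, its raw score is calibrated into a probability, and the calibrated scores are combined into one early-prediction score. This is not the authors' implementation (see the linked repository for that); the linear-SVM exemplars, Platt-style logistic calibration, score averaging, and all names and feature shapes below are illustrative assumptions.

```python
# Illustrative sketch of an ensemble of exemplar predictors.
# Assumptions (not from the paper): pre-extracted clip features,
# linear SVM exemplars, logistic calibration, mean-score integration.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression


class ExemplarPredictor:
    """One predictor per positive event instance (the 'exemplar')."""

    def __init__(self, exemplar_feat, neg_feats):
        # Train a linear classifier to separate this single exemplar
        # from a pool of negative samples.
        X = np.vstack([exemplar_feat[None, :], neg_feats])
        y = np.r_[1, np.zeros(len(neg_feats))]
        self.svm = LinearSVC(C=1.0, class_weight="balanced").fit(X, y)
        # Calibrate raw SVM margins into probabilities (a simplification:
        # calibration here reuses the training samples).
        margins = self.svm.decision_function(X).reshape(-1, 1)
        self.calib = LogisticRegression().fit(margins, y)

    def score(self, feats):
        margins = self.svm.decision_function(feats).reshape(-1, 1)
        return self.calib.predict_proba(margins)[:, 1]


class EnsembleOfExemplars:
    """Integrate calibrated scores from all exemplar predictors."""

    def __init__(self, pos_feats, neg_feats):
        self.predictors = [ExemplarPredictor(p, neg_feats) for p in pos_feats]

    def predict(self, feats):
        scores = np.stack([p.score(feats) for p in self.predictors], axis=0)
        return scores.mean(axis=0)  # higher = event more likely imminent


# Usage with random features (stand-ins for real video features):
rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, size=(5, 128))    # 5 positive event exemplars
neg = rng.normal(0.0, 1.0, size=(200, 128))  # negative samples
model = EnsembleOfExemplars(pos, neg)
print(model.predict(rng.normal(0.5, 1.0, size=(3, 128))))
```

Averaging is only one simple way to integrate the calibrated exemplar scores; the paper's calibration and integration steps may differ.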

Talk and paper published at the BMVC 2021 virtual conference.
