22/11/2021

OODformer: Out-Of-Distribution Detection Transformer

Rajat Koner, Poulami Sinhamahapatra, Karsten Roscher, Stephan Günnemann, Volker Tresp

Keywords: Out-Of-Distribution Detection, Vision Transformer, Representation Learning

Abstract: A serious problem in image classification is that a trained model may perform well on input data drawn from the same distribution as the training data, but much worse on out-of-distribution (OOD) samples. In real-world safety-critical applications in particular, it is important to know whether a new data point is OOD. To date, OOD detection has typically been addressed using confidence scores, auto-encoder-based reconstruction, or contrastive learning. However, the global image context has not yet been exploited to discriminate non-local object features between in-distribution and OOD samples. This paper proposes a first-of-its-kind OOD detection architecture, named OODformer, that leverages the contextualization capabilities of the transformer. Using the transformer as the principal feature extractor allows us to exploit object concepts and their discriminative attributes, along with their co-occurrence, via visual attention. Based on the contextualized embedding, we demonstrate OOD detection using both class-conditioned latent-space similarity and a network confidence score. Our approach shows improved generalizability across various datasets, and we achieve new state-of-the-art results on CIFAR-10/-100 and ImageNet-30.
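To make the abstract's two detection signals concrete, below is a minimal PyTorch sketch of how OOD scores could be derived from a transformer's contextualized embedding. The `encoder`, `classifier`, and the Euclidean distance to class-mean embeddings are illustrative assumptions based on the abstract's description, not the authors' released implementation.

```python
# Hypothetical sketch: score OOD-ness from a ViT-style encoder's embedding.
# `encoder` maps images to a (batch, d) feature vector (e.g. the [CLS] token);
# `classifier` is a linear head over those features. Both are assumptions here.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ood_scores(encoder, classifier, x, class_means):
    """Return (latent-distance score, confidence score) for a batch.

    class_means: (num_classes, d) tensor of per-class mean embeddings
                 computed on the in-distribution training set.
    """
    z = encoder(x)                           # (batch, d) contextualized embedding
    # Class-conditioned latent-space similarity: distance to the nearest
    # class mean; smaller distance suggests an in-distribution sample.
    dists = torch.cdist(z, class_means)      # (batch, num_classes)
    latent_score = dists.min(dim=1).values   # lower = more in-distribution
    # Network confidence score: maximum softmax probability;
    # higher confidence suggests an in-distribution sample.
    conf_score = F.softmax(classifier(z), dim=1).max(dim=1).values
    return latent_score, conf_score
```

In use, a sample would be flagged as OOD if its latent-distance score exceeds a threshold tuned on a held-out in-distribution split, or if its confidence score falls below one.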

Talk and paper published at the BMVC 2021 virtual conference.
