Abstract:
Extracting high-quality alpha mattes from natural images is a crucial problem with a wide range of real-world applications. Most current image matting techniques require a marked unknown region, known as a “trimap”, as input for estimating alpha. Because trimaps are rarely available in practice, most techniques generate them by eroding and dilating ground-truth alpha maps; this makes prior art brittle to the minor inaccuracies introduced when segmentation-based trimaps are used instead. In this paper, we introduce a novel, state-of-the-art alpha matting model, “IamAlpha”, which uses trimap adaptation as an auxiliary task to detect and correct errors in the input trimap, so that our alpha network can focus on estimating the transparency of the fine structures (e.g., hair and fur) crucial to image matting. This in turn enables high-quality matting applications in real time, at 60 fps on desktop GPUs and 30 fps on mobile hardware.
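The erosion/dilation trimap generation mentioned above can be sketched as follows. This is a generic illustration, not the paper's exact procedure: the function name `make_trimap`, the thresholds, the kernel size, and the iteration count are all illustrative assumptions, and morphology is implemented in plain NumPy for self-containment (in practice one would use a library such as OpenCV or SciPy).

```python
import numpy as np

def make_trimap(alpha, kernel=5, iters=3):
    """Derive a trimap from a ground-truth alpha matte (hypothetical sketch).

    Confidently foreground pixels -> 255, confidently background -> 0,
    and everything else (the eroded boundary band) -> 128 ("unknown").
    Thresholds and morphology parameters are illustrative assumptions.
    """
    fg = alpha > 0.95  # near-opaque pixels
    bg = alpha < 0.05  # near-transparent pixels

    # Naive binary dilation with a square structuring element of side `k`.
    def dilate(mask, k, n):
        m, pad = mask.copy(), k // 2
        for _ in range(n):
            p = np.pad(m, pad)  # pad with False
            shifts = [p[i:i + m.shape[0], j:j + m.shape[1]]
                      for i in range(k) for j in range(k)]
            m = np.stack(shifts).any(axis=0)
        return m

    # Erosion is dilation of the complement.
    def erode(mask, k, n):
        return ~dilate(~mask, k, n)

    sure_fg = erode(fg, kernel, iters)  # shrink FG away from the boundary
    sure_bg = erode(bg, kernel, iters)  # shrink BG away from the boundary
    trimap = np.full(alpha.shape, 128, np.uint8)  # default: unknown
    trimap[sure_fg] = 255
    trimap[sure_bg] = 0
    return trimap
```

Training against such synthetically eroded/dilated trimaps is precisely what makes models sensitive to the noisier boundaries of segmentation-derived trimaps at inference time, motivating the trimap-adaptation auxiliary task.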