Abstract:
Human gaze is a cost-efficient form of physiological data that reveals underlying human attentional patterns. Through the selective attention mechanism, the cognitive system focuses on task-relevant visual cues while ignoring distractors. Thanks to this ability, humans can learn efficiently from very limited training samples. Here, we aim to leverage gaze for medical image analysis on both segmentation and classification. Our proposed framework includes a backbone encoder and a Selective Attention Network (SAN) that simulates this underlying attention. By estimating actual human gaze, the SAN internally encodes information relevant to medical diagnosis, such as suspicious regions. We then design a novel Auxiliary Attention Block (AAB) that allows information from the SAN to be utilized by the backbone encoder to focus on selected areas. Specifically, this block uses a modified version of a multi-head attention layer to simulate the human visual search procedure. With this design, the SAN and AAB can be plugged into different backbones, and the framework can be applied to multiple medical image analysis tasks when equipped with task-specific heads. Our method achieves superior performance on both 3D tumor segmentation and chest X-ray diagnosis. We also show that the gaze probability map estimated by the SAN is consistent with actual gaze fixation maps obtained by board-certified doctors.
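As a minimal illustration of the kind of design the abstract describes (not the authors' implementation), the sketch below shows how an auxiliary attention block could let backbone features attend to gaze-derived features from a SAN branch using a standard multi-head attention layer rather than the paper's modified variant; all names, shapes, and the residual/normalization choices (AuxiliaryAttentionBlock, gaze_feat, etc.) are assumptions for illustration only.

```python
# Hypothetical sketch: fuse gaze-conditioned features into backbone features
# via plain multi-head cross-attention. Names and shapes are illustrative.
import torch
import torch.nn as nn

class AuxiliaryAttentionBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # Queries come from the backbone; keys/values come from the SAN branch.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, backbone_feat: torch.Tensor, gaze_feat: torch.Tensor) -> torch.Tensor:
        # backbone_feat: (B, N, C) tokens from the backbone encoder
        # gaze_feat:     (B, M, C) tokens encoding the estimated gaze map
        attended, _ = self.attn(query=backbone_feat, key=gaze_feat, value=gaze_feat)
        # Residual connection keeps the original backbone features intact.
        return self.norm(backbone_feat + attended)

# Usage: fuse gaze-conditioned features into the backbone before a task head.
block = AuxiliaryAttentionBlock(dim=256)
fused = block(torch.randn(2, 196, 256), torch.randn(2, 64, 256))
print(fused.shape)  # torch.Size([2, 196, 256])
```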