Abstract:
Vision for autonomous driving is a uniquely challenging problem: the set of tasks required for full scene understanding is large and diverse; the quality requirements for each task are stringent due to the safety-critical nature of the application; and the latency budget is tight, requiring real-time solutions. In this work we address these challenges with QuadroNet, a one-shot network that jointly produces four outputs in real time (>60 fps) on consumer-grade GPU hardware: 2D detections, instance segmentation, semantic segmentation, and monocular depth estimates. On a challenging real-world autonomous driving dataset, we demonstrate gains over a baseline approach of +2.4% mAP for detection, +3.15% mIoU for semantic segmentation, +5.05% mAP@0.5 for instance segmentation, and +1.36% in δ < 1.25 for depth prediction. We also compare our work against other multi-task learning approaches on Cityscapes and demonstrate state-of-the-art results.
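To make the "one-shot, four-output" idea concrete, the following is a minimal illustrative sketch (not the paper's actual implementation): a single shared backbone is evaluated once per frame, and four lightweight task heads reuse its features. All module names, channel sizes, and output parameterizations here are hypothetical placeholders.

```python
# Illustrative sketch only; assumes a shared backbone with four task heads,
# mirroring the abstract's description. Names and shapes are hypothetical.
import torch
import torch.nn as nn

class MultiTaskSketch(nn.Module):
    def __init__(self, num_classes=10, channels=256):
        super().__init__()
        # Shared feature extractor, computed once per frame ("one-shot").
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Four lightweight heads reuse the same shared features.
        self.det_head = nn.Conv2d(channels, num_classes + 4, 1)   # 2D box regression + class scores
        self.inst_head = nn.Conv2d(channels, num_classes, 1)      # instance mask logits
        self.sem_head = nn.Conv2d(channels, num_classes, 1)       # semantic segmentation logits
        self.depth_head = nn.Conv2d(channels, 1, 1)               # monocular depth estimate

    def forward(self, x):
        feats = self.backbone(x)
        return {
            "detection": self.det_head(feats),
            "instance": self.inst_head(feats),
            "semantic": self.sem_head(feats),
            "depth": self.depth_head(feats),
        }

# Usage: a single forward pass yields all four outputs.
model = MultiTaskSketch()
outputs = model(torch.randn(1, 3, 384, 640))
print({k: v.shape for k, v in outputs.items()})
```

Sharing one backbone across tasks is what makes the real-time budget attainable: the expensive feature computation is amortized over all four outputs instead of being repeated per task.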