30/11/2020

Multi-task Learning with Future States for Vision-based Autonomous Driving

Inhan Kim, Hyemin Lee, Joonyeong Lee, Eunseop Lee, Daijin Kim

Abstract: Human drivers consider past and future driving environments to maintain stable control of a vehicle. To emulate a human driver's behavior, we propose a vision-based autonomous driving model, called Future Actions and States Network (FASNet), which uses predicted future actions and generated future states in a multi-task learning manner. Future states are generated using an enhanced deep predictive-coding network and motion equations defined by the kinematic vehicle model. The final control values are determined by a weighted average of the predicted actions, yielding stable decisions. With these methods, the proposed FASNet generalizes well to unseen environments. To validate the proposed FASNet, we conducted several experiments, including ablation studies, in realistic three-dimensional simulations. FASNet achieves a higher Success Rate (SR) on the recent CARLA benchmarks under several conditions as compared to state-of-the-art models.
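The abstract mentions two concrete mechanisms: future states generated from the motion equations of a kinematic vehicle model, and final controls taken as a weighted average of predicted actions. The sketch below is not the authors' implementation; it assumes the standard kinematic bicycle model and an arbitrary weight vector, with hypothetical function names, purely to illustrate the two ideas.

```python
import math

def kinematic_bicycle_step(x, y, yaw, v, steer, accel, dt, wheelbase=2.7):
    """One Euler step of the standard kinematic bicycle model (an assumption;
    the paper only says 'motion equations defined by the kinematic vehicle model').

    State: position (x, y) [m], heading yaw [rad], speed v [m/s].
    Controls: front-wheel steering angle steer [rad], acceleration accel [m/s^2].
    """
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += (v / wheelbase) * math.tan(steer) * dt
    v += accel * dt
    return x, y, yaw, v

def weighted_action(actions, weights):
    """Weighted average of predicted actions (e.g. steering at future horizons),
    as a stand-in for the paper's weighted-average control fusion."""
    total = sum(weights)
    return sum(a * w for a, w in zip(actions, weights)) / total
```

For example, fusing steering predictions for three future horizons with weights favoring the nearest horizon would be `weighted_action([0.10, 0.12, 0.20], [0.5, 0.3, 0.2])`; the actual weighting scheme used by FASNet is described in the paper, not here.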

The video of this talk cannot be embedded. You can watch it here:
https://accv2020.github.io/miniconf/poster_991.html
This talk and the respective paper are published at the ACCV 2020 virtual conference.
