

DAP: A Framework for Driver Attention Prediction
Human drivers employ their attentional systems while driving to focus on critical objects and make decisions. Because gaze data can indicate human attention, collecting and analyzing gaze data has emerged in recent years as a way to improve autonomous driving technologies. In safety-critical situations, it is important to predict not only where a driver focuses their attention but also on which objects. In this work, we propose DAP, a novel framework for driver attention prediction that bridges the attention prediction gap between pixels and objects. The DAP framework is evaluated on the Berkeley DeepDrive Attention (BDD-A) dataset. DAP achieves state-of-the-art performance in both pixel-level and object-level attention prediction, in particular improving object detection accuracy from 78% to 90%. © 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.
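The abstract does not specify how DAP links pixel-level and object-level attention; one common way to bridge the two levels, shown here purely as an illustrative sketch (the function name, box format, and pooling rule are assumptions, not the paper's method), is to pool a predicted pixel-level saliency map over each detected object's bounding box to obtain per-object attention scores:

```python
import numpy as np

def object_attention_scores(saliency, boxes):
    """Illustrative sketch: score each detected object by the mean
    predicted saliency inside its bounding box (x1, y1, x2, y2),
    in pixel coordinates. Not the actual DAP architecture."""
    scores = []
    for (x1, y1, x2, y2) in boxes:
        region = saliency[y1:y2, x1:x2]
        scores.append(float(region.mean()) if region.size else 0.0)
    return scores

# Toy example: a 4x4 saliency map with all attention in the top-left.
sal = np.zeros((4, 4))
sal[0:2, 0:2] = 1.0
boxes = [(0, 0, 2, 2), (2, 2, 4, 4)]  # two hypothetical detections
print(object_attention_scores(sal, boxes))  # -> [1.0, 0.0]
```

Under this kind of pooling, objects whose boxes overlap high-saliency regions receive high attention scores, which is one way a pixel-level predictor can be read out at the object level.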