scispace - formally typeset

How does visual modality affect driver comprehension in adaptive cruise control autonomous cars?


Best insight from top research papers

Visual modality affects driver comprehension in adaptive cruise control (ACC) autonomous cars. Drivers using ACC tend to devote less attention to the forward road than during manual driving. However, visual behavior is shaped by the characteristics of the driving situation, and drivers remain responsive to perceptual cues that alert them to an impending threat. In a study examining the effects of explanation modalities on drivers' situational trust and cognitive workload, participants were more satisfied with visual-only explanations at the lower levels of situation awareness (SA L1 and SA L2) and with combined visual and auditory explanations at the higher level (SA L3). The study also found that participants reported the highest trust with SA L2 explanations, although mental workload was rated higher at that level. These findings suggest that visual modality plays a role in driver comprehension of, and trust in, ACC autonomous cars.

Answers from top 5 papers

Papers (5) · Insight
The provided paper does not specifically mention the effects of visual modality on driver comprehension in adaptive cruise control autonomous cars.
The provided paper does not specifically address the effect of visual modality on driver comprehension in adaptive cruise control autonomous cars.
The provided paper does not specifically mention the effects of visual modality on driver comprehension in adaptive cruise control autonomous cars.
Open access · Posted Content
Dongming Sun, Xiao Huang, Kailun Yang 
11 Citations
The provided paper does not discuss the effect of visual modality on driver comprehension in adaptive cruise control autonomous cars.
Open access · Proceedings Article (DOI)
Dongming Sun, Xiao Huang, Kailun Yang 
07 Oct 2019
11 Citations
The provided paper does not discuss the effect of visual modality on driver comprehension in adaptive cruise control autonomous cars.

Related Questions

What are the extra components for autonomous driving? (4 answers)
Additional components crucial for autonomous driving include active sensors such as LiDAR and radar for accurate depth information, mono cameras for interpreting 2D information and estimating lateral velocity, and a differentiable integrated prediction and planning (DIPP) framework for learning cost functions and optimizing trajectories based on the predicted trajectories of surrounding agents. These components enhance the capabilities of autonomous vehicles by providing essential data inputs, tracking multiple vehicles simultaneously, and enabling efficient prediction and planning for safe and socially compliant driving behavior. Incorporating them ensures better navigation, object detection, trajectory planning, and overall performance of autonomous vehicles across driving scenarios.
What is autonomous vehicle perception? (4 answers)
Autonomous vehicles rely on perception systems to interpret their surroundings. These systems use advanced sensors such as cameras, radars, and lidars to detect objects and ensure safety. Machine learning algorithms play a crucial role in object detection and classification within the vehicle's environment. The integration of perception and control systems is essential for efficient obstacle recognition and real-time decision-making in varied road conditions. Deep learning-based perception systems enable real-time vehicle detection, multi-target detection, and collision detection, enhancing the overall safety and functionality of autonomous driving. Continued advances in sensor technologies and machine learning methods keep improving the accuracy and reliability of autonomous vehicles' perception capabilities.
Can autonomous driving improve mobility for people with low vision? (4 answers)
Autonomous driving has the potential to improve mobility for individuals with low vision. Studies have shown that persons with visual impairments are optimistic about the enhanced mobility and independence that self-driving vehicles can provide. However, concerns about the implementation of this technology exist and have not been thoroughly explored. To address the accessibility issues faced by older adults with age-related macular degeneration, a prototype design has been developed that includes a voice-activated navigation system, a 360-degree in-vehicle camera, and a physical button for manual activation. Another study reports an autonomous mobility system designed specifically for blind individuals; it uses audio reconstruction to provide real-time information about traffic lights, crosswalks, and location, improving independence and quality of life for blind people. Overall, autonomous driving technology can enhance mobility and independence for individuals with low vision, but further research and development are needed to address these concerns and ensure accessibility.
Do valence and arousal during driving affect visual attention? (5 answers)
Valence and arousal during driving can affect visual attention. Previous research has shown that emotions influence attention differently across time: sadness produces a larger attentional blink, while fear and happiness produce smaller attentional blinks. Emotional processing can influence visual attention by increasing or decreasing the resources allocated to processing stimuli. Affective content, regardless of arousal level, can increase dwelling times and modulate visual orientation. Negative arousal enhances attentional biases toward perceptually salient stimuli, whereas valence does not affect salience biases. Innate attentional mechanisms prioritize the processing of potential threats and of opportunities for satisfying basic needs, suggesting that highly arousing, appetitive stimuli may also preferentially capture attention. Therefore, both valence and arousal can influence visual attention during driving.
How do drivers understand warnings in level 3 autonomous cars? (5 answers)
Driver understanding of warnings in level 3 autonomous cars is a critical factor in safe and effective operation. Research has shown that the level of warning interface displayed to drivers can significantly affect their decision-making and driving behavior. To meet the needs of high-level autonomous driving while maintaining driver awareness, combined warning strategies have been proposed to remind drivers effectively of their responsibilities and required actions. The transition from an active to a passive driver role in autonomous vehicles can degrade situation awareness and driving skills, which underlines the importance of studying driver recognition times in complex situations. Older drivers in particular face challenges in understanding and responding to takeover requests in level 3 autonomous vehicles because of declines in cognitive and motor capacity. Overall, understanding driver responses to warnings in level 3 autonomous cars is crucial for developing effective automated driving systems.
How can multimodal language learning be used to improve reading comprehension? (3 answers)
Multimodal language learning can improve reading comprehension by incorporating different modes of communication, such as text, images, and collaborative learning. A multimodal approach lets students make connections between different forms of information, deepening their understanding of the text. It promotes both productive and receptive use of images, linking language and image to assign meaning in a literary context. The use of images alongside text has been shown to support reading comprehension and facilitate learning. Collaborative learning, in which students work in groups to discuss and share their knowledge, also enhances reading comprehension by encouraging students to teach one another while making sense of the text. Overall, multimodal language learning offers a holistic approach to reading comprehension, combining multiple modes of communication to enhance understanding and engagement.