Topic

Turn-by-turn navigation

About: Turn-by-turn navigation is a research topic. Over its lifetime, 2,243 publications have been published within this topic, receiving 52,838 citations.


Papers
Proceedings ArticleDOI
06 Sep 2016
TL;DR: This work investigates the use of ambient light as an in-car navigation aid, shifting navigation cues to the periphery of human attention, and finds that drivers spent significantly less time glancing at the ambient light aid than at a GUI navigation display.
Abstract: Car navigation systems typically combine multiple output modalities; for example, GPS-based navigation aids show a real-time map or give spoken prompts indicating upcoming maneuvers. However, the drawback of graphical navigation displays is that drivers have to glance at them explicitly, which can distract them from the situation on the road. To decrease driver distraction when driving with a navigation system, we explore the use of ambient light as a navigation aid in the car, shifting navigation cues to the periphery of human attention. We investigated this in driving simulator studies, where we found that drivers spent significantly less time glancing at the ambient light navigation aid than at a GUI navigation display. Moreover, ambient light-based navigation was perceived as easy to use and understand, and was preferred over traditional GUI navigation displays. We discuss the implications of these outcomes for automotive personal navigation devices.

35 citations
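
To make the idea concrete, here is a minimal Python sketch of how an upcoming maneuver might be mapped to a peripheral light cue. The 10-LED strip layout, the color, and the distance-to-extent rule are illustrative assumptions, not the design the paper evaluated.

```python
# Hypothetical maneuver-to-ambient-light mapping (not the authors' design).
from dataclasses import dataclass

@dataclass
class Maneuver:
    direction: str      # "left" or "right"
    distance_m: float   # distance to the turn in meters

def ambient_cue(m: Maneuver, n_leds: int = 10) -> list:
    """Return an RGB frame for an LED strip along the base of the windshield.

    Closer maneuvers light more LEDs on the turn side, so urgency is
    encoded by spatial extent in the driver's peripheral vision rather
    than by a display that must be read.
    """
    frame = [(0, 0, 0)] * n_leds
    # Light between 1 LED (turn far away) and n_leds // 2 LEDs (turn imminent).
    extent = max(1, int((n_leds // 2) * min(1.0, 50.0 / max(m.distance_m, 1.0))))
    color = (0, 120, 255)  # a calm hue; an assumption, not taken from the study
    if m.direction == "left":
        for i in range(extent):
            frame[i] = color
    else:
        for i in range(n_leds - extent, n_leds):
            frame[i] = color
    return frame

print(ambient_cue(Maneuver("left", 30.0)))
```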

Proceedings ArticleDOI
15 Jul 2013
TL;DR: The results show that it is feasible to make a blind user to travel independently by providing the constraints required for safe navigation.
Abstract: This paper presents a novel approach of utilizing the floor plan maps posted on the buildings to infer a semantic plan that aids in the navigation of a visually impaired person. The extracted landmarks such as room numbers, doors, etc act as a parameter to infer the way points to each room. This provides a mental mapping of the environment to design a navigation framework for future use. A human motion model is used to predict a path based on how real humans ambulate towards a goal by avoiding obstacles. Travel route is presented in terms of blind understandable units, which is achieved by accurate estimation of the user's location and confirmed by extracting the landmarks posted on the doors. The results show that it is feasible to make a blind user to travel independently by providing the constraints required for safe navigation.

35 citations
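
The landmark-to-waypoint idea lends itself to a graph formulation. The following Python sketch, with a made-up corridor graph and instruction phrasing, shows one plausible reading of it; the paper's own representation and motion model are not reproduced here.

```python
# Hypothetical landmark waypoint graph inferred from a posted floor plan.
from collections import deque

FLOOR_GRAPH = {
    "entrance": ["room_101"],
    "room_101": ["entrance", "room_102"],
    "room_102": ["room_101", "room_103"],
    "room_103": ["room_102"],
}

def route(graph: dict, start: str, goal: str) -> list:
    """Breadth-first search over landmark waypoints."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return []

# Present the route in landmark-to-landmark units the user can confirm
# in place, e.g. by reading the sign posted on each door.
path = route(FLOOR_GRAPH, "entrance", "room_103")
for here, there in zip(path, path[1:]):
    print(f"Walk from {here} to {there}; confirm the sign at {there}.")
```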

Proceedings ArticleDOI
07 Apr 2017
TL;DR: Test results show that the proposed system can provide richer information about the surroundings and more accurate navigation, verifying the practicability of the newly proposed system.
Abstract: Safe navigation and detailed perception of unfamiliar environments are challenging for blind people. This paper proposes a cloud- and vision-based navigation system for the blind. The goal of the system is not only to provide navigation, but also to let blind people perceive the world in as much detail as possible and live like a sighted person. The proposed system includes a helmet with stereo cameras mounted in the front, an Android-based smartphone, a web application, and a cloud computing platform. The cloud computing platform is the core of the system; it integrates object detection and recognition, OCR (Optical Character Recognition), speech processing, vision-based SLAM (Simultaneous Localization and Mapping), and path planning, all based on deep learning algorithms. Users interact with the system by voice. The cloud platform communicates with the smartphone through Wi-Fi or 4G mobile communication. To test the system's performance, two groups of tests were conducted: one for perception and the other for navigation. Test results show that the proposed system can provide richer information about the surroundings and more accurate navigation, verifying the practicability of the newly proposed system.

35 citations
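
As an illustration of the client side of such an architecture, the sketch below uploads one camera frame to a cloud perception service and turns the reply into a sentence for text-to-speech. The endpoint URL and JSON schema are invented for this example; the paper does not publish its API.

```python
# Hypothetical client for a cloud perception/navigation service.
import requests

CLOUD_ENDPOINT = "https://example.com/blind-nav/v1/perceive"  # assumed URL

def perceive(jpeg_bytes: bytes) -> str:
    """Send one camera frame to the cloud; return a sentence to speak."""
    resp = requests.post(
        CLOUD_ENDPOINT,
        files={"frame": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        timeout=5,
    )
    resp.raise_for_status()
    # Assumed reply schema: {"objects": [...], "ocr": "...", "hint": "..."}
    result = resp.json()
    objects = ", ".join(result.get("objects", [])) or "nothing recognized"
    return f"I see {objects}. {result.get('hint', '')}".strip()

if __name__ == "__main__":
    # In the real system the sentence would go to a speech synthesizer;
    # here it is simply printed.
    with open("frame.jpg", "rb") as f:
        print(perceive(f.read()))
```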

Patent
02 May 2005
TL;DR: A navigation system for a video program viewing device generates user interfaces that enable the user to navigate among lists of personalized content, view information about individual items, update user preferences to reflect a preference for a characteristic of a program appearing in a personalized content list, receive personalized alerts regarding upcoming content, manage viewing preferences, and configure navigation system options.
Abstract: A navigation system for a video program viewing device generates user interfaces enabling the user to navigate among lists of personalized content, view information about individual content items, update user preferences to reflect a preference for a characteristic of a program appearing in a personalized content list, receive personalized alerts regarding upcoming content, manage viewing preferences, and configure navigation system options.

35 citations
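
The preference-update loop the patent describes can be pictured with a small sketch: expressing interest in one characteristic of a listed program raises that characteristic's weight, which reorders the personalized list. The weighting scheme and data layout below are illustrative assumptions, not the patented method.

```python
# Hypothetical personalized-content guide (not the patented implementation).
from collections import defaultdict

class PersonalizedGuide:
    def __init__(self, programs):
        self.programs = programs           # [{"title": str, "tags": [str, ...]}]
        self.weights = defaultdict(float)  # characteristic -> preference weight

    def prefer(self, tag: str, boost: float = 1.0) -> None:
        """User marks a program characteristic as preferred."""
        self.weights[tag] += boost

    def personalized_list(self):
        """Order programs by the summed weights of their characteristics."""
        return sorted(
            self.programs,
            key=lambda p: sum(self.weights[t] for t in p["tags"]),
            reverse=True,
        )

guide = PersonalizedGuide([
    {"title": "Evening News", "tags": ["news"]},
    {"title": "Road Trip",    "tags": ["travel", "documentary"]},
])
guide.prefer("travel")  # reorders the list in favor of travel programs
print([p["title"] for p in guide.personalized_list()])
```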

Proceedings ArticleDOI
12 May 2009
TL;DR: This paper describes an EEG-based, brain-actuated robotic system that allows users to perform navigation and visual exploration tasks between remote places over the Internet using only brain activity; experiments showed the high robustness of the system.
Abstract: This paper describes an EEG-based human brain-actuated robotic system that allows users to perform navigation and visual exploration tasks between remote places over the Internet using only brain activity. In operation, two teleoperation modes can be combined: robot navigation and camera exploration. In both modes, the user views real-time video captured by the robot's camera, merged with augmented reality items. In this representation, the user concentrates on a target area to navigate to or visually explore; a visual stimulation process then elicits the neurological phenomenon that enables the brain-computer system to decode the user's intentions. In the navigation mode, the target destination is transferred to the autonomous navigation system, which drives the robot to the desired place while avoiding collisions with obstacles detected by the laser scanner. In the camera mode, the camera is aligned with the target area to perform active visual exploration of the remote scenario. In June 2008, within the framework of the experimental methodology, five healthy subjects performed pre-established navigation and visual exploration tasks for one week between two cities separated by 260 km. On the basis of the results, a technical evaluation of the device and its main functionalities is reported. The overall result is that all subjects were able to solve all the tasks successfully, reporting no failures and showing the high robustness of the system.

35 citations
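
One way to picture the selection step in such a brain-computer teleoperation loop is sketched below: each candidate target on the augmented video is flashed repeatedly, the EEG response after each flash is scored against a subject-specific template, and the best-scoring target becomes the goal handed to the autonomous navigation system. The scoring rule and data shapes are assumptions for illustration; the paper's actual decoding pipeline is not reproduced here.

```python
# Hypothetical EEG target-selection step (not the paper's decoder).
import numpy as np

def select_target(epochs: dict, template: np.ndarray) -> str:
    """epochs maps a target id to an (n_flashes, n_samples) array of
    post-stimulus EEG; each target is scored by correlating its averaged
    response with a subject-specific response template."""
    def score(x: np.ndarray) -> float:
        return float(np.corrcoef(x.mean(axis=0), template)[0, 1])
    return max(epochs, key=lambda t: score(epochs[t]))

rng = np.random.default_rng(0)
template = rng.standard_normal(128)
epochs = {
    "doorway": rng.standard_normal((10, 128)) + template,  # attended target
    "corridor": rng.standard_normal((10, 128)),            # unattended
}
goal = select_target(epochs, template)
print(f"Navigation goal: {goal}")  # would be sent to the robot's planner
```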


Network Information
Related Topics (5)
User interface: 85.4K papers, 1.7M citations, 82% related
Object detection: 46.1K papers, 1.3M citations, 76% related
Feature extraction: 111.8K papers, 2.1M citations, 75% related
Wireless sensor network: 142K papers, 2.4M citations, 74% related
Robustness (computer science): 94.7K papers, 1.6M citations, 73% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    18
2022    27
2021    2
2020    4
2019    4
2018    6