
Showing papers by "João Guerreiro" published in 2019


Proceedings ArticleDOI
02 May 2019
TL;DR: BBeep, an assistive suitcase system that uses pre-emptive sound notifications to support blind people walking through crowded environments, is presented, and the system is observed to significantly reduce the number of imminent collisions.
Abstract: We present an assistive suitcase system, BBeep, for supporting blind people when walking through crowded environments. BBeep uses pre-emptive sound notifications to help clear a path by alerting both the user and nearby pedestrians about the potential risk of collision. BBeep tracks pedestrians, predicts their future positions in real time, and triggers sound notifications only when it anticipates a future collision. We investigate how different types and timings of sound affect nearby pedestrian behavior. In our experiments, we found that sound emission timing has a significant impact on nearby pedestrian trajectories when compared to different sound types. Based on these findings, we performed a real-world user study at an international airport, where blind participants navigated with the suitcase in crowded areas. We observed that the proposed system significantly reduces the number of imminent collisions.
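
The abstract does not detail BBeep's prediction algorithm; the following is a minimal sketch, assuming linear extrapolation of tracked pedestrian positions, of the kind of collision-anticipation logic it describes (track pedestrians, predict future positions, emit a sound only when a future collision is anticipated). All function names and thresholds are hypothetical.

```python
import math

def predict_position(pos, vel, t):
    """Linearly extrapolate a tracked position t seconds ahead."""
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

def anticipates_collision(user_pos, user_vel, ped_pos, ped_vel,
                          horizon=3.0, step=0.1, radius=0.75):
    """Return True if the user and a pedestrian are predicted to come
    within `radius` meters of each other within `horizon` seconds."""
    t = 0.0
    while t <= horizon:
        ux, uy = predict_position(user_pos, user_vel, t)
        px, py = predict_position(ped_pos, ped_vel, t)
        if math.hypot(ux - px, uy - py) < radius:
            return True
        t += step
    return False

# Emit the pre-emptive notification only when a collision is anticipated.
if anticipates_collision((0, 0), (0, 1.2), (0.5, 6.0), (0, -1.0)):
    print("beep")  # placeholder for the actual sound emission
```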

100 citations


Proceedings ArticleDOI
24 Oct 2019
TL;DR: The design of CaBot (Carry-on roBot), an autonomous suitcase-shaped navigation robot that is able to guide blind users to a destination while avoiding obstacles on their path, is presented.
Abstract: Navigation robots have the potential to overcome some of the limitations of traditional navigation aids for blind people, especially in unfamiliar environments. In this paper, we present the design of CaBot (Carry-on roBot), an autonomous suitcase-shaped navigation robot that is able to guide blind users to a destination while avoiding obstacles on their path. We conducted a user study where ten blind users evaluated specific functionalities of CaBot, such as a vibro-tactile handle to convey directional feedback; experimented to find their comfortable walking speed; and performed navigation tasks to provide feedback about their overall experience. We found that CaBot's performance highly exceeded users' expectations, who often compared it to navigating with a guide dog or sighted guide. Users' high confidence, sense of safety, and trust in CaBot position autonomous navigation robots as a promising solution to increase the mobility and independence of blind people, particularly in unfamiliar environments.
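
As a purely illustrative sketch of how a vibro-tactile handle might convey directional feedback (the paper does not specify CaBot's actual encoding; the actuator layout and names here are assumptions):

```python
# Hypothetical handle with left/right vibration actuators; the planned
# turn direction selects which side vibrates.
def handle_feedback(turn_angle_deg):
    """Map a planned turn angle to a vibro-tactile cue.

    Negative angles turn left, positive angles turn right; small angles
    produce no cue so the user keeps walking straight.
    """
    if turn_angle_deg < -10:
        return "vibrate left actuator"
    if turn_angle_deg > 10:
        return "vibrate right actuator"
    return "no vibration (keep straight)"

print(handle_feedback(-45))  # -> vibrate left actuator
```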

89 citations


Proceedings ArticleDOI
02 May 2019
TL;DR: This study presents the first systematic evaluation positioning BLE technology as a strong approach to increasing the independence of visually impaired people in airports, finding that despite the challenging environment, participants were able to complete their itineraries independently.
Abstract: People with visual impairments often have to rely on the assistance of sighted guides in airports, which prevents them from having an independent travel experience. In order to learn about their perspectives on current airport accessibility, we conducted two focus groups that discussed their needs and experiences in-depth, as well as the potential role of assistive technologies. We found that independent navigation is a main challenge and severely impacts their overall experience. As a result, we equipped an airport with a Bluetooth Low Energy (BLE) beacon-based navigation system and performed a real-world study where users navigated routes relevant for their travel experience. We found that, despite the challenging environment, participants were able to complete their itineraries independently, with few to no navigation errors and reasonable completion times. This study presents the first systematic evaluation positioning BLE technology as a strong approach to increasing the independence of visually impaired people in airports.
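
The abstract does not describe the localization method; a common building block of BLE beacon navigation is picking the beacon with the strongest received signal, sketched below under that assumption (all names and values are hypothetical):

```python
def estimate_location(scans, beacon_positions):
    """Estimate the user's position as that of the strongest beacon.

    scans: dict of beacon_id -> RSSI in dBm (values closer to 0 are stronger)
    beacon_positions: dict of beacon_id -> (x, y) coordinates in meters
    """
    strongest = max(scans, key=scans.get)
    return beacon_positions[strongest]

scans = {"gate_12": -75, "security": -62, "restroom": -58}
positions = {"gate_12": (120, 40), "security": (60, 10), "restroom": (64, 22)}
print(estimate_location(scans, positions))  # -> (64, 22)
```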

58 citations


Journal ArticleDOI
TL;DR: This forum celebrates research that helps to successfully bring the benefits of computing technologies to children, older adults, people with disabilities, and other populations that are often ignored in the design of mass-marketed products.
Abstract: Digital maps such as Google Maps, Yelp, and Waze represent an incredible HCI success: they have transformed the way people navigate and access information about the world. However, there is a twofold problem limiting who can use these systems and how they benefit. First, these platforms focus almost exclusively on data about road networks and points of interest (POIs), noticeably lacking information about pedestrian infrastructure and physical accessibility. Second, because of their graphical nature and reliance on gesture and mouse input, digital maps can be inaccessible to some users, for example those with visual or upper-body motor impairments. Thus, at a high level, there are two key accessibility problems related to accessible maps: 1) How can we collect, validate, and integrate accessibility information about the physical world into maps? 2) How can we design digital maps to be accessible to a diverse set of users across a wide range of physical, sensory, and cognitive abilities? Active research in HCI and beyond exists in both areas, but there has been no direct effort to unite this research community. To begin addressing this gap, we recently organized a Special Interest Group (SIG) at CHI 2018 entitled "Making Maps Accessible and Putting Accessibility in Maps" (Figure 1). We set forth three explicit goals: first, to bring together and network scholars and practitioners who are broadly interested in accessible maps; second, to identify grand challenges and future research trajectories; and third, to establish accessible maps as a valuable topic within HCI. Accessibility is a broad, multifaceted topic. We assembled co-organizers from both academia and industry with varying topical expertise and regional and cultural experiences. The SIG attracted roughly 25 participants, including three attending via telepresence robots, and interwove small-group brainstorming and discussion with large-group summary presentations. The two primary discussion topics were identifying key challenges and seeding potential solutions in the area of accessible maps. Below, we synthesize key themes and enumerate rich, open paths for future work that emerged from the SIG (Table 1).

47 citations


Journal ArticleDOI
TL;DR: To assess the capability of NavCog3 to promote independent mobility of individuals with visual impairments, the system was deployed and evaluated in two challenging real-world scenarios, and its usability in the wild was validated in a hotel complex temporarily equipped with NavCog3 during a conference for individuals with visual impairments.
Abstract: NavCog3 is a smartphone turn-by-turn navigation assistance system we developed, specifically designed to enable independent navigation for people with visual impairments. Using off-the-shelf Bluetooth beacons installed in the surrounding environment and a commodity smartphone carried by the user, NavCog3 achieves unparalleled localization accuracy in real-world large-scale scenarios. By leveraging its accurate localization capabilities, NavCog3 guides the user through the environment and signals the presence of semantic features and points of interest in the vicinity (e.g., doorways, shops). To assess the capability of NavCog3 to promote independent mobility of individuals with visual impairments, we deployed and evaluated the system in two challenging real-world scenarios. The first scenario demonstrated the scalability of the system, which was permanently installed in a five-story shopping mall spanning three buildings and a public underground area. During the study, 10 participants traversed three fixed routes, and 43 participants traversed free-choice routes across the environment. The second scenario validated the system's usability in the wild in a hotel complex temporarily equipped with NavCog3 during a conference for individuals with visual impairments. In the hotel, almost 14.2 hours of system usage data were collected from 37 unique users who performed 280 travels across the environment, for a total of 30,200 m traversed.
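
To illustrate the "signals the presence of semantic features and points of interest in the vicinity" behavior, here is a minimal sketch of proximity-triggered announcements; the radius, names, and data layout are assumptions, not NavCog3's actual implementation:

```python
import math

def nearby_pois(user_pos, pois, radius=5.0):
    """Yield (name, distance) for points of interest within `radius` meters."""
    for name, (x, y) in pois.items():
        d = math.hypot(x - user_pos[0], y - user_pos[1])
        if d <= radius:
            yield name, d

pois = {"doorway": (2.0, 1.0), "shop entrance": (12.0, 3.0)}
for name, d in nearby_pois((0.0, 0.0), pois):
    # In a real system this string would be spoken via text-to-speech.
    print(f"{name}, {d:.0f} meters away")
```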

43 citations


Proceedings ArticleDOI
13 May 2019
TL;DR: This study motivates the design of future navigation systems capable of verbosity-level personalization in order to keep users engaged in the current situational context while minimizing distractions.
Abstract: Navigation assistive technologies have been designed to support individuals with visual impairments during independent mobility by providing sensory augmentation and contextual awareness of their surroundings. Such information is typically provided through predefined audio-haptic interaction paradigms. However, the individual capabilities, preferences, and behavior of people with visual impairments are heterogeneous, and may change due to experience, context, and necessity. Therefore, the circumstances and modalities for providing navigation assistance need to be personalized to different users, and through time for each user. We conduct a study with 13 blind participants to explore how the desirability of messages provided during assisted navigation varies based on users' navigation preferences and expertise. The participants are guided through two different routes, one without prior knowledge and one previously studied and traversed. The guidance is provided through turn-by-turn instructions, enriched with contextual information about the environment. During navigation and follow-up interviews, we uncover that participants have diversified needs for navigation instructions based on their abilities and preferences. Our study motivates the design of future navigation systems capable of verbosity-level personalization in order to keep users engaged in the current situational context while minimizing distractions.
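
A minimal sketch of what verbosity-level personalization could look like, assuming messages are tagged with a priority and filtered by the user's setting (the categories, levels, and message texts are illustrative, not from the paper):

```python
# Priority 1 = essential turn-by-turn; higher priorities add context.
MESSAGES = [
    ("turn-by-turn", 1, "Turn right in 10 feet"),
    ("landmark", 2, "Water fountain on your left"),
    ("ambiance", 3, "You are passing a seating area"),
]

def spoken_messages(verbosity):
    """Return the messages whose priority fits the user's verbosity level."""
    return [text for _, priority, text in MESSAGES if priority <= verbosity]

print(spoken_messages(1))  # expert on a familiar route: essentials only
print(spoken_messages(3))  # first traversal: full contextual detail
```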

22 citations


Proceedings ArticleDOI
13 May 2019
TL;DR: A solution to support an independent, interactive museum experience that uses the continuous tracking of the user's location and orientation to enable a seamless interaction between Navigation and Art Appreciation.
Abstract: Museums are gradually becoming more accessible to blind people, who have shown interest in visiting museums and in appreciating visual art. Yet, their ability to visit museums is still dependent on the assistance they get from their family and friends or from the museum personnel. Based on this observation and on prior research, we developed a solution to support an independent, interactive museum experience that uses the continuous tracking of the user's location and orientation to enable a seamless interaction between Navigation and Art Appreciation. Accurate localization and context-awareness allow for turn-by-turn guidance (Navigation Mode), as well as detailed audio content when facing an artwork within close proximity (Art Appreciation Mode). In order to evaluate our system, we installed it at The Andy Warhol Museum in Pittsburgh and conducted a user study where nine blind participants followed routes of interest while learning about the artworks. We found that all participants were able to follow the intended path, immediately grasped how to switch between Navigation and Art Appreciation modes, and valued listening to the audio content in front of each artwork. Also, they showed high satisfaction and an increased motivation to visit museums more often.
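
A minimal sketch of the mode switch the abstract implies, assuming the system compares the user's tracked position and heading against the nearest artwork (the thresholds and names are hypothetical):

```python
import math

def select_mode(user_pos, heading_deg, artwork_pos,
                max_dist=2.0, max_angle=30.0):
    """Return 'art_appreciation' when the user is close to and facing an
    artwork; otherwise stay in 'navigation' mode."""
    dx = artwork_pos[0] - user_pos[0]
    dy = artwork_pos[1] - user_pos[1]
    dist = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx)) % 360
    # Smallest absolute difference between heading and bearing, in degrees.
    diff = abs((bearing - heading_deg + 180) % 360 - 180)
    if dist <= max_dist and diff <= max_angle:
        return "art_appreciation"
    return "navigation"

print(select_mode((0, 0), 72, (0.5, 1.5)))  # close and facing -> art mode
```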

22 citations


Proceedings ArticleDOI
13 May 2019
TL;DR: A web app that uses sonification, earcons, and speech synthesis to enable blind people to explore mathematical function graphs is presented; it is shown that users with a higher level of mathematical education are better able to adapt to interaction modalities that others consider more difficult.
Abstract: We present AudioFunctions.web, a web app that uses sonification, earcons, and speech synthesis to enable blind people to explore mathematical function graphs. The system is designed for personalized access through different interfaces (touchscreen, keyboard, touchpad, and mouse) on both mobile and traditional devices, in order to better adapt to different user abilities and preferences. It is also publicly available as a web service and can be directly accessed from teaching material through a hypertext link. An experimental evaluation with 13 visually impaired participants highlights that, while the usability of all the presented interaction modalities is high, users with different abilities prefer different interfaces to interact with the system. It also shows that users with a higher level of mathematical education are better able to adapt to interaction modalities that others consider more difficult.
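
The core sonification idea can be sketched as mapping swept function values to pitch, so a rising curve is heard as a rising tone; the actual mapping in AudioFunctions.web may differ, and the code below is an assumption-laden illustration:

```python
def sonify(f, x_min, x_max, steps=100, f_low=200.0, f_high=2000.0):
    """Return a list of frequencies (Hz) tracing f over [x_min, x_max],
    mapping the lowest y value to f_low and the highest to f_high."""
    xs = [x_min + (x_max - x_min) * i / (steps - 1) for i in range(steps)]
    ys = [f(x) for x in xs]
    y_min, y_max = min(ys), max(ys)
    span = (y_max - y_min) or 1.0  # avoid division by zero for constants
    return [f_low + (f_high - f_low) * (y - y_min) / span for y in ys]

freqs = sonify(lambda x: x * x, -2.0, 2.0)
# A parabola is heard as a falling-then-rising tone; feed `freqs` to any
# tone generator (the web app itself would use the browser's audio APIs).
```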

16 citations


Proceedings ArticleDOI
24 Oct 2019
TL;DR: Through this study, it was found that users create alternative interfaces that extend current screen readers' capabilities and are less conservative than mainstream solutions in notification frequency and cardinality.
Abstract: Word completion interfaces are ubiquitously available in mobile virtual keyboards; however, there is no prior research on how to design these interfaces for screen reader users. In addressing this, we propose a design space for nonvisual representation of word completions. The design space covers seven categories, aiming to identify challenges and opportunities for interaction design in an unexplored research topic. It is intended to guide the design of novel interaction techniques, serving as a framework for researchers and practitioners working on nonvisual word completion. To demonstrate its potential, we engaged blind users in an exploration of the design space, to create their own bespoke word completion solutions. Through this study we found that users create alternative interfaces that extend current screen readers' capabilities. The resulting interfaces are less conservative than mainstream solutions in notification frequency and cardinality. Customization decisions were based on perceived benefits/costs and varied depending on multiple factors, such as users' perceived prediction accuracy, potential keystroke gains, and situational restrictions.
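
Two dimensions named in the abstract, notification frequency and cardinality, can be pictured as user-tunable settings; the sketch below is a hypothetical illustration, not the paper's actual framework:

```python
from dataclasses import dataclass

@dataclass
class CompletionSettings:
    frequency: int = 3    # announce suggestions only every N keystrokes
    cardinality: int = 1  # how many suggestions to read aloud

def announcements(suggestions, keystrokes, settings):
    """Return the suggestions to read aloud, or [] if none are due yet."""
    if keystrokes % settings.frequency != 0:
        return []
    return suggestions[:settings.cardinality]

# A "less conservative" configuration: announce more suggestions, more often.
settings = CompletionSettings(frequency=2, cardinality=3)
print(announcements(["hello", "help", "held"], keystrokes=4, settings=settings))
```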

9 citations


Journal ArticleDOI
TL;DR: This article analyzes trajectories of indoor travels in four different environments, showing that rotation errors are frequent in state-of-the-art navigation assistance for people with visual impairments, and proposes a technique to anticipate the stop instruction so that the user stops rotating closer to the target rotation.
Abstract: Navigation assistive technologies are designed to support people with visual impairments during mobility. In particular, turn-by-turn navigation is commonly used to provide walk and turn instructions, without requiring any prior knowledge about the traversed environment. To ensure safe and reliable guidance, many research efforts focus on improving the localization accuracy of such instruments. However, even when the localization is accurate, imprecision in conveying guidance instructions to the user and in following the instructions can still lead to unrecoverable navigation errors. Even slight errors during rotations, amplified by the following frontal movement, can result in the user taking an incorrect and possibly dangerous path. In this article, we analyze trajectories of indoor travels in four different environments, showing that rotation errors are frequent in state-of-the-art navigation assistance for people with visual impairments. Such errors, caused by the delay between the instruction to stop rotating and when the user actually stops, result in over-rotation. To compensate for over-rotation, we propose a technique to anticipate the stop instruction so that the user stops rotating closer to the target rotation. The technique predicts over-rotation using a deep learning model that takes into account the user's current rotation speed, duration, and angle; the model is trained on a dataset of rotations performed by blind individuals. By analyzing existing datasets, we show that our approach outperforms a naive baseline that predicts over-rotation with a fixed value. Experiments with 11 blind participants also show that the proposed compensation method results in lower rotation errors (18.8° on average) compared to the non-compensated approach adopted in state-of-the-art solutions (30.1°).
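
The compensation interface can be sketched as follows, with a stand-in predictor in place of the paper's deep learning model (the coefficients and names are illustrative only):

```python
def predict_overrotation(speed_dps, duration_s, angle_deg):
    """Stand-in for the learned model: predicts degrees of overshoot from
    rotation speed, duration, and angle. Faster rotations overshoot more;
    the coefficients are made up for illustration."""
    return 0.15 * speed_dps + 2.0

def anticipated_stop_angle(target_deg, speed_dps, duration_s):
    """Issue the stop instruction early, by the predicted over-rotation,
    so the user stops rotating closer to the target."""
    overshoot = predict_overrotation(speed_dps, duration_s, target_deg)
    return target_deg - overshoot

FIXED_BASELINE_DEG = 20.0  # the naive baseline: a single fixed anticipation

# For a 90-degree turn at 120 degrees/s, stop is announced at 70 degrees.
print(anticipated_stop_angle(90.0, speed_dps=120.0, duration_s=0.75))
```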

9 citations


Proceedings ArticleDOI
13 May 2019
TL;DR: AudioFunctions.web is a web-based system that enables blind people to explore mathematical function graphs and uses sonification, earcons and speech synthesis to convey the overall shape of a function graph, its key points of interest, and accurate quantitative information at any given point.
Abstract: AudioFunctions.web is a web-based system that enables blind people to explore mathematical function graphs. It uses sonification, earcons, and speech synthesis to convey the overall shape of a function graph, its key points of interest, and accurate quantitative information at any given point. The system can be directly linked from digital documents, such as teaching material, and it is designed to be accessed through multiple interfaces, such as touchscreen, keyboard, touchpad, and mouse, on both mobile devices and personal computers. This way, AudioFunctions.web can adapt to different user abilities, preferences, and needs.

Posted Content
TL;DR: A set of studies performed with the target population, both novices and experts, using a variety of methods targeted at identifying and verifying challenges and coping mechanisms, is presented.
Abstract: Blind people face significant challenges when using smartphones. The focus on improving non-visual mobile accessibility has been at the level of touchscreen access. Our research investigates the challenges faced by blind people in their everyday usage of mobile phones. In this paper, we present a set of studies performed with the target population, both novices and experts, using a variety of methods targeted at identifying and verifying challenges and coping mechanisms. Through a multiple-methods approach, we identify and validate challenges locally with a diverse set of user expertise and devices, and at scale through analyses of the largest Android and iOS dedicated forums for blind people. We contribute a prioritized corpus of smartphone challenges for blind people, and a discussion of a set of directions for future research that tackle the open and often overlooked challenges.

Proceedings ArticleDOI
02 May 2019
TL;DR: This workshop intends to bring communities together to increase awareness of recent advances in blind navigation assistive technologies, benefit from diverse perspectives and expertise, discuss open research challenges, and explore avenues for multi-disciplinary collaborations.
Abstract: Independent navigation in unfamiliar and complex environments is a major challenge for blind people. This challenge motivates a multi-disciplinary effort in the CHI community, spanning related disciplines such as accessible computing, cognitive sciences, computer vision, and ubiquitous computing, aimed at developing assistive technologies to support the orientation and mobility of blind people. This workshop intends to bring these communities together to increase awareness of recent advances in blind navigation assistive technologies, benefit from diverse perspectives and expertise, discuss open research challenges, and explore avenues for multi-disciplinary collaborations. Interactions are fostered through a panel on Open Challenges and Avenues for Interdisciplinary Collaboration, Minute-Madness presentations, and a Hands-On Session where workshop participants can hack (design or prototype) new solutions to tackle open research challenges. An expected outcome is the emergence of new collaborations and research directions that can result in novel assistive technologies to support independent blind navigation.