Author

Yu Meng

Bio: Yu Meng is an academic researcher from Northeastern University (China). The author has contributed to research in the topics of object detection and persuasive technology. The author has an h-index of 1 and has co-authored 7 publications receiving 5 citations. Previous affiliations of Yu Meng include Northeastern University and the University of California, Irvine.

Papers
Proceedings ArticleDOI
01 Nov 2018
TL;DR: An efficient localization technology that uses a single camera to inspect framed pictures in the surrounding environment and quickly identify the device's location; the technology has been used to deploy indoor drones so that each drone can be location-aware.
Abstract: Service-oriented Internet-of-Things (IoT) systems are being deployed to provide intelligent, personal indoor services that must be location- and context-aware. In this paper, we present an efficient localization technology that uses a single camera to inspect framed pictures in the surrounding environment to quickly identify the device's location. Our idea is motivated by the popular ArUco markers. By using simple transformation algorithms, our technology converts framed pictures into ArUco markers and then identifies their marker IDs and poses. We have used the technology to deploy indoor drones so that each drone can be location-aware. The drone camera also streams video frames back to edge servers for human face recognition in order to identify the locations of known individuals. We believe our work of using artistic pictures as location markers can offer an attractive, low-cost localization service for many smart indoor IoT applications.
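
The marker detection and pose estimation described above can be illustrated with OpenCV's ArUco module. The following is a minimal sketch, assuming the pre-4.7 cv2.aruco API and hypothetical camera intrinsics and marker size; the paper's picture-to-marker transformation itself is not shown.

```python
import cv2
import numpy as np

# Hypothetical camera intrinsics and marker size; real values come from calibration.
CAMERA_MATRIX = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
DIST_COEFFS = np.zeros(5)
MARKER_LENGTH_M = 0.20  # assumed physical side length of the framed picture / marker

def locate_from_frame(frame):
    """Detect ArUco markers in a camera frame and estimate their poses."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
    if ids is None:
        return []
    rvecs, tvecs = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_LENGTH_M, CAMERA_MATRIX, DIST_COEFFS)[:2]
    # Each (id, rvec, tvec) gives a marker's pose in the camera frame; combining it
    # with the marker's known room position would yield the device's location.
    return list(zip(ids.flatten(), rvecs, tvecs))
```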

4 citations

Journal ArticleDOI
TL;DR: An intelligent AI agent that classifies drivers into different driving-personality groups to offer personalized feedback is presented, along with a cloud-based Android application that collects, analyzes, and learns from a driver's past driving data to provide personalized, constructive feedback accordingly.
Abstract: Nowadays, AI has many applications in everyday human activities such as exercise, eating, sleeping, and automobile driving. Tech companies can apply AI to identify individual behaviors (e.g., walking, eating, driving), analyze them, and offer personalized feedback to help individuals make improvements accordingly. While offering personalized feedback is more beneficial for drivers, most smart driver systems in the current market do not use it. This paper presents AutoCoach, an intelligent AI agent that classifies drivers into different driving-personality groups to offer personalized feedback. We have built a cloud-based Android application to collect, analyze, and learn from a driver's past driving data and provide personalized, constructive feedback accordingly. Our GUI provides real-time user feedback, with both warnings and rewards for the driver. We conducted an on-the-road pilot user study in which drivers were asked to use different agent versions to compare personality-based feedback against non-personality-based feedback. The study results demonstrate our design's feasibility and effectiveness in improving the user experience with a personality-based driving agent, with 61% overall acceptance that it is more accurate than the non-personality-based version.
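
The abstract does not detail how drivers are grouped by personality; as a rough illustration of grouping drivers from aggregated trip statistics, here is a hedged sketch using k-means clustering with scikit-learn. The feature names, values, and number of groups are hypothetical and not taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-driver features aggregated from past trips:
# [hard_brakes_per_100km, rapid_accels_per_100km, mean_speed_over_limit_kmh]
driver_features = np.array([
    [1.2, 0.8, 2.0],
    [6.5, 5.1, 9.3],
    [3.0, 2.2, 4.1],
    [7.8, 6.0, 11.0],
])

# Standardize the features, then group drivers into personality clusters.
scaled = StandardScaler().fit_transform(driver_features)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scaled)
print(kmeans.labels_)  # cluster label per driver, e.g. "cautious" vs "aggressive"
```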

4 citations

Proceedings ArticleDOI
01 Nov 2019
TL;DR: The idea of a memory factor is proposed, which decides when to provide feedback to drivers based on their personality and identifies the most critical behaviors within a flexible time period.
Abstract: AutoCoach is an intelligent agent intended to improve automobile drivers' performance by applying persuasive technology. Systems such as advanced driver-assistance systems (ADAS) and some usage-based insurance (UBI) models share the aim of increasing car and road safety. However, most prior models do not consider the differences between driving habits. The AutoCoach design includes two unique components to build an effective persuasive system. The first component is personality classification, which recognizes drivers' personalities by analyzing driving behavior patterns. The second component is the rewarding system, which determines the current driving behavior's risk score based on immediate past behavior. We propose the idea of a memory factor, which decides when to provide feedback to drivers based on their personality and identifies the most critical behaviors within a flexible time period. AutoCoach then decides on feedback to maintain safe driving or to raise awareness of risky driving habits.
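
As an illustration of how a recency-weighted "memory factor" might combine recent risky events into a risk score that triggers personality-dependent feedback, here is a minimal sketch; the exponential decay, half-life, severities, and thresholds are assumptions, not the paper's actual formulation.

```python
import math

def risk_score(events, now, half_life_s=60.0):
    """Exponentially decay past event severities so recent behavior dominates.

    events: list of (timestamp_s, severity) tuples for risky maneuvers.
    """
    decay = math.log(2) / half_life_s
    return sum(sev * math.exp(-decay * (now - t)) for t, sev in events)

def should_give_feedback(events, now, personality="aggressive"):
    # Hypothetical personality-dependent thresholds: cautious drivers receive
    # feedback less often than aggressive ones.
    threshold = {"cautious": 5.0, "aggressive": 3.0}[personality]
    return risk_score(events, now) >= threshold

# Example: two hard-braking events, the more recent one dominating the score.
events = [(0.0, 2.0), (50.0, 3.0)]
print(should_give_feedback(events, now=60.0))
```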

3 citations

Proceedings ArticleDOI
01 Nov 2019
TL;DR: A new picture-based localization service, PicPose, is presented that extracts feature points from a camera-captured image and matches them against the original wall picture to compute pose, so that even partially visible pictures can be used for localization, which is impossible for ArPico and ArUco.
Abstract: Device self-localization is an important capability for many IoT applications that require mobility in their service capabilities. In our previous work, we designed the ArPico method for robot indoor localization. By placing and recognizing pre-installed pictures on walls, robots can use low-cost cameras to identify their positions by referencing the pictures' precise locations. However, ArPico requires all pictures to have clear rectangular borders for pose computation, and some real-world pictures do not have clear, thick borders. Moreover, some pictures may have odd shapes or be only partially visible. To address these problems, we present a new picture-based localization service, PicPose. PicPose extracts feature points from a camera-captured image and matches them against the original wall picture to compute the pose. Using PicPose, even partially visible pictures can be used for localization, which is impossible for ArPico and ArUco. We present our implementation and experimental results in this paper.
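
The feature-matching and homography step PicPose describes can be sketched with ORB features and RANSAC in OpenCV. This is an illustrative outline under assumed parameters, not the PicPose implementation; recovering the camera pose from the homography is only indicated in a comment.

```python
import cv2
import numpy as np

def match_picture(reference_img, camera_img, min_matches=10):
    """Match feature points between a stored wall picture and a camera view,
    then estimate the homography mapping the reference picture into the view."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_ref, des_ref = orb.detectAndCompute(reference_img, None)
    kp_cam, des_cam = orb.detectAndCompute(camera_img, None)
    if des_ref is None or des_cam is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_cam), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None  # picture not (sufficiently) visible in this frame

    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_cam[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # Decomposing H with the camera intrinsics (cv2.decomposeHomographyMat) would
    # yield candidate rotations/translations, i.e. the camera pose relative to the
    # picture's known location.
    return H
```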

1 citation

Journal ArticleDOI
TL;DR: An autonomous moving robot that can self-localize using its on-board camera and the PicPose technology is built; experiments show that the localization methods are practical, have very good accuracy, and can be used for real-time robot navigation.
Abstract: Localization is an important technology for smart services such as autonomous surveillance, disinfection, or delivery robots in future distributed indoor IoT applications. Visual-based localization (VBL) is a promising self-localization approach that identifies a robot's location in an indoor or underground 3D space by using its camera to scan and match the robot's surrounding objects and scenes. In this study, we present a pictorial-planar-surface-based 3D object localization framework. We have designed two object detection methods for localization, ArPico and PicPose. ArPico detects and recognizes framed pictures by converting them into binary marker codes for matching with known codes in the library. It then uses the corner points of a picture's border to identify the camera's pose in 3D space. PicPose detects the pictorial planar surface of an object in a camera view and produces the pose output by matching the feature points in the view with those in the original picture and computing the homography that maps the object's actual location in the 3D real-world map. We have built an autonomous moving robot that can self-localize using its on-board camera and the PicPose technology. The experimental study shows that our localization methods are practical, have very good accuracy, and can be used for real-time robot navigation.
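
The ArPico pose step, recovering the camera pose from the corner points of a framed picture of known physical size, can be sketched with cv2.solvePnP as below; the picture dimensions and camera intrinsics are hypothetical placeholders.

```python
import cv2
import numpy as np

# Hypothetical physical size of the framed picture (metres) and camera intrinsics.
W, H_PIC = 0.60, 0.40
OBJECT_CORNERS = np.array([[0, 0, 0], [W, 0, 0], [W, H_PIC, 0], [0, H_PIC, 0]],
                          dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
DIST = np.zeros(5)

def camera_pose_from_corners(image_corners):
    """image_corners: 4x2 pixel coordinates of the detected picture border,
    ordered consistently with OBJECT_CORNERS. Returns (rvec, tvec) of the
    picture in the camera frame; inverting that transform gives the camera's
    pose relative to the picture's known room position."""
    ok, rvec, tvec = cv2.solvePnP(OBJECT_CORNERS,
                                  np.asarray(image_corners, dtype=np.float64),
                                  K, DIST)
    return (rvec, tvec) if ok else None
```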

1 citation


Cited by
Journal ArticleDOI
TL;DR: In this paper, the authors present a systematic review of the state of the art in motivational technologies for transportation safety, focusing on reducing accident likelihood and mitigating accident consequences, and highlight the importance of aligning motivational design with the cognitive demand of the transportation task.

9 citations

Journal ArticleDOI
TL;DR: An intelligent AI agent that classifies drivers into different driving-personality groups to offer personalized feedback is presented, along with a cloud-based Android application that collects, analyzes, and learns from a driver's past driving data to provide personalized, constructive feedback accordingly.
Abstract: Nowadays, AI has many applications in everyday human activities such as exercise, eating, sleeping, and automobile driving. Tech companies can apply AI to identify individual behaviors (e.g., walking, eating, driving), analyze them, and offer personalized feedback to help individuals make improvements accordingly. While offering personalized feedback is more beneficial for drivers, most smart driver systems in the current market do not use it. This paper presents AutoCoach, an intelligent AI agent that classifies drivers into different driving-personality groups to offer personalized feedback. We have built a cloud-based Android application to collect, analyze, and learn from a driver's past driving data and provide personalized, constructive feedback accordingly. Our GUI provides real-time user feedback, with both warnings and rewards for the driver. We conducted an on-the-road pilot user study in which drivers were asked to use different agent versions to compare personality-based feedback against non-personality-based feedback. The study results demonstrate our design's feasibility and effectiveness in improving the user experience with a personality-based driving agent, with 61% overall acceptance that it is more accurate than the non-personality-based version.

4 citations

Journal ArticleDOI
TL;DR: The proposed passive visual method based on pedestrian detection and projection transformation delivers high positioning performance and relies on security cameras installed in non-private areas so that pedestrians do not have to take photos.
Abstract: Indoor positioning applications are developing at a rapid pace; active visual positioning is one method that is applicable to mobile platforms. Other methods include Wi-Fi, CSI, and PDR approaches; however, their positioning accuracy usually cannot match that of the active visual method. Active visual users, however, must take a photo to obtain location information, raising confidentiality and privacy issues. To address these concerns, we propose a solution for passive visual positioning based on pedestrian detection and projection transformation. This method consists of three steps: pretreatment, pedestrian detection, and pose estimation. Pretreatment includes camera calibration and camera installation. In pedestrian detection, features are extracted by deep convolutional neural networks using neighboring-frame detection results and map information as the region-of-interest attention model (RIAM). Pose estimation computes accurate localization results through projection transformation (PT). This system relies on security cameras installed in non-private areas, so pedestrians do not have to take photos. Experiments were conducted in a hall about 100 square meters in size, with 41 test points for the localization experiment. The results show that the positioning error was 0.48 m (RMSE) and the 90% error was 0.73 m. Therefore, the proposed passive visual method delivers high positioning performance.
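
The projection-transformation step, mapping a detected pedestrian's foot point in the security-camera image to floor-plane coordinates, can be sketched as a planar homography applied with cv2.perspectiveTransform; the calibration correspondences below are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical calibration: pixel positions of four floor landmarks and their
# known floor-plane coordinates in metres (measured during camera installation).
pixel_pts = np.array([[100, 400], [540, 410], [500, 200], [150, 190]], dtype=np.float32)
floor_pts = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 8.0], [0.0, 8.0]], dtype=np.float32)
H, _ = cv2.findHomography(pixel_pts, floor_pts)

def pedestrian_floor_position(foot_pixel):
    """Project the bottom-center pixel of a pedestrian bounding box onto the floor plane."""
    pt = np.array([[foot_pixel]], dtype=np.float32)  # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]     # (x, y) in metres

# Example: a detector reports a bounding box; its bottom-center maps to floor coordinates.
print(pedestrian_floor_position((320, 300)))
```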

2 citations

Proceedings ArticleDOI
01 Nov 2019
TL;DR: A new picture-based localization service, PicPose, is presented that extracts feature points from a camera-captured image and matches them against the original wall picture to compute pose, so that even partially visible pictures can be used for localization, which is impossible for ArPico and ArUco.
Abstract: Device self-localization is an important capability for many IoT applications that require mobility in their service capabilities. In our previous work, we designed the ArPico method for robot indoor localization. By placing and recognizing pre-installed pictures on walls, robots can use low-cost cameras to identify their positions by referencing the pictures' precise locations. However, ArPico requires all pictures to have clear rectangular borders for pose computation, and some real-world pictures do not have clear, thick borders. Moreover, some pictures may have odd shapes or be only partially visible. To address these problems, we present a new picture-based localization service, PicPose. PicPose extracts feature points from a camera-captured image and matches them against the original wall picture to compute the pose. Using PicPose, even partially visible pictures can be used for localization, which is impossible for ArPico and ArUco. We present our implementation and experimental results in this paper.

1 citation

Journal ArticleDOI
TL;DR: An autonomous moving robot that can self-localize using its on-board camera and the PicPose technology is built; experiments show that the localization methods are practical, have very good accuracy, and can be used for real-time robot navigation.
Abstract: Localization is an important technology for smart services such as autonomous surveillance, disinfection, or delivery robots in future distributed indoor IoT applications. Visual-based localization (VBL) is a promising self-localization approach that identifies a robot's location in an indoor or underground 3D space by using its camera to scan and match the robot's surrounding objects and scenes. In this study, we present a pictorial-planar-surface-based 3D object localization framework. We have designed two object detection methods for localization, ArPico and PicPose. ArPico detects and recognizes framed pictures by converting them into binary marker codes for matching with known codes in the library. It then uses the corner points of a picture's border to identify the camera's pose in 3D space. PicPose detects the pictorial planar surface of an object in a camera view and produces the pose output by matching the feature points in the view with those in the original picture and computing the homography that maps the object's actual location in the 3D real-world map. We have built an autonomous moving robot that can self-localize using its on-board camera and the PicPose technology. The experimental study shows that our localization methods are practical, have very good accuracy, and can be used for real-time robot navigation.

1 citation