Author

John Barry

Other affiliations: Science Foundation Ireland
Bio: John Barry is an academic researcher from the Institute of Technology, Tralee. The author has contributed to research in topics: Sensor fusion & Radar engineering. The author has an h-index of 3 and has co-authored 5 publications receiving 24 citations. Previous affiliations of John Barry include Science Foundation Ireland.

Papers
Journal ArticleDOI
18 Mar 2021-Sensors
TL;DR: In this article, the authors provide an end-to-end review of the hardware and software methods required for sensor fusion object detection in autonomous driving applications. They conclude by highlighting some of the challenges in the sensor fusion field and propose possible future research directions for automated driving systems.
Abstract: With the significant advancement of sensor and communication technology and the reliable application of obstacle detection techniques and algorithms, automated driving is becoming a pivotal technology that can revolutionize the future of transportation and mobility. Sensors are fundamental to the perception of vehicle surroundings in an automated driving system, and the use and performance of multiple integrated sensors can directly determine the safety and feasibility of automated driving vehicles. Sensor calibration is the foundation block of any autonomous system and its constituent sensors and must be performed correctly before sensor fusion and obstacle detection processes may be implemented. This paper evaluates the capabilities and the technical performance of sensors which are commonly employed in autonomous vehicles, primarily focusing on a large selection of vision cameras, LiDAR sensors, and radar sensors and the various conditions in which such sensors may operate in practice. We present an overview of the three primary categories of sensor calibration and review existing open-source calibration packages for multi-sensor calibration and their compatibility with numerous commercial sensors. We also summarize the three main approaches to sensor fusion and review current state-of-the-art multi-sensor fusion techniques and algorithms for object detection in autonomous driving applications. The current paper, therefore, provides an end-to-end review of the hardware and software methods required for sensor fusion object detection. We conclude by highlighting some of the challenges in the sensor fusion field and propose possible future research directions for automated driving systems.
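To make the dependence of fusion on calibration concrete, the following minimal sketch (not from the paper) projects LiDAR points into a camera image using an assumed extrinsic transform and intrinsic matrix; the matrices and points are placeholder values.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project Nx3 LiDAR points into pixel coordinates.

    points_lidar : (N, 3) points in the LiDAR frame.
    T_cam_lidar  : (4, 4) extrinsic transform, LiDAR frame -> camera frame.
    K            : (3, 3) camera intrinsic matrix.
    Returns (M, 2) pixel coordinates for the points in front of the camera.
    """
    # Homogeneous coordinates, then apply the extrinsic calibration.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points with positive depth (in front of the camera).
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Perspective projection with the intrinsic matrix.
    pix = (K @ pts_cam.T).T
    return pix[:, :2] / pix[:, 2:3]

# Placeholder calibration: identity extrinsics and a generic 640x480 pinhole camera.
T_cam_lidar = np.eye(4)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
points = np.array([[0.5, 0.2, 10.0], [-1.0, 0.3, 8.0]])
print(project_lidar_to_image(points, T_cam_lidar, K))
```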

162 citations

Proceedings ArticleDOI
03 Sep 2020
TL;DR: A review on autonomous driving architectures is presented, classifying them into technical or functional architectures depending on the domain of their definition, and the perception stage of self-driving solutions is analysed as a component of the architectures.
Abstract: Over the last 10 years, huge advances have been made in the areas of sensor technologies and processing platforms, pushing forward developments in the field of autonomous vehicles, mostly represented by self-driving cars. However, the complexity of these systems has also increased in terms of the hardware and software within them, especially for the perception stage, in which the goal is to create a reliable representation of the vehicle and the world. In order to manage this complexity, several architectural models have been proposed as guidelines to design, develop, operate and deploy self-driving solutions for real applications. In this work, a review of autonomous driving architectures is presented, classifying them into technical or functional architectures depending on the domain of their definition. In addition, the perception stage of self-driving solutions is analysed as a component of the architectures, detailing the sensing part and how data fusion is used to perform localisation, mapping and object detection. Finally, the paper concludes with additional thoughts on the current status and future trends in the field.
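As a generic illustration of the data-fusion step used for localisation (not a reconstruction of any architecture reviewed in the paper), the sketch below fuses two independent position estimates by inverse-variance weighting; the sensor values and uncertainties are assumptions.

```python
import numpy as np

def fuse_estimates(x1, var1, x2, var2):
    """Inverse-variance (maximum-likelihood) fusion of two independent
    estimates of the same quantity.

    Returns the fused estimate and its variance; the fused variance is
    always smaller than either input variance.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    x_fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    var_fused = 1.0 / (w1 + w2)
    return x_fused, var_fused

# Placeholder numbers: a GNSS fix (2 m std dev) and a map-matched LiDAR
# position (0.5 m std dev) along one axis.
x, var = fuse_estimates(x1=103.4, var1=2.0**2, x2=102.8, var2=0.5**2)
print(f"fused position: {x:.2f} m, std dev: {np.sqrt(var):.2f} m")
```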

29 citations

Proceedings ArticleDOI
11 Jun 2020
TL;DR: The case of autonomous systems for large heavy vehicles operating off-road in industrial environments, using camera, LiDAR, and radar sensors, is considered, and a solution implemented in a Python environment is detailed.
Abstract: Industry 4.0, or the fourth industrial revolution, elevates the computerization of Industry 3.0 and enhances it with smart and autonomous systems driven by data and machine learning. This paper reviews the advantages and disadvantages of sensors and the architecture of a multi-sensor setup for object detection. Here we consider the case of autonomous systems for large heavy vehicles operating off-road in industrial environments, using camera, LiDAR, and radar sensors. Understanding the vehicle's surroundings is a vital task in autonomous operation, where personnel and other obstacles present a significant hazard of collision. This review further discusses the challenges of time synchronisation in sensor data acquisition for multi-modal sensor fusion for personnel and object detection, and details a solution implemented in a Python environment.
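The paper's Python implementation is not reproduced here; as a sketch of one common approach to the time-synchronisation problem it describes, the snippet below pairs each camera frame with the nearest LiDAR and radar samples within a tolerance, using made-up timestamps and sample rates.

```python
import bisect

def nearest_within(timestamps, t, tol):
    """Return the timestamp in the sorted list closest to t, or None if
    the closest one is further away than tol (seconds)."""
    i = bisect.bisect_left(timestamps, t)
    candidates = timestamps[max(i - 1, 0):i + 1]
    best = min(candidates, key=lambda s: abs(s - t), default=None)
    return best if best is not None and abs(best - t) <= tol else None

def associate(camera_ts, lidar_ts, radar_ts, tol=0.05):
    """Pair each camera frame with the nearest LiDAR and radar samples.
    Frames with no match inside the tolerance are dropped."""
    triples = []
    for t in camera_ts:
        tl = nearest_within(lidar_ts, t, tol)
        tr = nearest_within(radar_ts, t, tol)
        if tl is not None and tr is not None:
            triples.append((t, tl, tr))
    return triples

# Placeholder timestamps (seconds): camera at 30 Hz, LiDAR at 10 Hz, radar at 20 Hz.
camera = [i / 30 for i in range(30)]
lidar = [i / 10 for i in range(10)]
radar = [i / 20 for i in range(20)]
print(associate(camera, lidar, radar)[:3])
```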

9 citations

Proceedings ArticleDOI
03 Sep 2020
TL;DR: This work presents a comprehensive review of Intersection Management Systems (IMS) from a component-based point of view, with a special remark on how IoT-based solutions have recently been adopted in this area.
Abstract: Intersections or junctions are critical points in transportation systems due to their dynamic behavior, making them prone to safety or efficiency problems. For this reason, the use of technology in monitoring intersections has increased over the last years. In addition, different proposals based on the Internet of Things (IoT) have been used to handle different traffic issues at intersections, taking advantage of the features it provides regarding hardware, software and communications. This work presents a comprehensive review on Intersection Management Systems (IMS), from a component-based point of view, and a special remark on how IoT-based solutions have been adopted recently in this area. A proposed architecture for the implementation of an IoT-ready IMS is presented as well.
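As an illustrative sketch only (the field names, topic, and schema are assumptions, not taken from the proposed architecture), the snippet below shows the kind of detection event an IoT-ready intersection sensor node might publish.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class IntersectionEvent:
    """Hypothetical detection event published by an intersection sensor node."""
    intersection_id: str   # which junction the node monitors
    sensor_id: str         # e.g. camera or radar unit identifier
    object_class: str      # "vehicle", "pedestrian", "cyclist", ...
    approach: str          # approach arm, e.g. "north"
    speed_mps: float       # estimated speed in metres per second
    timestamp: float       # Unix time of the detection

event = IntersectionEvent(
    intersection_id="junction-042",
    sensor_id="radar-north-1",
    object_class="vehicle",
    approach="north",
    speed_mps=11.3,
    timestamp=time.time(),
)

# Serialised payload; in an IoT deployment this would typically be published
# to a message broker (e.g. over MQTT) under a topic such as
# "ims/junction-042/detections".
payload = json.dumps(asdict(event))
print(payload)
```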

7 citations

Proceedings ArticleDOI
01 Jun 2020
TL;DR: Algorithms and methods are described for establishing the path kinematics of Cartesian axes of motion pick and place systems which must avoid varying obstacle profiles and which have the potential for path intersections with other pick and place systems within a shared working environment.
Abstract: Combined discrete and continuous event simulations provide a means of investigating the influence of the many factors affecting the productivity of complex electromechanical systems. This paper describes algorithms and methods for establishing the path kinematics of Cartesian axes of motion pick and place systems which must avoid varying obstacle profiles and which have the potential for path intersections with other pick and place systems within a shared working environment. Where intersections arise, one pick and place device must, in accordance with pre-established prioritization, decelerate and wait for another pick and place device to vacate the zone of conflict. Path kinematics represent a continuous event aspect of the simulation under development while awaiting permission to proceed represents a discrete event aspect of the simulation. A requirement of the research is that the kinematics only include periods of constant acceleration and constant velocity and that any deceleration must continue substantially along the original path. The algorithm and methods presented are concise and may be applicable and convenient to apply in the path control of Cartesian axis of motion devices.
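The constraint of constant-acceleration and constant-velocity segments corresponds to a trapezoidal (or triangular) velocity profile; the sketch below, with placeholder axis limits rather than values from the paper, computes the segment durations for a single Cartesian axis.

```python
def trapezoidal_profile(distance, v_max, a_max):
    """Return the segment durations (accelerate, cruise, decelerate) for a
    move of `distance` along one axis with constant acceleration a_max and
    velocity capped at v_max. Falls back to a triangular profile when the
    move is too short to reach v_max.
    """
    d_accel = v_max**2 / (2 * a_max)          # distance needed to reach v_max
    if 2 * d_accel >= distance:
        # Triangular profile: accelerate then decelerate, never cruising.
        t_accel = (distance / a_max) ** 0.5
        return t_accel, 0.0, t_accel
    t_accel = v_max / a_max
    t_cruise = (distance - 2 * d_accel) / v_max
    return t_accel, t_cruise, t_accel

# Placeholder axis limits: 0.8 m move, 0.5 m/s max velocity, 2.0 m/s^2 acceleration.
t_a, t_c, t_d = trapezoidal_profile(0.8, 0.5, 2.0)
print(f"accelerate {t_a:.3f} s, cruise {t_c:.3f} s, decelerate {t_d:.3f} s, "
      f"total {t_a + t_c + t_d:.3f} s")
```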

Cited by
Journal ArticleDOI
TL;DR: In this paper, a multi-criteria decision-making (MCDM) framework combining a self-confidence aggregation approach and a social trust network is proposed to meet the requirements of selecting unmanned ground delivery vehicles (UGDVs) and to support their application in community delivery.
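The paper's self-confidence aggregation and social-trust mechanisms are not reproduced here; purely as a generic illustration of a multi-criteria decision-making scoring step, the sketch below ranks hypothetical delivery-vehicle candidates with a weighted sum over assumed criteria and weights.

```python
import numpy as np

# Hypothetical candidates scored on three criteria (higher is better):
# payload capacity, battery range, and perceived safety.
candidates = ["UGDV-A", "UGDV-B", "UGDV-C"]
scores = np.array([
    [0.7, 0.9, 0.6],
    [0.8, 0.6, 0.9],
    [0.5, 0.8, 0.8],
])
weights = np.array([0.3, 0.3, 0.4])   # assumed criterion weights, summing to 1

# Weighted-sum aggregation, the simplest MCDM scoring rule.
overall = scores @ weights
ranking = sorted(zip(candidates, overall), key=lambda kv: kv[1], reverse=True)
for name, s in ranking:
    print(f"{name}: {s:.2f}")
```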

55 citations

Journal ArticleDOI
03 Oct 2021
TL;DR: The technologies that are being used or developed to perceive users' intentions for natural and intuitive in-vehicle interaction are discussed, along with the challenges that need to be overcome to attain truly interactive AVs.
Abstract: With rapid advances in the field of autonomous vehicles (AVs), the ways in which human–vehicle interaction (HVI) will take place inside the vehicle have attracted major interest and, as a result, intelligent interiors are being explored to improve the user experience, acceptance, and trust. This is also fueled by parallel research in areas such as perception and control of robots, safe human–robot interaction, wearable systems, and the underpinning flexible/printed electronics technologies, some of which are being routed to AVs. A growing number of networked sensors are being integrated into vehicles for multimodal interaction, to draw correct inferences from the user's communicative cues and to vary the interaction dynamics depending on the user's cognitive state and the contextual driving scenario. In response to this growing trend, this timely article presents a comprehensive review of the technologies that are being used or developed to perceive users' intentions for natural and intuitive in-vehicle interaction. The challenges that need to be overcome to attain truly interactive AVs, and their potential solutions, are discussed along with various new avenues for future research.

43 citations

Journal ArticleDOI
01 Sep 2021
TL;DR: This article compiles various issues affecting the transport industry, classified under Intelligent Transportation Systems where AI is put to use, and takes up discussions on AI solutions to these issues across countries around the globe and across Indian states.
Abstract: Artificial intelligence (AI) is the ability of a machine to perform cognitive functions such as perceiving, reasoning, learning and problem-solving, which humans are capable of performing with ease. AI has gained traction across the globe over the past two decades due to the availability of huge volumes of data generated through the Internet. Governments and businesses have benefited greatly in recent years from processing this data using advanced algorithms. The robust growth of machine learning algorithms, supported by technologies such as the Internet of Things, Robotic Process Automation, Computer Vision and Natural Language Processing, has enabled the growth of AI. This article is a compilation of various issues affecting the transport industry, classified under Intelligent Transportation Systems. Some of the sub-systems considered relate to Traffic Management, Public Transport, Safety Management, and Manufacturing & Logistics within Intelligent Transportation Systems where AI is put to use. The study takes up specific areas of concern in the transport industry and related issues that have possible solutions using AI. The approach involves a secondary study based on country-wise data available from various sources. Further, discussions on AI solutions to resolve issues in the transport industry across various countries around the globe and across Indian states are taken up.

38 citations

Journal ArticleDOI
30 Jan 2022-Sensors
TL;DR: The real-time neural network detector architecture You Only Look Once, the latest version (YOLOv4), is investigated and it is demonstrated that this detector can be adapted to multispectral pedestrian detection and can achieve accuracy on par with the state-of-the-art while being highly computationally efficient, thereby supporting low-latency decision making.
Abstract: Detecting pedestrians in autonomous driving is a safety-critical task, and the decision to avoid a person has to be made with minimal latency. Multispectral approaches that combine RGB and thermal images are researched extensively, as they make it possible to gain robustness under varying illumination and weather conditions. State-of-the-art solutions employing deep neural networks offer high accuracy of pedestrian detection. However, the literature is short of works that evaluate multispectral pedestrian detection with respect to its feasibility in obstacle avoidance scenarios, taking into account the motion of the vehicle. Therefore, we investigate the real-time neural network detector architecture You Only Look Once in its latest version (YOLOv4) and demonstrate that this detector can be adapted to multispectral pedestrian detection. It can achieve accuracy on par with the state-of-the-art while being highly computationally efficient, thereby supporting low-latency decision making. The results achieved on the KAIST dataset were evaluated from the perspective of automotive applications, where low latency and a low number of false negatives are critical parameters. The middle fusion approach to YOLOv4 in its Tiny variant achieved the best accuracy to computational efficiency trade-off among the evaluated architectures.
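Middle fusion combines feature maps from separate RGB and thermal streams before the detection head; the PyTorch sketch below illustrates only that concatenation idea with placeholder layer sizes and is not the authors' YOLOv4-Tiny implementation.

```python
import torch
import torch.nn as nn

class MiddleFusionStem(nn.Module):
    """Toy middle-fusion block: separate convolutional stems for the RGB and
    thermal streams, whose feature maps are concatenated and mixed before
    being passed to a detection head (not shown)."""

    def __init__(self, channels=32):
        super().__init__()
        self.rgb_stem = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.1),
        )
        self.thermal_stem = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.1),
        )
        # Fusion layer mixes the concatenated modalities channel-wise.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb, thermal):
        f_rgb = self.rgb_stem(rgb)
        f_th = self.thermal_stem(thermal)
        return self.fuse(torch.cat([f_rgb, f_th], dim=1))

rgb = torch.randn(1, 3, 416, 416)       # RGB frame
thermal = torch.randn(1, 1, 416, 416)   # aligned thermal frame
print(MiddleFusionStem()(rgb, thermal).shape)   # -> torch.Size([1, 32, 208, 208])
```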

30 citations