Author

Ciaran Hughes

Bio: Ciaran Hughes is an academic researcher from Valeo. The author has contributed to research in topics including the automotive industry and camera resectioning. The author has an h-index of 17 and has co-authored 76 publications receiving 1,199 citations. Previous affiliations of Ciaran Hughes include National University of Ireland, Galway and National University of Ireland.


Papers
Journal ArticleDOI
TL;DR: This paper presents a comprehensive overview of current research on advanced intra-vehicle networks and identifies outstanding research questions for the future.
Abstract: Automotive electronics is a rapidly expanding area with an increasing number of safety, driver assistance, and infotainment devices becoming standard in new vehicles. Current vehicles generally employ a number of different networking protocols to integrate these systems into the vehicle. The introduction of large numbers of sensors to provide driver assistance applications and the associated high-bandwidth requirements of these sensors have accelerated the demand for faster and more flexible network communication technologies within the vehicle. This paper presents a comprehensive overview of current research on advanced intra-vehicle networks and identifies outstanding research questions for the future.

267 citations
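
To make the intra-vehicle networking discussion concrete, here is a minimal sketch of exchanging a frame on a CAN bus, one of the protocols such networks build on, using the python-can library over Linux SocketCAN. The virtual channel name vcan0 is an assumption for testing, not something taken from the paper.

```python
# Minimal sketch: exchanging a frame on an in-vehicle CAN bus via Linux
# SocketCAN. Assumes a virtual interface has been set up, e.g.:
#   sudo ip link add dev vcan0 type vcan && sudo ip link set up vcan0
import can

# Open the (virtual) CAN interface.
bus = can.interface.Bus(channel="vcan0", interface="socketcan")

# A classic CAN 2.0 frame: 11-bit arbitration ID plus up to 8 data bytes.
msg = can.Message(arbitration_id=0x123,
                  data=[0x11, 0x22, 0x33, 0x44],
                  is_extended_id=False)
bus.send(msg)

# Block for up to 1 s waiting for any frame on the bus.
reply = bus.recv(timeout=1.0)
if reply is not None:
    print(f"id=0x{reply.arbitration_id:X} data={reply.data.hex()}")
bus.shutdown()
```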

Proceedings ArticleDOI
01 Oct 2019
TL;DR: The first extensive fisheye automotive dataset, WoodScape, named after Robert Wood, is released; it comprises four surround-view cameras and nine tasks, including segmentation, depth estimation, 3D bounding box detection and soiling detection.
Abstract: Fisheye cameras are commonly employed for obtaining a large field of view in surveillance, augmented reality and, in particular, automotive applications. In spite of their prevalence, there are few public datasets for detailed evaluation of computer vision algorithms on fisheye images. We release the first extensive fisheye automotive dataset, WoodScape, named after Robert Wood, who invented the fisheye camera in 1906. WoodScape comprises four surround-view cameras and nine tasks, including segmentation, depth estimation, 3D bounding box detection and soiling detection. Semantic annotation of 40 classes at the instance level is provided for over 10,000 images, and annotations for the other tasks are provided for over 100,000 images. With WoodScape, we would like to encourage the community to adapt computer vision models for fisheye cameras instead of using naive rectification.

196 citations
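
A hedged sketch of how one might iterate over such a fisheye dataset with per-image semantic masks is given below; the directory layout, file names and dataset root are assumptions for illustration, not the actual WoodScape release format.

```python
# Sketch: iterating over a fisheye dataset with per-image semantic masks.
# The directory layout below (rgb_images/, semantic_masks/) is an assumed
# layout for illustration; consult the actual WoodScape release for its format.
from pathlib import Path
import numpy as np
from PIL import Image

root = Path("/data/woodscape")  # hypothetical dataset root

for img_path in sorted((root / "rgb_images").glob("*.png")):
    mask_path = root / "semantic_masks" / img_path.name
    img = np.asarray(Image.open(img_path))    # H x W x 3 fisheye image
    mask = np.asarray(Image.open(mask_path))  # H x W class-id mask
    # e.g. count pixels per class for a quick label distribution
    classes, counts = np.unique(mask, return_counts=True)
    print(img_path.name, dict(zip(classes.tolist(), counts.tolist())))
```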

Journal ArticleDOI
TL;DR: A method is presented by which the lens curve of a fish-eye camera can be extracted using well-founded assumptions and perspective methods, and several of the models from the literature are examined against this empirically derived curve.
Abstract: The majority of computer vision applications assume that the camera adheres to the pinhole camera model. However, most optical systems introduce undesirable effects. By far the most evident of these is radial lensing, which is particularly noticeable in fish-eye camera systems, where the effect is relatively extreme. Several authors have developed models of fish-eye lenses that can be used to describe the fish-eye displacement. Our aim is to evaluate the accuracy of several of these models. Thus, we present a method by which the lens curve of a fish-eye camera can be extracted using well-founded assumptions and perspective methods. Several of the models from the literature are examined against this empirically derived curve.

123 citations
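
The lens-curve comparison can be sketched numerically. The snippet below tabulates the radial image height r(θ) for the ideal pinhole projection (r = f·tan θ) against four classical fisheye models (equidistant, stereographic, equisolid angle, orthographic); these are the standard textbook model equations, not necessarily the exact set evaluated in the paper.

```python
# Sketch: radial image height r(theta) for the ideal pinhole projection and
# several classical fisheye models (theta is the angle of the incoming ray
# from the optical axis, f the focal length).
import numpy as np

f = 1.0                               # unit focal length
theta = np.radians([10, 30, 50, 70])  # field angles

models = {
    "pinhole (rectilinear)": f * np.tan(theta),
    "equidistant":           f * theta,
    "stereographic":         2 * f * np.tan(theta / 2),
    "equisolid angle":       2 * f * np.sin(theta / 2),
    "orthographic":          f * np.sin(theta),
}

for name, r in models.items():
    print(f"{name:>22}: " + "  ".join(f"{v:5.3f}" for v in r))
# The fisheye curves grow far more slowly than tan(theta), which is what
# compresses a wide field of view onto a finite sensor.
```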

Journal ArticleDOI
TL;DR: Although wide-angle optics provide greater fields of view, they also introduce undesirable effects, such as radial distortion, tangential distortion and uneven illumination, which can make objects difficult for the vehicle driver to recognise and thus increase the risk of an accident.
Abstract: The development of electronic vision systems for the automotive market is a strongly growing field, driven in particular by customer demand to increase the safety of vehicles both for drivers and for other road users, including vulnerable road users (VRUs) such as pedestrians. Customer demand is matched by legislative developments in a number of key automotive markets; for example, Europe, Japan and the US are in the process of introducing legislation to aid in the prevention of fatalities to VRUs, with emphasis on the use of vision systems. The authors discuss some of the factors that motivate the use of wide-angle and fish-eye camera technologies in vehicles. The authors describe the benefits of using wide-angle lens camera systems to display areas of a vehicle's surroundings that the driver would otherwise be unaware of (i.e. a vehicle's blind zones). However, although wide-angle optics provide greater fields of view, they also introduce undesirable effects, such as radial distortion, tangential distortion and uneven illumination. These distortions can make objects difficult for the vehicle driver to recognise and thus increase the risk of an accident. The authors describe some of the methods that can be employed to remove these unwanted effects and digitally convert the distorted image to the ideal and intuitive rectilinear pinhole model.

112 citations
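
The rectification step described above can be sketched with OpenCV's fisheye module, which remaps a distorted image onto the rectilinear pinhole model. The intrinsic matrix K and distortion coefficients D below are placeholder values; in practice they come from calibration. This is a generic illustration, not the authors' specific method.

```python
# Sketch: warping a fisheye image to the rectilinear pinhole model with
# OpenCV's fisheye module. K (intrinsics) and D (distortion coefficients)
# are placeholder values; real ones come from calibration.
import cv2
import numpy as np

img = cv2.imread("fisheye_frame.png")  # hypothetical input frame
h, w = img.shape[:2]

K = np.array([[330.0, 0.0, w / 2],     # placeholder intrinsics
              [0.0, 330.0, h / 2],
              [0.0, 0.0, 1.0]])
D = np.array([[0.05], [-0.01], [0.002], [-0.0005]])  # placeholder k1..k4

# Per-pixel lookup tables mapping rectilinear coordinates back into the
# distorted image, followed by a remap to produce the undistorted view.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
rectilinear = cv2.remap(img, map1, map2,
                        interpolation=cv2.INTER_LINEAR,
                        borderMode=cv2.BORDER_CONSTANT)
cv2.imwrite("rectilinear_frame.png", rectilinear)
```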

Journal ArticleDOI
Markus Heimberger, Jonathan Horgan, Ciaran Hughes, John B. McDonald, Senthil Yogamani
TL;DR: This paper presents the first detailed, systemic view of a commercial automated parking system from the perspective of computer vision algorithms, and demonstrates how camera systems are crucial for addressing a range of automated parking use cases and for adding robustness to systems based on active distance-measuring sensors, such as ultrasonics and radar.

93 citations


Cited by
Posted Content
TL;DR: nuScenes, as mentioned in this paper, is the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with a full 360-degree field of view.
Abstract: Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image-based benchmark datasets have driven development in computer vision tasks such as object detection, tracking and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with a full 360-degree field of view. nuScenes comprises 1000 scenes, each 20 s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar- and image-based detection and tracking. Data, development kit and more information are available online.

1,939 citations
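
For readers who want to explore the dataset, nuScenes ships with an official development kit (nuscenes-devkit); a minimal browsing sketch follows. The dataroot path is an assumption, and v1.0-mini is the small teaser split.

```python
# Sketch: browsing nuScenes with the official nuscenes-devkit
# (pip install nuscenes-devkit). The dataroot path is an assumption;
# v1.0-mini is the small teaser split.
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version="v1.0-mini",
                dataroot="/data/sets/nuscenes",  # hypothetical path
                verbose=True)

# Each scene is a ~20 s log; samples are the annotated 2 Hz keyframes.
scene = nusc.scene[0]
sample = nusc.get("sample", scene["first_sample_token"])

# The full sensor suite is keyed per sample: 6 cameras, 5 radars, 1 lidar.
print(sorted(sample["data"].keys()))

# A few of the 3D box annotations on this keyframe:
for token in sample["anns"][:5]:
    ann = nusc.get("sample_annotation", token)
    print(ann["category_name"], ann["translation"])
```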

Proceedings ArticleDOI
14 Jun 2020
TL;DR: nuScenes, as discussed by the authors, is the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with a full 360-degree field of view.
Abstract: Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image-based benchmark datasets have driven development in computer vision tasks such as object detection, tracking and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with a full 360-degree field of view. nuScenes comprises 1000 scenes, each 20 s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar- and image-based detection and tracking. Data, development kit and more information are available online.

1,378 citations

Journal ArticleDOI
TL;DR: This review summarises deep reinforcement learning algorithms, provides a taxonomy of automated driving tasks where (D)RL methods have been employed, highlights the key challenges both algorithmically and in terms of deploying real-world autonomous driving agents, discusses the role of simulators in training agents, and finally covers methods to evaluate, test and robustify existing solutions in RL and imitation learning.
Abstract: With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high-dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in the real-world deployment of autonomous driving agents. It also delineates adjacent domains, such as behavior cloning, imitation learning and inverse reinforcement learning, that are related but are not classical RL algorithms. The role of simulators in training agents, along with methods to validate, test and robustify existing solutions in RL, is discussed.

740 citations
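
As a minimal illustration of what "learning a policy with a deep network" involves, the sketch below performs one DQN-style temporal-difference update on synthetic tensors. It is a generic example of the family of methods the review covers, not any specific algorithm from it.

```python
# Sketch: the core temporal-difference update behind DQN-style deep RL,
# on synthetic tensors (no environment); purely illustrative.
import torch
import torch.nn as nn

n_state, n_action, gamma = 8, 4, 0.99
q_net = nn.Sequential(nn.Linear(n_state, 64), nn.ReLU(),
                      nn.Linear(64, n_action))
optim = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# A fake minibatch of transitions (s, a, r, s').
s  = torch.randn(32, n_state)
a  = torch.randint(0, n_action, (32,))
r  = torch.randn(32)
s2 = torch.randn(32, n_state)

q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q(s, a)
with torch.no_grad():
    # Bootstrapped target: r + gamma * max_a' Q(s', a')
    target = r + gamma * q_net(s2).max(dim=1).values
loss = nn.functional.mse_loss(q_sa, target)
optim.zero_grad(); loss.backward(); optim.step()
print(f"TD loss: {loss.item():.4f}")
```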

Patent
16 Jan 2012
TL;DR: In this article, the camera is disposed at an interior portion of a vehicle equipped with the vehicular vision system, and either (i) views exterior of the equipped vehicle through the windshield and forward of the equipped vehicle or (ii) views from the windshield into the interior cabin of the equipped vehicle.
Abstract: A vehicular vision system includes a camera having a lens and a CMOS photosensor array having a plurality of photosensor elements. The camera is disposed at an interior portion of a vehicle equipped with the vehicular vision system. The camera either (i) views exterior of the equipped vehicle through the windshield of the equipped vehicle and forward of the equipped vehicle or (ii) views from the windshield of the equipped vehicle into the interior cabin of the equipped vehicle. A control includes an image processor that processes image data captured by the photosensor array. The image processor processes captured image data to detect an object viewed by the camera. The photosensor array is operable at a plurality of exposure periods, and at least one exposure period of the plurality is dynamically variable.

576 citations
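
The dynamically variable exposure period can be illustrated with a simple feedback controller that nudges the exposure toward a target mean brightness. This is a generic auto-exposure heuristic sketched for illustration, not the patented method; all constants are assumptions.

```python
# Sketch: a simple feedback controller for a dynamically variable exposure
# period. Generic auto-exposure heuristic, not the patented method;
# all constants below are assumptions.
import numpy as np

TARGET_MEAN = 110.0            # desired mean 8-bit brightness
EXPOSURE_BOUNDS = (0.1, 33.0)  # ms, assumed sensor limits
GAIN = 0.5                     # proportional step size

def next_exposure(frame: np.ndarray, exposure_ms: float) -> float:
    """Nudge the exposure period toward a target mean brightness."""
    error = TARGET_MEAN - float(frame.mean())
    # Proportional update, clamped to the sensor's supported range.
    exposure_ms *= 1.0 + GAIN * error / 255.0
    return float(np.clip(exposure_ms, *EXPOSURE_BOUNDS))

# Example: a dark frame pushes the exposure period up.
dark = np.full((480, 640), 40, dtype=np.uint8)
print(next_exposure(dark, exposure_ms=5.0))  # prints a value > 5.0
```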

Journal ArticleDOI
07 Jun 2016 - PLOS ONE
TL;DR: It is demonstrated with experimental results that the proposed technique can provide a real-time response to an attack, with a significantly improved detection ratio, on the controller area network (CAN) bus.
Abstract: A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of the in-vehicle network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicle network packets. For a given packet, the DNN provides the probability of each class, discriminating normal and attack packets, and thus the sensor can identify any malicious attack on the vehicle. Compared to a traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning, such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN), thereby improving the detection accuracy. It is demonstrated with experimental results that the proposed technique can provide a real-time response to an attack with a significantly improved detection ratio on the controller area network (CAN) bus.

477 citations
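
A minimal sketch of the classification stage is given below: a small feed-forward network over per-packet probability feature vectors. The feature extraction shown (normalised nibble frequencies of the CAN payload) is an assumption for illustration, not the paper's exact features, and the DBN pre-training step is omitted.

```python
# Sketch: a small binary classifier over per-packet feature vectors, in the
# spirit of the DNN-based IDS above. The feature extraction (normalised
# nibble frequencies of the CAN payload) is an illustrative assumption.
import torch
import torch.nn as nn

def payload_features(payload: bytes, n_bins: int = 16) -> torch.Tensor:
    """Histogram of payload nibbles, normalised to a probability vector."""
    hist = torch.zeros(n_bins)
    for b in payload:
        hist[b >> 4] += 1    # high nibble
        hist[b & 0x0F] += 1  # low nibble
    return hist / hist.sum().clamp(min=1)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                      nn.Linear(32, 2))  # normal vs attack logits

x = payload_features(bytes([0x11, 0x22, 0x33, 0x44])).unsqueeze(0)
probs = torch.softmax(model(x), dim=1)   # P(normal), P(attack)
print(probs)
# Training would minimise cross-entropy over labelled normal/attack packets;
# the paper additionally pre-trains the parameters with a deep belief network.
```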