Author

Alberto Broggi

Bio: Alberto Broggi is an academic researcher from the University of Parma. He has contributed to research topics including object detection and pedestrian detection, has an h-index of 53, and has co-authored 212 publications receiving 10,193 citations. Previous affiliations of Alberto Broggi include the University of Pavia and Oshkosh Corporation.


Papers
Journal ArticleDOI
TL;DR: The generic obstacle and lane detection system (GOLD) is a stereo vision-based hardware and software architecture for moving vehicles that increases road safety; it detects both generic obstacles and the lane position in a structured environment at a rate of 10 Hz.
Abstract: This paper describes the generic obstacle and lane detection system (GOLD), a stereo vision-based hardware and software architecture to be used on moving vehicles to increase road safety. Based on full-custom massively parallel hardware, it detects both generic obstacles (without constraints on symmetry or shape) and the lane position in a structured environment (with painted lane markings) at a rate of 10 Hz. Thanks to a geometrical transform supported by a specific hardware module, the perspective effect is removed from both left and right stereo images; the left image is used to detect lane markings with a series of morphological filters, while both remapped stereo images are used to detect the free space in front of the vehicle. The output of the processing is displayed on both an on-board monitor and a control panel to give visual feedback to the driver. The system was tested on the mobile laboratory (MOB-LAB) experimental land vehicle, which was driven for more than 3000 km along extra-urban roads and freeways at speeds of up to 80 km/h, and demonstrated its robustness with respect to shadows and changing illumination conditions, different road textures, and vehicle movement.
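The "geometrical transform" the abstract refers to is an inverse perspective mapping: assuming a flat road, each image is remapped onto a bird's-eye view of the ground plane, so lane markings become near-parallel stripes and the stereo pair can be compared pixel-for-pixel. A minimal software sketch, where the 3×3 homography H is a hypothetical stand-in for the calibration-derived mapping (the real GOLD system runs the remapping in a dedicated hardware module):

```python
import numpy as np

def inverse_perspective_map(image, H, out_shape):
    """Remap a grayscale image onto the ground plane.

    H maps ground-plane coordinates (x, y, 1) to homogeneous image
    coordinates (u, v, w); the source image is sampled at (u/w, v/w).
    """
    h_out, w_out = out_shape
    remapped = np.zeros(out_shape, dtype=image.dtype)
    for y in range(h_out):
        for x in range(w_out):
            u, v, w = H @ np.array([x, y, 1.0])
            if w <= 0:  # point behind the camera
                continue
            ui, vi = int(round(u / w)), int(round(v / w))
            if 0 <= vi < image.shape[0] and 0 <= ui < image.shape[1]:
                remapped[y, x] = image[vi, ui]
    return remapped

# Sanity check: the identity homography leaves the image unchanged.
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
same = inverse_perspective_map(img, np.eye(3), (4, 4))
```

In practice H comes from the camera's intrinsic and extrinsic calibration, and production implementations use vectorized or hardware-accelerated warping rather than a per-pixel Python loop.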

1,088 citations

Journal ArticleDOI
TL;DR: The most common approaches to the challenging task of autonomous road guidance are surveyed, together with the most promising experimental solutions and prototypes developed worldwide that use AI techniques to perceive the environment by means of artificial vision.

448 citations

Journal ArticleDOI
07 Nov 2002
TL;DR: This paper surveys the most advanced approaches to (partial) customization of the road following task, using on-board systems based on artificial vision, and describes the functionalities of lane detection, obstacle detection and pedestrian detection.
Abstract: The last few decades have witnessed the birth and growth of a new sensibility to transportation efficiency. In particular, the need for efficient and improved mobility of people and goods has pushed researchers to address the problem of intelligent transportation systems. This paper surveys the most advanced approaches to (partial) customization of the road following task, using on-board systems based on artificial vision. The functionalities of lane detection, obstacle detection and pedestrian detection are described and classified, and their possible application in future road vehicles is discussed.

293 citations

Proceedings ArticleDOI
03 Oct 2000
TL;DR: This paper presents the pedestrian detection method recently implemented on the ARGO vehicle: the analysis of a monocular image delivers a first coarse detection, while the distance is refined using stereo vision.
Abstract: This paper presents the pedestrian detection method recently implemented on the ARGO vehicle. The environment is perceived solely by processing images acquired from a vision system installed on board the vehicle: the analysis of a monocular image delivers a first coarse detection, while a distance refinement is performed using stereo vision.
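The stereo distance refinement rests on standard triangulation: for a rectified stereo pair with focal length f (in pixels), baseline B (in meters), and disparity d (in pixels), the depth of a matched point is Z = f·B/d. A minimal sketch; the numeric values are illustrative, not ARGO's actual calibration:

```python
def stereo_distance(focal_px, baseline_m, disparity_px):
    """Depth of a matched point from stereo disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_px * baseline_m / disparity_px

# A 700 px focal length, 0.5 m baseline and 10 px disparity place
# the detected pedestrian at 35 m.
z = stereo_distance(700.0, 0.5, 10.0)
```

The relation also shows why stereo is used as a refinement stage: disparity, and hence depth resolution, shrinks with distance, so a coarse monocular detection narrows the search region before the stereo match.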

271 citations

01 Jan 2006
TL;DR: In this article, an image descriptor based on histograms of oriented gradients (HOG), associated with a Support Vector Machine (SVM) classifier, was used for pedestrian detection.
Abstract: This paper presents a complete method for pedestrian detection applied to infrared images. First, we study an image descriptor based on histograms of oriented gradients (HOG), associated with a Support Vector Machine (SVM) classifier, and evaluate its efficiency. After having tuned the HOG descriptor and the classifier, we include this method in a complete system which deals with stereo infrared images. This approach gives good results for window classification, and a preliminary test applied on a video sequence proves that this approach is very promising. I. INTRODUCTION Since the last few years now, the development of driving assistance systems has been very active in order to increase the vehicle and its environment safety. At the present time, the main objective in this domain is to provide the drivers with some information concerning its environment and any potential hazard. One among all useful information is the detection and localization of a pedestrian in front of a vehicle. This problem of detecting pedestrians is a very difficult problem that has essentially been addressed using vision sensors, image processing and pattern recognition techniques. In particular, detecting pedestrians thanks to images is a complex challenge due to their appearance and pose variability. In the context of daylight vision, several approaches have been proposed and are based on different image processing techniques or machine learning (9), (5), (12). Recently, owing to the development of low-cost infrared cameras, night vision systems have gained more and more interest, thus increasing the need of automatic detection of pedestrians at night. This problem of detecting pedestrians from infrared images has been investigated by various research teams in the last years. The main methodology is based on extracting cues (symmetry, shape-independent
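The core of the HOG descriptor this paper tunes can be sketched in plain NumPy: gradient orientations are quantized into a fixed number of bins, and each cell of the window accumulates a magnitude-weighted orientation histogram; the concatenated histograms form the feature vector fed to the SVM. This sketch omits the block normalization and classifier stages, and the cell size and bin count are illustrative defaults, not the paper's tuned values:

```python
import numpy as np

def hog_cells(image, cell=8, bins=9):
    """Histogram of oriented gradients, one histogram per cell.

    Gradients via central differences; unsigned orientations
    (0..180 degrees) are binned, with votes weighted by magnitude.
    """
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)                    # d/drow, d/dcol
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    ch, cw = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((ch, cw, bins))
    for i in range(ch * cell):
        for j in range(cw * cell):
            hist[i // cell, j // cell, bin_idx[i, j]] += mag[i, j]
    return hist.reshape(-1)  # concatenated feature vector

# A 16x16 window with a vertical intensity edge produces horizontal
# gradients, so all histogram energy lands in the 0-degree bin.
win = np.zeros((16, 16))
win[:, 8:] = 1.0
feat = hog_cells(win)
```

A detector would slide such a window over the (infrared) image, compute the descriptor per window, and classify each with the trained SVM; in practice one would use a library implementation with block normalization rather than this per-pixel loop.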

259 citations


Cited by
Journal ArticleDOI
Dariu M. Gavrila
TL;DR: A number of promising applications of vision-based analysis of human movement are identified, and an overview of recent developments in the domain is provided, covering work on whole-body and hand motion and the various methodologies employed.

2,045 citations

Posted Content
TL;DR: nuScenes as mentioned in this paper is the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view.
Abstract: Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image based benchmark datasets have driven development in computer vision tasks such as object detection, tracking and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view. nuScenes comprises 1000 scenes, each 20s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar and image based detection and tracking. Data, development kit and more information are available online.

1,939 citations

Journal ArticleDOI
13 Jun 2016
TL;DR: In this article, the authors present a survey of the state of the art on planning and control algorithms with particular regard to the urban environment, along with a discussion of their effectiveness.
Abstract: Self-driving vehicles are a maturing technology with the potential to reshape mobility by enhancing the safety, accessibility, efficiency, and convenience of automotive transportation. Safety-critical tasks that must be executed by a self-driving vehicle include planning of motions through a dynamic environment shared with other vehicles and pedestrians, and their robust executions via feedback control. The objective of this paper is to survey the current state of the art on planning and control algorithms with particular regard to the urban setting. A selection of proposed techniques is reviewed along with a discussion of their effectiveness. The surveyed approaches differ in the vehicle mobility model used, in assumptions on the structure of the environment, and in computational requirements. The side by side comparison presented in this survey helps to gain insight into the strengths and limitations of the reviewed approaches and assists with system level design choices.

1,437 citations

Proceedings ArticleDOI
14 Jun 2020
TL;DR: nuScenes as discussed by the authors is the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view.

1,378 citations