Pedestrian Models for Autonomous Driving Part II: High-Level Models of Human Behavior
Citations
A micro simulation model for pedestrian flows
Defining interactions: a conceptual framework for understanding interactive behaviour in human and automated road traffic
Set-Based Prediction of Traffic Participants Considering Occlusions and Traffic Rules
Pedestrian Models for Autonomous Driving Part I: Low-Level Models, From Sensing to Tracking
Top-view Trajectories: A Pedestrian Dataset of Vehicle-Crowd Interaction from Controlled Experiments and Crowded Campus.
References
Histograms of oriented gradients for human detection
Are we ready for autonomous driving? The KITTI vision benchmark suite
The Cityscapes Dataset for Semantic Urban Scene Understanding
Vision meets robotics: The KITTI dataset
Attitudes, Personality and Behavior
Frequently Asked Questions (14)
Q2. What have the authors stated for future works in "Pedestrian models for autonomous driving part ii: high-level models of human behaviour" ?
There are currently no game-theoretic models that combine knowing and showing with explicit signalling, but this appears to be a fruitful area for future research.
Q3. What is the common method used to predict the next state of a pedestrian?
An ensemble Kalman filter (EnKF) was used to predict the next state from the current observation, and the EM algorithm was used to maximise the likelihood of the state.
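The EnKF idea above can be sketched as follows. This is a minimal illustrative implementation assuming a constant-velocity pedestrian motion model with position-only observations; all dimensions, noise levels, and parameter values are assumptions for the sketch, not taken from the surveyed papers.

```python
import numpy as np

rng = np.random.default_rng(0)

DT = 0.1                      # time step [s] (assumed)
F = np.array([[1, 0, DT, 0],  # state: [x, y, vx, vy], constant velocity
              [0, 1, 0, DT],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
H = np.array([[1, 0, 0, 0],   # we observe position only
              [0, 1, 0, 0]])
Q_STD = 0.05                  # process noise std (assumed)
R_STD = 0.2                   # measurement noise std (assumed)

def enkf_step(ensemble, z):
    """One predict-update cycle of a stochastic EnKF; ensemble is (N, 4)."""
    n = len(ensemble)
    # Predict: propagate each member through the motion model, add noise.
    pred = ensemble @ F.T + rng.normal(0, Q_STD, ensemble.shape)
    # Update: Kalman gain from the ensemble covariance, perturbed observations.
    P = np.cov(pred.T)
    S = H @ P @ H.T + (R_STD ** 2) * np.eye(2)
    K = P @ H.T @ np.linalg.inv(S)
    z_pert = z + rng.normal(0, R_STD, (n, 2))
    return pred + (z_pert - pred @ H.T) @ K.T

# Track a pedestrian walking at ~1.2 m/s along x with noisy position fixes.
ensemble = rng.normal([0, 0, 1.2, 0], 0.5, (100, 4))
for t in range(1, 20):
    z = np.array([1.2 * t * DT, 0.0]) + rng.normal(0, R_STD, 2)
    ensemble = enkf_step(ensemble, z)

print(ensemble.mean(axis=0))  # posterior mean [x, y, vx, vy] at t = 1.9 s
```

The ensemble mean serves as the state estimate; the EM step mentioned in the answer would additionally re-estimate the noise parameters, which is omitted here.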
Q4. Why are they used as descriptive models rather than as real-time control?
They are used only as descriptive models rather than for real-time control because they require each pedestrian's final goal location to be known in advance to form the cost matrix, which can only be obtained post hoc by looking ahead in the data to see what actually happened.
Q5. What are the advantages of using cell-based models for pedestrians?
Cell-based models are useful for modelling pedestrians with minimal movement choices and in settings where representing collisions between pedestrians is not required.
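A minimal cell-based pedestrian sketch illustrates the idea: each pedestrian occupies one grid cell and greedily steps to a free neighbouring cell closer to its goal. The grid size, update order, and greedy rule are illustrative assumptions; real cell-based models typically use floor fields and explicit conflict-resolution rules.

```python
def step(positions, goal, width, height):
    """Move each pedestrian (in list order) one cell toward `goal`."""
    occupied = set(positions)
    new_positions = []
    for (x, y) in positions:
        best = (x, y)
        best_d = abs(x - goal[0]) + abs(y - goal[1])  # Manhattan distance
        for nx, ny in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in occupied:
                d = abs(nx - goal[0]) + abs(ny - goal[1])
                if d < best_d:
                    best, best_d = (nx, ny), d
        occupied.discard((x, y))  # cells are exclusive: one pedestrian each
        occupied.add(best)
        new_positions.append(best)
    return new_positions

peds = [(0, 0), (0, 1)]
for _ in range(5):
    peds = step(peds, goal=(4, 0), width=5, height=3)
print(peds)  # → [(4, 0), (4, 1)]: the goal cell is taken, so one queues beside it
```

The exclusion constraint (one pedestrian per cell) is what makes explicit collision modelling unnecessary in this family of models.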
Q6. What is the main reason why pedestrians assume that a vehicle is referring to themselves?
Pedestrians usually assume that any communication from an AV refers to themselves, so using eHMIs with multiple pedestrians present has to be done in a way that minimises miscommunication (i.e. either letting all pedestrians pass or not displaying a signal at all).
Q7. How many cases of pedestrian interaction were observed?
Nathanael et al. [149], in a naturalistic study of driver-pedestrian interaction, reported that a pedestrian's head turning towards a vehicle was sufficient for drivers to confidently infer the pedestrian's intent in 52% of the interaction cases observed.
Q8. What are the different utilities assigned to the presence of a person in four different zones around an individual?
It assigns different utilities to the presence of a person in four zones around an individual, defined as the intimate, personal, social, and public distances [90].
Q9. What effects did Avineri et al. find on pedestrian crossing behaviour?
In addition, Avineri et al. [10] found lower crossing speeds for female than for male pedestrians, and that the fear of falling in elderly pedestrians affects the number of downward head pitches during crossing.
Q10. How was the trajectories of pedestrians and vehicles refined?
In particular, the final trajectories of pedestrians and vehicles were refined by Kalman filters with a linear point-mass model and a nonlinear bicycle model, respectively, estimating the x-y velocity of pedestrians and the longitudinal speed and orientation of vehicles.
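The vehicle side of that refinement relies on a nonlinear bicycle model; a kinematic version of its propagation step can be sketched as follows. The wheelbase, time step, and inputs are assumptions for the sketch, not the exact formulation used by the dataset authors.

```python
import math

def bicycle_step(x, y, yaw, v, steer, wheelbase=2.7, dt=0.1):
    """Propagate a kinematic bicycle model one time step.

    x, y   : rear-axle position [m]
    yaw    : heading [rad]
    v      : longitudinal speed [m/s]
    steer  : front-wheel steering angle [rad]
    """
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += (v / wheelbase) * math.tan(steer) * dt
    return x, y, yaw

# Drive 1 second at 5 m/s with a small constant left steer.
state = (0.0, 0.0, 0.0)
for _ in range(10):
    state = bicycle_step(*state, v=5.0, steer=0.05)
print(state)  # vehicle has advanced ~5 m with a slight leftward yaw
```

In the refinement described above, this propagation would serve as the process model inside a nonlinear Kalman filter (e.g. an EKF), with the filter estimating speed and orientation from the tracked positions.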
Q11. What is the need for computational research in interaction modelling?
More computational research is needed in interaction modelling: psychology and human-factors studies and theories are more mature, but their results have not yet been quantified to the extent needed to translate them into algorithms for AVs.
Q12. What are the early steps towards modelling?
Some early steps have, however, been taken towards modelling at least some levels of explicit knowing and showing of beliefs about each other via signalling behaviour.
Q13. What is the ranking of the models tested on the dataset?
The ApolloScape LeaderBoard shows the ranking and performance of the models tested on the dataset for different tasks, such as scene parsing, detection/tracking, trajectory prediction, and self-localisation.
Q14. What is the level of awareness of an actor?
The actor's awareness is divided into three levels: (1) unaware of the other, (2) factually aware of the other, or (3) aware of and actively attending to the other.
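This three-level taxonomy maps naturally onto an ordered enumeration; the class and member names below are illustrative, not from the paper.

```python
from enum import IntEnum

class Awareness(IntEnum):
    """Levels of an actor's awareness of another road user."""
    UNAWARE = 1           # unaware of the other
    FACTUALLY_AWARE = 2   # knows the other is there
    ATTENDING = 3         # aware of and actively attending to the other

# IntEnum preserves the ordering of the levels, so comparisons work directly.
print(Awareness.ATTENDING > Awareness.UNAWARE)  # → True
```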