
Shaked Shammah

Publications - 43
Citations - 1579

Shaked Shammah is an academic researcher. The author has contributed to research in topics: Host (network) & Navigation system. The author has an h-index of 11 and has co-authored 43 publications receiving 1272 citations.

Papers
Posted Content

Safe, Multi-Agent, Reinforcement Learning for Autonomous Driving

TL;DR: This paper applies deep reinforcement learning to the problem of forming long-term driving strategies, shows how policy gradient iterations can be used without Markovian assumptions, and decomposes the problem into a composition of a learned Policy for Desires and trajectory planning with hard constraints.
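
A minimal sketch of the decomposition the TL;DR describes, under the assumption that "desires" means high-level intents such as a target lane and speed. All names (Desires, desires_policy, plan_trajectory, the state fields) are illustrative, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class Desires:
    target_lane: int
    target_speed: float  # m/s

def desires_policy(state) -> Desires:
    """Stand-in for the learned component (e.g. a policy-gradient network)."""
    return Desires(target_lane=state["lane"], target_speed=25.0)

def plan_trajectory(state, desires: Desires):
    """Stand-in for the non-learned planner: hard safety constraints may
    override the learned desires before any action is executed."""
    speed = desires.target_speed
    if state["gap_to_lead"] < state["min_safe_gap"]:
        speed = min(speed, state["lead_speed"])  # hard constraint wins
    return {"lane": desires.target_lane, "speed": speed}

state = {"lane": 1, "gap_to_lead": 8.0, "min_safe_gap": 15.0, "lead_speed": 18.0}
print(plan_trajectory(state, desires_policy(state)))
```

The design point is that only the desires component is learned, while safety is enforced by the planner rather than by the reward signal.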
Posted Content

On a Formal Model of Safe and Scalable Self-driving Cars

TL;DR: A white-box, interpretable, mathematical model for safety assurance, which the authors call Responsibility-Sensitive Safety (RSS), and a design of a system that adheres to the safety assurance requirements and is scalable to millions of cars.
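
As a concrete illustration, below is a sketch of RSS's safe longitudinal distance rule (notation simplified): the rear car must stay far enough behind the front car that, even if the rear car accelerates at its maximum rate during its response time and then brakes only at its minimum braking rate while the front car brakes at its maximum rate, no collision occurs. Parameter names are illustrative:

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho,
                                   a_accel_max, a_brake_min, a_brake_max):
    """Worst-case safe gap between a rear car (speed v_rear) and a front car
    (speed v_front), with response time rho; clamped at zero."""
    v_rear_after = v_rear + rho * a_accel_max  # rear speed after response time
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_rear_after ** 2 / (2 * a_brake_min)   # rear stopping distance
         - v_front ** 2 / (2 * a_brake_max))       # front stopping distance
    return max(d, 0.0)

# Example: both cars at 20 m/s, 0.5 s response time -> roughly 43 m.
print(rss_safe_longitudinal_distance(20.0, 20.0, 0.5, 3.0, 4.0, 8.0))
```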
Posted Content

Failures of Gradient-Based Deep Learning

TL;DR: This work describes four types of simple problems for which the gradient-based algorithms commonly used in deep learning either fail or suffer from significant difficulties.
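
One of the paper's failure modes is non-informative gradients: when the target is a random parity function, the gradient of the loss barely depends on which parity was chosen, so gradient descent cannot distinguish the targets. The following is an illustrative re-implementation of that idea (not the authors' code), using a linear model for brevity; it estimates the gradient under several random parities and shows its variance across targets is tiny relative to typical gradient entries:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 20000
X = rng.choice([-1.0, 1.0], size=(n, d))  # random sign inputs
w = rng.normal(scale=0.1, size=d)         # fixed model parameters

def grad_for_parity(subset):
    """Gradient of the mean squared loss when the target is the parity
    (product of signs) over the given coordinate subset."""
    y = np.prod(X[:, subset], axis=1)
    return X.T @ (X @ w - y) / n

grads = np.stack([grad_for_parity(rng.choice(d, size=d // 2, replace=False))
                  for _ in range(10)])
print("std of gradient across parity targets:", grads.std(axis=0).mean())
print("typical gradient entry magnitude:     ", np.abs(grads).mean())
```

The target-dependent part of the gradient concentrates around zero at rate roughly 1/sqrt(n), so almost all of the gradient signal is independent of the target being learned.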
Patent

Machine learning navigational engine with imposed constraints

TL;DR: In this article, an autonomous vehicle is navigated using reinforcement learning (RL) techniques, where a navigation system for a host vehicle may include at least one processing device programmed to: receive, from a camera, a plurality of images representative of an environment of the host vehicle; analyze the plurality of images to identify a navigational state associated with the vehicle; provide the navigational state to a trained navigational system; and determine a navigational action for execution by the vehicle.
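
A hedged sketch of the pipeline the abstract describes, with the emphasis on the imposed constraints named in the title. Every function and field name here is hypothetical, not from the patent:

```python
def identify_navigational_state(images):
    """Stand-in for perception: distill camera images into a compact state."""
    return {"gap_to_pedestrian": 4.0}

def trained_navigational_system(state):
    """Stand-in for the learned (RL-trained) policy."""
    return {"action": "accelerate"}

def apply_imposed_constraints(state, action):
    """Hard, non-learned constraints filter the learned system's output."""
    if state["gap_to_pedestrian"] < 5.0 and action["action"] == "accelerate":
        return {"action": "brake"}  # constraint overrides the model
    return action

images = ["frame0", "frame1"]
state = identify_navigational_state(images)
print(apply_imposed_constraints(state, trained_navigational_system(state)))
```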
Proceedings Article

Failures of Gradient-Based Deep Learning

TL;DR: In this paper, the authors describe four types of simple problems for which the gradient-based algorithms commonly used in deep learning either fail or suffer from significant difficulties; they illustrate the failures through practical experiments and provide theoretical insights explaining their source.