
Lucas Janson

Researcher at Harvard University

Publications - 66
Citations - 2835

Lucas Janson is an academic researcher at Harvard University. He has contributed to research on topics including motion planning and asymptotically optimal algorithms, has an h-index of 22, and has co-authored 56 publications receiving 2032 citations. His previous affiliations include Stanford University.

Papers
Journal Article

Panning for gold: ‘model‐X’ knockoffs for high dimensional controlled variable selection

TL;DR: In this paper, the authors propose the new framework of "model-X" knockoffs, which reinterprets, from a different perspective, the knockoff procedure originally designed for controlling the false discovery rate in linear models.
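For readers who want the mechanics, below is a minimal Python sketch of the Gaussian special case of model-X knockoffs: it samples equicorrelated knockoff copies from a feature distribution assumed to be a known N(mu, Sigma), then applies the knockoff+ selection threshold to lasso-based statistics. The fixed lasso penalty, the equicorrelated choice of s, and the helper names (`gaussian_knockoffs`, `knockoff_select`) are illustrative assumptions, not the paper's only construction.

```python
import numpy as np
from sklearn.linear_model import Lasso

def gaussian_knockoffs(X, mu, Sigma, rng):
    """Sample equicorrelated Gaussian knockoffs for the rows of X,
    assuming the true feature distribution N(mu, Sigma) is known."""
    p = Sigma.shape[0]
    # Equicorrelated choice: the largest scalar s with diag(s) <= 2*Sigma.
    s = min(2.0 * np.linalg.eigvalsh(Sigma)[0], Sigma.diagonal().min())
    D = s * np.eye(p)
    Sigma_inv_D = np.linalg.solve(Sigma, D)
    # Conditional law of the knockoffs given X (standard Gaussian formulas).
    cond_mean = mu + (X - mu) @ (np.eye(p) - Sigma_inv_D)
    cond_cov = 2.0 * D - D @ Sigma_inv_D
    L = np.linalg.cholesky(cond_cov + 1e-10 * np.eye(p))
    return cond_mean + rng.standard_normal(X.shape) @ L.T

def knockoff_select(X, X_tilde, y, q=0.1):
    """Select variables at target FDR q with the knockoff+ threshold."""
    p = X.shape[1]
    beta = Lasso(alpha=0.05).fit(np.hstack([X, X_tilde]), y).coef_
    W = np.abs(beta[:p]) - np.abs(beta[p:])        # feature statistics
    for t in np.sort(np.abs(W[W != 0])):           # candidate thresholds
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return np.where(W >= t)[0]
    return np.array([], dtype=int)
```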
Journal Article

Fast Marching Tree

TL;DR: This paper proves asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius.
Posted Content

Fast Marching Tree: a Fast Marching Sampling-Based Method for Optimal Motion Planning in Many Dimensions

TL;DR: The Fast Marching Tree algorithm (FMT*) is a sampling-based motion planning algorithm for high-dimensional configuration spaces that is proven to be asymptotically optimal and shown to converge to an optimal solution faster than its state-of-the-art counterparts.
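As a rough illustration of the expansion loop, here is a simplified Python sketch of FMT*-style planning in the unit square: samples are connected in cost-to-come order through their cheapest open neighbor, with a collision check performed lazily on only that one edge. The stub collision checker, sample count, and connection radius are placeholder assumptions, and bookkeeping from the paper (e.g., retrying nodes whose locally optimal edge is blocked) is omitted here.

```python
import heapq
import numpy as np

def fmt_star(start, goal, n=300, radius=0.2, seed=0):
    """Plan start -> goal in the unit square (obstacle-free stub)."""
    rng = np.random.default_rng(seed)
    pts = np.vstack([start, rng.random((n, 2)), goal])
    goal_i = n + 1
    edge_free = lambda a, b: True                  # stand-in collision checker
    dist = lambda i, j: float(np.linalg.norm(pts[i] - pts[j]))
    cost, parent = {0: 0.0}, {}
    open_set, unvisited = {0}, set(range(1, n + 2))
    heap = [(0.0, 0)]                              # frontier ordered by cost
    while heap:
        _, z = heapq.heappop(heap)
        if z == goal_i:                            # goal reached: trace back
            path = [z]
            while path[-1] != 0:
                path.append(parent[path[-1]])
            return pts[path[::-1]]
        for x in [x for x in unvisited if dist(z, x) < radius]:
            # Connect x through its cheapest *open* neighbor; only this
            # locally optimal edge is collision-checked (laziness).
            ys = [y for y in open_set if dist(y, x) < radius]
            y = min(ys, key=lambda y: cost[y] + dist(y, x))
            if edge_free(pts[y], pts[x]):
                cost[x], parent[x] = cost[y] + dist(y, x), y
                open_set.add(x)
                unvisited.discard(x)
                heapq.heappush(heap, (cost[x], x))
        open_set.discard(z)
    return None                                    # goal not connected

# Example: path = fmt_star(np.array([0.1, 0.1]), np.array([0.9, 0.9]))
```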
Posted Content

Risk-Constrained Reinforcement Learning with Percentile Risk Criteria

TL;DR: In this paper, the authors present efficient reinforcement learning algorithms for risk-constrained Markov decision processes (MDPs), where risk is represented via a chance constraint or a constraint on the conditional value-at-risk (CVaR) of the cumulative cost.
Journal Article

Risk-Constrained Reinforcement Learning with Percentile Risk Criteria

TL;DR: This paper derives a formula for computing the gradient of the Lagrangian function for percentile risk-constrained Markov decision processes and devises policy gradient and actor-critic algorithms that estimate this gradient, update the policy in the descent direction, and update the Lagrange multiplier in the ascent direction.
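To make the primal-dual idea concrete, here is a toy Python sketch of a Lagrangian scheme of the kind this line of work analyzes: using the Rockafellar-Uryasev representation of CVaR, it descends in the policy parameter and the VaR variable while ascending in the Lagrange multiplier. The two-action cost model, step sizes, and CVaR level below are invented for illustration and are not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, budget = 0.95, 1.5          # CVaR level and constraint bound (assumed)
theta, nu, lam = 0.0, 0.0, 0.0     # policy logit, VaR variable, multiplier

def sample_cost(a, rng):
    # Hypothetical costs: action 1 is cheaper on average but heavy-tailed.
    return rng.exponential(2.0) if a == 1 else rng.normal(1.2, 0.1)

for _ in range(50_000):
    p1 = 1.0 / (1.0 + np.exp(-theta))             # P(choose action 1)
    a = int(rng.random() < p1)
    c = sample_cost(a, rng)
    excess = max(c - nu, 0.0) / (1.0 - alpha)     # Rockafellar-Uryasev term
    score = a - p1                                # d/dtheta log pi(a)
    # Multi-timescale stochastic updates on the Lagrangian:
    theta -= 1e-2 * (c + lam * excess) * score               # policy: descent
    nu -= 5e-3 * lam * (1.0 - (c > nu) / (1.0 - alpha))      # VaR var: descent
    lam = max(0.0, lam + 1e-3 * (nu + excess - budget))      # multiplier: ascent

print(f"risky-action prob {p1:.3f}, VaR estimate {nu:.2f}, lambda {lam:.2f}")
```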