Alexander Robey
Researcher at University of Pennsylvania
Publications - 26
Citations - 648
Alexander Robey is an academic researcher at the University of Pennsylvania. His work spans computer science and optimization. He has an h-index of 8 and has co-authored 19 publications receiving 328 citations. Previous affiliations of Alexander Robey include Swarthmore College.
Papers
Proceedings Article
Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks
TL;DR: In this article, the authors present a convex optimization framework to compute guaranteed upper bounds on the Lipschitz constant of DNNs both accurately and efficiently, where activation functions are interpreted as gradients of convex potential functions.
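For intuition, the simplest guaranteed Lipschitz upper bound for a feedforward network is the product of the layers' spectral norms (valid because ReLU is 1-Lipschitz); the convex-optimization framework in the paper is designed to produce provably tighter bounds than this product. A minimal sketch of the naive baseline, with hypothetical random weights, not the paper's method:

```python
import numpy as np

# Hypothetical 3-layer ReLU network weights (random, for illustration only).
rng = np.random.default_rng(0)
weights = [
    rng.standard_normal((16, 8)),   # layer 1: R^8  -> R^16
    rng.standard_normal((8, 16)),   # layer 2: R^16 -> R^8
    rng.standard_normal((1, 8)),    # layer 3: R^8  -> R
]

def naive_lipschitz_bound(weights):
    """Trivial Lipschitz upper bound: product of layer spectral norms.

    Valid for 1-Lipschitz activations such as ReLU; the SDP-based
    bound described in the paper is provably tighter than this.
    """
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)  # largest singular value
    return bound

print(naive_lipschitz_bound(weights))
```

As a sanity check, this product always dominates the spectral norm of the composed linear map, so the bound is never smaller than the true Lipschitz constant of the purely linear part of the network.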
Posted Content
Learning Control Barrier Functions from Expert Demonstrations
Alexander Robey, Haimin Hu, Lars Lindemann, Hanwen Zhang, Dimos V. Dimarogonas, Stephen Tu, Nikolai Matni +6 more
TL;DR: These are the first results that learn provably safe control barrier functions from data, agnostic to the parameterization used to represent the CBF, assuming only that the Lipschitz constant of such functions can be efficiently bounded.
Proceedings Article
Learning Control Barrier Functions from Expert Demonstrations
Alexander Robey, Haimin Hu, Lars Lindemann, Hanwen Zhang, Dimos V. Dimarogonas, Stephen Tu, Nikolai Matni +6 more
TL;DR: In this article, a learning-based approach to safe controller synthesis based on control barrier functions (CBFs) is proposed, which is agnostic to the parameterization used to represent the CBF.
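The core mechanism a control barrier function (CBF) enables is a safety filter: a nominal control is modified, as little as possible, so that the CBF condition ḣ(x, u) ≥ −α·h(x) holds along trajectories. A minimal hand-written sketch for a 1D single integrator ẋ = u, illustrative only (the paper learns h from expert demonstrations; here h and α are hypothetical and chosen by hand):

```python
# Minimal CBF safety-filter sketch for a 1D single integrator x' = u.
# The barrier h is hand-picked here purely so the filtering step is
# easy to see; in the paper, h is learned from expert demonstrations.

def h(x, x_max=1.0):
    """Hand-picked barrier: h(x) >= 0 exactly on the safe set {x <= x_max}."""
    return x_max - x

def safety_filter(x, u_nominal, alpha=1.0):
    """Enforce the CBF condition dh/dt >= -alpha * h(x).

    For x' = u and h = x_max - x we have dh/dt = -u, so the condition
    reduces to u <= alpha * h(x): the filter is a simple clamp.
    """
    return min(u_nominal, alpha * h(x))

# Near the boundary (x = 0.9) an aggressive nominal control u = 2.0
# is clipped down to roughly alpha * h(x) = 0.1; far from the boundary
# the nominal control passes through unchanged.
print(safety_filter(0.9, 2.0))
print(safety_filter(0.0, 0.5))
```

In higher dimensions this clamp becomes a small quadratic program (minimally perturb u subject to the CBF constraint), but the principle is the same.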
Posted Content
Provable tradeoffs in adversarially robust classification
TL;DR: The results reveal tradeoffs between standard and robust accuracy that grow as the data becomes more imbalanced. The analysis develops and leverages new tools, including recent breakthroughs from probability theory on robust isoperimetry that, to the authors' knowledge, had not previously been used in this area.
Posted Content
Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks
TL;DR: A convex optimization framework computes guaranteed upper bounds on the Lipschitz constant of DNNs both accurately and efficiently; experiments show it to be the most accurate among bounds in the literature.