Miles Brundage
Researcher at Arizona State University
Publications - 25
Citations - 4671
Miles Brundage is an academic researcher from Arizona State University. The author has contributed to research in the topics of computer science and deep learning, has an h-index of 12, and has co-authored 22 publications receiving 2666 citations. Previous affiliations of Miles Brundage include OpenAI and the University of Oxford.
Papers
Journal ArticleDOI
Deep Reinforcement Learning: A Brief Survey
TL;DR: Deep reinforcement learning (DRL) is poised to revolutionize the field of artificial intelligence (AI) and represents a step toward building autonomous systems with a higher-level understanding of the visual world.
Journal ArticleDOI
A brief survey of deep reinforcement learning
TL;DR: This survey covers central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and the asynchronous advantage actor-critic (A3C), and highlights the unique advantages of deep neural networks, focusing on visual understanding via RL.
Posted ContentDOI
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
Miles Brundage,Shahar Avin,Jack Clark,Helen Toner,Peter Eckersley,Ben Garfinkel,Allan Dafoe,Paul Scharre,Thomas Zeitzoff,Bobby Filar,Hyrum S. Anderson,Heather M. Roff,Gregory C. Allen,Jacob Steinhardt,Carrick Flynn,Seán Ó hÉigeartaigh,Simon Beard,Haydn Belfield,Sebastian Farquhar,Clare Lyle,Rebecca Crootof,Owain Evans,Michael Page,Joanna J. Bryson,Roman V. Yampolskiy,Dario Amodei +25 more
TL;DR: The following organisations are named in the report: Future of Humanity Institute, University of Oxford; Centre for the Study of Existential Risk, University of Cambridge; Center for a New American Security; Electronic Frontier Foundation; OpenAI.
Posted Content
Release Strategies and the Social Impacts of Language Models.
Irene Solaiman,Miles Brundage,Jack Clark,Amanda Askell,Ariel Herbert-Voss,Jeffrey Wu,Alec Radford,Jasmine Wang +7 more
TL;DR: This report discusses OpenAI's work related to the release of its GPT-2 language model, including staged release, which allows time between model releases to conduct risk and benefit analyses as model sizes increase.
Posted Content
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
Miles Brundage,Shahar Avin,Jasmine Wang,Haydn Belfield,Gretchen Krueger,Gillian K. Hadfield,Gillian K. Hadfield,Heidy Khlaaf,Jingying Yang,Helen Toner,Ruth Fong,Tegan Maharaj,Pang Wei Koh,Sara Hooker,Jade Leung,Andrew Trask,Emma Bluemke,Jonathan Lebensbold,Cullen O'Keefe,Mark Koren,Théo Ryffel,J. B. Rubinovitz,Tamay Besiroglu,Federica Carugati,Jack Clark,Peter Eckersley,Sarah de Haas,Maritza Johnson,Ben Laurie,Alex Ingerman,Igor Krawczuk,Amanda Askell,Rosario Cammarota,Andrew J. Lohn,David Krueger,Charlotte Stix,Peter Henderson,Logan Graham,Carina E. A. Prunkl,Bianca Martin,Elizabeth Seger,Noa Zilberman,Seán Ó hÉigeartaigh,Frens Kroeger,Girish Sastry,Rebecca Kagan,Adrian Weller,Adrian Weller,Brian Tse,Elizabeth A. Barnes,Allan Dafoe,Paul Scharre,Ariel Herbert-Voss,Martijn Rasser,Shagun Sodhani,Carrick Flynn,Thomas Krendl Gilbert,Lisa Dyer,Saif Khan,Yoshua Bengio,Markus Anderljung +60 more
TL;DR: This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems.