Markus Anderljung
Researcher at University of Oxford
Publications - 16
Citations - 301
Markus Anderljung is an academic researcher at the University of Oxford. He has contributed to research on the topics of computer science and corporate governance, has an h-index of 3, and has co-authored 9 publications receiving 121 citations.
Papers
Posted Content
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
Miles Brundage, Shahar Avin, Jasmine Wang, Haydn Belfield, Gretchen Krueger, Gillian K. Hadfield, Heidy Khlaaf, Jingying Yang, Helen Toner, Ruth Fong, Tegan Maharaj, Pang Wei Koh, Sara Hooker, Jade Leung, Andrew Trask, Emma Bluemke, Jonathan Lebensbold, Cullen O'Keefe, Mark Koren, Théo Ryffel, J. B. Rubinovitz, Tamay Besiroglu, Federica Carugati, Jack Clark, Peter Eckersley, Sarah de Haas, Maritza Johnson, Ben Laurie, Alex Ingerman, Igor Krawczuk, Amanda Askell, Rosario Cammarota, Andrew J. Lohn, David Krueger, Charlotte Stix, Peter Henderson, Logan Graham, Carina E. A. Prunkl, Bianca Martin, Elizabeth Seger, Noa Zilberman, Seán Ó hÉigeartaigh, Frens Kroeger, Girish Sastry, Rebecca Kagan, Adrian Weller, Brian Tse, Elizabeth A. Barnes, Allan Dafoe, Paul Scharre, Ariel Herbert-Voss, Martijn Rasser, Shagun Sodhani, Carrick Flynn, Thomas Krendl Gilbert, Lisa Dyer, Saif Khan, Yoshua Bengio, Markus Anderljung +60 more
TL;DR: This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems.
Journal ArticleDOI
Institutionalising Ethics in AI through Broader Impact Requirements
TL;DR: In 2019, the Conference on Neural Information Processing Systems (NeurIPS) introduced a requirement for submitting authors to include a statement on the broader societal impacts of their research. This Perspective reflects on that governance initiative by one of the world's largest AI conferences and draws insights regarding effective community-based governance and the role and responsibility of the AI research community more broadly.
Journal ArticleDOI
Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers
TL;DR: In this article, the authors argue that ML and AI researchers play an important role in the ethics and governance of AI, including through their research, advocacy, and choice of employment, and that their views should not be overlooked.
Journal ArticleDOI
Model evaluation for extreme risks
Toby Shevlane,Sebastian Farquhar,Ben Garfinkel,Mary Phuong,Jess Whittlestone,Jade Leung,Daniel Kokotajlo,Nahema Marchal,Markus Anderljung,Noam Kolt,Divya Siddarth,Shahar Avin,William T. Hawkins,Been Kim,Iason Gabriel,Jack Clark,Yoshua Bengio,Paul F. Christiano,Allan Dafoe +18 more
TL;DR: In this paper, the authors explain why model evaluation is critical for addressing extreme risks: developers must be able to identify dangerous capabilities (through "dangerous capability evaluations") and the propensity of models to apply their capabilities for harm (through "alignment evaluations").