Author

Geoff Keeling

Bio: Geoff Keeling is an academic researcher from the University of Bristol. The author has contributed to research in the topics of Computer science and Fairness measure. The author has an h-index of 6 and has co-authored 10 publications receiving 142 citations.

Papers
Posted Content
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ B. Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri S. Chatterji, Annie Chen, Kathleen Creel, Jared Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Ahmad Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf H. Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Yang Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang
TL;DR: The authors provide a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications.
Abstract: AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.

76 citations

Journal ArticleDOI
TL;DR: Four arguments for the view that trolley cases are of little or no relevance to the ethics of automated vehicles are outlined and rejected, and a positive account is developed of how trolley cases can inform the ethics of automated vehicles.
Abstract: This paper argues against the view that trolley cases are of little or no relevance to the ethics of automated vehicles. Four arguments for this view are outlined and rejected: the Not Going to Happen Argument, the Moral Difference Argument, the Impossible Deliberation Argument and the Wrong Question Argument. In making clear where these arguments go wrong, a positive account is developed of how trolley cases can inform the ethics of automated vehicles.

53 citations

Journal ArticleDOI
TL;DR: In this article, the authors argue that Santoni de Sio's answer to the moral design problem does not achieve the aim of the legal-philosophical approach, because his answer relies on moral principles which, at least, utilitarians have reason to reject.
Abstract: Suppose a driverless car encounters a scenario where (i) harm to at least one person is unavoidable and (ii) a choice about how to distribute harms between different persons is required. How should the driverless car be programmed to behave in this situation? I call this the moral design problem. Santoni de Sio (Ethical Theory Moral Pract 20:411–429, 2017) defends a legal-philosophical approach to this problem, which aims to bring us to a consensus on the moral design problem despite our disagreements about which moral principles provide the correct account of justified harm. He then articulates an answer to the moral design problem based on the legal doctrine of necessity. In this paper, I argue that Santoni de Sio’s answer to the moral design problem does not achieve the aim of the legal-philosophical approach. This is because his answer relies on moral principles which, at least, utilitarians have reason to reject. I then articulate an alternative reading of the doctrine of necessity, and construct a partial answer to the moral design problem based on this. I argue that utilitarians, contractualists and deontologists can agree on this partial answer, even if they disagree about which moral principles offer the correct account of justified harm.

21 citations

Book ChapterDOI
15 Jul 2019
TL;DR: In this article, the authors outline four perspectives on what matters for the ethics of AVs: risk and uncertainty, value sensitive design, partiality towards passengers and meaningful human control.
Abstract: The ethical discussion on automated vehicles (AVs) has for the most part focused on what morality requires in AV collisions which present moral dilemmas. This discussion has been challenged for its failure to address the various kinds of risk and uncertainty which we can expect to arise in AV collisions; and for overlooking certain morally relevant facts which are unique to the context of AVs. We take these criticisms as a starting point and outline four perspectives on what matters for the ethics of AVs: risk and uncertainty, value sensitive design, partiality towards passengers and meaningful human control.

19 citations

Book ChapterDOI
04 Nov 2017
TL;DR: It is argued that there is reason to reject Derek Leben's Rawlsian answer to the question of what morality requires in cases where imposing a risk of harm on at least one person is unavoidable and a choice about how to allocate risks of harm between different persons is required.
Abstract: Suppose that an autonomous vehicle encounters a situation where (i) imposing a risk of harm on at least one person is unavoidable; and (ii) a choice about how to allocate risks of harm between different persons is required. What does morality require in these cases? Derek Leben defends a Rawlsian answer to this question. I argue that we have reason to reject Leben’s answer.

12 citations


Cited by

Journal ArticleDOI
TL;DR: It is argued that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals as exemplified by addictive and compulsive behaviour.
Abstract: Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user's access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality (and deploying concepts from artificial intelligence, behavioural economics, control theory, and game theory), we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user's behaviour towards outcomes that maximise the ISA's utility, which may or may not be aligned with that of the user. Video games, news recommendation aggregation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction (i.e. deception, coercion, trading, and nudging), as well as second-order effects that might include the possibility for adaptive interfaces to induce behavioural addiction, and/or change in user belief. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.

83 citations