Author

Anthony G. Cohn

Bio: Anthony G. Cohn is an academic researcher at the University of Leeds. He has contributed to research on spatial intelligence and qualitative reasoning, has an h-index of 47, and has co-authored 258 publications receiving 10,170 citations. His previous affiliations include Qingdao University of Science and Technology and the University of Technology Sydney.


Papers
Proceedings Article
01 Jan 1992
TL;DR: An interval logic for reasoning about space is described, which supports a simpler ontology, has fewer functions and relations, yet does not suffer in terms of its useful expressiveness.
Abstract: We describe an interval logic for reasoning about space. The logic simplifies an earlier theory developed by Randell and Cohn, and that of Clarke upon which the former was based. The theory supports a simpler ontology, has fewer defined functions and relations, yet does not suffer in terms of its useful expressiveness. An axiomatisation of the new theory and a comparison with the two original theories are given.
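For a flavour of the kind of theory described (a sketch of the standard region-connection definitions generally associated with this line of work, not text quoted from the paper), everything is built from a single primitive C(x, y), read "x connects with y":

```latex
% Primitive: C(x,y) -- "x connects with y" (assumed reflexive and symmetric).
DC(x,y) \equiv \lnot C(x,y)                             % disconnected
P(x,y)  \equiv \forall z\,[C(z,x) \rightarrow C(z,y)]   % x is part of y
PP(x,y) \equiv P(x,y) \land \lnot P(y,x)                % proper part
O(x,y)  \equiv \exists z\,[P(z,x) \land P(z,y)]         % x and y overlap
EC(x,y) \equiv C(x,y) \land \lnot O(x,y)                % externally connected
```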

2,210 citations

Journal ArticleDOI
TL;DR: The paper is an overview of the major qualitative spatial representation and reasoning techniques, covering ontological aspects, topology, distance, orientation and shape, as well as qualitative spatial reasoning, including reasoning about spatial change.
Abstract: The paper is an overview of the major qualitative spatial representation and reasoning techniques. We survey the main aspects of the representation of qualitative knowledge including ontological aspects, topology, distance, orientation and shape. We also consider qualitative spatial reasoning including reasoning about spatial change. Finally, there is a discussion of theoretical results and a glimpse of future work. The paper is a revised and condensed version of [33,34].
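Much of the reasoning surveyed is driven by composition tables: given the qualitative relation between a and b and between b and c, the table constrains the possible relations between a and c. A toy Python sketch (the relation names follow RCC-8, but the table fragment and examples are illustrative, not the full calculus):

```python
# Toy illustration of composition-table reasoning over a fragment of RCC-8.
# Only a few entries are filled in; the real calculus has a full 8x8 table.

ALL = {"DC", "EC", "PO", "TPP", "NTPP", "TPPi", "NTPPi", "EQ"}

# COMP[(r1, r2)] = relations possible between a and c,
# given r1(a, b) and r2(b, c).
COMP = {
    ("NTPP", "NTPP"): {"NTPP"},  # being well inside is transitive
    ("TPP",  "DC"):   {"DC"},    # a part of b cannot touch what b is apart from
    ("DC",   "DC"):   ALL,       # no constraint: a and c may relate any way
}

def compose(r1: str, r2: str) -> set:
    """Possible relations between a and c given r1(a,b) and r2(b,c)."""
    return COMP.get((r1, r2), ALL)  # unknown entries stay unconstrained

print(compose("TPP", "DC"))     # {'DC'}
print(compose("NTPP", "NTPP"))  # {'NTPP'}
```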

745 citations

Journal ArticleDOI
TL;DR: This paper surveys the work of the qualitative spatial reasoning group at the University of Leeds and motivates the use of regions as the primary spatial entity and shows how a rich language can be built up from surprisingly few primitives.
Abstract: This paper surveys the work of the qualitative spatial reasoning group at the University of Leeds. The group has developed a number of logical calculi for representing and reasoning with qualitative spatial relations over regions. We motivate the use of regions as the primary spatial entity and show how a rich language can be built up from surprisingly few primitives. This language can distinguish between convex and a variety of concave shapes and there is also an extension which handles regions with uncertain boundaries. We also present a variety of reasoning techniques, both for static and dynamic situations. A number of possible application areas are briefly mentioned.

578 citations

Book ChapterDOI
01 Jan 2008
TL;DR: The challenge of qualitative spatial reasoning (QSR) is to provide calculi that allow a machine to represent and reason with spatial entities without resort to the traditional quantitative techniques prevalent in, for example, computer graphics or computer vision communities.
Abstract: Early attempts at qualitative spatial reasoning within the qualitative reasoning (QR) community led to the poverty conjecture. The need for spatial representations and spatial reasoning is ubiquitous in artificial intelligence (AI) from robot planning and navigation to interpreting visual inputs to understanding natural language. In all these cases, the need to represent and reason about spatial aspects of the world is of key importance. Related fields of research such as geographic information science (GIScience) have also driven the spatial representation and reasoning community to produce efficient, expressive, and useful calculi. There has been considerable research in spatial representations that are based on metric measurements, in particular within the vision and robotics communities, and also on raster and vector representations in GIScience. This chapter focuses on symbolic and, in particular, qualitative representations. The challenge of qualitative spatial reasoning (QSR) is to provide calculi that allow a machine to represent and reason with spatial entities without resort to the traditional quantitative techniques prevalent in, for example, computer graphics or computer vision communities.

420 citations

28 Jul 1987

312 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
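The mail-filtering example in the fourth category is easy to make concrete. Below is a minimal, illustrative sketch (the training data and the word-count model are invented for the example) of a system that learns a user's filtering rules from labelled messages rather than having them hand-coded:

```python
# Toy Naive Bayes mail filter: learns which messages to reject from examples.
from collections import Counter
import math

train = [
    ("win cash prize now", "spam"),
    ("cheap prize offer", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on friday?", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}
labels = Counter()
for text, label in train:
    labels[label] += 1
    counts[label].update(text.split())

def classify(text: str) -> str:
    """Pick the label maximising log P(label) + sum of log P(word | label)."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    best, best_score = None, -math.inf
    for label in labels:
        total = sum(counts[label].values())
        score = math.log(labels[label] / sum(labels.values()))
        for word in text.split():
            # Laplace smoothing so unseen words don't zero the probability.
            score += math.log((counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

print(classify("claim your cash prize"))  # 'spam' on this toy data
```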

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, and approximate inference, along with sampling methods and a discussion of combining models in the context of machine learning.
Abstract: Contents: Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations

Journal ArticleDOI
06 Jun 1986-JAMA
TL;DR: The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.
Abstract: I have developed "tennis elbow" from lugging this book around the past four weeks, but it is worth the pain, the effort, and the aspirin. It is also worth the (relatively speaking) bargain price. Including appendixes, this book contains 894 pages of text. The entire panorama of the neural sciences is surveyed and examined, and it is comprehensive in its scope, from genomes to social behaviors. The editors explicitly state that the book is designed as "an introductory text for students of biology, behavior, and medicine," but it is hard to imagine any audience, interested in any fragment of neuroscience at any level of sophistication, that would not enjoy this book. The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.

7,563 citations

BookDOI
01 Jan 2001
TL;DR: This book presents the first comprehensive treatment of Monte Carlo techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model averaging and selection.
Abstract: Monte Carlo methods are revolutionizing the on-line analysis of data in fields as diverse as financial modeling, target tracking and computer vision. These methods, appearing under the names of bootstrap filters, condensation, optimal Monte Carlo filters, particle filters and survival of the fittest, have made it possible to solve numerically many complex, non-standard problems that were previously intractable. This book presents the first comprehensive treatment of these techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model averaging and selection, computer vision, semiconductor design, population biology, dynamic Bayesian networks, and time series analysis. This will be of great value to students, researchers and practitioners who have some basic knowledge of probability. Arnaud Doucet received the Ph.D. degree from the University of Paris-XI Orsay in 1997. From 1998 to 2000, he conducted research at the Signal Processing Group of Cambridge University, UK. He is currently an assistant professor at the Department of Electrical Engineering of Melbourne University, Australia. His research interests include Bayesian statistics, dynamic models and Monte Carlo methods. Nando de Freitas obtained a Ph.D. degree in information engineering from Cambridge University in 1999. He is presently a research associate with the artificial intelligence group of the University of California at Berkeley. His main research interests are in Bayesian statistics and the application of on-line and batch Monte Carlo methods to machine learning. Neil Gordon obtained a Ph.D. in Statistics from Imperial College, University of London in 1993. He is with the Pattern and Information Processing group at the Defence Evaluation and Research Agency in the United Kingdom. His research interests are in time series, statistical data analysis, and pattern recognition with a particular emphasis on target tracking and missile guidance.
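To make the "bootstrap filter" idea concrete, here is a minimal particle filter for an invented scalar random-walk model observed in Gaussian noise; this is an illustrative sketch of the generic algorithm, not code from the book:

```python
# Minimal bootstrap particle filter: propagate, weight, resample.
import numpy as np

rng = np.random.default_rng(0)
N = 1000                       # number of particles
process_sd, obs_sd = 1.0, 0.5  # model noise levels (invented)

# Simulate a short random-walk trajectory and noisy observations of it.
T = 50
x = np.cumsum(rng.normal(0, process_sd, T))
y = x + rng.normal(0, obs_sd, T)

particles = rng.normal(0, 1, N)
estimates = []
for t in range(T):
    # 1. Propagate each particle through the process model.
    particles = particles + rng.normal(0, process_sd, N)
    # 2. Weight by the observation likelihood p(y_t | x_t).
    w = np.exp(-0.5 * ((y[t] - particles) / obs_sd) ** 2)
    w /= w.sum()
    estimates.append(np.sum(w * particles))
    # 3. Resample ("survival of the fittest"): keep likely particles.
    particles = rng.choice(particles, size=N, p=w)

print(f"final state {x[-1]:.2f}, filter estimate {estimates[-1]:.2f}")
```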

6,574 citations

Posted Content
TL;DR: It is shown that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms most other published methods by a large margin.
Abstract: In the past few years, the field of computer vision has gone through a revolution fueled mainly by the advent of large datasets and the adoption of deep convolutional neural networks for end-to-end learning. The person re-identification subfield is no exception to this. Unfortunately, a prevailing belief in the community seems to be that the triplet loss is inferior to using surrogate losses (classification, verification) followed by a separate metric learning step. We show that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms most other published methods by a large margin.
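The objective behind the loss the abstract refers to is simple to state: pull an anchor embedding toward a same-identity positive and push it away from a different-identity negative by at least a margin. A minimal sketch of that basic loss on one triplet (the paper's actual variant, "batch hard" mining, selects the hardest positive and negative per anchor within each batch; the embeddings below are invented):

```python
# Basic triplet loss on a single (anchor, positive, negative) triple.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_ap = np.linalg.norm(anchor - positive)  # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative)  # anchor-negative distance
    return max(0.0, d_ap - d_an + margin)     # hinge: zero once separated

a = np.array([0.1, 0.9])
p = np.array([0.2, 0.8])  # same person, nearby embedding
n = np.array([0.9, 0.1])  # different person, far embedding
print(triplet_loss(a, p, n))  # 0.0: already separated by more than the margin
```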

2,679 citations