Author

Nello Cristianini

Bio: Nello Cristianini is an academic researcher from the University of Bristol. The author has contributed to research in the topics Kernel method & Support vector machine. The author has an h-index of 51, has co-authored 183 publications, and has received 46,640 citations. Previous affiliations of Nello Cristianini include Royal Holloway, University of London and the University of California, Davis.


Papers
Proceedings ArticleDOI
15 Sep 2009
TL;DR: The design of an autonomous agent that can teach itself how to translate from a foreign language, by first assembling its own training set, then using it to improve its vocabulary and language model is described.
Abstract: We describe the design of an autonomous agent that can teach itself how to translate from a foreign language, by first assembling its own training set, then using it to improve its vocabulary and language model. The key idea is that a Statistical Machine Translation package can be used for the Cross-Language Retrieval Task of assembling a training set from a vast amount of available text (e.g. a large multilingual corpus, or the Web) and then train on that data, repeating that process several times. The stability issues related to such a feedback loop are addressed by a mathematical model, connecting statistical and control-theoretic aspects of the system. We test it on real-world tasks, showing that indeed this agent can improve its translation performance autonomously and in a stable fashion, when seeded with a very small initial training set. The modelling approach we develop for this agent is general, and we believe will be useful for an entire class of self-learning autonomous agents working on the Web.
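The feedback loop the abstract describes (retrieve training data with the current model, retrain, repeat) can be sketched with a toy word-level "translator". The data and the positional alignment rule below are hypothetical stand-ins for the real Statistical Machine Translation package and its cross-language retrieval step:

```python
# Toy sketch of the self-training loop (hypothetical data; the real
# agent wraps a full Statistical Machine Translation package).
def self_train(seed_lexicon, corpus_pairs, rounds=3):
    """Iteratively grow a word-level lexicon: use the current lexicon
    to retrieve sentence pairs it can already partly translate, then
    treat those pairs as new training data."""
    lexicon = dict(seed_lexicon)
    for _ in range(rounds):
        for src, tgt in corpus_pairs:
            # retrieval step: accept a pair if at least half of its
            # source words are already in the lexicon
            known = sum(1 for w in src if w in lexicon)
            if known * 2 >= len(src):
                # training step: align remaining words positionally
                for s, t in zip(src, tgt):
                    lexicon.setdefault(s, t)
    return lexicon

seed = {"chat": "cat"}
corpus = [(["chat", "noir"], ["cat", "black"]),
          (["noir", "chien"], ["black", "dog"])]
print(self_train(seed, corpus))  # lexicon now also covers "noir", "chien"
```

Each accepted pair enlarges the lexicon, which in turn lets the next round accept more pairs; the paper's control-theoretic model addresses when such a loop stays stable.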

7 citations

Journal ArticleDOI
TL;DR: Results indicate that the NTAR system could assist neuroscientists with thesauri creation for closely related, highly detailed neuroanatomical domains.
Abstract: Generating informational thesauri that classify, cross-reference, and retrieve diverse and highly detailed neuroscientific information requires identifying related neuroanatomical terms and acronyms within and between species (Gorin et al., 2001). Manual construction of such informational thesauri is laborious, and we describe implementing and evaluating a neuroanatomical term and acronym reconciliation (NTAR) system to assist domain experts with this task. NTAR is composed of two modules. The neuroanatomical term extraction (NTE) module employs a hidden Markov model (HMM) in conjunction with lexical rules to extract neuroanatomical terms (NT) and acronyms (NA) from textual material. The output of the NTE is formatted into collections of term- or acronym-indexed documents composed of sentences and word phrases extracted from textual material. The second, information retrieval (IR), module utilizes a vector space model (VSM) and includes a novel, automated relevance feedback algorithm. The IR module retrieves statistically related neuroanatomical terms and acronyms in response to queried neuroanatomical terms and acronyms. Retrieval of neuroanatomical terms and acronyms from term-based queries was compared with (1) term retrieval obtained by including automated relevance feedback and (2) term retrieval using “document-to-document” comparisons (context-based VSM). The retrieval of synonymous and similar primate and macaque thalamic terms and acronyms in response to a query list of human thalamic terminology by these three IR approaches was compared against a previously published, manually constructed concordance table of homologous cross-species terms and acronyms. Term-based VSM with automated relevance feedback retrieved 70% and 80% of the primate and macaque terms and acronyms, respectively, listed in the concordance table. The automated relevance feedback algorithm correctly identified 87% of the macaque terms and acronyms that were independently selected by a domain expert as being appropriate for manual relevance feedback. Context-based VSM correctly retrieved 97% and 98% of the primate and macaque terms and acronyms listed in the term homology table. These results indicate that the NTAR system could assist neuroscientists with thesauri creation for closely related, highly detailed neuroanatomical domains.
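The term-based VSM retrieval step can be illustrated with a small pure-Python sketch: each term is indexed by a vector of words from its contexts, and related terms are ranked by cosine similarity. The terms and context words below are hypothetical; real NTAR documents are built from extracted sentences and phrases:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# term-indexed "documents": each term maps to words from its contexts
docs = {
    "pulvinar": Counter("thalamus visual nucleus posterior".split()),
    "lateral geniculate nucleus": Counter("thalamus visual nucleus relay".split()),
    "hippocampus": Counter("memory cortex temporal lobe".split()),
}

def retrieve(query_term, k=2):
    """Rank the other terms by similarity of their context vectors."""
    q = docs[query_term]
    return sorted((t for t in docs if t != query_term),
                  key=lambda t: cosine(q, docs[t]), reverse=True)[:k]

print(retrieve("pulvinar"))  # the other thalamic term ranks first
```

Relevance feedback would then fold the top-ranked documents' vectors back into the query before a second retrieval pass.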

7 citations

Proceedings Article
01 Jan 2012
TL;DR: This paper compares the effectiveness of various approaches to graph construction by building graphs of 800,000 vertices based on the Reuters corpus, showing that relation-based classification is competitive with Support Vector Machines, which can be considered as state of the art.
Abstract: The efficient annotation of documents in vast corpora calls for scalable methods of text classification. Representing the documents in the form of graph vertices, rather than in the form of vectors in a bag of words space, allows for the necessary information to be pre-computed and stored. It also fundamentally changes the problem definition, from a content-based to a relation-based classification problem. Efficiently creating a graph where nearby documents are likely to have the same annotation is the central task of this paper. We compare the effectiveness of various approaches to graph construction by building graphs of 800,000 vertices based on the Reuters corpus, showing that relation-based classification is competitive with Support Vector Machines, which can be considered as state of the art. We further show that the combination of our relation-based approach and Support Vector Machines leads to an improvement over the methods individually.
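A minimal illustration of relation-based classification over such a graph, assuming a simple Jaccard-similarity k-nearest-neighbour construction (the paper compares several construction strategies; the documents and labels here are toy stand-ins for Reuters articles):

```python
from collections import Counter

# toy corpus: word sets plus labels; "d5" is unannotated
docs = {
    "d1": ({"stocks", "market", "trade"}, "finance"),
    "d2": ({"market", "shares", "bank"}, "finance"),
    "d3": ({"football", "match", "goal"}, "sport"),
    "d4": ({"goal", "league", "team"}, "sport"),
    "d5": ({"market", "trade", "bank"}, None),
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def knn_label(target, k=3):
    """Connect the target to its k most similar documents and take a
    majority vote over the neighbours' labels."""
    words, _ = docs[target]
    neighbours = sorted((d for d in docs if d != target),
                        key=lambda d: jaccard(words, docs[d][0]),
                        reverse=True)[:k]
    votes = Counter(docs[d][1] for d in neighbours if docs[d][1])
    return votes.most_common(1)[0][0]

print(knn_label("d5"))
```

The point of precomputing the graph is that, once edges are stored, classifying a vertex needs only its neighbours' labels rather than the full bag-of-words vectors.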

7 citations

Posted Content
TL;DR: This paper used word lists created for social psychology applications to measure gender bias in data embeddings and demonstrate how a simple projection can significantly reduce the effects of embedding bias, which is part of an ongoing effort to understand how trust can be built into AI systems.
Abstract: Many modern Artificial Intelligence (AI) systems make use of data embeddings, particularly in the domain of Natural Language Processing (NLP). These embeddings are learnt from data that has been gathered "from the wild" and have been found to contain unwanted biases. In this paper we make three contributions towards measuring, understanding and removing this problem. We present a rigorous way to measure some of these biases, based on the use of word lists created for social psychology applications; we observe how gender bias in occupations reflects actual gender bias in the same occupations in the real world; and finally we demonstrate how a simple projection can significantly reduce the effects of embedding bias. All this is part of an ongoing effort to understand how trust can be built into AI systems.
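The projection-based debiasing the abstract mentions can be sketched as removing each vector's component along a bias direction. The 3-d embeddings and the "he minus she" direction below are hypothetical stand-ins; real embeddings have hundreds of dimensions:

```python
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(v, s): return [x * s for x in v]

def debias(v, direction):
    """Remove the component of v lying along the bias direction."""
    u = scale(direction, 1 / math.sqrt(dot(direction, direction)))
    return sub(v, scale(u, dot(v, u)))

# hypothetical 3-d embeddings; the bias direction is "he" - "she"
he, she = [1.0, 0.2, 0.0], [-1.0, 0.2, 0.0]
gender = sub(he, she)          # [2.0, 0.0, 0.0]
nurse = [0.8, -0.6, 0.3]       # carries a component along the gender axis
print(debias(nurse, gender))   # that component is projected out
```

After the projection, the debiased vector is orthogonal to the gender direction, so similarity to "he" and "she" along that axis is equalized while the other coordinates are untouched.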

7 citations


Cited by
Journal ArticleDOI
TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
Abstract: LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users to easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
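The soft-margin optimization problem that LIBSVM solves can be sketched with a deliberately naive full-batch sub-gradient method on a toy linearly separable set; LIBSVM itself uses a far more efficient SMO-type decomposition of the dual problem, and the data below is hypothetical:

```python
def train_linear_svm(data, lam=0.1, eta=0.1, epochs=100):
    """Full-batch sub-gradient descent on the primal SVM objective
    lam/2 * ||w||^2 + mean(max(0, 1 - y * (w . x)))  (no bias term)."""
    dim = len(data[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        grad = [lam * wi for wi in w]          # regularizer gradient
        for x, y in data:
            if y * sum(wi * xi for wi, xi in zip(w, x)) < 1:
                for i in range(dim):           # hinge-loss sub-gradient
                    grad[i] -= y * x[i] / len(data)
        w = [wi - eta * g for wi, g in zip(w, grad)]
    return w

# toy linearly separable data: the label is the sign of the first feature
data = [([2.0, 1.0], 1), ([1.5, -0.5], 1),
        ([-2.0, 0.3], -1), ([-1.0, -1.0], -1)]
w = train_linear_svm(data)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
         for x, _ in data]
print(preds)  # all four training points classified correctly
```

The regularization weight `lam` plays the role of 1/C in LIBSVM's `-c` parameter: larger `lam` favours a wider margin over fitting every point.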

40,826 citations

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast at first sight, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book
08 Sep 2000
TL;DR: This book presents dozens of algorithms and implementation examples, all in pseudo-code and suitable for use in real-world, large-scale data mining projects, and provides a comprehensive, practical look at the concepts and techniques you need to get the most out of real business data.
Abstract: The increasing volume of data in modern business and science calls for more complex and sophisticated tools. Although advances in data mining technology have made extensive data collection much easier, the field is still evolving and there is a constant need for new techniques and tools that can help us transform this data into useful information and knowledge. Since the previous edition's publication, great advances have been made in the field of data mining. Not only does the third edition of Data Mining: Concepts and Techniques continue the tradition of equipping you with an understanding and application of the theory and practice of discovering patterns hidden in large data sets, it also focuses on new, important topics in the field: data warehouses and data cube technology, mining streams, mining social networks, and mining spatial, multimedia, and other complex data. Each chapter is a stand-alone guide to a critical topic, presenting proven algorithms and sound implementations ready to be used directly or with strategic modification against live data. This is the resource you need if you want to apply today's most powerful data mining techniques to meet real business challenges. * Presents dozens of algorithms and implementation examples, all in pseudo-code and suitable for use in real-world, large-scale data mining projects. * Addresses advanced topics such as mining object-relational databases, spatial databases, multimedia databases, time-series databases, text databases, the World Wide Web, and applications in several fields. * Provides a comprehensive, practical look at the concepts and techniques you need to get the most out of real business data.

23,600 citations

Book
25 Oct 1999
TL;DR: This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining.
Abstract: Data Mining: Practical Machine Learning Tools and Techniques offers a thorough grounding in machine learning concepts as well as practical advice on applying machine learning tools and techniques in real-world data mining situations. This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining. Thorough updates reflect the technical changes and modernizations that have taken place in the field since the last edition, including new material on Data Transformations, Ensemble Learning, Massive Data Sets, and Multi-instance Learning, plus a new version of the popular Weka machine learning software developed by the authors. Witten, Frank, and Hall include both tried-and-true techniques of today and methods at the leading edge of contemporary research. * Provides a thorough grounding in machine learning concepts as well as practical advice on applying the tools and techniques to your data mining projects. * Offers concrete tips and techniques for performance improvement that work by transforming the input or output in machine learning methods. * Includes the downloadable Weka software toolkit, a collection of machine learning algorithms for data mining tasks, in an updated, interactive interface. Algorithms in the toolkit cover: data pre-processing, classification, regression, clustering, association rules, and visualization.

20,196 citations

Journal ArticleDOI
TL;DR: Several arguments that support the observed high accuracy of SVMs are reviewed, and numerous examples and proofs of most of the key theorems are given.
Abstract: The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
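The kernel mapping technique discussed in the tutorial can be illustrated by checking that a homogeneous polynomial kernel equals an explicit feature-space inner product, so the SVM never needs to construct the feature vectors:

```python
import math

def poly_kernel(x, z, d=2):
    """Homogeneous polynomial kernel k(x, z) = (x . z)^d."""
    return sum(a * b for a, b in zip(x, z)) ** d

def phi(x):
    """Explicit degree-2 feature map for 2-d input:
    (x1^2, x2^2, sqrt(2) * x1 * x2)."""
    return [x[0] ** 2, x[1] ** 2, math.sqrt(2) * x[0] * x[1]]

x, z = [1.0, 2.0], [3.0, 0.5]
implicit = poly_kernel(x, z)                          # (1*3 + 2*0.5)^2 = 16
explicit = sum(a * b for a, b in zip(phi(x), phi(z))) # same value
print(implicit, explicit)
```

For a Gaussian RBF kernel the corresponding feature space is infinite-dimensional, which is why the implicit computation matters and why, as the tutorial notes, the VC dimension of such machines can be very large or infinite.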

15,696 citations