
Showing papers by "Nello Cristianini published in 2019"


Journal ArticleDOI
TL;DR: A large number of empirical studies are reviewed, in which samples of behaviour are used to automatically infer a range of psychological constructs, including affect and emotions, aptitudes and skills, attitudes and orientations, personality, and disorders and conditions.
Abstract: We explore the question of whether machines can infer information about our psychological traits or mental states by observing samples of our behaviour gathered from our online activities. Ongoing technical advances across a range of research communities indicate that machines are now able to access this information, but the extent to which this is possible and the consequent implications have not been well explored. We begin by highlighting the urgency of asking this question, and then explore its conceptual underpinnings, in order to help emphasise the relevant issues. To answer the question, we review a large number of empirical studies, in which samples of behaviour are used to automatically infer a range of psychological constructs, including affect and emotions, aptitudes and skills, attitudes and orientations (e.g. values and sexual orientation), personality, and disorders and conditions (e.g. depression and addiction). We also present a general perspective that can bring these disparate studies together and allow us to think clearly about their philosophical and ethical implications, such as issues related to consent, privacy, and the use of persuasive technologies for controlling human behaviour.

33 citations


Posted Content
TL;DR: The discussion here focuses primarily on the case of enforcement decisions in the criminal justice system, but draws on similar situations emerging from other algorithms utilised in controlling access to opportunities, to explain how machine learning works and, as a result, how decisions are made by modern intelligent algorithms or 'classifiers'.
Abstract: As we increasingly delegate decision-making to algorithms, whether directly or indirectly, important questions emerge in circumstances where those decisions have direct consequences for individual rights and personal opportunities, as well as for the collective good. A key problem for policymakers is that the social implications of these new methods can only be grasped if there is an adequate comprehension of their general technical underpinnings. The discussion here focuses primarily on the case of enforcement decisions in the criminal justice system, but draws on similar situations emerging from other algorithms utilised in controlling access to opportunities, to explain how machine learning works and, as a result, how decisions are made by modern intelligent algorithms or 'classifiers'. It examines the key aspects of the performance of classifiers, including how classifiers learn, the fact that they operate on the basis of correlation rather than causation, and that the term 'bias' in machine learning has a different meaning from its common usage. An example of a real-world 'classifier', the Harm Assessment Risk Tool (HART), is examined, through identification of its technical features: the classification method, the training data and the test data, the features and the labels, validation and performance measures. Four normative benchmarks are then considered by reference to HART: (a) prediction accuracy, (b) fairness and equality before the law, (c) transparency and accountability, and (d) informational privacy and freedom of expression, in order to demonstrate how its technical features have important normative dimensions that bear directly on the extent to which the system can be regarded as a viable and legitimate support for, or even alternative to, existing human decision-makers.
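The validation and fairness measures the abstract names can be illustrated with a minimal sketch. This is not the actual HART system: the threshold rule, the scores, the labels, and the group attribute are all invented for illustration. The point it demonstrates is that a classifier can have high overall accuracy while producing different false-positive rates across groups, which is exactly why benchmark (b) requires measures beyond accuracy.

```python
# Hypothetical sketch (not the actual HART tool): a toy risk classifier
# evaluated with overall accuracy and a per-group false-positive rate.
# All scores, labels, and groups below are invented for illustration.

def classify(score, threshold=0.5):
    """Label a case high-risk (1) when its score exceeds the threshold."""
    return 1 if score > threshold else 0

# Synthetic test set: (risk score, true label, group) tuples.
test_data = [
    (0.9, 1, "A"), (0.7, 1, "A"), (0.4, 0, "A"), (0.2, 0, "A"),
    (0.8, 1, "B"), (0.6, 0, "B"), (0.3, 0, "B"), (0.1, 0, "B"),
]

def accuracy(data):
    """Fraction of cases where the predicted label matches the true label."""
    return sum(classify(s) == y for s, y, _ in data) / len(data)

def false_positive_rate(data, group):
    """Among a group's truly low-risk cases, fraction labelled high-risk."""
    negatives = [(s, y) for s, y, g in data if g == group and y == 0]
    return sum(classify(s) == 1 for s, _ in negatives) / len(negatives)

print(f"accuracy:    {accuracy(test_data):.2f}")          # 0.88 overall
print(f"FPR group A: {false_positive_rate(test_data, 'A'):.2f}")  # 0.00
print(f"FPR group B: {false_positive_rate(test_data, 'B'):.2f}")  # 0.33
```

Here one group's low-risk cases are never mislabelled while a third of the other group's are, despite respectable overall accuracy — a toy version of the accuracy-versus-fairness tension the four benchmarks probe.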

17 citations


Posted Content
TL;DR: In this paper, the authors identify convergent social and technical trends that are leading towards social regulation by algorithms, and discuss the possible social, political, and ethical consequences of taking this path.
Abstract: Autonomous mechanisms have been proposed to regulate certain aspects of society and are already being used to regulate business organisations. We take seriously recent proposals for algorithmic regulation of society, and we identify the existing technologies that can be used to implement them, most of them originally introduced in business contexts. We build on the notion of 'social machine' and we connect it to various ongoing trends and ideas, including crowdsourced task-work, social compiler, mechanism design, reputation management systems, and social scoring. After showing how all the building blocks of algorithmic regulation are already well in place, we discuss possible implications for human autonomy and social order. The main contribution of this paper is to identify convergent social and technical trends that are leading towards social regulation by algorithms, and to discuss the possible social, political, and ethical consequences of taking this path.

13 citations


Journal ArticleDOI
TL;DR: Strong seasonal patterns of antidepressant prescriptions are found, which show stronger correlations with day length than with levels of solar energy, suggesting that levels of depression in a population can be estimated via proxy indicators such as web query logs.
Abstract: The state of an individual's mental health depends on many factors. Determination of the importance of any particular factor within a population needs access to unbiased data. We used publicly available data-sets to investigate, at a population level, how surrogates of mental health covary with light exposure. We found strong seasonal patterns of antidepressant prescriptions, which show stronger correlations with day length than with levels of solar energy. Levels of depression in a population can therefore be estimated by proxy indicators such as web query logs. Furthermore, these proxies for depression correlate with day length rather than solar energy.
Declaration of interest: None.
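The abstract's central comparison — whether a proxy series tracks day length or solar energy more closely — comes down to comparing correlation coefficients. Below is an illustrative sketch only: the monthly prescription counts, day lengths, and solar-energy figures are invented, not the study's data; the method (Pearson correlation against each candidate driver) is the general technique.

```python
# Illustrative sketch: comparing a monthly proxy series against two
# candidate drivers using Pearson correlation. All numbers are invented
# and do NOT come from the study's data-sets.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented monthly values for one year (Jan..Dec), northern hemisphere.
prescriptions = [120, 115, 105, 95, 85, 80, 78, 82, 95, 105, 115, 122]
day_length_h  = [8.0, 9.8, 11.9, 14.0, 15.8, 16.8, 16.3, 14.6, 12.5, 10.4, 8.6, 7.7]
solar_kwh     = [0.8, 1.5, 2.6, 3.9, 4.9, 5.3, 5.2, 4.4, 3.1, 1.8, 1.0, 0.6]

print(f"corr with day length:   {pearson(prescriptions, day_length_h):+.2f}")
print(f"corr with solar energy: {pearson(prescriptions, solar_kwh):+.2f}")
```

Both correlations come out negative (more prescriptions in darker months); the study's finding is that, on real data, the day-length correlation is the stronger of the two.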

5 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe the process of creating a digital corpus formed by Italian newspapers published in Gorizia between 1873 and 1914, which they compared with a corpus of Slovenian newspapers printed in the same city and at the same time, already digitized by the Slovene National Library.
Abstract: Digital libraries not only improve the preservation of documents and facilitate access for users, but also enable experimentation with new methods; for example, it becomes possible to examine the statistical relationships between the contents of thousands of documents in a short time, an operation almost inaccessible to traditional methods. The key step remains the conversion from the analogue support, paper or microfilm, to the digital one, including the transformation of images of the printed text into digital text: only in this way is it possible to statistically analyse those texts, an analysis that cannot be separated from the historical context of their production and from other sources. In this article, we describe in detail the process of creating a digital corpus formed by Italian newspapers published in Gorizia between 1873 and 1914. This includes digitisation, editable text extraction, the annotation process, and statistical analysis of the resulting time series. The data thus obtained are compared with a corpus of Slovenian newspapers printed in the same city at the same time, already digitised by the Slovene National Library. The analysis of the 47,466 pages of Italian newspapers allows us to demonstrate the type of information that can be extracted from a digital corpus, highlighting the importance of operating within a historical and comparative context. This example of multilingual digital humanities allows us to identify the statistical traces of profound cultural transitions that took place in a very complex geographical area and historical period, whose study demands particular attention to cultural, technological and social transformations.
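The "statistical analysis of the resulting time series" the abstract describes typically means tracking how often terms appear in successive slices of the digitised corpus. The sketch below is hypothetical: the yearly text snippets and the tracked term are invented stand-ins for the OCR'd newspaper pages, shown only to illustrate the shape of the computation.

```python
# Hypothetical sketch of a corpus time-series analysis: relative frequency
# of a term across yearly slices of an OCR'd corpus. The texts below are
# invented placeholders, not content from the Gorizia newspapers.
from collections import Counter

corpus_by_year = {  # year -> extracted (OCR'd) text, invented here
    1873: "la città e il confine la lingua",
    1890: "la lingua la nazione il confine la scuola",
    1914: "la nazione la guerra la nazione il confine",
}

def relative_frequency(text, term):
    """Occurrences of `term` per word of text."""
    words = text.split()
    return Counter(words)[term] / len(words)

series = {year: relative_frequency(text, "nazione")
          for year, text in corpus_by_year.items()}
for year, freq in sorted(series.items()):
    print(f"{year}: {freq:.3f}")  # rising frequency across the slices
```

On a real corpus the same loop runs over thousands of pages per year, and the resulting series is what gets compared against the parallel Slovenian corpus to surface the cultural transitions the article discusses.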