
Showing papers by "School for Advanced Studies in the Social Sciences" published in 2021


Journal ArticleDOI
TL;DR: Whether LENAⓇ results are accurate enough for a given research, educational, or clinical application depends largely on the specifics at hand; the authors conclude with a set of recommendations to help researchers make this determination for their goals.
Abstract: In the previous decade, dozens of studies involving thousands of children across several research disciplines have made use of a combined daylong audio-recorder and automated algorithmic analysis called the LENAⓇ system, which aims to assess children's language environment. While the system's prevalence in the language acquisition domain is steadily growing, validation efforts remain scattered and cover only some of its key characteristics. Here, we assess the LENAⓇ system's accuracy across all of its key measures: speaker classification, Child Vocalization Counts (CVC), Conversational Turn Counts (CTC), and Adult Word Counts (AWC). Our assessment is based on manual annotation of clips that have been randomly or periodically sampled out of daylong recordings, collected from (a) populations similar to the system's original training data (North American English-learning children aged 3-36 months), (b) children learning another dialect of English (UK), and (c) slightly older children growing up in a different linguistic and socio-cultural setting (Tsimane' learners in rural Bolivia). We find reasonably high accuracy in some measures (AWC, CVC), with more problematic levels of performance in others (CTC, precision of male adults and other children). Statistical analyses do not support the view that performance is worse for children who are dissimilar from the LENAⓇ original training set. Whether LENAⓇ results are accurate enough for a given research, educational, or clinical application depends largely on the specifics at hand. We therefore conclude with a set of recommendations to help researchers make this determination for their goals.
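Speaker-classification performance of the kind assessed here is conventionally scored per speaker category with precision and recall. A minimal sketch of that scoring, with confusion counts invented purely for illustration (they are not figures from the study):

```python
def precision_recall(tp, fp, fn):
    """Precision: of the frames labeled as this speaker, how many were right?
    Recall: of the true frames for this speaker, how many were found?"""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts for one speaker category (e.g., male adult):
p, r = precision_recall(tp=60, fp=40, fn=20)  # precision 0.6, recall 0.75
```

A category can show decent recall yet poor precision, which is the pattern the abstract flags for male adult and other-child labels.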

58 citations


Journal ArticleDOI
19 Mar 2021-iScience
TL;DR: The authors showed that the landscape of ongoing thought is reflected in the activity of multiple neural systems, and that it is important to distinguish processes contributing to how the experience unfolds from those linked to how these experiences are regulated.

37 citations


Journal ArticleDOI
TL;DR: The authors report that Total personnel received warnings of the potential for catastrophic global warming from its products by 1971, became more fully informed of the issue in the 1980s, began promoting doubt regarding the scientific basis for global warming by the late 1980s and ultimately settled on a position in the late 1990s of publicly accepting climate science while promoting policy delay or policies peripheral to fossil fuel control.
Abstract: Building upon recent work on other major fossil fuel companies, we report new archival research and primary source interviews describing how Total responded to evolving climate science and policy in the last 50 years. We show that Total personnel received warnings of the potential for catastrophic global warming from its products by 1971, became more fully informed of the issue in the 1980s, began promoting doubt regarding the scientific basis for global warming by the late 1980s, and ultimately settled on a position in the late 1990s of publicly accepting climate science while promoting policy delay or policies peripheral to fossil fuel control. Additionally, we find that Exxon, through the International Petroleum Industry Environmental Conservation Association (IPIECA), coordinated an international campaign to dispute climate science and weaken international climate policy, beginning in the 1980s. This represents one of the first longitudinal studies of a major fossil fuel company’s responses to global warming to the present, describing historical stages of awareness, preparation, denial, and delay.

37 citations


Journal ArticleDOI
TL;DR: This research identifies a factor that, alongside accuracy, drives the sharing of true and fake news: the ‘interestingness-if-true’ of a piece of information.
Abstract: Why would people share news they think might not be accurate? We identify a factor that, alongside accuracy, drives the sharing of true and fake news: the ‘interestingness-if-true’ of a piece of news…

29 citations


Journal ArticleDOI
TL;DR: In this paper, the relationship between law, labor, and political economy was explored in both the Britain-Russia interplay and Britain-India interplay, and the tension between universalism and particularism of philosophical, social and economic categories was at work.
Abstract: Liberal utilitarianism is usually presented as a current of thought mostly inspired by Jeremy Bentham and other Western European thinkers, and eventually diffused in other parts of the world. This paper adopts a different approach and shows, on the one hand, how the Bentham brothers’ experiences in Russia and serfdom in particular inspired their invention of the Panopticon. The latter was not related to deviance (Foucault's interpretation), but to labor organization and surveillance. On the other hand, the interplay between utilitarianism and colonial India led Bentham, then James and John Stuart Mill, and ultimately Henry Maine to revise utilitarianism, in particular the relationship between law, labor, and political economy. In both the Britain–Russia interplay and Britain–India interplay, the tension between universalism and particularism of philosophical, social and economic categories was at work.

22 citations


Journal ArticleDOI
TL;DR: In this paper, the authors introduce the neural resource allocation problem for robotic body augmentation and discuss how to allow the effective voluntary control of augmentative devices without compromising control of the biological body.
Abstract: The emergence of robotic body augmentation provides exciting innovations that will revolutionize the fields of robotics, human–machine interaction and wearable electronics. Although augmentative devices such as extra robotic arms and fingers are informed by restorative technologies in many ways, they also introduce unique challenges for bidirectional human–machine collaboration. Can humans adapt and learn to operate a new robotic limb collaboratively with their biological limbs, without restricting other physical abilities? To successfully achieve robotic body augmentation, we need to ensure that, by giving a user an additional (artificial) limb, we are not trading off the functionalities of an existing (biological) one. Here, we introduce the ‘neural resource allocation problem’ and discuss how to allow the effective voluntary control of augmentative devices without compromising control of the biological body. In reviewing the relevant literature on extra robotic fingers and arms, we critically assess the range of potential solutions available for this neural resource allocation problem. For this purpose, we combine multiple perspectives from engineering and neuroscience with considerations including human–machine interaction, sensory–motor integration, ethics and law. In summary, we aim to define common foundations and operating principles for the successful implementation of robotic body augmentation. The development of extra fingers and arms is an exciting research area in robotics, human–machine interaction and wearable electronics. It is unclear, however, whether humans can adapt and learn to control extra limbs and integrate them into a new sensorimotor representation, without sacrificing their natural abilities. The authors review this topic and describe challenges in allocating neural resources for robotic body augmentation.

20 citations


Journal ArticleDOI
TL;DR: In this article, the authors evaluated whether early vocalizations develop in similar ways in children across diverse cultural contexts and found that the proportion of clips reported to contain canonical transitions increased with age.
Abstract: This study evaluates whether early vocalizations develop in similar ways in children across diverse cultural contexts. We analyze data from daylong audio recordings of 49 children (1-36 months) from five different language/cultural backgrounds. Citizen scientists annotated these recordings to determine if child vocalizations contained canonical transitions or not (e.g., "ba" vs. "ee"). Results revealed that the proportion of clips reported to contain canonical transitions increased with age. Furthermore, this proportion exceeded 0.15 by around 7 months, replicating and extending previous findings on canonical vocalization development but using data from the natural environments of a culturally and linguistically diverse sample. This work explores how crowdsourcing can be used to annotate corpora, helping establish developmental milestones relevant to multiple languages and cultures. Lower inter-annotator reliability on the crowdsourcing platform, relative to more traditional in-lab expert annotators, means that a larger number of unique annotators and/or annotations are required, and that crowdsourcing may not be a suitable method for more fine-grained annotation decisions. Audio clips used for this project are compiled into a large-scale infant vocalization corpus that is available for other researchers to use in future work.

16 citations


Journal ArticleDOI
TL;DR: A comprehensive review of the literature on five experimental tasks documented 45 studies showing social information waste and only four showing social information being over-used: human adults fail to give social information its optimal weight.
Abstract: Social information is immensely valuable. Yet we waste it. The information we get from observing other humans and from communicating with them is a cheap and reliable informational resource. It is considered the backbone of human cultural evolution. Theories and models focused on the evolution of social learning show the great adaptive benefits of evolving cognitive tools to process it. In spite of this, human adults in the experimental literature use social information quite inefficiently: they do not take it sufficiently into account. A comprehensive review of the literature on five experimental tasks documented 45 studies showing social information waste, and four studies showing social information being over-used. These studies cover 'egocentric discounting' phenomena as studied by social psychology, but also include experimental social learning studies. Social information waste means that human adults fail to give social information its optimal weight. Both proximal explanations and accounts derived from evolutionary theory leave crucial aspects of the phenomenon unaccounted for: egocentric discounting is a pervasive effect that no single unifying explanation fully captures. Cultural evolutionary theory's insistence on the power and benefits of social influence is to be balanced against this phenomenon. This article is part of the theme issue 'Foundations of cultural evolution'.

15 citations


Journal ArticleDOI
TL;DR: This paper studied the evolution of the visual complexity of written characters and found that character complexity depends primarily on which linguistic unit the characters encode, and that there is little evidence of evolutionary change in character complexity.

15 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose that imaginary worlds co-opt our preferences for exploration, which have evolved in humans and non-human animals alike, to propel individuals toward new environments and new sources of reward.
Abstract: Imaginary worlds are extremely successful. The most popular fictions produced in the last decades contain such a fictional world. They can be found in all fictional media, from novels (e.g., The Lord of the Rings, Harry Potter) to films (e.g., Star Wars, Avatar), video games (e.g., The Legend of Zelda, Final Fantasy), graphic novels (e.g., One Piece, Naruto) and TV series (e.g., Star Trek, Game of Thrones), and they date as far back as ancient literature (e.g., the Cyclops Islands in The Odyssey, 850 BCE). Why such success? Why so much attention devoted to nonexistent worlds? In this article, we propose that imaginary worlds co-opt our preferences for exploration, which have evolved in humans and non-human animals alike, to propel individuals toward new environments and new sources of reward. Humans would find imaginary worlds very attractive for the very same reasons, and under the same circumstances, as they are lured by unfamiliar environments in real life. After reviewing research on exploratory preferences in behavioral ecology, environmental aesthetics, neuroscience, and evolutionary and developmental psychology, we focus on the sources of their variability across time and space, which we argue can account for the variability of the cultural preference for imaginary worlds. This hypothesis can therefore explain the way imaginary worlds evolved culturally, their shape and content, their recent striking success, and their distribution across time and populations.

14 citations


Journal ArticleDOI
01 Nov 2021-Synthese
TL;DR: This paper aims to offer an account of affective experiences within Predictive Processing, a novel framework that considers the brain to be a dynamical, hierarchical, Bayesian hypothesis-testing mechanism, and develops a synthesis of existing theories: the Affective Inference Theory.
Abstract: This paper aims to offer an account of affective experiences within Predictive Processing, a novel framework that considers the brain to be a dynamical, hierarchical, Bayesian hypothesis-testing mechanism. We begin by outlining a set of common features of affective experiences (or feelings) that a PP-theory should aim to explain: feelings are conscious, they have valence, they motivate behaviour, and they are intentional states with particular and formal objects. We then review existing theories of affective experiences within Predictive Processing and delineate two families of theories: Interoceptive Inference Theories (which state that feelings are determined by interoceptive predictions) and Error Dynamics Theories (which state that feelings are determined by properties of error dynamics). We highlight the strengths and shortcomings of each family of theories and develop a synthesis: the Affective Inference Theory. Affective Inference Theory claims that valence corresponds to the expected rate of prediction error reduction. In turn, the particular object of a feeling is the object predicted to be the most likely cause of expected changes in prediction error rate, and the formal object of a feeling is a predictive model of the expected changes in prediction error rate caused by a given particular object. Finally, our theory shows how affective experiences bias action selection, directing the organism towards allostasis and towards optimal levels of uncertainty in order to minimise prediction error over time.
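The theory's central claim, that valence corresponds to the expected rate of prediction-error reduction, admits a minimal numeric sketch. Everything below is an illustrative toy (the error trajectories and the simple slope-averaging are assumptions, not the paper's formalism):

```python
def valence(errors, dt=1.0):
    """Toy Affective Inference sketch: valence is positive when prediction
    error is falling over time, negative when it is rising."""
    slopes = [(e1 - e0) / dt for e0, e1 in zip(errors, errors[1:])]
    return -sum(slopes) / len(slopes)  # mean rate of error *reduction*

valence([4.0, 3.0, 2.1, 1.5])  # falling error -> positive valence
valence([1.0, 1.8, 2.9])       # rising error  -> negative valence
```

On this reading, the same absolute error level can feel good or bad depending on whether things are improving, which is the core intuition of Error Dynamics Theories that Affective Inference Theory inherits.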

Journal ArticleDOI
TL;DR: In this paper, the authors examine how to trigger a wave of low-carbon investments compatible with the well-below 2°C target of the Paris Agreement in the current post-pandemic context of increasing private...

Journal ArticleDOI
01 Apr 2021
TL;DR: This article questions the notion of agricultural specialization through the debates on technical innovations and rural socioeconomic development between the eighteenth and twentieth centuries.
Abstract: This article questions the notion of agricultural specialization in light of the debates on technical innovations and rural socioeconomic development between the eighteenth and twentieth centuries. To do so, the authors draw on the findings of the research that has renewed European rural history over the last decades and propose three possible readings for understanding how specialized crops and activities have transformed the organization of farms and territories. First, they start from the historian Joan Thirsk's pioneering synthesis on alternative agriculture to analyze the different ways of describing and rejecting this phenomenon from the standpoint of production. The second reading focuses on the development of commercial networks, supply chains, and industrial sectors conceived, to some extent, to shape the spatial footprint of specialized systems. Third, and from the standpoint of these systems, they study the contemporary dynamics of productive intensification, technical rationalization, and socioeconomic selection.

Journal ArticleDOI
TL;DR: In this article, the authors carried out meta-analyses on 29 studies investigating the benefit of spacing out retrieval practice episodes on final retention, and found that the more learners are tested, the more beneficial the expanding schedule is compared with the uniform one.
Abstract: Spaced retrieval practice consists of repetitions of the same retrieval event distributed through time. This learning strategy combines two “desirable difficulties”: retrieval practice and spacing effects. We carried out meta-analyses on 29 studies investigating the benefit of spacing out retrieval practice episodes on final retention. The total dataset was divided into two subsets to investigate two main questions: (1) Does spaced retrieval practice induce better memory retention than massed retrieval practice? (subset 1); (2) Is the expanding spacing schedule superior to the uniform spacing schedule when learning with retrieval practice? (subset 2). Using meta-regression with robust variance estimation, 39 effect sizes were aggregated in subset 1 and 54 in subset 2. Results from subset 1 indicated a strong benefit of spaced retrieval practice in comparison with massed retrieval practice (g = 0.74). Results from subset 2 indicated no significant difference between expanding and uniform spacing schedules of retrieval practice (g = 0.034). Moderator analyses on this subset showed that the number of exposures of an item during retrieval practice explains inconsistencies between studies: the more learners are tested, the more beneficial the expanding schedule is compared with the uniform one. Overall, these results support the advantage of spacing out the retrieval practice episodes on the same content, but do not support the widely held belief that inter-retrieval intervals should be progressively increased until a retention test.
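The effect sizes reported above are standardized mean differences (g). A minimal sketch of how such a value is computed from two groups' final-test scores; the group statistics below are invented for illustration, not data from the meta-analysis:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with the small-sample (Hedges) correction."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd        # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction factor
    return j * d

# Illustrative final-test proportions correct: spaced vs. massed retrieval
g = hedges_g(0.80, 0.15, 30, 0.69, 0.15, 30)  # a moderate-to-large effect
```

Aggregating many such g values with a meta-regression (here, with robust variance estimation) is what yields summary figures like the g = 0.74 reported for spaced versus massed retrieval practice.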

Journal ArticleDOI
TL;DR: The epidemiology of cognitive development is an approach essentially based on large observational studies, which examines individual differences in cognitive abilities throughout childhood and their determinants, and highlights the methodological advances that have made such contributions possible.

Journal ArticleDOI
TL;DR: This paper investigates the practical applicability of measuring linguistic units using a novel system called Automatic LInguistic unit Count Estimator (ALICE) together with audio from seven child-centered daylong audio corpora from diverse cultural and linguistic environments and shows that language-independent measurement of phoneme counts is somewhat more accurate than syllables or words.
Abstract: Recordings captured by wearable microphones are a standard method for investigating young children’s language environments. A key measure to quantify from such data is the amount of speech present in children’s home environments. To this end, the LENA recorder and software—a popular system for measuring linguistic input—estimates the number of adult words that children may hear over the course of a recording. However, word count estimation is challenging to do in a language-independent manner; the relationship between observable acoustic patterns and language-specific lexical entities is far from uniform across human languages. In this paper, we ask whether some alternative linguistic units, namely phone(me)s or syllables, could be measured instead of, or in parallel with, words in order to achieve improved cross-linguistic applicability and comparability of an automated system for measuring child language input. We discuss the advantages and disadvantages of measuring different units from theoretical and technical points of view. We also investigate the practical applicability of measuring such units using a novel system called Automatic LInguistic unit Count Estimator (ALICE) together with audio from seven child-centered daylong audio corpora from diverse cultural and linguistic environments. We show that language-independent measurement of phoneme counts is somewhat more accurate than syllables or words, but all three are highly correlated with human annotations on the same data. We share an open-source implementation of ALICE for use by the language research community, enabling automatic phoneme, syllable, and word count estimation from child-centered audio recordings.
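The agreement between automated unit counts and human annotation reported above is a correlation. A minimal sketch of that comparison using Pearson's r; the per-clip counts below are hypothetical, not drawn from the seven corpora:

```python
def pearson_r(x, y):
    """Pearson correlation between two equally long sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-clip phoneme counts: automatic estimates vs. a human
# annotator (numbers invented for illustration)
auto = [12, 30, 45, 60, 52]
human = [10, 33, 41, 64, 55]
r = pearson_r(auto, human)  # strongly positive correlation
```

A high r across clips is what licenses using the automatic counts as a proxy for human annotation at scale.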

Journal ArticleDOI
TL;DR: In many human societies, truth-making institutions are considered necessary to establish an officially valid or "received" description of some specific situation, e.g., why would an ordeal reveal a defendant's guilt or innocence?

Journal ArticleDOI
TL;DR: MetHis as discussed by the authors is a software package that allows users to draw model-parameter values from prior distributions set by the user, and, for each simulation, MetHis can calculate numerous summary statistics describing genetic diversity patterns and moments of the distribution of individual admixture fractions.
Abstract: Admixture is a fundamental evolutionary process that has influenced genetic patterns in numerous species. Maximum-likelihood approaches based on allele frequencies and linkage-disequilibrium have been extensively used to infer admixture processes from genome-wide data sets, mostly in human populations. Nevertheless, complex admixture histories, beyond one or two pulses of admixture, remain methodologically challenging to reconstruct. We developed an Approximate Bayesian Computation (ABC) framework to reconstruct highly complex admixture histories from independent genetic markers. We built the software package MetHis to simulate independent SNPs or microsatellites in a two-way admixed population for scenarios with multiple admixture pulses, monotonically decreasing or increasing recurring admixture, or combinations of these scenarios. MetHis allows users to draw model-parameter values from prior distributions set by the user, and, for each simulation, MetHis can calculate numerous summary statistics describing genetic diversity patterns and moments of the distribution of individual admixture fractions. We coupled MetHis with existing machine-learning ABC algorithms and investigated the admixture history of admixed populations. Results showed that random forest ABC scenario-choice could accurately distinguish among most complex admixture scenarios, and errors were mainly found in regions of the parameter space where scenarios were highly nested, and, thus, biologically similar. We focused on African American and Barbadian populations as two case studies. We found that neural network ABC posterior parameter estimation was accurate and reasonably conservative under complex admixture scenarios. For both admixed populations, we found that monotonically decreasing contributions over time, from Europe and Africa, explained the observed data more accurately than multiple admixture pulses.
This approach will allow for reconstructing detailed admixture histories when maximum-likelihood methods are intractable.
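The ABC logic described above can be sketched in miniature: draw the parameter of interest from its prior, simulate summary statistics under that draw, and keep draws whose statistics land close to the observed ones. Everything below (the three marker frequencies, the single admixture proportion, the distance threshold) is a toy assumption, vastly simpler than MetHis and its machine-learning ABC algorithms:

```python
import random

def simulate_admixed_freq(alpha, p_src1, p_src2):
    """Expected allele frequencies in a two-way admixed population."""
    return [alpha * p1 + (1 - alpha) * p2 for p1, p2 in zip(p_src1, p_src2)]

def abc_rejection(obs, p_src1, p_src2, n_sims=20000, eps=0.01, seed=1):
    """Rejection ABC: keep prior draws whose simulated summary statistics
    fall within eps of the observed ones; return the posterior mean."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_sims):
        alpha = rng.random()  # uniform prior on the admixture proportion
        sim = simulate_admixed_freq(alpha, p_src1, p_src2)
        dist = sum(abs(s - o) for s, o in zip(sim, obs)) / len(obs)
        if dist < eps:
            accepted.append(alpha)
    return sum(accepted) / len(accepted)

# Toy source-population frequencies at three markers; the 'observed'
# admixed population is generated with a true alpha of 0.7
p_src1, p_src2 = [0.9, 0.1, 0.8], [0.2, 0.7, 0.3]
obs = simulate_admixed_freq(0.7, p_src1, p_src2)
estimate = abc_rejection(obs, p_src1, p_src2)  # recovers roughly 0.7
```

Replacing plain rejection with random forest scenario choice and neural network parameter estimation, and a single proportion with multi-pulse or recurring-admixture scenarios, gives the shape of the actual framework.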

Journal ArticleDOI
TL;DR: In this article, the effect of one-way street networks on the pattern of shortest paths is studied; the transition from undirected to directed links is shown to be nontrivial for lattices with degree less than 4, and the critical exponents for this transition are computed numerically.
Abstract: In most studies, street networks are considered as undirected graphs while one-way streets and their effect on shortest paths are usually ignored. Here, we first study the empirical effect of one-way streets in about 140 cities in the world. Their presence induces a detour that persists over a wide range of distances and is characterized by a nonuniversal exponent. The effect of one-ways on the pattern of shortest paths is then twofold: they mitigate local traffic in certain areas but create bottlenecks elsewhere. This empirical study leads naturally to considering a mixed graph model of 2d regular lattices with both undirected links and a diluted variable fraction $p$ of randomly directed links which mimics the presence of one-ways in a street network. We study the size of the strongly connected component (SCC) versus $p$ and demonstrate the existence of a threshold ${p}_{c}$ above which the SCC size is zero. We show numerically that this transition is nontrivial for lattices with degree less than 4 and provide some analytical argument. We compute numerically the critical exponents for this transition and confirm previous results showing that they define a new universality class different from both the directed and standard percolation. Finally, we show that the transition on real-world graphs can be understood with random perturbations of regular lattices. The impact of one-ways on the graph properties was already the subject of a few mathematical studies, and our results show that this problem has also interesting connections with percolation, a classical model in statistical physics.
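The mixed-graph model can be sketched on a small square lattice: make a fraction p of the links one-way at random, leave the rest two-way, and measure the largest strongly connected component (here via Kosaraju's algorithm). This is a toy illustration of the setup, not the paper's simulation code, and the lattice size and seeds are arbitrary:

```python
import random

def largest_scc(n, p, seed=0):
    """Largest strongly connected component of an n x n grid in which a
    fraction p of links is one-way (random direction) and the rest two-way."""
    rng = random.Random(seed)
    idx = lambda i, j: i * n + j
    nv = n * n
    adj = [[] for _ in range(nv)]
    radj = [[] for _ in range(nv)]

    def arc(a, b):  # directed link a -> b, plus its reverse-graph copy
        adj[a].append(b)
        radj[b].append(a)

    for i in range(n):
        for j in range(n):
            for di, dj in ((0, 1), (1, 0)):
                ni, nj = i + di, j + dj
                if ni < n and nj < n:
                    a, b = idx(i, j), idx(ni, nj)
                    if rng.random() < p:  # one-way street
                        if rng.random() < 0.5:
                            arc(a, b)
                        else:
                            arc(b, a)
                    else:  # ordinary two-way street
                        arc(a, b)
                        arc(b, a)

    # Kosaraju pass 1: finish-time order via iterative DFS on the graph
    seen, order = [False] * nv, []
    for s in range(nv):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(adj[s]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(adj[v])))
                    break
            else:
                order.append(u)
                stack.pop()

    # Kosaraju pass 2: sweep the reversed graph in reverse finish order
    comp, best = [False] * nv, 0
    for s in reversed(order):
        if comp[s]:
            continue
        comp[s], size, stack = True, 0, [s]
        while stack:
            u = stack.pop()
            size += 1
            for v in radj[u]:
                if not comp[v]:
                    comp[v] = True
                    stack.append(v)
        best = max(best, size)
    return best
```

Sweeping p from 0 toward 1 and averaging over seeds shows the SCC shrinking, the finite-size analogue of the threshold p_c above which the SCC size vanishes.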

Journal ArticleDOI
01 Apr 2021
TL;DR: Qualitative research explored the experiences of women in one region of Scotland who accessed early medical abortion with home self-administration of misoprostol and suggested that the legislation be amended so that women can self-administer in an appropriate non-clinical setting, not just their home.
Abstract: Background Between 2017 and 2019, legislation was introduced in the UK that approved the home as a place for self-administration of misoprostol for early medical abortion. While research has shown that early medical abortion at home is as safe as in a clinical setting, women’s experiences in the UK in the light of this change have not yet been investigated. This qualitative research explored the experiences of women in one region of Scotland, UK who accessed early medical abortion with home self-administration of misoprostol. Methods Qualitative interviews were conducted with 20 women who had recently undergone early medical abortion (≤69 days’ gestation) with home self-administration of misoprostol. The data were analysed thematically using an approach informed by the Framework analytic approach. Results Women appreciated the flexibility that home administration of misoprostol offered, including the opportunity to control the timing of the abortion. This was particularly important for women who sought not to disclose the abortion to others. Most women valued being in the comfort and privacy of the home when preparing for self-administration, although a small number highlighted some concerns about being at home. Most women reported that self-administration of misoprostol was straightforward; however, some expressed concerns around assessing whether their experiences were ‘normal’. Conclusions Women welcomed the opportunity for home self-administration of misoprostol. To further improve women’s early medical abortion experience we suggest that the legislation be amended so that women can self-administer in an appropriate non-clinical setting, not just their home.

Journal ArticleDOI
TL;DR: The authors evaluated the association between socioeconomic status (SES) and children's experiences measured with the Language Environment Analysis (LENA) system using a meta-analytic approach, using 22 independent samples, representing data from 1583 children.
Abstract: Using a meta-analytic approach, we evaluate the association between socioeconomic status (SES) and children's experiences measured with the Language Environment Analysis (LENA) system. Our final analysis included 22 independent samples, representing data from 1583 children. A model controlling for LENA™ measures, age and publication type revealed an effect size of rz = .186, indicating a small effect of SES on children's language experiences. The type of LENA metric measured emerged as a significant moderator, indicating stronger effects for adult word counts than child vocalization counts. These results provide important evidence for the strength of association between SES and children's everyday language experiences as measured with an unobtrusive recording analyzed automatically in a standardized fashion.

Journal ArticleDOI
TL;DR: In this paper, the authors examine how the identification of key intrinsic chemical specificities has offered fertile ground for the development of novel synchrotron approaches allowing a better stochastic description of the properties of ancient and historical materials.
Abstract: Conspectus: The chemical study of materials from natural history and cultural heritage, which provide information for art history, archeology, or paleontology, presents a series of specific challenges. The complexity of these ancient and historical materials, which are chemically heterogeneous, the product of alteration processes, and inherently not reproducible, is a major obstacle to a thorough understanding of their making and long-term behavior (e.g., fossilization). These challenges required the development of methodologies and instruments coupling imaging and data processing approaches that are optimized for the specific properties of the materials. This Account discusses how these characteristics not only constrain their study but also open up specific innovative avenues for providing key historical information. Synchrotron methods have extensively been used since the late 1990s to study heritage objects, in particular for their potential to provide speciation information from excitation spectroscopies and to image complex heritage objects and samples in two and three dimensions at high resolution. We examine in practice how the identification of key intrinsic chemical specificities has offered fertile ground for the development of novel synchrotron approaches allowing a better stochastic description of the properties of ancient and historical materials. These developments encompass three main aspects: (1) The multiscale heterogeneity of these materials can provide an essential source of information in the development of probes targeting their multiple scales of homogeneity. (2) Chemical alteration can be described in many ways, e.g., by segmenting datasets in a semiquantitative way to jointly inform morphological and chemical transformation pathways. (3) The intrinsic individuality of chemical signatures in artifacts triggers the development of specific strategies, such as those focusing on weak signal detection.
We propose a rereading of the advent of these new methodologies for analysis and characterization and examine how they have led to innovative strategies combining materials science, instrument development, history, and data science. In particular, we show that spectral imaging and the search for correlations in image datasets have provided a powerful way to address what archeologists have called the uncertainty and ambiguity of the material record. This approach has implications beyond synchrotron techniques and extends in particular to a series of rapidly developing approaches that couple spectral and spatial information, as in hyperspectral imaging and spatially resolved mass spectrometry. The preeminence of correlations holds promise for the future development of machine learning methods for processing data on historical objects. Beyond heritage, these developments are an original source of inspiration for the study of materials in many related fields, such as environmental, geochemical, or life sciences, which deal with systems whose alteration and heterogeneity cannot be neglected.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a new link between deductive and probabilistic reasoning fallacies, and outline a unified theory of deductive illusory inferences from disjunction and the conjunction fallacy in terms of Bayesian confirmation theory.
Abstract: We provide a new link between deductive and probabilistic reasoning fallacies. Illusory inferences from disjunction are a broad class of deductive fallacies traditionally explained by recourse to a matching procedure that looks for content overlap between premises. In two behavioral experiments, we show that this phenomenon is instead sensitive to real-world causal dependencies and not to exact content overlap. A group of participants rated the strength of the causal dependence between pairs of sentences. This measure is a near-perfect predictor of fallacious reasoning by an independent group of participants in illusory inference tasks with the same materials. In light of these results, we argue that all extant accounts of these deductive fallacies require non-trivial adjustments. Crucially, these novel indirect illusory inferences from disjunction bear a structural similarity to seemingly unrelated probabilistic reasoning problems, in particular the conjunction fallacy from the heuristics and biases literature. This structural connection was entirely obscure in previous work on these deductive problems, due to the theoretical and empirical focus on content overlap. We argue that this structural parallelism provides arguments against the need for rich descriptions and individuating information in the conjunction fallacy, and we outline a unified theory of deductive illusory inferences from disjunction and the conjunction fallacy, in terms of Bayesian confirmation theory.
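As a hedged illustration of the confirmation-theoretic idea invoked above (not the authors' actual model), one can use the simple "difference" measure of confirmation, c(h, e) = P(h|e) - P(h). The probabilities below are invented, in the style of the classic Linda problem, to show how a conjunction can be less probable than a conjunct yet more strongly confirmed by the evidence:

```python
# Illustrative sketch of a confirmation-theoretic account of the
# conjunction fallacy. All probabilities are invented for the example;
# only the qualitative pattern matters.

def confirmation(prior, posterior):
    """Simple 'difference' measure of confirmation: c(h, e) = P(h|e) - P(h)."""
    return posterior - prior

# h2 alone (e.g., "Linda is a bank teller"): likely a priori, but the
# evidence e (her description) lowers its probability.
c_h2 = confirmation(prior=0.20, posterior=0.10)

# h1 & h2 (e.g., "bank teller AND feminist"): less probable overall,
# but the evidence raises its probability.
c_h1h2 = confirmation(prior=0.01, posterior=0.05)

# The conjunction is less probable than the conjunct (0.05 < 0.10),
# yet more strongly confirmed by the evidence.
assert c_h1h2 > c_h2
```

If participants rank statements by confirmation rather than by posterior probability, this ranking reversal is exactly the conjunction-fallacy pattern.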

Journal ArticleDOI
TL;DR: In this paper, the authors developed a model of population dynamics accounting for the impact of climate change on mortality through five channels (heat, diarrhoeal disease, malaria, dengue, undernutrition).

Journal ArticleDOI
TL;DR: In this paper, the authors reveal that participants can experience an illusion that a mechanical grabber, which looks scarcely like a hand, is part of their body, and they found changes in three signatures of embodiment: the real hand's perceived location, the feeling that the grabber belonged to the body and autonomic responses to visible threats.
Abstract: A tool can function as a body part yet not feel like one: Putting down a fork after dinner does not feel like losing a hand. However, studies show that fake body parts can be embodied and experienced as parts of oneself. Typically, embodiment illusions have only been reported when the fake body part visually resembles the real one. Here we reveal that participants can experience an illusion that a mechanical grabber, which looks scarcely like a hand, is part of their body. We found changes in three signatures of embodiment: the real hand's perceived location, the feeling that the grabber belonged to the body, and autonomic responses to visible threats to the grabber. These findings show that artificial objects can become embodied even though they bear little visual resemblance to the hand.

Journal ArticleDOI
TL;DR: In this paper, the authors compared how early speech perception and cognitive skills predict later language outcomes using a within-participant design and found that only native vowel discrimination significantly predicted vocabulary, while evidence was ambiguous between null and alternative hypotheses for all infant predictors.
Abstract: Research has identified bivariate correlations between speech perception and cognitive measures gathered during infancy as well as correlations between these individual measures and later language outcomes. However, these correlations have not all been explored together in prospective longitudinal studies. The goal of the current research was to compare how early speech perception and cognitive skills predict later language outcomes using a within-participant design. To achieve this goal, we tested 97 5- to 7-month-olds on two speech perception tasks (stress pattern preference, native vowel discrimination) and two cognitive tasks (visual recognition memory, A-not-B) and later assessed their vocabulary outcomes at 18 and 24 months. Frequentist statistical analyses showed that only native vowel discrimination significantly predicted vocabulary. However, Bayesian analyses suggested that evidence was ambiguous between null and alternative hypotheses for all infant predictors. These results highlight the importance of recognizing and addressing challenges related to infant data collection, interpretation, and replication in the developmental field, a roadblock in our route to understanding the contribution of domain-specific and domain-general skills for language acquisition. Future methodological development and research along similar lines are encouraged to assess individual differences in infant speech perception and cognitive skills and their predictive value for language development.
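The contrast between a frequentist verdict and ambiguous Bayesian evidence can be made concrete with the BIC approximation to the Bayes factor (Wagenmakers, 2007), under which BF10 ≈ exp((BIC_null - BIC_alt) / 2). The data below are simulated for illustration and are not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 97  # same sample size as the study; the data themselves are simulated
x = rng.standard_normal(n)            # e.g., an infant predictor score
y = 0.2 * x + rng.standard_normal(n)  # weakly related later outcome

def bic_linear(y, X):
    """BIC of an OLS fit, up to a constant shared by both models."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n_obs, k = X.shape
    return n_obs * np.log(rss / n_obs) + k * np.log(n_obs)

X0 = np.ones((n, 1))                    # null model: intercept only
X1 = np.column_stack([np.ones(n), x])   # alternative: add the predictor

# BF10 near 1 means the data barely discriminate the two hypotheses,
# even when a frequentist test on the same data crosses p < .05.
bf10 = np.exp((bic_linear(y, X0) - bic_linear(y, X1)) / 2)
```

A Bayes factor between roughly 1/3 and 3 is conventionally read as "ambiguous", which is the pattern the abstract reports for the infant predictors.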

Journal ArticleDOI
TL;DR: As predicted, both children and adults benefit from early exposure to multiword units, and when exposed to unsegmented input, adults show better learning of nouns compared to article-noun pairings, but children do not, a pattern consistent with adults’ predicted tendency to focus less on multiword units.
Abstract: Multiword units play an important role in language learning and use. It was proposed that learning from such units can facilitate mastery of certain grammatical relations, and that children and adults differ in their use of multiword units during learning, contributing to their varying language-learning trajectories. Accordingly, adults learn gender agreement better when encouraged to learn from multiword units. Previous work has not examined two core predictions of this proposal: (1) that children also benefit from initial exposure to multiword units, and (2) that their learning patterns reflect a greater reliance on multiword units compared to adults. We test both predictions using an artificial language. As predicted, both children and adults benefit from early exposure to multiword units. In addition, when exposed to unsegmented input, adults show better learning of nouns compared to article-noun pairings, but children do not, a pattern consistent with adults' predicted tendency to focus less on multiword units.

Journal ArticleDOI
TL;DR: Connectedness is a sister notion of monotonicity, which has been recruited to explain certain lexical restrictions on nouns, adjectives and more recently quantifiers; it is proposed here that connectedness could play a similar role at the level of propositional meanings.
Abstract: “Scalar implicatures” is a phrase used to refer to some inferences arising from the competition between alternatives: typically, “Mary read some of the books” ends up conveying that Mary did not read all books, because one could have said “Mary read all books”. The so-called grammatical theory argues that these inferences obtain from the application of a covert operator exh, which not only has the capability to negate alternative sentences, but also the capability to be embedded within sentences under other linguistic operators, i.e., exh has the potential to add to the meaning of expressions (not necessarily full sentences) the negation of their alternatives. This view typically seeks support from the existence of readings that could not be explained without the extra capability of exh to occur in embedded positions. However, if some embedded positions seem to be accessible to exh, not all conceivable positions that exh could occupy yield sensible results. In short: the exh approach is powerful, maybe too powerful. Various approaches based on logical strength and monotonicity have been proposed to justify on principled grounds the limited distribution of exh; these approaches are mostly based on a comparison between possible parses, and considerations of monotonicity (e.g., the Strongest Meaning Hypothesis). We propose a new constraint based instead on “connectedness”, ruling out parses because of inherent problems their outcome may raise. Connectedness is a sister notion of monotonicity, which has been recruited to explain certain lexical restrictions on nouns, adjectives and more recently quantifiers; we propose here that connectedness could play a similar role at the level of propositional meanings.
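Connectedness (convexity) can be illustrated concretely if one treats a quantifier's meaning as the set of counts, out of a domain of size n, that verify it: attested quantifier meanings form unbroken intervals of counts, while gapped meanings do not. The quantifiers and domain size below are toy choices for illustration, not the paper's own examples:

```python
# Toy illustration of connectedness for quantifiers over cardinalities.
# A quantifier's meaning is modeled as the set of counts (out of n) that
# make it true; connectedness requires that set to be an interval.

def is_connected(truth_counts):
    """A set of integers is connected iff it forms an unbroken interval."""
    s = sorted(truth_counts)
    return all(b - a == 1 for a, b in zip(s, s[1:]))

n = 10
some = {k for k in range(n + 1) if k >= 1}      # "some": k >= 1
most = {k for k in range(n + 1) if k > n / 2}   # "most": k > n/2
even = {k for k in range(n + 1) if k % 2 == 0}  # "an even number of"

assert is_connected(some)      # attested quantifiers are connected
assert is_connected(most)
assert not is_connected(even)  # a gapped meaning is not
```

The proposal in the abstract lifts this kind of constraint from lexical meanings to whole propositional meanings, ruling out parses of exh whose output would be "gapped" in an analogous sense.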


Journal ArticleDOI
TL;DR: A case is made for a propositional, explicit judgment-based action structure that makes it possible to accommodate some typical practices used in addressing drawing problems, and hints at by-products and “surrounding practices” that find a plausible explanation in the propositional account of drawing.
Abstract: I investigate some aspects of the structure of the production of drawings by developing a practice-based phenomenology articulated around some “drawing problems.” The examples I chose cluster aroun...