Institution
Central European University
Education • Vienna, Austria
About: Central European University is an education organization based in Vienna, Austria. It is known for its research contributions in the topics of politics and the European Union. The organization has 1,358 authors who have published 4,186 publications receiving 85,246 citations. The organization is also known as CEU and Közép-Európai Egyetem.
Topics: Politics, European Union, Population, Context (language use), Democracy
[Chart: papers published on a yearly basis]
Papers
01 Oct 2020
TL;DR: Transformers is an open-source library that consists of carefully engineered state-of-the-art Transformer architectures under a unified API and a curated collection of pretrained models made by and available for the community.
Abstract: Recent progress in natural language processing has been driven by advances in both model architecture and model pretraining. Transformer architectures have facilitated building higher-capacity models, and pretraining has made it possible to effectively utilize this capacity for a wide variety of tasks. Transformers is an open-source library with the goal of opening up these advances to the wider machine learning community. The library consists of carefully engineered state-of-the-art Transformer architectures under a unified API. Backing this library is a curated collection of pretrained models made by and available for the community. Transformers is designed to be extensible by researchers, simple for practitioners, and fast and robust in industrial deployments. The library is available at https://github.com/huggingface/transformers.
4,798 citations
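The unified API the abstract refers to is easiest to see through the library's high-level `pipeline` helper. The snippet below is a minimal sketch, assuming the `transformers` package and a PyTorch backend are installed; the checkpoint is whatever default the installed library version selects for the task, so exact outputs may vary.

```python
# Minimal sketch of the unified pipeline API (assumes `transformers` and
# `torch` are installed; the default checkpoint is chosen by the library).
from transformers import pipeline

# pipeline() hides checkpoint download, tokenization, and model loading
# behind a single task-oriented call.
classifier = pipeline("sentiment-analysis")

# One call runs tokenization, the forward pass, and post-processing.
print(classifier("Transformers makes pretrained models easy to use."))
# e.g. [{'label': 'POSITIVE', 'score': 0.999...}]
```

The same one-line construction works for other tasks the paper names, such as question answering or text generation, by changing the task string.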
Emory University; University of California, Los Angeles; University of Siena; California Institute of Technology; University of Zurich; Central European University; University of California, Davis; Texas A&M University; University of Oxford; University of New Mexico; University of Pennsylvania; University of California, Santa Barbara; Harvard University; California State University, Fullerton; University of Colorado Denver
TL;DR: A cross-cultural study of behavior in ultimatum, public goods, and dictator games in a range of small-scale societies exhibiting a wide variety of economic and cultural conditions found that the canonical model, based on self-interest, fails in all of the societies studied.
Abstract: Researchers from across the social sciences have found consistent deviations from the predictions of the canonical model of self-interest in hundreds of experiments from around the world. This research, however, cannot determine whether the uniformity results from universal patterns of human behavior or from the limited cultural variation available among the university students used in virtually all prior experimental work. To address this, we undertook a cross-cultural study of behavior in ultimatum, public goods, and dictator games in a range of small-scale societies exhibiting a wide variety of economic and cultural conditions. We found, first, that the canonical model, based on self-interest, fails in all of the societies studied. Second, our data reveal substantially more behavioral variability across social groups than has been found in previous research. Third, group-level differences in economic organization and the structure of social interactions explain a substantial portion of the behavioral variation across societies: the higher the degree of market integration and the higher the payoffs to cooperation in everyday life, the greater the level of prosociality expressed in experimental games. Fourth, the available individual-level economic and demographic variables do not consistently explain game behavior, either within or across groups. Fifth, in many cases experimental play appears to reflect the common interactional patterns of everyday life.
1,589 citations
09 Oct 2019
TL;DR: Transformers is an open-source library that consists of carefully engineered state-of-the-art Transformer architectures under a unified API and a curated collection of pretrained models made by and available for the community.
Abstract: Recent advances in modern Natural Language Processing (NLP) research have been dominated by the combination of Transfer Learning methods with large-scale Transformer language models. With them came a paradigm shift in NLP, with the starting point for training a model on a downstream task moving from a blank task-specific model to a general-purpose pretrained architecture. Still, creating these general-purpose models remains an expensive and time-consuming process, restricting the use of these methods to a small subset of the wider NLP community. In this paper, we present Transformers, a library for state-of-the-art NLP, making these developments available to the community by gathering state-of-the-art general-purpose pretrained models under a unified API together with an ecosystem of libraries, examples, tutorials, and scripts targeting many downstream NLP tasks. Transformers features carefully crafted model implementations and high-performance pretrained weights for two main deep learning frameworks, PyTorch and TensorFlow, while supporting all the necessary tools to analyze, evaluate, and use these models in downstream tasks such as text/token classification, question answering, and language generation, among others. Transformers has gained significant organic traction and adoption among both the researcher and practitioner communities. At Hugging Face, we are committed to pursuing these efforts with the ambition of making Transformers the standard library for building NLP systems.
1,261 citations
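Beneath the pipeline helper, the unified API this abstract describes also covers explicit checkpoint loading through the Auto* classes. The sketch below assumes PyTorch and the `transformers` package are installed; the checkpoint name is one publicly hosted example, not one prescribed by the paper.

```python
# Sketch of loading an explicit checkpoint via the unified Auto* classes
# (assumes `torch` and `transformers` are installed; the checkpoint below
# is one public example from the model hub).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

# Tokenize a sentence into PyTorch tensors and run inference.
inputs = tokenizer("The library exposes PyTorch and TensorFlow backends.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to the checkpoint's label names.
print(model.config.id2label[logits.argmax(dim=-1).item()])
```

Because the same `from_pretrained` interface spans both PyTorch and TensorFlow implementations, swapping frameworks or checkpoints leaves the surrounding code unchanged, which is the portability claim the abstract makes.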
TL;DR: The results show that, from birth, human infants prefer to look at faces that engage them in mutual gaze and that, at an early age, healthy babies show enhanced neural processing of direct gaze.
Abstract: Making eye contact is the most powerful mode of establishing a communicative link between humans. During their first year of life, infants learn rapidly that the looking behaviors of others convey significant information. Two experiments were carried out to demonstrate special sensitivity to direct eye contact from birth. The first experiment tested the ability of 2- to 5-day-old newborns to discriminate between direct and averted gaze. In the second experiment, we measured 4-month-old infants' brain electric activity to assess neural processing of faces when accompanied by direct (as opposed to averted) eye gaze. The results show that, from birth, human infants prefer to look at faces that engage them in mutual gaze and that, from an early age, healthy babies show enhanced neural processing of direct gaze. The exceptionally early sensitivity to mutual gaze demonstrated in these studies is arguably the major foundation for the later development of social skills.
1,199 citations
TL;DR: A list from A to Z of twenty-six proposals regarding what "good" QCA-based research entails is presented, covering QCA both as a research approach and as an analytical technique.
Abstract: As a relatively new methodological tool, QCA is still a work in progress. Standards of good practice are needed in order to enhance the quality of its applications. We present a list from A to Z of twenty-six proposals regarding what "good" QCA-based research entails, both with regard to QCA as a research approach and as an analytical technique. Our suggestions are subdivided into three categories: criteria referring to the research stages before, during, and after the analytical moment of data analysis. This listing can be read as a guideline for authors, reviewers, and readers of QCA.
975 citations
Authors
Name | H-index | Papers | Citations
---|---|---|---
Albert-László Barabási | 152 | 438 | 200119 |
Dan Sperber | 67 | 207 | 32068 |
Gergely Csibra | 67 | 172 | 16635 |
Herbert Gintis | 66 | 269 | 35339 |
János Kertész | 64 | 369 | 19276 |
Rosario N. Mantegna | 62 | 268 | 20543 |
Saul Estrin | 58 | 359 | 16448 |
Philippe C. Schmitter | 51 | 167 | 17240 |
Günther Knoblich | 49 | 156 | 10789 |
Robert J. Willis | 49 | 125 | 14068 |
János Kornai | 45 | 203 | 13830 |
Philip N. Howard | 44 | 129 | 8566 |
Milos R. Popovic | 44 | 312 | 7458 |
Ernest Gellner | 42 | 166 | 11173 |
David Stark | 41 | 133 | 8238 |