Institution

Carnegie Mellon University

Education · Pittsburgh, Pennsylvania, United States
About: Carnegie Mellon University is an education organization based in Pittsburgh, Pennsylvania, United States. It is known for its research contributions in the topics of Population & Robot. The organization has 36,317 authors who have published 104,359 publications receiving 5,975,734 citations. The organization is also known as CMU or Carnegie Mellon.


Papers
Journal Article
TL;DR: In this paper, the authors study a class of models in which the law of motion perceived by agents influences the actual law of motion that they face, and show how the perceived and actual laws may converge to one another, depending on the behavior of a particular ordinary differential equation.

818 citations
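As a rough sketch of the setup the summary above describes (notation is illustrative, not necessarily the paper's): agents forecast with a perceived law of motion parameterized by beta, which induces an actual law of motion with parameter T(beta), and least-squares learning can settle only on fixed points of T that are stable under an associated ordinary differential equation.

```latex
% Perceived vs. actual law of motion (schematic; notation not the paper's):
\[
  \text{perceived: } y_t = \beta^\top x_{t-1} + \varepsilon_t ,
  \qquad
  \text{actual: } y_t = T(\beta)^\top x_{t-1} + u_t .
\]
% Convergence of the learning dynamics is governed by the ODE
\[
  \frac{d\beta}{d\tau} = T(\beta) - \beta ,
\]
% whose locally stable rest points \beta^* = T(\beta^*) are the possible
% limits of the learning process.
```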

Journal Article
15 Jun 2018 · Science
TL;DR: This work reveals the possibility of pushing magnetic information storage to the atomically thin limit and highlights CrI3 as a superlative magnetic tunnel barrier for vdW heterostructure spintronic devices.
Abstract: Magnetic multilayer devices that exploit magnetoresistance are the backbone of magnetic sensing and data storage technologies. Here, we report multiple-spin-filter magnetic tunnel junctions (sf-MTJs) based on van der Waals (vdW) heterostructures in which atomically thin chromium triiodide (CrI3) acts as a spin-filter tunnel barrier sandwiched between graphene contacts. We demonstrate tunneling magnetoresistance that is drastically enhanced with increasing CrI3 layer thickness, reaching a record 19,000% for magnetic multilayer structures using four-layer sf-MTJs at low temperatures. Using magnetic circular dichroism measurements, we attribute these effects to the intrinsic layer-by-layer antiferromagnetic ordering of the atomically thin CrI3. Our work reveals the possibility to push magnetic information storage to the atomically thin limit and highlights CrI3 as a superlative magnetic tunnel barrier for vdW heterostructure spintronic devices.

818 citations
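For reference, the 19,000% figure quoted above is a tunneling magnetoresistance ratio; the definition below is the conventional one (not taken verbatim from the paper), comparing the high-resistance antiparallel (layered-antiferromagnetic) state with the low-resistance fully spin-aligned state.

```latex
\[
  \mathrm{TMR} \;=\; \frac{R_{\mathrm{AP}} - R_{\mathrm{P}}}{R_{\mathrm{P}}} \times 100\% ,
\]
% so, with this definition, TMR = 19{,}000\% corresponds to the antiparallel-state
% resistance being roughly 191 times the fully aligned (parallel) resistance.
```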

Proceedings Article
23 Feb 2013
TL;DR: This paper presents a lightweight graph processing framework designed specifically for shared-memory parallel/multicore machines, which makes graph traversal algorithms easy to write; the resulting implementations are significantly more efficient than previously reported results from graph frameworks running on machines with many more cores.
Abstract: There has been significant recent interest in parallel frameworks for processing graphs due to their applicability in studying social networks, the Web graph, networks in biology, and unstructured meshes in scientific simulation. Due to the desire to process large graphs, these systems have emphasized the ability to run on distributed memory machines. Today, however, a single multicore server can support more than a terabyte of memory, which can fit graphs with tens or even hundreds of billions of edges. Furthermore, for graph algorithms, shared-memory multicores are generally significantly more efficient on a per core, per dollar, and per joule basis than distributed memory systems, and shared-memory algorithms tend to be simpler than their distributed counterparts. In this paper, we present a lightweight graph processing framework that is specific for shared-memory parallel/multicore machines, which makes graph traversal algorithms easy to write. The framework has two very simple routines, one for mapping over edges and one for mapping over vertices. Our routines can be applied to any subset of the vertices, which makes the framework useful for many graph traversal algorithms that operate on subsets of the vertices. Based on recent ideas used in a very fast algorithm for breadth-first search (BFS), our routines automatically adapt to the density of vertex sets. We implement several algorithms in this framework, including BFS, graph radii estimation, graph connectivity, betweenness centrality, PageRank and single-source shortest paths. Our algorithms expressed using this framework are very simple and concise, and perform almost as well as highly optimized code. Furthermore, they get good speedups on a 40-core machine and are significantly more efficient than previously reported results using graph frameworks on machines with many more cores.

816 citations
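As a rough illustration of the two-primitive interface the abstract above describes, here is a minimal, sequential Python sketch. The names edge_map/vertex_map and the set-based frontier are illustrative only; this is not the framework's actual API, which is parallel and heavily optimized. BFS needs only the edge-mapping primitive, while algorithms such as PageRank also use the vertex-mapping one.

```python
from collections import defaultdict

def edge_map(graph, frontier, update, cond):
    """Apply `update` to every edge (u, v) with u in the frontier and cond(v)
    true; return the set of targets v for which `update` succeeded."""
    out = set()
    for u in frontier:
        for v in graph[u]:
            if cond(v) and update(u, v):
                out.add(v)
    return out

def vertex_map(vertices, f):
    """Apply f to every vertex; keep those for which f returns True."""
    return {v for v in vertices if f(v)}

def bfs(graph, source):
    """Breadth-first search written against the two primitives above."""
    parent = {source: source}

    def claim(u, v):
        # Mark v as visited the first time an edge reaches it.
        if v not in parent:
            parent[v] = u
            return True
        return False

    frontier = {source}
    while frontier:
        frontier = edge_map(graph, frontier, claim,
                            cond=lambda v: v not in parent)
    return parent

if __name__ == "__main__":
    g = defaultdict(list, {0: [1, 2], 1: [3], 2: [3], 3: [4]})
    print(bfs(g, 0))  # parent pointers, e.g. {0: 0, 1: 0, 2: 0, 3: 1, 4: 3}
```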

Journal Article
TL;DR: In this article, the authors evaluate and improve on the most important design decisions made by Hinton and Shallice, relating to the task, the network architecture, the training...
Abstract: Deep dyslexia is an acquired reading disorder marked by the occurrence of semantic errors (e.g. reading RIVER as "ocean"). In addition, patients exhibit a number of other symptoms, including visual and morphological effects in their errors, a part-of-speech effect, and an advantage for concrete over abstract words. Deep dyslexia poses a distinct challenge for cognitive neuropsychology because there is little understanding of why such a variety of symptoms should co-occur in virtually all known patients. Hinton and Shallice (1991) replicated the co-occurrence of visual and semantic errors by lesioning a recurrent connectionist network trained to map from orthography to semantics. Although the success of their simulations is encouraging, there is little understanding of what underlying principles are responsible for them. In this paper we evaluate and, where possible, improve on the most important design decisions made by Hinton and Shallice, relating to the task, the network architecture, the training...

816 citations
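The "lesioning" methodology mentioned in the abstract above amounts to removing a fraction of a trained network's connections and observing the resulting error pattern. A minimal illustration of that operation, applied to an arbitrary toy weight matrix rather than the actual orthography-to-semantics model, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weight matrix standing in for one layer of a trained network
# (values are arbitrary, not a trained orthography-to-semantics model).
W = rng.normal(size=(20, 50))

def lesion(weights, severity, rng):
    """Return a copy of `weights` with a random fraction `severity` of the
    connections removed (set to zero), mimicking damage to the network."""
    keep = rng.random(weights.shape) >= severity
    return weights * keep

W_damaged = lesion(W, severity=0.3, rng=rng)
print("fraction of connections removed:",
      float(np.mean(W_damaged == 0)))
```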

Journal Article
TL;DR: Three aspects of motivational opponency involving dopamine and serotonin are considered; it is suggested that a tonic serotonergic signal reports the long-run average reward rate as part of an average-case reinforcement learning model, that a tonic dopaminergic signal reports the long-run average punishment rate in a similar context, and, more speculatively, that a phasic serotonergic signal might report an ongoing prediction error for future punishment.

815 citations
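The "average-case reinforcement learning model" referred to above is commonly written in terms of an average-reward temporal-difference error; the equation below is that standard formulation in my notation, not necessarily the paper's.

```latex
\[
  \delta_t \;=\; r_t - \bar{r} + V(s_{t+1}) - V(s_t) ,
\]
% where \bar{r} is the long-run average reward rate. In the proposed opponency,
% a tonic signal would carry \bar{r} (or its punishment analogue) while a
% phasic signal carries the prediction error \delta_t.
```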


Authors

Showing all 36645 results

Name | H-index | Papers | Citations
Yi Chen | 217 | 4,342 | 2,293,080
Rakesh K. Jain | 200 | 1,467 | 177,727
Robert C. Nichol | 187 | 851 | 162,994
Michael I. Jordan | 176 | 1,016 | 216,204
Jasvinder A. Singh | 176 | 2,382 | 223,370
J. N. Butler | 172 | 2,525 | 175,561
P. Chang | 170 | 2,154 | 151,783
Krzysztof Matyjaszewski | 169 | 1,431 | 128,585
Yang Yang | 164 | 2,704 | 144,071
Geoffrey E. Hinton | 157 | 414 | 409,047
Herbert A. Simon | 157 | 745 | 194,597
Yongsun Kim | 156 | 2,588 | 145,619
Terrence J. Sejnowski | 155 | 845 | 117,382
John B. Goodenough | 151 | 1,064 | 113,741
Scott Shenker | 150 | 454 | 118,017
Network Information
Related Institutions (5)
Massachusetts Institute of Technology
268K papers, 18.2M citations

95% related

University of Maryland, College Park
155.9K papers, 7.2M citations

93% related

University of Illinois at Urbana–Champaign
225.1K papers, 10.1M citations

93% related

IBM
253.9K papers, 7.4M citations

93% related

Princeton University
146.7K papers, 9.1M citations

92% related

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 120
2022 | 499
2021 | 4,980
2020 | 5,375
2019 | 5,420
2018 | 4,972