About: Princeton University is an education organization based in Princeton, New Jersey, United States. It is known for its research contributions in the topics of Population and Galaxy. The organization has 60542 authors who have published 146772 publications, receiving 9158888 citations. The organization is also known as: College of New Jersey & Princeton.
20 Jun 2009
TL;DR: A new database called “ImageNet” is introduced: a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity, and much more accurate, than existing image datasets.
Abstract: The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.
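The core idea of the abstract — images annotated at synsets and organized by a WordNet-style semantic hierarchy — can be illustrated with a toy sketch. All names and data below are invented for illustration (real WordNet has on the order of 80,000 noun synsets, and ImageNet attaches hundreds of images to each):

```python
# Toy sketch of ImageNet-style organization: images attached to synsets,
# synsets arranged in a WordNet-like "is-a" hierarchy.

# child -> parent ("is-a") links, a miniature stand-in for WordNet
hypernym = {
    "husky": "dog", "beagle": "dog",
    "dog": "mammal", "cat": "mammal",
    "mammal": "animal",
}

# images annotated at leaf synsets
images = {
    "husky": ["husky_001.jpg", "husky_002.jpg"],
    "beagle": ["beagle_001.jpg"],
    "cat": ["cat_001.jpg"],
}

def images_under(synset):
    """Collect every image whose synset lies in the subtree rooted at `synset`."""
    result = list(images.get(synset, []))
    for child, parent in hypernym.items():
        if parent == synset:
            result += images_under(child)
    return result

print(len(images_under("dog")))     # 2 husky + 1 beagle image -> 3
print(len(images_under("animal")))  # everything -> 4
```

Querying any internal node returns the images of its whole subtree, which is what makes the hierarchical annotation useful for tasks at different semantic granularities.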
01 Jan 1979
TL;DR: The relationship between Stimulation and Stimulus Information for visual perception is discussed in detail in this article, where the authors also present experimental evidence for direct perception of motion in the world and movement of the self.
Abstract: Contents: Preface. Introduction. Part I: The Environment To Be Perceived.The Animal And The Environment. Medium, Substances, Surfaces. The Meaningful Environment. Part II: The Information For Visual Perception.The Relationship Between Stimulation And Stimulus Information. The Ambient Optic Array. Events And The Information For Perceiving Events. The Optical Information For Self-Perception. The Theory Of Affordances. Part III: Visual Perception.Experimental Evidence For Direct Perception: Persisting Layout. Experiments On The Perception Of Motion In The World And Movement Of The Self. The Discovery Of The Occluding Edge And Its Implications For Perception. Looking With The Head And Eyes. Locomotion And Manipulation. The Theory Of Information Pickup And Its Consequences. Part IV: Depiction.Pictures And Visual Awareness. Motion Pictures And Visual Awareness. Conclusion. Appendixes: The Principal Terms Used in Ecological Optics. The Concept of Invariants in Ecological Optics.
TL;DR: The revised RECIST includes a new imaging appendix with updated recommendations on the optimal anatomical assessment of lesions, as well as a section on the detection of new lesions, including the interpretation of FDG-PET scan assessments.
Abstract: Background: Assessment of the change in tumour burden is an important feature of the clinical evaluation of cancer therapeutics: both tumour shrinkage (objective response) and disease progression are useful endpoints in clinical trials. Since RECIST was published in 2000, many investigators, cooperative groups, industry and government authorities have adopted these criteria in the assessment of treatment outcomes. However, a number of questions and issues have arisen which have led to the development of a revised RECIST guideline (version 1.1). Evidence for changes, summarised in separate papers in this special issue, has come from assessment of a large data warehouse (>6500 patients), simulation studies and literature reviews. Highlights of revised RECIST 1.1. Major changes include: Number of lesions to be assessed: based on evidence from numerous trial databases merged into a data warehouse for analysis purposes, the number of lesions required to assess tumour burden for response determination has been reduced from a maximum of 10 to a maximum of five in total (and from five to two per organ, maximum). Assessment of pathological lymph nodes is now incorporated: nodes with a short axis of ⩾15 mm are considered measurable and assessable as target lesions. The short axis measurement should be included in the sum of lesions in the calculation of tumour response; nodes that shrink to <10 mm short axis are considered normal. Confirmation of response is required for trials with response as the primary endpoint, but is no longer required in randomised studies, since the control arm serves as an appropriate means of interpretation of data. Disease progression is clarified in several aspects: in addition to the previous definition of progression in target disease of a 20% increase in sum, a 5 mm absolute increase is now required as well, to guard against over-calling PD when the total sum is very small.
Furthermore, there is guidance offered on what constitutes ‘unequivocal progression’ of non-measurable/non-target disease, a source of confusion in the original RECIST guideline. Finally, a section on detection of new lesions, including the interpretation of FDG-PET scan assessment is included. Imaging guidance : the revised RECIST includes a new imaging appendix with updated recommendations on the optimal anatomical assessment of lesions. Future work A key question considered by the RECIST Working Group in developing RECIST 1.1 was whether it was appropriate to move from anatomic unidimensional assessment of tumour burden to either volumetric anatomical assessment or to functional assessment with PET or MRI. It was concluded that, at present, there is not sufficient standardisation or evidence to abandon anatomical assessment of tumour burden. The only exception to this is in the use of FDG-PET imaging as an adjunct to determination of progression. As is detailed in the final paper in this special issue, the use of these promising newer approaches requires appropriate clinical validation studies.
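The progression rule described in the abstract — a 20% relative increase in the sum of target-lesion diameters plus a 5 mm absolute increase — is easy to express as a small check. This is a simplified sketch of that single criterion only, not a full RECIST 1.1 implementation; the function and parameter names are our own:

```python
def progressive_disease(nadir_sum_mm, current_sum_mm):
    """Simplified sketch of the RECIST 1.1 target-lesion progression rule:
    >=20% relative increase over the smallest sum on study (the nadir)
    AND a >=5 mm absolute increase, the latter added in v1.1 to guard
    against over-calling PD when the total sum is very small."""
    increase = current_sum_mm - nadir_sum_mm
    relative = increase / nadir_sum_mm if nadir_sum_mm > 0 else float("inf")
    return relative >= 0.20 and increase >= 5.0

print(progressive_disease(50.0, 62.0))  # +24% and +12 mm -> True
print(progressive_disease(10.0, 13.0))  # +30% but only +3 mm -> False
```

The second example shows exactly the situation the 5 mm floor was introduced for: a small sum can cross the 20% threshold with a clinically trivial absolute change.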
University of Udine, International School for Advanced Studies, National Research Council, Massachusetts Institute of Technology, University of Paris, Princeton University, University of Minnesota, ParisTech, University of Milan, International Centre for Theoretical Physics, University of Paderborn, ETH Zurich, École Polytechnique Fédérale de Lausanne
TL;DR: QUANTUM ESPRESSO is presented: an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials (norm-conserving, ultrasoft, and projector-augmented wave).
Abstract: QUANTUM ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials (norm-conserving, ultrasoft, and projector-augmented wave). The acronym ESPRESSO stands for opEn Source Package for Research in Electronic Structure, Simulation, and Optimization. It is freely available to researchers around the world under the terms of the GNU General Public License. QUANTUM ESPRESSO builds upon newly-restructured electronic-structure codes that have been developed and tested by some of the original authors of novel electronic-structure algorithms and applied in the last twenty years by some of the leading materials modeling groups worldwide. Innovation and efficiency are still its main focus, with special attention paid to massively parallel architectures, and a great effort being devoted to user friendliness. QUANTUM ESPRESSO is evolving towards a distribution of independent and interoperable codes in the spirit of an open-source project, where researchers active in the field of electronic-structure calculations are encouraged to participate in the project by contributing their own codes or by implementing their own ideas into existing codes.
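As a concrete illustration of the plane-wave/pseudopotential workflow the abstract describes, a minimal input file for QUANTUM ESPRESSO's pw.x code (a self-consistent ground-state run for bulk silicon) might look like the sketch below. The pseudopotential filename, cutoff, and k-point mesh are placeholder values for illustration; real calculations require convergence testing and the official input documentation:

```
&CONTROL
  calculation = 'scf'        ! self-consistent-field ground-state run
  prefix      = 'silicon'
  pseudo_dir  = './pseudo/'  ! directory holding the pseudopotential file
/
&SYSTEM
  ibrav     = 2              ! fcc Bravais lattice
  celldm(1) = 10.26          ! lattice parameter in Bohr
  nat       = 2              ! two atoms in the primitive cell
  ntyp      = 1
  ecutwfc   = 30.0           ! plane-wave cutoff in Ry (must be converged)
/
&ELECTRONS
  conv_thr = 1.0d-8          ! SCF convergence threshold
/
ATOMIC_SPECIES
  Si  28.086  Si.pbe-n-rrkjus_psl.UPF
ATOMIC_POSITIONS (crystal)
  Si 0.00 0.00 0.00
  Si 0.25 0.25 0.25
K_POINTS (automatic)
  4 4 4 0 0 0
```

The Fortran-namelist blocks (&CONTROL, &SYSTEM, &ELECTRONS) carry the numerical parameters, while the trailing "cards" specify species, positions, and Brillouin-zone sampling — the division of labour typical of the suite's codes.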
TL;DR: WordNet, an online lexical database designed for use under program control, provides a more effective combination of traditional lexicographic information and modern computing.
Abstract: Because meaningful sentences are composed of meaningful words, any system that hopes to process natural languages as people do must have information about words and their meanings. This information is traditionally provided through dictionaries, and machine-readable dictionaries are now widely available. But dictionary entries evolved for the convenience of human readers, not for machines. WordNet provides a more effective combination of traditional lexicographic information and modern computing. WordNet is an online lexical database designed for use under program control. English nouns, verbs, adjectives, and adverbs are organized into sets of synonyms, each representing a lexicalized concept. Semantic relations link the synonym sets.
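The organization the abstract describes — words grouped into synonym sets, with semantic relations linking the sets — can be sketched with a toy structure. The data and function names below are invented (real WordNet is far larger, and libraries such as NLTK expose the actual database under program control):

```python
# Toy sketch of WordNet-style synsets: each lexicalized concept is a set
# of synonymous words, and hypernymy links relate the sets.

synsets = {
    "car.n.01":     frozenset({"car", "auto", "automobile"}),
    "vehicle.n.01": frozenset({"vehicle"}),
    "dog.n.01":     frozenset({"dog", "domestic dog"}),
    "animal.n.01":  frozenset({"animal", "creature"}),
}

# one semantic relation: synset -> its hypernym (more general concept)
hypernym = {
    "car.n.01": "vehicle.n.01",
    "dog.n.01": "animal.n.01",
}

def synsets_of(word):
    """All synsets containing the word -- one entry per word sense."""
    return [sid for sid, words in synsets.items() if word in words]

def is_a(word, concept):
    """True if some sense of `word` lies under `concept` via hypernym links."""
    for sid in synsets_of(word):
        while sid is not None:
            if concept in synsets[sid]:
                return True
            sid = hypernym.get(sid)
    return False

print(synsets_of("car"))        # ['car.n.01']
print(is_a("auto", "vehicle"))  # True: car.n.01 -> vehicle.n.01
```

Because lookup goes through synsets rather than individual words, every synonym in a set inherits the set's semantic relations for free — the property that makes the database useful under program control.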
| Author | H-index | Papers | Citations |
|---|---|---|---|
| Donald P. Schneider | 242 | 1622 | 263641 |
| David J. Schlegel | 193 | 600 | 193972 |
| Michael A. Strauss | 185 | 1688 | 208506 |
| David H. Weinberg | 183 | 700 | 171424 |
| Robert H. Lupton | 179 | 415 | 151608 |
| Bruce M. Spiegelman | 179 | 434 | 158009 |
| Daniel J. Eisenstein | 179 | 672 | 151720 |
| David A. Weitz | 178 | 1038 | 114182 |
| David R. Williams | 178 | 2034 | 138789 |
| Michael I. Jordan | 176 | 1016 | 216204 |
| Timothy M. Heckman | 170 | 754 | 141237 |
Related Institutions (5)
- Massachusetts Institute of Technology: 268K papers, 18.2M citations
- University of California, Berkeley: 265.6K papers, 16.8M citations
- Max Planck Society: 406.2K papers, 19.5M citations