Institution

University of Massachusetts Amherst

Education · Amherst Center, Massachusetts, United States
About: University of Massachusetts Amherst is an education organization based in Amherst Center, Massachusetts, United States. It is known for research contributions in the topics Population and Galaxy. The organization has 37,274 authors who have published 83,965 publications receiving 3,834,996 citations. It is also known as UMass Amherst and Massachusetts State College.


Papers
Journal Article · DOI
Bernard Aubert, A. Bazan, A. Boucham, D. Boutigny, +816 more · Institutions (68)
TL;DR: BABAR, as discussed by the authors, is the detector for the SLAC PEP-II asymmetric e+e− B Factory operating at the Υ(4S) resonance, which allows comprehensive studies of CP violation in B-meson decays.
Abstract: BABAR, the detector for the SLAC PEP-II asymmetric e+e− B Factory operating at the Υ(4S) resonance, was designed to allow comprehensive studies of CP violation in B-meson decays. Charged particle tracks are measured in a multi-layer silicon vertex tracker surrounded by a cylindrical wire drift chamber. Electromagnetic showers from electrons and photons are detected in an array of CsI crystals located just inside the solenoidal coil of a superconducting magnet. Muons and neutral hadrons are identified by arrays of resistive plate chambers inserted into gaps in the steel flux return of the magnet. Charged hadrons are identified by dE/dx measurements in the tracking detectors and in a ring-imaging Cherenkov detector surrounding the drift chamber. The trigger, data acquisition and data-monitoring systems, VME- and network-based, are controlled by custom-designed online software. Details of the layout and performance of the detector components and their associated electronics and software are presented.
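
The ring-imaging Cherenkov identification mentioned above rests on the relation cos θc = 1/(nβ). Below is a minimal illustrative sketch (not BABAR software; the fused-silica refractive index and the momentum values are assumptions) showing how the expected ring angle separates pions from kaons, and why the separation shrinks as momentum grows.

```python
import math

# Illustrative only, not BABAR code. The refractive index of fused
# silica (~1.473) is an assumed value here.
N_FUSED_SILICA = 1.473
MASS = {"pi": 0.1396, "K": 0.4937, "p": 0.9383}  # GeV/c^2

def cherenkov_angle(p_gev, mass_gev, n=N_FUSED_SILICA):
    """Expected Cherenkov angle (rad) for a track of momentum p:
    cos(theta_c) = 1 / (n * beta), with beta = p / sqrt(p^2 + m^2)."""
    beta = p_gev / math.sqrt(p_gev**2 + mass_gev**2)
    cos_tc = 1.0 / (n * beta)
    if cos_tc > 1.0:
        return None  # below Cherenkov threshold: no ring is produced
    return math.acos(cos_tc)

# Pion/kaon ring-angle separation shrinks with momentum:
for p in (1.0, 2.0, 4.0):
    print(p, cherenkov_angle(p, MASS["pi"]), cherenkov_angle(p, MASS["K"]))
```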

789 citations

Journal Article · DOI
TL;DR: In this paper, the authors examined the contribution of central and satellite galaxies to the HOD, more specifically to the probability P(N|M) that a halo of virial mass M contains N galaxies of a particular class.
Abstract: The halo occupation distribution (HOD) describes the relation between galaxies and dark matter at the level of individual dark matter halos. The properties of galaxies residing at the centers of halos differ from those of satellite galaxies because of differences in their formation histories. Using a smoothed particle hydrodynamics (SPH) simulation and a semianalytic (SA) galaxy formation model, we examine the separate contributions of central and satellite galaxies to the HOD, more specifically to the probability P(N|M) that a halo of virial mass M contains N galaxies of a particular class. In agreement with earlier results for dark matter subhalos, we find that the mean occupation function ⟨N⟩_M for galaxies above a baryonic mass threshold can be approximated by a step function for central galaxies plus a power law for satellites and that the distribution of satellite numbers is close to Poisson at fixed halo mass. Since the number of central galaxies is always zero or one, the width of P(N|M) is narrower than a Poisson distribution at low N and approaches Poisson at high N. For galaxy samples defined by different baryonic mass thresholds, there is a nearly linear relation between the minimum halo mass Mmin required to host a central galaxy and the mass M1 at which an average halo hosts one satellite, with M1 ≈ 14Mmin (SPH) or M1 ≈ 18Mmin (SA). The stellar population age of central galaxies correlates with halo mass, and this correlation explains much of the age dependence of the galaxy HOD. The mean occupation number of young galaxies exhibits a local minimum at M ~ 10Mmin where halos are too massive to host a young central galaxy but not massive enough to host satellites. Using the SA model, we show that the conditional galaxy mass function at fixed halo mass cannot be described by a Schechter function because central galaxies produce a "bump" at high masses. We suggest parameterizations for the HOD and the conditional luminosity function that can be used to model observed galaxy clustering. Many of our predictions are in good agreement with recent results inferred from clustering in the Sloan Digital Sky Survey.
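
The occupation form described above can be sketched directly: ⟨N⟩_M ≈ Θ(M − Mmin) + (M/M1)^α, with satellite counts Poisson at fixed halo mass. In the snippet below the structure follows the abstract, but the threshold mass M_MIN and the slope ALPHA are hypothetical illustrative values; only the ratio M1 = 14 Mmin is taken from the SPH result quoted above.

```python
import numpy as np

# Minimal sketch of the HOD form described above: a step function for
# central galaxies plus a power law for satellites, with satellite
# counts Poisson at fixed halo mass. M_MIN and ALPHA are assumed
# illustrative values; M1 = 14 * M_MIN uses the SPH ratio in the abstract.
M_MIN = 1e12      # halo mass threshold for hosting a central galaxy [M_sun]
M1 = 14 * M_MIN   # mass at which an average halo hosts one satellite
ALPHA = 1.0       # satellite power-law slope (hypothetical)

def mean_occupation(m_halo):
    """<N>_M = step(M > M_min) + (M / M1)^alpha."""
    central = 1.0 if m_halo >= M_MIN else 0.0
    satellites = (m_halo / M1) ** ALPHA if central else 0.0
    return central + satellites

def sample_galaxy_count(m_halo, rng=None):
    """Draw N for one halo: 0 or 1 central, plus Poisson satellites."""
    rng = rng or np.random.default_rng()
    if m_halo < M_MIN:
        return 0
    return 1 + rng.poisson((m_halo / M1) ** ALPHA)
```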

786 citations

Posted Content
TL;DR: Conditional Random Fields (CRFs) as discussed by the authors are a popular probabilistic method for structured prediction and have seen wide application in natural language processing, computer vision, and bioinformatics.
Abstract: Often we wish to predict a large number of variables that depend on each other as well as on other observed variables. Structured prediction methods are essentially a combination of classification and graphical modeling, combining the ability of graphical models to compactly model multivariate data with the ability of classification methods to perform prediction using large sets of input features. This tutorial describes conditional random fields, a popular probabilistic method for structured prediction. CRFs have seen wide application in natural language processing, computer vision, and bioinformatics. We describe methods for inference and parameter estimation for CRFs, including practical issues for implementing large scale CRFs. We do not assume previous knowledge of graphical modeling, so this tutorial is intended to be useful to practitioners in a wide variety of fields.
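
To make the tutorial's subject concrete, here is a minimal linear-chain CRF decoding sketch: given per-step emission scores and a label-transition matrix, Viterbi recovers the highest-scoring label sequence. All arrays, shapes, and scores are hypothetical; a full CRF implementation would also need forward-backward inference for parameter estimation, as the tutorial describes.

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Most probable label sequence for a linear-chain CRF.

    emissions:   (T, K) array; emissions[t, k] = score of label k at step t
                 (in a trained CRF, a weighted sum of input features).
    transitions: (K, K) array; transitions[i, j] = score of moving from
                 label i to label j.
    """
    T, K = emissions.shape
    score = emissions[0].copy()            # best score ending in each label
    backptr = np.zeros((T, K), dtype=int)  # best previous label per step
    for t in range(1, T):
        # cand[i, j] = best path ending in label i, then stepping to j
        cand = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    # Trace back the highest-scoring path.
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

# Toy usage with 4 time steps and 3 labels (all scores hypothetical):
rng = np.random.default_rng(0)
print(viterbi_decode(rng.normal(size=(4, 3)), rng.normal(size=(3, 3))))
```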

785 citations

Journal Article · DOI
TL;DR: Broad application of microbial fuel cells will require substantial increases in current density; a better understanding of the microbiology of these systems may help.

783 citations

Proceedings Article · DOI
19 May 2019
TL;DR: The reasons why deep learning models may leak information about their training data are investigated, and new algorithms tailored to the white-box setting are designed by exploiting the privacy vulnerabilities of stochastic gradient descent, the algorithm used to train deep neural networks.
Abstract: Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies.
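
A hedged sketch of the kind of white-box signal the paper describes: per-example gradients of the training loss, which SGD shapes differently for members and non-members of the training set. The PyTorch snippet below is illustrative only (the model, data, and any decision threshold are hypothetical, not the paper's code); it computes a gradient-norm feature that a membership inference attack could threshold.

```python
import torch
import torch.nn as nn

# Hypothetical sketch, not the paper's implementation. White-box
# membership inference can use per-example gradient statistics as
# features, since SGD leaves member examples with systematically
# different loss-gradient magnitudes than non-members.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()

def grad_norm_feature(x, y):
    """L2 norm of d(loss)/d(params) for a single example (x, y)."""
    model.zero_grad()
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, model.parameters())
    return torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()

# A simple (hypothetical) attack would flag examples whose gradient
# norm falls below a threshold tuned on known member/non-member data.
x, y = torch.randn(32), torch.tensor(3)
print(grad_norm_feature(x, y))
```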

783 citations


Authors


Name | H-index | Papers | Citations
George M. Whitesides | 240 | 1,739 | 269,833
Joan Massagué | 189 | 408 | 149,951
David H. Weinberg | 183 | 700 | 171,424
David L. Kaplan | 177 | 1,944 | 146,082
Michael I. Jordan | 176 | 1,016 | 216,204
James F. Sallis | 169 | 825 | 144,836
Bradley T. Hyman | 169 | 765 | 136,098
Anton M. Koekemoer | 168 | 1,127 | 106,796
Derek R. Lovley | 168 | 582 | 95,315
Michel C. Nussenzweig | 165 | 516 | 87,665
Alfred L. Goldberg | 156 | 474 | 88,296
Donna Spiegelman | 152 | 804 | 85,428
Susan E. Hankinson | 151 | 789 | 88,297
Bernard Moss | 147 | 830 | 76,991
Roger J. Davis | 147 | 498 | 103,478
Network Information
Related Institutions (5)
Institution | Papers | Citations | Related
Cornell University | 235.5K | 12.2M | 96%
University of Illinois at Urbana–Champaign | 225.1K | 10.1M | 96%
University of Minnesota | 257.9K | 11.9M | 96%
University of Wisconsin-Madison | 237.5K | 11.8M | 95%
University of Toronto | 294.9K | 13.5M | 94%

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 103
2022 | 535
2021 | 3,983
2020 | 3,858
2019 | 3,712
2018 | 3,385