
Showing papers by "University of Massachusetts Amherst" published in 2015


Proceedings ArticleDOI
07 Dec 2015
TL;DR: In this article, a CNN architecture is proposed to combine information from multiple views of a 3D shape into a single and compact shape descriptor, which can be applied to accurately recognize human hand-drawn sketches of shapes.
Abstract: A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives.
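The view-pooling idea described above is straightforward to prototype. The following is a hedged sketch, not the authors' released code: a shared 2D CNN backbone encodes each rendered view, and an element-wise max across views produces a single compact shape descriptor; the backbone choice, number of views, and layer sizes are illustrative assumptions.

```python
# Hedged sketch of multi-view pooling for 3D shape recognition; an assumed prototype,
# not the paper's released implementation. Uses the torchvision >= 0.13 weights API.
import torch
import torch.nn as nn
from torchvision import models

class MultiViewCNN(nn.Module):
    def __init__(self, num_classes=40):
        super().__init__()
        backbone = models.vgg11(weights=None)   # any 2D CNN backbone works in principle
        self.features = backbone.features       # weights shared across all rendered views
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.classifier = nn.Sequential(
            nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, views):                    # views: (batch, num_views, 3, H, W)
        b, v, c, h, w = views.shape
        x = self.features(views.reshape(b * v, c, h, w))
        x = self.pool(x).reshape(b, v, -1)       # one descriptor per rendered view
        x, _ = x.max(dim=1)                      # element-wise max across views ("view pooling")
        return self.classifier(x)

# Usage: logits = MultiViewCNN()(torch.randn(2, 12, 3, 224, 224))
```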

2,195 citations


Journal ArticleDOI
TL;DR: In this paper, the authors review important mechanisms that contribute towards elevation-dependent warming, such as snow albedo and surface-based feedbacks; water vapour changes and latent heat release; surface water vapour and radiative flux changes; surface heat loss and temperature change; and aerosols.
Abstract: There is growing evidence that the rate of warming is amplified with elevation, such that high-mountain environments experience more rapid changes in temperature than environments at lower elevations. Elevation-dependent warming (EDW) can accelerate the rate of change in mountain ecosystems, cryospheric systems, hydrological regimes and biodiversity. Here we review important mechanisms that contribute towards EDW: snow albedo and surface-based feedbacks; water vapour changes and latent heat release; surface water vapour and radiative flux changes; surface heat loss and temperature change; and aerosols. All lead to enhanced warming with elevation (or at a critical elevation), and it is believed that combinations of these mechanisms may account for contrasting regional patterns of EDW. We discuss future needs to increase knowledge of mountain temperature trends and their controlling mechanisms through improved observations, satellite-based remote sensing and model simulations.

1,628 citations


Journal ArticleDOI
TL;DR: In this paper, a photoactive layer made from a newly developed semiconducting polymer with a deepened valence energy level is used to reduce the tail state density below the conduction band of the electron acceptor.
Abstract: Organic solar cells with efficiency greater than 10% are fabricated by incorporating a semiconducting polymer with a deepened valence energy level. Polymer solar cells are an exciting class of next-generation photovoltaics, because they hold promise for the realization of mechanically flexible, lightweight, large-area devices that can be fabricated by room-temperature solution processing1,2. High power conversion efficiencies of ∼10% have already been reported in tandem polymer solar cells3. Here, we report that similar efficiencies are achievable in single-junction devices by reducing the tail state density below the conduction band of the electron acceptor in a high-performance photoactive layer made from a newly developed semiconducting polymer with a deepened valence energy level. Control over band tailing is realized through changes in the composition of the active layer and the structural order of the blend, both of which are known to be important factors in cell operation4,5,6. The approach yields cells with high power conversion efficiencies (∼9.94% certified) and enhanced photovoltage.

1,585 citations


Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, Ovsat Abdinov +5,117 more (314 institutions)
TL;DR: A measurement of the Higgs boson mass is presented based on the combined data samples of the ATLAS and CMS experiments at the CERN LHC in the H→γγ and H→ZZ→4ℓ decay channels.
Abstract: A measurement of the Higgs boson mass is presented based on the combined data samples of the ATLAS and CMS experiments at the CERN LHC in the H→γγ and H→ZZ→4l decay channels. The results are obtained from a simultaneous fit to the reconstructed invariant mass peaks in the two channels and for the two experiments. The measured masses from the individual channels and the two experiments are found to be consistent among themselves. The combined measured mass of the Higgs boson is mH=125.09±0.21 (stat)±0.11 (syst) GeV.
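For orientation only (simple arithmetic on the quoted numbers, not an additional result from the paper): if the statistical and systematic components are treated as independent, they combine in quadrature to a total uncertainty of

$$\delta m_H = \sqrt{0.21^2 + 0.11^2}\ \mathrm{GeV} \approx 0.24\ \mathrm{GeV}, \qquad m_H = 125.09 \pm 0.24\ \mathrm{GeV}.$$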

1,567 citations


Posted Content
TL;DR: This work presents a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and shows that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors.
Abstract: A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives.

1,508 citations


Journal ArticleDOI
TL;DR: With a much more complete understanding of the endocrine principles by which EDCs act, including nonmonotonic dose-responses, low-dose effects, and developmental vulnerability, these findings can be much better translated to human health.
Abstract: The Endocrine Society's first Scientific Statement in 2009 provided a wake-up call to the scientific community about how environmental endocrine-disrupting chemicals (EDCs) affect health and disease. Five years later, a substantially larger body of literature has solidified our understanding of plausible mechanisms underlying EDC actions and how exposures in animals and humans-especially during development-may lay the foundations for disease later in life. At this point in history, we have much stronger knowledge about how EDCs alter gene-environment interactions via physiological, cellular, molecular, and epigenetic changes, thereby producing effects in exposed individuals as well as their descendants. Causal links between exposure and manifestation of disease are substantiated by experimental animal models and are consistent with correlative epidemiological data in humans. There are several caveats because differences in how experimental animal work is conducted can lead to difficulties in drawing broad conclusions, and we must continue to be cautious about inferring causality in humans. In this second Scientific Statement, we reviewed the literature on a subset of topics for which the translational evidence is strongest: 1) obesity and diabetes; 2) female reproduction; 3) male reproduction; 4) hormone-sensitive cancers in females; 5) prostate; 6) thyroid; and 7) neurodevelopment and neuroendocrine systems. Our inclusion criteria for studies were those conducted predominantly in the past 5 years deemed to be of high quality based on appropriate negative and positive control groups or populations, adequate sample size and experimental design, and mammalian animal studies with exposure levels in a range that was relevant to humans. We also focused on studies using the developmental origins of health and disease model. No report was excluded based on a positive or negative effect of the EDC exposure. The bulk of the results across the board strengthen the evidence for endocrine health-related actions of EDCs. Based on this much more complete understanding of the endocrine principles by which EDCs act, including nonmonotonic dose-responses, low-dose effects, and developmental vulnerability, these findings can be much better translated to human health. Armed with this information, researchers, physicians, and other healthcare providers can guide regulators and policymakers as they make responsible decisions.

1,423 citations


Proceedings ArticleDOI
07 Dec 2015
TL;DR: Bilinear models, a recognition architecture that consists of two feature extractors whose outputs are multiplied using the outer product at each location of the image and pooled to obtain an image descriptor, are proposed.
Abstract: We propose bilinear models, a recognition architecture that consists of two feature extractors whose outputs are multiplied using the outer product at each location of the image and pooled to obtain an image descriptor. This architecture can model local pairwise feature interactions in a translationally invariant manner, which is particularly useful for fine-grained categorization. It also generalizes various orderless texture descriptors such as the Fisher vector, VLAD and O2P. We present experiments with bilinear models where the feature extractors are based on convolutional neural networks. The bilinear form simplifies gradient computation and allows end-to-end training of both networks using image labels only. Using networks initialized from the ImageNet dataset followed by domain-specific fine-tuning, we obtain 84.1% accuracy on the CUB-200-2011 dataset requiring only category labels at training time. We present experiments and visualizations that analyze the effects of fine-tuning and the choice of the two networks on the speed and accuracy of the models. Results show that the architecture compares favorably to the existing state of the art on a number of fine-grained datasets while being substantially simpler and easier to train. Moreover, our most accurate model is fairly efficient, running at 8 frames/sec on a NVIDIA Tesla K40 GPU. The source code for the complete system will be made available at http://vis-www.cs.umass.edu/bcnn.
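The bilinear pooling step described above is compact enough to sketch directly. The helper below is a hedged illustration, not the released B-CNN code: it takes two conv feature maps over the same spatial grid, sums their outer products over locations, and applies the signed square-root and L2 normalizations; all names are illustrative.

```python
# Hedged sketch of bilinear pooling of two conv feature maps; not the authors' released code.
import torch

def bilinear_pool(fa: torch.Tensor, fb: torch.Tensor) -> torch.Tensor:
    """fa: (B, Ca, H, W), fb: (B, Cb, H, W) -> (B, Ca*Cb) image descriptor."""
    B, Ca, H, W = fa.shape
    Cb = fb.shape[1]
    fa = fa.reshape(B, Ca, H * W)
    fb = fb.reshape(B, Cb, H * W)
    x = torch.bmm(fa, fb.transpose(1, 2)) / (H * W)        # outer products summed over locations
    x = x.reshape(B, Ca * Cb)
    x = torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-10)   # signed square-root normalization
    return torch.nn.functional.normalize(x, dim=1)          # L2 normalization
```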

1,386 citations


Journal ArticleDOI
TL;DR: It is found that the models designed specifically for salient object detection generally work better than models in closely related areas, which provides a precise definition and suggests an appropriate treatment of this problem that distinguishes it from other problems.
Abstract: We extensively compare, qualitatively and quantitatively, 41 state-of-the-art models (29 salient object detection, 10 fixation prediction, 1 objectness, and 1 baseline) over seven challenging data sets for the purpose of benchmarking salient object detection and segmentation methods. From the results obtained so far, our evaluation shows a consistent rapid progress over the last few years in terms of both accuracy and running time. The top contenders in this benchmark significantly outperform the models identified as the best in the previous benchmark conducted three years ago. We find that the models designed specifically for salient object detection generally work better than models in closely related areas, which in turn provides a precise definition and suggests an appropriate treatment of this problem that distinguishes it from other problems. In particular, we analyze the influences of center bias and scene complexity in model performance, which, along with the hard cases for the state-of-the-art models, provide useful hints toward constructing more challenging large-scale data sets and better saliency models. Finally, we propose probable solutions for tackling several open problems, such as evaluation scores and data set bias, which also suggest future research directions in the rapidly growing field of salient object detection.
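As an illustration of the kind of score such benchmarks typically report (an assumed example, not code or a metric taken from this paper), the F-measure with beta^2 = 0.3, computed from a binarized saliency map, is a common choice:

```python
# Hedged sketch of an F-beta score for salient object detection (beta^2 = 0.3 weights precision);
# the adaptive threshold and all names are illustrative assumptions, not this benchmark's code.
import numpy as np

def f_beta(saliency: np.ndarray, gt_mask: np.ndarray, beta2: float = 0.3) -> float:
    """saliency: float map in [0, 1]; gt_mask: binary ground-truth mask of the salient object."""
    thresh = min(2.0 * float(saliency.mean()), 1.0)   # a commonly used adaptive threshold
    pred = saliency >= thresh
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    denom = beta2 * precision + recall
    return float((1 + beta2) * precision * recall / denom) if denom > 0 else 0.0
```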

1,372 citations


Journal ArticleDOI
J. Aasi, J. Abadie, B. P. Abbott, Richard J. Abbott +884 more (98 institutions)
TL;DR: In this paper, the authors review the performance of the LIGO instruments during this epoch, the work done to characterize the detectors and their data, and the effect that transient and continuous noise artefacts have on the sensitivity of the detectors to a variety of astrophysical sources.
Abstract: In 2009–2010, the Laser Interferometer Gravitational-Wave Observatory (LIGO) operated together with international partners Virgo and GEO600 as a network to search for gravitational waves (GWs) of astrophysical origin. The sensitivity of these detectors was limited by a combination of noise sources inherent to the instrumental design and its environment, often localized in time or frequency, that couple into the GW readout. Here we review the performance of the LIGO instruments during this epoch, the work done to characterize the detectors and their data, and the effect that transient and continuous noise artefacts have on the sensitivity of LIGO to a variety of astrophysical sources.

1,266 citations


Journal ArticleDOI
TL;DR: Six new EBPs were identified in this review, and one EBP from the previous review was removed; the authors discuss implications for current practices and future research.
Abstract: The purpose of this study was to identify evidenced-based, focused intervention practices for children and youth with autism spectrum disorder. This study was an extension and elaboration of a previous evidence-based practice review reported by Odom et al. (Prev Sch Fail 54:275–282, 2010b, doi: 10.1080/10459881003785506 ). In the current study, a computer search initially yielded 29,105 articles, and the subsequent screening and evaluation process found 456 studies to meet inclusion and methodological criteria. From this set of research studies, the authors found 27 focused intervention practices that met the criteria for evidence-based practice (EBP). Six new EBPs were identified in this review, and one EBP from the previous review was removed. The authors discuss implications for current practices and future research.

1,206 citations


Journal ArticleDOI
TL;DR: The authors argue that appropriateness-based approaches to language education are implicated in the reproduction of racial normativity. Drawing on theories of language ideologies and racialization, they offer a perspective from which students classified as long-term English learners, heritage language learners, and Standard English learners can be understood to inhabit a shared racial positioning that frames their linguistic practices as deficient regardless of how closely they follow supposed rules of appropriateness.
Abstract: In this article, Nelson Flores and Jonathan Rosa critique appropriateness-based approaches to language diversity in education. Those who subscribe to these approaches conceptualize standardized linguistic practices as an objective set of linguistic forms that are appropriate for an academic setting. In contrast, Flores and Rosa highlight the raciolinguistic ideologies through which racialized bodies come to be constructed as engaging in appropriately academic linguistic practices. Drawing on theories of language ideologies and racialization, they offer a perspective from which students classified as long-term English learners, heritage language learners, and Standard English learners can be understood to inhabit a shared racial positioning that frames their linguistic practices as deficient regardless of how closely they follow supposed rules of appropriateness. The authors illustrate how appropriateness-based approaches to language education are implicated in the reproduction of racial normativity by exp...

Posted Content
TL;DR: This paper proposed bilinear models, which consists of two feature extractors whose outputs are multiplied using outer product at each location of the image and pooled to obtain an image descriptor, which can model local pairwise feature interactions in a translationally invariant manner.
Abstract: We propose bilinear models, a recognition architecture that consists of two feature extractors whose outputs are multiplied using the outer product at each location of the image and pooled to obtain an image descriptor. This architecture can model local pairwise feature interactions in a translationally invariant manner, which is particularly useful for fine-grained categorization. It also generalizes various orderless texture descriptors such as the Fisher vector, VLAD and O2P. We present experiments with bilinear models where the feature extractors are based on convolutional neural networks. The bilinear form simplifies gradient computation and allows end-to-end training of both networks using image labels only. Using networks initialized from the ImageNet dataset followed by domain-specific fine-tuning, we obtain 84.1% accuracy on the CUB-200-2011 dataset requiring only category labels at training time. We present experiments and visualizations that analyze the effects of fine-tuning and the choice of the two networks on the speed and accuracy of the models. Results show that the architecture compares favorably to the existing state of the art on a number of fine-grained datasets while being substantially simpler and easier to train. Moreover, our most accurate model is fairly efficient, running at 8 frames/sec on a NVIDIA Tesla K40 GPU. The source code for the complete system will be made available at this http URL

Journal ArticleDOI
TL;DR: In this article, the authors present a theoretical framework for the thermodynamics of information based on stochastic thermodynamics and fluctuation theorems, review some recent experimental results, and present an overview of the state of the art in the field.
Abstract: By its very nature, the second law of thermodynamics is probabilistic, in that its formulation requires a probabilistic description of the state of a system. This raises questions about the objectivity of the second law: does it depend, for example, on what we know about the system? For over a century, much effort has been devoted to incorporating information into thermodynamics and assessing the entropic and energetic costs of manipulating information. More recently, this historically theoretical pursuit has become relevant in practical situations where information is manipulated at small scales, such as in molecular and cell biology, artificial nano-devices or quantum computation. Here we give an introduction to a novel theoretical framework for the thermodynamics of information based on stochastic thermodynamics and fluctuation theorems, review some recent experimental results, and present an overview of the state of the art in the field. The task of integrating information into the framework of thermodynamics dates back to Maxwell and his infamous demon. Recent advances have made these ideas rigorous—and brought them into the laboratory.
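As a reference point for the "entropic and energetic costs" of manipulating information mentioned above (a textbook bound, not a result restated from this paper), Landauer's principle gives the minimum heat dissipated when one bit is erased at temperature T:

$$Q_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\ \mathrm{J} \quad \text{at } T = 300\ \mathrm{K}.$$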

17 Jun 2015
TL;DR: A general standardised and practical static digestion method based on physiologically relevant conditions that can be applied for various endpoints, which may be amended to accommodate further specific requirements, is proposed.
Abstract: Simulated gastro-intestinal digestion is widely employed in many fields of food and nutritional sciences, as conducting human trials is often costly, resource intensive, and ethically disputable. As a consequence, in vitro alternatives that determine endpoints such as the bioaccessibility of nutrients and non-nutrients or the digestibility of macronutrients (e.g. lipids, proteins and carbohydrates) are used for screening and building new hypotheses. Various digestion models have been proposed, often impeding the possibility to compare results across research teams. For example, a large variety of enzymes from different sources such as of porcine, rabbit or human origin have been used, differing in their activity and characterization. Differences in pH, mineral type, ionic strength and digestion time, which alter enzyme activity and other phenomena, may also considerably alter results. Other parameters such as the presence of phospholipids, individual enzymes such as gastric lipase and digestive emulsifiers vs. their mixtures (e.g. pancreatin and bile salts), and the ratio of food bolus to digestive fluids, have also been discussed at length. In the present consensus paper, within the COST Infogest network, we propose a general standardised and practical static digestion method based on physiologically relevant conditions that can be applied for various endpoints, which may be amended to accommodate further specific requirements. A frameset of parameters including the oral, gastric and small intestinal digestion are outlined and their relevance discussed in relation to available in vivo data and enzymes. This consensus paper gives a detailed protocol and line-by-line guidance, with recommendations and justifications, but also the limitations of the proposed model. This harmonised static, in vitro digestion method for food should aid the production of more comparable data in the future.

Journal ArticleDOI
TL;DR: The results demonstrate that a fine and balanced modification/design of chemical structure can make significant performance differences and that the performance of solution-processed small-molecule-based solar cells can be comparable to or even surpass that of their polymer counterparts.
Abstract: A series of acceptor-donor-acceptor simple oligomer-like small molecules based on oligothiophenes, namely, DRCN4T-DRCN9T, were designed and synthesized. Their optical, electrical, and thermal properties and photovoltaic performances were systematically investigated. Except for DRCN4T, excellent performances were obtained for DRCN5T-DRCN9T. The devices based on DRCN5T, DRCN7T, and DRCN9T with axisymmetric chemical structures exhibit much higher short-circuit current densities than those based on DRCN6T and DRCN8T with centrosymmetric chemical structures, which is attributed to their well-developed fibrillar network with a feature size less than 20 nm. The devices based on DRCN5T/PC71BM showed a notable certified power conversion efficiency (PCE) of 10.10% under AM 1.5G irradiation (100 mW cm−2) using a simple solution spin-coating fabrication process. This is the highest PCE for single-junction small-molecule-based organic photovoltaics (OPVs) reported to date. DRCN5T is a rather simpler molecule compared with all of the other high-performance molecules in OPVs to date, and this might highlight its advantage in the future possible commercialization of OPVs. These results demonstrate that a fine and balanced modification/design of chemical structure can make significant performance differences and that the performance of solution-processed small-molecule-based solar cells can be comparable to or even surpass that of their polymer counterparts.

Journal ArticleDOI
TL;DR: In this article, solution-processed small-molecule solar cells with almost 100% internal quantum efficiency and a power conversion efficiency of 9% were reported; the cells make use of a donor molecule called DRCN7T and PC71BM as an acceptor.
Abstract: Solution-processed small-molecule solar cells with almost 100% internal quantum efficiency and a power conversion efficiency of 9% are reported. The cells make use of a donor molecule called DRCN7T and use PC71BM as an acceptor.

Journal ArticleDOI
TL;DR: A survey of 119 countries showed that education is the strongest predictor of climate change awareness around the world, suggesting that improving understanding of local impacts is vital for public engagement.
Abstract: A survey of 119 countries shows that education is the strongest predictor of climate change awareness around the world. The results suggest that improving understanding of local impacts is vital for public engagement.

Journal ArticleDOI
TL;DR: In this article, the authors discuss how and why social media platforms have become powerful sites for documenting and challenging episodes of police brutality and the misrepresentation of racialized bodies in mainstream media.
Abstract: As thousands of demonstrators took to the streets of Ferguson, Missouri, to protest the fatal police shooting of unarmed African American teenager Michael Brown in the summer of 2014, news and commentary on the shooting, the protests, and the militarized response that followed circulated widely through social media networks. Through a theorization of hashtag usage, we discuss how and why social media platforms have become powerful sites for documenting and challenging episodes of police brutality and the misrepresentation of racialized bodies in mainstream media. We show how engaging in “hashtag activism” can forge a shared political temporality, and, additionally, we examine how social media platforms can provide strategic outlets for contesting and reimagining the materiality of racialized bodies. Our analysis combines approaches from linguistic anthropology and social movements research to investigate the semiotics of digital protest and to interrogate both the possibilities and the pitfalls of engaging in “hashtag ethnography.”

Posted Content
TL;DR: In this paper, diffusion convolutional neural networks (DCNNs) are proposed for graph-structured data and shown to outperform probabilistic relational models and kernel-on-graph methods at relational node classification.
Abstract: We present diffusion-convolutional neural networks (DCNNs), a new model for graph-structured data. Through the introduction of a diffusion-convolution operation, we show how diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. DCNNs have several attractive qualities, including a latent representation for graphical data that is invariant under isomorphism, as well as polynomial-time prediction and learning that can be represented as tensor operations and efficiently implemented on the GPU. Through several experiments with real structured datasets, we demonstrate that DCNNs are able to outperform probabilistic relational models and kernel-on-graph methods at relational node classification tasks.
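The diffusion-convolution operation is simple enough to sketch in a few lines. The snippet below is a hedged illustration of the idea (a dense NumPy version with an arbitrary number of hops K, not the authors' code): node features are diffused over powers of the degree-normalized transition matrix and weighted per hop and per feature.

```python
# Hedged sketch of a diffusion-convolution activation for node classification;
# K (number of hops) and all names are illustrative assumptions.
import numpy as np

def diffusion_conv(A: np.ndarray, X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """A: (N, N) adjacency, X: (N, F) node features, W: (K, F) hop/feature weights -> (N, K, F)."""
    K, F = W.shape
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)   # degree-normalized transition matrix
    Pk = np.eye(A.shape[0])
    hops = []
    for _ in range(K):
        hops.append(Pk @ X)          # features diffused over k = 0 ... K-1 hops
        Pk = Pk @ P
    Pstar_X = np.stack(hops, axis=1)                 # (N, K, F)
    return np.tanh(Pstar_X * W[None, :, :])          # element-wise weights + nonlinearity

# A downstream dense/softmax layer on the flattened (K*F) activations predicts node labels.
```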

Journal ArticleDOI
TL;DR: In this article, the authors rely on the theory of planned behavior to identify the beliefs that influence young people's pro-environmental behavior, finding that attitudes, subjective norms, and perceptions of control made independent contributions to the prediction of intentions, and that intentions together with perceived control predicted behavior.

Proceedings ArticleDOI
07 Jun 2015
TL;DR: This work proposes a new texture descriptor, FV-CNN, obtained by Fisher Vector pooling of a Convolutional Neural Network (CNN) filter bank, which substantially improves the state-of-the-art in texture, material and scene recognition.
Abstract: Research in texture recognition often concentrates on the problem of material recognition in uncluttered conditions, an assumption rarely met by applications. In this work we conduct a first study of material and describable texture attributes recognition in clutter, using a new dataset derived from the OpenSurfaces texture repository. Motivated by the challenge posed by this problem, we propose a new texture descriptor, FV-CNN, obtained by Fisher Vector pooling of a Convolutional Neural Network (CNN) filter bank. FV-CNN substantially improves the state-of-the-art in texture, material and scene recognition. Our approach achieves 79.8% accuracy on the Flickr material dataset and 81% accuracy on the MIT indoor scenes dataset, providing absolute gains of more than 10% over existing approaches. FV-CNN easily transfers across domains without requiring feature adaptation as for methods that build on the fully-connected layers of CNNs. Furthermore, FV-CNN can seamlessly incorporate multi-scale information and describe regions of arbitrary shapes and sizes. Our approach is particularly suited to localizing “stuff” categories and obtains state-of-the-art results on the MSRC segmentation dataset, as well as promising results on recognizing materials and surface attributes in clutter on the OpenSurfaces dataset.
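Fisher Vector pooling itself is generic and easy to prototype. The function below is a hedged, simplified sketch (gradients with respect to the GMM means only, a common reduction; not the released FV-CNN code): local CNN descriptors from an image are encoded against a diagonal-covariance GMM and the result is power- and L2-normalized. The GMM size and names are illustrative.

```python
# Hedged, simplified Fisher Vector encoder (means-only gradients); all names are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors: np.ndarray, gmm: GaussianMixture) -> np.ndarray:
    """descriptors: (N, D) local CNN features from one image -> (K*D,) Fisher Vector."""
    q = gmm.predict_proba(descriptors)                    # (N, K) soft assignments
    N = descriptors.shape[0]
    parts = []
    for k in range(gmm.n_components):
        diff = (descriptors - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])   # whitened residuals
        parts.append((q[:, k:k + 1] * diff).sum(axis=0) / (N * np.sqrt(gmm.weights_[k])))
    fv = np.concatenate(parts)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                # power ("signed sqrt") normalization
    return fv / (np.linalg.norm(fv) + 1e-12)              # L2 normalization

# gmm = GaussianMixture(n_components=64, covariance_type="diag").fit(training_descriptors)
```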

Journal ArticleDOI
TL;DR: In this paper, the authors present an analysis of the deep Herschel images in four major extragalactic fields Goodfellow-Herschel, CANDELS, UDS, and COSMOS.
Abstract: We present an analysis of the deepest Herschel images in four major extragalactic fields GOODS–North, GOODS–South, UDS, and COSMOS obtained within the GOODS–Herschel and CANDELS–Herschel key programs. The star formation picture provided by a total of 10 497 individual far-infrared detections is supplemented by the stacking analysis of a mass complete sample of 62 361 star-forming galaxies from the Hubble Space Telescope (HST) H band-selected catalogs of the CANDELS survey and from two deep ground-based Ks band-selected catalogs in the GOODS–North and the COSMOS-wide field to obtain one of the most accurate and unbiased understanding to date of the stellar mass growth over the cosmic history. We show, for the first time, that stacking also provides a powerful tool to determine the dispersion of a physical correlation and describe our method called “scatter stacking”, which may be easily generalized to other experiments. The combination of direct UV and far-infrared UV-reprocessed light provides a complete census on the star formation rates (SFRs), allowing us to demonstrate that galaxies at z = 4 to 0 of all stellar masses (M∗) follow a universal scaling law, the so-called main sequence of star-forming galaxies. We find a universal close-to-linear slope of the log 10(SFR)–log 10(M∗) relation, with evidence for a flattening of the main sequence at high masses (log 10(M∗/M⊙) > 10.5) that becomes less prominent with increasing redshift and almost vanishes by z ≃ 2. This flattening may be due to the parallel stellar growth of quiescent bulges in star-forming galaxies, which mostly happens over the same redshift range. Within the main sequence, we measure a nonvarying SFR dispersion of 0.3 dex: at a fixed redshift and stellar mass, about 68% of star-forming galaxies form stars at a universal rate within a factor 2. The specific SFR (sSFR = SFR/M∗) of star-forming galaxies is found to continuously increase from z = 0 to 4. Finally we discuss the implications of our findings on the cosmic SFR history and on the origin of present-day stars: more than two-thirds of present-day stars must have formed in a regime dominated by the “main sequence” mode. As a consequence we conclude that, although omnipresent in the distant Universe, galaxy mergers had little impact in shaping the global star formation history over the last 12.5 billion years.
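A quick unit conversion (arithmetic on the quoted number, not an additional result) clarifies the dispersion statement: a 1-sigma scatter of 0.3 dex in SFR corresponds to a multiplicative factor of

$$10^{0.3} \approx 2,$$

which is why roughly 68% of main-sequence galaxies at fixed mass and redshift form stars within a factor of two of the mean rate.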

Journal ArticleDOI
TL;DR: In this article, a robust measurement and analysis of the rest-frame ultraviolet (UV) luminosity function at z = 4 to 8 was presented, based on a sample with more than 1000 galaxies at z ≈ 6–8.
Abstract: We present a robust measurement and analysis of the rest-frame ultraviolet (UV) luminosity function at z = 4 to 8. We use deep Hubble Space Telescope imaging over the CANDELS/GOODS fields, the Hubble Ultra Deep Field and the Hubble Frontier Field deep parallel observations near the Abell 2744 and MACS J0416.1−2403 clusters. The combination of these surveys provides an effective volume of 0.6–1.2 × 10^6 Mpc^3 over this epoch, allowing us to perform a robust search for bright (M_UV < −21) and faint (M_UV = −18) galaxies. We select galaxies using a well-tested photometric redshift technique with careful screening of contaminants, finding a sample of 7446 galaxies at 3.5 < z < 8.5, with more than 1000 galaxies at z ≈ 6–8. We measure both a stepwise luminosity function for galaxies in our redshift samples, as well as a Schechter function, using a Markov Chain Monte Carlo analysis to measure robust uncertainties. At the faint end our UV luminosity functions agree with previous studies, yet we find a higher abundance of UV-bright galaxies at z ≥ 6. Our best-fit value of the characteristic magnitude M* is consistent with −21 at z ≥ 5, different from that inferred based on previous trends at lower redshift. At z = 8, a single power-law provides an equally good fit to the UV luminosity function, while at z = 6 and 7, an exponential cutoff at the bright end is moderately preferred. We compare our luminosity functions to semi-analytical models, and find that the lack of evolution in M* is consistent with models where the impact of dust attenuation on the bright end of the luminosity function decreases at higher redshift, though a decreasing impact of feedback may also be possible. We measure the evolution of the cosmic star-formation rate (SFR) density by integrating our observed luminosity functions to M_UV = −17, correcting for dust attenuation, and find that the SFR density declines proportionally to (1 + z)^(−4.3±0.5) at z > 4, consistent with observations at z ≥ 9. Our observed luminosity functions are consistent with a reionization history that starts at z ≳ 10, completes at z > 6, and reaches a midpoint (x_HII = 0.5) at 6.7 < z < 9.4. Finally, using a constant cumulative number density selection and an empirically derived rising star-formation history, our observations predict that the abundance of bright z = 9 galaxies is likely higher than previous constraints, though consistent with recent estimates of bright z ∼ 10 galaxies.
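For reference, the Schechter function fitted above, written in its standard absolute-magnitude form (the conventional parameterization, not an equation restated from this listing), with normalization φ*, characteristic magnitude M*, and faint-end slope α:

$$\phi(M)\,\mathrm{d}M = 0.4\ln(10)\,\phi^{*}\left[10^{0.4(M^{*}-M)}\right]^{\alpha+1}\exp\!\left[-10^{0.4(M^{*}-M)}\right]\mathrm{d}M.$$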

Journal ArticleDOI
TL;DR: The global under-5 mortality rate fell by 53% over the past 25 years, missing the MDG 4 target; five scenario-based projections of under-5 mortality from 2016 to 2030 were constructed, and national, regional, and global under-5 mortality rates up to 2030 were estimated for each scenario.

Journal ArticleDOI
TL;DR: A survey querying the community for their ranking of plant-pathogenic oomycete species based on scientific and economic importance received 263 votes from 62 scientists in 15 countries, covering a total of 33 species; the Top 10 species are presented.
Abstract: Oomycetes form a deep lineage of eukaryotic organisms that includes a large number of plant pathogens which threaten natural and managed ecosystems. We undertook a survey to query the community for their ranking of plant-pathogenic oomycete species based on scientific and economic importance. In total, we received 263 votes from 62 scientists in 15 countries for a total of 33 species. The Top 10 species and their ranking are: (1) Phytophthora infestans; (2, tied) Hyaloperonospora arabidopsidis; (2, tied) Phytophthora ramorum; (4) Phytophthora sojae; (5) Phytophthora capsici; (6) Plasmopara viticola; (7) Phytophthora cinnamomi; (8, tied) Phytophthora parasitica; (8, tied) Pythium ultimum; and (10) Albugo candida. This article provides an introduction to these 10 taxa and a snapshot of current research. We hope that the list will serve as a benchmark for future trends in oomycete research.

Journal ArticleDOI
TL;DR: In their call to lay the theory of planned behaviour (TPB) to rest, Sniehotta, Presseau, and Araujo-Soares (2014) contend that the theory has been thoroughly discredited, at least as a guide to pre-planned behaviour.
Abstract: In their call to lay the theory of planned behaviour (TPB) to rest, Sniehotta, Presseau, and Araujo-Soares (2014) contend that the theory has been thoroughly discredited, at least as a guide to pre...

Journal ArticleDOI
TL;DR: Genetic rescue is a tool that can stem biodiversity loss more than has been appreciated, provides population resilience, and will become increasingly useful if integrated with molecular advances in population genomics.
Abstract: Genetic rescue can increase the fitness of small, imperiled populations via immigration. A suite of studies from the past decade highlights the value of genetic rescue in increasing population fitness. Nonetheless, genetic rescue has not been widely applied to conserve many of the threatened populations that it could benefit. In this review, we highlight recent studies of genetic rescue and place it in the larger context of theoretical and empirical developments in evolutionary and conservation biology. We also propose directions to help shape future research on genetic rescue. Genetic rescue is a tool that can stem biodiversity loss more than has been appreciated, provides population resilience, and will become increasingly useful if integrated with molecular advances in population genomics.

Journal ArticleDOI
10 Jul 2015-Science
TL;DR: This work concludes that during recent interglacial periods, small increases in global mean temperature and just a few degrees of polar warming relative to the preindustrial period resulted in ≥6 m of GMSL rise; a precise estimate of peak GMSL during the Pliocene is not currently possible.
Abstract: BACKGROUND: Although thermal expansion of seawater and melting of mountain glaciers have dominated global mean sea level (GMSL) rise over the last century, mass loss from the Greenland and Antarctic ice sheets is expected to exceed other contributions to GMSL rise under future warming. To better constrain polar ice-sheet response to warmer temperatures, we draw on evidence from interglacial periods in the geologic record that experienced warmer polar temperatures and higher GMSLs than present. Coastal records of sea level from these previous warm periods demonstrate geographic variability because of the influence of several geophysical processes that operate across a range of magnitudes and time scales. Inferring GMSL and ice-volume changes from these reconstructions is nontrivial and generally requires the use of geophysical models. ADVANCES: Interdisciplinary studies of geologic archives have ushered in a new era of deciphering magnitudes, rates, and sources of sea-level rise. Advances in our understanding of polar ice-sheet response to warmer climates have been made through an increase in the number and geographic distribution of sea-level reconstructions, better ice-sheet constraints, and the recognition that several geophysical processes cause spatially complex patterns in sea level. In particular, accounting for glacial isostatic processes helps to decipher spatial variability in coastal sea-level records and has reconciled a number of site-specific sea-level reconstructions for warm periods that have occurred within the past several hundred thousand years. This enables us to infer that during recent interglacial periods, small increases in global mean temperature and just a few degrees of polar warming relative to the preindustrial period resulted in ≥6 m of GMSL rise.

Journal ArticleDOI
TL;DR: The finding that biochar can stimulate DIET may be an important consideration when amending soils with biochar and can help explain why biochar may enhance methane production from organic wastes under anaerobic conditions.
Abstract: Biochar, a charcoal-like product of the incomplete combustion of organic materials, is an increasingly popular soil amendment designed to improve soil fertility. We investigated the possibility that biochar could promote direct interspecies electron transfer (DIET) in a manner similar to that previously reported for granular activated carbon (GAC). Although the biochars investigated were 1000 times less conductive than GAC, they stimulated DIET in co-cultures of Geobacter metallireducens with Geobacter sulfurreducens or Methanosarcina barkeri in which ethanol was the electron donor. Cells were attached to the biochar, yet not in close contact, suggesting that electrons were likely conducted through the biochar, rather than biological electrical connections. The finding that biochar can stimulate DIET may be an important consideration when amending soils with biochar and can help explain why biochar may enhance methane production from organic wastes under anaerobic conditions.

Journal ArticleDOI
29 Jan 2015-Nature
TL;DR: A protein–DNA network between Arabidopsis thaliana transcription factors and secondary cell wall metabolic genes, with gene expression regulated by a series of feed-forward loops, is presented and used to develop and validate new hypotheses about secondary wall gene regulation under abiotic stress.
Abstract: The plant cell wall is an important factor for determining cell shape, function and response to the environment. Secondary cell walls, such as those found in xylem, are composed of cellulose, hemicelluloses and lignin and account for the bulk of plant biomass. The coordination between transcriptional regulation of synthesis for each polymer is complex and vital to cell function. A regulatory hierarchy of developmental switches has been proposed, although the full complement of regulators remains unknown. Here we present a protein-DNA network between Arabidopsis thaliana transcription factors and secondary cell wall metabolic genes with gene expression regulated by a series of feed-forward loops. This model allowed us to develop and validate new hypotheses about secondary wall gene regulation under abiotic stress. Distinct stresses are able to perturb targeted genes to potentially promote functional adaptation. These interactions will serve as a foundation for understanding the regulation of a complex, integral plant component.