
Showing papers by "Purdue University" published in 2013


Journal ArticleDOI
TL;DR: This protocol provides a workflow for genome-independent transcriptome analysis leveraging the Trinity platform and presents Trinity-supported companion utilities for downstream applications, including RSEM for transcript abundance estimation, R/Bioconductor packages for identifying differentially expressed transcripts across samples and approaches to identify protein-coding genes.
Abstract: De novo assembly of RNA-seq data enables researchers to study transcriptomes without the need for a genome sequence; this approach can be usefully applied, for instance, in research on 'non-model organisms' of ecological and evolutionary importance, cancer samples or the microbiome. In this protocol we describe the use of the Trinity platform for de novo transcriptome assembly from RNA-seq data in non-model organisms. We also present Trinity-supported companion utilities for downstream applications, including RSEM for transcript abundance estimation, R/Bioconductor packages for identifying differentially expressed transcripts across samples and approaches to identify protein-coding genes. In the procedure, we provide a workflow for genome-independent transcriptome analysis leveraging the Trinity platform. The software, documentation and demonstrations are freely available from http://trinityrnaseq.sourceforge.net. The run time of this protocol is highly dependent on the size and complexity of data to be analyzed. The example data set analyzed in the procedure detailed herein can be processed in less than 5 h.

6,369 citations


Journal ArticleDOI
15 Mar 2013-Science
TL;DR: Progress in the optics of metasurfaces is reviewed and promising applications for surface-confined planar photonics components are discussed and the studies of new, low-loss, tunable plasmonic materials—such as transparent conducting oxides and intermetallics—that can be used as building blocks for metasurfaces will complement the exploration of smart designs and advanced switching capabilities.
Abstract: Metamaterials, or engineered materials with rationally designed, subwavelength-scale building blocks, allow us to control the behavior of physical fields in optical, microwave, radio, acoustic, heat transfer, and other applications with flexibility and performance that are unattainable with naturally available materials. In turn, metasurfaces-planar, ultrathin metamaterials-extend these capabilities even further. Optical metasurfaces offer the fascinating possibility of controlling light with surface-confined, flat components. In the planar photonics concept, it is the reduced dimensionality of the optical metasurfaces that enables new physics and, therefore, leads to functionalities and applications that are distinctly different from those achievable with bulk, multilayer metamaterials. Here, we review the progress in developing optical metasurfaces that has occurred over the past few years with an eye toward the promising future directions in the field.

2,562 citations


Journal ArticleDOI
TL;DR: It is demonstrated that through a proper understanding and design of source/drain contacts and the right choice of number of MoS2 layers the excellent intrinsic properties of this 2-D material can be harvested.
Abstract: While there has been growing interest in two-dimensional (2-D) crystals other than graphene, evaluating their potential usefulness for electronic applications is still in its infancy due to the lack of a complete picture of their performance potential. The focus of this article is on contacts. We demonstrate that through a proper understanding and design of source/drain contacts and the right choice of number of MoS2 layers the excellent intrinsic properties of this 2-D material can be harvested. Using scandium contacts on 10-nm-thick exfoliated MoS2 flakes that are covered by a 15 nm Al2O3 film, high effective mobilities of 700 cm2/(V s) are achieved at room temperature. This breakthrough is largely attributed to the fact that we succeeded in eliminating the contact resistance effects that had previously limited device performance while going unrecognized. In fact, the apparent linear dependence of current on drain voltage had misled researchers to believe that a truly Ohmic contact had already been achieved, a miscon...

2,185 citations


Journal ArticleDOI
TL;DR: Evidence is provided that the rapid 32P-PtdOH response was primarily generated through DAG kinase (DGK), and a tentative model illustrating direct cold effects on phospholipid metabolism is presented.
Abstract: Phosphatidic acid (PtdOH) is emerging as an important signalling lipid in abiotic stress responses in plants. The effect of cold stress was monitored using 32P-labelled seedlings and leaf discs of Arabidopsis thaliana. Low, non-freezing temperatures were found to trigger a very rapid 32P-PtdOH increase, peaking within 2 and 5 min, respectively. In principle, PtdOH can be generated through three different pathways, i.e. i) via de novo phospholipid biosynthesis (through acylation of lyso-PtdOH), ii) via phospholipase D hydrolysis of structural phospholipids or iii) via phosphorylation of diacylglycerol (DAG) by DAG kinase (DGK). Using a differential 32P-labelling protocol and a PLD-transphosphatidylation assay, evidence is provided that the rapid 32P-PtdOH response was primarily generated through DGK. A simultaneous decrease in the levels of 32P-PtdInsP, correlating in time, temperature dependency and magnitude with the increase in 32P-PtdOH, suggested that a PtdInsP-hydrolyzing PLC generated the DAG in this reaction. Testing the T-DNA insertion lines available for the seven DGK genes revealed no clear changes in 32P-PtdOH responses, suggesting functional redundancy. Similarly, known cold-stress mutants were analyzed to investigate whether the PtdOH response acted downstream of the respective gene products. The hos1, los1 and fry1 mutants were found to exhibit normal PtdOH responses. Slight changes were found for ice1, snow1, and the overexpression line Super-ICE1; however, this was not cold-specific and likely due to pleiotropic effects. A tentative model illustrating direct cold effects on phospholipid metabolism is presented.

1,936 citations


Journal ArticleDOI
TL;DR: This review explores different material classes for plasmonic and metamaterial applications, such as conventional semiconductors, transparent conducting oxides, perovskite oxides, metal nitrides, silicides, germanides, and 2D materials such as graphene.
Abstract: Materials research plays a vital role in transforming breakthrough scientific ideas into next-generation technology. Similar to the way silicon revolutionized the microelectronics industry, the proper materials can greatly impact the field of plasmonics and metamaterials. Currently, research in plasmonics and metamaterials lacks good material building blocks in order to realize useful devices. Such devices suffer from many drawbacks arising from the undesirable properties of their material building blocks, especially metals. There are many materials, other than conventional metallic components such as gold and silver, that exhibit metallic properties and provide advantages in device performance, design flexibility, fabrication, integration, and tunability. This review explores different material classes for plasmonic and metamaterial applications, such as conventional semiconductors, transparent conducting oxides, perovskite oxides, metal nitrides, silicides, germanides, and 2D materials such as graphene. This review provides a summary of the recent developments in the search for better plasmonic materials and an outlook of further research directions.

1,836 citations


Journal ArticleDOI
TL;DR: Ni et al. as discussed by the authors presented ultra-thin plasmonic holograms that control amplitude and phase in the visible region and are just 30 nm thick, which is comparable to the light wavelength used.
Abstract: Holographic techniques provide phase and amplitude information for images of objects, but normally the hologram thickness is comparable to the light wavelength used. Ni et al. present ultra-thin plasmonic holograms that control amplitude and phase in the visible region and are just 30 nm thick.

1,243 citations


Journal ArticleDOI
TL;DR: This review describes the crystal and electronic structures that are closely related to the photoelectrochemical properties of BiVO4, and discusses the latest efforts toward addressing its limitations in order to improve the performance of BiVO4-based photoanodes.
Abstract: Harvesting energy directly from sunlight as nature accomplishes through photosynthesis is a very attractive and desirable way to solve the energy challenge. Many efforts have been made to find appropriate materials and systems that can utilize solar energy to produce chemical fuels. One of the most viable options is the construction of a photoelectrochemical cell that can reduce water to H2 or CO2 to carbon-based molecules. Bismuth vanadate (BiVO4) has recently emerged as a promising material for use as a photoanode that oxidizes water to O2 in these cells. Significant advancement in the understanding and construction of efficient BiVO4-based photoanode systems has been made within a short period of time owing to various newly developed ideas and approaches. In this review, the crystal and electronic structures that are closely related to the photoelectrochemical properties of BiVO4 are described first, and the photoelectrochemical properties and limitations of BiVO4 are examined. Subsequently, the latest efforts toward addressing these limitations in order to improve the performances of BiVO4-based photoanodes are discussed. These efforts include morphology control, formation of composite structures, composition tuning, and coupling oxygen evolution catalysts. The discussions and insights provided in this review reflect the most recent approaches and directions for general photoelectrode developments and they will be directly applicable for the understanding and improvement of other photoelectrode systems.

1,146 citations


Journal ArticleDOI
TL;DR: This review describes recent groundbreaking results in Si, Si/SiGe, and dopant-based quantum dots, and highlights the remarkable advances in Si-based quantum physics that have occurred in the past few years.
Abstract: This review describes recent groundbreaking results in Si, Si/SiGe, and dopant-based quantum dots, and it highlights the remarkable advances in Si-based quantum physics that have occurred in the past few years. This progress has been possible thanks to materials development of Si quantum devices, and the physical understanding of quantum effects in silicon. Recent critical steps include the isolation of single electrons, the observation of spin blockade, and single-shot readout of individual electron spins in both dopants and gated quantum dots in Si. Each of these results has come with physics that was not anticipated from previous work in other material systems. These advances underline the significant progress toward the realization of spin quantum bits in a material with a long spin coherence time, crucial for quantum computation and spintronics.

998 citations


Journal ArticleDOI
TL;DR: This paper proposes the use of outdoor millimeter wave communications for backhaul networking between cells and mobile access within a cell, and proposes an efficient beam alignment technique using adaptive subspace sampling and hierarchical beam codebooks.
Abstract: Recently, there has been considerable interest in new tiered network cellular architectures, which would likely use many more cell sites than found today. Two major challenges will be i) providing backhaul to all of these cells and ii) finding efficient techniques to leverage higher frequency bands for mobile access and backhaul. This paper proposes the use of outdoor millimeter wave communications for backhaul networking between cells and mobile access within a cell. To overcome the outdoor impairments found in millimeter wave propagation, this paper studies beamforming using large arrays. However, such systems will require narrow beams, increasing sensitivity to movement caused by pole sway and other environmental concerns. To overcome this, we propose an efficient beam alignment technique using adaptive subspace sampling and hierarchical beam codebooks. A wind sway analysis is presented to establish a notion of beam coherence time. This highlights a previously unexplored tradeoff between array size and wind-induced movement. Generally, it is not possible to use larger arrays without risking a corresponding performance loss from wind-induced beam misalignment. The performance of the proposed alignment technique is analyzed and compared with other search and alignment methods. The results show significant performance improvement with reduced search time.

975 citations
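The hierarchical codebook idea in the abstract can be illustrated with a toy binary-sector search: at each level, two wide beams covering the two halves of the remaining angular sector are probed, and the stronger one is kept. The gain comparison below is idealized (the target's beam index is known to the simulator, standing in for a received-power measurement), and real alignment must also cope with noise and the wind-induced sway the paper analyzes.

```python
def hierarchical_align(target, n_beams):
    """Toy hierarchical codebook search: each level probes two wide
    beams covering half of the remaining sector and keeps the stronger one."""
    lo, hi = 0, n_beams          # current candidate sector [lo, hi)
    probes = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        probes += 2              # measure both half-sector beams
        if target < mid:         # stand-in for comparing received power
            hi = mid
        else:
            lo = mid
    return lo, probes

beam, probes = hierarchical_align(target=11, n_beams=16)
print(beam, probes)   # finds the narrow beam with 2*log2(16) probes, not 16
```

The point of the sketch is the measurement count: 2·log2(N) probes instead of an exhaustive sweep over all N narrow beams.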


Journal ArticleDOI
TL;DR: In this paper, the authors focus on the biosynthesis and regulation of plant volatiles, the involvement of floral volaticles in plant reproduction as well as their contribution to plant biodiversity and applications in agriculture via crop-pollinator interactions.
Abstract: Plants synthesize an amazing diversity of volatile organic compounds (VOCs) that facilitate interactions with their environment, from attracting pollinators and seed dispersers to protecting themselves from pathogens, parasites and herbivores. Recent progress in -omics technologies resulted in the isolation of genes encoding enzymes responsible for the biosynthesis of many volatiles and contributed to our understanding of regulatory mechanisms involved in VOC formation. In this review, we largely focus on the biosynthesis and regulation of plant volatiles, the involvement of floral volatiles in plant reproduction as well as their contribution to plant biodiversity and applications in agriculture via crop-pollinator interactions. In addition, metabolic engineering approaches for both the improvement of plant defense and pollinator attraction are discussed in light of methodological constraints and ecological complications that limit the transition of crops with modified volatile profiles from research laboratories to real-world implementation.

963 citations



Proceedings ArticleDOI
01 Dec 2013
TL;DR: This paper demonstrates with some simple examples how Plug-and-Play priors can be used to mix and match a wide variety of existing denoising models with a tomographic forward model, thus greatly expanding the range of possible problem solutions.
Abstract: Model-based reconstruction is a powerful framework for solving a variety of inverse problems in imaging. In recent years, enormous progress has been made in the problem of denoising, a special case of an inverse problem where the forward model is an identity operator. Similarly, great progress has been made in improving model-based inversion when the forward model corresponds to complex physical measurements in applications such as X-ray CT, electron-microscopy, MRI, and ultrasound, to name just a few. However, combining state-of-the-art denoising algorithms (i.e., prior models) with state-of-the-art inversion methods (i.e., forward models) has been a challenge for many reasons. In this paper, we propose a flexible framework that allows state-of-the-art forward models of imaging systems to be matched with state-of-the-art priors or denoising models. This framework, which we term as Plug-and-Play priors, has the advantage that it dramatically simplifies software integration, and moreover, it allows state-of-the-art denoising methods that have no known formulation as an optimization problem to be used. We demonstrate with some simple examples how Plug-and-Play priors can be used to mix and match a wide variety of existing denoising models with a tomographic forward model, thus greatly expanding the range of possible problem solutions.
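The split the abstract describes can be sketched in a few lines of PnP-ADMM: the data-fit step inverts the forward model, and the prior step is an arbitrary plugged-in denoiser. This is a minimal toy on 1-D deblurring, with a plain moving-average filter standing in for a state-of-the-art denoiser; the forward model, penalty `rho`, and iteration count are illustrative choices, not the paper's.

```python
import numpy as np

def box_denoise(v, width=5):
    """Stand-in denoiser (moving average); any black-box denoiser fits here."""
    kernel = np.ones(width) / width
    return np.convolve(v, kernel, mode="same")

def pnp_admm(y, A, denoise, rho=1.0, iters=50):
    """Plug-and-Play ADMM: least-squares data-fit step + plugged-in prior."""
    n = A.shape[1]
    x = np.zeros(n); v = np.zeros(n); u = np.zeros(n)
    lhs = A.T @ A + rho * np.eye(n)
    Aty = A.T @ y
    for _ in range(iters):
        x = np.linalg.solve(lhs, Aty + rho * (v - u))  # forward-model inversion
        v = denoise(x + u)                             # prior as a denoiser
        u = u + x - v                                  # dual update
    return x

# Toy problem: blur a piecewise-constant signal, then reconstruct.
rng = np.random.default_rng(0)
n = 64
x_true = np.zeros(n); x_true[20:40] = 1.0
A = np.eye(n)
for k in (-1, 1):                  # simple 3-tap blur operator
    A += np.eye(n, k=k)
A /= 3.0
y = A @ x_true + 0.01 * rng.standard_normal(n)
x_hat = pnp_admm(y, A, box_denoise)
print(float(np.mean((x_hat - x_true) ** 2)))
```

Swapping `box_denoise` for any other denoiser changes nothing else in the loop, which is the software-integration advantage the abstract highlights.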

Journal ArticleDOI
Predrag Radivojac1, Wyatt T. Clark1, Tal Ronnen Oron2, Alexandra M. Schnoes3, Tobias Wittkop2, Artem Sokolov4, Artem Sokolov5, Kiley Graim5, Christopher S. Funk6, Karin Verspoor6, Asa Ben-Hur5, Gaurav Pandey7, Gaurav Pandey8, Jeffrey M. Yunes8, Ameet Talwalkar8, Susanna Repo8, Susanna Repo9, Michael L Souza8, Damiano Piovesan10, Rita Casadio10, Zheng Wang11, Jianlin Cheng11, Hai Fang, Julian Gough12, Patrik Koskinen13, Petri Törönen13, Jussi Nokso-Koivisto13, Liisa Holm13, Domenico Cozzetto14, Daniel W. A. Buchan14, Kevin Bryson14, David T. Jones14, Bhakti Limaye15, Harshal Inamdar15, Avik Datta15, Sunitha K Manjari15, Rajendra Joshi15, Meghana Chitale16, Daisuke Kihara16, Andreas Martin Lisewski17, Serkan Erdin17, Eric Venner17, Olivier Lichtarge17, Robert Rentzsch14, Haixuan Yang18, Alfonso E. Romero18, Prajwal Bhat18, Alberto Paccanaro18, Tobias Hamp19, Rebecca Kaßner19, Stefan Seemayer19, Esmeralda Vicedo19, Christian Schaefer19, Dominik Achten19, Florian Auer19, Ariane Boehm19, Tatjana Braun19, Maximilian Hecht19, Mark Heron19, Peter Hönigschmid19, Thomas A. Hopf19, Stefanie Kaufmann19, Michael Kiening19, Denis Krompass19, Cedric Landerer19, Yannick Mahlich19, Manfred Roos19, Jari Björne20, Tapio Salakoski20, Andrew Wong21, Hagit Shatkay22, Hagit Shatkay21, Fanny Gatzmann23, Ingolf Sommer23, Mark N. Wass24, Michael J.E. Sternberg24, Nives Škunca, Fran Supek, Matko Bošnjak, Panče Panov, Sašo Džeroski, Tomislav Šmuc, Yiannis A. I. Kourmpetis25, Yiannis A. I. Kourmpetis26, Aalt D. J. van Dijk25, Cajo J. F. ter Braak25, Yuanpeng Zhou27, Qingtian Gong27, Xinran Dong27, Weidong Tian27, Marco Falda28, Paolo Fontana, Enrico Lavezzo28, Barbara Di Camillo28, Stefano Toppo28, Liang Lan29, Nemanja Djuric29, Yuhong Guo29, Slobodan Vucetic29, Amos Marc Bairoch30, Amos Marc Bairoch31, Michal Linial32, Patricia C. Babbitt3, Steven E. Brenner8, Christine A. Orengo14, Burkhard Rost19, Sean D. Mooney2, Iddo Friedberg33 
TL;DR: Today's best protein function prediction algorithms substantially outperform widely used first-generation methods, with large gains on all types of targets, and there is considerable need for improvement of currently available tools.
Abstract: Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based critical assessment of protein function annotation (CAFA) experiment. Fifty-four methods representing the state of the art for protein function prediction were evaluated on a target set of 866 proteins from 11 organisms. Two findings stand out: (i) today's best protein function prediction algorithms substantially outperform widely used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is considerable need for improvement of currently available tools.
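CAFA's headline score is the maximum F-measure over prediction-score thresholds. A simplified sketch of that metric (ignoring ontology-graph propagation and CAFA's distinct evaluation modes, with made-up term identifiers and scores) might look like:

```python
import numpy as np

def f_max(predictions, truths, thresholds=np.linspace(0.01, 1.0, 100)):
    """Simplified CAFA-style Fmax: best harmonic mean of average precision
    and average recall over score thresholds."""
    best = 0.0
    for t in thresholds:
        precisions, recalls = [], []
        for pred, true in zip(predictions, truths):
            called = {term for term, s in pred.items() if s >= t}
            if called:   # precision averaged over proteins with predictions
                precisions.append(len(called & true) / len(called))
            recalls.append(len(called & true) / len(true) if true else 1.0)
        if precisions:
            p, r = np.mean(precisions), np.mean(recalls)
            if p + r > 0:
                best = max(best, 2 * p * r / (p + r))
    return best

preds = [{"GO:1": 0.9, "GO:2": 0.4}, {"GO:1": 0.8, "GO:3": 0.3}]
trues = [{"GO:1"}, {"GO:1", "GO:3"}]
print(round(f_max(preds, trues), 3))
```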

Journal ArticleDOI
TL;DR: An up-to-date synthesis of estimates of global CH4 emissions from wetlands and other freshwater aquatic ecosystems is provided, major biogeophysical controls over CH4 emitters from wetlands are summarized, new frontiers in CH4 biogeochemistry are suggested, and relationships between methanogen community structure and CH4 dynamics in situ are examined.
Abstract: Understanding the dynamics of methane (CH4) emissions is of paramount importance because CH4 has 25 times the global warming potential of carbon dioxide (CO2) and is currently the second most important anthropogenic greenhouse gas. Wetlands are the single largest natural CH4 source, with median emissions from published studies of 164 Tg yr⁻¹, which is about a third of total global emissions. We provide a perspective on important new frontiers in obtaining a better understanding of CH4 dynamics in natural systems, with a focus on wetlands. One of the most exciting recent developments in this field is the attempt to integrate the different methodologies and spatial scales of biogeochemistry, molecular microbiology, and modeling, and thus this is a major focus of this review. Our specific objectives are to provide an up-to-date synthesis of estimates of global CH4 emissions from wetlands and other freshwater aquatic ecosystems, briefly summarize major biogeophysical controls over CH4 emissions from wetlands, suggest new frontiers in CH4 biogeochemistry, examine relationships between methanogen community structure and CH4 dynamics in situ, and review the current generation of CH4 models. We highlight throughout some of the most pressing issues concerning global change and feedbacks on CH4 emissions from natural ecosystems. Major uncertainties in estimating current and future CH4 emissions from natural ecosystems include the following: (i) a number of important controls over CH4 production, consumption, and transport have not been, or are inadequately, incorporated into existing CH4 biogeochemistry models; (ii) significant errors in regional and global emission estimates are derived from large spatial-scale extrapolations from highly heterogeneous and often poorly mapped wetland complexes; and (iii) the limited number of observations of CH4 fluxes and their associated environmental variables loosely constrains the parameterization of process-based biogeochemistry models.
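The two headline numbers in the abstract combine into a quick CO2-equivalence estimate, assuming the 100-year warming potential of 25 and the median wetland flux quoted there:

```python
# CO2-equivalent of the median wetland CH4 source quoted in the abstract.
GWP_CH4 = 25           # 100-year global warming potential of CH4 vs CO2
wetland_flux_tg = 164  # median wetland CH4 emissions, Tg CH4 per year

co2_eq_tg = wetland_flux_tg * GWP_CH4
print(co2_eq_tg)       # Tg CO2-equivalent per year
```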

Journal ArticleDOI
TL;DR: An overview of the background, techniques, systems, and research areas for offloading computation is provided, and directions for future research are described.
Abstract: Mobile systems have limited resources, such as battery life, network bandwidth, storage capacity, and processor performance. These restrictions may be alleviated by computation offloading: sending heavy computation to resourceful servers and receiving the results from these servers. Many issues related to offloading have been investigated in the past decade. This survey paper provides an overview of the background, techniques, systems, and research areas for offloading computation. We also describe directions for future research.
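The offloading trade-off the survey describes is often cast as a simple energy comparison: offload when transmitting the input data and idling during remote execution costs less battery energy than computing locally. This is a toy sketch with entirely hypothetical power, speed, and bandwidth figures, not a model from the survey itself.

```python
def should_offload(cycles, data_bytes, mips_local, server_speedup,
                   p_compute, p_idle, p_tx, bandwidth):
    """Toy energy model for computation offloading (all parameters
    hypothetical): compare local-compute energy against the energy to
    transmit the input plus idle while the server computes."""
    t_local = cycles / mips_local          # seconds to compute locally
    e_local = p_compute * t_local          # joules, local execution
    t_tx = data_bytes / bandwidth          # seconds to ship the input
    t_remote = t_local / server_speedup    # server is faster
    e_offload = p_tx * t_tx + p_idle * t_remote
    return e_offload < e_local

# Heavy computation, small input: offloading wins.
print(should_offload(cycles=1e9, data_bytes=1e4, mips_local=1e8,
                     server_speedup=10, p_compute=0.9, p_idle=0.3,
                     p_tx=1.3, bandwidth=1e6))
```

The qualitative lesson matches the survey: offloading pays off for computation-heavy, data-light tasks, and reverses when the input is large relative to the work.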

Posted Content
TL;DR: In this article, the reliability, validity, and sensitivity to change of life satisfaction measures are reviewed, showing that the scales are stable under unchanging conditions, but are sensitive to changes in circumstances in people's lives.
Abstract: National accounts of subjective well-being are being considered and adopted by nations. In order to be useful for policy deliberations, the measures of life satisfaction must be psychometrically sound. The reliability, validity, and sensitivity to change of life satisfaction measures are reviewed. The scales are stable under unchanging conditions, but are sensitive to changes in circumstances in people’s lives. Several types of data indicate that the scales validly reflect the quality of respondents’ lives: 1. Differences between nations in life satisfaction associated with differences in objective conditions, 2. Differences between groups who live in different circumstances, 3. Correlations with nonself-report measures of life satisfaction, 4. Genetic and physiological associations with life satisfaction, 5. Systematic patterns of change in the scales before, during, and after significant life events, and 6. Prediction by life satisfaction scores of future behaviors such as suicide. The life satisfaction scales can be influenced by factors such as question order, current mood, and mode of presentation, but in most cases these can be controlled. Our model of life satisfaction judgments points to the importance of attention, values, standards, and top-down effects. Although the scales are useful in research on individual well-being, there are policy questions that need more analysis and research, such as which types of subjective well-being measures are most relevant to which types of policies, how standards influence scores, and how best to associate the scores with current policy deliberations.

Journal ArticleDOI
TL;DR: In this article, an integrated framework based on telecoupling, an umbrella concept that refers to socioeconomic and environmental interactions over distances, is proposed to understand and integrate various distant interactions better.
Abstract: Interactions between distant places are increasingly widespread and influential, often leading to unexpected outcomes with profound implications for sustainability. Numerous sustainability studies have been conducted within a particular place with little attention to the impacts of distant interactions on sustainability in multiple places. Although distant forces have been studied, they are usually treated as exogenous variables and feedbacks have rarely been considered. To understand and integrate various distant interactions better, we propose an integrated framework based on telecoupling, an umbrella concept that refers to socioeconomic and environmental interactions over distances. The concept of telecoupling is a logical extension of research on coupled human and natural systems, in which interactions occur within particular geographic locations. The telecoupling framework contains five major interrelated components, i.e., coupled human and natural systems, flows, agents, causes, and effects. We illustrate the framework using two examples of distant interactions associated with trade of agricultural commodities and invasive species, highlight the implications of the framework, and discuss research needs and approaches to move research on telecouplings forward. The framework can help to analyze system components and their interrelationships, identify research gaps, detect hidden costs and untapped benefits, provide a useful means to incorporate feedbacks as well as trade-offs and synergies across multiple systems (sending, receiving, and spillover systems), and improve the understanding of distant interactions and the effectiveness of policies for socioeconomic and environmental sustainability from local to global levels.

Journal ArticleDOI
24 Apr 2013-Animal
TL;DR: Further research is still needed to improve the knowledge of basic mechanisms associated to the negative effects of heat stress in poultry, as well as to develop effective interventions to deal with heat stress conditions.
Abstract: Understanding and controlling environmental conditions is crucial to successful poultry production and welfare. Heat stress is one of the most important environmental stressors challenging poultry production worldwide. The detrimental effects of heat stress on broilers and laying hens range from reduced growth and egg production to decreased poultry and egg quality and safety. Moreover, the negative impact of heat stress on poultry welfare has recently attracted increasing public awareness and concern. Much information has been published on the effects of heat stress on productivity and immune response in poultry. However, our knowledge of basic mechanisms associated to the reported effects, as well as related to poultry behavior and welfare under heat stress conditions is in fact scarce. Intervention strategies to deal with heat stress conditions have been the focus of many published studies. Nevertheless, effectiveness of most of the interventions has been variable or inconsistent. This review focuses on the scientific evidence available on the importance and impact of heat stress in poultry production, with emphasis on broilers and laying hens.

Proceedings Article
01 Jun 2013
TL;DR: In this paper, the authors review some of the recent developments in the field of hyperbolic dispersion of metamaterials and their applications in a variety of phenomena, from spontaneous emission to light propagation and scattering.
Abstract: Metamaterials with hyperbolic dispersion (where two eigenvalues of the dielectric permittivity tensor have opposite signs) exhibit a broad bandwidth singularity in the photonic density of states, with resulting manifestations in a variety of phenomena, from spontaneous emission to light propagation and scattering. In this tutorial, I will review some of the recent developments in this field.
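The defining condition in the abstract (two eigenvalues of the dielectric permittivity tensor with opposite signs) is easy to encode. A minimal classifier for a real, lossless permittivity tensor, with illustrative example values:

```python
import numpy as np

def classify_dispersion(eps):
    """Classify a real, lossless permittivity tensor: the dispersion is
    hyperbolic when the principal permittivities differ in sign."""
    eig = np.linalg.eigvalsh(np.asarray(eps, dtype=float))
    if np.all(eig > 0):
        return "elliptic (ordinary dielectric)"
    if np.all(eig < 0):
        return "metallic (no propagating waves)"
    return "hyperbolic"

# Uniaxial example: metallic response along z, dielectric in-plane.
eps_hmm = np.diag([2.0, 2.0, -3.0])
print(classify_dispersion(eps_hmm))
```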

Journal ArticleDOI
B. S. Acharya1, Marcos Daniel Actis2, T. Aghajani3, G. Agnetta4  +979 moreInstitutions (122)
TL;DR: The Cherenkov Telescope Array (CTA) as discussed by the authors is a very high-energy (VHE) gamma-ray observatory backed by an international collaboration of more than 1000 members from 27 countries in Europe, Asia, Africa, and North and South America.

Journal ArticleDOI
TL;DR: GPR41- and GPR43-deficient mice had reduced inflammatory responses after administration of ethanol or TNBS compared with control mice, and had a slower immune response against C. rodentium infection, clearing the bacteria more slowly.

Journal ArticleDOI
S. Schael1, R. Barate2, R. Brunelière2, D. Buskulic2  +1672 moreInstitutions (143)
TL;DR: In this paper, the results of the four LEP experiments were combined to determine fundamental properties of the W boson and the electroweak theory, including the branching fraction of W and the trilinear gauge-boson self-couplings.

Journal ArticleDOI
08 Feb 2013-Science
TL;DR: In this article, high-resolution gravity data obtained from the dual Gravity Recovery and Interior Laboratory (GRAIL) spacecraft show that the bulk density of the Moon's highlands crust is 2550 kilograms per cubic meter, substantially lower than generally assumed.
Abstract: High-resolution gravity data obtained from the dual Gravity Recovery and Interior Laboratory (GRAIL) spacecraft show that the bulk density of the Moon's highlands crust is 2550 kilograms per cubic meter, substantially lower than generally assumed. When combined with remote sensing and sample data, this density implies an average crustal porosity of 12% to depths of at least a few kilometers. Lateral variations in crustal porosity correlate with the largest impact basins, whereas lateral variations in crustal density correlate with crustal composition. The low-bulk crustal density allows construction of a global crustal thickness model that satisfies the Apollo seismic constraints, and with an average crustal thickness between 34 and 43 kilometers, the bulk refractory element composition of the Moon is not required to be enriched with respect to that of Earth.
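The quoted 12% porosity follows from the bulk density and a zero-porosity grain density. The grain density of 2900 kg/m³ below is an assumption for illustration; the paper constrains it from remote-sensing and sample data.

```python
rho_bulk = 2550.0    # GRAIL bulk density of the highlands crust, kg/m^3
rho_grain = 2900.0   # assumed grain (zero-porosity) density, kg/m^3

# Porosity = missing mass fraction relative to pore-free rock.
porosity = 1.0 - rho_bulk / rho_grain
print(f"{porosity:.0%}")
```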

Journal ArticleDOI
TL;DR: In this article, a detailed description is given of the analysis used by the CMS Collaboration in the search for the standard model Higgs boson in pp collisions at the LHC, which led to the observation of a new boson.
Abstract: A detailed description is reported of the analysis used by the CMS Collaboration in the search for the standard model Higgs boson in pp collisions at the LHC, which led to the observation of a new boson. The data sample corresponds to integrated luminosities up to 5.1 inverse femtobarns at sqrt(s) = 7 TeV, and up to 5.3 inverse femtobarns at sqrt(s) = 8 TeV. The results for five Higgs boson decay modes gamma gamma, ZZ, WW, tau tau, and bb, which show a combined local significance of 5 standard deviations near 125 GeV, are reviewed. A fit to the invariant mass of the two high-resolution channels, gamma gamma and ZZ to four leptons, gives a mass estimate of 125.3 +/- 0.4 (stat) +/- 0.5 (syst) GeV. The measurements are interpreted in the context of the standard model Lagrangian for the scalar Higgs field interacting with fermions and vector bosons. The measured values of the corresponding couplings are compared to the standard model predictions. The hypothesis of custodial symmetry is tested through the measurement of the ratio of the couplings to the W and Z bosons. All the results are consistent, within their uncertainties, with the expectations for a standard model Higgs boson.
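When a single total uncertainty is wanted for the quoted mass, the statistical and systematic parts are conventionally added in quadrature, under the assumption that they are independent:

```python
import math

m_h = 125.3                     # GeV, from the high-resolution channels
stat, syst = 0.4, 0.5           # GeV, as quoted in the abstract
total = math.hypot(stat, syst)  # quadrature sum, assuming independence
print(f"{m_h} +/- {total:.2f} GeV")
```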


Journal ArticleDOI
TL;DR: This paper proposes logic complexity reduction at the transistor level as an alternative approach to take advantage of the relaxation of numerical accuracy, and demonstrates the utility of these approximate adders in two digital signal processing architectures with specific quality constraints.
Abstract: Low power is an imperative requirement for portable multimedia devices employing various signal processing algorithms and architectures. In most multimedia applications, human beings can gather useful information from slightly erroneous outputs. Therefore, we do not need to produce exactly correct numerical outputs. Previous research in this context exploits error resiliency primarily through voltage overscaling, utilizing algorithmic and architectural techniques to mitigate the resulting errors. In this paper, we propose logic complexity reduction at the transistor level as an alternative approach to take advantage of the relaxation of numerical accuracy. We demonstrate this concept by proposing various imprecise or approximate full adder cells with reduced complexity at the transistor level, and utilize them to design approximate multi-bit adders. In addition to the inherent reduction in switched capacitance, our techniques result in significantly shorter critical paths, enabling voltage scaling. We design architectures for video and image compression algorithms using the proposed approximate arithmetic units and evaluate them to demonstrate the efficacy of our approach. We also derive simple mathematical models for error and power consumption of these approximate adders. Furthermore, we demonstrate the utility of these approximate adders in two digital signal processing architectures (discrete cosine transform and finite impulse response filter) with specific quality constraints. Simulation results indicate up to 69% power savings using the proposed approximate adders, when compared to existing implementations using accurate adders.
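The accuracy-for-complexity trade-off described above can be modeled in software with a multi-bit adder whose low-order bits use a cheap, carry-free approximation while the high-order bits are computed exactly. This is an illustrative OR-based scheme, not the specific transistor-level full-adder cells proposed in the paper:

```python
def approx_add(a: int, b: int, k: int, width: int = 8) -> int:
    """Add two unsigned ints, approximating the k low-order bits.

    Low bits: bitwise OR (no carry logic); high bits: exact addition
    with the carry into the upper section forced to zero.
    """
    mask = (1 << k) - 1
    low = (a | b) & mask               # cheap, carry-free approximation
    high = ((a >> k) + (b >> k)) << k  # exact addition of the upper bits
    return (high | low) & ((1 << width) - 1)

def mean_abs_error(k: int) -> float:
    """Mean absolute error over all 8-bit operand pairs."""
    total = 0
    for a in range(256):
        for b in range(256):
            exact = (a + b) & 0xFF
            total += abs(approx_add(a, b, k) - exact)
    return total / (256 * 256)

for k in (0, 2, 4):
    print(f"k={k}: mean |error| = {mean_abs_error(k):.3f}")
```

Increasing k trades larger numerical error for a shorter carry chain, mirroring the paper's observation that shorter critical paths enable voltage scaling on top of the reduced switched capacitance.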

Journal ArticleDOI
TL;DR: Ni et al. developed plasmonic metalenses that focus a beam of visible light into a spot only slightly larger than the wavelength of operation, achieving a focal length of 2.5 μm at a wavelength of 676 nm.
Abstract: Scientists in the USA have developed powerful ultrathin planar lenses made from 30-nm-thick perforated gold films. These plasmonic metalenses, created by Xingjie Ni and co-workers at Purdue University, focus a beam of visible light into a spot measuring only slightly larger than the wavelength of operation. The devices rely on a concentric pattern of Babinet-inverted nano-antennas (nanovoids) in a metal film that manipulate the phase of the incident light. For example, a 4-μm-diameter lens provides a focal length of 2.5 μm at a wavelength of 676 nm. The lenses are designed to work at two orthogonal linear polarizations and in future could be suitable for use as miniature couplers or light concentrators in on-chip optical devices.
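The phase profile such a flat lens must impart can be sketched from the stated geometry using the textbook focusing condition φ(r) = (2π/λ)(√(r² + f²) − f), so that light from every radius arrives at the focus in phase. The formula is a standard metalens design relation, not a detail given in the abstract:

```python
import math

wavelength = 676e-9  # operating wavelength, m
f = 2.5e-6           # target focal length, m
diameter = 4e-6      # lens diameter, m

def required_phase(r: float) -> float:
    """Phase delay (radians) a point at radius r must impart so that
    all optical paths arrive at the focal point in phase."""
    return (2 * math.pi / wavelength) * (math.sqrt(r**2 + f**2) - f)

# Phase (modulo 2π) at a few radii across the 4-μm aperture
for r_um in (0.0, 0.5, 1.0, 1.5, 2.0):
    phi = required_phase(r_um * 1e-6) % (2 * math.pi)
    print(f"r = {r_um:.1f} µm -> phase = {phi:.2f} rad (mod 2π)")
```

In the reported devices, each nano-antenna's geometry and orientation would be chosen so its scattered phase approximates this profile at its radius.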

Journal ArticleDOI
TL;DR: The results explain how TG synthesis is coupled with LD growth and identify two distinct LD subpopulations based on their capacity for localized TG synthesis.

Journal ArticleDOI
TL;DR: The goal of this paper is to review both the understanding of the field and the support tools that exist for the purpose, and identify the trends and possible directions research can evolve in the future.
Abstract: Product design is a highly involved, often ill-defined, complex and iterative process, and the needs and specifications of the required artifact get more refined only as the design process moves toward its goal. An effective computer support tool that helps the designer make better-informed decisions requires efficient knowledge representation schemes. In today's world, there is a virtual explosion in the amount of raw data available to the designer, and knowledge representation is critical in order to sift through this data and make sense of it. In addition, the need to stay competitive has shrunk product development time through the use of simultaneous and collaborative design processes, which depend on effective transfer of knowledge between teams. Finally, the awareness that decisions made early in the design process have a higher impact in terms of energy, cost, and sustainability has resulted in the need to project knowledge typically required in the later stages of design to the earlier stages. Research in design rationale systems, product families, systems engineering, and ontology engineering has sought to capture knowledge from earlier product design decisions, from the breakdown of product functions and associated physical features, and from customer requirements and feedback reports. Virtual reality (VR) systems and multidisciplinary modeling have enabled the simulation of scenarios in the manufacture, assembly, and use of the product. This has helped capture vital knowledge from these stages of the product life and use it in design validation and testing. While there have been considerable and significant developments in knowledge capture and representation in product design, it is useful to periodically review our position in the area, study the evolution of research in product design, and, from past and current trends, try to foresee future developments.
The goal of this paper is thus to review both our understanding of the field and the support tools that exist for the purpose, and identify the trends and possible directions research can evolve in the future.

Journal ArticleDOI
TL;DR: The ACR MR Safe Practices Guidelines established de facto industry standards for safe and responsible practices in clinical and research MR environments, and as the MR industry changes, the document is reviewed, modified, and updated.
Abstract: Because there are many potential risks in the MR environment, and because of reports of adverse incidents involving patients, equipment, and personnel, the need for a guidance document on MR safe practices emerged. Initially published in 2002, the ACR MR Safe Practices Guidelines established de facto industry standards for safe and responsible practices in clinical and research MR environments. As the MR industry changes, the document is reviewed, modified, and updated. The most recent version will reflect these changes.