
Showing papers from Michigan Technological University published in 2012


Posted Content
TL;DR: The Berlin Numeracy Test as discussed by the authors is a psychometrically sound instrument that quickly assesses statistical numeracy and risk literacy and has been shown to be the strongest predictor of comprehension of everyday risks (e.g., evaluating claims about products and treatments; interpreting forecasts).
Abstract: We introduce the Berlin Numeracy Test, a new psychometrically sound instrument that quickly assesses statistical numeracy and risk literacy. We present 21 studies (n=5336) showing robust psychometric discriminability across 15 countries (e.g., Germany, Pakistan, Japan, USA) and diverse samples (e.g., medical professionals, general populations, Mechanical Turk web panels). Analyses demonstrate desirable patterns of convergent validity (e.g., numeracy, general cognitive abilities), discriminant validity (e.g., personality, motivation), and criterion validity (e.g., numerical and nonnumerical questions about risk). The Berlin Numeracy Test was found to be the strongest predictor of comprehension of everyday risks (e.g., evaluating claims about products and treatments; interpreting forecasts), doubling the predictive power of other numeracy instruments and accounting for unique variance beyond other cognitive tests (e.g., cognitive reflection, working memory, intelligence). The Berlin Numeracy Test typically takes about three minutes to complete and is available in multiple languages and formats, including a computer adaptive test that automatically scores and reports data to researchers (www.riskliteracy.org). The online forum also provides interactive content for public outreach and education, and offers a recommendation system for test format selection. Discussion centers on construct validity of numeracy for risk literacy, underlying cognitive mechanisms, and applications in adaptive decision support.

582 citations


Journal ArticleDOI
14 Sep 2012-Science
TL;DR: The open-source paradigm is now enabling creation of open-source scientific hardware by combining three-dimensional (3D) printing with open-source microcontrollers running on FOSS.
Abstract: Most experimental research projects are executed with a combination of purchased hardware equipment, which may be modified in the laboratory, and custom single-built equipment fabricated in-house. However, the computer software that helps design and execute experiments and analyze data has an additional source: It can also be free and open-source software (FOSS) (1). FOSS has the advantage that the code is openly available for modification and is also often free of charge. In the past, customizing software has been much easier than custom-building equipment, which often can be quite costly because fabrication requires the skills of machinists, glassblowers, technicians, or outside suppliers. However, the open-source paradigm is now enabling creation of open-source scientific hardware by combining three-dimensional (3D) printing with open-source microcontrollers running on FOSS. These developments are illustrated below by several examples of equipment fabrication that can better meet particular specifications at substantially lower overall costs.

471 citations


Journal ArticleDOI
TL;DR: The first complete structural, vibrational and electronic characterization of the isostructural UiO-67 material is reported, obtained using the longer 4,4'-biphenyl-dicarboxylate (BPDC) linker by combining laboratory XRPD, Zr K-edge EXAFS, TGA, FTIR, and UV-Vis studies.
Abstract: The recently discovered UiO-66/67/68 class of isostructural metal-organic frameworks (MOFs) [J. H. Cavka et al., J. Am. Chem. Soc., 2008, 130, 13850] has attracted great interest because of its remarkable stability at high temperatures, high pressures and in the presence of different solvents, acids and bases [L. Valenzano et al., Chem. Mater., 2011, 23, 1700]. UiO-66 is obtained by connecting Zr6O4(OH)4 inorganic cornerstones with 1,4-benzene-dicarboxylate (BDC) as the linker, resulting in a cubic MOF which has already been successfully reproduced in several laboratories. Here we report the first complete structural, vibrational and electronic characterization of the isostructural UiO-67 material, obtained using the longer 4,4'-biphenyl-dicarboxylate (BPDC) linker, by combining laboratory XRPD, Zr K-edge EXAFS, TGA, FTIR, and UV-Vis studies. Comparison between experiment and periodic calculations performed at the B3LYP level of theory allows a full understanding of the structural, vibrational and electronic properties of the material. Both materials have been tested for molecular hydrogen storage at high pressures and at liquid nitrogen temperature. In this regard, the use of a longer ligand has a double benefit: (i) it reduces the density of the material and (ii) it increases the Langmuir surface area from 1281 to 2483 m2 g−1 and the micropore volume from 0.43 to 0.85 cm3 g−1. As a consequence, the H2 uptake at 38 bar and 77 K increases from 2.4 mass% for UiO-66 up to 4.6 mass% for the new UiO-67 material. This value is among the highest reported so far but is lower than those reported for MIL-101, IRMOF-20 and MOF-177 under similar pressure and temperature conditions (6.1, 6.2 and 7.0 mass%, respectively) [A. G. Wong-Foy et al., J. Am. Chem. Soc., 2006, 128, 3494; M. Dinca and J. R. Long, Angew. Chem., Int. Ed., 2008, 47, 6766]. Nevertheless, the remarkable chemical and thermal stability of UiO-67 and the absence of Cr in its structure make this material competitive.

385 citations


Journal ArticleDOI
TL;DR: In black and white: The hydrogenation of TiO2 can extend its optical absorption into the visible and infrared region and change its color from white to black.
Abstract: In black and white: The hydrogenation of TiO2 can extend its optical absorption into the visible and infrared region and change its color from white to black. Furthermore, the hydrogenated black TiO2 exhibits excellent photocatalytic activity for the splitting of water to yield H2.

382 citations


Journal ArticleDOI
TL;DR: In this article, the fabrication and performance of graphene film counter electrodes for dye-sensitized solar cells are discussed, and a brief outlook is provided on the future development of graphene-based materials as prospective counter electrodes for DSSCs.
Abstract: The dye-sensitized solar cell (DSSC) plays a leading role in third generation photovoltaic devices. Platinum-loaded conducting glass has been widely exploited as the standard counter electrode (CE) for DSSCs. However, the high cost and the rarity of platinum limit its practical application in DSSCs. This has prompted great interest in exploring Pt-free CEs for DSSCs. Very recently, graphene, which is an atomic planar sheet of hexagonally arrayed sp2 carbon atoms, has been demonstrated to be a promising CE material for DSSCs due to its excellent conductivity and high electrocatalytic activity. This article provides a mini review of graphene-based CEs for DSSCs. Firstly, the fabrication and performance of graphene film CEs in DSSCs are discussed. Secondly, DSSC counter electrodes made from graphene-based composite materials are evaluated. Finally, a brief outlook is provided on the future development of graphene-based materials as prospective counter electrodes for DSSCs.

370 citations


Journal ArticleDOI
TL;DR: It is shown that the expression of a short tandem target mimic (STTM), which is composed of two short sequences mimicking small RNA target sites, separated by a linker of an empirically determined optimal size, leads to the degradation of targeted small RNAs by small RNA degrading nucleases.
Abstract: MicroRNAs (miRNAs) and other endogenous small RNAs act as sequence-specific regulators of the genome, transcriptome, and proteome in eukaryotes. The interrogation of small RNA functions requires an effective, widely applicable method to specifically block small RNA function. Here, we report the development of a highly effective technology that targets specific endogenous miRNAs or small interfering RNAs for destruction in Arabidopsis thaliana. We show that the expression of a short tandem target mimic (STTM), which is composed of two short sequences mimicking small RNA target sites, separated by a linker of an empirically determined optimal size, leads to the degradation of targeted small RNAs by small RNA degrading nucleases. The efficacy of the technology was demonstrated by the strong and specific developmental defects triggered by STTMs targeting three miRNAs and an endogenous siRNA. In summary, we developed an effective approach for the destruction of endogenous small RNAs, thereby providing a powerful tool for functional genomics of small RNA molecules in plants and potentially animals.

322 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe the utility of system dynamics for holistic water resources planning and management by illustrating the fundamentals of the approach and provide an overview of Causal Loop and Stock and Flow Diagrams, reference modes of dynamic behavior, and system archetypes.
Abstract: Out-of-context analysis of water resources systems can result in unsustainable management strategies. To address this problem, systems thinking seeks to understand interactions among the subsystems driving a system’s overall behavior. System dynamics, a method for operationalizing systems thinking, facilitates holistic understanding of water resources systems, and strategic decision making. The approach also facilitates participatory modeling, and analysis of the system’s behavioral trends, essential to sustainable management. The field of water resources has not utilized the full capacity of system dynamics in the thinking phase of integrated water resources studies. We advocate that the thinking phase of modeling applications is critically important, and that system dynamics offers unique qualitative tools that improve understanding of complex problems. Thus, this paper describes the utility of system dynamics for holistic water resources planning and management by illustrating the fundamentals of the approach. Using tangible examples, we provide an overview of Causal Loop and Stock and Flow Diagrams, reference modes of dynamic behavior, and system archetypes to demonstrate the use of these qualitative tools for holistic conceptualization of water resources problems. Finally, we present a summary of the potential benefits as well as caveats of qualitative system dynamics for water resources decision making.
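The stock-and-flow building block described above can be made concrete with a minimal simulation sketch. This is an illustration only: the reservoir, inflow, and demand values are invented and not taken from the paper.

```python
# Minimal stock-and-flow sketch: a reservoir "stock" integrated by Euler
# stepping, with a constant inflow and a demand-driven outflow. The clipping
# of outflow to available storage is the simple nonlinearity that produces
# the characteristic behaviour modes discussed in system dynamics.

def simulate_reservoir(stock0, inflow, demand, dt=1.0, steps=120):
    """Integrate d(stock)/dt = inflow - outflow, where outflow cannot
    exceed what the reservoir can actually supply in one step."""
    stock = stock0
    history = [stock]
    for _ in range(steps):
        outflow = min(demand, stock / dt)  # cannot release more than is stored
        stock += (inflow - outflow) * dt
        history.append(stock)
    return history

# Demand above inflow drains the stock toward a low equilibrium (a
# "limits to growth" style mode); demand below inflow lets it accumulate.
draining = simulate_reservoir(stock0=100.0, inflow=2.0, demand=3.0)
filling = simulate_reservoir(stock0=100.0, inflow=3.0, demand=2.0)
```

Plotting `history` against time gives exactly the "reference mode of dynamic behavior" the abstract refers to, here for the simplest possible one-stock system.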

272 citations


Journal ArticleDOI
TL;DR: A new compressed sensing framework is proposed for extracting useful second-order statistics of wideband random signals from digital samples taken at sub-Nyquist rates, exploiting the unique sparsity property of the two-dimensional cyclic spectra of communications signals.
Abstract: For cognitive radio networks, efficient and robust spectrum sensing is a crucial enabling step for dynamic spectrum access. Cognitive radios need to not only rapidly identify spectrum opportunities over very wide bandwidth, but also make reliable decisions in noise-uncertain environments. Cyclic spectrum sensing techniques work well under noise uncertainty, but require high-rate sampling which is very costly in the wideband regime. This paper develops robust and compressive wideband spectrum sensing techniques by exploiting the unique sparsity property of the two-dimensional cyclic spectra of communications signals. To do so, a new compressed sensing framework is proposed for extracting useful second-order statistics of wideband random signals from digital samples taken at sub-Nyquist rates. The time-varying cross-correlation functions of these compressive samples are formulated to reveal the cyclic spectrum, which is then used to simultaneously detect multiple signal sources over the entire wide band. Because the proposed wideband cyclic spectrum estimator utilizes all the cross-correlation terms of compressive samples to extract second-order statistics, it is also able to recover the power spectra of stationary signals as a special case, permitting lossless rate compression even for non-sparse signals. Simulation results demonstrate the robustness of the proposed spectrum sensing algorithms against both sampling rate reduction and noise uncertainty in wireless networks.

249 citations


Journal ArticleDOI
TL;DR: In this paper, the authors evaluate the use of discrete-return airborne LiDAR data for quantifying biomass change and carbon flux from repeat field and LiDAR surveys, and conclude that repeat LiDAR surveys are useful for accurately quantifying high-resolution, spatially explicit biomass and carbon dynamics in conifer forests.

248 citations


Journal ArticleDOI
P. Abreu, Marco Aglietta, Eun-Joo Ahn, Ivone F. M. Albuquerque, +518 more (73 institutions)
TL;DR: A measurement of the proton-air cross section for particle production at the center-of-mass energy per nucleon of 57 TeV is reported, derived from the distribution of the depths of shower maxima observed with the Pierre Auger Observatory.
Abstract: We report a measurement of the proton-air cross section for particle production at the center-of-mass energy per nucleon of 57 TeV. This is derived from the distribution of the depths of shower maxima observed with the Pierre Auger Observatory; systematic uncertainties are studied in detail. Analyzing the tail of the distribution of the shower maxima, a proton-air cross section of [505 ± 22 (stat) −36/+28 (syst)] mb is found.
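For intuition, the quoted cross section can be converted to an interaction length in the atmosphere via λ = ⟨m_air⟩/σ. This is a back-of-envelope sketch, not part of the Auger analysis; the mean target mass of air, ⟨m_air⟩ ≈ 24160 mb g cm⁻², is an assumed standard value.

```python
# Convert a proton-air cross section (in mb) to the corresponding
# interaction length in the atmosphere (in g/cm^2): lambda = <m_air> / sigma.

M_AIR_MB_G_CM2 = 24160.0   # mean target mass of air, mb * g / cm^2 (assumed value)

def interaction_length(sigma_mb):
    """Interaction length in g/cm^2 for a cross section given in mb."""
    return M_AIR_MB_G_CM2 / sigma_mb

sigma = 505.0                     # mb, central value reported in the abstract
lam = interaction_length(sigma)   # roughly 48 g/cm^2 of traversed atmosphere
```

For scale, the vertical atmospheric depth is about 1000 g/cm², so a 57 TeV proton typically interacts high in the atmosphere after traversing only a few percent of it.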

246 citations


Posted Content
TL;DR: In this paper, the technical, social, and economic benefits and limitations of photovoltaic technologies to provide electricity in both off-grid and on-grid applications are critically analyzed in the context of a shift away from carbon-based energy sources.
Abstract: As both population and energy use per capita increase, modern society is approaching physical limits to its continued fossil fuel consumption. The immediate limits are set by the planet's ability to adapt to a changing atmospheric chemical composition, not the availability of resources. In order for a future society to be sustainable while operating at or above our current standard of living, a shift away from carbon-based energy sources must occur. An overview of the current state of active solar (photovoltaic) energy technology is provided here to outline a partial solution for the environmental problems caused by accelerating global energy expenditure. The technical, social, and economic benefits and limitations of photovoltaic technologies to provide electricity in both off-grid and on-grid applications are critically analyzed in the context of this shift in energy sources. It is shown that photovoltaic electrical production is a technologically feasible, economically viable, environmentally benign, sustainable, and socially equitable solution to society's future energy requirements.

Journal ArticleDOI
TL;DR: DietCam, a mobile phone based system, helps assess food intake with few human interventions: users need only take three images or a short video of the meal, and the system does the rest.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that new integrated observations combining remote sensing studies of volcanic ash clouds with field measurement and sampling, together with lab experiments, are required to fill current gaps in knowledge surrounding the theory of ash aggregate formation.
Abstract: Most volcanic ash particles with diameters <63 μm settle from eruption clouds as particle aggregates that cumulatively have larger sizes, lower densities, and higher terminal fall velocities than individual constituent particles. Particle aggregation reduces the atmospheric residence time of fine ash, which results in a proportional increase in fine ash fallout within 10s to 100s of km from the volcano and a reduction in airborne fine ash mass concentrations 1000s of km from the volcano. Aggregate characteristics vary with distance from the volcano: proximal aggregates are typically larger (up to cm size) with concentric structures, while distal aggregates are typically smaller (sub-millimetre size). Particles comprising ash aggregates are bound through hydro-bonds (liquid and ice water) and electrostatic forces, and the rate of particle aggregation correlates with cloud liquid water availability. Eruption source parameters (including initial particle size distribution, erupted mass, eruption column height, cloud water content and temperature) and the eruption plume temperature lapse rate, coupled with the environmental parameters, determine the type and spatiotemporal distribution of aggregates. Field studies, lab experiments and modelling investigations have already provided important insights on the process of particle aggregation. However, new integrated observations that combine remote sensing studies of ash clouds with field measurement and sampling, and lab experiments are required to fill current gaps in knowledge surrounding the theory of ash aggregate formation.
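The claim that aggregates settle faster than their constituents follows directly from Stokes' law, v = (ρ_p − ρ_air) g d² / (18 μ). The sketch below is a hedged illustration: the particle and aggregate sizes and densities are invented for the example, chosen only to match the abstract qualitatively (larger, much less dense aggregates).

```python
# Stokes terminal settling velocity comparison for a single fine ash
# particle versus a larger, low-density aggregate. All particle numbers
# are illustrative assumptions, not values from the paper.

G = 9.81          # m/s^2, gravitational acceleration
MU_AIR = 1.8e-5   # Pa*s, dynamic viscosity of air (approximate)
RHO_AIR = 1.2     # kg/m^3, air density near the surface

def stokes_velocity(diameter_m, density_kg_m3):
    """Stokes-law terminal velocity of a sphere in air, in m/s."""
    return (density_kg_m3 - RHO_AIR) * G * diameter_m**2 / (18 * MU_AIR)

v_fine = stokes_velocity(30e-6, 2500.0)       # a 30 um dense glassy ash particle
v_aggregate = stokes_velocity(300e-6, 500.0)  # a 300 um low-density aggregate

# The aggregate settles an order of magnitude faster despite being far less
# dense, because velocity scales with d^2: this is why aggregation shifts
# fine ash fallout closer to the volcano. Note the aggregate case exceeds
# the strict Stokes regime (Re ~ 1), so treat it as order-of-magnitude only.
```
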

Journal ArticleDOI
TL;DR: In this article, the authors used crumb rubber from scrap tires as an environmental friendly and sustainable additive for enhancing the high temperature and low temperature rheological properties of asphalt binders for asphalt pavements.

Journal ArticleDOI
TL;DR: It is demonstrated that ZIF-8's Cij's can be reliably predicted and that its elastic deformation mechanism is linked to the pliant ZnN4 tetrahedra; its shear modulus is the lowest yet reported for a single-crystalline extended solid.
Abstract: Using Brillouin scattering, we measured the single-crystal elastic constants (Cij's) of a prototypical metal-organic framework (MOF): zeolitic imidazolate framework (ZIF)-8 [Zn(2-methylimidazolate)2], which adopts a zeolitic sodalite topology and exhibits large porosity. Its Cij's under ambient conditions are (in GPa) C11 = 9.522(7), C12 = 6.865(14), and C44 = 0.967(4). Tensorial analysis of the Cij's reveals the complete picture of the anisotropic elasticity in cubic ZIF-8. We show that ZIF-8 has a remarkably low shear modulus Gmin ≲ 1 GPa, which is the lowest yet reported for a single-crystalline extended solid. Using ab initio calculations, we demonstrate that ZIF-8's Cij's can be reliably predicted, and its elastic deformation mechanism is linked to the pliant ZnN4 tetrahedra. Our results shed new light on the role of elastic constants in establishing the structural stability of MOF materials and thus their suitability for practical applications.
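The quoted Gmin can be checked directly from the three constants, since for a cubic crystal the two extreme shear moduli are C44 and (C11 − C12)/2. The derived bulk modulus and Zener anisotropy ratio below are my own standard-formula computations from the abstract's values, not numbers stated in the paper.

```python
# Worked check of cubic elasticity from the abstract's elastic constants.
# For a cubic crystal: shear extremes are C44 and (C11 - C12)/2,
# bulk modulus K = (C11 + 2*C12)/3, Zener ratio A = 2*C44/(C11 - C12).

C11, C12, C44 = 9.522, 6.865, 0.967   # GPa, ZIF-8 at ambient conditions

shear_100 = C44                  # shear on {100} planes
shear_110 = (C11 - C12) / 2      # shear on {110}<1-10>, about 1.33 GPa here
G_min = min(shear_100, shear_110)     # ~0.97 GPa, matching "Gmin <~ 1 GPa"
K = (C11 + 2 * C12) / 3               # bulk modulus, ~7.75 GPa
A_zener = 2 * C44 / (C11 - C12)       # ~0.73, i.e. softest in {100} shear
```

Since C44 < (C11 − C12)/2, the minimum shear modulus is indeed C44 ≈ 0.97 GPa, consistent with the abstract's Gmin ≲ 1 GPa claim.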

Journal ArticleDOI
TL;DR: In this paper, the peak power density of an anion-exchange membrane direct glycerol fuel cell (AEM-DGFC) with a 1.0 mgPt cm−2 anode and a non-PGM catalyst cathode reached 124.5 mW cm−2 at 80 °C and 58.6 mW cm−2 at 50 °C, while the highest selectivity of C3 acids reached 91%.
Abstract: Electrocatalytic oxidation of glycerol for cogenerating electricity and higher-valued chemicals on a Pt/C anode catalyst (2.4 nm) in an anion-exchange membrane fuel cell (AEMFC) was investigated. The peak power density of an anion-exchange membrane direct glycerol fuel cell (AEM-DGFC) with a 1.0 mgPt cm−2 anode and a non-PGM catalyst cathode can reach 124.5 mW cm−2 at 80 °C and 58.6 mW cm−2 at 50 °C, while the highest selectivity of C3 acids (glyceric acid + tartronic acid) can reach 91%. The study found that higher pH reaction media could enhance fuel cell output power density (electricity generation) and selectivity of C3 acids, while lower glycerol concentration could improve the selectivity of deeper-oxidized products (mesoxalic acid and oxalic acid). The fuel cell reactor with the Pt/C anode catalyst demonstrated an excellent reusability, and successfully obtained tartronic acid with a selectivity of 50% and mesoxalic acid with a selectivity of 7%, which are high compared to heterogeneous catalytic glycerol oxidation in batch reactors. It is found that the anode overpotential can regulate the oxidation product distribution, and that higher anode overpotentials favor C–C bond breaking, thus lowering the C3 acids selectivity. The reaction sequence of glycerol electro-oxidation detected in an electrolysis half cell with on-line sample collection and off-line HPLC analysis agrees with the results obtained from single fuel cell tests. However, inconsistencies between the two systems still exist and are possibly due to different reaction environments, such as electrode structure, glycerol:catalyst ratio, and residence time of reactants.

Journal ArticleDOI
TL;DR: Fundamental evidence is provided that physical capacity (fatigability and recovery) is adversely affected by mental workload; it is therefore critical to evaluate occupational demands in light of this modified muscular capacity to reduce the risk of WMSD development.
Abstract: Most occupational tasks involve some level of mental/cognitive processing in addition to physical work; however, the etiology of work-related musculoskeletal disorders (WMSDs) due to these demands remains unclear. The aim of this study was to quantify the interactive effects of physical and mental workload on muscle endurance, fatigue, and recovery during intermittent work. Twelve participants, balanced by gender, performed intermittent static shoulder abductions to exhaustion at 15, 35, and 55% of individual maximal voluntary contraction (MVC), in the absence (control) and presence (concurrent) of a mental arithmetic task. Changes in muscular capacity were determined using endurance time, strength decline, electromyographic (EMG) fatigue indicators, muscle oxygenation, and heart rate measures. Muscular recovery was quantified through changes in strength and physiological responses. Mental workload was associated with shorter endurance times, specifically at 35% MVC, and greater strength decline. EMG and oxygenation measures showed similar changes during fatigue manifestation during concurrent conditions compared to the control, despite shorter endurance times. Moreover, decreased heart rate variability during concurrent demand conditions indicated increased mental stress. Although strength recovery was not influenced by mental workload, a slower heart rate recovery was observed after concurrent demand conditions. The findings from this study provide fundamental evidence that physical capacity (fatigability and recovery) is adversely affected by mental workload. Thus, it is critical to determine or evaluate occupational demands based on modified muscular capacity (due to mental workload) to reduce risk of WMSD development.

Journal ArticleDOI
TL;DR: The results demonstrate the capability of the vascular implantation model to conduct rapid in vivo assessments of vascular biomaterial corrosion behavior and to predict long-term biocorrosion behavior from material analyses and highlight the critical role of the arterial environment in directing the corrosion behavior of biodegradable metals.
Abstract: Metal stents are commonly used to revascularize occluded arteries. A bioabsorbable metal stent that harmlessly erodes away over time may minimize the normal chronic risks associated with permanent implants. However, there is no simple, low-cost method of introducing candidate materials into the arterial environment. Here, we developed a novel experimental model where a biomaterial wire is implanted into a rat artery lumen (simulating bioabsorbable stent blood contact) or artery wall (simulating bioabsorbable stent matrix contact). We use this model to clarify the corrosion mechanism of iron (≥99.5 wt %), which is a candidate bioabsorbable stent material due to its biocompatibility and mechanical strength. We found that iron wire encapsulation within the arterial wall extracellular matrix resulted in substantial biocorrosion by 22 days, with a voluminous corrosion product retained within the vessel wall at 9 months. In contrast, the blood-contacting luminal implant experienced minimal biocorrosion at 9 months. The importance of arterial blood versus arterial wall contact for regulating biocorrosion was confirmed with magnesium wires. We found that magnesium was highly corroded when placed in the arterial wall but was not corroded when exposed to blood in the arterial lumen for 3 weeks. The results demonstrate the capability of the vascular implantation model to conduct rapid in vivo assessments of vascular biomaterial corrosion behavior and to predict long-term biocorrosion behavior from material analyses. The results also highlight the critical role of the arterial environment (blood vs. matrix contact) in directing the corrosion behavior of biodegradable metals. © 2011 Wiley Periodicals, Inc. J Biomed Mater Res Part B, 2011.

Journal ArticleDOI
TL;DR: In this article, a bio-modified binder (BMB) for highway and airport pavements is introduced, produced by blending virgin binder with bio-binder derived from the thermochemical conversion of swine manure.
Abstract: This paper introduces bio-binder as a partial replacement for bituminous binder on highway and airport pavements. The proposed bio-binder is produced from the thermochemical conversion of swine manure. Bio-binder is then blended with virgin binder to produce bio-modified binder (BMB). In addition to being a renewable alternative for petroleum-based binder, the production and application of bio-binder may provide a solution for the management of swine manure waste. Furthermore, the application of bio-binder will improve asphalt binder's properties while reducing asphalt pavements' construction cost; the cost of bio-binder production is $0.54/gallon compared with $2/gallon for asphalt binder. However, although the BMB has improved low-temperature properties, it may decrease the high-temperature grade of the binder. To address this concern, this paper investigates the feasibility of applying polyphosphoric acid to BMB. Using BMB in asphalt pavements can reduce mixing and compaction temperatures an...

Journal ArticleDOI
TL;DR: This work shows how a large carnivore living in a seasonal environment displays marked seasonal variation in predation because of changes in prey vulnerability, and contradicts previous research that suggests that rates of biomass acquisition for large terrestrial carnivores tend not to vary among seasons.
Abstract: Summary
1. For large predators living in seasonal environments, patterns of predation are likely to vary among seasons because of related changes in prey vulnerability. Variation in prey vulnerability underlies the influence of predators on prey populations and the response of predators to seasonal variation in rates of biomass acquisition. Despite its importance, seasonal variation in predation is poorly understood.
2. We assessed seasonal variation in prey composition and kill rate for wolves Canis lupus living on the Northern Range (NR) of Yellowstone National Park. Our assessment was based on data collected over 14 winters (1995–2009) and five spring–summers between 2004 and 2009.
3. The species composition of wolf-killed prey and the age and sex composition of wolf-killed elk Cervus elaphus (the primary prey for NR wolves) varied among seasons.
4. One's understanding of predation depends critically on the metric used to quantify kill rate. For example, kill rate was greatest in summer when quantified as the number of ungulates acquired per wolf per day, and least during summer when kill rate was quantified as the biomass acquired per wolf per day. This finding contradicts previous research that suggests that rates of biomass acquisition for large terrestrial carnivores tend not to vary among seasons.
5. Kill rates were not well correlated among seasons. For example, knowing that early-winter kill rate is higher than average (compared with other early winters) provides little basis for anticipating whether kill rates a few months later during late winter will be higher or lower than average (compared with other late winters). This observation indicates how observing, for example, higher-than-average kill rates throughout any particular season is an unreliable basis for inferring that the year-round average kill rate would be higher than average.
6. Our work shows how a large carnivore living in a seasonal environment displays marked seasonal variation in predation because of changes in prey vulnerability. Patterns of wolf predation were influenced by the nutritional condition of adult elk and the availability of smaller prey (i.e. elk calves, deer). We discuss how these patterns affect our overall understanding of predator and prey population dynamics.

Journal ArticleDOI
TL;DR: In this paper, the rheological properties of nano-modified asphalt were analyzed using asphalt binder tests such as Rotational Viscosity (RV), Dynamic Shear Rheometer (DSR), and Bending Beam Rheometer (BBR).

Journal ArticleDOI
TL;DR: A microgrid must be modeled and controlled as a system of systems that share some common aspects, such as voltage levels, but can operate independently; using local information in the form of the bus voltage removes the need for a centralized controller, which improves system reliability.
Abstract: DC power systems can be made more reliable by considering the load as an important energy asset. Currently the ability to manage the total system is available only through a centralized controller, which limits flexibility, reconfigurability, and reliability. These limitations can be avoided while still providing system level coordination through the use of distributed controls based on local information. All elements of the power system including sources, loads, and the network itself have influence, interaction, and coupling to all other elements. Therefore, it is necessary to model and control a microgrid as a system of systems that share some common aspects, such as voltage levels, but can operate independently. Using local information in the form of the bus voltage, these techniques do not rely on a centralized controller, which improves system reliability. However, it is important to design the microgrid to take advantage not just of the energy from the generation sources, but also of the energy stored in the individual points-of-load.
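A classic way to coordinate DC sources using only the locally measured bus voltage is droop control, where each source follows V = V_set − R_droop·I. The sketch below is my illustration of that general idea, not the paper's controller; the voltage set point, droop resistances, and load are invented values.

```python
# Decentralized droop sharing on a DC bus: each source injects current in
# proportion to how far the bus voltage sags below its set point, so no
# central coordinator is needed. Steady state is found from KCL:
#   sum_i (v_set - v_bus)/Rd_i = v_bus / r_load

def solve_bus(v_set, droops, r_load):
    """Steady-state bus voltage and per-source currents for droop-controlled
    sources (virtual resistances `droops`) feeding a resistive load."""
    g_src = sum(1.0 / rd for rd in droops)          # total source conductance
    v_bus = v_set * g_src / (g_src + 1.0 / r_load)  # solve KCL for v_bus
    currents = [(v_set - v_bus) / rd for rd in droops]
    return v_bus, currents

v_bus, (i1, i2) = solve_bus(v_set=48.0, droops=[0.5, 1.0], r_load=4.0)
# The stiffer source (smaller droop resistance) supplies twice the current,
# and the bus settles slightly below the 48 V set point.
```

The deliberate sag below the set point is the "local information" channel: every converter can infer system loading from its own terminal voltage, which is how the distributed schemes described in the abstract avoid a central controller.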

Journal ArticleDOI
TL;DR: To investigate the potential effects of future changes in vegetation driven by atmospheric CO2 concentrations, climate, and anthropogenic land use over the 21st century, the authors performed a series of model experiments combining a general circulation model with a dynamic global vegetation model and an atmospheric chemical-transport model.
Abstract: The effects of future land use and land cover change on the chemical composition of the atmosphere and air quality are largely unknown. To investigate the potential effects associated with future changes in vegetation driven by atmospheric CO2 concentrations, climate, and anthropogenic land use over the 21st century, we performed a series of model experiments combining a general circulation model with a dynamic global vegetation model and an atmospheric chemical-transport model. Our results indicate that climate- and CO2-induced changes in vegetation composition and density between 2100 and 2000 could lead to decreases in summer afternoon surface ozone of up to 10 ppb over large areas of the northern mid-latitudes. This is largely driven by the substantial increases in ozone dry deposition associated with increases in vegetation density in a warmer climate with higher atmospheric CO2 abundance. Climate-driven vegetation changes over the period 2000–2100 lead to general increases in isoprene emissions, globally by 15% in 2050 and 36% in 2100. These increases in isoprene emissions result in decreases in surface ozone concentrations where the NOx levels are low, such as in remote tropical rainforests. However, over polluted regions, such as the northeastern United States, ozone concentrations are calculated to increase with higher isoprene emissions in the future. Increases in biogenic emissions also lead to higher concentrations of secondary organic aerosols, which increase globally by 10% in 2050 and 20% in 2100. Summertime surface concentrations of secondary organic aerosols are calculated to increase by up to 1 μg m−3 and double for large areas in Eurasia over the period of 2000–2100. When we use a scenario of future anthropogenic land use change, we find less increase in global isoprene emissions due to replacement of higher-emitting forests by lower-emitting cropland.
The global atmospheric burden of secondary organic aerosols changes little by 2100 when we account for future land use change, but both secondary organic aerosols and ozone show large regional changes at the surface.

Journal ArticleDOI
P. Abreu1, Marco Aglietta2, Markus Ahlers3, E. J. Ahn4  +533 moreInstitutions (71)
TL;DR: In this article, the authors present comparative studies to identify and optimize the antenna design for the final configuration of the AERA consisting of 160 individual radio detector stations and rank the antennas with respect to the noise level added to the galactic signal.
Abstract: The Pierre Auger Observatory is exploring the potential of the radio detection technique to study extensive air showers induced by ultra-high energy cosmic rays. The Auger Engineering Radio Array (AERA) addresses both technological and scientific aspects of the radio technique. A first phase of AERA has been operating since September 2010 with detector stations observing radio signals at frequencies between 30 and 80 MHz. In this paper we present comparative studies to identify and optimize the antenna design for the final configuration of AERA consisting of 160 individual radio detector stations. The transient nature of the air shower signal requires a detailed description of the antenna sensor. As the ultra-wideband reception of pulses is not widely discussed in antenna literature, we review the relevant antenna characteristics and enhance theoretical considerations towards the impulse response of antennas including polarization effects and multiple signal reflections. On the basis of the vector effective length we study the transient response characteristics of three candidate antennas in the time domain. Observing the variation of the continuous galactic background intensity we rank the antennas with respect to the noise level added to the galactic signal.

Journal ArticleDOI
TL;DR: A statistical learning methodology is used to quantify the gap between Mr and Me in closed form via data fitting, which offers useful design guidelines for compressive samplers; a two-step compressive spectrum sensing algorithm for wideband cognitive radios is also developed as an illustrative application.
Abstract: Compressive sampling techniques can effectively reduce the acquisition costs of high-dimensional signals by utilizing the fact that typical signals of interest are often sparse in a certain domain. For compressive samplers, the number of samples Mr needed to reconstruct a sparse signal is determined by the actual sparsity order Snz of the signal, which can be much smaller than the signal dimension N. However, Snz is often unknown or dynamically varying in practice, and the practical sampling rate has to be chosen conservatively according to an upper bound Smax of the actual sparsity order in lieu of Snz, which can be unnecessarily high. To circumvent such wastage of the sampling resources, this paper introduces the concept of sparsity order estimation, which aims to accurately acquire Snz prior to sparse signal recovery, by using a very small number of samples Me less than Mr. A statistical learning methodology is used to quantify the gap between Mr and Me in a closed form via data fitting, which offers useful design guideline for compressive samplers. It is shown that Me ≥ 1.2Snz log(N/Snz + 2) + 3 for a broad range of sampling matrices. Capitalizing on this gap, this paper also develops a two-step compressive spectrum sensing algorithm for wideband cognitive radios as an illustrative application. The first step quickly estimates the actual sparsity order of the wide spectrum of interest using a small number of samples, and the second step adjusts the total number of collected samples according to the estimated signal sparsity order. By doing so, the overall sampling cost can be minimized adaptively, without degrading the sensing performance.
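The sample bound quoted in the abstract can be evaluated directly. The sketch below computes Me ≥ 1.2·Snz·log(N/Snz + 2) + 3 and compares it against a generic compressed-sensing reconstruction bound of the common form c·Snz·log(N/Snz); the constant c, the example dimensions, and the use of the natural logarithm are assumptions for illustration, not values from the paper.

```python
import math

def min_samples_order_estimation(n, s_nz):
    """Bound quoted in the abstract for estimating the sparsity order:
    M_e >= 1.2 * S_nz * log(N / S_nz + 2) + 3.
    The natural logarithm is assumed; the abstract does not state the base."""
    return math.ceil(1.2 * s_nz * math.log(n / s_nz + 2) + 3)

def min_samples_reconstruction(n, s_nz, c=2.0):
    """Generic compressed-sensing reconstruction bound of the common form
    M_r ~ c * S_nz * log(N / S_nz); the constant c is an assumption."""
    return math.ceil(c * s_nz * math.log(n / s_nz))

# Example dimensions (assumed, not from the paper)
N, S_nz = 1024, 20
M_e = min_samples_order_estimation(N, S_nz)
M_r = min_samples_reconstruction(N, S_nz)
print(M_e, M_r)  # M_e is noticeably smaller than M_r
```

The gap between the two bounds is what the paper's two-step sensing algorithm exploits: a cheap first pass with roughly M_e samples fixes the sparsity order, after which the sampling rate can be tuned to the actual signal.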

Journal ArticleDOI
TL;DR: In this paper, published assessments of the sustainability of algal biomass as a feedstock for biofuels are compared; such comparisons are difficult due to differences in data aggregation, life cycle boundaries, technical and life cycle assumptions, environmental metrics considered, and use of experimental, modeled, or assumed data.
Abstract: It is often difficult to compare publications assessing the sustainability of algal biomass as a feedstock for biofuels, due to differences in data aggregation, life cycle boundaries, technical and life cycle assumptions, environmental metrics considered, and use of experimental, modeled or assumed data. Input data for the algae cultivation stage was collected from published studies, focusing on microalgae production in open-air raceway ponds. Input data was normalized to a consistent functional unit, 1 kg of dry algal biomass. Environmental impacts were applied consistently to the different study inputs in order to eliminate this source of variation between the studies. Greenhouse gas emissions, fossil energy demand, and consumptive freshwater use were tabulated for the algal feedstock growth stage for open pond systems, and results were categorized (energy use, macronutrient fertilizers, and everything else) to compare the different studies in general terms. Environmental impacts for the cultivation of algal biomass in the considered reports varied by over two orders of magnitude. To illustrate impacts of variability in the cultivation stage on the ultimate environmental footprint of microalgae biofuels, algal oil harvesting, extraction and conversion to Green Jet Fuel was examined using the Renewable Jet Fuel process developed by Honeywell's UOP.
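The harmonization step described above, normalizing each study's cultivation inputs to the 1 kg dry-biomass functional unit and then applying one consistent set of impact factors, can be sketched in a few lines. All input quantities and emission factors below are hypothetical placeholders, not values from the reviewed studies.

```python
# Hypothetical GHG emission factors (kg CO2e per unit of input); the actual
# study applies consistently chosen literature factors instead.
GHG_FACTORS = {"electricity_kWh": 0.6, "N_fertilizer_kg": 3.0}

def ghg_per_kg_biomass(inputs, biomass_kg):
    """Normalize cultivation inputs to 1 kg dry algal biomass (the
    functional unit) and apply the common impact factors."""
    normalized = {k: v / biomass_kg for k, v in inputs.items()}
    return sum(normalized[k] * GHG_FACTORS[k] for k in normalized)

# Inputs reported by a hypothetical study per 100 kg of dry biomass
study_a = {"electricity_kWh": 50.0, "N_fertilizer_kg": 5.0}
print(ghg_per_kg_biomass(study_a, 100.0))  # ≈ 0.45 kg CO2e per kg biomass
```

Applying the same factor set to every study removes one source of cross-study variation, which is exactly the role this step plays in the paper's comparison.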

Journal ArticleDOI
01 Oct 2012-Methods
TL;DR: STTM (short tandem target mimic) is a powerful technology complementing the previous target mimic (TM) in plants, and the miRNA sponge as well as the recently defined competing endogenous RNA (ceRNA) in animals.

Journal ArticleDOI
TL;DR: Calibration is a process of comparing model results with field data and making the appropriate adjustments so that the two agree. As discussed in this paper, calibration can involve formal optimization methods or manual methods in which the modeler informally examines alternative model parameters.
Abstract: Calibration is a process of comparing model results with field data and making the appropriate adjustments so that both results agree. Calibration methods can involve formal optimization methods or manual methods in which the modeler informally examines alternative model parameters. The development of a calibration framework typically involves the following: (1) definition of the model variables, coefficients, and equations; (2) selection of an objective function to measure the quality of the calibration; (3) selection of the set of data to be used for the calibration process; and (4) selection of an optimization/manual scheme for altering the coefficient values in the direction of reducing the objective function. Hydraulic calibration usually involves the modification of system demands, fine-tuning the roughness values of pipes, altering pump operation characteristics, and adjusting other model attributes that affect simulation results, in particular those that have significant uncertainty associated with them.
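Steps (2) through (4) of the framework can be illustrated with a toy example: a sum-of-squared-errors objective and a brute-force search over a single pipe-roughness coefficient. The simulated_head function is a hypothetical surrogate, not a real hydraulic solver such as EPANET.

```python
def simulated_head(roughness, flow):
    """Hypothetical surrogate for a hydraulic model run; a real calibration
    would invoke a network solver here."""
    return 100.0 - roughness * flow ** 2

# (3) calibration data set: (flow, observed head) field measurements
observations = [(10.0, 98.0), (20.0, 92.0), (30.0, 82.0)]

def objective(roughness):
    """(2) sum of squared errors between model results and field data."""
    return sum((simulated_head(roughness, q) - h) ** 2 for q, h in observations)

# (4) a crude search scheme: evaluate candidate coefficients, keep the best
candidates = [i / 1000 for i in range(1, 101)]  # roughness 0.001 .. 0.100
best = min(candidates, key=objective)
print(best)  # 0.02 reproduces the field observations
```

In practice the search in step (4) is either a formal optimizer or the modeler's manual trial-and-error, but both reduce the same objective defined in step (2).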

Journal ArticleDOI
Anushka Udara Abeysekara1, Juanan Aguilar2, S. Aguilar3, Ruben Alfaro3, Erick Almaraz3, C. Alvarez, J. de D. Álvarez-Romero, M. Álvarez3, R. Arceo, J. C. Arteaga-Velázquez, C. Badillo3, A. S. Barber4, B. M. Baughman5, N. Bautista-Elivar6, E. Belmont3, Erika Benítez3, Segev BenZvi2, D. Berley5, A. Bernal3, Emanuele Bonamente7, J. Braun5, R. A. Caballero-Lopez3, I. Cabrera3, A. Carraminana8, L. Carrasco8, M. Castillo9, L. Chambers10, Ruben Conde9, P. Condreay10, U. Cotti, J. Cotzomi9, Juan Carlos D'Olivo3, E. De la Fuente11, C. De León, S. Delay12, David Delepine13, Tyce DeYoung10, L. Diaz3, L. Diaz-Cruz9, Brenda Dingus14, Michael DuVernois2, D. Edmunds1, R. W. Ellsworth15, Brian Fick7, D. W. Fiorino2, Alberto Flandes3, Nissim Fraija3, A. Galindo8, J. L. García-Luna11, Guillermo Garcia-Torales11, F. Garfias3, Luis Xavier Gonzalez8, Maria Magdalena González3, J. A. Goodman5, V. Grabski3, M. Gussert16, C. Guzmán-Ceron3, Z. Hampel-Arias2, T. Harris17, E. Hays18, L. Hernandez-Cervantes3, P. Hüntemeyer7, A. Imran14, A. Iriarte3, J. J. Jimenez, P. Karn12, N. Kelley-Hoskins7, David Kieda, R. Langarica3, Alejandro Lara3, R. J. Lauer19, William H. Lee3, E. C. Linares, J. T. Linnemann1, M. Longo16, R. Luna-García20, H. Martinez21, J. Martínez3, L. A. Martínez3, O. Martinez9, J. Martínez-Castro20, M. Martos3, J. A. Matthews19, Julie McEnery18, G. Medina-Tanco3, J. E. Mendoza-Torres8, Pedro Miranda-Romagnoli22, Teresa Montaruli2, E. Moreno9, Miguel Mostafa, M. Napsuciale13, J. Nava8, Lukas Nellen3, M. Newbold4, R. Noriega-Papaqui22, T. Oceguera-Becerra11, A. Olmos Tapia8, V. Orozco3, V. Pérez3, E. G. Pérez-Pérez6, J. S. Perkins18, J. Pretz14, C. Ramirez9, I. Ramírez3, D. Rebello17, A. Rentería3, J. Reyes8, Daniel Rosa-Gonzalez8, A. Rosado9, James M. Ryan23, J. R. Sacahui3, Humberto Ibarguen Salazar9, F. Salesa16, A. Sandoval3, Elton J. G. Santos, Michael Schneider24, A. Shoup25, S. Silich8, G. Sinnis14, A. J. Smith5, K. Sparks10, W. Springer4, F. 
Suarez3, Noslen Suarez8, Ignacio Taboada17, A. F. Tellez9, Guillermo Tenorio-Tagle8, A. Tepe17, P. A. Toale26, Kirsten Tollefson1, I. Torres8, T. N. Ukwatta1, J. F. Valdés-Galicia3, P. Vanegas3, V. Vasileiou18, O. Vázquez3, X. Vázquez3, L. Villaseñor, W. Wall8, J. S. Walters8, D. Warner16, S. Westerhoff2, I. G. Wisher2, Joshua Wood5, G. B. Yodh12, D. Zaborov10, Arnulfo Zepeda21 
TL;DR: In this paper, the sensitivity of HAWC to Gamma Ray Bursts (GRBs) has been evaluated using two data acquisition (DAQ) systems: the main DAQ system reads out coincident signals in the tanks and reconstructs the direction and energy of individual atmospheric showers, and the scaler DAQ counts the hits in each photomultiplier tube (PMT) in the detector and searches for a statistical excess over the noise of all PMTs.
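The scaler-DAQ search described in the TL;DR amounts to testing summed PMT counts for a statistical excess over the noise expectation. A minimal Gaussian-approximation sketch is below; the counts are hypothetical and this is not the actual HAWC analysis chain.

```python
import math

def excess_significance(observed, expected):
    """Gaussian approximation to the significance of a counting excess:
    (N_obs - N_bkg) / sqrt(N_bkg). Valid only for large expected counts."""
    return (observed - expected) / math.sqrt(expected)

# Hypothetical summed scaler counts in one search time window
n_background = 10000   # expected noise counts across all PMTs
n_observed = 10500     # counts recorded during a candidate GRB
print(excess_significance(n_observed, n_background))  # 5.0
```

Because the scaler DAQ does not reconstruct individual showers, such a rate-excess statistic is the natural quantity to monitor, while the main DAQ provides per-shower directions and energies.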

Journal ArticleDOI
TL;DR: Open Source Appropriate Technology (OSAT) as discussed by the authors is a paradigm for sustainable development in which anyone can learn how to make and use needed technologies free of intellectual property concerns and contribute to the collective open source knowledge ecosystem or knowledge commons.
Abstract: Much of the widespread poverty, environmental desecration, and waste of human life seen around the globe could be prevented by known (to humanity as a whole) technologies, many of which are simply not available to those that need it. This lack of access to critical information for sustainable development is directly responsible for a morally and ethically unacceptable level of human suffering and death. A solution to this general problem is the concept of open source appropriate technology or OSAT, which refers to technologies that provide for sustainable development while being designed in the same fashion as free and open source software. OSAT is made up of technologies that are easily and economically utilized from readily available resources by local communities to meet their needs and must meet the boundary conditions set by environmental, cultural, economic, and educational resource constraints of the local community. This paper explores both the open source and appropriate technology aspects of OSAT to create a paradigm, in which anyone can both learn how to make and use needed technologies free of intellectual property concerns. At the same time, anyone can also add to the collective open source knowledge ecosystem or knowledge commons by contributing ideas, designs, observations, experimental data, deployment logs, etc. It is argued that if OSAT continues to grow and takes hold globally creating a vibrant virtual community to share technology plans and experiences, a new technological revolution built on a dispersed network of innovators working together to create a just sustainable world is possible.