
Showing papers by "ETH Zurich" published in 2001


Journal ArticleDOI
Dmitri Ivanov1
TL;DR: From the properties of the solutions to Bogoliubov-de Gennes equations in the vortex core, the non-Abelian statistics of vortices are derived identical to that for the Moore-Read (Pfaffian) quantum Hall state.
Abstract: Excitation spectrum of a half-quantum vortex in a $p$-wave superconductor contains a zero-energy Majorana fermion. This results in a degeneracy of the ground state of the system of several vortices. From the properties of the solutions to Bogoliubov--de Gennes equations in the vortex core we derive the non-Abelian statistics of vortices identical to that for the Moore-Read (Pfaffian) quantum Hall state.
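For orientation, the exchange rule at the heart of this result is usually quoted in the compact form below; this is a standard restatement of the paper's braiding rule, written in LaTeX for readability, with γ_i denoting the Majorana operator bound to vortex i:

    % Exchange T_i of vortices i and i+1 acts on the Majorana operators as
    T_i:\quad \gamma_i \mapsto \gamma_{i+1},\qquad \gamma_{i+1} \mapsto -\gamma_i,\qquad \gamma_j \mapsto \gamma_j \;\;(j \neq i,\, i+1)

    % and is represented on the degenerate ground-state space by
    \tau(T_i) \;=\; \exp\!\Big(\frac{\pi}{4}\,\gamma_{i+1}\gamma_i\Big) \;=\; \frac{1}{\sqrt{2}}\,\big(1 + \gamma_{i+1}\gamma_i\big)

Because the operators τ(T_i) for neighboring exchanges do not commute, the vortex statistics are non-Abelian.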

1,461 citations


Journal ArticleDOI
TL;DR: Using the proposed method, SENSE becomes practical with nonstandard k‐space trajectories, enabling considerable scan time reduction with respect to mere gradient encoding, and the in vivo feasibility of non‐Cartesian SENSE imaging with iterative reconstruction is demonstrated.
Abstract: New, efficient reconstruction procedures are proposed for sensitivity encoding (SENSE) with arbitrary k-space trajectories. The presented methods combine gridding principles with so-called conjugate-gradient iteration. In this fashion, the bulk of the work of reconstruction can be performed by fast Fourier transform (FFT), reducing the complexity of data processing to the same order of magnitude as in conventional gridding reconstruction. Using the proposed method, SENSE becomes practical with nonstandard k-space trajectories, enabling considerable scan time reduction with respect to mere gradient encoding. This is illustrated by imaging simulations with spiral, radial, and random k-space patterns. Simulations were also used for investigating the convergence behavior of the proposed algorithm and its dependence on the factor by which gradient encoding is reduced. The in vivo feasibility of non-Cartesian SENSE imaging with iterative reconstruction is demonstrated by examples of brain and cardiac imaging using spiral trajectories. In brain imaging with six receiver coils, the number of spiral interleaves was reduced by factors ranging from 2 to 6. In cardiac real-time imaging with four coils, spiral SENSE permitted reducing the scan time per image from 112 ms to 56 ms, thus doubling the frame-rate. Magn Reson Med 46:638–651, 2001. © 2001 Wiley-Liss, Inc.
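As a rough sketch of the reconstruction principle, the following Python fragment solves the SENSE normal equations by conjugate-gradient iteration. It is a hedged simplification of what the abstract describes: a Cartesian undersampling mask stands in for the gridding of arbitrary k-space trajectories, and all names and shapes (coil_sens, mask, kdata) are illustrative assumptions rather than the paper's implementation.

    # Toy CG-SENSE: solve (E^H E) x = E^H y, where E applies coil
    # sensitivities, a 2D FFT, and a k-space sampling mask.
    import numpy as np

    def sense_normal_op(img, coil_sens, mask):
        """Apply E^H E to an image estimate."""
        out = np.zeros_like(img)
        for c in coil_sens:                      # loop over receiver coils
            k = np.fft.fft2(c * img) * mask      # encode: weight, FFT, sample
            out += np.conj(c) * np.fft.ifft2(k)  # adjoint: inverse FFT, conj weight
        return out

    def cg_sense(kdata, coil_sens, mask, n_iter=20):
        """Unpreconditioned conjugate gradients on the normal equations."""
        rhs = sum(np.conj(c) * np.fft.ifft2(k * mask)
                  for c, k in zip(coil_sens, kdata))   # E^H y
        x = np.zeros_like(rhs)
        r = rhs.copy()
        p = r.copy()
        rr = np.vdot(r, r)
        for _ in range(n_iter):
            Ap = sense_normal_op(p, coil_sens, mask)
            alpha = rr / np.vdot(p, Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            rr_new = np.vdot(r, r)
            p = r + (rr_new / rr) * p
            rr = rr_new
        return x

The appeal of the CG formulation is visible here: each iteration costs only FFTs and pointwise multiplications, which is what keeps the workload on the order of a conventional gridding reconstruction.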

1,221 citations


Journal ArticleDOI
Stefan Wiemer1
TL;DR: There is a handy-dandy software package ideally suited to answering exactly this question, ZMAP, developed by Stefan Wiemer, which allows the user to examine an earthquake catalog from many different angles and helps the user get the most out of the analyzed catalog.
Abstract: The Electronic Seismologist (ES) has been known to actually do some research in the field of seismology from time to time. As an operator of a seismic monitoring network the research done often is related to the seismicity of the monitored region. Detecting changes or trends in seismicity is relevant to earthquake and volcano hazards, but are the trends detected real or only an artifact of changes in the network operating parameters? Because all seismic networks evolve, change staff, change software and hardware, there is always the nagging feeling, if not outright knowledge, that interesting patterns in the catalog reflect network changes rather than changes in the Earth. How can one tell the difference? The ES is happy to report that there is a handy-dandy software package ideally suited to answering exactly this question (and many others). ZMAP , developed by Stefan Wiemer, allows the user to examine an earthquake catalog from many different angles. Not only does it include the traditional map, cross-section, and time sequence parameters, but also several others, such as event size and mechanism. These can be combined in interesting ways to present the user with different “views” into the data. Considerable seismological acumen lies behind the use and presentation of these parameters, which helps the user get the most out of the analyzed catalog. ZMAP is fairly intuitive to use and produces attractive output. In fact, the ES actually has fun “playing” with it and gets useful results besides. Perhaps one of the best ways to get a sense of how ZMAP might be used is to take a tour of case studies. The following includes many examples, and if they're not enough there are a slew of references where one can find more. In his traditional groveling way the ES has prevailed on Stefan …

971 citations


Journal ArticleDOI
TL;DR: The GROMOS96 45A3 parameter set should be suitable for application to lipid aggregates such as membranes and micelles, for mixed systems of aliphatics with or without water, for polymers, and other apolar systems that may interact with different biomolecules.
Abstract: Over the past 4 years the GROMOS96 force field has been successfully used in biomolecular simulations, for example in peptide folding studies and detailed protein investigations, but no applications to lipid systems have been published yet. Here we provide a detailed investigation of aliphatic liquid systems. For liquids of larger aliphatic chains, n-heptane and longer, the standard GROMOS96 parameter sets 43A1 and 43A2 yield too low a pressure at the experimental density. Therefore, a reparametrization of the GROMOS96 force field regarding aliphatic carbons was initiated. The new force field parameter set 45A3 shows considerable improvements for n-alkanes, cyclo-, iso-, and neoalkanes and other branched aliphatics. Liquid densities and heats of vaporization are reproduced for almost all of these molecules. Excellent agreement is found with experiment for the free energy of hydration for alkanes. The GROMOS96 45A3 parameter set should, therefore, be suitable for application to lipid aggregates such as membranes and micelles, for mixed systems of aliphatics with or without water, for polymers, and other apolar systems that may interact with different biomolecules.

856 citations


Journal ArticleDOI
TL;DR: Equivalences among five classes of hybrid systems are established, which is of paramount importance for transferring theoretical properties and tools from one class to another; as a consequence, for the study of a particular hybrid system that belongs to any of these classes, one can choose the most convenient hybrid modeling framework.

780 citations


Journal ArticleDOI
01 Jul 2001
TL;DR: This tutorial focuses on recent techniques that combine model checking with satisfiability solving, known as bounded model checking, which do a very fast exploration of the state space, and for some types of problems seem to offer large performance improvements over previous approaches.
Abstract: The phrase model checking refers to algorithms for exploring the state space of a transition system to determine if it obeys a specification of its intended behavior. These algorithms can perform exhaustive verification in a highly automatic manner, and, thus, have attracted much interest in industry. Model checking programs are now being commercially marketed. However, model checking has been held back by the state explosion problem, which is the problem that the number of states in a system grows exponentially in the number of system components. Much research has been devoted to ameliorating this problem. In this tutorial, we first give a brief overview of the history of model checking to date, and then focus on recent techniques that combine model checking with satisfiability solving. These techniques, known as bounded model checking, do a very fast exploration of the state space, and for some types of problems seem to offer large performance improvements over previous approaches. We review experiments with bounded model checking on both public domain and industrial designs, and propose a methodology for applying the technique in industry for invariance checking. We then summarize the pros and cons of this new technology and discuss future research efforts to extend its capabilities.
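To make the "bounded" idea concrete, here is a toy Python sketch that explores only executions of length at most k and reports a counterexample path if an invariant fails. A real bounded model checker encodes this unrolling as a propositional formula and hands it to a SAT solver instead of enumerating states; the 3-bit counter below is an invented example, not one from the tutorial.

    def bmc_invariant(init, trans, invariant, k):
        """Return a counterexample path with at most k transitions, else None."""
        frontier = [(s,) for s in init]          # paths with zero transitions
        for _ in range(k + 1):
            next_frontier = []
            for path in frontier:
                s = path[-1]
                if not invariant(s):
                    return path                  # bounded counterexample found
                next_frontier += [path + (t,) for t in trans(s)]
            frontier = next_frontier
        return None                              # no violation within the bound

    # Example: a wrapping 3-bit counter, with the (false) claim it never hits 5.
    cex = bmc_invariant(init=[0],
                        trans=lambda s: [(s + 1) % 8],
                        invariant=lambda s: s != 5,
                        k=6)
    print(cex)   # (0, 1, 2, 3, 4, 5)

The bound k is exactly the depth of the exploration; arguing completeness beyond the bound is what the invariance-checking methodology mentioned above must address separately.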

770 citations


Journal ArticleDOI
TL;DR: Functional and anatomical evidence exists that spontaneous plasticity can be potentiated by activity, as well as by specific experimental manipulations; these findings prepare the way for a better understanding of rehabilitation treatments and for the development of new approaches to treat spinal cord injury.
Abstract: Although spontaneous regeneration of lesioned fibres is limited in the adult central nervous system, many people that suffer from incomplete spinal cord injuries show significant functional recovery. This recovery process can go on for several years after the injury and probably depends on the reorganization of circuits that have been spared by the lesion. Synaptic plasticity in pre-existing pathways and the formation of new circuits through collateral sprouting of lesioned and unlesioned fibres are important components of this recovery process. These reorganization processes might occur in cortical and subcortical motor centres, in the spinal cord below the lesion, and in the spared fibre tracts that connect these centres. Functional and anatomical evidence exists that spontaneous plasticity can be potentiated by activity, as well as by specific experimental manipulations. These studies prepare the way to a better understanding of rehabilitation treatments and to the development of new approaches to treat spinal cord injury.

754 citations


Proceedings ArticleDOI
01 Aug 2001
TL;DR: This work proposes a system that models cities using a procedural approach based on L-systems: the system generates the street network and the building geometry, and the buildings are composed by an L-system that generates geometry together with a texturing system based on texture elements and procedural methods.
Abstract: Modeling a city poses a number of problems to computer graphics. Every urban area has a transportation network that follows population and environmental influences, and often a superimposed pattern plan. The buildings' appearances follow historical, aesthetic and statutory rules. To create a virtual city, a roadmap has to be designed and a large number of buildings need to be generated. We propose a system using a procedural approach based on L-systems to model cities. From various image maps given as input, such as land-water boundaries and population density, our system generates a system of highways and streets, divides the land into lots, and creates the appropriate geometry for the buildings on the respective allotments. For the creation of a city street map, L-systems have been extended with methods that allow the consideration of global goals and local constraints and reduce the complexity of the production rules. An L-system that generates geometry and a texturing system based on texture elements and procedural methods compose the buildings.
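For readers unfamiliar with L-systems, the sketch below shows the core parallel-rewriting step on which such a generator builds. It is a deliberately minimal illustration: the single branching rule is invented, and the paper's extended L-systems additionally consult global goals and local constraints at every derivation step.

    # Minimal L-system: rewrite all symbols in parallel each generation.
    rules = {"F": "F[+F]F[-F]"}   # toy branching rule, not from the paper

    def rewrite(axiom, rules, generations):
        s = axiom
        for _ in range(generations):
            s = "".join(rules.get(ch, ch) for ch in s)  # parallel substitution
        return s

    print(rewrite("F", rules, 2))
    # Interpreted turtle-style (F = draw segment, +/- = turn, [ ] = push/pop),
    # the string unfolds into a branching street- or plant-like skeleton.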

749 citations


Proceedings ArticleDOI
07 May 2001
TL;DR: The algorithms presented herein rely on range measurements between pairs of nodes and the a priori coordinates of sparsely located anchor nodes to establish confident position estimates through assumptions, checks, and iterative refinements.
Abstract: Evolving networks of ad-hoc wireless sensing nodes rely heavily on the ability to establish position information. The algorithms presented herein rely on range measurements between pairs of nodes and the a priori coordinates of sparsely located anchor nodes. Clusters of nodes surrounding anchor nodes cooperatively establish confident position estimates through assumptions, checks, and iterative refinements. Once established, these positions are propagated to more distant nodes, allowing the entire network to create an accurate map of itself. Major obstacles include overcoming inaccuracies in range measurements as great as ±50%, as well as the development of initial guesses for node locations in clusters with few or no anchor nodes. Solutions to these problems are presented and discussed, using position error as the primary metric. Algorithms are compared according to position error, scalability, and communication and computational requirements. Early simulations yield average position errors of 5% in the presence of both range and initial position inaccuracies.
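The iterative-refinement step can be pictured as each node nudging its position estimate to better fit the measured ranges to already-localized neighbors. The Python sketch below shows one such least-squares update under simplifying assumptions (2D, fixed neighbors, damped gradient steps); the function and variable names are illustrative, not from the paper.

    import numpy as np

    def refine_position(pos, neighbors, ranges, n_iter=100, step=0.5):
        """Refine one node's 2D position from noisy ranges to fixed neighbors."""
        pos = np.asarray(pos, dtype=float)
        for _ in range(n_iter):
            grad = np.zeros(2)
            for nb, r in zip(neighbors, ranges):
                d = pos - nb
                dist = np.linalg.norm(d) + 1e-9       # avoid division by zero
                grad += (dist - r) * d / dist         # range residual, radially
            pos -= step * grad / len(neighbors)       # damped descent step
        return pos

    anchors = [np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([0.0, 10.0])]
    print(refine_position([4.0, 4.0], anchors, ranges=[7.07, 7.07, 7.07]))
    # approaches (5, 5), the point consistent with all three ranges

In the paper's setting the same kind of update runs network-wide, with confidence checks deciding which estimates are trusted enough to propagate onward.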

743 citations


Journal ArticleDOI
15 Nov 2001-Nature
TL;DR: This work combines several of these developments to fabricate a smart single-chip chemical microsensor system that incorporates three different transducers (mass-sensitive, capacitive and calorimetric), all of which rely on sensitive polymeric layers to detect airborne volatile organic compounds.
Abstract: Research activity in chemical gas sensing is currently directed towards the search for highly selective (bio)chemical layer materials, and to the design of arrays consisting of different partially selective sensors that permit subsequent pattern recognition and multi-component analysis. Simultaneous use of various transduction platforms has been demonstrated, and the rapid development of integrated-circuit technology has facilitated the fabrication of planar chemical sensors and sensors based on three-dimensional microelectromechanical systems. Complementary metal-oxide silicon processes have previously been used to develop gas sensors based on metal oxides and acoustic-wave-based sensor devices. Here we combine several of these developments to fabricate a smart single-chip chemical microsensor system that incorporates three different transducers (mass-sensitive, capacitive and calorimetric), all of which rely on sensitive polymeric layers to detect airborne volatile organic compounds. Full integration of the microelectronic and micromechanical components on one chip permits control and monitoring of the sensor functions, and enables on-chip signal amplification and conditioning that notably improves the overall sensor performance. The circuitry also includes analog-to-digital converters, and an on-chip interface to transmit the data to off-chip recording units. We expect that our approach will provide a basis for the further development and optimization of gas microsystems.

663 citations


Journal ArticleDOI
Harald Bugmann1
TL;DR: The structure of JABOWA is analysed in terms of the functional relationships used for formulating the processes of tree establishment, growth, and mortality, and it is concluded that JABOWA contains a number of unrealistic assumptions that have not been questioned strongly to date.
Abstract: Forest gap models, initially conceived in 1969 as a special case of individual-tree based models, have become widely popular among forest ecologists for addressing a large number of applied research questions, including the impacts of global change on long-term dynamics of forest structure, biomass, and composition. However, they have been strongly criticized for a number of weaknesses inherent in the original model structure. In this paper, I review the fundamental assumptions underlying forest gap models, the structure of the parent model JABOWA, and examine these criticisms in the context of the many alternative formulations that have been developed over the past 30 years. Four assumptions originally underlie gap models: (1) The forest is abstracted as a composite of many small patches of land, where each can have a different age and successional stage; (2) patches are horizontally homogeneous, i.e., tree position within a patch is not considered; (3) the leaves of each tree are located in an indefinitely thin layer (disk) at the top of the stem; and (4) successional processes are described on each patch separately, i.e., there are no interactions between patches. These simplifications made it possible to consider mixed-species, mixed-age forests, which had been difficult previously mainly because of computing limitations. The structure of JABOWA is analysed in terms of the functional relationships used for formulating the processes of tree establishment, growth, and mortality. It is concluded that JABOWA contains a number of unrealistic assumptions that have not been questioned strongly to date. At the same time, some aspects of JABOWA that were criticized strongly in the past years are internally consistent given the objectives of this specific model. A wide variety of formulations for growth processes, establishment, and mortality factors have been developed in gap models over the past 30 years, and modern gap models include more robust parameterizations of environmental influences on tree growth and population dynamics as compared to JABOWA. Approaches taken in more recent models that led to the relaxation of one or several of the four basic assumptions are discussed. It is found that the original assumptions often have been replaced by alternatives; however, no systematic analysis of the behavioral effects of these conceptual changes has been attempted to date. The feasibility of including more physiological detail (instead of using relatively simple parameterizations) in forest gap models is discussed, and it is concluded that we often lack the data base to implement such approaches for more than a few commercially important tree species. Hence, it is important to find a compromise between using simplistic parameterizations and expanding gap models with physiology-based functions and parameters that are difficult to estimate. While the modeling of tree growth has received a lot of attention over the past years, much less effort has been spent on improving the formulations of tree establishment and mortality, although these processes are likely to be just as sensitive to global change as tree growth itself.
Finally, model validation issues are discussed, and it is found that there is no single data source that can reliably be used for evaluating the behavior of forest gap models; instead, I propose a combination of sensitivity analyses, qualitative examinations of process formulations, and quantitative tests of gap models or selected submodels against various kinds of empirical data to evaluate the utility of these models for predicting the impacts of global change on long-term forest dynamics.

Journal ArticleDOI
TL;DR: In this article, the International Commission for the Hydrology of the Rhine basin (CHR) has carried out a research project to assess the impact of climate change on the river flow conditions in the Rhine basin.
Abstract: The International Commission for the Hydrology of the Rhine basin (CHR) has carried out a research project to assess the impact of climate change on the river flow conditions in the Rhine basin. Along a bottom-up line, different detailed hydrological models with hourly and daily time steps have been developed for representative sub-catchments of the Rhine basin. Along a top-down line, a water balance model for the entire Rhine basin has been developed, which calculates monthly discharges and which was tested on the scale of the major tributaries of the Rhine. Using this set of models, the effects of climate change on the discharge regime in different parts of the Rhine basin were calculated using the results of UKHI and XCCC GCM-experiments. All models indicate the same trends in the changes: higher winter discharge as a result of intensified snow-melt and increased winter precipitation, and lower summer discharge due to the reduced winter snow storage and an increase of evapotranspiration. When the results are considered in more detail, however, several differences show up. These can firstly be attributed to different physical characteristics of the studied areas, but different spatial and temporal scales used in the modelling and different representations of several hydrological processes (e.g., evapotranspiration, snow melt) are responsible for the differences found as well. Climate change can affect various socio-economic sectors. Higher temperatures may threaten winter tourism in the lower winter sport areas. The hydrological changes will increase flood risk during winter, whilst low flows during summer will adversely affect inland navigation, and reduce water availability for agriculture and industry. Balancing the required actions against economic cost and the existing uncertainties in the climate change scenarios, a policy of 'no-regret and flexibility' in water management planning and design is recommended, where anticipatory adaptive measures in response to climate change impacts are undertaken in combination with ongoing activities.

Proceedings ArticleDOI
01 Aug 2001
TL;DR: A point rendering and texture filtering technique called surface splatting is described, which directly renders opaque and transparent surfaces from point clouds without connectivity, based on a novel screen-space formulation of the Elliptical Weighted Average (EWA) filter.
Abstract: Modern laser range and optical scanners need rendering techniques that can handle millions of points with high resolution textures. This paper describes a point rendering and texture filtering technique called surface splatting which directly renders opaque and transparent surfaces from point clouds without connectivity. It is based on a novel screen space formulation of the Elliptical Weighted Average (EWA) filter. Our rigorous mathematical analysis extends the texture resampling framework of Heckbert to irregularly spaced point samples. To render the points, we develop a surface splat primitive that implements the screen space EWA filter. Moreover, we show how to optimally sample image and procedural textures to irregular point data during pre-processing. We also compare the optimal algorithm with a more efficient view-independent EWA pre-filter. Surface splatting makes the benefits of EWA texture filtering available to point-based rendering. It provides high quality anisotropic texture filtering, hidden surface removal, edge anti-aliasing, and order-independent transparency.
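A minimal sketch of the screen-space EWA idea: the reconstruction kernel's covariance is warped into screen space by the local Jacobian of the object-to-screen mapping and combined with a unit-variance screen-space low-pass filter, giving one elliptical Gaussian to evaluate per splat. Names and the 2D setup are illustrative assumptions; the paper develops the full resampling framework, visibility, and transparency on top of this.

    import numpy as np

    def ewa_weight(x, center, J, V_recon):
        """Evaluate a screen-space EWA resampling Gaussian at pixel x.

        J       -- 2x2 Jacobian of the object-to-screen mapping at the splat
        V_recon -- 2x2 covariance of the reconstruction kernel (object space)
        """
        V = J @ V_recon @ J.T + np.eye(2)   # warped kernel + screen low-pass
        d = np.asarray(x, dtype=float) - center
        q = d @ np.linalg.inv(V) @ d        # squared Mahalanobis distance
        return np.exp(-0.5 * q) / (2 * np.pi * np.sqrt(np.linalg.det(V)))

    w = ewa_weight([1.0, 0.5],
                   center=np.array([0.0, 0.0]),
                   J=np.array([[2.0, 0.0], [0.0, 1.0]]),   # local magnification
                   V_recon=np.eye(2))

Adding the identity low-pass term is what bounds the kernel under minification and yields the anisotropic antialiasing credited to EWA filtering above.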

Journal ArticleDOI
Atsumu Ohmura1
TL;DR: The simulation capacity of the temperature-based melt-index method, however, is too good to be called crude, as mentioned in this paper, even though the method is often rated as inferior to other more sophisticated methods such as the energy balance method.
Abstract: The close relationship between air temperature measured at standard screen level and the rate of melt on snow and ice has been widely used to estimate the rate of melt. The parameterization of the melt rate using air temperature usually takes a simple form as a function of either the mean temperature for the relevant period or positive degree-day statistics. The computation provides the melt rate with sufficient accuracy for most practical purposes. Because of its simplicity, it is often called a crude method and is rated as inferior to other more sophisticated methods such as the energy balance method. The method is often used with the justification that temperature data are easily available or that obtaining energy balance fluxes is difficult. The physical process responsible for the temperature effect on the melt rate is often attributed to the sensible heat conduction from the atmosphere. The simulation capacity of the temperature-based melt-index method, however, is too good to be called crude...
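The melt-index parameterization under discussion is compact enough to state directly: melt is a degree-day factor times the sum of positive temperatures. In the sketch below, the factor of 8 mm w.e. per °C day is an assumed, illustrative value, not one taken from the paper.

    def melt_mm(daily_temps_c, ddf=8.0):
        """Positive degree-day melt: ddf * sum of positive daily mean temps."""
        pdd = sum(max(t, 0.0) for t in daily_temps_c)   # positive degree-days
        return ddf * pdd                                # melt in mm w.e.

    print(melt_mm([-2.0, 1.5, 3.0, 0.5]))   # 8.0 * 5.0 = 40.0 mm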

Proceedings ArticleDOI
Kay Römer1
01 Oct 2001
TL;DR: This work presents a time synchronization scheme that is appropriate for sparse ad hoc networks and explains how the data sensed by various smart things can be combined to derive knowledge about the environment, which enables the smart things to "react" intelligently to their environment.
Abstract: Ubiquitous computing environments are typically based upon ad hoc networks of mobile computing devices. These devices may be equipped with sensor hardware to sense the physical environment and may be attached to real world artifacts to form so-called smart things. The data sensed by various smart things can then be combined to derive knowledge about the environment, which in turn enables the smart things to "react" intelligently to their environment. For this so-called sensor fusion, temporal relationships (X happened before Y) and real-time issues (X and Y happened within a certain time interval) play an important role. Thus physical time and clock synchronization are crucial in such environments. However, due to the characteristics of sparse ad hoc networks, classical clock synchronization algorithms are not applicable in this setting. We present a time synchronization scheme that is appropriate for sparse ad hoc networks.
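One way to picture a synchronization scheme suited to such networks is interval-based: instead of agreeing on a single clock value, a receiver bounds the age of a remote event using the measured round-trip time and a maximum clock drift rate. The sketch below is a hedged illustration of that idea only; the constant and names are assumptions, not details of the scheme presented in the paper.

    RHO = 1e-4   # assumed maximum relative clock drift of the nodes

    def age_interval_at_receiver(event_age_at_send, round_trip_time):
        """Bound a remote event's age in receiver time, despite clock drift.

        event_age_at_send -- event age (sender clock) when the message left
        round_trip_time   -- measured bound on the message delay
        """
        lower = event_age_at_send * (1 - RHO)                      # at least this old
        upper = (event_age_at_send + round_trip_time) * (1 + RHO)  # at most this old
        return lower, upper

    print(age_interval_at_receiver(2.0, 0.1))   # ~(1.9998, 2.1002)

Interval bounds of this kind compose across hops, which is what makes such a style of scheme workable when nodes meet only sporadically.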

Journal ArticleDOI
23 Jan 2001-Langmuir
TL;DR: Poly(l-lysine)-g-poly(ethylene glycol) (PLL-g-PEG) is a member of a family of polycationic PEG-grafted copolymers that have been shown to chemisorb on anionic surfaces, including various metal oxides.
Abstract: Poly(l-lysine)-g-poly(ethylene glycol) (PLL-g-PEG) is a member of a family of polycationic PEG-grafted copolymers that have been shown to chemisorb on anionic surfaces, including various metal oxides...

Journal ArticleDOI
Christian Monn1
TL;DR: In this paper, a review describes databases of small-scale spatial variations and of indoor, outdoor, and personal measurements of air pollutants, with the main focus on suspended particulate matter and, to a lesser extent, nitrogen dioxide and photochemical pollutants.

Journal ArticleDOI
01 Jun 2001
TL;DR: The experiments showed that the gait phase detection system, unlike other similar devices, was insensitive to perturbations caused by nonwalking activities such as weight shifting between legs during standing, feet sliding, sitting down, and standing up.
Abstract: A new highly reliable gait phase detection system, which can be used in gait analysis applications and to control the gait cycle of a neuroprosthesis for walking, is described. The system was designed to detect in real-time the following gait phases: stance, heel-off, swing, and heel-strike. The gait phase detection system employed a gyroscope to measure the angular velocity of the foot and three force sensitive resistors to assess the forces exerted by the foot on the shoe sole during walking. A rule-based detection algorithm, which was running on a portable microprocessor board, processed the sensor signals. In the presented experimental study ten able-bodied subjects and six subjects with impaired gait tested the device in both indoor and outdoor environments (0–25 °C). The subjects were asked to walk on flat and irregular surfaces, to step over small obstacles, to walk on inclined surfaces, and to ascend and descend stairs. Despite the significant variation in the individual walking styles the system achieved an overall detection reliability above 99% for both subject groups for the tasks involving walking on flat, irregular, and inclined surfaces. In the case of stair climbing and descending tasks the success rate of the system was above 99% for the able-bodied subjects and above 96% for the subjects with impaired gait. The experiments also showed that the gait phase detection system, unlike other similar devices, was insensitive to perturbations caused by nonwalking activities such as weight shifting between legs during standing, feet sliding, sitting down, and standing up.
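The rule-based algorithm can be pictured as a small state machine that only permits the cyclic order stance -> heel-off -> swing -> heel-strike. The thresholds and rules in the Python sketch below are invented for illustration; the paper's detector uses its own tuned rules over the gyroscope and the three force sensitive resistors.

    def next_phase(phase, fsr_heel, fsr_toe, gyro):
        """Advance a cyclic gait-phase machine from normalized sensor readings."""
        loaded = fsr_heel > 0.2 or fsr_toe > 0.2            # foot bears weight
        if phase == "stance" and fsr_heel < 0.2 and fsr_toe > 0.2:
            return "heel_off"                               # heel unloads first
        if phase == "heel_off" and not loaded and abs(gyro) > 1.0:
            return "swing"                                  # foot airborne, rotating
        if phase == "swing" and fsr_heel > 0.2:
            return "heel_strike"                            # heel contact resumes
        if phase == "heel_strike" and loaded and abs(gyro) < 0.5:
            return "stance"
        return phase                                        # rules unmet: stay put

Restricting transitions to the cyclic order is one plausible reason such detectors ignore non-walking activities like weight shifting: those events never produce the full sensor sequence, so the machine simply stays in its current phase.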

Journal ArticleDOI
Willem H. Koppenol1
TL;DR: The Haber-Weiss reaction was revived by Beauchamp and Fridovich in 1970 to explain the toxicity of superoxide; the reaction was later dropped from the scheme of oxygen toxicity, with superoxide instead regarded as the source of hydrogen peroxide, until Kahn and Kasha resurrected it in 1994, this time with the oxygen believed to be in the singlet (1∆g) state.
Abstract: The chain reactions HO• + H2O2 → H2O + O2•- + H+ and O2•- + H+ + H2O2 → O2 + HO• + H2O, commonly known as the Haber-Weiss cycle, were first mentioned by Haber and Willstätter in 1931. George showed in 1947 that the second reaction is insignificant in comparison to the fast dismutation of superoxide, and this finding appears to have been accepted by Weiss in 1949. In 1970, the Haber-Weiss reaction was revived by Beauchamp and Fridovich to explain the toxicity of superoxide. During the 1970s various groups determined that the rate constant for this reaction is of the order of 1 M-1s-1 or less, which confirmed George's conclusion. The reaction of superoxide with hydrogen peroxide was dropped from the scheme of oxygen toxicity, and superoxide became the source of hydrogen peroxide, which yields hydroxyl radicals via the Fenton reaction, Fe2+ + H2O2 → Fe3+ + HO- + HO•. In 1994, Kahn and Kasha resurrected the Haber-Weiss reaction again, but this time the oxygen was believed to be in the singlet (1∆g) state...

Journal ArticleDOI
TL;DR: The development of a tissue-engineered blood vessel substitute has motivated much of the research in the area of cardiovascular tissue engineering over the past 20 years as discussed by the authors, and several methodologies have emerged for constructing blood vessel replacements with biological functionality.
Abstract: The development of a tissue-engineered blood vessel substitute has motivated much of the research in the area of cardiovascular tissue engineering over the past 20 years. Several methodologies have emerged for constructing blood vessel replacements with biological functionality. These include cell-seeded collagen gels, cell-seeded biodegradable synthetic polymer scaffolds, cell self-assembly, and acellular techniques. This review details the most recent developments, with a focus on core technologies and construct development. Specific examples are discussed to illustrate both the benefits and shortcomings of each methodology, as well as to underline common themes. Finally, a brief perspective on challenges for the future is presented.

Journal ArticleDOI
TL;DR: In this paper, a conceptual model of island development is proposed which integrates the interactions between large woody debris and vegetation, geomorphic features, sediment calibre and hydrological regime.
Abstract: After more than 300 years of river management, scientific knowledge of European river systems has evolved with limited empirical knowledge of truly natural systems. In particular, little is known of the mechanisms supporting the evolution and maintenance of islands and secondary channels. The dynamic, gravel-bed Fiume Tagliamento, Italy, provides an opportunity to acquire baseline data from a river where the level of direct engineering intervention along the main stem is remarkably small. Against a background of a strong alpine to mediterranean climatic and hydrological gradient, this paper explores relationships between topography, sediment and vegetation at eight sites along the active zone of the Tagliamento. A conceptual model of island development is proposed which integrates the interactions between large woody debris and vegetation, geomorphic features, sediment calibre and hydrological regime. Islands may develop on bare gravel sites or be dissected from the floodplain by channel avulsion. Depositional and erosional processes result in different island types and developmental stages. Differences in the apparent trajectories of island development are identified for each of the eight study sites along the river. The management implications of the model and associated observations of the role of riparian vegetation in island development are considered. In particular, the potential impacts of woody debris removal, riparian tree management, regulation of river flow and sediment regimes, and changes in riparian tree species' distribution are discussed.

Journal ArticleDOI
17 May 2001-Nature
TL;DR: In this article, phase equilibria were used to quantify the evolution of CO2 and H2O through the subduction-zone metamorphism of carbonate-bearing marine sediments, which is considered to be a major source for CO2 released by arc volcanoes.
Abstract: Volatiles, most notably CO2, are recycled back into the Earth's interior at subduction zones1,2. The amount of CO2 emitted from arc volcanism appears to be less than that subducted, which implies that a significant amount of CO2 either is released before reaching the depth at which arc magmas are generated or is subducted to deeper depths. Few high-pressure experimental studies3,4,5 have addressed this problem and therefore metamorphic decarbonation in subduction zones remains largely unquantified, despite its importance to arc magmatism, palaeoatmospheric CO2 concentrations and the global carbon cycle6. Here we present computed phase equilibria to quantify the evolution of CO2 and H2O through the subduction-zone metamorphism of carbonate-bearing marine sediments (which are considered to be a major source for CO2 released by arc volcanoes6). Our analysis indicates that siliceous limestones undergo negligible devolatilization under subduction-zone conditions. Along high-temperature geotherms clay-rich marls completely devolatilize before reaching the depths at which arc magmatism is generated, but along low-temperature geotherms, they undergo virtually no devolatilization. And from 80 to 180 km depth, little devolatilization occurs for all carbonate-bearing marine sediments. Infiltration of H2O-rich fluids therefore seems essential to promote subarc decarbonation of most marine sediments. In the absence of such infiltration, volatiles retained within marine sediments may explain the apparent discrepancy between subducted and volcanic volatile fluxes and represent a mechanism for return of carbon to the Earth's mantle.

Journal ArticleDOI
22 Nov 2001-Nature
TL;DR: RNA localization and reporter gene expression indicated expression of StPT3 in root sectors where mycorrhizal structures are formed, suggesting that the mutualistic symbiosis evolved by genetic rearrangements in the StPT3 promoter.
Abstract: Arbuscular mycorrhizas are the most common non-pathogenic symbioses in the roots of plants. It is generally assumed that this symbiosis facilitated the colonization of land by plants. In arbuscular mycorrhizas, fungal hyphae often extend between the root cells and tuft-like branched structures (arbuscules) form within the cell lumina that act as the functional interface for nutrient exchange. In the mutualistic arbuscular-mycorrhizal symbiosis the host plant derives mainly phosphorus from the fungus, which in turn benefits from plant-based glucose. The molecular basis of the establishment and functioning of the arbuscular-mycorrhizal symbiosis is largely not understood. Here we identify the phosphate transporter gene StPT3 in potato (Solanum tuberosum). Functionality of the encoded protein was confirmed by yeast complementation. RNA localization and reporter gene expression indicated expression of StPT3 in root sectors where mycorrhizal structures are formed. A sequence motif in the StPT3 promoter is similar to transposon-like elements, suggesting that the mutualistic symbiosis evolved by genetic rearrangements in the StPT3 promoter.

Journal ArticleDOI
Lars Ellgaard1, Ari Helenius1
TL;DR: Recent work provides the first structural insights into the process of glycoprotein folding in the ER involving the lectin chaperones calnexin and calreticulin.


Journal ArticleDOI
TL;DR: In this article, the authors investigated the role of intrinsic and extrinsic defects such as the alkali (or hydrogen)-compensated [AlO4/M+] center and the short-lived blue-green CL centered around 500 µm in the red spectral region.
Abstract: Investigations of natural and synthetic quartz specimens by cathodoluminescence (CL) microscopy and spectroscopy, electron paramagnetic resonance (EPR) and trace-element analysis showed that various luminescence colours and emission bands can be ascribed to different intrinsic and extrinsic defects. The perceived visible luminescence colours in quartz depend on the relative intensities of the dominant emission bands between 380 and 700 nm. Some of the CL emissions of quartz from the UV to the yellow spectral region (175 nm, 290 nm, 340 nm, 420 nm, 450 nm, 580 nm) can be related to intrinsic lattice defects. Extrinsic defects such as the alkali (or hydrogen)-compensated [AlO4/M+] centre have been suggested as being responsible for the transient emission band at 380–390 nm and the short-lived blue-green CL centered around 500 nm. CL emissions between 620 and 650 nm in the red spectral region are attributed to the nonbridging oxygen hole centre (NBOHC) with several precursors. The weak but highly variable CL colours and emission spectra of quartz can be related to the genetic conditions of quartz formation. Hence, both luminescence microscopy and spectroscopy can be widely used in various applications in the geosciences and in technology. One of the most important fields of application of quartz CL is the ability to reveal internal structures, growth zoning and lattice defects in quartz crystals that are not discernible by means of other analytical techniques. Other fields of investigation are the modal analysis of rocks, the provenance evaluation of clastic sediments, diagenetic studies, the reconstruction of alteration processes and fluid flow, the detection of radiation damage, and investigations of ultra-pure quartz and silica glass in technical applications.

Journal ArticleDOI
Eva M. Golet1, Alfredo C. Alder1, Andreas Hartmann1, Thomas Ternes1, Walter Giger1 
TL;DR: Results indicate that conventional environmental risk assessment overestimates FQ concentrations in surface waters by 1 to 2 orders of magnitude.
Abstract: Fluoroquinolones (FQs) are among the most important antibacterial agents (synthetic antibiotics) used in human and veterinary medicine. An analytical method based on reversed-phase liquid chromatography with fluorescence detection was developed and validated for the simultaneous determination of nine FQs and the quinolone pipemidic acid in urban wastewater. Aqueous samples were extracted using mixed-phase cation-exchange disk cartridges that were subsequently eluted by ammonia solution in methanol. Recoveries were above 80% at an overall precision of better than 10%. Instrumental quantification limits varied between 150 and 450 pg injected. The presented method was successfully applied to quantify FQs in effluents of urban wastewater treatment plants. The two most abundant human-use FQs, ciprofloxacin and norfloxacin, occurred in primary and tertiary wastewater effluents at concentrations between 249 and 405 ng/L and from 45 to 120 ng/L, respectively. The identity of FQs in urban wastewater was confirmed ...

Journal ArticleDOI
TL;DR: An overview of recent advances in the synthesis of nanoparticles by flame aerosol processes is given in this paper; flame spray pyrolysis in particular can employ a wide array of precursors, so a broad spectrum of new nanosized powders can be synthesized.
Abstract: An overview of recent advances in the synthesis of nanoparticles by flame aerosol processes is given. In flame processes with gaseous precursors emphasis is placed on reactant mixing and composition, additives, and external electric fields for control of product characteristics. Thermophoretic sampling can monitor the formation and growth of nanoparticles, while the corresponding temperature history can be obtained by non-intrusive Fourier transform infrared spectroscopy. Furthermore, synthesis of composite nanoparticles for various applications is addressed such as in reinforcement or catalysis as well as for scale-up from 1 to 700 g/h of silica-carbon nanostructured particles. In flame processes with liquid precursors using the so-called flame spray pyrolysis (FSP), emphasis is placed on reactant and fuel composition. The FSP processes are quite attractive as they can employ a wide array of precursors, so a broad spectrum of new nanosized powders can be synthesized. Computational fluid dynamics (CFD) in combination with gas-phase particle formation models offer unique possibilities for improvement and possible new designs for flame reactors.

Journal ArticleDOI
12 Oct 2001-Science
TL;DR: The abundances of the three oxygen isotopes 16O, 17O, and 18O (expressed as Δ17O) provide no evidence that isotopic heterogeneity on the Moon was created by lunar impacts, and they are consistent with the Giant Impact model.
Abstract: We have determined the abundances of 16O, 17O, and 18O in 31 lunar samples from Apollo missions 11, 12, 15, 16, and 17 using a high-precision laser fluorination technique. All oxygen isotope compositions plot within ±0.016 per mil (2 standard deviations) on a single mass-dependent fractionation line that is identical to the terrestrial fractionation line within uncertainties. This observation is consistent with the Giant Impact model, provided that the proto-Earth and the smaller impactor planet (named Theia) formed from an identical mix of components. The similarity between the proto-Earth and Theia is consistent with formation at about the same heliocentric distance. The three oxygen isotopes (Δ17O) provide no evidence that isotopic heterogeneity on the Moon was created by lunar impacts.
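For reference, the Δ17O notation used above is conventionally defined as the deviation from a mass-dependent fractionation line; the slope 0.52 below is the standard approximation, and the paper may use its own fitted value:

    \Delta^{17}\mathrm{O} \;=\; \delta^{17}\mathrm{O} \;-\; 0.52\,\delta^{18}\mathrm{O}

A lunar sample plotting at Δ17O = 0 (within the quoted ±0.016 per mil) is therefore indistinguishable from terrestrial material in three-isotope space.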

Journal ArticleDOI
TL;DR: The chemical and physical evolution of magmatic to hydrothermal processes in the porphyry Cu-Au deposit of Bajo de la Alumbrera (northwestern Argentina) has been reconstructed with a quantitative fluid inclusion study.
Abstract: The chemical and physical evolution of magmatic to hydrothermal processes in the porphyry Cu-Au deposit of Bajo de la Alumbrera (northwestern Argentina) has been reconstructed with a quantitative fluid inclusion study. Fluid inclusion petrography, microthermometry, and single inclusion microanalysis by Excimer laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) are combined to determine the evolution of pressure, temperature, and ore metal concentrations (including Cu and Au) in the fluids. Complementary hydrogen and oxygen isotope analyses are used to further constrain the water sources in the evolving system. The combined data provide a new level of insight into the mechanisms of metal sourcing and ore mineral precipitation in a porphyry-style magmatic-hydrothermal system. Based on previously reported observations of the igneous geology, alteration geochemistry, and veining history of the subvolcanic porphyries at Alumbrera, the distribution of fluid inclusion types in space and time is documented. Six major inclusion types are distinguished. The highest temperature brine inclusions (up to 750°C; P > 1 kbar) are mainly recorded in barren quartz ± magnetite veins in the core of the alteration system. These polyphase brine inclusions (halite ± sylvite + multiple opaque and transparent daughter crystals) are interpreted as the most primitive magmatic fluid recorded at the level of the deposit. They are of moderately high salinity (50–60 wt % NaCl equiv) dominated by NaCl, KCl, and FeCl2, and contain on average 0.33 wt percent Cu and 0.55 ppm Au. Upon cooling and decompression, these saline liquids exsolve a vapor phase, which is preferentially enriched in Cu relative to its main salt components but probably plays a minor role in the formation of this particular deposit because of the inferred small mass fraction of vapor. Cooling and decompression from the highest initial P-T conditions down to about 450°C causes magnetite ± K silicate alteration but no saturation in Au or Cu sulfides, as recorded by continually high ore metal concentrations in the fluid inclusions. Coprecipitation of Cu and Au as chalcopyrite and native gold (± some early bornite) occurs over a narrow range of decreasing fluid temperature. With cooling from ~400° to 305°C, the Cu concentration in the brine drops by about one order of magnitude to less than ~0.07 wt percent, without a proportional decrease in major salt components. Ore mineral precipitation extracts ~85 percent of the Cu and Au from the fluid. It is associated with potassic alteration, as shown by a concomitant decrease in the K/Na ratio of the cooling magmatic brine and by an increase in its Ba and Sr concentrations (elements which are probably liberated in the destruction of calcic igneous minerals). The fluid chemical data demonstrate that the metal ratios in this and probably many other porphyry-style ore deposits are primarily controlled by the magmatic source of the ore brines. On the other hand, the final hypogene ore grade of the deposit is controlled by the efficiency of ore mineral precipitation. At Alumbrera, metal extraction is governed by the efficiency of cooling a high flux of magmatic fluid within a small rock volume. Dilution of residual magmatic fluids, as recorded by aqueous fluid inclusions of decreasing salinities and temperatures below 295°C, follows after the main stage of copper introduction and is associated with feldspar-destructive (phyllic) alteration.
Geometric relationships, fluid analyses, and stable isotope data together indicate that phyllic alteration results from postmineralization hydrothermal activity involving minor mixing between meteoric water, residual brine, and a waning input of magmatic vapor.