
Showing papers by "Vienna University of Technology" published in 2010


Journal ArticleDOI
11 Feb 2010-Nature
TL;DR: A new process for creating plausible scenarios to investigate some of the most challenging and important questions about climate change confronting the global community is described.
Abstract: Advances in the science and observation of climate change are providing a clearer understanding of the inherent variability of Earth's climate system and its likely response to human and natural influences. The implications of climate change for the environment and society will depend not only on the response of the Earth system to changes in radiative forcings, but also on how humankind responds through changes in technology, economies, lifestyle and policy. Extensive uncertainties exist in future forcings of and responses to climate change, necessitating the use of scenarios of the future to explore the potential consequences of different response options. To date, such scenarios have not adequately examined crucial possibilities, such as climate change mitigation and adaptation, and have relied on research processes that slowed the exchange of information among physical, biological and social scientists. Here we describe a new process for creating plausible scenarios to investigate some of the most challenging and important questions about climate change confronting the global community.

5,670 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss the integration of renewable energy sources into the smart power grid through industrial electronics, covering photovoltaic power, wind energy conversion, hybrid energy systems, and tidal energy conversion.
Abstract: This paper discusses the integration of renewable energy sources into the smart power grid through industrial electronics, covering photovoltaic power, wind energy conversion, hybrid energy systems, and tidal energy conversion.

933 citations


Journal ArticleDOI
25 Jun 2010-Science
TL;DR: Ultrafast metrology reveals a 21-attosecond delay between photoemission from different electronic orbitals in neon atoms; theoretical models refined with the help of attosecond timing metrology may provide insight into electron correlations and allow the setting of the zero of time in atomic-scale chronoscopy with a precision of a few attoseconds.
Abstract: Photoemission from atoms is assumed to occur instantly in response to incident radiation and provides the basis for setting the zero of time in clocking atomic-scale electron motion. We used attosecond metrology to reveal a delay of 21 +/- 5 attoseconds in the emission of electrons liberated from the 2p orbitals of neon atoms with respect to those released from the 2s orbital by the same 100-electron volt light pulse. Small differences in the timing of photoemission from different quantum states provide a probe for modeling many-electron dynamics. Theoretical models refined with the help of attosecond timing metrology may provide insight into electron correlations and allow the setting of the zero of time in atomic-scale chronoscopy with a precision of a few attoseconds.

856 citations


Proceedings ArticleDOI
25 Oct 2010
TL;DR: This work investigates and develops methods to extract and combine low-level features that represent the emotional content of an image, and uses these for image emotion classification.
Abstract: Images can affect people on an emotional level. Since the emotions that arise in the viewer of an image are highly subjective, they are rarely indexed. However, there are situations when it would be helpful if images could be retrieved based on their emotional content. We investigate and develop methods to extract and combine low-level features that represent the emotional content of an image, and use these for image emotion classification. Specifically, we exploit theoretical and empirical concepts from psychology and art theory to extract image features that are specific to the domain of artworks with emotional expression. For testing and training, we use three data sets: the International Affective Picture System (IAPS); a set of artistic photography from a photo sharing site (to investigate whether the conscious use of colors and textures displayed by the artists improves the classification); and a set of peer-rated abstract paintings to investigate the influence of the features and ratings on pictures without contextual content. Improved classification results are obtained on the International Affective Picture System (IAPS), compared to state-of-the-art work.

734 citations


Journal ArticleDOI
16 Sep 2010-Nature
TL;DR: This technique is a reproducible method of creating vortex electron beams in a conventional electron microscope, and it is demonstrated how they may be used in electron energy-loss spectroscopy to detect the magnetic state of materials and describe their properties.
Abstract: It has been possible to produce photon vortex beams — optical beams with spiralling wavefronts — for some time, and they have found widespread application as optical tweezers, in interferometry and in information transfer, for example. The production of vortex beams of electrons was demonstrated earlier this year ( http://go.nature.com/4H2xWR ) in a procedure involving the passage of electrons through a spiral stack of graphite thin films. The ability to generate such beams reproducibly in a conventional electron microscope would enable many new applications. Now Jo Verbeeck and colleagues have taken a step towards that goal. They describe a versatile holographic technique for generating these twisted electron beams, and demonstrate their potential use as probes of a material's magnetic properties. Vortex beams (also known as beams with a phase singularity) consist of spiralling wavefronts that give rise to angular momentum around the propagation direction. Vortex photon beams are widely used in applications such as optical tweezers to manipulate micrometre-sized particles and in micro-motors to provide angular momentum [1,2], improving channel capacity in optical [3] and radio-wave [4] information transfer, astrophysics [5] and so on [6]. Very recently, an experimental realization of vortex beams formed of electrons was demonstrated [7]. Here we describe the creation of vortex electron beams, making use of a versatile holographic reconstruction technique in a transmission electron microscope.
This technique is a reproducible method of creating vortex electron beams in a conventional electron microscope. We demonstrate how they may be used in electron energy-loss spectroscopy to detect the magnetic state of materials and describe their properties. Our results show that electron vortex beams hold promise for new applications, in particular for analysing and manipulating nanomaterials, and can be easily produced.

710 citations


Journal ArticleDOI
K. Aamodt, Betty Abelev, A. Abrahantes Quintana, Dagmar Adamová, +1011 more (81 institutions)
TL;DR: In this paper, the first measurement of charged particle elliptic flow in Pb-Pb collisions at √s_NN = 2.76 TeV with the ALICE detector at the CERN Large Hadron Collider was performed in the central pseudorapidity region.
Abstract: We report the first measurement of charged particle elliptic flow in Pb-Pb collisions at √s_NN = 2.76 TeV with the ALICE detector at the CERN Large Hadron Collider. The measurement is performed in the central pseudorapidity region (|η| < 0.8) and transverse momentum range 0.2 < p_t < 5.0 GeV/c. The elliptic flow signal v_2, measured using the 4-particle correlation method, averaged over transverse momentum and pseudorapidity is 0.087 ± 0.002 (stat) ± 0.003 (syst) in the 40%-50% centrality class. The differential elliptic flow v_2(p_t) reaches a maximum of 0.2 near p_t = 3 GeV/c. Compared to RHIC Au-Au collisions at √s_NN = 200 GeV, the elliptic flow increases by about 30%. Some hydrodynamic model predictions which include viscous corrections are in agreement with the observed increase.

652 citations
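The cumulant method used in this measurement can be illustrated with a toy Monte Carlo. The sketch below is a simplification (it uses the 2-particle cumulant rather than the paper's 4-particle one, and all event counts, multiplicities and the flat-acceptance assumption are invented for illustration): particles are sampled from dN/dφ ∝ 1 + 2·v2·cos 2(φ − Ψ) with an input v2 = 0.087, and the flow coefficient is recovered from the second-harmonic Q-vector.

```python
import numpy as np

rng = np.random.default_rng(0)
v2_true = 0.087            # input flow coefficient (value from the paper)
n_events, mult = 200, 400  # illustrative event count and multiplicity

def sample_phis(m, v2, psi):
    """Accept-reject sampling from dN/dphi ~ 1 + 2*v2*cos(2*(phi - psi))."""
    out = np.empty(0)
    while out.size < m:
        phi = rng.uniform(0, 2 * np.pi, 2 * m)
        u = rng.uniform(0, 1, 2 * m)
        keep = u < (1 + 2 * v2 * np.cos(2 * (phi - psi))) / (1 + 2 * v2)
        out = np.concatenate([out, phi[keep]])
    return out[:m]

avg2 = []
for _ in range(n_events):
    psi = rng.uniform(0, 2 * np.pi)       # random event-plane angle
    phi = sample_phis(mult, v2_true, psi)
    Q = np.exp(2j * phi).sum()            # second-harmonic Q-vector
    # single-event 2-particle correlation <<2>> = (|Q|^2 - M) / (M(M-1))
    avg2.append((abs(Q) ** 2 - mult) / (mult * (mult - 1)))

v2_est = np.sqrt(np.mean(avg2))           # v2{2} = sqrt(<<2>>)
print(round(v2_est, 3))
```

The estimate lands close to the input 0.087; the 4-particle cumulant of the paper additionally suppresses nonflow correlations, which this toy omits.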


Journal ArticleDOI
TL;DR: This review aims to give a broad overview of the qualities and versatility of the best-studied Trichoderma species and to highlight intriguing findings as well as promising applications.
Abstract: Fungi of the genus Trichoderma are soilborne, green-spored ascomycetes that can be found all over the world. They have been studied with respect to various characteristics and applications and are known as successful colonizers of their habitats, efficiently fighting their competitors. Once established, they launch their potent degradative machinery for decomposition of the often heterogeneous substrate at hand. Therefore, distribution and phylogeny, defense mechanisms, beneficial as well as deleterious interaction with hosts, enzyme production and secretion, sexual development, and response to environmental conditions such as nutrients and light have been studied in great detail with many species of this genus, thus rendering Trichoderma one of the best-studied fungi, with the genomes of three species currently available. Efficient biocontrol strains of the genus are being developed as promising biological fungicides, and their weaponry for this function also includes secondary metabolites with potential applications as novel antibiotics. The cellulases produced by Trichoderma reesei, the biotechnological workhorse of the genus, are important industrial products, especially with respect to production of second-generation biofuels from cellulosic waste. Genetic engineering not only led to significant improvements in industrial processes but also to intriguing insights into the biology of these fungi and is now complemented by the availability of a sexual cycle in T. reesei/Hypocrea jecorina, which significantly facilitates both industrial and basic research. This review aims to give a broad overview of the qualities and versatility of the best-studied Trichoderma species and to highlight intriguing findings as well as promising applications.

636 citations


Proceedings ArticleDOI
16 May 2010
TL;DR: A computationally efficient MATLAB LTE system level simulator capable of evaluating the performance of the Downlink Shared Channel of LTE SISO and MIMO networks using Open Loop Spatial Multiplexing and Transmission Diversity transmit modes is presented.
Abstract: In order to evaluate the performance of new mobile network technologies, system level simulations are crucial. They aim at determining whether, and at which level, predicted link level gains impact network performance. In this paper we present a computationally efficient MATLAB LTE system level simulator. The simulator is offered for free under an academic, noncommercial use license, a first to the authors' knowledge. The simulator is capable of evaluating the performance of the Downlink Shared Channel of LTE SISO and MIMO networks using Open Loop Spatial Multiplexing and Transmission Diversity transmit modes. The physical layer model is based on the post-equalization SINR and provides the simulation with pre-calculated "fading parameters" representing each of the individual interference terms. This structure allows the fading parameters to be pregenerated offline, vastly reducing computational complexity at run-time.

578 citations
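The pregenerated-fading idea can be sketched in a few lines. The snippet below is an assumption-laden illustration, not the simulator's MATLAB code: per-link |h|² traces (the "fading parameters") are drawn once offline, and at run time the post-equalization SINR and a Shannon-bound throughput proxy reduce to cheap array lookups. All powers and trace lengths are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
TTIS, N_INTERF = 1000, 3
P_SIG, P_INT, NOISE = 1.0, 0.5, 0.05   # illustrative linear-scale powers

# offline step: pre-generate Rayleigh-fading power traces |h|^2
sig_fade = rng.exponential(1.0, TTIS)               # serving link
int_fade = rng.exponential(1.0, (N_INTERF, TTIS))   # interfering links

# run time: SINR per TTI is a lookup plus elementary arithmetic
sinr = P_SIG * sig_fade / (P_INT * int_fade.sum(axis=0) + NOISE)
thr = np.log2(1 + sinr)   # Shannon bound as a throughput proxy (bit/s/Hz)
print(round(thr.mean(), 2))
```

The real simulator maps SINR to block error rate and throughput via link-level lookup tables rather than the Shannon bound used here.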


Journal ArticleDOI
Andrew Gould, Subo Dong, B. S. Gaudi, Andrzej Udalski, +146 more (43 institutions)
TL;DR: In this paper, the authors presented the first measurement of the planet frequency beyond the "snow line," for the planet-to-star mass-ratio interval -4.5 < log q < -2, from high-magnification (A > 200) microlensing events observed during 2005-2008 through the survey-plus-follow-up channel.
Abstract: We present the first measurement of the planet frequency beyond the "snow line," for the planet-to-star mass-ratio interval -4.5 < log q < -2, from high-magnification (A > 200) microlensing events during 2005-2008. The sampled host stars have a typical mass M_(host) ~ 0.5 M_⊙, and detection is sensitive to planets over a range of planet-star-projected separations (s_(max)^(-1) R_E, s_(max) R_E), where R_E ~ 3.5 AU (M_(host)/M_⊙)^(1/2) is the Einstein radius and s_(max) ~ (q/10^(-4.3))^(1/3). This corresponds to deprojected separations roughly three times the "snow line." We show that the observations of these events have the properties of a "controlled experiment," which is what permits measurement of absolute planet frequency. High-magnification events are rare, but the survey-plus-follow-up high-magnification channel is very efficient: half of all high-mag events were successfully monitored and half of these yielded planet detections. The extremely high sensitivity of high-mag events leads to a policy of monitoring them as intensively as possible, independent of whether they show evidence of planets. This is what allows us to construct an unbiased sample. The planet frequency derived from microlensing is a factor 8 larger than the one derived from Doppler studies at factor ~25 smaller star-planet separations (i.e., periods 2-2000 days). However, this difference is basically consistent with the gradient derived from Doppler studies (when extrapolated well beyond the separations from which it is measured). This suggests a universal separation distribution across 2 dex in planet-star separation, 2 dex in mass ratio, and 0.3 dex in host mass. Finally, if all planetary systems were "analogs" of the solar system, our sample would have yielded 18.2 planets (11.4 "Jupiters," 6.4 "Saturns," 0.3 "Uranuses," 0.2 "Neptunes") including 6.1 systems with two or more planet detections.
This compares to six planets including one two-planet system in the actual sample, implying a first estimate of 1/6 for the frequency of solar-like systems.

381 citations


Journal ArticleDOI
TL;DR: Analysis, based on a microscopic quantum theory, shows that the nonlinear polariton splitting, a signature of this regime, is a dynamical effect arising from the self-interaction of the collective electronic polarization with its own emitted field.
Abstract: The regime of ultrastrong light-matter interaction has been investigated theoretically and experimentally, using zero-dimensional electromagnetic resonators coupled with an electronic transition between two confined states of a semiconductor quantum well. We have measured a splitting between the coupled modes that amounts to 48% of the energy transition, the highest ratio ever observed in a light-matter coupled system. Our analysis, based on a microscopic quantum theory, shows that the nonlinear polariton splitting, a signature of this regime, is a dynamical effect arising from the self-interaction of the collective electronic polarization with its own emitted field.

376 citations
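The headline number, a splitting of 48% of the transition energy, can be reproduced with a toy two-mode model (not the paper's microscopic quantum theory): on resonance, a 2x2 light-matter Hamiltonian with coupling g splits into modes separated by 2g, so g = 0.24·E12 gives the reported ratio. Energies are in arbitrary units.

```python
import numpy as np

E12 = 1.0                  # intersubband transition energy (arb. units)
g = 0.24 * E12             # coupling chosen so that the splitting is 48% of E12
H = np.array([[E12, g],
              [g, E12]])   # resonant two-mode light-matter Hamiltonian
w_minus, w_plus = np.linalg.eigvalsh(H)   # polariton branches E12 -/+ g
ratio = (w_plus - w_minus) / E12
print(round(ratio, 2))     # → 0.48, the ratio reported in the paper
```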


Journal ArticleDOI
TL;DR: In this paper, a triple collocation error estimation technique for assessing the relative quality of several globally available soil moisture products from active (ASCAT) and passive (AMSR-E and SSM/I) microwave sensors is proposed.
Abstract: Understanding the error structures of remotely sensed soil moisture observations is essential for correctly interpreting observed variations and trends in the data or assimilating them in hydrological or numerical weather prediction models. Nevertheless, a spatially coherent assessment of the quality of the various globally available datasets is often hampered by the limited availability over space and time of reliable in-situ measurements. As an alternative, this study explores the triple collocation error estimation technique for assessing the relative quality of several globally available soil moisture products from active (ASCAT) and passive (AMSR-E and SSM/I) microwave sensors. Triple collocation is a powerful statistical tool to estimate the root mean square error while simultaneously solving for systematic differences in the climatologies of a set of three linearly related data sources with independent error structures. A prerequisite for this technique is the availability of a sufficiently large number of timely corresponding observations. In addition to the active and passive satellite-based datasets, we used the ERA-Interim and GLDAS-NOAH reanalysis soil moisture datasets as a third, independent reference. The prime objective is to reveal trends in uncertainty related to different observation principles (passive versus active), the use of different frequencies (C-, X-, and Ku-band) for passive microwave observations, and the choice of the independent reference dataset (ERA-Interim versus GLDAS-NOAH). The results suggest that the triple collocation method provides realistic error estimates. Observed spatial trends agree well with the existing theory and studies on the performance of different observation principles and frequencies with respect to land cover and vegetation density. In addition, if all theoretical prerequisites are fulfilled (e.g.
a sufficiently large number of common observations is available and errors of the different datasets are uncorrelated), the errors estimated for the remote sensing products are hardly influenced by the choice of the third independent dataset. The results obtained in this study can help us develop adequate strategies for the combined use of various scatterometer- and radiometer-based soil moisture datasets, e.g. for improved flood forecast modelling or the generation of superior multi-mission long-term soil moisture datasets.
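The triple collocation estimator itself fits in a few lines. The sketch below runs it on synthetic data (the "truth", the three products, their scalings and error levels are all invented): with three linearly related datasets with independent errors, each dataset's error variance follows from the pairwise covariances.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
theta = rng.normal(0, 1, n)                     # unknown "true" soil moisture

# three linearly related products with independent zero-mean errors
x = theta + rng.normal(0, 0.10, n)              # reference-like product
y = 0.5 + 0.8 * theta + rng.normal(0, 0.20, n)  # passive-sensor-like product
z = 2.0 * theta + rng.normal(0, 0.30, n)        # active-sensor-like product

C = np.cov(np.vstack([x, y, z]))
Cxx, Cyy, Czz = C[0, 0], C[1, 1], C[2, 2]
Cxy, Cxz, Cyz = C[0, 1], C[0, 2], C[1, 2]

# triple collocation error variances (each in its own dataset's units)
var_ex = Cxx - Cxy * Cxz / Cyz
var_ey = Cyy - Cxy * Cyz / Cxz
var_ez = Czz - Cxz * Cyz / Cxy
print(np.sqrt([var_ex, var_ey, var_ez]).round(2))   # ≈ [0.10, 0.20, 0.30]
```

The recovered error standard deviations match the ones injected into the simulation, which is exactly the self-consistency the study verifies on real products.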

Journal ArticleDOI
TL;DR: An implementation of an interface between the full-potential linearized augmented plane wave package Wien2k and the wannier90 code for the construction of maximally localized Wannier functions is presented.

Journal ArticleDOI
TL;DR: In this paper, the authors compared the soil wetness index (SWI) derived from the Advanced SCATterometer (ASCAT) sensor on board the Metop satellite with the soil moisture temporal pattern derived from a continuous rainfall-runoff model (MISDc) to assess its relationship with modeled data.
Abstract: The role and the importance of soil moisture for meteorological, agricultural and hydrological applications is widely known. Remote sensing offers the unique capability to monitor soil moisture over large areas (catchment scale) with, nowadays, a temporal resolution suitable for hydrological purposes. However, the accuracy of the remotely sensed soil moisture estimates has to be carefully checked. The validation of these estimates with in-situ measurements is not straightforward due to the well-known problems related to the spatial mismatch and the measurement accuracy. The analysis of the effects deriving from assimilating remotely sensed soil moisture data into hydrological or meteorological models could represent a more valuable method to test their reliability. In particular, the assimilation of satellite-derived soil moisture estimates into rainfall-runoff models at different scales and over different regions represents an important scientific and operational issue. In this study, the soil wetness index (SWI) product derived from the Advanced SCATterometer (ASCAT) sensor on board the Metop satellite was tested. The SWI was first compared with the soil moisture temporal pattern derived from a continuous rainfall-runoff model (MISDc) to assess its relationship with modeled data. Then, by using a simple data assimilation technique, the linearly rescaled SWI that matches the range of variability of modelled data (denoted as SWI*) was assimilated into MISDc and the model performance on flood estimation was analyzed. Moreover, three synthetic experiments considering errors on rainfall, model parameters and initial soil wetness conditions were carried out. These experiments allowed us to further investigate the SWI potential when uncertain conditions take place.
The most significant flood events, which occurred in the period 2000–2009 on five subcatchments of the Upper Tiber River in central Italy, ranging in extension between 100 and 650 km², were used as case studies. Results reveal that the SWI derived from the ASCAT sensor can be conveniently adopted to improve runoff prediction in the study area, especially when the initial soil wetness conditions are unknown.
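The linear rescaling to SWI* and the simple assimilation step can be sketched on synthetic series (all series, amplitudes and the nudging gain K below are invented for illustration, and the real study assimilates into the MISDc model rather than a nudging update): the satellite SWI (in %) is mapped onto the model's climatology by matching mean and variance, then blended with the model state.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(365)
# synthetic model soil moisture [m3/m3] and satellite SWI [%] time series
model_sm = 0.3 + 0.1 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.01, t.size)
swi = 40 + 25 * np.sin(2 * np.pi * t / 365 + 0.2) + rng.normal(0, 3, t.size)

# linear rescaling: match mean and variance of the model climatology (SWI*)
swi_star = model_sm.mean() + (swi - swi.mean()) * model_sm.std() / swi.std()

# simple nudging assimilation with an assumed gain K
K = 0.2
analysis = model_sm + K * (swi_star - model_sm)
print(round(swi_star.mean(), 3), round(swi_star.std(), 3))
```

By construction SWI* has the model's mean and standard deviation, so the update corrects the timing/shape of the wetness signal rather than its climatology.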

Journal ArticleDOI
29 Jul 2010-Nature
TL;DR: The ability to probe structural and electronic features, combined with high time resolution, makes high-harmonic spectroscopy ideally suited to measuring coupled electronic and nuclear dynamics occurring in photochemical reactions and to characterizing the electronic structure of transition states.
Abstract: New methods are emerging that aim to image chemical reactions as they occur, using X-ray diffraction, electron diffraction or laser-induced recollision. But none of these methods offer spectral selection, which allows a laser pulse with light of one wavelength to initiate a reaction, and a second pulse with another, appropriately selected wavelength to monitor the reacting molecules. Wörner et al. now show that this apparent limitation offers exciting opportunities for recollision-based high-harmonic spectroscopy: due to the coherent nature of the attosecond high-harmonic pulse generation, unexcited molecules can act as local oscillators against which structural and electronic dynamics is observed on an attosecond timescale. High-harmonic spectroscopy thus seems ideally suited to measure coupled electronic and nuclear dynamics in fast photochemical reactions, or to characterize short-lived transition states. The study of chemical reactions on the molecular (femtosecond) timescale typically uses pump laser pulses to excite molecules and subsequent probe pulses to interrogate them. The ultrashort pump pulse can excite only a small fraction of molecules, and the probe wavelength must be carefully chosen to discriminate between excited and unexcited molecules.
The past decade has seen the emergence of new methods that are also aimed at imaging chemical reactions as they occur, based on X-ray diffraction [1], electron diffraction [2] or laser-induced recollision [3,4] — with spectral selection not available for any of these new methods. Here we show that in the case of high-harmonic spectroscopy based on recollision, this apparent limitation becomes a major advantage owing to the coherent nature of the attosecond high-harmonic pulse generation. The coherence allows the unexcited molecules to act as local oscillators against which the dynamics are observed, so a transient grating technique [5,6] can be used to reconstruct the amplitude and phase of emission from the excited molecules. We then extract structural information from the amplitude, which encodes the internuclear separation, by quantum interference at short times and by scattering of the recollision electron at longer times. The phase records the attosecond dynamics of the electrons, giving access to the evolving ionization potentials and the electronic structure of the transient molecule. In our experiment, we are able to document a temporal shift of the high-harmonic field of less than an attosecond (1 as = 10^(-18) s) between the stretched and compressed geometry of weakly vibrationally excited Br2 in the electronic ground state. The ability to probe structural and electronic features, combined with high time resolution, makes high-harmonic spectroscopy ideally suited to measuring coupled electronic and nuclear dynamics occurring in photochemical reactions and to characterizing the electronic structure of transition states.

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed a large, consistent and reliable dataset of floods in Africa and found that intensive and unplanned human settlements in flood-prone areas appeared to be playing a major role in increasing flood risk.
Abstract: Flood-related fatalities in Africa, as well as associated economic losses, have increased dramatically over the past half-century. There is a growing global concern about the need to identify the causes of such increased flood damages. To this end, we analyze a large, consistent and reliable dataset of floods in Africa. Identification of causes is not easy given the diverse economic settings, demographic distribution and hydro-climatic conditions of the African continent. On the other hand, many African river basins have a relatively low level of human disturbance and, therefore, provide a unique opportunity to analyze climatic effects on floods. We find that intensive and unplanned human settlements in flood-prone areas appear to be playing a major role in increasing flood risk. Timely and economically sustainable actions, such as the discouragement of human settlements in flood-prone areas and the introduction of early warning systems, are therefore urgently needed.

Journal ArticleDOI
TL;DR: The framework resides in the packages robustbase and rrcov and includes an almost complete set of algorithms for computing robust multivariate location and scatter, various robust methods for principal component analysis as well as robust linear and quadratic discriminant analysis.
Abstract: Taking advantage of the S4 class system of the programming environment R, which facilitates the creation and maintenance of reusable and modular components, an object-oriented framework for robust multivariate analysis was developed. The framework resides in the packages robustbase and rrcov and includes an almost complete set of algorithms for computing robust multivariate location and scatter, various robust methods for principal component analysis as well as robust linear and quadratic discriminant analysis. The design of these methods follows common patterns which we call statistical design patterns in analogy to the design patterns widely used in software engineering. The application of the framework to data analysis as well as possible extensions by the development of new methods is demonstrated on examples which themselves are part of the package rrcov.
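The framework described is implemented in R (robustbase/rrcov); the toy below only illustrates, in Python, why robust location and scatter estimators matter at all (the contamination fraction, distributions and the simple median/MAD estimators are chosen for illustration and are much cruder than the MCD-type estimators in the package): a few percent of outliers drags the classical mean away, while the coordinatewise median and MAD barely move.

```python
import numpy as np

rng = np.random.default_rng(4)
clean = rng.normal(0, 1, (950, 2))
outliers = rng.normal(10, 0.5, (50, 2))        # 5% contamination at (10, 10)
X = np.vstack([clean, outliers])

mean_loc = X.mean(axis=0)                      # classical: dragged by outliers
median_loc = np.median(X, axis=0)              # robust: barely affected
# MAD scale, scaled to be consistent with the normal standard deviation
mad_scale = 1.4826 * np.median(np.abs(X - median_loc), axis=0)
print(mean_loc.round(2), median_loc.round(2), mad_scale.round(2))
```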

Proceedings ArticleDOI
02 May 2010
TL;DR: In this article, the similarity between Random Telegraph Noise and Negative Bias Temperature Instability (NBTI) relaxation is further demonstrated by the observation of exponentially-distributed threshold voltage shifts corresponding to single-carrier discharges in NBTI transients in deeply scaled pFETs.
Abstract: The similarity between Random Telegraph Noise and Negative Bias Temperature Instability (NBTI) relaxation is further demonstrated by the observation of exponentially-distributed threshold voltage shifts corresponding to single-carrier discharges in NBTI transients in deeply scaled pFETs. A SPICE-based simplified channel percolation model is devised to confirm this behavior. The overall device-to-device ΔV_th distribution following NBTI stress is argued to be a convolution of exponential distributions of uncorrelated individual charged defects Poisson-distributed in number. An analytical description of the total NBTI threshold voltage shift distribution is derived, allowing, among other things, its first two moments to be linked to the average number of defects per device.
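The statistical picture argued for here is easy to check by Monte Carlo. The sketch below (parameter values invented for illustration) draws, for each device, a Poisson number N of defects with exponentially distributed per-defect step heights; the resulting compound-Poisson ΔVth has mean λη and variance 2λη², the moment relation that links the distribution to the average number of defects per device.

```python
import numpy as np

rng = np.random.default_rng(5)
lam, eta = 10.0, 1.0       # mean defects per device, mean step height (mV)
n_dev = 50_000

# per-device threshold-voltage shift: sum of N ~ Poisson(lam) exponential steps
n_defects = rng.poisson(lam, n_dev)
dvth = np.array([rng.exponential(eta, n).sum() for n in n_defects])

dv_mean, dv_var = dvth.mean(), dvth.var()
print(round(dv_mean, 1), round(dv_var, 1))  # theory: lam*eta = 10, 2*lam*eta^2 = 20
```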

Journal ArticleDOI
TL;DR: In this article, the authors assess the modified Becke-Johnson potential combined with the local-density approximation (MBJLDA), proposed by Tran and Blaha, for the description of the fundamental band gaps of III-V semiconductors.
Abstract: The band structures and effective masses of III-V semiconductors (InP, InAs, InSb, GaAs, and GaSb) are calculated using the GW method, the Heyd, Scuseria, and Ernzerhof hybrid functional, and the modified Becke-Johnson potential combined with the local-density approximation (MBJLDA), a local potential optimized for the description of the fundamental band gaps [F. Tran and P. Blaha, Phys. Rev. Lett. 102, 226401 (2009)]. We find that MBJLDA yields an excellent description of the band gaps at high-symmetry points, on par with the hybrid functional and GW. However, the effective masses are generally overestimated by 20-30% using the MBJLDA local multiplicative potential. We believe this to be related to incorrect nearest-neighbor hopping elements, which are little affected by the choice of the local potential. Despite these shortcomings, the MBJLDA method might be a suitable approach for predicting or interpolating the full band dispersion, if only limited experimental data are available. Furthermore, the method is applicable to systems containing several thousand atoms where accurate quasiparticle methods are not applicable.
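The effective masses discussed here come from the band curvature, m* = ħ² / (d²E/dk²). A minimal sketch (toy parabolic band with a GaAs-like mass of 0.067 m0 assumed; real calculations fit the computed band dispersion instead) extracts the mass by a finite-difference second derivative, which is how an overly flat band immediately translates into an overestimated mass.

```python
import numpy as np

HBAR = 1.054571817e-34       # reduced Planck constant, J*s
M0 = 9.1093837015e-31        # electron rest mass, kg
m_star_true = 0.067 * M0     # GaAs-like conduction-band mass (assumed)

def E(k):                    # toy parabolic band: E(k) = hbar^2 k^2 / (2 m*)
    return HBAR ** 2 * k ** 2 / (2 * m_star_true)

k0, dk = 0.0, 1e8            # expansion point and finite-difference step (1/m)
curvature = (E(k0 + dk) - 2 * E(k0) + E(k0 - dk)) / dk ** 2
m_star = HBAR ** 2 / curvature
print(round(m_star / M0, 3))   # → 0.067
```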

Journal ArticleDOI
TL;DR: This work proposes a sparsity-enhancing basis expansion and a method for optimizing the basis with or without prior statistical information about the channel, and presents an alternative CS-based channel estimator, which is capable of estimating the off-diagonal channel coefficients characterizing intersymbol and intercarrier interference (ISI/ICI).
Abstract: We consider the application of compressed sensing (CS) to the estimation of doubly selective channels within pulse-shaping multicarrier systems (which include orthogonal frequency-division multiplexing (OFDM) systems as a special case). By exploiting sparsity in the delay-Doppler domain, CS-based channel estimation allows for an increase in spectral efficiency through a reduction of the number of pilot symbols. For combating leakage effects that limit the delay-Doppler sparsity, we propose a sparsity-enhancing basis expansion and a method for optimizing the basis with or without prior statistical information about the channel. We also present an alternative CS-based channel estimator for (potentially) strongly time-frequency dispersive channels, which is capable of estimating the "off-diagonal" channel coefficients characterizing intersymbol and intercarrier interference (ISI/ICI). For this estimator, we propose a basis construction combining Fourier (exponential) and prolate spheroidal sequences. Simulation results assess the performance gains achieved by the proposed sparsity-enhancing processing techniques and by explicit estimation of ISI/ICI channel coefficients.
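The core CS idea, few pilots recovering a sparse channel, can be shown in a stripped-down toy (this is not the paper's estimator: it uses a plain delay-sparse channel and orthogonal matching pursuit as a stand-in recovery algorithm, with all dimensions invented): pilot observations of the channel frequency response are rows of a DFT matrix, and a greedy solver recovers the few nonzero delay taps from far fewer pilots than taps.

```python
import numpy as np

rng = np.random.default_rng(6)
N, L, K, P = 256, 64, 3, 48      # subcarriers, delay taps, sparsity, pilots

# sparse delay-domain channel: K nonzero taps out of L
h = np.zeros(L, complex)
supp = rng.choice(L, K, replace=False)
h[supp] = [1.0, -0.8 + 0.6j, 0.9j]

# pilot observations: the frequency response sampled at P random subcarriers,
# i.e. P rows of an N x L DFT matrix, plus a little noise
n_idx = rng.choice(N, P, replace=False)
A = np.exp(-2j * np.pi * np.outer(n_idx, np.arange(L)) / N) / np.sqrt(N)
y = A @ h + 1e-3 * (rng.normal(size=P) + 1j * rng.normal(size=P))

# orthogonal matching pursuit: greedily pick K atoms, refit by least squares
S, r = [], y.copy()
for _ in range(K):
    S.append(int(np.argmax(np.abs(A.conj().T @ r))))
    h_S, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    r = y - A[:, S] @ h_S

h_hat = np.zeros(L, complex)
h_hat[S] = h_S
nmse = np.linalg.norm(h_hat - h) ** 2 / np.linalg.norm(h) ** 2
print(nmse)
```

The reconstruction error is tiny despite observing only 48 of 256 subcarriers; the paper's sparsity-enhancing bases address the leakage that makes real channels less cleanly sparse than this toy.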

Journal ArticleDOI
K. Aamodt, N. Abel, U. Abeysekara, A. Abrahantes Quintana, +1106 more (80 institutions)
TL;DR: In this paper, the alignment of the Inner Tracking System of the ALICE experiment (ALICE ITS) with the Millepede global approach is studied, and results are presented for the ITS alignment using about 10^5 charged tracks from cosmic rays collected during summer 2008.
Abstract: ALICE (A Large Ion Collider Experiment) is the LHC (Large Hadron Collider) experiment devoted to investigating the strongly interacting matter created in nucleus-nucleus collisions at the LHC energies. The ALICE ITS, Inner Tracking System, consists of six cylindrical layers of silicon detectors with three different technologies; in the outward direction: two layers of pixel detectors, then two layers each of drift and strip detectors. The number of parameters to be determined in the spatial alignment of the 2198 sensor modules of the ITS is about 13,000. The target alignment precision is well below 10 μm in some cases (pixels). The sources of alignment information include survey measurements, and the reconstructed tracks from cosmic rays and from proton-proton collisions. The main track-based alignment method uses the Millepede global approach. An iterative local method was developed and used as well. We present the results obtained for the ITS alignment using about 10^5 charged tracks from cosmic rays collected during summer 2008, with the ALICE solenoidal magnet switched off.

Proceedings ArticleDOI
02 May 2010
TL;DR: In this article, the authors introduced a new method to analyze the statistical properties of the defects responsible for the ubiquitous recovery behavior following negative bias temperature stress, which they termed time dependent defect spectroscopy (TDDS).
Abstract: We introduce a new method to analyze the statistical properties of the defects responsible for the ubiquitous recovery behavior following negative bias temperature stress, which we term time dependent defect spectroscopy (TDDS). The TDDS relies on small-area metal-oxide-semiconductor field effect transistors (MOSFETs) where recovery proceeds in discrete steps. In contrast to techniques for the analysis of random telegraph noise (RTN), which only allow monitoring of the defect behavior in a rather narrow window, the TDDS can be used to study the capture and emission times of the defects over an extremely wide range. We demonstrate that the recoverable component of NBTI is due to thermally activated hole capture and emission in individual defects with a very wide distribution of time constants, consistent with nonradiative multiphonon theory previously applied to the analysis of RTN. The defects responsible for this process show a number of peculiar features similar to anomalous RTN previously observed in nMOS transistors. A quantitative model is suggested which can explain the bias as well as the temperature dependence of the characteristic time constants. Furthermore, it is shown how the new model naturally explains the various abnormalities observed.
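For a single defect, the capture/emission events studied by TDDS are commonly modeled with exponentially distributed waiting times whose constants are thermally activated (Arrhenius-like). The sketch below uses entirely made-up values for the prefactor and activation energy: it estimates an emission time constant by its sample mean (the maximum-likelihood estimator for an exponential distribution) and extracts an activation energy from estimates at two temperatures. It is a generic illustration, not the paper's quantitative NBTI model.

```python
import numpy as np

rng = np.random.default_rng(42)
tau_true = 1e-3                            # hypothetical emission time constant (s)
times = rng.exponential(tau_true, 2000)    # emission times over many stress/recovery cycles

# ML estimate of an exponential time constant is simply the sample mean
tau_hat = times.mean()

# thermal activation: tau(T) = tau0 * exp(Ea / (kB * T)), all values hypothetical
k_B = 8.617e-5                             # Boltzmann constant in eV/K

def tau_arrhenius(T, tau0=1e-9, Ea=0.6):
    return tau0 * np.exp(Ea / (k_B * T))

T1, T2 = 300.0, 350.0
tau1 = rng.exponential(tau_arrhenius(T1), 5000).mean()
tau2 = rng.exponential(tau_arrhenius(T2), 5000).mean()
# activation energy recovered from the two estimated time constants
Ea_hat = k_B * np.log(tau1 / tau2) / (1.0 / T1 - 1.0 / T2)
```

The wide distributions of time constants reported in the paper would correspond to a spread of such (tau0, Ea) pairs across defects.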

Proceedings ArticleDOI
12 Aug 2010
TL;DR: The LoM2HiS framework detects future SLA violation threats and can notify the enactor component to act so as to avert the threats, and is embedded into FoSII infrastructure, which facilitates autonomic SLA management and enforcement.
Abstract: Cloud computing represents a novel on-demand computing approach where resources are provided in compliance with a set of predefined non-functional properties specified and negotiated by means of Service Level Agreements (SLAs). In order to avoid costly SLA violations and to react in time to failures and environmental changes, advanced SLA enactment strategies are necessary, which include appropriate resource-monitoring concepts. Currently, Cloud providers tend to adopt existing monitoring tools, such as those from Grid environments. However, those tools are usually restricted to locality and homogeneity of monitored objects, are not scalable, and do not support mapping of low-level resource metrics (e.g., system up- and downtime) to high-level application-specific SLA parameters (e.g., system availability). In this paper we present a novel framework for managing the mappings of the Low-level resource Metrics to High-level SLAs (LoM2HiS framework). The LoM2HiS framework is embedded into the FoSII infrastructure, which facilitates autonomic SLA management and enforcement. Thus, the LoM2HiS framework detects future SLA violation threats and can notify the enactor component to act so as to avert them. We discuss the conceptual model of the LoM2HiS framework, followed by the implementation details. Finally, we present the first experimental results and a proof of concept of the LoM2HiS framework.
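The core of the low-level-to-high-level mapping can be illustrated with the abstract's own example: mapping up/downtime metrics to the SLA parameter "availability", and flagging a threat before the SLA is actually violated. The threshold, margin, and function names below are hypothetical; the real LoM2HiS framework is considerably richer than this sketch.

```python
def availability(uptime_s: float, downtime_s: float) -> float:
    """Map low-level up/downtime metrics to the high-level SLA parameter."""
    total = uptime_s + downtime_s
    return uptime_s / total if total else 1.0

# hypothetical SLA: availability must stay at or above 99.5 %
SLA_THRESHOLD = 0.995
THREAT_MARGIN = 0.002   # raise a threat before the SLA is actually breached

def check(uptime_s: float, downtime_s: float) -> str:
    """Classify the current state: 'ok', 'threat' (notify the enactor), or 'violation'."""
    a = availability(uptime_s, downtime_s)
    if a < SLA_THRESHOLD:
        return "violation"
    if a < SLA_THRESHOLD + THREAT_MARGIN:
        return "threat"
    return "ok"
```

The "threat" band is what lets the enactor component react proactively instead of paying the penalty for an actual violation.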

Proceedings Article
09 May 2010
TL;DR: This paper introduces dialectical frameworks, a powerful generalization of Dung-style argumentation frameworks where each node comes with an associated acceptance condition, and shows how acceptance conditions can be conveniently represented using weights respectively priorities on the links.
Abstract: In this paper we introduce dialectical frameworks, a powerful generalization of Dung-style argumentation frameworks where each node comes with an associated acceptance condition. This allows us to model different types of dependencies, e.g. support and attack, as well as different types of nodes within a single framework. We show that Dung's standard semantics can be generalized to dialectical frameworks, in the case of stable and preferred semantics to a slightly restricted class which we call bipolar frameworks. We show how acceptance conditions can be conveniently represented using weights or priorities on the links, and demonstrate how some of the legal proof standards can be modeled based on this idea.
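A minimal way to make the acceptance-condition idea concrete: represent each node by its parents and a boolean acceptance condition over them, then check whether an assignment is a two-valued model, i.e. every node's value agrees with its condition evaluated on its parents. The three-node framework below is invented for illustration and covers only this simplest semantics, not the stable/preferred generalizations or the weight/priority encodings discussed in the paper.

```python
# a tiny hypothetical dialectical framework:
# each node maps to (parents, acceptance condition over the parents' values)
adf = {
    "a": ((), lambda: True),                        # accepted unconditionally
    "b": (("a",), lambda a: a),                     # "support": b accepted iff a is
    "c": (("a", "b"), lambda a, b: not (a and b)),  # jointly attacked by a and b
}

def is_model(v: dict) -> bool:
    """v is a two-valued model if every node's value equals its acceptance condition."""
    return all(v[s] == cond(*(v[p] for p in parents))
               for s, (parents, cond) in adf.items())
```

Dung-style attack is the special case where every acceptance condition is "none of my parents is accepted"; arbitrary boolean conditions are what add support and mixed dependencies.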

Journal ArticleDOI
TL;DR: The developed method, which is based on solid phase microextraction coupled to gas chromatography-mass spectrometry (GC-MS) and which can be used for the profiling of microbial volatile organic compounds (MVOCs) in the headspace of cultures of filamentous fungi, was successfully applied to cultures of the biocontrol fungus Trichoderma atroviride.

Journal ArticleDOI
TL;DR: In this article, strong and lattice-parameter-dependent magnetic anisotropies of the ground-state energy, chemical potential, and density of states of bimetallic antiferromagnets were found.
Abstract: Magnetic anisotropy phenomena in the bimetallic antiferromagnets Mn2Au and MnIr are studied by first-principles density-functional theory calculations. We find strong and lattice-parameter-dependent magnetic anisotropies of the ground-state energy, chemical potential, and density of states, and attribute these anisotropies to combined effects of the large moment on the Mn 3d shell and the large spin-orbit coupling on the 5d shell of the noble metal. Large magnitudes of the proposed effects can open a route towards spintronics in compensated antiferromagnets without involving ferromagnetic elements.

Journal ArticleDOI
TL;DR: In this article, the authors proposed basic concepts for integrating membrane biogas upgrading plants into existing biogas plants, taking into account permeate utilisation and the plants' heating requirements.

Journal ArticleDOI
TL;DR: The results show that the proposed methods outperform standard imputation methods in the presence of outliers, and the model-based method with robust regressions is preferable.
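A univariate caricature of why robustness matters for imputation under outliers: mean imputation is dragged far from the bulk of the data by a few gross outliers, while a median-based imputation is not. The paper's model-based method uses robust regressions and is far more general; all values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(10.0, 1.0, 100)   # clean data centred at 10
x[:5] = 1000.0                   # a few gross outliers
x_missing = x.copy()
x_missing[50:60] = np.nan        # values to impute

obs = x_missing[~np.isnan(x_missing)]
# non-robust: fill with the mean, which the outliers have inflated
mean_imputed = np.where(np.isnan(x_missing), obs.mean(), x_missing)
# robust: fill with the median, which ignores the outliers
median_imputed = np.where(np.isnan(x_missing), np.median(obs), x_missing)
```

Here the median stays near the true centre of 10 while the mean is pulled above 50, so every mean-imputed value is badly wrong.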

Journal ArticleDOI
TL;DR: The phenolic compound phloridzin is a prominent member of the dihydrochalcones, a chemical class of phenylpropanoids; its effects on human health (especially diabetes) and on membrane permeability are well documented.

Journal ArticleDOI
TL;DR: Some basic physical concepts commonly used by the remote sensing community for modelling scattering and reflection processes are reviewed, and the backscattering coefficient γ is recommended for the radiometric calibration of small-footprint full-waveform airborne laser scanners.
Abstract: Small-footprint (0.2–2 m) airborne laser scanners are lidar instruments originally developed for topographic mapping. While the first airborne laser scanners only allowed determining the range from the sensor to the target, the latest sensor generation records the complete echo waveform. The waveform provides important information about the backscattering properties of the observed targets and may be useful for geophysical parameter retrieval and advanced geometric modelling. However, to fully utilise the potential of the waveform measurements in applications, it is necessary to perform a radiometric calibration. As there are not yet calibration standards, this paper reviews some basic physical concepts commonly used by the remote sensing community for modelling scattering and reflection processes. Based purely on theoretical arguments it is recommended to use the backscattering coefficient γ, which is the backscatter cross-section normalised relative to the laser footprint area, for the radiometric calibration of small-footprint full-waveform airborne laser scanners. The presented concepts are, with some limitations, also applicable to conventional airborne laser scanners that measure the range and intensity of multiple echoes.
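Taking the abstract's definition at face value, γ is the backscatter cross-section divided by the laser footprint area. Assuming a circular footprint whose radius is approximately R·β/2 for range R and beam divergence β (conventions for divergence vary between instruments, so this is an assumption), the computation is a one-liner; the numeric values below are illustrative only.

```python
import math

def backscatter_coefficient(sigma_m2: float, range_m: float, divergence_rad: float) -> float:
    """gamma: backscatter cross-section normalised by the laser footprint area.

    Assumes a circular footprint of radius range * divergence / 2.
    """
    footprint_area = math.pi * (range_m * divergence_rad / 2.0) ** 2
    return sigma_m2 / footprint_area

# hypothetical example: sigma = 0.2 m^2 at 500 m range with 0.5 mrad divergence
gamma = backscatter_coefficient(0.2, 500.0, 0.5e-3)
gamma_db = 10.0 * math.log10(gamma)   # often reported in decibels
```

Because the footprint area scales with R², normalising by it removes the range dependence that raw cross-sections carry, which is the point of using γ as the calibration quantity.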

Proceedings Article
16 Jun 2010
TL;DR: It is demonstrated that in a single-user MIMO channel and at low signal-to-noise ratios (SNRs), the relative calibration method can increase the capacity close to the theoretical limit.
Abstract: Channel state information at the transmitter (CSIT) can greatly improve the capacity of a wireless MIMO communication system. In a time division duplex (TDD) system CSIT can be obtained by exploiting the reciprocity of the wireless channel. This however requires calibration of the radio frequency (RF) chains of the receiver and the transmitter, which are in general not reciprocal. In this paper we investigate different methods for relative calibration in the presence of frequency offsets between transmitter and receiver. We show results of these calibration methods with real two-directional channel measurements, which were performed using the Eurecom MIMO Openair Sounder (EMOS). We demonstrate that in a single-user MIMO channel and at low signal-to-noise ratios (SNRs), the relative calibration method can increase the capacity close to the theoretical limit.
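In the simplest scalar model of non-reciprocal RF chains, the two link directions differ by a single complex factor, h_BA = c · h_AB, and relative calibration reduces to a least-squares estimate of c from bidirectional measurements. The sketch below uses this toy model with synthetic data; the paper addresses the harder practical case, including frequency offsets, with real EMOS measurements.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000                                                       # observations (e.g. subcarriers)
h_ab = rng.standard_normal(n) + 1j * rng.standard_normal(n)    # A -> B propagation channel
c_true = 0.8 * np.exp(1j * 0.7)                                # non-reciprocal RF-chain mismatch
noise = 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
h_ba = c_true * h_ab + noise                                   # B -> A observation (scalar model)

# least-squares estimate of the relative calibration factor:
# c_hat = <h_ab, h_ba> / <h_ab, h_ab>  (vdot conjugates its first argument)
c_hat = np.vdot(h_ab, h_ba) / np.vdot(h_ab, h_ab)

# the transmitter can now infer the effective downlink from uplink measurements
h_ba_predicted = c_hat * h_ab
```

Once c is known, uplink estimates suffice to obtain CSIT, which is what enables the capacity gains the paper demonstrates at low SNR.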