
Showing papers by "Delft University of Technology" published in 2005


Journal ArticleDOI
TL;DR: The final data release of observations of 21-cm emission from Galactic neutral hydrogen over the entire sky, merging the Leiden/Dwingeloo Survey (LDS: Hartmann & Burton 1997, Atlas of Galactic Neutral Hydrogen) with the Instituto Argentino de Radioastronomia Survey (IAR: Arnal et al. 2000, A&A, and Bajaja et al. 2005, A&A, 440, 767) of the sky south of δ = −25°, is presented in this article.
Abstract: We present the final data release of observations of λ21-cm emission from Galactic neutral hydrogen over the entire sky, merging the Leiden/Dwingeloo Survey (LDS: Hartmann & Burton 1997, Atlas of Galactic Neutral Hydrogen) of the sky north of δ = −30° with the Instituto Argentino de Radioastronomia Survey (IAR: Arnal et al. 2000, A&A, and Bajaja et al. 2005, A&A, 440, 767) of the sky south of δ = −25°. The angular resolution of the combined material is HPBW ≈ 0.6°. The LSR velocity coverage spans the interval −450 km s−1 to +400 km s−1, at a resolution of 1.3 km s−1. The data were corrected for stray radiation at the Institute for Radioastronomy of the University of Bonn, refining the original correction applied to the LDS. The rms brightness-temperature noise of the merged database is 0.07−0.09 K. Residual errors in the profile wings due to defects in the correction for stray radiation are for most of the data below a level of 20−40 mK. It would be necessary to construct a telescope with a main-beam efficiency of ηMB ≳ 99% to achieve the same accuracy. The merged and refined material entering the LAB Survey of Galactic HI is intended to be a general resource useful to a wide range of studies of the physical and structural characteristics of the Galactic interstellar environment. The LAB Survey is the most sensitive Milky Way HI survey to date, with the most extensive coverage both spatially and kinematically.

4,228 citations



Journal ArticleDOI
15 Apr 2005-Science
TL;DR: In this article, global estimates of the seasonal flux of sediment, on a river-by-river basis, under modern and prehuman conditions are provided, and the authors show that humans have simultaneously increased the sediment transport by global rivers through soil erosion (by 2.3 ± 0.6 billion metric tons per year), yet reduced the flux reaching the world's coasts (by 1.4 ± 0.3 billion metric tons per year) because of retention within reservoirs.
Abstract: Here we provide global estimates of the seasonal flux of sediment, on a river-by-river basis, under modern and prehuman conditions. Humans have simultaneously increased the sediment transport by global rivers through soil erosion (by 2.3 ± 0.6 billion metric tons per year), yet reduced the flux of sediment reaching the world's coasts (by 1.4 ± 0.3 billion metric tons per year) because of retention within reservoirs. Over 100 billion metric tons of sediment and 1 to 3 billion metric tons of carbon are now sequestered in reservoirs constructed largely within the past 50 years. African and Asian rivers carry a greatly reduced sediment load; Indonesian rivers deliver much more sediment to coastal areas.

2,037 citations


Journal ArticleDOI
TL;DR: A review of the state of the art in the use of alternative reaction media for green, sustainable organic synthesis is presented in this article, where a novel and effective method for the immobilisation of enzymes as cross-linked enzyme aggregates (CLEAs) is discussed and a combi CLEA, containing two enzymes, for the one-pot conversion of benzaldehyde to S-mandelic acid is reported.

1,392 citations


Journal ArticleDOI
TL;DR: In this article, an adaptation of the original median test for the detection of spurious PIV data is proposed that normalizes the median residual with respect to a robust estimate of the local variation of the velocity.
Abstract: An adaptation of the original median test for the detection of spurious PIV data is proposed that normalizes the median residual with respect to a robust estimate of the local variation of the velocity. It is demonstrated that the normalized median test yields a more or less ‘universal’ probability density function for the residual and that a single threshold value can be applied to effectively detect spurious vectors. The generality of the proposed method is verified by the application to a large variety of documented flow cases with values of the Reynolds number ranging from 10−1 to 107.
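The normalized median test described above can be sketched in a few lines. The 3×3 neighbourhood, the noise floor ε = 0.1, and the threshold of 2 are the values commonly quoted for this method, but treat this as an illustrative implementation rather than the authors' reference code:

```python
import numpy as np

def normalized_median_test(u, eps=0.1, threshold=2.0):
    """Flag spurious vectors in one component of a 2-D PIV field.
    The median residual is normalized by a robust estimate of the
    local velocity variation; eps accounts for measurement noise."""
    ny, nx = u.shape
    flags = np.zeros_like(u, dtype=bool)
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            neigh = np.delete(u[i-1:i+2, j-1:j+2].ravel(), 4)  # 8 neighbours
            um = np.median(neigh)
            rm = np.median(np.abs(neigh - um))  # robust local variation
            flags[i, j] = np.abs(u[i, j] - um) / (rm + eps) > threshold
    return flags

# A smooth field with one spurious vector: only that vector is flagged.
u = np.ones((5, 5))
u[2, 2] = 10.0
print(normalized_median_test(u)[2, 2])  # -> True
```

In practice the test is applied to each velocity component, and the single threshold is what makes the detection "universal" across flow cases.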

1,121 citations


Proceedings ArticleDOI
06 Jul 2005
TL;DR: The MMI facial expression database is presented, which includes more than 1500 samples of both static images and image sequences of faces in frontal and in profile view displaying various expressions of emotion, single and multiple facial muscle activation.
Abstract: In the last decade, the research topic of automatic analysis of facial expressions has become a central topic in machine vision research. Nonetheless, there is a glaring lack of a comprehensive, readily accessible reference set of face images that could be used as a basis for benchmarks for efforts in the field. This lack of easily accessible, suitable, common testing resource forms the major impediment to comparing and extending the issues concerned with automatic facial expression analysis. In this paper, we discuss a number of issues that make the problem of creating a benchmark facial expression database difficult. We then present the MMI facial expression database, which includes more than 1500 samples of both static images and image sequences of faces in frontal and in profile view displaying various expressions of emotion, single and multiple facial muscle activation. It has been built as a Web-based direct-manipulation application, allowing easy access and easy search of the available images. This database represents the most comprehensive reference set of images for studies on facial expression analysis to date.

1,093 citations


Journal ArticleDOI
TL;DR: This article surveys the broad variety of pulse control and tomographic techniques that have been developed for, and used in, NMR quantum computation; many of these will be useful in other quantum systems now being considered for the implementation of quantum information processing tasks.
Abstract: Fifty years of developments in nuclear magnetic resonance (NMR) have resulted in an unrivaled degree of control of the dynamics of coupled two-level quantum systems. This coherent control of nuclear spin dynamics has recently been taken to a new level, motivated by the interest in quantum information processing. NMR has been the workhorse for the experimental implementation of quantum protocols, allowing exquisite control of systems up to seven qubits in size. This article surveys and summarizes a broad variety of pulse control and tomographic techniques which have been developed for, and used in, NMR quantum computation. Many of these will be useful in other quantum systems now being considered for the implementation of quantum information processing tasks.

1,068 citations


Journal ArticleDOI
TL;DR: In this article, the role of spin pumping in layered structures is discussed and the main body of the theory is semiclassical and based on a mean-field Stoner or spin-density functional picture, but quantum-size effects and electron-electron correlations are also discussed.
Abstract: Two complementary effects modify the GHz magnetization dynamics of nanoscale heterostructures of ferromagnetic and normal materials relative to those of the isolated magnetic constituents. On the one hand, a time-dependent ferromagnetic magnetization pumps a spin angular-momentum flow into adjacent materials and, on the other hand, spin angular momentum is transferred between ferromagnets by an applied bias, causing mutual torques on the magnetizations. These phenomena are manifestly nonlocal: they are governed by the entire spin-coherent region that is limited in size by spin-flip relaxation processes. This review presents recent progress in understanding the magnetization dynamics in ferromagnetic heterostructures from first principles, focusing on the role of spin pumping in layered structures. The main body of the theory is semiclassical and based on a mean-field Stoner or spin-density-functional picture, but quantum-size effects and the role of electron-electron correlations are also discussed. A growing number of experiments support the theoretical predictions. The formalism should be useful for understanding the physics and for engineering the characteristics of small devices such as magnetic random-access memory elements.

1,051 citations


Journal ArticleDOI
TL;DR: In this article, a solution is described that enables wind turbines using doubly-fed induction generators to stay connected to the grid during grid faults: the high current in the rotor is limited in order to protect the converter, and a bypass for this current is provided via a set of resistors connected to the rotor windings.
Abstract: In this paper, a solution is described that makes it possible for wind turbines using doubly-fed induction generators to stay connected to the grid during grid faults. The key of the solution is to limit the high current in the rotor in order to protect the converter and to provide a bypass for this current via a set of resistors that are connected to the rotor windings. With these resistors, it is possible to ride through grid faults without disconnecting the turbine from the grid. Because the generator and converter stay connected, the synchronism of operation remains established during and after the fault and normal operation can be continued immediately after the fault has been cleared. An additional feature is that reactive power can be supplied to the grid during long dips in order to facilitate voltage restoration. A control strategy has been developed that takes care of the transition back to normal operation. Without special control action, large transients would occur.

879 citations


Journal ArticleDOI
01 Apr 2005-Codesign
TL;DR: In this paper, insights into how user studies with generative techniques can be conducted are shared, based on several research projects and many years of industrial practice.
Abstract: In recent years, various methods and techniques have emerged for mapping the contexts of people's interaction with products. Designers and researchers use these techniques to gain deeper insight into the needs and dreams of prospective users of new products. As most of these techniques are still under development, there is a lack of practical knowledge about how such studies can be conducted. In this paper we share our insights, based on several projects from research and many years of industrial practice, of conducting user studies with generative techniques. The appendix contains a single case illustrating the application of these techniques in detail.

839 citations


Book ChapterDOI
24 Feb 2005
TL;DR: A measurement study of BitTorrent is presented that aids the understanding of a real P2P system that apparently has the right mechanisms to attract a large user community, provides measurement data that may be useful in modeling P2P systems, and identifies design issues in such systems.
Abstract: Of the many P2P file-sharing prototypes in existence, BitTorrent is one of the few that has managed to attract millions of users. BitTorrent relies on other (global) components for file search, employs a moderator system to ensure the integrity of file data, and uses a bartering technique for downloading in order to prevent users from freeriding. In this paper we present a measurement study of BitTorrent in which we focus on four issues, viz. availability, integrity, flashcrowd handling, and download performance. The purpose of this paper is to aid in the understanding of a real P2P system that apparently has the right mechanisms to attract a large user community, to provide measurement data that may be useful in modeling P2P systems, and to identify design issues in such systems.

Journal ArticleDOI
TL;DR: A new steerable endoscope for laparoscopic surgery is described, inspired by the tentacles of squid, which can be easily miniaturized to a very small diameter, making it suitable for low-cost mass production of steerableendoscopes, instruments, and catheters.
Abstract: This article describes a new steerable endoscope for laparoscopic surgery. The steerable mechanism was inspired by the tentacles of squid and consists only of standard parts such as cables, coil springs, rings, and tubes. Laparoscopic surgery is carried out using an endoscope and long and slender instruments that are inserted through small incisions in the abdominal wall. The endoscope contains a light source and a camera that displays a picture of the abdominal cavity on a monitor in the operation room. The surgeon uses the instruments to carry out the operation while estimating the spatial position of the instruments and the organs from the camera picture. The mechanism can be easily miniaturized to a very small diameter, making it suitable for low-cost mass production of steerable endoscopes, instruments, and catheters. A patent for the mechanism has been applied for, and it is currently being commercialized.

Journal ArticleDOI
TL;DR: A theoretical model where hydrodynamic drag on the section of the polymer outside the pore is the dominant force counteracting the electrical driving force is presented, and a power-law scaling with an exponent of 1.22 is derived, in good agreement with the data.
Abstract: We report experiments and modeling of translocation of double-strand DNA through a silicon oxide nanopore. Long DNA molecules with different lengths ranging from 6500 to 97000 base pairs have been electrophoretically driven through a 10 nm pore. We observe a power-law scaling of the translocation time with the length, with an exponent of 1.27. This nonlinear scaling is strikingly different from the well-studied linear behavior observed in similar experiments performed on protein pores. We present a theoretical model where hydrodynamic drag on the section of the polymer outside the pore is the dominant force counteracting the electrical driving force. We show that this applies to our experiments, and we derive a power-law scaling with an exponent of 1.22, in good agreement with the data.
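The reported exponent can be recovered from length/time data by a straight-line fit in log-log space. A minimal sketch: the lengths echo those quoted in the abstract, but the times are synthetic (generated from the scaling law itself, with an arbitrary prefactor), so this illustrates the fitting procedure rather than reproducing the measurement.

```python
import numpy as np

# Hypothetical DNA lengths (base pairs); times generated from t ~ L^1.27
# purely for illustration -- real values come from the experiment.
lengths = np.array([6500, 10000, 20000, 48500, 97000], dtype=float)
times = 1e-6 * lengths ** 1.27

# Fit the exponent on a log-log scale: log t = alpha * log L + const
alpha, _ = np.polyfit(np.log(lengths), np.log(times), 1)
print(round(alpha, 2))  # -> 1.27 for this synthetic data
```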

Journal ArticleDOI
TL;DR: In this paper, the authors investigated loss of human life statistics for different types of floods and different regions on a global scale and concluded that floods are the most significant disaster type in terms of the number of persons affected.
Abstract: Every year floods cause enormous damage all over the world. This study investigates loss of human life statistics for different types of floods and different regions on a global scale. The OFDA/CRED Database contains data on international disasters and is maintained by the Centre for Research on the Epidemiology of Disasters in Brussels (CRED) in cooperation with the United States Office for Foreign Disaster Assistance (OFDA). Information from this source on a large number of flood events, which occurred between January 1975 and June 2002, is evaluated with respect to flood location and flood type. Due to the limited availability of information on coastal flood events, the scope of this study is limited to three types of freshwater flooding: river floods, flash floods and drainage problems. First, the development of loss of life statistics over time is discussed. Second, the dataset is analysed by region, by flood type and by the combination of type and region. The study shows that flash floods result in the highest average mortality per event (the number of fatalities divided by the number of affected persons). A cross analysis by flood type and location shows that average mortality is relatively constant for the different types over various continents, while the magnitude of the impacts (numbers of killed and affected) for a certain type varies between the different continents. On a worldwide scale Asian river floods are most significant in terms of number of persons killed and affected. Finally, a comparison with figures for other types of natural disasters shows that floods are the most significant disaster type in terms of the number of persons affected.
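The mortality measure used in the study, fatalities divided by affected persons per event, is straightforward to compute. A small sketch with hypothetical event figures (not taken from the paper):

```python
def event_mortality(fatalities, affected):
    """Average mortality per event as defined in the study:
    number of fatalities divided by number of affected persons."""
    if affected == 0:
        raise ValueError("no affected persons recorded")
    return fatalities / affected

# Hypothetical flash-flood event: 120 fatalities among 30,000 affected
print(event_mortality(120, 30_000))  # -> 0.004
```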

Journal ArticleDOI
TL;DR: A computational framework for affective video content representation and modeling is proposed based on the dimensional approach to affect that is known from the field of psychophysiology that is characterized by the dimensions of arousal (intensity of affect) and valence (type of affect).
Abstract: This paper looks into a new direction in video content analysis - the representation and modeling of affective video content . The affective content of a given video clip can be defined as the intensity and type of feeling or emotion (both are referred to as affect) that are expected to arise in the user while watching that clip. The availability of methodologies for automatically extracting this type of video content will extend the current scope of possibilities for video indexing and retrieval. For instance, we will be able to search for the funniest or the most thrilling parts of a movie, or the most exciting events of a sport program. Furthermore, as the user may want to select a movie not only based on its genre, cast, director and story content, but also on its prevailing mood, the affective content analysis is also likely to contribute to enhancing the quality of personalizing the video delivery to the user. We propose in this paper a computational framework for affective video content representation and modeling. This framework is based on the dimensional approach to affect that is known from the field of psychophysiology. According to this approach, the affective video content can be represented as a set of points in the two-dimensional (2-D) emotion space that is characterized by the dimensions of arousal (intensity of affect) and valence (type of affect). We map the affective video content onto the 2-D emotion space by using the models that link the arousal and valence dimensions to low-level features extracted from video data. This results in the arousal and valence time curves that, either considered separately or combined into the so-called affect curve, are introduced as reliable representations of expected transitions from one feeling to another along a video, as perceived by a viewer.
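The combination of the arousal and valence time curves into an affect curve can be pictured as a trajectory in the 2-D emotion space. A toy sketch with synthetic per-frame values (the paper derives these from low-level video features, which are not reproduced here):

```python
# Synthetic per-segment arousal (intensity) and valence (type) values;
# in the paper these come from models over low-level video features.
arousal = [0.1, 0.4, 0.8, 0.6]
valence = [0.0, 0.2, 0.7, 0.3]

# The 'affect curve': the trajectory through 2-D emotion space over time.
affect_curve = list(zip(arousal, valence))
print(affect_curve[2])  # -> (0.8, 0.7): the most intense, most positive segment
```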

Journal ArticleDOI
TL;DR: In this paper, the optimal coordination of variable speed limits and ramp metering in a freeway traffic network is discussed, where the objective of the control is to minimize the total time that vehicles spend in the network.
Abstract: This paper discusses the optimal coordination of variable speed limits and ramp metering in a freeway traffic network, where the objective of the control is to minimize the total time that vehicles spend in the network. Coordinated freeway traffic control is a new development where the control problem is to find the combination of control measures that results in the best network performance. This problem is solved by model predictive control, where the macroscopic traffic flow model METANET is used as the prediction model. We extend this model with a model for dynamic speed limits and for main-stream origins. This approach results in a predictive coordinated control approach where variable speed limits can prevent a traffic breakdown and maintain a higher outflow even when ramp metering is unable to prevent congestion (e.g., because of an on-ramp queue constraint). The use of dynamic speed limits significantly reduces congestion and results in a lower total time spent. Since the primary effect of the speed limits is the limitation of the main-stream flow, a comparison is made with the case where the speed limits are replaced by main-stream metering. The resulting performances are comparable. Since the range of flows that main-stream metering and dynamic speed limits can control is different, the choice between the two should be primarily based on the traffic demands.
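The receding-horizon principle behind model predictive control can be illustrated with a toy stand-in for the traffic model. METANET itself is far richer, so the `predict` and `total_time_spent` functions below are placeholders of my own, not the paper's model; only the control loop structure is the point.

```python
import itertools

def predict(state, controls):
    # Toy stand-in for a traffic prediction model: each control step
    # adds delay, and a larger control value reduces the added delay.
    for u in controls:
        state = state + max(0.0, 1.0 - u)
    return state

def total_time_spent(state):
    # Objective from the paper: total time vehicles spend in the network.
    return state

def mpc_step(state, horizon=3, candidates=(0.0, 0.5, 1.0)):
    """Enumerate control sequences over the horizon, pick the one with the
    lowest predicted cost, and apply only its first move (receding horizon)."""
    best = min(itertools.product(candidates, repeat=horizon),
               key=lambda seq: total_time_spent(predict(state, seq)))
    return best[0]

print(mpc_step(10.0))  # -> 1.0, the control that minimizes added delay
```

Real implementations replace the brute-force enumeration with a numerical optimizer over the prediction model.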

Journal ArticleDOI
TL;DR: As discussed in this paper, the use of segmented flow in capillaries, also known as Taylor flow, for reaction engineering purposes has soared in recent years; the review emphasizes the underlying principles.

Journal ArticleDOI
TL;DR: Experimental results strongly suggest that P-removal occurs partly by (biologically induced) precipitation, and monitoring the laboratory-scale reactors for a long period showed that N-removal efficiency highly depends on the diameter of the granules.
Abstract: Aerobic granular sludge technology offers a possibility to design compact wastewater treatment plants based on simultaneous chemical oxygen demand (COD), nitrogen and phosphate removal in one sequencing batch reactor. In earlier studies, it was shown that aerobic granules, cultivated with an aerobic pulse-feeding pattern, were not stable at low dissolved oxygen concentrations. Selection for slow-growing organisms such as phosphate-accumulating organisms (PAO) was shown to be a measure for improved granule stability, particularly at low oxygen concentrations. Moreover, this allows long feeding periods needed for economically feasible full-scale applications. Simultaneous nutrient removal was possible, because of heterotrophic growth inside the granules (denitrifying PAO). At low oxygen saturation (20%) high removal efficiencies were obtained; 100% COD removal, 94% phosphate (P-) removal and 94% total nitrogen (N-) removal (with 100% ammonium removal). Experimental results strongly suggest that P-removal occurs partly by (biologically induced) precipitation. Monitoring the laboratory scale reactors for a long period showed that N-removal efficiency highly depends on the diameter of the granules.

Journal ArticleDOI
TL;DR: The zipper effect causes the capacity of the bottleneck to increase in a stepwise fashion with the width of the bottleneck, at least for bottlenecks of moderate width (less than 3 m).
Abstract: Traffic operations in public walking spaces are to a large extent determined by differences in pedestrian traffic demand and infrastructure supply. Congestion occurs when pedestrian traffic demand exceeds the capacity. In turn, the latter is determined by a number of factors, such as the width of the bottleneck and the wall surface, as well as the interaction behavior of the pedestrians passing the bottleneck. This article discusses experimental findings of microscopic pedestrian behavior in case of bottlenecks. Results for both a narrow bottleneck and a wide bottleneck are discussed and compared to the results of an experiment without a bottleneck. It is shown how pedestrians inside bottlenecks effectively form layers or trails, the distance between which is approximately 45 cm. This is less than the effective width of a single pedestrian, which is around 55 cm. The layers are thus overlapping, a phenomenon which is referred to as the "zipper" effect. The pedestrians within these layers follow each other at 1.3 seconds, irrespective of the considered experiment. For the narrow bottleneck case (width of one meter) two layers are formed; for the wide bottleneck case (width of two meters), four or five layers are formed, although the life span of these layers is rather small. The zipper effect causes the capacity of the bottleneck to increase in a stepwise fashion with the width of the bottleneck, at least for bottlenecks of moderate width (less than 3 m). This has substantial implications for the design of walking facilities.
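A hedged back-of-the-envelope reading of these numbers: layers forming roughly every 0.45 m, with 1.3 s headways within each layer, imply a stepwise capacity-width relation. This is my own simplification of the findings, not a formula from the paper.

```python
def bottleneck_capacity(width_m, layer_spacing=0.45, headway_s=1.3):
    """Illustrative 'zipper effect' capacity: the number of overlapping
    layers (one per ~0.45 m of width) divided by the ~1.3 s headway
    within a layer gives pedestrians per second."""
    n_layers = max(1, round(width_m / layer_spacing))
    return n_layers / headway_s

print(round(bottleneck_capacity(1.0), 2))  # 2 layers  -> ~1.54 P/s
print(round(bottleneck_capacity(2.0), 2))  # 4 layers  -> ~3.08 P/s
```

Consistent with the abstract, a 1 m bottleneck yields two layers and a 2 m bottleneck four, and capacity jumps in steps rather than growing linearly with width.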

Reference BookDOI
02 Nov 2005
TL;DR: The present and the future of structured catalysts is discussed in this article, where the authors present a review of past, present and future monolithic catalysts, including those for the selective reduction of NOx with NH3 in stationary monolith reactors.
Abstract: The present and the future of structured catalysts - an overview. Reactors with structured catalysts where no convective mass transfer over a cross section of the reactor occurs (monolithic catalysts = honeycomb catalysts): ceramic catalyst supports for gasoline fuel; metal and coated-metal catalysts; autocatalysts - past, present and future; monolithic catalysts for the selective reduction of NOx with NH3 from stationary monolith reactors; unconventional utilization of monolithic

Journal ArticleDOI
TL;DR: A standardised method of classifying flood deaths is proposed and the difficulties associated with comparing and assessing existing information on flood deaths are discussed, finding that a substantial number of flood disaster fatalities are not related to drowning.
Abstract: The objective of this paper is to investigate and to improve understanding of the causes and circumstances of flood disaster deaths. A standardised method of classifying flood deaths is proposed and the difficulties associated with comparing and assessing existing information on flood deaths are discussed. Thirteen flood cases from Europe and the United States, resulting in 247 flood disaster fatalities, were analysed and taken as indicative of flood disaster deaths. Approximately two-thirds of the deaths occurred through drowning. Thus, a substantial number of flood disaster fatalities are not related to drowning. Furthermore, males are highly vulnerable to dying in floods and unnecessary risk-taking behaviour contributes significantly to flood disaster deaths. Based on these results, recommendations are made to prevent loss of life in floods. To provide a more solid basis for the formulation of prevention strategies, better systematic recording of flood fatalities is suggested, especially those caused by different types of floods in all countries.

Journal ArticleDOI
TL;DR: The existence of framework Al sites in different T positions that are more or less susceptible to the alkaline treatment, and the occurrence of re-alumination, are tentative explanations for the remarkable behaviour of Al in the desilication process.
Abstract: The role of the concentration and the nature of aluminium in the creation of hierarchical porosity in both commercial and synthesized MFI zeolites have been investigated through controlled mesoporosity development by desilication in alkaline medium. Framework aluminium controls the process of framework silicon extraction and makes desilication selective towards intracrystalline mesopore formation. An optimal molar Si/Al ratio in the range 25-50 has been identified; this leads to an optimal mesoporosity centred around 10 nm and mesopore surface areas of up to 235 m(2) g(-1) while preserving the intrinsic crystalline and acidic properties. At lower framework Si/Al ratios the relatively high Al content inhibits Si extraction and hardly any mesopores are created, while in highly siliceous ZSM-5 unselective extraction of framework Si induces formation of large pores. The existence of framework Al sites in different T positions that are more or less susceptible to the alkaline treatment, and the occurrence of re-alumination, are tentative explanations for the remarkable behaviour of Al in the desilication process. The presence of substantial extra framework Al, obtained by steam treatment, inhibits Si extraction and related mesopore formation; this is attributed to re-alumination of the extraframework Al species during the alkaline treatment. Removal of extraframework Al species by mild oxalic acid treatment restores susceptibility to desilication, which is accompanied by formation of larger mesopores due to the enhanced Si/Al ratio in the acid-treated zeolite.

Journal ArticleDOI
TL;DR: It is shown that a nanopore can be used to distinguish the lengths of DNA fragments present in a mixture and paved the way for quantitative analytical techniques with solid-state nanopores.
Abstract: We report double-strand DNA translocation experiments using silicon oxide nanopores with a diameter of about 10 nm. By monitoring the conductance of a voltage-biased pore, we detect molecules with a length ranging from 6557 to 48,500 base pairs. We find that the molecules can pass the pore both in a straight linear fashion and in a folded state. Experiments on circular DNA further support this picture. We sort the molecular events according to their folding state and estimate the folding position. As a proof-of-principle experiment, we show that a nanopore can be used to distinguish the lengths of DNA fragments present in a mixture. These experiments pave the way for quantitative analytical techniques with solid-state nanopores.
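Sorting events by folding state can be caricatured as level classification: while a folded section translocates, two duplexes occupy the pore, blocking roughly twice the single-duplex current level. The trace values, unit level, and thresholds below are hypothetical, chosen only to show the idea:

```python
def classify_event(blockade_trace, unit_blockade):
    """Toy classifier for a translocation event: express each sampled
    blockade as a multiple of the single-duplex level; a level of ~2
    anywhere in the event indicates a folded passage (illustrative only)."""
    levels = [round(b / unit_blockade) for b in blockade_trace]
    return "folded" if max(levels) >= 2 else "linear"

print(classify_event([1.0, 1.1, 0.9], 1.0))        # -> linear
print(classify_event([2.1, 1.9, 1.0, 1.05], 1.0))  # -> folded
```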

Journal ArticleDOI
TL;DR: In this paper, it is shown that the location of the 13 divalent lanthanides in oxide or fluoride compounds relative to the top of the valence band or the bottom of the conduction band can be obtained by using only three host-dependent parameters: (1) the energy of charge transfer from the top of the valence band to Eu3+, (2) the redshift of the first 4f → 5d transition in divalent lanthanide ions, and (3) the EE from the ED to the ED.

Journal ArticleDOI
TL;DR: This article proposes a freeway travel time prediction framework that exploits a recurrent neural network topology, the so-called state-space neural network (SSNN), with preprocessing strategies based on imputation that appears to be robust to the “damage” done by these imputation schemes.
Abstract: Accuracy and robustness with respect to missing or corrupt input data are two key characteristics for any travel time prediction model that is to be applied in a real-time environment (e.g. for display on variable message signs on freeways). This article proposes a freeway travel time prediction framework that exhibits both qualities. The framework exploits a recurrent neural network topology, the so-called state-space neural network (SSNN), with preprocessing strategies based on imputation. Although the SSNN model is a neural network, its design (in terms of input and model selection) is not "black box" nor location-specific. Instead, it is based on the lay-out of the freeway stretch of interest. In this sense, the SSNN model combines the generality of neural network approaches with traffic-related ("white-box") design. Robustness to missing data is tackled by means of simple imputation (data replacement) schemes, such as exponential forecasts and spatial interpolation. Although there are clear theoretical shortcomings to "simple" imputation schemes to remedy input failure, our results indicate that their use is justified in this particular application. The SSNN model appears to be robust to the "damage" done by these imputation schemes. This is true for both incidental (random) and structural input failure. We demonstrate that the SSNN travel time prediction framework yields accurate and robust travel time predictions on both synthetic and real data.
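The exponential-forecast idea for imputing missing detector data can be sketched as follows; the smoothing factor and the data values are illustrative choices of mine, not parameters from the paper:

```python
import numpy as np

def impute(series, alpha=0.5):
    """Replace missing samples (NaNs) with a running exponential forecast,
    in the spirit of the 'exponential forecasts' imputation scheme."""
    out = series.copy()
    forecast = None
    for i, x in enumerate(out):
        if np.isnan(x):
            out[i] = forecast if forecast is not None else 0.0
        # update the exponential forecast with the (possibly imputed) value
        forecast = out[i] if forecast is None else alpha * out[i] + (1 - alpha) * forecast
    return out

data = np.array([60.0, 58.0, np.nan, 55.0])  # e.g. speeds with one dropout
print(impute(data))  # the NaN is replaced by the running forecast (59.0)
```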

Journal ArticleDOI
TL;DR: Measurements of the streaming current, an electrical current generated by a pressure-driven liquid flow, in individual rectangular silica nanochannels down to 70 nm in height, show that it is proportional to the pressure gradient and increases with the channel height.
Abstract: We report measurements of the streaming current, an electrical current generated by a pressure-driven liquid flow, in individual rectangular silica nanochannels down to 70 nm in height. The streaming current is observed to be proportional to the pressure gradient and increases with the channel height. As a function of salt concentration, it is approximately constant below approximately 10 mM, whereas it strongly decreases at higher salt. Changing the sign of the surface charge is found to reverse the streaming current. The data are best modeled using a nonlinear Poisson-Boltzmann theory that includes the salt-dependent hydration state of the silica surface.

Journal ArticleDOI
TL;DR: In this paper, the authors performed in situ X-ray diffraction measurements at a synchrotron source in order to study the thermal stability of the retained austenite phase in transformation induced plasticity steels during cooling from room temperature to 100 K.

Journal ArticleDOI
TL;DR: In this article, the ionization of the 5d electron to conduction band states is shown to be the genuine quenching mechanism for Eu2+ 5d−4f emission on Ba, Sr, or Ca sites in compounds.
Abstract: The thermal quenching of Eu2+ 5d–4f emission on Ba, Sr, or Ca sites in compounds is often attributed to a large displacement between the ground state and excited state in the configuration coordinate diagram. This work will demonstrate that the ionization of the 5d electron to conduction band states is the genuine quenching mechanism. A model is proposed to explain why in some types of compounds the quenching temperature decreases when going from the Ba variant via the Sr variant to the Ca variant and in other types of compounds the reverse behaviour occurs. The nature of the bottom of the conduction band plays an important role in this.

Journal ArticleDOI
08 Jul 2005-Science
TL;DR: Nanoscale superconductor/semiconductor hybrid devices are assembled from indium arsenide semiconductor nanowires individually contacted by aluminum-based superconductor electrodes, which form superconducting weak links operating as mesoscopic Josephson junctions with electrically tunable coupling.
Abstract: Nanoscale superconductor/semiconductor hybrid devices are assembled from indium arsenide semiconductor nanowires individually contacted by aluminum-based superconductor electrodes. Below 1 kelvin, the high transparency of the contacts gives rise to proximity-induced superconductivity. The nanowires form superconducting weak links operating as mesoscopic Josephson junctions with electrically tunable coupling. The supercurrent can be switched on/off by a gate voltage acting on the electron density in the nanowire. A variation in gate voltage induces universal fluctuations in the normal-state conductance, which are clearly correlated to critical current fluctuations. The alternating-current Josephson effect gives rise to Shapiro steps in the voltage-current characteristic under microwave irradiation.

Journal ArticleDOI
TL;DR: Surprisingly, the resulting strain grew anaerobically on xylose in synthetic media with a μmax as high as 0.09 h−1 without any non-defined mutagenesis or selection; during growth on xylose, xylulose formation was absent and xylitol production was negligible.
Abstract: After an extensive selection procedure, Saccharomyces cerevisiae strains that express the xylose isomerase gene from the fungus Piromyces sp. E2 can grow anaerobically on xylose with a μmax of 0.03 h−1. In order to investigate whether reactions downstream of the isomerase control the rate of xylose consumption, we overexpressed structural genes for all enzymes involved in the conversion of xylulose to glycolytic intermediates, in a xylose-isomerase-expressing S. cerevisiae strain. The overexpressed enzymes were xylulokinase (EC 2.7.1.17), ribulose 5-phosphate isomerase (EC 5.3.1.6), ribulose 5-phosphate epimerase (EC 5.3.1.1), transketolase (EC 2.2.1.1) and transaldolase (EC 2.2.1.2). In addition, the GRE3 gene encoding aldose reductase was deleted to further minimise xylitol production. Surprisingly the resulting strain grew anaerobically on xylose in synthetic media with a μmax as high as 0.09 h−1 without any non-defined mutagenesis or selection. During growth on xylose, xylulose formation was absent and xylitol production was negligible. The specific xylose consumption rate in anaerobic xylose cultures was 1.1 g xylose (g biomass)−1 h−1. Mixtures of glucose and xylose were sequentially but completely consumed by anaerobic batch cultures, with glucose as the preferred substrate.
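The quoted specific growth rates translate directly into doubling times via t_d = ln 2 / μmax, which makes the improvement tangible:

```python
import math

def doubling_time(mu_max):
    """Doubling time (h) of exponential growth at specific rate mu_max (1/h)."""
    return math.log(2) / mu_max

# Rates quoted in the abstract: 0.03 1/h before and 0.09 1/h after
# overexpression of the downstream pathway enzymes.
print(round(doubling_time(0.03), 1))  # -> 23.1 h
print(round(doubling_time(0.09), 1))  # -> 7.7 h
```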