
Showing papers by Helsinki University of Technology, published in 2006


Journal ArticleDOI
04 Oct 2006
TL;DR: In this paper, a review of numerical and experimental studies of supercontinuum generation in photonic crystal fiber is presented over the full range of experimentally reported parameters, from the femtosecond to the continuous-wave regime.
Abstract: A topical review of numerical and experimental studies of supercontinuum generation in photonic crystal fiber is presented over the full range of experimentally reported parameters, from the femtosecond to the continuous-wave regime. Results from numerical simulations are used to discuss the temporal and spectral characteristics of the supercontinuum, and to interpret the physics of the underlying spectral broadening processes. Particular attention is given to the case of supercontinuum generation seeded by femtosecond pulses in the anomalous group velocity dispersion regime of photonic crystal fiber, where the processes of soliton fission, stimulated Raman scattering, and dispersive wave generation are reviewed in detail. The corresponding intensity and phase stability properties of the supercontinuum spectra generated under different conditions are also discussed.
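
Numerical studies of this kind are conventionally based on a generalized nonlinear Schrödinger equation for the pulse envelope A(z,T). A form commonly used in the supercontinuum literature (stated here as background, not quoted from this abstract) is

$$\frac{\partial A}{\partial z} = -\frac{\alpha}{2}A + \sum_{k\geq 2}\frac{i^{k+1}}{k!}\,\beta_k\,\frac{\partial^k A}{\partial T^k} + i\gamma\left(1 + \frac{i}{\omega_0}\frac{\partial}{\partial T}\right)\left(A(z,T)\int_0^{\infty}R(T')\,|A(z,T-T')|^2\,dT'\right),$$

where the \beta_k are the fiber dispersion coefficients, \gamma is the nonlinear coefficient, \omega_0 the carrier frequency, and R(T) the nonlinear response function including the Raman contribution. Soliton fission, stimulated Raman scattering, and dispersive wave generation all emerge from this single model.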

3,361 citations


Journal ArticleDOI
TL;DR: In this paper, a review of the thermal properties of mesoscopic structures is presented, based on the concept of the electron energy distribution and, in particular, on controlling and probing it. An immediate application of solid-state refrigeration and thermometry is ultrasensitive radiation detection, which is discussed in depth.
Abstract: This review presents an overview of the thermal properties of mesoscopic structures. The discussion is based on the concept of electron energy distribution, and, in particular, on controlling and probing it. The temperature of an electron gas is determined by this distribution: refrigeration is equivalent to narrowing it, and thermometry is probing its convolution with a function characterizing the measuring device. Temperature exists, strictly speaking, only in quasiequilibrium in which the distribution follows the Fermi-Dirac form. Interesting nonequilibrium deviations can occur due to slow relaxation rates of the electrons, e.g., among themselves or with lattice phonons. Observation and applications of nonequilibrium phenomena are also discussed. The focus in this paper is at low temperatures, primarily below 4 K, where physical phenomena on mesoscopic scales and hybrid combinations of various types of materials, e.g., superconductors, normal metals, insulators, and doped semiconductors, open up a rich variety of device concepts. This review starts with an introduction to theoretical concepts and experimental results on thermal properties of mesoscopic structures. Then thermometry and refrigeration are examined with an emphasis on experiments. An immediate application of solid-state refrigeration and thermometry is in ultrasensitive radiation detection, which is discussed in depth. This review concludes with a summary of pertinent fabrication methods of presented devices.
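
As background for the quasiequilibrium notion used above: the electron gas has a well-defined temperature T_e only when its energy distribution takes the Fermi-Dirac form,

$$f(E) = \frac{1}{1 + \exp\!\left((E-\mu)/k_B T_e\right)},$$

where \mu is the chemical potential. In these terms, refrigeration narrows f(E) around \mu (lowers T_e), while a thermometer probes the convolution of f(E) with the response function of the measuring device.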

984 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss the definitions of a distributed energy system and evaluate political, economic, social, and technological dimensions associated with regional energy systems on the basis of the degree of decentralization.
Abstract: Conventionally, power plants have been large, centralized units. A new trend is developing toward distributed energy generation, which means that energy conversion units are situated close to energy consumers, and large units are substituted by smaller ones. A distributed energy system is an efficient, reliable and environmentally friendly alternative to the traditional energy system. In this article, we will first discuss the definitions of a distributed energy system. Then we will evaluate political, economic, social, and technological dimensions associated with regional energy systems on the basis of the degree of decentralization. Finally, we will deal with the characteristics of a distributed energy system in the context of sustainability. This article concludes that a distributed energy system is a good option with respect to sustainable development.

640 citations


Journal ArticleDOI
TL;DR: In this article, a simplified bottom-up load model is presented to generate realistic domestic electricity consumption data on an hourly basis from a few up to thousands of households using input data that is available in public reports and statistics.
Abstract: Electricity consumption data profiles that include details on the consumption can be generated with bottom-up load models. In these models the load is constructed from elementary load components that can be households or even their individual appliances. In this work a simplified bottom-up model is presented. The model can be used to generate realistic domestic electricity consumption data on an hourly basis, from a few up to thousands of households. The model uses input data that is available in public reports and statistics. Two measured data sets from block houses are also applied for statistical analysis, model training, and verification. Our analysis shows that the generated load profiles correlate well with real data. Furthermore, three case studies with generated load data demonstrate some opportunities for appliance-level demand side management (DSM). With a mild DSM scheme using cold loads, the daily peak loads can be reduced by 7.2% on average. With more severe DSM schemes, the peak load on the yearly peak day can be completely levelled with a 42% peak reduction, and a sudden 3 h loss of load can be compensated with a 61% mean load reduction. Copyright © 2005 John Wiley & Sons, Ltd.
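
To make the bottom-up idea concrete, here is a minimal sketch in which each household's hourly load is assembled from randomly switched appliances and then aggregated. The appliance catalogue, power ratings, and on-probabilities are invented for illustration; they are not the statistics or the trained model of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
HOURS = 24

# Hypothetical appliance catalogue: rated power (kW) and hourly on-probability.
# All numbers are illustrative; the paper derives such inputs from public
# reports, statistics, and measured data.
evening = np.concatenate([np.full(17, 0.10), np.full(6, 0.80), [0.30]])
appliances = {
    "fridge":   (0.10, np.full(HOURS, 0.90)),   # cold load, nearly always on
    "lighting": (0.15, evening),
    "cooking":  (1.50, np.where(np.isin(np.arange(HOURS), [11, 12, 17, 18]),
                                0.50, 0.02)),
}

def household_profile():
    """Hourly load (kW) of one household: sum of randomly switched appliances."""
    load = np.zeros(HOURS)
    for power, p_on in appliances.values():
        load += power * (rng.random(HOURS) < p_on)
    return load

def aggregate(n_households):
    """Aggregate profile over statistically independent households."""
    return sum(household_profile() for _ in range(n_households))

profile = aggregate(1000)
print("peak at hour %d, peak load %.0f kW" % (profile.argmax(), profile.max()))
```

Shifting the on-probabilities of the "cold loads" away from the peak hour is then a direct way to experiment with the kind of DSM peak shaving the paper quantifies.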

528 citations


Journal ArticleDOI
TL;DR: An overview of the results obtained with lattice models of fracture is presented, highlighting the relations with statistical physics theories and with more conventional fracture mechanics approaches.
Abstract: Disorder and long-range interactions are two of the key components that make material failure an interesting playfield for the application of statistical mechanics. The cornerstone in this respect has been lattice models of fracture, in which a network of elastic beams, bonds, or electrical fuses with random failure thresholds is subject to an increasing external load. These models describe on a qualitative level the failure processes of real, brittle, or quasi-brittle materials. This has been particularly important in solving the classical engineering problems of material strength: the size dependence of maximum stress and its sample-to-sample statistical fluctuations. At the same time, lattice models pose many new fundamental questions in statistical physics, such as the relation between fracture and phase transitions. Experimental results point to the existence of an intriguing crackling noise in the acoustic emission and of self-affine fractals in the crack surface morphology. Recent advances ...
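
As a minimal, hedged illustration of such threshold models (a simpler cousin of the beam and fuse networks named above, not one of the paper's lattice models), the equal-load-sharing fiber bundle model already exhibits the size dependence of strength and its sample-to-sample fluctuations:

```python
import numpy as np

rng = np.random.default_rng(1)

def fiber_bundle_strength(n):
    """Equal-load-sharing fiber bundle: n fibers with random thresholds.

    The external load is ramped quasi-statically; when a fiber breaks, its
    share is redistributed equally over the survivors, possibly triggering
    an avalanche. Returns the peak load per fiber (the bundle strength).
    """
    thresholds = np.sort(rng.random(n))   # uniform thresholds in [0, 1]
    # With k fibers broken, each of the (n - k) survivors carries F/(n - k);
    # the bundle survives while F <= (n - k) * thresholds[k], so the
    # strength is the maximum of that expression over k.
    k = np.arange(n)
    return np.max((n - k) * thresholds) / n

for n in [100, 1000, 10000]:
    samples = [fiber_bundle_strength(n) for _ in range(200)]
    print(n, "mean strength %.3f" % np.mean(samples),
          "std %.4f" % np.std(samples))
```

For uniform thresholds the mean strength approaches 1/4 while the fluctuations shrink with system size, which is qualitatively the size effect the engineering questions above are concerned with.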

464 citations


Journal ArticleDOI
TL;DR: This paper investigates the feasibility of an audio-based context recognition system; a system is developed and its accuracy is compared to that of human listeners on the same task, with particular emphasis on the computational complexity of the methods.
Abstract: The aim of this paper is to investigate the feasibility of an audio-based context recognition system. Here, context recognition refers to the automatic classification of the context or environment around a device. A system is developed and its accuracy is compared to that of human listeners in the same task. Particular emphasis is placed on the computational complexity of the methods, since the application is of particular interest in resource-constrained portable devices. Simplistic low-dimensional feature vectors are evaluated against more standard spectral features. Using discriminative training, competitive recognition accuracies are achieved with very low-order hidden Markov models (1-3 Gaussian components). A slight improvement in recognition accuracy is observed when linear data-driven feature transformations are applied to mel-cepstral features. The recognition rate of the system as a function of the test sequence length appears to converge only after about 30 to 60 s. Some degree of accuracy can be achieved even with test sequences shorter than 1 s. The average reaction time of the human listeners was 14 s, i.e., somewhat shorter than, but of the same order as, that of the system. The average recognition accuracy of the system was 58%, against the 69% obtained in the listening tests, when recognizing among 24 everyday contexts. The accuracies in recognizing six high-level classes were 82% for the system and 88% for the subjects.
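
A minimal sketch of the classification stage under the low-complexity constraint described above: one very low-order mixture model per context, with the most likely model selected at test time. The features here are synthetic stand-ins for the mel-cepstral features (a real front end would compute MFCCs from audio frames), and a static Gaussian mixture stands in for the paper's low-order hidden Markov models.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic 12-dimensional "feature vectors", one cluster per context.
# These replace real MFCC frames purely for the sake of a runnable sketch.
contexts = ["street", "office", "bus"]
train = {c: rng.normal(loc=3.0 * i, scale=1.0, size=(500, 12))
         for i, c in enumerate(contexts)}

# One low-order model per context (cf. the 1-3 Gaussian components above).
models = {c: GaussianMixture(n_components=2, random_state=0).fit(X)
          for c, X in train.items()}

def classify(frames):
    """Pick the context whose model gives the highest average log-likelihood."""
    scores = {c: m.score(frames) for c, m in models.items()}
    return max(scores, key=scores.get)

test = rng.normal(loc=3.0, scale=1.0, size=(100, 12))  # resembles "office"
print(classify(test))
```

Feeding longer test sequences into classify() stabilizes the log-likelihood averages, which mirrors the convergence of recognition rate with test sequence length reported above.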

436 citations


Journal ArticleDOI
TL;DR: Using functional magnetic resonance imaging, it is shown that not only the presence of pain but also the intensity of the observed pain is encoded in the observer's brain, as occurs during the observer's own pain experience.
Abstract: Understanding another person's experience draws on "mirroring systems," brain circuitries shared by the subject's own actions/feelings and by similar states observed in others. Lately, the experience of pain has also been shown to activate partly the same brain areas in the subject's own and in the observer's brain. Recent studies show remarkable overlap between brain areas activated when a subject undergoes painful sensory stimulation and when he/she observes others suffering from pain. Using functional magnetic resonance imaging, we show that not only the presence of pain but also the intensity of the observed pain is encoded in the observer's brain, as occurs during the observer's own pain experience. When subjects observed pain from the faces of chronic pain patients, activations in the bilateral anterior insula (AI), left anterior cingulate cortex, and left inferior parietal lobe in the observer's brain correlated with their estimates of the intensity of observed pain. Furthermore, the strengths of activation in the left AI and left inferior frontal gyrus during observation of intensified pain correlated with the subjects' self-rated empathy. These findings imply that the intersubjective representation of pain in the human brain is more detailed than has been previously thought.

431 citations


Journal ArticleDOI
TL;DR: In this paper, the stationary spatial distribution of a node moving according to the random waypoint model in a given convex area is analyzed; an explicit expression is given in the form of a one-dimensional integral yielding the density up to a normalization constant.
Abstract: The random waypoint model (RWP) is one of the most widely used mobility models in performance analysis of ad hoc networks. We analyze the stationary spatial distribution of a node moving according to the RWP model in a given convex area. For this, we give an explicit expression, which is in the form of a one-dimensional integral giving the density up to a normalization constant. This result is also generalized to the case where the waypoints have a nonuniform distribution. As a special case, we study a modified RWP model, where the waypoints are on the perimeter. The analytical results are illustrated through numerical examples. Moreover, the analytical results are applied to study certain performance aspects of ad hoc networks, namely, connectivity and traffic load distribution.
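
The qualitative behaviour (nodes concentrate near the centre of the area) can be checked with a short Monte Carlo sketch. The unit square, constant speed, and omitted pause times below are simplifying assumptions for illustration, not the paper's analytical setting:

```python
import numpy as np

rng = np.random.default_rng(0)

def rwp_positions(n_legs=100000, samples_per_leg=5):
    """Sample node positions of the random waypoint model in the unit square.

    Waypoints are uniform; between consecutive waypoints the node moves at
    constant speed. A leg therefore contributes to the time-stationary
    distribution in proportion to its duration (here, its length), so we
    pick legs length-weighted and sample uniformly along each chosen leg.
    """
    wp = rng.random((n_legs + 1, 2))
    a, b = wp[:-1], wp[1:]
    lengths = np.linalg.norm(b - a, axis=1)
    weights = lengths / lengths.sum()
    leg = rng.choice(n_legs, size=n_legs * samples_per_leg, p=weights)
    t = rng.random(n_legs * samples_per_leg)[:, None]
    return a[leg] * (1 - t) + b[leg] * t

pos = rwp_positions()
hist, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=20, density=True)
print("density centre/corner ratio: %.1f" % (hist[10, 10] / hist[0, 0]))
```

The strong centre-to-corner density ratio is exactly the nonuniformity that the paper's one-dimensional integral expression captures analytically.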

375 citations


Journal ArticleDOI
TL;DR: In this article, the degree of surface substitution (DSS) was determined using Si concentrations from XPS survey scans, as well as deconvoluted peaks in high-resolution C1s XPS spectra.
Abstract: Microfibrillated cellulose (MFC), obtained by disintegration of bleached softwood sulphite pulp in a homogenizer, was hydrophobically modified by surface silylation with chlorodimethyl isopropylsilane (CDMIPS). The silylated MFC was characterized by Fourier transform infrared spectroscopy (FT-IR), atomic force microscopy (AFM), transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS) and white light interferometry (WLI). The degree of surface substitution (DSS) was determined using Si concentrations from XPS survey scans, as well as deconvoluted peaks in high-resolution C1s XPS spectra. The DSS values obtained by the two methods were found to be in good agreement. MFC with DSS between 0.6 and 1 could be dispersed in a non-flocculating manner into non-polar solvents, TEM observations showing that the material had kept its initial morphological integrity. However, when CDMIPS in excess of 5 mol CDMIPS per glucose unit in the MFC was used, partial solubilization of the MFC occurred, resulting in a drop in the observed DSS and a loss of the microfibrillar character of the material. The wetting properties of films cast from suspensions of the silylated MFC were also investigated. The contact angles of water on the films increased with increasing DSS of the MFC, approaching the contact angles observed on superhydrophobic surfaces for the MFC with the highest degree of substitution. This is believed to originate from a combination of low surface energy and surface microstructure in the films.

339 citations


Journal ArticleDOI
TL;DR: In this paper, a novel assimilation technique based on (forward) modelling of observed brightness temperatures as a function of snow-pack characteristics is introduced: a Bayesian approach that weighs the space-borne data and a reference field of snow depth (SD), interpolated from discrete synoptic observations, by their estimated statistical accuracy.
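
Only the TL;DR is available for this entry, so as a hedged illustration of "weighing two data sources by their estimated statistical accuracy": for Gaussian errors, the standard inverse-variance combination of a radiometer-derived estimate x_1 and an interpolated reference value x_2 is

$$\hat{x} = \frac{x_1/\sigma_1^2 + x_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2}, \qquad \hat{\sigma}^2 = \left(\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}\right)^{-1},$$

so the more accurate source dominates the assimilated estimate. This is the generic form of such Bayesian weighting, not necessarily the paper's exact formulation.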

338 citations


Journal ArticleDOI
TL;DR: In this article, the authors present life-cycle assessments of newly constructed European and U.S. office buildings from materials production through construction, use, and maintenance to end-of-life treatment.
Abstract: Office buildings are thought to be significant sources of energy use and emissions in industrialized countries, but quantitative assessments of all of the phases of the service life of office buildings are still quite rare. In order to enable environmentally conscious design and management, this paper presents life-cycle assessments of newly constructed European and U.S. office buildings from materials production through construction, use, and maintenance to end-of-life treatment. The significant environmental aspects indicate the dominance of the use phase in the quantified environmental categories, but draw attention to the importance of embedded materials and expected maintenance investments throughout the assumed 50-year service life, especially for particulate matter emissions. The relevance of the materials, construction, maintenance, and end-of-life phases relative to the use of buildings is expected to increase considerably as functional obsolescence of office buildings becomes more rapid, and complete reconstruction and reconfiguration become more frequent. By quantifying the energy use and environmental emissions of each life-cycle phase in more detail, the elements that cause significant emissions can be identified and targeted for improvement.

Journal ArticleDOI
TL;DR: Paper is a material known to everybody. It has a network structure consisting of wood fibres that can be mimicked by cooking a portion of spaghetti and pouring it on a plate, to form a planar assembly of fibres.
Abstract: Paper is a material known to everybody. It has a network structure consisting of wood fibres that can be mimicked by cooking a portion of spaghetti and pouring it on a plate, to form a planar assembly of fibres that lie roughly horizontal. Real paper also contains other constituents added for technical purposes. This review has two main lines of thought. First, in the introductory part, we consider the physics that one encounters when 'using' paper, an everyday material that exhibits the presence of disorder. Questions arise, for instance, as to why some papers are opaque and others translucent, some are sturdy and others sloppy, some readily absorb drops of liquid while others resist the penetration of water. The mechanical and rheological properties of paper and paperboard are also interesting. They are inherently dependent on moisture content. In humid conditions paper is ductile and soft, in dry conditions brittle and hard. In the second part we explain in more detail research problems concerned with paper. We start with paper structure. Paper is made by dewatering a suspension of fibres starting from very low content of solids. The processes of aggregation, sedimentation and clustering are familiar from statistical mechanics. Statistical growth models or packing models can simulate paper formation well and teach a lot about its structure. The second research area that we consider is the elastic and viscoelastic properties and fracture of paper and paperboard. This has traditionally been the strongest area of paper physics. There are many similarities to, but also important differences from, composite materials. Paper has proved to be a convenient test material for new theories in statistical fracture mechanics. Polymer physics and memory effects are encountered when studying creep and stress relaxation in paper. Water is a 'softener' of paper. In humid conditions, the creep rate of paper is much higher than in dry conditions. The third among our topics is the interaction of paper with water. The penetration of water into paper is an interesting transport problem because wood fibres are hygroscopic and swell with water intake. The porous fibre network medium changes as the water first penetrates into the pore space between the fibres and then into the fibres. This is an area where relatively little systematic research has been done. Finally, we summarize our thoughts on paper physics.

Proceedings ArticleDOI
09 Sep 2006
TL;DR: The complexity and large variety of factors involved in students' decisions to drop the course indicate that simple actions to improve teaching or organization on a CS1 course to reduce drop out may be ineffective.
Abstract: This study focuses on CS minor students' decisions to drop out from the CS1 course. The high drop-out percentage has been a problem at Helsinki University of Technology for many years. The course has a yearly enrolment of 500-600 students, and the drop-out percentage has varied from 30 to 50 percent. Since we did not have a clear picture of the drop-out reasons, we conducted a qualitative interview study in which 18 dropouts from the CS1 course were interviewed. The reasons for dropping out were categorized and, in addition, each case was investigated individually. This procedure enabled us both to list the reasons and to reveal their cumulative nature. The results indicate that several reasons affect students' decisions to quit the CS1 course. The most frequent were lack of time and lack of motivation. However, both of these reasons were in turn affected by other factors, such as the perceived difficulty of the course, general difficulties with managing time and planning studies, or the decision to prefer something else. Furthermore, low comfort level and plagiarism played a role in dropping out. In addition, drop-out reasons accumulated. This study shows the complexity and large variety of factors involved in students' decisions to drop the course. This indicates that simple actions to improve teaching or organization on a CS1 course to reduce drop out may be ineffective. Efficient intervention apparently requires a combination of many different actions that take into consideration the versatile nature of the reasons involved in dropping out.

Journal ArticleDOI
TL;DR: In this article, the effects of short-chain alcohols, methanol and ethanol, on two different fully hydrated lipid bilayer systems (POPC and DPPC) in the fluid phase at 323 K were investigated.

Journal ArticleDOI
TL;DR: Because of the long-term and dynamic nature of the M&A process, it is argued that instead of studying the simple performance impact of cultural differences in M&A, research should move on to asking how cultural differences affect the M&A process and its outcome.
Abstract: Do cultural differences have an impact on the performance of M&A? Despite the widely accepted myth that they do, and in a negative way, a review of extant research provides contradictory findings. In this article, we explore reasons for this contradiction and propose solutions in the form of propositions and a theoretical framework. We begin with a brief overview of extant research on the culture-performance relationship in M&A. In light of the contradictions emerging from this review, we move on to identifying three areas of complexity explaining this confusion, and for each one, we suggest propositions to guide future research. We then summarize our argument using a theoretical framework. Because of the long-term and dynamic nature of the M&A process, we argue that instead of studying the simple performance impact of cultural differences in M&A, we should move on to thinking how cultural differences impact on the M&A process and its outcome.

Journal ArticleDOI
TL;DR: In this paper, a gas-phase process of single-walled carbon nanotube (SWCNT) formation, based on thermal decomposition of iron pentacarbonyl or ferrocene in the presence of carbon monoxide (CO), was investigated in ambient-pressure laminar flow reactors in the temperature range of 600-1300 °C.

Journal ArticleDOI
TL;DR: In this article, the authors show that the curvature of the nanotube atomic network breaks the trigonal symmetry of a perfect graphene sheet, making the diffusion anisotropic and strongly influencing the migration barrier.

Journal ArticleDOI
TL;DR: An improved version of the FastICA algorithm is proposed which is asymptotically efficient, i.e., its accuracy given by the residual error variance attains the Cramer-Rao lower bound (CRB).
Abstract: FastICA is one of the most popular algorithms for independent component analysis (ICA), demixing a set of statistically independent sources that have been mixed linearly. A key question is how accurate the method is for finite data samples. We propose an improved version of the FastICA algorithm which is asymptotically efficient, i.e., its accuracy given by the residual error variance attains the Cramer-Rao lower bound (CRB). The error is thus as small as possible. This result is rigorously proven under the assumption that the probability distribution of the independent signal components belongs to the class of generalized Gaussian (GG) distributions with parameter alpha, denoted GG(alpha), for alpha>2. We name the algorithm efficient FastICA (EFICA). The computational complexity of a Matlab implementation of the algorithm is shown to be only slightly (about three times) higher than that of the standard symmetric FastICA. Simulations corroborate these claims and show superior performance of the algorithm compared with the algorithm JADE of Cardoso and Souloumiac and the nonparametric ICA of Boscolo on separating sources with distribution GG(alpha) with arbitrary alpha, as well as on sources with bimodal distribution, and a good performance in separating linearly mixed speech signals.
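
For orientation, here is a sketch of the baseline algorithm that EFICA refines: plain symmetric FastICA with the tanh nonlinearity on whitened data. This is not the EFICA refinement itself, only the standard starting point.

```python
import numpy as np

rng = np.random.default_rng(0)

def fastica_symmetric(X, n_iter=200, tol=1e-10):
    """Symmetric FastICA with g = tanh. X: (n_sources, n_samples), whitened."""
    n, T = X.shape
    W = np.linalg.qr(rng.normal(size=(n, n)))[0]   # random orthogonal start
    for _ in range(n_iter):
        Y = W @ X
        g, g_prime = np.tanh(Y), 1.0 - np.tanh(Y) ** 2
        # Fixed-point update: E[x g(w^T x)] - E[g'(w^T x)] w, row-wise.
        W_new = g @ X.T / T - np.diag(g_prime.mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W^T)^{-1/2} W, via the SVD.
        U, _, Vt = np.linalg.svd(W_new)
        W_new = U @ Vt
        done = np.max(np.abs(np.abs(np.diag(W_new @ W.T)) - 1)) < tol
        W = W_new
        if done:
            break
    return W

# Toy demixing: two Laplacian sources, random mixing, whitening, FastICA.
S = rng.laplace(size=(2, 5000))
A = rng.normal(size=(2, 2))
X = A @ S
X -= X.mean(axis=1, keepdims=True)
E, d, _ = np.linalg.svd(np.cov(X))
Xw = np.diag(d ** -0.5) @ E.T @ X                  # whitened mixtures
W = fastica_symmetric(Xw)
print("estimated sources shape:", (W @ Xw).shape)
```

EFICA's contribution, per the abstract, is an adaptive refinement of this scheme whose residual error variance attains the CRB for GG(alpha) sources.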

Proceedings ArticleDOI
23 Apr 2006
TL;DR: This paper can be seen as a HIP tutorial, since it provides an inside view of the HIP architecture, the HIP base exchange, Encapsulated Security Payload (ESP) security association setup, mobility and multi-homing, and some early experiences with HIP.
Abstract: The Host Identity Protocol (HIP) proposes a new name space, the Host Identity. This name can be any globally unique name, but it has been chosen to be the public key of a public/private key pair. This paper can be seen as a HIP tutorial, since it provides an inside view of the HIP architecture, the HIP base exchange, Encapsulated Security Payload (ESP) security association setup, mobility and multi-homing, and some early experiences with HIP.
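
A schematic of the four-packet HIP base exchange that such a tutorial covers (as specified in the HIP drafts of the time, later standardized in RFC 5201). The message contents below are abbreviated placeholders and the cryptography is elided; this illustrates the handshake structure only, not a working implementation.

```python
# Schematic HIP base exchange; field names abbreviated, crypto omitted.

def base_exchange():
    # I1: initiator's trigger, carrying source/destination Host Identity Tags.
    i1 = {"type": "I1", "hit_i": "HIT_I", "hit_r": "HIT_R"}

    # R1: responder replies with a computational puzzle, its Diffie-Hellman
    # value and public Host Identity, signed. The puzzle rate-limits DoS.
    r1 = {"type": "R1", "puzzle": "K-bit puzzle", "dh_r": "DH_R",
          "hi_r": "public key", "sig": "SIG_R"}

    # I2: initiator returns the puzzle solution plus its own DH value and
    # Host Identity, signed; both sides can now derive keying material.
    i2 = {"type": "I2", "solution": "J", "dh_i": "DH_I",
          "hi_i": "public key", "sig": "SIG_I"}

    # R2: responder confirms; ESP security associations are then set up from
    # the DH-derived keys, and ESP-protected data can flow.
    r2 = {"type": "R2", "hmac": "HMAC", "sig": "SIG_R"}
    return [i1, r1, i2, r2]

for pkt in base_exchange():
    print(pkt["type"], sorted(k for k in pkt if k != "type"))
```

Because the endpoint is named by its Host Identity rather than its IP address, the same security associations survive address changes, which is what enables the mobility and multi-homing support the paper describes.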

Journal ArticleDOI
TL;DR: In this paper, the effect of the reaction conditions on the system was studied, and conditions for optimal selectivity toward ethers were found near an isobutene/glycerol molar ratio of 3 at 80 °C.
Abstract: Glycerol is a by-product of biodiesel production, for which new uses are being sought. Etherification of glycerol with isobutene in the liquid phase over an acidic ion exchange resin catalyst gave five product ethers and, as a side reaction, isobutene reacted to C8–C16 hydrocarbons. The effect of the reaction conditions on the system was studied, and conditions for optimal selectivity toward ethers were found near an isobutene/glycerol molar ratio of 3 at 80 °C. The conditions controlling the distribution of the product ethers were also studied, and it was found that the extent of the etherification reaction, and thus the main ether products, can be changed by varying the reaction conditions.

Journal ArticleDOI
TL;DR: A quantitative relationship between work performance and ventilation is demonstrated within a wide range of ventilation rates, and use of this relationship in ventilation design and feasibility studies may be preferable to the current practice, which ignores the relationship between ventilation and productivity.
Abstract: Outdoor air ventilation rates vary considerably between and within buildings, and may be too low in some spaces. The purpose of this study was to evaluate the potential work performance benefits of increased ventilation. We analyzed the literature relating work performance with ventilation rate and employed statistical analyses with weighting factors to combine the results of different studies. The studies included in the review assessed performance of various tasks in laboratory experiments and measured performance at work in real buildings. Almost all studies found increases in performance with higher ventilation rates. The studies indicated typically a 1-3% improvement in average performance per 10 l/s-person increase in outdoor air ventilation rate. The performance increase per unit increase in ventilation was bigger with ventilation rates below 20 l/s-person and almost negligible with ventilation rates over 45 l/s-person. The performance increase was statistically significant with increased ventilation rates up to 15 l/s-person with 95% CI and up to 17 l/s-person with 90% CI. Practical implications: We have demonstrated a quantitative relationship between work performance and ventilation within a wide range of ventilation rates. The model shows a continuous increase in performance per unit increase in ventilation rate from 6.5 l/s-person to 65 l/s-person. The increase is statistically significant up to 15 l/s-person. This relationship has a high level of uncertainty; however, use of this relationship in ventilation design and feasibility studies may be preferable to the current practice, which ignores the relationship between ventilation and productivity.
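
A hedged, piecewise-constant caricature of the reported dose-response (roughly 1-3% average performance gain per 10 l/s-person of added outdoor air, larger below 20 l/s-person and near zero above 45 l/s-person) can be written down for back-of-the-envelope feasibility checks. The band rates below are illustrative midpoints, not the paper's fitted model:

```python
def relative_performance_gain(v_from, v_to):
    """Approximate % performance gain when raising the outdoor air rate
    from v_from to v_to (both in l/s-person), within 6.5-65 l/s-person."""
    # (band low, band high, % gain per additional l/s-person); illustrative.
    bands = [(6.5, 20, 0.25), (20, 45, 0.10), (45, 65, 0.01)]
    gain = 0.0
    for lo, hi, rate in bands:
        overlap = max(0.0, min(v_to, hi) - max(v_from, lo))
        gain += rate * overlap
    return gain

print("%.1f%% expected gain" % relative_performance_gain(10, 25))
```

As the abstract stresses, the underlying relationship carries high uncertainty, so such a function is only a placeholder for ignoring the ventilation-productivity link entirely.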

Book ChapterDOI
TL;DR: Reactivity of the mu rhythm, especially its motor cortex 20-Hz component, provides an illuminating window to the involvement of the human sensorimotor system in the loop that connects action and perception with the environment.
Abstract: The rolandic mu rhythm consists of two main frequency components: one around 10 Hz and the other around 20 Hz. Reactivity of the mu rhythm, especially its motor cortex 20-Hz component, provides an illuminating window to the involvement of the human sensorimotor system in the loop that connects action and perception with the environment.

Journal ArticleDOI
TL;DR: In this article, the authors give an overview of commercial carbonate chemical production routes that do (or in a near future can) make use of the CO2 that is produced at a large scale from human activities, and address the process technology, market potential and other aspects of mineral carbonation for long-term CO2 storage as an alternative for, for example, storage in underground aquifers.

Journal ArticleDOI
26 May 2006-Science
TL;DR: It is reported that controlled irradiation of multiwalled carbon nanotubes can cause large pressure buildup within the nanotube cores that can plastically deform, extrude, and break solid materials that are encapsulated inside the core.
Abstract: Closed-shell carbon nanostructures, such as carbon onions, have been shown to act as self-contracting high-pressure cells under electron irradiation. We report that controlled irradiation of multiwalled carbon nanotubes can cause large pressure buildup within the nanotube cores that can plastically deform, extrude, and break solid materials that are encapsulated inside the core. We further show by atomistic simulations that the internal pressure inside nanotubes can reach values higher than 40 gigapascals. Nanotubes can thus be used as robust nanoscale jigs for extruding and deforming hard nanomaterials and for modifying their properties, as well as templates for the study of individual nanometer-sized crystals under high pressure.

Journal ArticleDOI
TL;DR: This paper compares the performance of three standard allocations, namely max-min fairness, proportional fairness and balanced fairness, in a communication network whose resources are shared by a random number of data flows, and shows that this model is representative of a rich class of wired and wireless networks.
Abstract: We compare the performance of three standard allocations, namely max-min fairness, proportional fairness and balanced fairness, in a communication network whose resources are shared by a random number of data flows. The model consists of a network of processor-sharing queues. The vector of service rates, which is constrained by some compact, convex capacity set representing the network resources, is a function of the number of customers in each queue. This function determines the way network resources are allocated. We show that this model is representative of a rich class of wired and wireless networks. We give in this general framework the stability condition of max-min fairness, proportional fairness and balanced fairness and compare their performance on a number of toy networks.
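
Of the three allocations, max-min fairness is the easiest to compute explicitly, via progressive filling. The sketch below does this for a classic "parking-lot" toy network of the kind the paper uses for comparison (the network and capacities are illustrative; proportional and balanced fairness require their own optimizations and are not shown):

```python
def max_min_fair(capacities, routes):
    """Progressive filling: rates of all unfrozen flows grow together until
    some link saturates; flows crossing a saturated link are then frozen.

    capacities: dict link -> capacity; routes: dict flow -> set of links.
    """
    rates = {f: 0.0 for f in routes}
    active = set(routes)
    residual = dict(capacities)
    while active:
        # Smallest equal increment that saturates a link carrying active flows.
        inc = min(residual[l] / sum(1 for f in active if l in routes[f])
                  for l in residual
                  if any(l in routes[f] for f in active))
        saturated = set()
        for l in residual:
            n = sum(1 for f in active if l in routes[f])
            residual[l] -= inc * n
            if n and residual[l] < 1e-12:
                saturated.add(l)
        for f in list(active):
            rates[f] += inc
            if routes[f] & saturated:
                active.remove(f)
    return rates

# Parking-lot network: one long flow shares both unit-capacity links.
caps = {"l1": 1.0, "l2": 1.0}
flows = {"long": {"l1", "l2"}, "short1": {"l1"}, "short2": {"l2"}}
print(max_min_fair(caps, flows))   # every flow gets rate 0.5
```

Proportional fairness would instead give the long flow less than the short flows on this topology, which is precisely the kind of difference the paper's comparison quantifies.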

Journal ArticleDOI
TL;DR: As discussed by the authors, a debate persists about the distinctiveness of entrepreneurship research: it is seen as fragmented, and its results are considered non-cumulative, handicapping the evolution of entrepreneurship.
Abstract: A debate persists about the distinctiveness of entrepreneurship research. Entrepreneurship research is seen as fragmented and its results are considered noncumulative, handicapping the evolution of entrepreneurship.

Journal ArticleDOI
09 Nov 2006-Nature
TL;DR: Experimental results are reported showing that at low temperatures heat is transferred by photon radiation, when electron–phonon as well as normal electronic heat conduction is frozen out.
Abstract: A clever experiment that makes use of two metallic islands connected by superconducting leads now confirms that thermal conduction by photons is limited by the same quantum value. Although the result is mainly of fundamental importance, there are implications for the design of bolometers, detectors of far-infrared light that are used in astrophysical studies, and electronic micro-refrigerators. The thermal conductance of a single channel is limited by its unique quantum value G_Q, as was shown theoretically [1] in 1983. This result closely resembles the well-known quantization of electrical conductance in ballistic one-dimensional conductors [2,3]. Interestingly, all particles, irrespective of whether they are bosons or fermions, have the same quantized thermal conductance [4,5] when they are confined within dimensions that are small compared to their characteristic wavelength. The single-mode heat conductance is particularly relevant in nanostructures. Quantized heat transport through submicrometre dielectric wires by phonons has been observed [6], and it has been predicted to influence cooling of electrons in metals at very low temperatures due to electromagnetic radiation [7]. Here we report experimental results showing that at low temperatures heat is transferred by photon radiation, when electron–phonon [8] as well as normal electronic heat conduction is frozen out. We study heat exchange between two small pieces of normal metal, connected to each other only via superconducting leads, which are ideal insulators against conventional thermal conduction. Each superconducting lead is interrupted by a switch of electromagnetic (photon) radiation in the form of a DC-SQUID (a superconducting loop with two Josephson tunnel junctions). We find that the thermal conductance between the two metal islands mediated by photons indeed approaches the expected quantum limit of G_Q at low temperatures. Our observation has practical implications, for example for the performance and design of ultra-sensitive bolometers (detectors of far-infrared light) and electronic micro-refrigerators [9], whose operation is largely dependent on weak thermal coupling between the device and its environment.
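
For reference, the quantum limit mentioned above is, per conduction channel and irrespective of the carrier statistics,

$$G_Q = \frac{\pi k_B^2 T}{6\hbar} \approx 9.5 \times 10^{-13}\,\mathrm{W\,K^{-2}} \times T,$$

i.e. about 1 pW/K at T = 1 K, which makes clear why such weak thermal couplings matter for bolometer and micro-refrigerator design.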

Journal ArticleDOI
TL;DR: Data indicate that unilateral touch of fingers is associated, in addition to the well known activation of the contralateral SI cortex, with deactivation of the ipsilateral SI cortex and of the MI cortex of both hemispheres.
Abstract: The whole human primary somatosensory (SI) cortex is activated by contralateral tactile stimuli, whereas its subarea 2 displays neuronal responses also to ipsilateral stimuli. Here we report on a transient deactivation of area 3b of the ipsilateral SI during long-lasting tactile stimulation. We collected functional magnetic resonance imaging data with a 3 T scanner from 10 healthy adult subjects while tactile pulses were delivered at 1, 4, or 10 Hz in 25 s blocks to three right-hand fingers. In the contralateral SI cortex, activation [positive blood oxygenation level-dependent (BOLD) response] outlasted the stimulus blocks by 20 s, with an average duration of 45 s. In contrast, a transient deactivation (negative BOLD response) occurred in the ipsilateral rolandic cortex with an average duration of 18 s. Additional recordings on 10 subjects confirmed that the deactivation was not limited to the right SI but occurred in the SI cortex ipsilateral to the stimulated hand. Moreover, the primary motor cortex (MI) contained voxels that were phasically deactivated in response to both ipsilateral and contralateral touch. These data indicate that unilateral touch of the fingers is associated, in addition to the well-known activation of the contralateral SI cortex, with deactivation of the ipsilateral SI cortex and of the MI cortex of both hemispheres. The ipsilateral SI deactivation could result from transcallosal inhibition, whereas intracortical SI-MI connections could be responsible for the MI deactivation. The shorter time course of deactivation than activation would agree with a stronger decay of inhibitory than of excitatory postsynaptic potentials at the applied stimulus repetition rates.

Journal ArticleDOI
TL;DR: In this article, the authors presented a ride-through simulation study of a 2MW wind-power doubly fed induction generator (DFIG) under a short-term unsymmetrical network disturbance.
Abstract: This paper presents a ride-through simulation study of a 2-MW wind-power doubly fed induction generator (DFIG) under a short-term unsymmetrical network disturbance. The DFIG is represented by an analytical two-axis model with constant lumped parameters and by a finite element method (FEM)-based model. The model of the DFIG is coupled with the model of the active crowbar protected and direct torque controlled (DTC) frequency converter, the model of the main transformer, and a simple model of the grid. The simulation results show the ride-through capability of the studied doubly fed wind-power generator. The results obtained by means of the analytical model and the FEM model are compared in order to reveal the influence of the different modeling approaches on the short-term transient simulation accuracy.

Journal ArticleDOI
TL;DR: In this paper, the authors review the approximation error theory and investigate the interplay between the mesh density and measurement accuracy in the case of optical diffusion tomography, showing that if the approximation errors are estimated and employed, it is possible to use mesh densities that would be unacceptable with a conventional measurement model.
Abstract: Model reduction is often required in several applications, typically due to limited available time, computer memory or other restrictions. In problems that are related to partial differential equations, this often means that we are bound to use sparse meshes in the model for the forward problem. Conversely, if we are given more and more accurate measurements, we have to employ increasingly accurate forward problem solvers in order to exploit the information in the measurements. Optical diffusion tomography (ODT) is an example in which the typical required accuracy for the forward problem solver leads to computational times that may be unacceptable both in biomedical and industrial end applications. In this paper we review the approximation error theory and investigate the interplay between the mesh density and measurement accuracy in the case of optical diffusion tomography. We show that if the approximation errors are estimated and employed, it is possible to use mesh densities that would be unacceptable with a conventional measurement model.
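
In the approximation error framework reviewed here (in its standard form), the accurate forward map A(x) is replaced by a coarse-mesh solver \tilde{A}(x) and the discrepancy is carried along as additional noise:

$$y = A(x) + e = \tilde{A}(x) + \underbrace{\bigl(A(x) - \tilde{A}(x)\bigr)}_{\varepsilon(x)} + e,$$

where the approximation error \varepsilon(x) is approximated as Gaussian, with its mean and covariance estimated by sampling x from the prior and evaluating both solvers. Inverting with the coarse model plus this enhanced error statistic, rather than with the coarse model alone, is what allows the mesh densities described above, which would otherwise be unacceptable, to be used in optical diffusion tomography.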