
Showing papers by "Aalto University published in 2013"


Journal ArticleDOI
TL;DR: MNE-Python as discussed by the authors is an open-source software package that provides state-of-the-art algorithms implemented in Python that cover multiple methods of data preprocessing, source localization, statistical analysis, and estimation of functional connectivity between distributed brain regions.
Abstract: Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals generated by neuronal activity in the brain. Using these signals to characterize and locate neural activation in the brain is a challenge that requires expertise in physics, signal processing, statistics, and numerical methods. As part of the MNE software suite, MNE-Python is an open-source software package that addresses this challenge by providing state-of-the-art algorithms implemented in Python that cover multiple methods of data preprocessing, source localization, statistical analysis, and estimation of functional connectivity between distributed brain regions. All algorithms and utility functions are implemented in a consistent manner with well-documented interfaces, enabling users to create M/EEG data analysis pipelines by writing Python scripts. Moreover, MNE-Python is tightly integrated with the core Python libraries for scientific computation (NumPy, SciPy) and visualization (matplotlib and Mayavi), as well as the greater neuroimaging ecosystem in Python via the Nibabel package. The code is provided under the new BSD license allowing code reuse, even in commercial products. Although MNE-Python has only been under heavy development for a couple of years, it has rapidly evolved with expanded analysis capabilities and pedagogical tutorials because multiple labs have collaborated during code development to help share best practices. MNE-Python also gives easy access to preprocessed datasets, helping users to get started quickly and facilitating reproducibility of methods by other researchers. Full documentation, including dozens of examples, is available at http://martinos.org/mne.

1,723 citations


Posted Content
TL;DR: This tutorial surveys a wide range of IBFD self-interference mitigation techniques and discusses numerous other research challenges and opportunities in the design and analysis of IB FD wireless systems.
Abstract: In-band full-duplex (IBFD) operation has emerged as an attractive solution for increasing the throughput of wireless communication systems and networks. With IBFD, a wireless terminal is allowed to transmit and receive simultaneously in the same frequency band. This tutorial paper reviews the main concepts of IBFD wireless. Because one of the biggest practical impediments to IBFD operation is the presence of self-interference, i.e., the interference caused by an IBFD node's own transmissions to its desired receptions, this tutorial surveys a wide range of IBFD self-interference mitigation techniques. Also discussed are numerous other research challenges and opportunities in the design and analysis of IBFD wireless systems.
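
The scale of the self-interference problem can be seen with simple link-budget arithmetic. The sketch below uses assumed example numbers (a 20 dBm transmitter, a -90 dBm receiver noise floor), not figures from the paper:

```python
# Illustrative link-budget arithmetic for in-band full-duplex (IBFD).
# The transmit power and noise floor are assumed example values.

def required_si_cancellation_db(tx_power_dbm, noise_floor_dbm, margin_db=0.0):
    """Total self-interference suppression needed to push the node's own
    transmission down to (or below) its receiver noise floor."""
    return tx_power_dbm - noise_floor_dbm + margin_db

# A 20 dBm transmitter and a -90 dBm noise floor imply roughly 110 dB of
# combined passive, analog, and digital cancellation.
print(required_si_cancellation_db(20, -90))  # -> 110.0
```

This is why surveys of IBFD typically combine several mitigation stages: no single technique provides 110 dB of suppression on its own.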

1,549 citations


Journal ArticleDOI
Teuvo Kohonen1
TL;DR: The self-organizing map (SOM) is an automatic data-analysis method widely applied to clustering problems and data exploration in industry, finance, natural sciences, and linguistics and can be found in the management of massive textual databases and in bioinformatics.
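
The core of the SOM algorithm is a competitive update: find the best-matching unit for an input, then pull nearby units toward that input. A minimal sketch, assuming a 1-D map and a Gaussian neighborhood (all numbers and parameter choices here are illustrative, not from the paper):

```python
import math

# One self-organizing map (SOM) training step for a 1-D map of units.
# lr (learning rate) and sigma (neighborhood width) are illustrative.

def som_step(weights, x, lr=0.5, sigma=1.0):
    """weights: list of unit weight vectors; x: input vector.
    Returns the index of the best-matching unit (BMU)."""
    # 1) Find the best-matching unit (smallest squared Euclidean distance).
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    bmu = dists.index(min(dists))
    # 2) Pull every unit toward x, weighted by its map distance to the BMU.
    for i, w in enumerate(weights):
        h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
        weights[i] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
    return bmu

weights = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
bmu = som_step(weights, [2.0, 2.0])
print(bmu)  # unit 2 matches the input exactly
```

Repeated over many inputs with a shrinking neighborhood, this update makes nearby map units respond to similar inputs, which is what makes the SOM useful for clustering and data exploration.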

1,079 citations


Book
Simo Särkkä1
01 Sep 2013
TL;DR: This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework, learning what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages.
Abstract: Filtering and smoothing methods are used to produce an accurate estimate of the state of a time-varying system based on multiple observational inputs (data). Interest in these methods has exploded in recent years, with numerous applications emerging in fields such as navigation, aerospace engineering, telecommunications and medicine. This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework. Readers learn what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages. They also discover how state-of-the-art Bayesian parameter estimation methods can be combined with state-of-the-art filtering and smoothing algorithms. The book's practical and algorithmic approach assumes only modest mathematical prerequisites. Examples include MATLAB computations, and the numerous end-of-chapter exercises include computational assignments. MATLAB/GNU Octave source code is available for download at www.cambridge.org/sarkka, promoting hands-on work with the methods.
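
The simplest instance of the Bayesian filtering framework the book presents is a one-dimensional Kalman filter. The sketch below assumes a random-walk state model with illustrative noise values; it is not code from the book's MATLAB/Octave companion:

```python
# Minimal one-dimensional Kalman filter: state x_k = x_{k-1} + w_k,
# observation y_k = x_k + v_k, with process variance q and
# measurement variance r (values here are illustrative assumptions).

def kalman_1d(ys, q=0.1, r=1.0, m0=0.0, p0=1.0):
    """Return the filtered posterior means for measurements ys."""
    m, p = m0, p0
    means = []
    for y in ys:
        # Prediction step: propagate mean and variance through the model.
        p = p + q
        # Update step: the Kalman gain blends prediction and measurement.
        k = p / (p + r)
        m = m + k * (y - m)
        p = (1 - k) * p
        means.append(m)
    return means

est = kalman_1d([1.0, 1.0, 1.0])
# Estimates move monotonically from the prior mean (0.0) toward the data (1.0).
print(est)
```

Particle filters generalize exactly this predict/update cycle to non-linear, non-Gaussian models by replacing the closed-form mean and variance with a weighted sample set.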

879 citations


Book
30 May 2013
TL;DR: This special issue includes eight original works that detail the further developments of extreme learning machines (ELMs) in theories, applications, and hardware implementation.
Abstract: This special issue includes eight original works that detail the further developments of ELMs in theories, applications, and hardware implementation. In "Representational Learning with ELMs for Big Data," Liyanaarachchi Lekamalage Chamara Kasun, Hongming Zhou, Guang-Bin Huang, and Chi Man Vong propose using the ELM as an auto-encoder for learning feature representations using singular values. In "A Secure and Practical Mechanism for Outsourcing ELMs in Cloud Computing," Jiarun Lin, Jianping Yin, Zhiping Cai, Qiang Liu, Kuan Li, and Victor C.M. Leung propose a method for handling large data applications by outsourcing to the cloud that would dramatically reduce ELM training time. In "ELM-Guided Memetic Computation for Vehicle Routing," Liang Feng, Yew-Soon Ong, and Meng-Hiot Lim consider the ELM as an engine for automating the encapsulation of knowledge memes from past problem-solving experiences. In "ELMVIS: A Nonlinear Visualization Technique Using Random Permutations and ELMs," Anton Akusok, Amaury Lendasse, Rui Nian, and Yoan Miche propose an ELM method for data visualization based on random permutations to map original data and their corresponding visualization points. In "Combining ELMs with Random Projections," Paolo Gastaldo, Rodolfo Zunino, Erik Cambria, and Sergio Decherchi analyze the relationships between ELM feature-mapping schemas and the paradigm of random projections. In "Reduced ELMs for Causal Relation Extraction from Unstructured Text," Xuefeng Yang and Kezhi Mao propose combining ELMs with neuron selection to optimize the neural network architecture and improve the ELM ensemble's computational efficiency. In "A System for Signature Verification Based on Horizontal and Vertical Components in Hand Gestures," Beom-Seok Oh, Jehyoung Jeon, Kar-Ann Toh, Andrew Beng Jin Teoh, and Jaihie Kim propose a novel paradigm for hand signature biometry for touchless applications without the need for handheld devices. 
Finally, in "An Adaptive and Iterative Online Sequential ELM-Based Multi-Degree-of-Freedom Gesture Recognition System," Hanchao Yu, Yiqiang Chen, Junfa Liu, and Guang-Bin Huang propose an online sequential ELM-based efficient gesture recognition algorithm for touchless human-machine interaction.

705 citations


Journal ArticleDOI
B. S. Acharya1, Marcos Daniel Actis2, T. Aghajani3, G. Agnetta4  +979 moreInstitutions (122)
TL;DR: The Cherenkov Telescope Array (CTA) as discussed by the authors is a very high-energy (VHE) gamma ray observatory with an international collaboration with more than 1000 members from 27 countries in Europe, Asia, Africa and North and South America.

701 citations


Posted Content
TL;DR: In this article, the authors explore different ways to extend a recurrent neural network (RNN) to a deep RNN by carefully analyzing and understanding the architecture of an RNN.
Abstract: In this paper, we explore different ways to extend a recurrent neural network (RNN) to a deep RNN. We start by arguing that the concept of depth in an RNN is not as clear as it is in feedforward neural networks. By carefully analyzing and understanding the architecture of an RNN, however, we find three points of an RNN which may be made deeper: (1) input-to-hidden function, (2) hidden-to-hidden transition and (3) hidden-to-output function. Based on this observation, we propose two novel architectures of a deep RNN which are orthogonal to an earlier attempt of stacking multiple recurrent layers to build a deep RNN (Schmidhuber, 1992; El Hihi and Bengio, 1996). We provide an alternative interpretation of these deep RNNs using a novel framework based on neural operators. The proposed deep RNNs are empirically evaluated on the tasks of polyphonic music prediction and language modeling. The experimental result supports our claim that the proposed deep RNNs benefit from the depth and outperform the conventional, shallow RNNs.
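
The three points of depth can be made concrete with a toy recurrence. The sketch below uses scalars instead of matrices, and every weight value is an illustrative assumption rather than anything from the paper:

```python
import math

# Toy sketch of the three places an RNN can be deepened, following the
# paper's decomposition: (1) input-to-hidden, (2) hidden-to-hidden,
# (3) hidden-to-output. Scalar weights stand in for weight matrices.

def mlp(x, weights):
    """A tiny feed-forward stack standing in for a 'deep' function."""
    for w in weights:
        x = math.tanh(w * x)
    return x

def deep_rnn_step(h, x):
    u = mlp(x, [0.5, 1.5])       # (1) deep input-to-hidden function
    h = mlp(h + u, [1.0, 0.8])   # (2) deep hidden-to-hidden transition
    y = mlp(h, [2.0])            # (3) deep hidden-to-output function
    return h, y

h = 0.0
for x in [1.0, -0.5, 0.25]:
    h, y = deep_rnn_step(h, x)
print(h, y)
```

A conventional shallow RNN is the special case where each of the three `mlp` calls collapses to a single affine map plus nonlinearity; the paper's proposal is precisely to deepen them independently.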

690 citations


Journal ArticleDOI
TL;DR: Nanotubes and graphene have emerged as promising materials for use in ultrafast fiber lasers as discussed by the authors, and their unique electrical and optical properties enable them to be used as saturable absorbers that have fast responses and broadband operation and can be easily integrated in fibre lasers.
Abstract: Nanotubes and graphene have emerged as promising materials for use in ultrafast fibre lasers. Their unique electrical and optical properties enable them to be used as saturable absorbers that have fast responses and broadband operation and that can be easily integrated in fibre lasers.

673 citations


Journal ArticleDOI
TL;DR: The tunable interaction strengths provide tools for understanding light-induced macroscopic motions in photoresponsive azobenzene-containing polymers, and the directionality renders halogen bonding useful in the design on functional supramolecular liquid crystals and gel-phase materials.
Abstract: Halogen bonding is an emerging noncovalent interaction for constructing supramolecular assemblies. Though similar to the more familiar hydrogen bonding, four primary differences between these two interactions make halogen bonding a unique tool for molecular recognition and the design of functional materials. First, halogen bonds tend to be much more directional than (single) hydrogen bonds. Second, the interaction strength scales with the polarizability of the bond-donor atom, a feature that researchers can tune through single-atom mutation. In addition, halogen bonds are hydrophobic whereas hydrogen bonds are hydrophilic. Lastly, the size of the bond-donor atom (halogen) is significantly larger than hydrogen. As a result, halogen bonding provides supramolecular chemists with design tools that cannot be easily met with other types of noncovalent interactions and opens up unprecedented possibilities in the design of smart functional materials.This Account highlights the recent advances in the design of hal...

669 citations


Journal ArticleDOI
TL;DR: The pseudolikelihood method, applied to 21-state Potts models describing the statistical properties of families of evolutionarily related proteins, significantly outperforms existing approaches to the direct-coupling analysis, the latter being based on standard mean-field techniques.
Abstract: Spatially proximate amino acids in a protein tend to coevolve. A protein's three-dimensional (3D) structure hence leaves an echo of correlations in the evolutionary record. Reverse engineering 3D structures from such correlations is an open problem in structural biology, pursued with increasing vigor as more and more protein sequences continue to fill the data banks. Within this task lies a statistical inference problem, rooted in the following: correlation between two sites in a protein sequence can arise from firsthand interaction but can also be network-propagated via intermediate sites; observed correlation is not enough to guarantee proximity. To separate direct from indirect interactions is an instance of the general problem of inverse statistical mechanics, where the task is to learn model parameters (fields, couplings) from observables (magnetizations, correlations, samples) in large systems. In the context of protein sequences, the approach has been referred to as direct-coupling analysis. Here we show that the pseudolikelihood method, applied to 21-state Potts models describing the statistical properties of families of evolutionarily related proteins, significantly outperforms existing approaches to the direct-coupling analysis, the latter being based on standard mean-field techniques. This improved performance also relies on a modified score for the coupling strength. The results are verified using known crystal structures of specific sequence instances of various protein families. Code implementing the new method can be found at http://plmdca.csc.kth.se/.
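
The pseudolikelihood idea can be illustrated on the smallest possible case: instead of the full likelihood, maximize the product of each site's conditional probability given the rest. The toy below is a two-site, two-state Potts model with a single coupling that rewards equal states; the data and coupling values are illustrative assumptions, not the paper's 21-state protein model:

```python
import math

# Pseudolikelihood for a two-site, two-state Potts model with one
# coupling J that rewards equal states (illustrative toy example).

def cond_prob(sigma_r, sigma_other, J):
    """P(sigma_r | sigma_other) under energy J * [sigma_r == sigma_other]."""
    num = math.exp(J * (sigma_r == sigma_other))
    den = sum(math.exp(J * (a == sigma_other)) for a in (0, 1))
    return num / den

def log_pl(samples, J):
    """Log-pseudolikelihood: sum of per-site conditional log-probabilities."""
    return sum(math.log(cond_prob(s[r], s[1 - r], J))
               for s in samples for r in (0, 1))

aligned = [(0, 0), (1, 1), (0, 0)]  # the two sites coevolve
# A positive coupling explains coevolving data better than no coupling:
print(log_pl(aligned, 1.0) > log_pl(aligned, 0.0))  # -> True
```

The computational appeal is that each conditional normalizes over just q states per site, avoiding the intractable global partition function that full maximum likelihood for Potts models requires.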

637 citations


Journal ArticleDOI
TL;DR: This manuscript aims at making recommendations for a number of important data acquisition and data analysis steps and suggests details that should be specified in manuscripts reporting MEG studies, in order to facilitate interpretation and reproduction of the results.

Journal ArticleDOI
TL;DR: Positron annihilation spectroscopy is particularly suitable for studying vacancy-type defects in semiconductors and combining state-of-the-art experimental and theoretical methods allows for detailed identification of the defects and their chemical surroundings as mentioned in this paper.
Abstract: Positron annihilation spectroscopy is particularly suitable for studying vacancy-type defects in semiconductors. Combining state-of-the-art experimental and theoretical methods allows for detailed identification of the defects and their chemical surroundings. Also charge states and defect levels in the band gap are accessible. In this review the main experimental and theoretical analysis techniques are described. The usage of these methods is illustrated through examples in technologically important elemental and compound semiconductors. Future challenges include the analysis of noncrystalline materials and of transient defect-related phenomena.

Journal ArticleDOI
TL;DR: In this paper, the optical properties of transition metal dichalcogenide (TMD) bilayer heterostructures consisting of MoS${}_{2}$ layers sandwiched with WS${}_{2}$, MoSe${}_{2}$, MoTe${}_{2}$, BN, or graphene sheets were investigated.
Abstract: We calculate from first principles the electronic structure and optical properties of a number of transition metal dichalcogenide (TMD) bilayer heterostructures consisting of MoS${}_{2}$ layers sandwiched with WS${}_{2}$, MoSe${}_{2}$, MoTe${}_{2}$, BN, or graphene sheets. Contrary to previous works, the systems are constructed in such a way that the unstrained lattice constants of the constituent incommensurate monolayers are retained. We find strong interaction between the $\ensuremath{\Gamma}$-point states in all TMD/TMD heterostructures, which can lead to an indirect gap. On the other hand, states near the $K$ point remain as in the monolayers. When TMDs are paired with BN or graphene layers, the interaction around the $\ensuremath{\Gamma}$-point is negligible, and the electronic structure resembles that of two independent monolayers. Calculations of optical properties of the MoS${}_{2}$/WS${}_{2}$ system show that, even when the valence- and conduction-band edges are located in different layers, the mixing of optical transitions is minimal, and the optical characteristics of the monolayers are largely retained in these heterostructures. The intensity of interlayer transitions is found to be negligibly small, a discouraging result for engineering the optical gap of TMDs by heterostructuring.

Journal ArticleDOI
TL;DR: By combining high-resolution transmission electron microscopy experiments and first-principles calculations, this paper studied production, diffusion, and agglomeration of sulfur vacancies in monolayer MoS${}_{2}$ under electron irradiation.
Abstract: By combining high-resolution transmission electron microscopy experiments and first-principles calculations, we study production, diffusion, and agglomeration of sulfur vacancies in monolayer MoS${}_{2}$ under electron irradiation. Single vacancies are found to be mobile under the electron beam and tend to agglomerate into lines. Different kinds of such extended defects are identified in the experiments, and their atomic structures and electronic properties are determined with the help of calculations. The orientation of line defects is found to be sensitive to mechanical strain. Our calculations also indicate that the electronic properties of the extended defects can be tuned by filling vacancy lines with other atomic species, thereby suggesting a way for strain and electron-beam-assisted engineering of MoS${}_{2}$-based nanostructures.

Journal ArticleDOI
TL;DR: In this paper, a simulation-based optimization method was used to find the cost-optimal and nZEB energy performance levels for a study case of a single-family house in Finland, where different options of building envelope parameters, heat-recovery units, and heating/cooling systems as well as various sizes of thermal and photovoltaic solar systems were explored as design options via three-stage optimization.

Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, M. I. R. Alves2, C. Armitage-Caplan3  +467 moreInstitutions (88)
TL;DR: The ESA's Planck satellite was launched 14 May 2009 and has been scanning the microwave and sub-millimetre sky continuously since 12 August 2009 as discussed by the authors, where it has measured gravitational lensing of CMB anisotropies at greater than 25 sigma.
Abstract: The ESA's Planck satellite, dedicated to studying the early Universe and its subsequent evolution, was launched 14 May 2009 and has been scanning the microwave and submillimetre sky continuously since 12 August 2009. This paper gives an overview of the mission and its performance, the processing, analysis, and characteristics of the data, the scientific results, and the science data products and papers in the release. The science products include maps of the CMB and diffuse extragalactic foregrounds, a catalogue of compact Galactic and extragalactic sources, and a list of sources detected through the SZ effect. The likelihood code used to assess cosmological models against the Planck data and a lensing likelihood are described. Scientific results include robust support for the standard six-parameter LCDM model of cosmology and improved measurements of its parameters, including a highly significant deviation from scale invariance of the primordial power spectrum. The Planck values for these parameters and others derived from them are significantly different from those previously determined. Several large-scale anomalies in the temperature distribution of the CMB, first detected by WMAP, are confirmed with higher confidence. Planck sets new limits on the number and mass of neutrinos, and has measured gravitational lensing of CMB anisotropies at greater than 25 sigma. Planck finds no evidence for non-Gaussianity in the CMB. Planck's results agree well with results from the measurements of baryon acoustic oscillations. Planck finds a lower Hubble constant than found in some more local measures. Some tension is also present between the amplitude of matter fluctuations derived from CMB data and that derived from SZ data. The Planck and WMAP power spectra are offset from each other by an average level of about 2% around the first acoustic peak.

Journal ArticleDOI
TL;DR: The findings indicate a breakthrough in using evolutionary algorithms in solving highly constrained envelope, HVAC and renewable optimization problems and some future directions anticipated or needed for improvement of current tools are presented.

Journal ArticleDOI
Peter A. R. Ade1, Nabila Aghanim2, Monique Arnaud, M. Ashdown3  +258 moreInstitutions (62)
TL;DR: In this paper, the authors used the Sunyaev-Zeldovich (SZ) and pressure profiles of 62 nearby massive clusters detected at high significance in the 14-month nominal survey.
Abstract: Taking advantage of the all-sky coverage and broad frequency range of the Planck satellite, we study the Sunyaev-Zeldovich (SZ) and pressure profiles of 62 nearby massive clusters detected at high significance in the 14-month nominal survey. Careful reconstruction of the SZ signal indicates that most clusters are individually detected at least out to R500. By stacking the radial profiles, we have statistically detected the radial SZ signal out to 3R500, i.e., at a density contrast of about 50-100, though the dispersion about the mean profile dominates the statistical errors across the whole radial range. Our measurement is fully consistent with previous Planck results on integrated SZ fluxes, further strengthening the agreement between SZ and X-ray measurements inside R500. Correcting for the effects of the Planck beam, we have calculated the corresponding pressure profiles. This new constraint from SZ measurements is consistent with the X-ray constraints from XMM-Newton in the region in which the profiles overlap (i.e., [0.1-1] R500), and is in fairly good agreement with theoretical predictions within the expected dispersion. At larger radii the average pressure profile is shallower than the predictions. Combining the SZ and X-ray observed profiles into a joint fit to a generalised pressure profile gives best-fit parameters [P0, c500, gamma, alpha, beta] = [6.41, 1.81, 0.31, 1.33, 4.13]. Using a reasonable hypothesis for the gas temperature in the cluster outskirts we reconstruct from our stacked pressure profile the gas mass fraction profile out to 3R500. Within the temperature driven uncertainties, our Planck constraints are compatible with the cosmic baryon fraction and expected gas fraction in halos.
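
The quoted best-fit parameters plug into the generalised NFW (GNFW) pressure profile commonly used in this literature. The functional form below is the standard one from that literature, stated here as an assumption rather than reproduced from the paper itself:

```python
# Generalised NFW (GNFW) pressure profile with the abstract's best-fit
# parameters [P0, c500, gamma, alpha, beta] = [6.41, 1.81, 0.31, 1.33, 4.13].
# The functional form is the standard GNFW shape assumed here, not quoted
# from the paper.

def gnfw_pressure(x, p0=6.41, c500=1.81, gamma=0.31, alpha=1.33, beta=4.13):
    """Dimensionless pressure at scaled radius x = R / R500."""
    cx = c500 * x
    return p0 / (cx ** gamma * (1 + cx ** alpha) ** ((beta - gamma) / alpha))

# The profile falls steeply with radius, consistent with the stacked
# detection extending out to 3 R500:
print(gnfw_pressure(0.1) > gnfw_pressure(1.0) > gnfw_pressure(3.0))  # -> True
```

The inner slope is set by gamma, the outer slope by beta, and c500 fixes where the profile turns over, which is why a joint SZ plus X-ray fit over [0.1-3] R500 can constrain all of them.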

Journal ArticleDOI
TL;DR: The positive effect of SE pre-treatment, opening the cell wall matrix to make polysaccharides more accessible, may be compromised by the structural changes of lignin that increase non-productive enzyme adsorption.

Journal ArticleDOI
TL;DR: In this article, a review of the physical and technical constraints that influence single-electron charge transport is presented, and a broad variety of proposed realizations are presented, some of them have already proven experimentally to nearly fulfill the demanding needs, in terms of transfer errors and transfer rate, of quantum metrology of electrical quantities, whereas some others are currently "just" wild ideas, still often potentially competitive if technical constraints can be lifted.
Abstract: The control of electrons at the level of the elementary charge e was demonstrated experimentally already in the 1980s. Ever since, the production of an electrical current ef, or its integer multiple, at a drive frequency f has been a focus of research for metrological purposes. This review discusses the generic physical phenomena and technical constraints that influence single-electron charge transport and presents a broad variety of proposed realizations. Some of them have already proven experimentally to nearly fulfill the demanding needs, in terms of transfer errors and transfer rate, of quantum metrology of electrical quantities, whereas some others are currently "just" wild ideas, still often potentially competitive if technical constraints can be lifted. The important issues of readout of single-electron events and potential error correction schemes based on them are also discussed. Finally, an account is given of the status of single-electron current sources in the bigger framework of electric quantum standards and of the future international SI system of units, and applications and uses of single-electron devices outside the metrological context are briefly discussed.
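
The review's central relation, I = ef, is worth making concrete. Transferring one electron per cycle at a drive frequency f yields a quantized current; the frequency chosen below is an illustrative example:

```python
# Current from a single-electron pump: I = n * e * f, where n electrons
# are transferred per cycle at drive frequency f. The 1 GHz drive is an
# illustrative choice.

E_CHARGE = 1.602176634e-19  # elementary charge in coulombs (exact in SI since 2019)

def pumped_current(f_hz, n=1):
    """Current in amperes from transferring n electrons per cycle at f_hz."""
    return n * E_CHARGE * f_hz

# At a 1 GHz drive, a single-electron pump produces about 0.16 nA:
print(pumped_current(1e9))
```

The smallness of this current (sub-nanoampere even at gigahertz drive) is one reason the metrological requirements on transfer errors and readout discussed in the review are so demanding.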

Journal ArticleDOI
TL;DR: It is shown that electrostatically patchy protein cages--cowpea chlorotic mottle virus and ferritin cages--can be used to direct the self-assembly of three-dimensional binary superlattices, and these magnetic assemblies provide contrast enhancement in magnetic resonance imaging.
Abstract: Binary nanoparticle superlattices are periodic nanostructures with lattice constants much shorter than the wavelength of light and could be used to prepare multifunctional metamaterials. Such superlattices are typically made from synthetic nanoparticles, and although biohybrid structures have been developed, incorporating biological building blocks into binary nanoparticle superlattices remains challenging. Protein-based nanocages provide a complex yet monodisperse and geometrically well-defined hollow cage that can be used to encapsulate different materials. Such protein cages have been used to program the self-assembly of encapsulated materials to form free-standing crystals and superlattices at interfaces or in solution. Here, we show that electrostatically patchy protein cages--cowpea chlorotic mottle virus and ferritin cages--can be used to direct the self-assembly of three-dimensional binary superlattices. The negatively charged cages can encapsulate RNA or superparamagnetic iron oxide nanoparticles, and the superlattices are formed through tunable electrostatic interactions with positively charged gold nanoparticles. Gold nanoparticles and viruses form an AB(8)(fcc) crystal structure that is not isostructural with any known atomic or molecular crystal structure and has previously been observed only with large colloidal polymer particles. Gold nanoparticles and empty or nanoparticle-loaded ferritin cages form an interpenetrating simple cubic AB structure (isostructural with CsCl). We also show that these magnetic assemblies provide contrast enhancement in magnetic resonance imaging.

Journal ArticleDOI
TL;DR: It is concluded that the method is widely misunderstood, and the results cast strong doubts on its effectiveness for building and testing theory in organizational research.
Abstract: Partial least squares path modeling (PLS) was developed in the 1960s and 1970s as a method for predictive modeling. In the succeeding years, applied disciplines, including organizational and manage...

Journal ArticleDOI
02 May 2013-Nature
TL;DR: It is shown that IDAX, a reported inhibitor of Wnt signalling that has been implicated in malignant renal cell carcinoma and colonic villous adenoma, regulates TET2 protein expression, and that the expression and activity of TET3 is also regulated through its CXXC domain.
Abstract: The CXXC domains of TET2 (encoded by the distinct gene IDAX) and TET3 are found to have previously unknown roles in the regulation of TET proteins through the activation of caspases and subsequent reduction in TET catalytic activity; this regulation is dependent on DNA binding through the CXXC domain. TET family proteins modify the methylation status of DNA by oxidizing 5-methylcytosine to 5-hydroxymethylcytosine (5hmC, sometimes called the 'fifth base' of DNA) and other intermediates. TET1 and TET3 contain a CXXC domain but the ancestral CXXC domain of TET2 is encoded by a distinct gene, IDAX (or CXXC4). This paper demonstrates that IDAX binds unmethylated CpG-rich DNA via its CXXC domain and recruits TET2. The separate and linked CXXC domains of TET2 and TET3 are shown to act as regulators of caspase activation and TET enzymatic activity. The authors suggest that future studies should focus on the genomic targets of TET2, IDAX and the IDAX-related protein CXXC5 in normal development and in cancer. TET (ten-eleven-translocation) proteins are Fe(ii)- and α-ketoglutarate-dependent dioxygenases1,2,3 that modify the methylation status of DNA by successively oxidizing 5-methylcytosine to 5-hydroxymethylcytosine, 5-formylcytosine and 5-carboxycytosine1,3,4,5, potential intermediates in the active erasure of DNA-methylation marks5,6. Here we show that IDAX (also known as CXXC4), a reported inhibitor of Wnt signalling7 that has been implicated in malignant renal cell carcinoma8 and colonic villous adenoma9, regulates TET2 protein expression. IDAX was originally encoded within an ancestral TET2 gene that underwent a chromosomal gene inversion during evolution, thus separating the TET2 CXXC domain from the catalytic domain. The IDAX CXXC domain binds DNA sequences containing unmethylated CpG dinucleotides, localizes to promoters and CpG islands in genomic DNA and interacts directly with the catalytic domain of TET2. 
Unexpectedly, IDAX expression results in caspase activation and TET2 protein downregulation, in a manner that depends on DNA binding through the IDAX CXXC domain, suggesting that IDAX recruits TET2 to DNA before degradation. IDAX depletion prevents TET2 downregulation in differentiating mouse embryonic stem cells, and short hairpin RNA against IDAX increases TET2 protein expression in the human monocytic cell line U937. Notably, we find that the expression and activity of TET3 is also regulated through its CXXC domain. Taken together, these results establish the separate and linked CXXC domains of TET2 and TET3, respectively, as previously unknown regulators of caspase activation and TET enzymatic activity.

Journal ArticleDOI
TL;DR: In this article, the authors demonstrate the dynamical Casimir effect using a Josephson metamaterial embedded in a microwave cavity at 5.4 GHz, and extract the full 4 × 4 covariance matrix of the emitted microwave radiation, demonstrating that photons at frequencies symmetrical with respect to half of the modulation frequency are generated in pairs.
Abstract: The zero-point energy stored in the modes of an electromagnetic cavity has experimentally detectable effects, giving rise to an attractive interaction between the opposite walls, the static Casimir effect. A dynamical version of this effect was predicted to occur when the vacuum energy is changed either by moving the walls of the cavity or by changing the index of refraction, resulting in the conversion of vacuum fluctuations into real photons. Here, we demonstrate the dynamical Casimir effect using a Josephson metamaterial embedded in a microwave cavity at 5.4 GHz. We modulate the effective length of the cavity by flux-biasing the metamaterial based on superconducting quantum interference devices (SQUIDs), which results in variation of a few percentage points in the speed of light. We extract the full 4 × 4 covariance matrix of the emitted microwave radiation, demonstrating that photons at frequencies symmetrical with respect to half of the modulation frequency are generated in pairs. At large detunings of the cavity from half of the modulation frequency, we find power spectra that clearly show the theoretically predicted hallmark of the Casimir effect: a bimodal, “sparrow-tail” structure. The observed substantial photon flux cannot be assigned to parametric amplification of thermal fluctuations; its creation is a direct consequence of the noncommutativity structure of quantum field theory.
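
The pair-correlation condition stated above, photons generated at frequencies placed symmetrically about half the modulation frequency, means each pair satisfies f1 + f2 = f_mod. A small numerical illustration (the specific frequencies below are assumptions for the example, not measured values from the paper):

```python
# Photon pairs in the dynamical Casimir effect satisfy f1 + f2 = f_mod,
# i.e., they sit symmetrically about f_mod / 2. Frequencies here are
# illustrative example values in GHz.

def pair_partner(f1_ghz, f_mod_ghz):
    """Frequency of the photon generated together with one at f1."""
    return f_mod_ghz - f1_ghz

f_mod = 10.8  # an assumed modulation at twice the 5.4 GHz cavity frequency
partner = pair_partner(5.0, f_mod)
print(partner)  # about 5.8 GHz, the mirror of 5.0 GHz about 5.4 GHz
```

This symmetry is exactly what the measured covariance matrix tests: correlations appear only between frequency components whose sum equals the modulation frequency.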

Journal ArticleDOI
TL;DR: It is illustrated that program visualization systems for beginners are often short-lived research prototypes that support the user-controlled viewing of program animations; a recent trend is to support more engaging modes of user interaction.
Abstract: This article is a survey of program visualization systems intended for teaching beginners about the runtime behavior of computer programs. Our focus is on generic systems that are capable of illustrating many kinds of programs and behaviors. We inclusively describe such systems from the last three decades and review findings from their empirical evaluations. A comparable review on the topic does not previously exist; ours is intended to serve as a reference for the creators, evaluators, and users of educational program visualization systems. Moreover, we revisit the issue of learner engagement which has been identified as a potentially key factor in the success of educational software visualization and summarize what little is known about engagement in the context of the generic program visualization systems for beginners that we have reviewed; a proposed refinement of the frameworks previously used by computing education researchers to rank types of learner engagement is a side product of this effort. Overall, our review illustrates that program visualization systems for beginners are often short-lived research prototypes that support the user-controlled viewing of program animations; a recent trend is to support more engaging modes of user interaction. The results of evaluations largely support the use of program visualization in introductory programming education, but research to date is insufficient for drawing more nuanced conclusions with respect to learner engagement. On the basis of our review, we identify interesting questions to answer for future research in relation to themes such as engagement, the authenticity of learning tasks, cognitive load, and the integration of program visualization into introductory programming pedagogy.

Journal ArticleDOI
TL;DR: This work reports a nanotube-mode-locked all-fiber ultrafast oscillator emitting three wavelengths at the central wavelengths of about 1540, 1550, and 1560 nm, which are tunable by stretching fiber Bragg gratings, agreeing well with the numerical simulations.
Abstract: Multi-wavelength lasers have widespread applications (e.g. fiber telecommunications, pump-probe measurements, terahertz generation). Here, we report a nanotube-mode-locked all-fiber ultrafast oscillator emitting three wavelengths at the central wavelengths of about 1540, 1550 and 1560 nm, which are tunable by stretching fiber Bragg gratings. The output pulse duration is around 6 ps with a spectral width of ~0.5 nm, agreeing well with the numerical simulations. The triple-laser system is controlled precisely and insensitive to environmental perturbations with <0.04% amplitude fluctuation. Our method provides a simple, stable, low-cost, multi-wavelength ultrafast-pulsed source for spectroscopy, biomedical research and telecommunications.
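The reported figures can be sanity-checked with a back-of-envelope time-bandwidth calculation. This is a sketch based only on the numbers in the abstract (6 ps duration, ~0.5 nm spectral width, 1550 nm centre wavelength); the sech² transform limit of 0.315 is a standard reference value, and the pulse shape itself is an assumption, not stated in the abstract.

```python
# Time-bandwidth product for the reported pulses (back-of-envelope check).
c = 3.0e8             # speed of light, m/s
wavelength = 1550e-9  # centre wavelength, m
d_lambda = 0.5e-9     # spectral width, m
tau = 6e-12           # pulse duration, s

d_nu = c * d_lambda / wavelength**2  # spectral width converted to Hz
tbp = tau * d_nu                     # time-bandwidth product

print(round(d_nu / 1e9, 1))  # spectral width in GHz, ~62.4
print(round(tbp, 2))         # ~0.37
```

A product of roughly 0.37, close to the 0.315 transform limit for sech²-shaped pulses, is consistent with the abstract's claim that the measured duration and spectrum agree with simulations.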

Journal ArticleDOI
14 Feb 2013 - Nature
TL;DR: This is a model system with potential for a quantum interface, which may allow for storage of quantum information in long-lived phonon states, coupling to optical photons or for investigations of strongly coupled quantum systems near the classical limit.
Abstract: The properties of a quantum bit coupled to both a microwave cavity and a phonon mode in a micromechanical resonator suggest that such systems may allow for storage of quantum information in long-lived phonon states and read-out via microwave photons, with applications in quantum information control. In the emerging field of quantum information technologies the next advances are expected to involve the combination of different types of quantum systems to harness various degrees of freedom. In this spirit, this paper describes the construction of a solid-state system combining a memory element, which has long-lived quantum states, with a quantum interface that offers easy read-out. This is achieved by coupling an artificial two-level atom, in the form of a superconducting transmon qubit, to two different resonant cavities: a microwave resonator and a nanomechanical resonator. In the resulting hybrid device the low-frequency phonon cavity stores the quantum information from the qubit, and the electrical microwave resonator communicates with the outside world. Hybrid quantum systems with inherently distinct degrees of freedom have a key role in many physical phenomena. Well-known examples include cavity quantum electrodynamics [1], trapped ions [2], and electrons and phonons in the solid state. In those systems, strong coupling makes the constituents lose their individual character and form dressed states, which represent a collective form of dynamics. As well as having fundamental importance, hybrid systems also have practical applications, notably in the emerging field of quantum information control. A promising approach is to combine long-lived atomic states [2,3] with the accessible electrical degrees of freedom in superconducting cavities and quantum bits [4,5] (qubits). Here we integrate circuit cavity quantum electrodynamics [6,7] with phonons. Apart from coupling to a microwave cavity, our superconducting transmon qubit [8], consisting of tunnel junctions and a capacitor, interacts with a phonon mode in a micromechanical resonator, and thus acts like an atom coupled to two different cavities. We measure the phonon Stark shift, as well as the splitting of the qubit spectral line into motional sidebands, which feature transitions between the dressed electromechanical states. In the time domain, we observe coherent conversion of qubit excitation to phonons as sideband Rabi oscillations. This is a model system with potential for a quantum interface, which may allow for storage of quantum information in long-lived phonon states, coupling to optical photons, or for investigations of strongly coupled quantum systems near the classical limit.

Proceedings ArticleDOI
11 Aug 2013
TL;DR: This paper defines a novel density function, which gives subgraphs of much higher quality than densest subgraphs: the graphs found by the method are compact, dense, and have smaller diameter.
Abstract: Finding dense subgraphs is an important graph-mining task with many applications. Given that the direct optimization of edge density is not meaningful, as even a single edge achieves maximum density, research has focused on optimizing alternative density functions. One very popular such function is the average degree, whose maximization leads to the well-known densest-subgraph notion. Surprisingly enough, however, densest subgraphs are typically large graphs, with small edge density and large diameter. In this paper, we define a novel density function, which gives subgraphs of much higher quality than densest subgraphs: the graphs found by our method are compact, dense, and have smaller diameter. We show that the proposed function can be derived from a general framework, which includes other important density functions as subcases and for which we show interesting general theoretical properties. To optimize the proposed function we provide an additive approximation algorithm and a local-search heuristic. Both algorithms are very efficient and scale well to large graphs. We evaluate our algorithms on real and synthetic datasets, and we also devise several application studies as variants of our original problem. When compared with the method that finds the subgraph of the largest average degree, our algorithms return denser subgraphs with smaller diameter. Finally, we discuss new interesting research directions that our problem leaves open.
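For context, the classic average-degree objective the abstract compares against can be optimized approximately by greedy peeling (repeatedly removing the minimum-degree vertex and keeping the best intermediate subgraph), a standard 1/2-approximation. This is the baseline the paper improves on, not the paper's own algorithm; the graph below is a made-up toy example.

```python
# Greedy peeling for the densest-subgraph (max average degree |E|/|V|)
# objective: repeatedly remove a minimum-degree vertex, tracking the best
# density seen. Stale heap entries are skipped lazily.
from collections import defaultdict
import heapq

def densest_subgraph(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = set(adj)
    m = len(edges)
    best_density, best_set = m / len(nodes), set(nodes)
    deg = {u: len(adj[u]) for u in nodes}
    heap = [(d, u) for u, d in deg.items()]
    heapq.heapify(heap)
    removed = set()
    while len(nodes) - len(removed) > 1:
        # pop the minimum-degree vertex, skipping outdated entries
        while True:
            d, u = heapq.heappop(heap)
            if u not in removed and deg[u] == d:
                break
        removed.add(u)
        m -= deg[u]  # edges incident to u among remaining vertices
        for v in adj[u]:
            if v not in removed:
                deg[v] -= 1
                heapq.heappush(heap, (deg[v], v))
        n_left = len(nodes) - len(removed)
        if m / n_left > best_density:
            best_density, best_set = m / n_left, nodes - removed
    return best_set, best_density

# A 4-clique plus a pendant path: the clique is the densest part.
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (4, 5), (5, 6)]
sub, dens = densest_subgraph(edges)
print(sorted(sub), dens)  # the 4-clique {1, 2, 3, 4}, density 6/4 = 1.5
```

The abstract's observation is that even this optimal objective tends to return large, low-edge-density subgraphs on real data, which motivates the alternative density function the paper proposes.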

Journal ArticleDOI
TL;DR: A simple method using surface coating with wax to improve hydrophobicity and oxygen barrier properties at very high humidity is described, and the developed robust NFC films can be used as a generic, environmentally sustainable platform for functional materials.
Abstract: In this study, we present a rapid method to prepare robust, solvent-resistant, nanofibrillated cellulose (NFC) films that can be further surface-modified for functionality. The oxygen, water vapor, and grease barrier properties of the films were measured, and in addition, mechanical properties in the dry and wet state and solvent resistance were evaluated. The pure unmodified NFC films were good barriers for oxygen gas and grease. At a relative humidity below 65%, oxygen permeability of the pure and unmodified NFC films was below 0.6 cm³ μm m⁻² d⁻¹ kPa⁻¹, and no grease penetrated the film. However, the largest advantage of these films was their resistance to various solvents, such as water, methanol, toluene, and dimethylacetamide. Although they absorbed a substantial amount of solvent, the films could still be handled after 24 h of solvent soaking. Hot-pressing was introduced as a convenient method to not only increase the drying speed of the films but also enhance the robustness of the films. The wet st...

Journal ArticleDOI
21 Aug 2013 - PLOS ONE
TL;DR: It is shown that the popularity of a movie can be predicted well before its release by measuring and analyzing the activity level of editors and viewers of the corresponding entry for the movie in Wikipedia, the well-known online encyclopedia.
Abstract: Use of socially generated “big data” to access information about collective states of mind in human societies has become a new paradigm in the emerging field of computational social science. A natural application of this would be the prediction of society's reaction to a new product in terms of popularity and adoption rate. However, bridging the gap between “real time monitoring” and “early predicting” remains a big challenge. Here we report on an endeavor to build a minimalistic predictive model for the financial success of movies based on collective activity data of online users. We show that the popularity of a movie can be predicted well before its release by measuring and analyzing the activity level of editors and viewers of the corresponding entry for the movie in Wikipedia, the well-known online encyclopedia.
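The "minimalistic predictive model" idea can be sketched as a regression of box-office outcome on pre-release Wikipedia activity. The snippet below is a toy illustration only: the edit counts and revenues are invented, and a single-feature ordinary-least-squares fit stands in for the paper's actual multivariate model.

```python
# Toy sketch: regress opening revenue on pre-release Wikipedia edit activity.
# All numbers are made up for illustration.
def fit_ols(xs, ys):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# training data: (pre-release edit count, opening revenue in $M), fabricated
edits = [40, 120, 300, 520, 800]
revenue = [5, 18, 42, 75, 110]

slope, intercept = fit_ols(edits, revenue)

def predict(edit_count):
    return slope * edit_count + intercept

print(round(predict(600), 1))  # forecast for a movie with 600 pre-release edits
```

The paper's contribution is showing that such activity features carry predictive signal long before release; the model class itself is deliberately simple.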