
Showing papers by "École Normale Supérieure" published in 2010


Journal ArticleDOI
TL;DR: This survey relates the model selection performance of cross-validation procedures to the most recent advances of model selection theory, with a particular emphasis on distinguishing empirical statements from rigorous theoretical results.
Abstract: Used to estimate the risk of an estimator or to perform model selection, cross-validation is a widespread strategy because of its simplicity and its apparent universality. Many results exist on the model selection performances of cross-validation procedures. This survey intends to relate these results to the most recent advances of model selection theory, with a particular emphasis on distinguishing empirical statements from rigorous theoretical results. As a conclusion, guidelines are provided for choosing the best cross-validation procedure according to the particular features of the problem in hand.

2,980 citations
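
To make the object of study concrete, here is a minimal sketch of risk estimation and model selection by K-fold cross-validation; the synthetic data, ridge candidates, and fold count are illustrative assumptions, not choices from the survey.

```python
# Minimal sketch: model selection by K-fold cross-validation (illustrative, not from the paper).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=30, noise=10.0, random_state=0)

# Candidate models: ridge regressions with different regularization strengths.
candidates = {alpha: Ridge(alpha=alpha) for alpha in [0.01, 0.1, 1.0, 10.0, 100.0]}

# Estimate the risk of each candidate by 5-fold cross-validation (negative MSE in scikit-learn).
cv_risk = {
    alpha: -cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error").mean()
    for alpha, model in candidates.items()
}

best_alpha = min(cv_risk, key=cv_risk.get)
print("estimated CV risk per alpha:", cv_risk)
print("selected model: alpha =", best_alpha)
```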


Journal ArticleDOI
TL;DR: A novel algorithm for multiview stereopsis that outputs a dense set of small rectangular patches covering the surfaces visible in the images, which outperforms all others submitted so far for four out of the six data sets.
Abstract: This paper proposes a novel algorithm for multiview stereopsis that outputs a dense set of small rectangular patches covering the surfaces visible in the images. Stereopsis is implemented as a match, expand, and filter procedure, starting from a sparse set of matched keypoints, and repeatedly expanding these before using visibility constraints to filter away false matches. The keys to the performance of the proposed algorithm are effective techniques for enforcing local photometric consistency and global visibility constraints. Simple but effective methods are also proposed to turn the resulting patch model into a mesh which can be further refined by an algorithm that enforces both photometric consistency and regularization constraints. The proposed approach automatically detects and discards outliers and obstacles and does not require any initialization in the form of a visual hull, a bounding box, or valid depth ranges. We have tested our algorithm on various data sets including objects with fine surface details, deep concavities, and thin structures, outdoor scenes observed from a restricted set of viewpoints, and "crowded" scenes where moving obstacles appear in front of a static structure of interest. A quantitative evaluation on the Middlebury benchmark [1] shows that the proposed method outperforms all others submitted so far for four out of the six data sets.

2,863 citations


Journal ArticleDOI
TL;DR: In this paper, a new online optimization algorithm based on stochastic approximations is proposed to solve the large-scale matrix factorization problem, which scales up gracefully to large data sets with millions of training samples.
Abstract: Sparse coding--that is, modelling data vectors as sparse linear combinations of basis elements--is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on the large-scale matrix factorization problem that consists of learning the basis set in order to adapt it to specific data. Variations of this problem include dictionary learning in signal processing, non-negative matrix factorization and sparse principal component analysis. In this paper, we propose to address these tasks with a new online optimization algorithm, based on stochastic approximations, which scales up gracefully to large data sets with millions of training samples, and extends naturally to various matrix factorization formulations, making it suitable for a wide range of learning problems. A proof of convergence is presented, along with experiments with natural images and genomic data demonstrating that it leads to state-of-the-art performance in terms of speed and optimization for both small and large data sets.

2,348 citations
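
As a hedged illustration of the task, the sketch below learns a dictionary with mini-batch (online) updates using scikit-learn's MiniBatchDictionaryLearning, which provides an online dictionary-learning algorithm in this spirit; the synthetic signals and parameter values are assumptions for the example, not settings from the paper.

```python
# Sketch: online (mini-batch) dictionary learning for sparse coding on synthetic data.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
# Synthetic "signals": 5000 samples in 64 dimensions (e.g. vectorized 8x8 patches).
X = rng.standard_normal((5000, 64))

# Learn a 100-atom dictionary with an l1 sparsity penalty, processing mini-batches of samples.
dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0, batch_size=256, random_state=0)
dico.fit(X)

# Sparse codes for new signals: each row is a sparse combination of dictionary atoms.
codes = dico.transform(X[:10])
print("dictionary shape:", dico.components_.shape)   # (100, 64)
print("non-zeros per code:", (codes != 0).sum(axis=1))
```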


Journal ArticleDOI
29 Apr 2010
TL;DR: This review paper highlights a few representative examples of how the interaction between sparse signal representation and computer vision can enrich both fields, and raises a number of open questions for further study.
Abstract: Techniques from sparse signal representation are beginning to see significant impact in computer vision, often on nontraditional applications where the goal is not just to obtain a compact high-fidelity representation of the observed signal, but also to extract semantic information. The choice of dictionary plays a key role in bridging this gap: unconventional dictionaries consisting of, or learned from, the training samples themselves provide the key to obtaining state-of-the-art results and to attaching semantic meaning to sparse signal representations. Understanding the good performance of such unconventional dictionaries in turn demands new algorithmic and analytical techniques. This review paper highlights a few representative examples of how the interaction between sparse signal representation and computer vision can enrich both fields, and raises a number of open questions for further study.

1,871 citations
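
One theme the review highlights is building the dictionary from the training samples themselves and classifying by class-wise reconstruction residual. The sketch below illustrates that idea on synthetic subspace data; the Lasso solver, penalty, and data layout are assumptions of the example, not the review's prescription.

```python
# Sketch of classification with a dictionary built from the training samples themselves
# (in the spirit of sparse-representation-based classification); all data are synthetic.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_per_class, dim, n_classes = 30, 50, 3

# Training samples: each class lives near its own random low-dimensional subspace.
bases = [rng.standard_normal((dim, 5)) for _ in range(n_classes)]
train = np.hstack([B @ rng.standard_normal((5, n_per_class)) for B in bases])  # dim x N
labels = np.repeat(np.arange(n_classes), n_per_class)

def classify(y, D=train, labels=labels, n_classes=n_classes):
    """Code y sparsely over the training dictionary, then pick the class whose
    atoms alone reconstruct y with the smallest residual."""
    lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=5000)
    lasso.fit(D, y)
    x = lasso.coef_
    residuals = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c]) for c in range(n_classes)]
    return int(np.argmin(residuals))

# Test sample drawn from class 2's subspace plus noise.
y = bases[2] @ rng.standard_normal(5) + 0.01 * rng.standard_normal(dim)
print("predicted class:", classify(y))   # typically 2
```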


Proceedings Article
06 Dec 2010
TL;DR: An online variational Bayes (VB) algorithm for Latent Dirichlet Allocation (LDA) is developed, based on online stochastic optimization with a natural gradient step, and is shown to converge to a local optimum of the VB objective function.
Abstract: We develop an online variational Bayes (VB) algorithm for Latent Dirichlet Allocation (LDA). Online LDA is based on online stochastic optimization with a natural gradient step, which we show converges to a local optimum of the VB objective function. It can handily analyze massive document collections, including those arriving in a stream. We study the performance of online LDA in several ways, including by fitting a 100-topic topic model to 3.3M articles from Wikipedia in a single pass. We demonstrate that online LDA finds topic models as good or better than those found with batch VB, and in a fraction of the time.

1,551 citations
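
scikit-learn's LatentDirichletAllocation with learning_method="online" follows this stochastic variational approach; the toy corpus and parameter choices below are assumptions used only to show the interface.

```python
# Sketch: online (mini-batch) variational Bayes for LDA on a toy corpus.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stars and planets form galaxies",
    "the telescope observed distant galaxies",
    "my dog chased the cat",
    "astronomers study stars and planets",
]
X = CountVectorizer().fit_transform(docs)

lda = LatentDirichletAllocation(
    n_components=2,            # number of topics
    learning_method="online",  # stochastic (mini-batch) variational Bayes
    batch_size=2,              # documents per mini-batch
    random_state=0,
)
doc_topics = lda.fit_transform(X)  # per-document topic proportions
print(doc_topics.round(2))
```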


Journal ArticleDOI
TL;DR: A local image descriptor, DAISY, that is very efficient to compute densely, together with an EM-based algorithm that uses it to compute dense depth and occlusion maps from wide-baseline image pairs, robust against many photometric and geometric transformations.
Abstract: In this paper, we introduce a local image descriptor, DAISY, which is very efficient to compute densely. We also present an EM-based algorithm to compute dense depth and occlusion maps from wide-baseline image pairs using this descriptor. This yields much better results in wide-baseline situations than the pixel and correlation-based algorithms that are commonly used in narrow-baseline stereo. Also, using a descriptor makes our algorithm robust against many photometric and geometric transformations. Our descriptor is inspired from earlier ones such as SIFT and GLOH but can be computed much faster for our purposes. Unlike SURF, which can also be computed efficiently at every pixel, it does not introduce artifacts that degrade the matching performance when used densely. It is important to note that our approach is the first algorithm that attempts to estimate dense depth maps from wide-baseline image pairs, and we show that it is a good one at that with many experiments for depth estimation accuracy, occlusion detection, and comparing it against other descriptors on laser-scanned ground truth scenes. We also tested our approach on a variety of indoor and outdoor scenes with different photometric and geometric transformations and our experiments support our claim to being robust against these.

1,484 citations


Proceedings Article
21 Jun 2010
TL;DR: It is shown that the reasons underlying the performance of various pooling methods are obscured by several confounding factors, such as the link between the sample cardinality in a spatial pool and the resolution at which low-level features have been extracted.
Abstract: Many modern visual recognition algorithms incorporate a step of spatial 'pooling', where the outputs of several nearby feature detectors are combined into a local or global 'bag of features', in a way that preserves task-related information while removing irrelevant details. Pooling is used to achieve invariance to image transformations, more compact representations, and better robustness to noise and clutter. Several papers have shown that the details of the pooling operation can greatly influence the performance, but studies have so far been purely empirical. In this paper, we show that the reasons underlying the performance of various pooling methods are obscured by several confounding factors, such as the link between the sample cardinality in a spatial pool and the resolution at which low-level features have been extracted. We provide a detailed theoretical analysis of max pooling and average pooling, and give extensive empirical comparisons for object recognition tasks.

1,239 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: This work seeks to establish the relative importance of each step of mid-level feature extraction through a comprehensive cross evaluation of several types of coding modules and pooling schemes and shows how to improve the best performing coding scheme by learning a supervised discriminative dictionary for sparse coding.
Abstract: Many successful models for scene or object recognition transform low-level descriptors (such as Gabor filter responses, or SIFT descriptors) into richer representations of intermediate complexity. This process can often be broken down into two steps: (1) a coding step, which performs a pointwise transformation of the descriptors into a representation better adapted to the task, and (2) a pooling step, which summarizes the coded features over larger neighborhoods. Several combinations of coding and pooling schemes have been proposed in the literature. The goal of this paper is threefold. We seek to establish the relative importance of each step of mid-level feature extraction through a comprehensive cross evaluation of several types of coding modules (hard and soft vector quantization, sparse coding) and pooling schemes (by taking the average, or the maximum), which obtains state-of-the-art performance or better on several recognition benchmarks. We show how to improve the best performing coding scheme by learning a supervised discriminative dictionary for sparse coding. We provide theoretical and empirical insight into the remarkable performance of max pooling. By teasing apart components shared by modern mid-level feature extractors, our approach aims to facilitate the design of better recognition architectures.

1,177 citations
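
A minimal sketch of the two-step pipeline described above, with hard vector quantization as the coding step and average or max pooling as the pooling step; the descriptors and codebook are random placeholders rather than learned features.

```python
# Sketch of the two-step pipeline: (1) a coding step that maps each local descriptor onto a
# codebook, and (2) a pooling step that summarizes the codes over a region.
import numpy as np

rng = np.random.default_rng(0)
descriptors = rng.standard_normal((200, 128))   # e.g. 200 SIFT-like descriptors from one image
codebook = rng.standard_normal((64, 128))       # 64 visual words (normally learned by k-means)

# Coding: hard vector quantization -> one-hot code per descriptor.
nearest = np.argmin(((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1), axis=1)
codes = np.eye(len(codebook))[nearest]          # shape (200, 64)

# Pooling: summarize all codes of the region into a single feature vector.
avg_pooled = codes.mean(axis=0)                 # average pooling (bag-of-words histogram)
max_pooled = codes.max(axis=0)                  # max pooling (word presence, for one-hot codes)

print("average-pooled feature:", avg_pooled[:5])
print("max-pooled feature:", max_pooled[:5])
```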


Journal ArticleDOI
08 Jul 2010-Nature
TL;DR: Pulsed laser spectroscopy of the Lamb shift in muonic hydrogen yields a proton root-mean-square charge radius of rp = 0.84184(67) fm, 5.0 standard deviations smaller than the CODATA value derived from hydrogen spectroscopy and electron–proton scattering.
Abstract: Considering that the proton is a basic subatomic component of all ordinary matter — as well as being ubiquitous in its solo role as the hydrogen ion H+ — there are some surprising gaps in our knowledge of its structure and behaviour. A collaborative project to determine the root-mean-square charge radius of the proton to better than the 1% accuracy of the current 'best' value suggests that those knowledge gaps may be greater than was thought. The new determination comes from a technically challenging spectroscopic experiment — the measurement of the Lamb shift (the energy difference between a specific pair of energy states) in 'muonic hydrogen', an exotic atom in which the electron is replaced by its heavier twin, the muon. The result is unexpected: a charge radius about 4% smaller than the previous value. The discrepancy remains unexplained. Possible implications are that the value of the most accurately determined fundamental constant, the Rydberg constant, will need to be revised — or that the validity of quantum electrodynamics theory is called into question. Here, a technically challenging spectroscopic experiment is described: the measurement of the muonic Lamb shift. The results lead to a new determination of the charge radius of the proton. The new value is 5.0 standard deviations smaller than the previous world average, a large discrepancy that remains unexplained. Possible implications of the new finding are that the value of the Rydberg constant will need to be revised, or that the validity of quantum electrodynamics theory is called into question. The proton is the primary building block of the visible Universe, but many of its properties—such as its charge radius and its anomalous magnetic moment—are not well understood. The root-mean-square charge radius, rp, has been determined with an accuracy of 2 per cent (at best) by electron–proton scattering experiments1,2. The present most accurate value of rp (with an uncertainty of 1 per cent) is given by the CODATA compilation of physical constants3. This value is based mainly on precision spectroscopy of atomic hydrogen4,5,6,7 and calculations of bound-state quantum electrodynamics (QED; refs 8, 9). The accuracy of rp as deduced from electron–proton scattering limits the testing of bound-state QED in atomic hydrogen as well as the determination of the Rydberg constant (currently the most accurately measured fundamental physical constant3). An attractive means to improve the accuracy in the measurement of rp is provided by muonic hydrogen (a proton orbited by a negative muon); its much smaller Bohr radius compared to ordinary atomic hydrogen causes enhancement of effects related to the finite size of the proton. In particular, the Lamb shift10 (the energy difference between the 2S1/2 and 2P1/2 states) is affected by as much as 2 per cent. Here we use pulsed laser spectroscopy to measure a muonic Lamb shift of 49,881.88(76) GHz. On the basis of present calculations11,12,13,14,15 of fine and hyperfine splittings and QED terms, we find rp = 0.84184(67) fm, which differs by 5.0 standard deviations from the CODATA value3 of 0.8768(69) fm. Our result implies that either the Rydberg constant has to be shifted by −110 kHz/c (4.9 standard deviations), or the calculations of the QED effects in atomic hydrogen or muonic hydrogen atoms are insufficient.

1,152 citations
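
The quoted 5.0-standard-deviation discrepancy can be checked directly from the two radius values and uncertainties given in the abstract:

```python
# Check of the quoted discrepancy between the muonic-hydrogen value and the CODATA value,
# using only the numbers given in the abstract.
from math import hypot

r_muonic, sigma_muonic = 0.84184, 0.00067   # fm, this measurement
r_codata, sigma_codata = 0.8768, 0.0069     # fm, CODATA compilation

discrepancy = (r_codata - r_muonic) / hypot(sigma_muonic, sigma_codata)
print(f"discrepancy: {discrepancy:.1f} standard deviations")   # ~5.0
```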


Journal ArticleDOI
TL;DR: It is demonstrated that explicitly modeling visual word assignment ambiguity improves classification performance compared to the hard assignment of the traditional codebook model, and the proposed model performs consistently.
Abstract: This paper studies automatic image classification by modeling soft assignment in the popular codebook model. The codebook model describes an image as a bag of discrete visual words selected from a vocabulary, where the frequency distributions of visual words in an image allow classification. One inherent component of the codebook model is the assignment of discrete visual words to continuous image features. Despite the clear mismatch of this hard assignment with the nature of continuous features, the approach has been successfully applied for some years. In this paper, we investigate four types of soft assignment of visual words to image features. We demonstrate that explicitly modeling visual word assignment ambiguity improves classification performance compared to the hard assignment of the traditional codebook model. The traditional codebook model is compared against our method for five well-known data sets: 15 natural scenes, Caltech-101, Caltech-256, and Pascal VOC 2007/2008. We demonstrate that large codebook vocabulary sizes completely deteriorate the performance of the traditional model, whereas the proposed model performs consistently. Moreover, we show that our method profits in high-dimensional feature spaces and reaps higher benefits when increasing the number of image categories.

854 citations
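
A small sketch contrasting hard assignment with one common form of soft assignment (Gaussian kernel weighting of codeword distances); the features, codebook, and kernel bandwidth are synthetic assumptions, not the paper's datasets or exact estimators.

```python
# Sketch: hard vs. soft assignment of visual words to image features.
import numpy as np

rng = np.random.default_rng(0)
features = rng.standard_normal((500, 32))    # local image features
codebook = rng.standard_normal((20, 32))     # visual vocabulary (normally from k-means)
sq_dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (500, 20)

# Hard assignment: each feature votes for exactly one word.
hard_hist = np.bincount(sq_dists.argmin(axis=1), minlength=len(codebook)).astype(float)
hard_hist /= hard_hist.sum()

# Soft assignment: each feature spreads its vote over words with kernel weights,
# so features far from every codeword no longer cast an arbitrary all-or-nothing vote.
sigma = 5.0                                   # kernel bandwidth (a tuning assumption)
weights = np.exp(-sq_dists / (2 * sigma ** 2))
weights /= weights.sum(axis=1, keepdims=True)
soft_hist = weights.sum(axis=0) / len(features)

print("hard histogram:", hard_hist.round(3))
print("soft histogram:", soft_hist.round(3))
```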


Journal ArticleDOI
03 Jun 2010-Nature
TL;DR: The Ectocarpus genome sequence represents an important step towards developing this organism as a model species, providing the possibility to combine genomic and genetic approaches to explore these and other aspects of brown algal biology further.
Abstract: Brown algae (Phaeophyceae) are complex photosynthetic organisms with a very different evolutionary history to green plants, to which they are only distantly related. These seaweeds are the dominant species in rocky coastal ecosystems and they exhibit many interesting adaptations to these, often harsh, environments. Brown algae are also one of only a small number of eukaryotic lineages that have evolved complex multicellularity (Fig. 1). We report the 214 million base pair (Mbp) genome sequence of the filamentous seaweed Ectocarpus siliculosus (Dillwyn) Lyngbye, a model organism for brown algae closely related to the kelps (Fig. 1). Genome features such as the presence of an extended set of light-harvesting and pigment biosynthesis genes and new metabolic processes such as halide metabolism help explain the ability of this organism to cope with the highly variable tidal environment. The evolution of multicellularity in this lineage is correlated with the presence of a rich array of signal transduction genes. Of particular interest is the presence of a family of receptor kinases, as the independent evolution of related molecules has been linked with the emergence of multicellularity in both the animal and green plant lineages. The Ectocarpus genome sequence represents an important step towards developing this organism as a model species, providing the possibility to combine genomic and genetic approaches to explore these and other aspects of brown algal biology further.

Journal ArticleDOI
22 Apr 2010-Nature
TL;DR: It is shown experimentally that the classical precision limit can be surpassed using nonlinear atom interferometry with a Bose–Einstein condensate and the results provide information on the many-particle quantum state, and imply the entanglement of 170 atoms.
Abstract: Interference is fundamental to wave dynamics and quantum mechanics. The quantum wave properties of particles are exploited in metrology using atom interferometers, allowing for high-precision inertia measurements. Furthermore, the state-of-the-art time standard is based on an interferometric technique known as Ramsey spectroscopy. However, the precision of an interferometer is limited by classical statistics owing to the finite number of atoms used to deduce the quantity of interest. Here we show experimentally that the classical precision limit can be surpassed using nonlinear atom interferometry with a Bose-Einstein condensate. Controlled interactions between the atoms lead to non-classical entangled states within the interferometer; this represents an alternative approach to the use of non-classical input states. Extending quantum interferometry to the regime of large atom number, we find that phase sensitivity is enhanced by 15 per cent relative to that in an ideal classical measurement. Our nonlinear atomic beam splitter follows the 'one-axis-twisting' scheme and implements interaction control using a narrow Feshbach resonance. We perform noise tomography of the quantum state within the interferometer and detect coherent spin squeezing with a squeezing factor of -8.2 dB (refs 11-15). The results provide information on the many-particle quantum state, and imply the entanglement of 170 atoms.

Journal ArticleDOI
TL;DR: This paper shows that formulating the problem in a naive Bayesian classification framework makes such preprocessing unnecessary and produces an algorithm that is simple, efficient, and robust, and it scales well as the number of classes grows.
Abstract: While feature point recognition is a key component of modern approaches to object detection, existing approaches require computationally expensive patch preprocessing to handle perspective distortion. In this paper, we show that formulating the problem in a naive Bayesian classification framework makes such preprocessing unnecessary and produces an algorithm that is simple, efficient, and robust. Furthermore, it scales well as the number of classes grows. To recognize the patches surrounding keypoints, our classifier uses hundreds of simple binary features and models class posterior probabilities. We make the problem computationally tractable by assuming independence between arbitrary sets of features. Even though this is not strictly true, we demonstrate that our classifier nevertheless performs remarkably well on image data sets containing very significant perspective changes.
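
A hedged sketch of the semi-naive Bayesian idea: binary features are grouped into small sets whose joint distributions are learned per class, and the sets are treated as independent given the class. The synthetic Bernoulli features below stand in for the binary patch tests used in the paper.

```python
# Sketch: semi-naive Bayesian classification with grouped binary features ("ferns").
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_features, fern_size = 5, 60, 6
ferns = np.arange(n_features).reshape(-1, fern_size)        # 10 groups of 6 binary features each

# Synthetic training data: each class has its own Bernoulli probability per binary feature.
class_probs = rng.uniform(0.2, 0.8, size=(n_classes, n_features))
def sample(c, n): return (rng.random((n, n_features)) < class_probs[c]).astype(int)
train = {c: sample(c, 500) for c in range(n_classes)}

# Training: for every class and fern, count occurrences of each of the 2^fern_size outcomes.
powers = 2 ** np.arange(fern_size)
counts = np.ones((n_classes, len(ferns), 2 ** fern_size))    # +1: Dirichlet smoothing
for c, X in train.items():
    for f, idx in enumerate(ferns):
        outcomes = X[:, idx] @ powers                        # encode each fern outcome as an int
        np.add.at(counts[c, f], outcomes, 1)
log_tables = np.log(counts / counts.sum(axis=2, keepdims=True))

def classify(x):
    """Sum per-fern log-likelihoods (independence across ferns) and pick the best class."""
    outcomes = x[ferns] @ powers
    scores = log_tables[:, np.arange(len(ferns)), outcomes].sum(axis=1)
    return int(np.argmax(scores))

test = sample(3, 1)[0]
print("predicted class:", classify(test))   # typically 3
```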

Journal ArticleDOI
TL;DR: DECOR, a method that embodies and defines all the steps necessary for the specification and detection of code and design smells, DETEX, a detection technique that instantiates this method, and an empirical validation of DETEX in terms of precision and recall are proposed.
Abstract: Code and design smells are poor solutions to recurring implementation and design problems. They may hinder the evolution of a system by making it hard for software engineers to carry out changes. We propose three contributions to the research field related to code and design smells: (1) DECOR, a method that embodies and defines all the steps necessary for the specification and detection of code and design smells, (2) DETEX, a detection technique that instantiates this method, and (3) an empirical validation in terms of precision and recall of DETEX. The originality of DETEX stems from the ability for software engineers to specify smells at a high level of abstraction using a consistent vocabulary and domain-specific language for automatically generating detection algorithms. Using DETEX, we specify four well-known design smells: the antipatterns Blob, Functional Decomposition, Spaghetti Code, and Swiss Army Knife, and their 15 underlying code smells, and we automatically generate their detection algorithms. We apply and validate the detection algorithms in terms of precision and recall on XERCES v2.7.0, and discuss the precision of these algorithms on 11 open-source systems.

Journal ArticleDOI
TL;DR: Dynamic aspects of interactions between astrocytes, neurons and the vasculature have recently been in the neuroscience spotlight and this intercellular communication between glia has implications for neuroglial and gliovascular interactions.
Abstract: Dynamic aspects of interactions between astrocytes, neurons and the vasculature have recently been in the neuroscience spotlight. It has emerged that not only neurons but also astrocytes are organized into networks. Whereas neuronal networks exchange information through electrical and chemical synapses, astrocytes are interconnected through gap junction channels that are regulated by extra- and intracellular signals and allow exchange of information. This intercellular communication between glia has implications for neuroglial and gliovascular interactions and hence has added another level of complexity to our understanding of brain function.

Journal ArticleDOI
TL;DR: Five-degree-of-freedom (5-DOF) wireless magnetic control of a fully untethered microrobot (3-DOF position, 2-DOF pointing orientation) is demonstrated with a system primarily designed for the control of intraocular microrobots for delicate retinal procedures, which also has potential uses in other medical applications or micromanipulation under an optical microscope.
Abstract: We demonstrate five-degree-of-freedom (5-DOF) wireless magnetic control of a fully untethered microrobot (3-DOF position and 2-DOF pointing orientation). The microrobot can move through a large workspace and is completely unrestrained in the rotation DOF. We accomplish this level of wireless control with an electromagnetic system that we call OctoMag. OctoMag's unique abilities are due to its utilization of complex nonuniform magnetic fields, which capitalizes on a linear representation of the coupled field contributions of multiple soft-magnetic-core electromagnets acting in concert. OctoMag was primarily designed to control intraocular microrobots for delicate retinal procedures, but it also has potential uses in other medical applications or micromanipulation under an optical microscope.
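
A hedged sketch of the linear actuation idea: below magnetic saturation, the field and field-gradient components at the robot are modeled as linear in the coil currents, so desired values map to currents through a pseudoinverse. The 8x8 actuation matrix here is random placeholder data, not OctoMag's calibrated model.

```python
# Sketch: mapping a desired field and field gradient to electromagnet currents via a
# linear actuation model and a least-squares (pseudoinverse) solve.
import numpy as np

rng = np.random.default_rng(0)
n_coils = 8

# Actuation matrix A(p): rows = 3 field components + 5 independent field-gradient components
# at the robot position p; columns = unit-current contribution of each electromagnet.
# (Random placeholder values; a real system uses a calibrated, position-dependent matrix.)
A = rng.standard_normal((8, n_coils))

# Desired quantities: a field for pointing/torque and a gradient term for pulling force.
desired = np.array([0.0, 0.0, 0.02,                 # field (T): point the robot along +z
                    0.0, 0.0, 0.0, 0.0, 0.5])       # gradient terms (T/m): pull along one axis

currents, *_ = np.linalg.lstsq(A, desired, rcond=None)   # least-squares / pseudoinverse solution
print("coil currents (A):", currents.round(3))
print("achieved field/gradient:", (A @ currents).round(4))
```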

Journal ArticleDOI
TL;DR: DNA microarray analysis in Malat1-depleted neuroblastoma cells indicates that Malat1 controls the expression of genes involved not only in nuclear processes, but also in synapse function, suggesting that Malat1 regulates synapse formation by modulating the expression of genes involved in synapse formation and/or maintenance.
Abstract: A growing number of long nuclear-retained non-coding RNAs (ncRNAs) have recently been described. However, few functions have been elucidated for these ncRNAs. Here, we have characterized the function of one such ncRNA, identified as metastasis-associated lung adenocarcinoma transcript 1 (Malat1). Malat1 RNA is expressed in numerous tissues and is highly abundant in neurons. It is enriched in nuclear speckles only when RNA polymerase II-dependent transcription is active. Knock-down studies revealed that Malat1 modulates the recruitment of SR family pre-mRNA-splicing factors to the transcription site of a transgene array. DNA microarray analysis in Malat1-depleted neuroblastoma cells indicates that Malat1 controls the expression of genes involved not only in nuclear processes, but also in synapse function. In cultured hippocampal neurons, knock-down of Malat1 decreases synaptic density, whereas its over-expression results in a cell-autonomous increase in synaptic density. Our results suggest that Malat1 regulates synapse formation by modulating the expression of genes involved in synapse formation and/or maintenance.

Journal ArticleDOI
TL;DR: Modules for Experiments in Stellar Astrophysics (MESA) as mentioned in this paper is a suite of open source libraries for a wide range of applications in computational stellar astrophysics, including advanced evolutionary phases.
Abstract: Stellar physics and evolution calculations enable a broad range of research in astrophysics. Modules for Experiments in Stellar Astrophysics (MESA) is a suite of open source libraries for a wide range of applications in computational stellar astrophysics. A newly designed 1-D stellar evolution module, MESA star, combines many of the numerical and physics modules for simulations of a wide range of stellar evolution scenarios ranging from very-low mass to massive stars, including advanced evolutionary phases. MESA star solves the fully coupled structure and composition equations simultaneously. It uses adaptive mesh refinement and sophisticated timestep controls, and supports shared memory parallelism based on OpenMP. Independently usable modules provide equation of state, opacity, nuclear reaction rates, and atmosphere boundary conditions. Each module is constructed as a separate Fortran 95 library with its own public interface. Examples include comparisons to other codes and show evolutionary tracks of very low mass stars, brown dwarfs, and gas giant planets; the complete evolution of a 1 Msun star from the pre-main sequence to a cooling white dwarf; the Solar sound speed profile; the evolution of intermediate mass stars through the thermal pulses on the He-shell burning AGB phase; the interior structure of slowly pulsating B Stars and Beta Cepheids; evolutionary tracks of massive stars from the pre-main sequence to the onset of core collapse; stars undergoing Roche lobe overflow; and accretion onto a neutron star. Instructions for downloading and installing MESA can be found on the project web site (this http URL).

Journal ArticleDOI
TL;DR: In this paper, a theory of amorphous packings, and more generally glassy states, of hard spheres based on the replica method is reviewed; this theory gives predictions on the structure and thermodynamics of these states.
Abstract: Hard spheres are ubiquitous in condensed matter: they have been used as models for liquids, crystals, colloidal systems, granular systems, and powders. Packings of hard spheres are of even wider interest, as they are related to important problems in information theory, such as digitalization of signals, error correcting codes, and optimization problems. In three dimensions the densest packing of identical hard spheres has been proven to be the FCC lattice, and it is conjectured that the closest packing is ordered (a regular lattice, e.g., a crystal) in low enough dimension. Still, amorphous packings have attracted a lot of interest, because for polydisperse colloids and granular materials the crystalline state is not obtained in experiments for kinetic reasons. We review here a theory of amorphous packings, and more generally glassy states, of hard spheres that is based on the replica method: this theory gives predictions on the structure and thermodynamics of these states. In dimensions between two and six these predictions can be successfully compared with numerical simulations. We will also discuss the limit of large dimension where an exact solution is possible. Some of the results we present here have been already published, but others are original: in particular we improved the discussion of the large dimension limit and we obtained new results on the correlation function and the contact force distribution in three dimensions. We also try here to clarify the main assumptions that are beyond our theory and in particular the relation between our static computation and the dynamical procedures used to construct amorphous packings.

Proceedings Article
06 Dec 2010
TL;DR: This work proposes an unsupervised method for learning multi-stage hierarchies of sparse convolutional features and trains an efficient feed-forward encoder that predicts quasi-sparse features from the input.
Abstract: We propose an unsupervised method for learning multi-stage hierarchies of sparse convolutional features. While sparse coding has become an increasingly popular method for learning visual features, it is most often trained at the patch level. Applying the resulting filters convolutionally results in highly redundant codes because overlapping patches are encoded in isolation. By training convolutionally over large image windows, our method reduces the redundancy between feature vectors at neighboring locations and improves the efficiency of the overall representation. In addition to a linear decoder that reconstructs the image from sparse features, our method trains an efficient feed-forward encoder that predicts quasi-sparse features from the input. While patch-based training rarely produces anything but oriented edge detectors, we show that convolutional training produces highly diverse filters, including center-surround filters, corner detectors, cross detectors, and oriented grating detectors. We show that using these filters in a multistage convolutional network architecture improves performance on a number of visual recognition and detection tasks.

Journal ArticleDOI
TL;DR: In this paper, the authors use a Bayesian approach to analyze the observed probability density function (PDF) of B_z from Zeeman surveys of H I, OH, and CN spectral lines in order to infer a density-dependent stochastic model of the total field strength B in diffuse and molecular clouds.
Abstract: The only direct measurements of interstellar magnetic field strengths depend on the Zeeman effect, which samples the line-of-sight component B_z of the magnetic vector. In this paper, we use a Bayesian approach to analyze the observed probability density function (PDF) of B_z from Zeeman surveys of H I, OH, and CN spectral lines in order to infer a density-dependent stochastic model of the total field strength B in diffuse and molecular clouds. We find that at densities below about 300 cm^-3 the field strength does not scale with density, while at higher densities it scales approximately as the two-thirds power of density, with an uncertainty at the 50% level in the power-law exponent of about ±0.05. This break-point density could be interpreted as the average density at which parsec-scale clouds become self-gravitating. Both the uniform PDF of total field strengths and the scaling with density suggest that magnetic fields in molecular clouds are often too weak to dominate the star formation process. The stochasticity of the total field strength B implies that many fields are so weak that the mass/flux ratio in many clouds must be significantly supercritical. A two-thirds power law comes from isotropic contraction of gas too weakly magnetized for the magnetic field to affect the morphology of the collapse. On the other hand, our study does not rule out some clouds having strong magnetic fields with critical mass/flux ratios.
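
A small Monte Carlo illustration of the geometric fact that motivates the statistical treatment: Zeeman measurements sample only B_z = B cos(theta), and for isotropic field orientations B_z is uniformly distributed on [-B, B], so total strengths can only be inferred probabilistically.

```python
# Monte Carlo check: for randomly oriented fields of fixed total strength B, the
# line-of-sight component B_z = B*cos(theta) is uniformly distributed on [-B, B].
import numpy as np

rng = np.random.default_rng(0)
B = 10.0                                   # total field strength (arbitrary units)
n = 100_000

# Isotropic orientations: cos(theta) uniform on [-1, 1].
cos_theta = rng.uniform(-1.0, 1.0, n)
Bz = B * cos_theta

hist, edges = np.histogram(Bz, bins=10, range=(-B, B), density=True)
print("empirical density per bin (should all be ~1/(2B) = 0.05):", hist.round(3))
```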

Journal ArticleDOI
TL;DR: An infinite set of integral non-linear equations for the spectrum of states/operators in AdS/CFT is derived, and it is proved that all the kernels and free terms entering these TBA equations are real and have nice fusion properties in the relevant mirror kinematics.
Abstract: Using the thermodynamic Bethe ansatz method we derive an infinite set of integral non-linear equations for the spectrum of states/operators in AdS/CFT. The Y-system conjectured in Gromov et al. (Integrability for the Full Spectrum of Planar AdS/CFT. arXiv:0901.3753 [hep-th]) for the spectrum of all operators in planar N = 4 SYM theory follows from these equations. In particular, we present the integral TBA-type equations for the spectrum of all operators within the sl(2) sector. We prove that all the kernels and free terms entering these TBA equations are real and have nice fusion properties in the relevant mirror kinematics. We find the analog of the DHM formula for the dressing kernel in the mirror kinematics.

Journal ArticleDOI
TL;DR: The Motivation at Work Scale (MAWS) as discussed by the authors was developed in accordance with the multidimensional conceptualization of motivation postulated in self-determination theory.
Abstract: The Motivation at Work Scale (MAWS) was developed in accordance with the multidimensional conceptualization of motivation postulated in self-determination theory. The authors examined the structure of the MAWS in a group of 1,644 workers in two different languages, English and French. Results obtained from these samples suggested that the structure of motivation at work across languages is consistently organized into four different types: intrinsic motivation, identified regulation, introjected regulation, and external regulation. The MAWS subscales were predictably associated with organizational behavior constructs. The importance of this new multidimensional scale to the development of new work motivation research is discussed.

Posted Content
TL;DR: In this article, a general framework for image inverse problems is introduced, based on Gaussian mixture models estimated via a computationally efficient MAP-EM algorithm; a dual interpretation with structured sparse estimation shows that the resulting piecewise linear estimate stabilizes the estimation when compared to traditional sparse inverse problem techniques.
Abstract: A general framework for solving image inverse problems is introduced in this paper. The approach is based on Gaussian mixture models, estimated via a computationally efficient MAP-EM algorithm. A dual mathematical interpretation of the proposed framework with structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes the estimation when compared to traditional sparse inverse problem techniques. This interpretation also suggests an effective dictionary-motivated initialization for the MAP-EM algorithm. We demonstrate that in a number of image inverse problems, including inpainting, zooming, and deblurring, the same algorithm produces results that are equal to, often significantly better than, or within a very small margin worse than the best published ones, at a lower computational cost.
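
A hedged sketch of the piecewise linear estimate in the simplest setting (denoising, identity degradation operator): each observation receives a per-component Wiener estimate and the component with the highest posterior is selected. The two-component mixture below is a synthetic assumption; in the paper the mixture is estimated by MAP-EM and the degradation operator varies.

```python
# Sketch: piecewise linear estimation under a Gaussian mixture prior (denoising case, y = x + n).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
d, sigma = 8, 0.5                                 # signal dimension, noise std

# A known 2-component Gaussian mixture prior (synthetic; the paper estimates this by MAP-EM).
weights = np.array([0.5, 0.5])
means = [np.zeros(d), np.full(d, 3.0)]
covs = [np.diag(np.linspace(0.5, 2.0, d)), 0.3 * np.eye(d)]

# Generate a clean signal from component 1 and observe it in Gaussian noise.
x = rng.multivariate_normal(means[1], covs[1])
y = x + sigma * rng.standard_normal(d)

# Per-component posterior p(k | y) and per-component linear (Wiener) estimate.
post = np.array([w * multivariate_normal.pdf(y, m, C + sigma**2 * np.eye(d))
                 for w, m, C in zip(weights, means, covs)])
post /= post.sum()
estimates = [m + C @ np.linalg.solve(C + sigma**2 * np.eye(d), y - m)
             for m, C in zip(means, covs)]

k = int(np.argmax(post))                          # select the best component -> piecewise linear
x_hat = estimates[k]
print("selected component:", k, " posterior:", post.round(3))
print("noisy error:", np.linalg.norm(y - x), " estimate error:", np.linalg.norm(x_hat - x))
```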

Journal ArticleDOI
TL;DR: In this article, the authors investigated to what extent eco-control influences environmental and economic performance in Canadian manufacturing firms and found that eco-control has no direct effect on economic performance.
Abstract: Eco-control is the application of financial and strategic control methods to environmental management. In this study, we investigate to what extent eco-control influences environmental and economic performance. Using survey data from a sample of Canadian manufacturing firms, the results suggest that eco-control has no direct effect on economic performance. A mediating effect of environmental performance on the link between eco-control and economic performance is observed in different contexts. More specifically, eco-control indirectly influences economic performance in the context of (i) higher environmental exposure, (ii) higher public visibility, (iii) higher environmental concern, and (iv) larger size. This study contributes to the management accounting literature by providing insight into the roles and contributions of management accounting in the context of sustainable development.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: This paper combines existing tools for bottom-up image segmentation, such as normalized cuts, with kernel methods commonly used in object recognition, within a discriminative clustering framework, obtaining a combinatorial optimization problem that is relaxed to a continuous convex optimization problem which can be solved efficiently for up to dozens of images.
Abstract: Purely bottom-up, unsupervised segmentation of a single image into foreground and background regions remains a challenging task for computer vision. Co-segmentation is the problem of simultaneously dividing multiple images into regions (segments) corresponding to different object classes. In this paper, we combine existing tools for bottom-up image segmentation such as normalized cuts, with kernel methods commonly used in object recognition. These two sets of techniques are used within a discriminative clustering framework: the goal is to assign foreground/background labels jointly to all images, so that a supervised classifier trained with these labels leads to maximal separation of the two classes. In practice, we obtain a combinatorial optimization problem which is relaxed to a continuous convex optimization problem, that can itself be solved efficiently for up to dozens of images. We illustrate the proposed method on images with very similar foreground objects, as well as on more challenging problems with objects with higher intra-class variations.

Journal ArticleDOI
25 Feb 2010-Nature
TL;DR: It is shown that, despite strong interactions, the normal phase behaves as a mixture of two ideal gases: a Fermi gas of bare majority atoms and a non-interacting gas of dressed quasi-particles, the fermionic polarons.
Abstract: One of the greatest challenges in modern physics is to understand the behaviour of an ensemble of strongly interacting particles. A class of quantum many-body systems (such as neutron star matter and cold Fermi gases) share the same universal thermodynamic properties when interactions reach the maximum effective value allowed by quantum mechanics, the so-called unitary limit. This makes it possible in principle to simulate some astrophysical phenomena inside the highly controlled environment of an atomic physics laboratory. Previous work on the thermodynamics of a two-component Fermi gas led to thermodynamic quantities averaged over the trap, making comparisons with many-body theories developed for uniform gases difficult. Here we develop a general experimental method that yields the equation of state of a uniform gas, as well as enabling a detailed comparison with existing theories. The precision of our equation of state leads to new physical insights into the unitary gas. For the unpolarized gas, we show that the low-temperature thermodynamics of the strongly interacting normal phase is well described by Fermi liquid theory, and we localize the superfluid transition. For a spin-polarized system, our equation of state at zero temperature has a 2 per cent accuracy and extends work on the phase diagram to a new regime of precision. We show in particular that, despite strong interactions, the normal phase behaves as a mixture of two ideal gases: a Fermi gas of bare majority atoms and a non-interacting gas of dressed quasi-particles, the fermionic polarons.

Journal ArticleDOI
TL;DR: It is shown that host-encoded mechanisms control three alternative entry processes operating in the epidermis, the root cortex and at the single cell level, which provides support for the origin of rhizobial infection through direct intercellular epidermal invasion and subsequent evolution of crack entry and root hair invasions observed in most extant legumes.
Abstract: Bacterial infection of interior tissues of legume root nodules is controlled at the epidermal cell layer and is closely coordinated with progressing organ development. Using spontaneous nodulating Lotus japonicus plant mutants to uncouple nodule organogenesis from infection, we have determined the role of 16 genes in these two developmental processes. We show that host-encoded mechanisms control three alternative entry processes operating in the epidermis, the root cortex and at the single cell level. Single cell infection did not involve the formation of trans-cellular infection threads and was independent of host Nod-factor receptors and bacterial Nod-factor signals. In contrast, Nod-factor perception was required for epidermal root hair infection threads, whereas primary signal transduction genes preceding the secondary Ca2+ oscillations have an indirect role. We provide support for the origin of rhizobial infection through direct intercellular epidermal invasion and subsequent evolution of crack entry and root hair invasions observed in most extant legumes.

Journal ArticleDOI
10 Jun 2010-Neuron
TL;DR: Results reveal a mechanism whereby Abeta oligomers induce the abnormal accumulation and overstabilization of a glutamate receptor, thus providing a mechanistic and molecular basis for Abeta oligomer-induced early synaptic failure.

Journal ArticleDOI
12 Apr 2010
TL;DR: An instrumented TUG, called iTUG, using portable inertial sensors is proposed to improve TUG in several ways: automatic detection and separation of subcomponents, detailed analysis of each of them, and higher sensitivity than the standard TUG.
Abstract: Timed Up and Go (TUG) test is a widely used clinical paradigm to evaluate balance and mobility. Although TUG includes several complex subcomponents, namely: sit-to-stand, gait, 180° turn, and turn-to-sit; the only outcome is the total time to perform the task. We have proposed an instrumented TUG, called iTUG, using portable inertial sensors to improve TUG in several ways: automatic detection and separation of subcomponents, detailed analysis of each one of them and a higher sensitivity than TUG. Twelve subjects in early stages of Parkinson's disease (PD) and 12 age matched control subjects were enrolled. Stopwatch measurements did not show a significant difference between the two groups. The iTUG, however, showed a significant difference in cadence between early PD and control subjects (111.1 ± 6.2 versus 120.4 ± 7.6 step/min, p < 0.006) as well as in angular velocity of arm-swing (123 ± 32.0 versus 174.0 ± 50.4°/s, p < 0.005), turning duration (2.18 ± 0.43 versus 1.79 ± 0.27 s, p < 0.023), and time to perform turn-to-sits (2.96 ± 0.68 versus 2.40 ± 0.33 s, p < 0.023). By repeating the tests for a second time, the test-retest reliability of iTUG was also evaluated. Among the subcomponents of iTUG, gait, turning, and turn-to-sit were the most reliable and sit-to-stand was the least reliable.
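
A hedged sketch of one iTUG-style measurement, estimating cadence by peak detection on an inertial signal; the synthetic signal, sampling rate, and thresholds are assumptions for illustration, not the paper's sensors or algorithm.

```python
# Sketch: estimating cadence (steps/min) from a synthetic inertial signal by peak detection.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                       # sampling rate (Hz), an assumption
cadence_true = 115.0             # steps per minute used to synthesize the signal
t = np.arange(0, 20, 1 / fs)     # 20 s of walking

rng = np.random.default_rng(0)
accel = np.sin(2 * np.pi * (cadence_true / 60.0) * t) + 0.3 * rng.standard_normal(t.size)

# One peak per step: enforce a minimum spacing of ~0.3 s between detected peaks.
peaks, _ = find_peaks(accel, height=0.5, distance=int(0.3 * fs))
cadence_est = len(peaks) / (t[-1] - t[0]) * 60.0
print(f"estimated cadence: {cadence_est:.1f} steps/min (true: {cadence_true})")
```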