
Showing papers in "New Journal of Physics in 2007"


Journal ArticleDOI
TL;DR: In this article, a gapless phase between the quantum spin Hall and insulator phases in 3D was found in inversion-asymmetric systems, and it was shown that the existence of such a phase stems from the topological nature of gapless points (diabolical points) in 3D, which has no counterpart in 2D.
Abstract: Phase transitions between the quantum spin Hall (QSH) and the insulator phases in three dimensions (3D) are studied. We find that in inversion-asymmetric systems there appears a gapless phase between the QSH and insulator phases in 3D which is in contrast with the 2D case. Existence of this gapless phase stems from a topological nature of gapless points (diabolical points) in 3D, but not in 2D.

979 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that in 2D the acoustic equations in a fluid are identical in form to the single-polarization Maxwell equations via a variable exchange that also preserves boundary conditions, and the existence of transformation-type solutions for the 2D acoustic equations with anisotropic mass was confirmed via time-harmonic simulations of acoustic cloaking.
Abstract: A complete analysis of coordinate transformations in elastic media by Milton et al has shown that, in general, the equations of motion are not form invariant and thus do not admit transformation-type solutions of the type discovered by Pendry et al for electromagnetics. However, in a two-dimensional (2D) geometry, the acoustic equations in a fluid are identical in form to the single polarization Maxwell equations via a variable exchange that also preserves boundary conditions. We confirm the existence of transformation-type solutions for the 2D acoustic equations with anisotropic mass via time harmonic simulations of acoustic cloaking. We discuss the possibilities of experimentally demonstrating acoustic cloaking and analyse why this special equivalence of acoustics and electromagnetics occurs only in 2D.

966 citations


Journal ArticleDOI
TL;DR: In this article, a procedure for modelling strong lensing galaxy clusters with parametric methods and for ranking models quantitatively using the Bayesian evidence is described, using a publicly available Markov chain Monte Carlo (MCMC) sampler that allows local minima in the likelihood functions to be avoided.
Abstract: In this paper, we describe a procedure for modelling strong lensing galaxy clusters with parametric methods, and for ranking models quantitatively using the Bayesian evidence. We use a publicly available Markov chain Monte Carlo (MCMC) sampler ('bayesys'), allowing us to avoid local minima in the likelihood functions. To illustrate the power of the MCMC technique, we simulate three clusters of galaxies, each composed of a cluster-scale halo and a set of perturbing galaxy-scale subhalos. We ray-trace three light beams through each model to produce a catalogue of multiple images, and then use the MCMC sampler to recover the model parameters in the three different lensing configurations. We find that, for typical Hubble Space Telescope (HST)-quality imaging data, the total mass in the Einstein radius is recovered with ~1–5% error according to the considered lensing configuration. However, we find that the mass of the galaxies is strongly degenerate with the cluster mass when no multiple images appear in the cluster centre. The mass of the galaxies is generally recovered with a 20% error, largely due to the poorly constrained cut-off radius. Finally, we describe how to rank models quantitatively using the Bayesian evidence. We confirm the ability of strong lensing to constrain the mass profile in the central region of galaxy clusters in this way. Ultimately, such a method applied to strong lensing clusters with a very large number of multiple images may provide unique geometrical constraints on cosmology. The implementation of the MCMC sampler used in this paper has been done within the framework of the lenstool software package, which is publicly available.

578 citations
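The recovery step described above is, at its core, Bayesian parameter estimation by MCMC. The sketch below is a generic random-walk Metropolis sampler applied to a toy two-parameter Gaussian likelihood standing in for the ray-tracing misfit; it only illustrates the sampling idea, not the bayesys/lenstool implementation, and all names and numbers in it are hypothetical.

```python
import numpy as np

# Toy stand-in for the lensing likelihood: Gaussian misfit between model
# parameters and "observed" values (hypothetical data, not HST imaging).
rng = np.random.default_rng(0)
true_theta = np.array([1.0, 0.3])            # e.g. cluster halo and galaxy mass scales
obs = true_theta + 0.05 * rng.standard_normal(2)

def log_posterior(theta):
    if np.any(theta < 0):                    # flat prior restricted to positive values
        return -np.inf
    return -0.5 * np.sum(((theta - obs) / 0.05) ** 2)

def metropolis(log_post, theta0, steps=20000, scale=0.02):
    """Single-chain random-walk Metropolis sampler (schematic)."""
    chain = [np.asarray(theta0, dtype=float)]
    lp = log_post(chain[-1])
    for _ in range(steps):
        prop = chain[-1] + scale * rng.standard_normal(len(theta0))
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            chain.append(prop)
            lp = lp_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)

chain = metropolis(log_posterior, [0.5, 0.5])
print("posterior mean:", chain[5000:].mean(axis=0))   # discard burn-in
```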


Journal ArticleDOI
TL;DR: In this paper, the authors quantify, for arbitrary swimmer shapes and surface patterns, how efficient swimming requires both surface "activity" to generate the fields and surface "phoretic mobility."
Abstract: Small objects can swim by generating around them fields or gradients which in turn induce fluid motion past their surface by phoretic surface effects. We quantify for arbitrary swimmer shapes and surface patterns, how efficient swimming requires both surface 'activity' to generate the fields, and surface 'phoretic mobility.' We show in particular that (i) swimming requires symmetry breaking in either or both of the patterns of 'activity' and 'mobility,' and (ii) for a given geometrical shape and surface pattern, the swimming velocity is size-independent. In addition, for given available surface properties, our calculation framework provides a guide for optimizing the design of swimmers.

566 citations
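The size-independence claim can be made plausible with a standard scaling estimate (a sketch under the usual assumptions of diffusion-limited solute transport and low Reynolds number, not the paper's full calculation): a surface activity α maintains a field disturbance c ~ αa/D around a swimmer of size a, so surface gradients scale as α/D and the phoretic slip, and hence the swimming speed, does not depend on a.

```latex
c \sim \frac{\alpha a}{D}, \qquad
\nabla_{\parallel} c \sim \frac{c}{a} \sim \frac{\alpha}{D}, \qquad
v_{s} = \mu(s)\,\nabla_{\parallel} c
\;\Longrightarrow\;
U \sim \frac{\mu\,\alpha}{D} \quad (\text{independent of } a).
```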


Journal ArticleDOI
TL;DR: In this article, a robust interferometric method was proposed to generate vector beam modes by diffracting a Gaussian laser beam from a spatial light modulator consisting of a high-resolution reflective nematic liquid crystal display.
Abstract: We present a robust interferometric method to generate arbitrary vector beam modes by diffracting a Gaussian laser beam from a spatial light modulator consisting of a high-resolution reflective nematic liquid crystal display. Vector beams may have the same intensity cross-section as the more common scalar Laguerre–Gaussian (LG) or Hermite–Gaussian (HG) beams, but with a spatially modulated polarization distribution. Special cases are the radially or azimuthally polarized 'doughnut' modes, which have superior focusing properties and promise novel applications in many fields, such as optical trapping, spectroscopy and super-resolution microscopy. Our system allows video rate switching between vector beam modes. We demonstrate the generation of high quality Hermite–Gaussian and Laguerre–Gaussian vector beam modes of different order, of vectorial anti-vortices, and of mode mixtures with interesting non-symmetric polarization distributions.

535 citations


Journal ArticleDOI
TL;DR: In this article, the experimental results show that the EG-based nanofluids are Newtonian under the conditions of this work with the shear viscosity as a strong function of temperature and particle concentration.
Abstract: This work aims at a more fundamental understanding of the rheological behaviour of nanofluids and the interpretation of the discrepancy in the recent literature. Both experiments and theoretical analyses are carried out with the experimental work on ethylene glycol (EG)-based nanofluids containing 0.5–8.0 wt% spherical TiO2 nanoparticles at 20–60 °C and the theoretical analyses on the high shear viscosity, shear thinning behaviour and temperature dependence. The experimental results show that the EG-based nanofluids are Newtonian under the conditions of this work with the shear viscosity as a strong function of temperature and particle concentration. The relative viscosity of the nanofluids is, however, independent of temperature. The theoretical analyses show that the high shear viscosity of nanofluids can be predicted by the Krieger–Dougherty equation if the effective nanoparticle concentration is used. For spherical nanoparticles, an aggregate size of approximately 3 times the primary nanoparticle size gives the best prediction of experimental data of both this work and those from the literature. The shear thinning behaviour of nanofluids depends on the effective particle concentration, the range of shear rate and viscosity of the base liquid. Such non-Newtonian behaviour can be characterized by a characteristic shear rate, which decreases with increasing volume fraction, increasing base liquid viscosity, or increasing aggregate size. These findings explain the reported controversy of the rheological behaviour of nanofluids in the literature. At temperatures not very far from the ambient temperature, the relative high shear viscosity is independent of temperature due to negligible Brownian diffusion in comparison to convection in high shear flows, in agreement with the experimental results. However, the characteristic shear rate can have strong temperature dependence, thus affecting the shear thinning behaviour. The theoretical analyses also lead to a classification of nanofluids into dilute, semi-dilute, semi-concentrated and concentrated nanofluids depending on particle concentration and particle structuring.

519 citations
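A minimal numerical sketch of the prediction route outlined above: evaluate the Krieger–Dougherty relation with an effective (aggregate) volume fraction. The fractal dimension, maximum packing fraction and intrinsic viscosity used here are common literature values, assumed for illustration rather than taken from the paper.

```python
def relative_viscosity_kd(phi, agg_ratio=3.0, d_f=1.8,
                          phi_m=0.605, intrinsic=2.5):
    """Krieger-Dougherty relative viscosity with an effective volume
    fraction for fractal-like aggregates.

    phi        : volume fraction of primary particles
    agg_ratio  : aggregate radius / primary particle radius (abstract: ~3)
    d_f        : assumed fractal dimension of the aggregates
    phi_m      : assumed maximum packing fraction
    intrinsic  : assumed intrinsic viscosity [eta] (2.5 for hard spheres)
    """
    phi_eff = phi * agg_ratio ** (3.0 - d_f)          # effective concentration
    return (1.0 - phi_eff / phi_m) ** (-intrinsic * phi_m)

# Illustrative volume fractions (the paper quotes wt%, converted separately)
for phi in (0.005, 0.01, 0.02, 0.04):
    print(f"phi = {phi:.3f}  ->  eta_r ~ {relative_viscosity_kd(phi):.3f}")
```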


Journal ArticleDOI
TL;DR: In this article, a fault-tolerant version of the one-way quantum computer using a cluster state in three spatial dimensions is described, where topologically protected quantum gates are realized by choosing appropriate boundary conditions on the cluster.
Abstract: We describe a fault-tolerant version of the one-way quantum computer using a cluster state in three spatial dimensions. Topologically protected quantum gates are realized by choosing appropriate boundary conditions on the cluster. We provide equivalence transformations for these boundary conditions that can be used to simplify fault-tolerant circuits and to derive circuit identities in a topological manner. The spatial dimensionality of the scheme can be reduced to two by converting one spatial axis of the cluster into time. The error threshold is 0.75% for each source in an error model with preparation, gate, storage and measurement errors. The operational overhead is poly-logarithmic in the circuit size.

482 citations


Journal ArticleDOI
TL;DR: A connected network of 3.9 million nodes is constructed from mobile phone call records, which can be regarded as a proxy for the underlying human communication network at the societal level, and a positive correlation between the overlap and weight of a link is reported.
Abstract: We construct a connected network of 3.9 million nodes from mobile phone call records, which can be regarded as a proxy for the underlying human communication network at the societal level. We assign two weights on each edge to reflect the strength of social interaction, which are the aggregate call duration and the cumulative number of calls placed between the individuals over a period of 18 weeks. We present a detailed analysis of this weighted network by examining its degree, strength, and weight distributions, as well as its topological assortativity and weighted assortativity, clustering and weighted clustering, together with correlations between these quantities. We give an account of motif intensity and coherence distributions and compare them to a randomized reference system. We also use the concept of link overlap to measure the number of common neighbours any two adjacent nodes have, which serves as a useful local measure for identifying the interconnectedness of communities. We report a positive correlation between the overlap and weight of a link.

422 citations
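The link overlap mentioned above has a simple closed form: O_ij = n_ij / ((k_i − 1) + (k_j − 1) − n_ij), with n_ij the number of neighbours common to i and j. A small networkx sketch on a toy graph (the call-record data themselves are of course not reproduced here):

```python
import networkx as nx

def link_overlap(G, i, j):
    """O_ij = n_ij / ((k_i - 1) + (k_j - 1) - n_ij)."""
    n_ij = len(set(G[i]) & set(G[j]))                  # common neighbours
    denom = (G.degree(i) - 1) + (G.degree(j) - 1) - n_ij
    return n_ij / denom if denom > 0 else 0.0

# Toy weighted graph standing in for the communication network
G = nx.Graph()
G.add_weighted_edges_from([(1, 2, 5.0), (1, 3, 1.0), (2, 3, 2.0),
                           (2, 4, 0.5), (3, 4, 4.0)])
for u, v, w in G.edges(data="weight"):
    print(f"link ({u},{v})  weight={w}  overlap={link_overlap(G, u, v):.2f}")
```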


Journal ArticleDOI
TL;DR: In this paper, the authors use the linear intrinsic alignment model as a base and compare it to an alternative model and data, and find that when intrinsic alignments are included two or more times as many bins are required to obtain 80% of the available information.
Abstract: Cosmic shear constrains cosmology by exploiting the apparent alignments of pairs of galaxies due to gravitational lensing by intervening mass clumps. However, galaxies may become (intrinsically) aligned with each other, and with nearby mass clumps, during their formation. This effect needs to be disentangled from the cosmic shear signal to place constraints on cosmology. We use the linear intrinsic alignment model as a base and compare it to an alternative model and data. If intrinsic alignments are ignored then the dark energy equation of state is biased by ~50%. We examine how the number of tomographic redshift bins affects uncertainties on cosmological parameters and find that when intrinsic alignments are included two or more times as many bins are required to obtain 80% of the available information. We investigate how the degradation in the dark energy figure of merit depends on the photometric redshift scatter. Previous studies have shown that lensing does not place stringent requirements on the photometric redshift uncertainty, so long as the uncertainty is well known. However, if intrinsic alignments are included the requirements become a factor of three tighter. These results are quite insensitive to the fraction of catastrophic outliers, assuming that this fraction is well known. We show the effect of uncertainties in photometric redshift bias and scatter. Finally, we quantify how priors on the intrinsic alignment model would improve dark energy constraints.

375 citations


Journal ArticleDOI
TL;DR: In this article, a new class of measures of structural centrality for networks is introduced, called delta centralities, which is based on the concept of efficient propagation of information over the network.
Abstract: We introduce delta centralities, a new class of measures of structural centrality for networks. In particular, we focus on a measure in this class, the information centrality C_I, which is based on the concept of efficient propagation of information over the network. C_I is defined for both valued and non-valued graphs, and applies to groups as well as individuals. The measure is illustrated and compared with respect to the standard centrality measures by using a classic network data set. The statistical distribution of information centrality is investigated by considering large computer generated graphs and two networks from the real world.

374 citations
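A sketch of the delta-centrality recipe for the efficiency-based member of the class: deactivate a node's links, recompute the global efficiency E, and use the relative drop ΔE/E as that node's centrality. The example uses networkx's global_efficiency on an unweighted classic test graph; the paper's definition also covers valued graphs and groups of nodes.

```python
import networkx as nx

def delta_information_centrality(G, node):
    """Relative drop in global efficiency when the node's links are
    deactivated (node kept, so the network size is unchanged)."""
    E = nx.global_efficiency(G)
    H = G.copy()
    H.remove_edges_from(list(H.edges(node)))
    return (E - nx.global_efficiency(H)) / E

G = nx.krackhardt_kite_graph()                 # classic small social network
scores = {v: delta_information_centrality(G, v) for v in G}
for v, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(v, round(s, 3))
```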


Journal ArticleDOI
TL;DR: In this article, it is suggested that the observed late flaring activity could be due to a secondary accretion episode induced by the delayed fall back of material dynamically stripped from a compact object during a merger or collision.
Abstract: Recent months have witnessed dramatic progress in our understanding of short γ-ray burst (SGRB) sources. There is now general agreement that SGRBs—or at least a substantial subset of them—are capable of producing directed outflows of relativistic matter with a kinetic luminosity exceeding by many millions that of active galactic nuclei. Given the twin requirements of energy and compactness, it is widely believed that SGRB activity is ultimately ascribable to a modest fraction of a solar mass of gas accreting on to a stellar mass black hole (BH) or to a precursor stage whose inevitable end point is a stellar mass BH. Astrophysical scenarios involving the violent birth of a rapidly rotating neutron star, or an accreting BH in a merging compact binary driven by gravitational wave emission are reviewed, along with other possible alternatives (collisions or collapse of compact objects). If a BH lies at the centre of this activity, then the fundamental pathways through which mass, angular momentum and energy can flow around and away from it play a key role in understanding how these prime movers can form collimated, relativistic outflows. Flow patterns near BHs accreting matter in the hypercritical regime, where photons are unable to provide cooling, but neutrinos do so efficiently, are discussed in detail, and we believe that they offer the best hope of understanding the central engine. On the other hand, statistical investigations of SGRB niches also furnish valuable information on their nature and evolutionary behaviour. The formation of particular kinds of progenitor sources appears to be correlated with environmental effects and cosmic epoch. In addition, there is now compelling evidence for the continuous fuelling of SGRB sources. We suggest here that the observed late flaring activity could be due to a secondary accretion episode induced by the delayed fall back of material dynamically stripped from a compact object during a merger or collision. Some important unresolved questions are identified, along with the types of observation that would discriminate among the various models. Many of the observed properties can be understood as resulting from outflows driven by hyperaccreting BHs and subsequently collimated into a pair of anti-parallel jets. It is likely that most of the radiation we receive is reprocessed by matter quite distant to the BH; SGRB jets, if powered by the hole itself, may therefore be one of the few observable consequences of how flows near nuclear density behave under the influence of strong gravitational fields.

Journal ArticleDOI
TL;DR: This work proposes an exact method for reducing the size of weighted (directed and undirected) complex networks while keeping their modularity invariant, and compares the modularity obtained with the Extremal Optimization algorithm before and after the size reduction.
Abstract: The ubiquity of modular structure in real-world complex networks has made it the focus of many attempts to understand the interplay between network topology and functionality. The best approaches to the identification of modular structure are based on the optimization of a quality function known as modularity. However, this optimization is a hard task, since the computational complexity of the problem is in the NP-hard class. Here we propose an exact method for reducing the size of weighted (directed and undirected) complex networks while keeping their modularity invariant. This size reduction allows the heuristic algorithms that optimize modularity to explore the modularity landscape more thoroughly. We compare the modularity obtained in several real complex networks by using the Extremal Optimization algorithm, before and after the size reduction, showing the improvement obtained. We speculate that the proposed analytical size reduction could be extended to an exact coarse graining of the network in the scope of real-space renormalization.
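For reference, the quality function whose optimization the size reduction is meant to speed up is Newman's modularity, Q = (1/2m) Σ_ij [W_ij − s_i s_j/(2m)] δ(c_i, c_j), with W the (weighted) adjacency matrix, s the node strengths and 2m the total weight. A short sketch that merely evaluates Q for a given partition (it does not implement the paper's size-reduction procedure):

```python
import networkx as nx

def modularity_weighted(G, partition):
    """Newman modularity Q of an undirected (optionally weighted) graph
    for a partition given as a dict node -> community label."""
    two_m = 2.0 * sum(d.get("weight", 1.0) for _, _, d in G.edges(data=True))
    strength = dict(G.degree(weight="weight"))
    Q = 0.0
    for i in G:
        for j in G:
            if partition[i] != partition[j]:
                continue
            w_ij = G[i][j].get("weight", 1.0) if G.has_edge(i, j) else 0.0
            Q += w_ij - strength[i] * strength[j] / two_m
    return Q / two_m

G = nx.karate_club_graph()
partition = {v: G.nodes[v]["club"] for v in G}     # the two known factions
print("Q =", round(modularity_weighted(G, partition), 3))
```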

Journal ArticleDOI
TL;DR: The VAPOR tool as mentioned in this paper employs data reduction, advanced visualization, and quantitative analysis operations to permit the interactive exploration of vast datasets using only a desktop PC equipped with a commodity graphics card.
Abstract: The ever increasing processing capabilities of the supercomputers available to computational scientists today, combined with the need for higher and higher resolution computational grids, have resulted in deluges of simulation data. Yet the computational resources and tools required to make sense of these vast numerical outputs through subsequent analysis are often far from adequate, making such analysis of the data a painstaking, if not a hopeless, task. In this paper, we describe a new tool for the scientific investigation of massive computational datasets. This tool (VAPOR) employs data reduction, advanced visualization, and quantitative analysis operations to permit the interactive exploration of vast datasets using only a desktop PC equipped with a commodity graphics card. We describe VAPOR's use in the study of two problems. The first, motivated by stellar envelope convection, investigates the hydrodynamic stability of compressible thermal starting plumes as they descend through a stratified layer of increasing density with depth. The second looks at current sheet formation in an incompressible helical magnetohydrodynamic flow to understand the early spontaneous development of quasi two-dimensional (2D) structures embedded within the 3D solution. Both of the problems were studied at sufficiently high spatial resolution, a grid of 504² × 2048 points for the first and 1536³ points for the second, to overwhelm the interactive capabilities of typically available analysis resources.

Journal ArticleDOI
TL;DR: In this article, a theoretical design is given for a locally resonant two-dimensional cylindrical structure, involving a pair of C-shaped voids in an elastic medium which the authors term double 'C' resonators (DCRs) together with embedded thin stiff bars, that displays a negative refraction effect in the low-frequency regime.
Abstract: We give a theoretical design for a locally resonant two-dimensional cylindrical structure involving a pair of C-shaped voids in an elastic medium, which we term double 'C' resonators (DCRs), and embedded thin stiff bars, that displays the negative refraction effect in the low frequency regime. DCRs are responsible for a low frequency band gap which hybridizes with a tiny gap associated with the presence of the thin bars. Using an asymptotic analysis, typical working frequencies are given in closed form: DCRs behave as Helmholtz resonators modeled by masses connected to clamped walls by springs on either side, while thin bars behave as a periodic bi-atomic chain of masses connected by springs. The discrete models give an accurate description of the location and width of the stop band in the case of the DCR and the first two dispersion bands for the periodic thin bars. We then combine our asymptotic formulae for arrays of DCRs and thin bars to design a composite structure that displays a negative refraction effect and has a negative phase velocity in a frequency band, and thus behaves in many ways as a negative refractive acoustic medium (NRAM). Finite element computations show that at this frequency, a slab of such NRAM works as a phononic flat superlens whereas two corners of such NRAM sharing a vertex act as an open resonator and can be used to confine sound to a certain extent.

Journal ArticleDOI
TL;DR: In this paper, a new approach to cosmological averaging is presented which implicitly solves the Sandage–de Vaucouleurs paradox; combined with a non-linear scheme for cosmological evolution with back-reaction via the Buchert equations, it yields a new observationally viable model.
Abstract: Cosmic acceleration is explained quantitatively, purely in general relativity with matter obeying the strong energy condition, as an apparent effect due to quasilocal gravitational energy differences that arise in the decoupling of bound systems from the global expansion of the universe. 'Dark energy' is recognized as a misidentification of those aspects of gravitational energy which by virtue of the equivalence principle cannot be localized. Matter is modelled as an inhomogeneous distribution of clusters of galaxies in bubble walls surrounding voids, as we observe. Gravitational energy differences between observers in bound systems, such as galaxies, and volume-averaged comoving locations in freely expanding space can be so large that the time dilation between the two significantly affects the parameters of any effective homogeneous isotropic model one fits to the universe. A new approach to cosmological averaging is presented, which implicitly solves the Sandage–de Vaucouleurs paradox. Comoving test particles in freely expanding space, which observe an isotropic cosmic microwave background (CMB), possess a quasilocal 'rest' energy E = γ̄(τ, x) mc² on the spatial hypersurfaces of homogeneity. Here γ̄ ≥ 1, representing the quasilocal gravitational energy of expansion and spatial curvature variations. Since all our cosmological measurements apart from the CMB involve photons exchanged between objects in bound systems, and since clocks in bound systems are largely unaffected, this is entirely consistent with observation. When combined with a non-linear scheme for cosmological evolution with back-reaction via the Buchert equations, a new observationally viable model is presented.

Journal ArticleDOI
TL;DR: In this article, the elastic properties of cylinders are taken into account and a gradient index lens for airborne sound is proposed, whose functionality is demonstrated by multiple scattering simulations, and it is shown that metamaterials with perfect matching of impedance with air are possible by using aerogel and rigid cylinders equally distributed in a square lattice.
Abstract: It has been shown that two-dimensional arrays of rigid or fluidlike cylinders in a fluid or a gas define, in the limit of large wavelengths, a class of acoustic metamaterials whose effective parameters (sound velocity and density) can be tailored up to a certain limit. This work goes a step further by considering arrays of solid cylinders in which the elastic properties of cylinders are taken into account. We have also treated mixtures of two different elastic cylinders. It is shown that both effects broaden the range of acoustic parameters available for designing metamaterials. For example, it is predicted that metamaterials with perfect matching of impedance with air are now possible by using aerogel and rigid cylinders equally distributed in a square lattice. As a potential application of the proposed metamaterial, we present a gradient index lens for airborne sound (i.e. a sonic Wood lens) whose functionality is demonstrated by multiple scattering simulations.

Journal ArticleDOI
TL;DR: This work introduces a clustering algorithm clique percolation method with weights (CPMw) for weighted networks based on the concept of percolating k-cliques with high enough intensity and shows that groups of three or more strong links prefer to cluster together in both original graphs.
Abstract: The inclusion of link weights into the analysis of network properties allows a deeper insight into the (often overlapping) modular structure of real-world webs. We introduce a clustering algorithm clique percolation method with weights (CPMw) for weighted networks based on the concept of percolating k-cliques with high enough intensity. The algorithm allows overlaps between the modules. First, we give detailed analytical and numerical results about the critical point of weighted k-clique percolation on (weighted) Erdős–Rényi graphs. Then, for a scientist collaboration web and a stock correlation graph we compute three-link weight correlations and with the CPMw the weighted modules. After reshuffling link weights in both networks and computing the same quantities for the randomized control graphs as well, we show that groups of three or more strong links prefer to cluster together in both original graphs.
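The filter at the heart of CPMw is the k-clique intensity, the geometric mean of the link weights inside the clique; a clique enters the percolation step only if its intensity exceeds a threshold. A minimal sketch of that filter (toy graph and threshold are illustrative only):

```python
import itertools
import math
import networkx as nx

def clique_intensity(G, nodes):
    """Geometric mean of the link weights inside a (sub)clique."""
    weights = [G[u][v]["weight"] for u, v in itertools.combinations(nodes, 2)]
    return math.exp(sum(math.log(w) for w in weights) / len(weights))

def strong_k_cliques(G, k, threshold):
    """k-cliques whose intensity exceeds the threshold (CPMw's filter step)."""
    found = set()
    for clique in nx.find_cliques(G):                 # maximal cliques
        for sub in itertools.combinations(sorted(clique), k):
            if clique_intensity(G, sub) > threshold:
                found.add(sub)
    return sorted(found)

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 2.0), ("b", "c", 3.0), ("a", "c", 1.5),
                           ("c", "d", 0.1), ("b", "d", 0.2), ("a", "d", 0.1)])
print(strong_k_cliques(G, k=3, threshold=1.0))       # -> [('a', 'b', 'c')]
```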

Journal ArticleDOI
TL;DR: A computationally efficient method to integrate the Langevin equation is proposed, and the applicability of the method to cytoskeletal modeling is illustrated with several examples.
Abstract: We develop a numerical method to simulate mechanical objects in a viscous medium at a scale where inertia is negligible. Fibers, spheres and other voluminous objects are represented with points. Different types of connections are used to link the points together and in this way create composite mechanical structures. The motion of such structures in a Brownian environment is described by a first-order multivariate Langevin equation. We propose a computationally efficient method to integrate the equation, and illustrate the applicability of the method to cytoskeletal modeling with several examples.
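A minimal sketch of what integrating a first-order (overdamped) Langevin equation looks like, here with the simple Euler–Maruyama scheme for a single bead in a harmonic trap and dimensionless units; the paper's multivariate, constrained integrator for composite fiber/sphere structures is more involved and is not reproduced here.

```python
import numpy as np

def overdamped_langevin(x0, k=1.0, gamma=1.0, kT=1.0, dt=0.01,
                        steps=20000, seed=1):
    """Euler-Maruyama integration of gamma * dx/dt = -k*x + thermal noise,
    i.e. dx = -(k/gamma)*x*dt + sqrt(2*kT*dt/gamma)*xi, inertia neglected."""
    rng = np.random.default_rng(seed)
    x = np.empty(steps + 1)
    x[0] = x0
    for n in range(steps):
        drift = -(k / gamma) * x[n] * dt
        noise = np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal()
        x[n + 1] = x[n] + drift + noise
    return x

traj = overdamped_langevin(x0=0.0)
# Equilibrium check: the variance should approach kT/k = 1 in these units
print("sample variance:", round(traj[5000:].var(), 2))
```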

Journal ArticleDOI
TL;DR: In this article, a subwavelength slit in a thick metal plate was explored and two orientations of the cuts, parallel and perpendicular to the long axis of the slit, were examined.
Abstract: Resonant transmission of microwaves through a subwavelength slit in a thick metal plate, into which subwavelength cuts have been made, is explored. Two orientations of the cuts, parallel and perpendicular to the long axis of the slit, are examined. The results show that the slits act as though filled with a medium with anisotropic effective relative permeability which at low mode numbers has the two values ~(1, 9.1), increasing to ~(1, 14.4) for higher mode numbers.

Journal ArticleDOI
TL;DR: In this paper, a high-resolution detection technique is introduced which allows us to accurately determine the fine structure in the photoluminescence emission and therefore select appropriate QDs for quantum state tomography.
Abstract: The radiative biexciton-exciton decay in a semiconductor quantum dot (QD) has the potential of being a source of triggered polarization-entangled photon pairs. However, in most cases the anisotropy-induced exciton fine structure splitting destroys this entanglement. Here, we present measurements on improved QD structures, providing both significantly reduced inhomogeneous emission linewidths and near-zero fine structure splittings. A high-resolution detection technique is introduced which allows us to accurately determine the fine structure in the photoluminescence emission and therefore select appropriate QDs for quantum state tomography. We were able to verify the conditions of entangled or classically correlated photon pairs in full consistency with the observed fine structure properties. Furthermore, we demonstrate reliable polarization entanglement for elevated temperatures up to 30 K. The fidelity of the maximally entangled state decreases only a little, from 72% at 4 K to 68% at 30 K. This is especially encouraging for future implementations in practical devices.

Journal ArticleDOI
TL;DR: All these tests—based on the very same data—give rise to quantitative estimates in terms of entanglement measures, and if a test is strongly violated, one can also infer that the state was quantitatively very much entangled, in the bipartite and multipartite setting.
Abstract: Entanglement witnesses provide tools to detect entanglement in experimental situations without the need of having full tomographic knowledge about the state. If one estimates in an experiment an expectation value smaller than zero, one can directly infer that the state has been entangled, or specifically multi-partite entangled, in the first place. In this paper, we emphasize that all these tests—based on the very same data—give rise to quantitative estimates in terms of entanglement measures: 'If a test is strongly violated, one can also infer that the state was quantitatively very much entangled'. We consider various measures of entanglement, including the negativity, the entanglement of formation, and the robustness of entanglement, in the bipartite and multipartite setting. As examples, we discuss several experiments in the context of quantum state preparation that have recently been performed.
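As a concrete example of the entanglement measures mentioned above, the negativity of a two-qubit state follows directly from the partial transpose of its density matrix. The sketch below evaluates it for a Werner-like mixture; note that it computes the measure from the full ρ rather than bounding it from a witness expectation value, which is the paper's actual point.

```python
import numpy as np

def negativity(rho):
    """Sum of |negative eigenvalues| of the partial transpose of a
    two-qubit density matrix, equal to (||rho^T_B||_1 - 1) / 2."""
    r = rho.reshape(2, 2, 2, 2)
    rho_pt = r.transpose(0, 3, 2, 1).reshape(4, 4)    # transpose qubit-B indices
    eigs = np.linalg.eigvalsh(rho_pt)
    return float(-eigs[eigs < 0].sum())

# Werner-like mixture of a Bell state with white noise
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
bell = np.outer(phi_plus, phi_plus)
for p in (0.2, 0.5, 0.9):
    rho = p * bell + (1 - p) * np.eye(4) / 4
    print(f"p = {p}:  negativity = {negativity(rho):.3f}")
```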

Journal ArticleDOI
TL;DR: In this paper, the authors give a detailed report on a programme of direct numerical simulations of incompressible, nonhelical, randomly forced magnetohydrodynamic (MHD) turbulence that are used to settle a long-standing issue in turbulent dynamo theory and demonstrate that the fluctuation dynamo exists in the limit of large magnetic Reynolds number Rm ≫ 1 and small magnetic Prandtl number Pm ≪ 1.
Abstract: This paper is a detailed report on a programme of direct numerical simulations of incompressible nonhelical randomly forced magnetohydrodynamic (MHD) turbulence that are used to settle a long-standing issue in turbulent dynamo theory and demonstrate that the fluctuation dynamo exists in the limit of large magnetic Reynolds number Rm ≫ 1 and small magnetic Prandtl number Pm ≪ 1. The dependence of the critical Rmc for dynamo on the hydrodynamic Reynolds number Re is obtained for 1 ≲ Re ≲ 6700. In the limit Pm ≪ 1, Rmc is at most three times larger than for the previously well established dynamo at large and moderate Prandtl numbers: Rmc ≲ 200 for Re ≳ 6000 compared to Rmc ~ 60 for Pm ≥ 1. The stability curve Rmc(Re) (and, it is argued, the nature of the dynamo) is substantially different from the case of the simulations and liquid-metal experiments with a mean flow. It is not as yet possible to determine numerically whether the growth rate of the magnetic energy is ∝ Rm^(1/2) in the limit Re ≫ Rm ≫ 1, as should be the case if the dynamo is driven by the inertial-range motions at the resistive scale, or tends to an Rm-independent value comparable to the turnover rate of the outer-scale motions. The magnetic-energy spectrum in the low-Pm regime is qualitatively different from the Pm ≥ 1 case and appears to develop a negative spectral slope, although current resolutions are insufficient to determine its asymptotic form. At Rm below the dynamo threshold, the magnetic fluctuations induced via the tangling by turbulence of a weak mean field are investigated and the possibility of a k^−1 spectrum above the resistive scale is examined. At low Rm < 1, the induced fluctuations are well described by the quasistatic approximation; the k^−11/3 spectrum is confirmed for the first time in direct numerical simulations. Applications of the results on turbulent induction to understanding the nonlocal energy transfer from the dynamo-generated magnetic field to smaller-scale magnetic fluctuations are discussed. The results reported here are of fundamental importance for understanding the genesis of small-scale magnetic fields in cosmic plasmas.
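For orientation, the two dimensionless control parameters and the regime probed here are (standard definitions, with U and L the outer velocity and length scales, ν the kinematic viscosity and η the magnetic diffusivity):

```latex
\mathrm{Re} = \frac{U L}{\nu}, \qquad
\mathrm{Rm} = \frac{U L}{\eta}, \qquad
\mathrm{Pm} = \frac{\mathrm{Rm}}{\mathrm{Re}} = \frac{\nu}{\eta},
\qquad \text{with } \mathrm{Rm} \gg 1,\ \mathrm{Pm} \ll 1
\;\Rightarrow\; \mathrm{Re} \gg \mathrm{Rm} \gg 1 .
```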

Journal ArticleDOI
TL;DR: In this paper, the photon wave function and its equation of motion are established from the Einstein energy?momentum?mass relation, assuming a local energy density. And the proper Lorentz-invariant single-photon scalar product is found to be non-local in coordinate space, and correspond to orthogonalization of the Titulaer?Glauber wave-packet modes.
Abstract: The monochromatic Dirac and polychromatic Titulaer?Glauber quantized field theories (QFTs) of electromagnetism are derived from a photon-energy wave function in much the same way that one derives QFT for electrons, i.e., by quantization of a single-particle wave function. The photon wave function and its equation of motion are established from the Einstein energy?momentum?mass relation, assuming a local energy density. This yields a theory of photon wave mechanics (PWM). The proper Lorentz-invariant single-photon scalar product is found to be non-local in coordinate space, and is shown to correspond to orthogonalization of the Titulaer?Glauber wave-packet modes. The wave functions of PWM and mode functions of QFT are shown to be equivalent, evolving via identical equations of motion, and completely describe photonic states. We generalize PWM to two or more photons, and show how to switch between the PWM and QFT viewpoints. The second-order coherence tensors of classical coherence theory and the two-photon wave functions are shown to propagate equivalently. We give examples of beam-like states, which can be used as photon wave functions in PWM, or modes in QFT. We propose a practical mode converter based on spectral filtering to convert between wave packets and their corresponding biorthogonal dual wave packets.

Journal ArticleDOI
TL;DR: In this article, the authors demonstrate sub-millijoule energy, sub-4-fs-duration near-infrared laser pulses with a controlled waveform comprised of approximately 1.5 optical cycles within the full-width at half-maximum (FWHM) of their temporal intensity profile.
Abstract: We demonstrate sub-millijoule-energy, sub-4-fs-duration near-infrared laser pulses with a controlled waveform comprised of approximately 1.5 optical cycles within the full-width at half-maximum (FWHM) of their temporal intensity profile. We further demonstrate the utility of these pulses for producing high-order harmonic continua of unprecedented bandwidth at photon energies around 100 eV. Ultra-broadband coherent continua extending from 90 eV to more than 130 eV with smooth spectral intensity distributions that exhibit dramatic, never-before-observed sensitivity to the carrier-envelope offset (CEO) phase of the driver laser pulse were generated. These results suggest the feasibility of sub-100-attosecond XUV pulse generation for attosecond spectroscopy in the 100 eV range, and of a simple yet highly sensitive on-line CEO phase detector with sub-50-ms response time.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the possible Bohmian paths are naively observable from a large enough ensemble, assuming that one obtains the dynamics of BM, and that one requires that a particular path be determined experimentally.
Abstract: Bohmian mechanics (BM) is a popular interpretation of quantum mechanics (QM) in which particles have real positions. The velocity of a point x in configuration space is defined as the standard probability current j(x) divided by the probability density P(x). However, this 'standard' j is in fact only one of infinitely many that transform correctly and satisfy . In this paper, I show that a particular j is singled out if one requires that j be determined experimentally as a weak value, using a technique that would make sense to a physicist with no knowledge of QM. This 'naively observable' j seems the most natural way to define j operationally. Moreover, I show that this operationally defined j equals the standard j, so, assuming , one obtains the dynamics of BM. It follows that the possible Bohmian paths are naively observable from a large enough ensemble. Furthermore, this justification for the Bohmian law of motion singles out x as the hidden variable, because (for example) the analogously defined momentum current is in general incompatible with the evolution of the momentum distribution. Finally I discuss how, in this setting, the usual quantum probabilities can be motivated from a Bayesian standpoint, via the principle of indifference.
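A small numerical sketch of the objects involved: for a one-dimensional wavefunction the standard current is j = (ħ/m) Im(ψ* ∂xψ) and the Bohmian velocity field is v = j/P with P = |ψ|². The example evaluates this for a Gaussian packet with mean momentum k0, in assumed units ħ = m = 1; it illustrates the definitions only, not the weak-value measurement scheme of the paper.

```python
import numpy as np

hbar = m = 1.0
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

# Gaussian wave packet with mean momentum k0 (units hbar = m = 1 assumed)
k0, sigma = 2.0, 1.0
psi = np.exp(-x**2 / (4 * sigma**2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)          # normalize

P = np.abs(psi)**2                                    # probability density
dpsi_dx = np.gradient(psi, dx)
j = (hbar / m) * np.imag(np.conj(psi) * dpsi_dx)      # standard current
v = j / P                                             # Bohmian velocity field

print("velocity at x = 0:", round(v[len(x) // 2], 3)) # ~ k0 for this packet
```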

Journal ArticleDOI
TL;DR: It is found that directed modules of real-world graphs inherently overlap and the investigated networks can be classified into two major groups in terms of the overlaps between the modules.
Abstract: A search technique locating network modules, i.e. internally densely connected groups of nodes in directed networks, is introduced by extending the clique percolation method originally proposed for undirected networks. After giving a suitable definition for directed modules we investigate their percolation transition in the Erdős–Rényi graph both analytically and numerically. We also analyse four real-world directed networks, including Google's own web-pages, an email network, a word association graph and the transcriptional regulatory network of the yeast Saccharomyces cerevisiae. The obtained directed modules are validated by additional information available for the nodes. We find that directed modules of real-world graphs inherently overlap and the investigated networks can be classified into two major groups in terms of the overlaps between the modules. Accordingly, in the word-association network and Google's web-pages, overlaps are likely to contain in-hubs, whereas the modules in the email and transcriptional regulatory network tend to overlap via out-hubs.

Journal ArticleDOI
TL;DR: The Large Hadron Collider (LHC) as discussed by the authors is a state-of-the-art particle accelerator that will provide proton-proton collisions with unprecedented luminosity and energy.
Abstract: The Large Hadron Collider (LHC), now close to completion at CERN, will provide proton–proton collisions with unprecedented luminosity and energy. It will allow the Standard Model of particle physics to be explored in an energy range where new phenomena can be studied. This includes the validity of the Higgs mechanism, supersymmetry and CP violation. The machine presents a number of novel features, discussed in detail below.

Journal ArticleDOI
TL;DR: It is shown how a useful measure of transfinite dimension may be defined and applied to the small-world nets and shown how first-passage time for diffusion and resistance between hubs (the most connected nodes) scale differently than for other nodes.
Abstract: We explore the concepts of self-similarity, dimensionality, and (multi)scaling in a new family of recursive scale-free nets that yield themselves to exact analysis through renormalization techniques. All nets in this family are self-similar and some are fractals—possessing a finite fractal dimension—while others are small-world (their diameter grows logarithmically with their size) and are infinite-dimensional. We show how a useful measure of transfinite dimension may be defined and applied to the small-world nets. Concerning multiscaling, we show how first-passage time for diffusion and resistance between hubs (the most connected nodes) scale differently than for other nodes. Despite the different scalings, the Einstein relation between diffusion and conductivity holds separately for hubs and nodes. The transfinite exponents of small-world nets obey Einstein relations analogous to those in fractal nets.

Journal ArticleDOI
TL;DR: The issue of how the characterization of the asymptotic states of the evolutionary dynamics depends on the initial concentration of cooperators is addressed and it is found that the measure and the connectedness properties of the set of nodes where cooperation reaches fixation is largely independent of initial conditions.
Abstract: Recent studies on the evolutionary dynamics of the prisoner's dilemma game in scale-free networks have demonstrated that the heterogeneity of the network interconnections enhances the evolutionary success of cooperation. In this paper we address the issue of how the characterization of the asymptotic states of the evolutionary dynamics depends on the initial concentration of cooperators. We find that the measure and the connectedness properties of the set of nodes where cooperation reaches fixation is largely independent of initial conditions, in contrast with the behaviour of both the set of nodes where defection is fixed, and the fluctuating nodes. We also check for the robustness of these results when varying the degree heterogeneity along a one-parametric family of networks interpolating between the class of Erdős–Rényi graphs and the Barabási–Albert networks.
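A compact sketch of the type of simulation discussed above: the weak prisoner's dilemma on a Barabási–Albert graph, started from a prescribed initial fraction of cooperators, with imitation of a random neighbour proportional to the payoff difference. The payoff values and update rule are common choices in this literature and are assumptions here, not necessarily those of the paper.

```python
import random
import networkx as nx

B = 1.6                           # temptation to defect (weak PD: R=1, P=S=0)

def payoff(me, other):            # strategy 1 = cooperate, 0 = defect
    if me and other:
        return 1.0
    return B if (not me and other) else 0.0

def update(G, strat):
    pay = {v: sum(payoff(strat[v], strat[u]) for u in G[v]) for v in G}
    new = dict(strat)
    for v in G:
        u = random.choice(list(G[v]))                          # random neighbour
        if pay[u] > pay[v]:
            k = max(G.degree(u), G.degree(v))
            if random.random() < (pay[u] - pay[v]) / (B * k):  # proportional imitation
                new[v] = strat[u]
    return new

random.seed(0)
G = nx.barabasi_albert_graph(1000, 2)
strat = {v: 1 if random.random() < 0.5 else 0 for v in G}      # initial cooperators
for _ in range(200):
    strat = update(G, strat)
print("final cooperator fraction:", sum(strat.values()) / G.number_of_nodes())
```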

Journal ArticleDOI
TL;DR: The study of synchronization in a multilevel complex network model of cortex can provide insights into the relationship between network topology and functional organization of complex brain networks.
Abstract: The brain is one of the most complex systems in nature, with a structured complex connectivity. Recently, large-scale corticocortical connectivities, both structural and functional, have received a great deal of research attention, especially using the approach of complex network analysis. Understanding the relationship between structural and functional connectivity is of crucial importance in neuroscience. Here we try to illuminate this relationship by studying synchronization dynamics in a realistic anatomical network of cat cortical connectivity. We model the nodes (cortical areas) by a neural mass model (population model) or by a subnetwork of interacting excitable neurons (multilevel model). We show that if the dynamics is characterized by well-defined oscillations (neural mass model and subnetworks with strong couplings), the synchronization patterns are mainly determined by the node intensity (total input strengths of a node) and the detailed network topology is rather irrelevant. On the other hand, the multilevel model with weak couplings displays more irregular, biologically plausible dynamics, and the synchronization patterns reveal a hierarchical cluster organization in the network structure. The relationship between structural and functional connectivity at different levels of synchronization is explored. Thus, the study of synchronization in a multilevel complex network model of cortex can provide insights into the relationship between network topology and functional organization of complex brain networks.