
Showing papers in "Reports on Progress in Physics in 2016"


Journal ArticleDOI
TL;DR: Recent progress in the physics of metasurfaces operating at wavelengths ranging from microwave to visible is reviewed, with opinions of opportunities and challenges in this rapidly developing research field.
Abstract: Metamaterials are composed of periodic subwavelength metal/dielectric structures that resonantly couple to the electric and/or magnetic components of the incident electromagnetic fields, exhibiting properties that are not found in nature. This class of micro- and nano-structured artificial media has attracted great interest during the past 15 years and yielded ground-breaking electromagnetic and photonic phenomena. However, the high losses and strong dispersion associated with the resonant responses and the use of metallic structures, as well as the difficulty in fabricating the micro- and nanoscale 3D structures, have hindered practical applications of metamaterials. Planar metamaterials with subwavelength thickness, or metasurfaces, consisting of single-layer or few-layer stacks of planar structures, can be readily fabricated using lithography and nanoprinting methods, and the ultrathin thickness in the wave propagation direction can greatly suppress the undesirable losses. Metasurfaces enable a spatially varying optical response (e.g. scattering amplitude, phase, and polarization), mold optical wavefronts into shapes that can be designed at will, and facilitate the integration of functional materials to accomplish active control and greatly enhanced nonlinear response. This paper reviews recent progress in the physics of metasurfaces operating at wavelengths ranging from microwave to visible. We provide an overview of key metasurface concepts such as anomalous reflection and refraction, and introduce metasurfaces based on the Pancharatnam-Berry phase and Huygens' metasurfaces, as well as their use in wavefront shaping and beam forming applications, followed by a discussion of polarization conversion in few-layer metasurfaces and their related properties. An overview of dielectric metasurfaces reveals their ability to realize unique functionalities coupled with Mie resonances and their low ohmic losses. We also describe metasurfaces for wave guidance and radiation control, as well as active and nonlinear metasurfaces. Finally, we conclude by providing our opinions of opportunities and challenges in this rapidly developing research field.
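
The anomalous refraction mentioned above is usually summarized by the generalized Snell's law; quoted here for orientation (standard result for a metasurface imposing an interface phase gradient $d\Phi/dx$):
$$ n_{t}\sin\theta_{t} - n_{i}\sin\theta_{i} = \frac{\lambda_{0}}{2\pi}\,\frac{d\Phi}{dx}, $$
so a constant phase gradient steers a beam into a designed 'anomalous' direction, while $d\Phi/dx = 0$ recovers ordinary refraction.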

1,528 citations


Journal ArticleDOI
TL;DR: In this paper, the role of torsion in gravity has been extensively investigated along the main direction of bringing gravity closer to its gauge formulation and incorporating spin in a geometric description.
Abstract: Over recent decades, the role of torsion in gravity has been extensively investigated along the main direction of bringing gravity closer to its gauge formulation and incorporating spin in a geometric description. Here we review various torsional constructions, from teleparallel, to Einstein-Cartan, and metric-affine gauge theories, resulting in extending torsional gravity in the paradigm of f(T) gravity, where f(T) is an arbitrary function of the torsion scalar. Based on this theory, we further review the corresponding cosmological and astrophysical applications. In particular, we study cosmological solutions arising from f(T) gravity, both at the background and perturbation levels, in different eras along the cosmic expansion. The f(T) gravity construction can provide a theoretical interpretation of the late-time universe acceleration, alternative to a cosmological constant, and it can easily accommodate the regular thermal expansion history, including the radiation and cold dark matter dominated phases. Furthermore, if one traces back to very early times, for a certain class of f(T) models, a sufficiently long period of inflation can be achieved and hence can be investigated by cosmic microwave background observations-or, alternatively, the Big Bang singularity can be avoided at even earlier moments due to the appearance of non-singular bounces. Various observational constraints, especially the bounds coming from the large-scale structure data in the case of f(T) cosmology, as well as the behavior of gravitational waves, are described in detail. Moreover, the spherically symmetric and black hole solutions of the theory are reviewed. Additionally, we discuss various extensions of the f(T) paradigm. Finally, we consider the relation with other modified gravitational theories, such as those based on curvature, like f(R) gravity, trying to illuminate the subject of which formulation, or combination of formulations, might be more suitable for quantization ventures and cosmological applications.
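
For orientation, the f(T) action is commonly written as follows (one frequent convention; a sketch, not necessarily the normalization adopted in the review):
$$ S = \frac{1}{16\pi G}\int \mathrm{d}^{4}x\; e\, f(T) + S_{m}, \qquad e = \det\!\left(e^{A}{}_{\mu}\right) = \sqrt{-g}, $$
where T is the torsion scalar built from the tetrad $e^{A}{}_{\mu}$; the choice f(T) = T reproduces the teleparallel equivalent of general relativity, whose field equations coincide with those of GR.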

969 citations


Journal ArticleDOI
Sergey Alekhin, Wolfgang Altmannshofer1, Takehiko Asaka2, Brian Batell3, Fedor Bezrukov4, Kyrylo Bondarenko5, Alexey Boyarsky5, Ki-Young Choi6, Cristóbal Corral7, Nathaniel Craig8, David Curtin9, Sacha Davidson10, Sacha Davidson11, André de Gouvêa12, Stefano Dell'Oro, Patrick deNiverville13, P. S. Bhupal Dev14, Herbi K. Dreiner15, Marco Drewes16, Shintaro Eijima17, Rouven Essig18, Anthony Fradette13, Björn Garbrecht16, Belen Gavela19, Gian F. Giudice3, Mark D. Goodsell20, Mark D. Goodsell21, Dmitry Gorbunov22, Stefania Gori1, Christophe Grojean23, Alberto Guffanti24, Thomas Hambye25, Steen Honoré Hansen24, Juan Carlos Helo7, Juan Carlos Helo26, Pilar Hernández27, Alejandro Ibarra16, Artem Ivashko28, Artem Ivashko5, Eder Izaguirre1, Joerg Jaeckel29, Yu Seon Jeong30, Felix Kahlhoefer, Yonatan Kahn31, Andrey Katz3, Andrey Katz32, Andrey Katz33, Choong Sun Kim30, Sergey Kovalenko7, Gordan Krnjaic1, Valery E. Lyubovitskij34, Valery E. Lyubovitskij35, Valery E. Lyubovitskij36, Simone Marcocci, Matthew McCullough3, David McKeen37, Guenakh Mitselmakher38, Sven Moch39, Rabindra N. Mohapatra9, David E. Morrissey40, Maksym Ovchynnikov28, Emmanuel A. Paschos, Apostolos Pilaftsis14, Maxim Pospelov13, Maxim Pospelov1, Mary Hall Reno41, Andreas Ringwald, Adam Ritz13, Leszek Roszkowski, Valery Rubakov, Oleg Ruchayskiy24, Oleg Ruchayskiy17, Ingo Schienbein42, Daniel Schmeier15, Kai Schmidt-Hoberg, Pedro Schwaller3, Goran Senjanovic43, Osamu Seto44, Mikhail Shaposhnikov17, Lesya Shchutska38, J. Shelton45, Robert Shrock18, Brian Shuve1, Michael Spannowsky46, Andrew Spray47, Florian Staub3, Daniel Stolarski3, Matt Strassler33, Vladimir Tello, Francesco Tramontano48, Anurag Tripathi, Sean Tulin49, Francesco Vissani, Martin Wolfgang Winkler15, Kathryn M. Zurek50, Kathryn M. Zurek51 
Perimeter Institute for Theoretical Physics1, Niigata University2, CERN3, University of Connecticut4, Leiden University5, Korea Astronomy and Space Science Institute6, Federico Santa María Technical University7, University of California, Santa Barbara8, University of Maryland, College Park9, University of Lyon10, Claude Bernard University Lyon 111, Northwestern University12, University of Victoria13, University of Manchester14, University of Bonn15, Technische Universität München16, École Polytechnique Fédérale de Lausanne17, Stony Brook University18, Autonomous University of Madrid19, University of Paris20, Centre national de la recherche scientifique21, Moscow Institute of Physics and Technology22, Autonomous University of Barcelona23, University of Copenhagen24, Université libre de Bruxelles25, University of La Serena26, University of Valencia27, Taras Shevchenko National University of Kyiv28, Heidelberg University29, Yonsei University30, Princeton University31, University of Geneva32, Harvard University33, University of Tübingen34, Tomsk Polytechnic University35, Tomsk State University36, University of Washington37, University of Florida38, University of Hamburg39, TRIUMF40, University of Iowa41, University of Grenoble42, International Centre for Theoretical Physics43, Hokkai Gakuen University44, University of Illinois at Urbana–Champaign45, Durham University46, University of Melbourne47, University of Naples Federico II48, York University49, University of California, Berkeley50, Lawrence Berkeley National Laboratory51
TL;DR: It is demonstrated that the SHiP experiment has a unique potential to discover new physics and can directly probe a number of solutions of beyond the standard model puzzles, such as neutrino masses, baryon asymmetry of the Universe, dark matter, and inflation.
Abstract: This paper describes the physics case for a new fixed target facility at CERN SPS. The SHiP (search for hidden particles) experiment is intended to hunt for new physics in the largely unexplored domain of very weakly interacting particles with masses below the Fermi scale, inaccessible to the LHC experiments, and to study tau neutrino physics. The same proton beam setup can be used later to look for decays of tau-leptons with lepton flavour number non-conservation, $\tau \to 3\mu $ and to search for weakly-interacting sub-GeV dark matter candidates. We discuss the evidence for physics beyond the standard model and describe interactions between new particles and four different portals—scalars, vectors, fermions or axion-like particles. We discuss motivations for different models, manifesting themselves via these interactions, and how they can be probed with the SHiP experiment and present several case studies. The prospects to search for relatively light SUSY and composite particles at SHiP are also discussed. We demonstrate that the SHiP experiment has a unique potential to discover new physics and can directly probe a number of solutions of beyond the standard model puzzles, such as neutrino masses, baryon asymmetry of the Universe, dark matter, and inflation.
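
Schematically, the four portals referred to above couple the Standard Model to a hidden sector through the following lowest-dimension operators (a common parametrization, quoted for orientation; signs and normalizations vary between papers):
$$ \mathcal{L}_{\text{vector}} \sim \epsilon\, B_{\mu\nu} F'^{\mu\nu}, \qquad \mathcal{L}_{\text{scalar}} \sim (\mu S + \lambda S^{2})\, H^{\dagger}H, \qquad \mathcal{L}_{\text{fermion}} \sim y_{N}\, \bar{L}\tilde{H} N, \qquad \mathcal{L}_{\text{ALP}} \sim \frac{a}{f_{a}}\, F_{\mu\nu}\tilde{F}^{\mu\nu}, $$
with $F'_{\mu\nu}$ the hidden-photon field strength, S a hidden scalar mixing with the Higgs, N a heavy neutral lepton, and a an axion-like particle; the small couplings $\epsilon$, $\mu$, $\lambda$, $y_N$ and $1/f_a$ are what make the hidden particles long-lived and hence suited to a beam-dump search like SHiP.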

842 citations


Journal ArticleDOI
TL;DR: This work reviews selected advances in the theoretical understanding of complex quantum many-body systems with regard to emergent notions of quantum statistical mechanics and elucidates the role played by key concepts, such as Lieb-Robinson bounds, entanglement growth, typicality arguments, quantum maximum entropy principles and the generalised Gibbs ensembles.
Abstract: We review selected advances in the theoretical understanding of complex quantum many-body systems with regard to emergent notions of quantum statistical mechanics. We cover topics such as equilibration and thermalisation in pure state statistical mechanics, the eigenstate thermalisation hypothesis, the equivalence of ensembles, non-equilibration dynamics following global and local quenches as well as ramps. We also address initial state independence, absence of thermalisation, and many-body localisation. We elucidate the role played by key concepts for these phenomena, such as Lieb-Robinson bounds, entanglement growth, typicality arguments, quantum maximum entropy principles and the generalised Gibbs ensembles, and quantum (non-)integrability. We put emphasis on rigorous approaches and present the most important results in a unified language.
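
As a pointer to the central formula, the eigenstate thermalisation hypothesis is usually stated as an ansatz for the matrix elements of a local observable O in energy eigenstates (standard form, reproduced here for convenience):
$$ \langle m | O | n \rangle = O(\bar{E})\,\delta_{mn} + e^{-S(\bar{E})/2}\, f_{O}(\bar{E}, \omega)\, R_{mn}, $$
with $\bar{E} = (E_m + E_n)/2$, $\omega = E_m - E_n$, S the thermodynamic entropy, $f_O$ a smooth function and $R_{mn}$ a random variable with zero mean and unit variance; the smooth diagonal part is what makes individual eigenstates look thermal.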

647 citations


Journal ArticleDOI
TL;DR: This review discusses the current state of the art on how soft materials break and detach from solid surfaces, and defines the important length scales in the problem, in particular the elasto-adhesive length Γ/E, whose ratio to the sample size controls the fracture mechanisms.
Abstract: Soft materials are materials with a low shear modulus relative to their bulk modulus and where elastic restoring forces are mainly of entropic origin. A sparse population of strong bonds connects molecules together and prevents macroscopic flow. In this review we discuss the current state of the art on how these soft materials break and detach from solid surfaces. We focus on how stresses and strains are localized near the fracture plane and how elastic energy can flow from the bulk of the material to the crack tip. Adhesion of pressure-sensitive-adhesives, fracture of gels and rubbers are specifically addressed and the key concepts are pointed out. We define the important length scales in the problem and in particular the elasto-adhesive length Γ/E where Γ is the fracture energy and E is the elastic modulus, and how the ratio between sample size and Γ/E controls the fracture mechanisms. Theoretical concepts bridging solid mechanics and polymer physics are rationalized and illustrated by micromechanical experiments and mechanisms of fracture are described in detail. Open questions and emerging concepts are discussed at the end of the review.
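
To make the scale Γ/E concrete, a rough order-of-magnitude comparison (illustrative numbers, not taken from the paper):
$$ \ell_{EA} = \frac{\Gamma}{E}: \quad \text{soft gel } (\Gamma \sim 10\ \mathrm{J\,m^{-2}},\ E \sim 10\ \mathrm{kPa}) \Rightarrow \ell_{EA} \sim 1\ \mathrm{mm}; \qquad \text{glassy polymer } (\Gamma \sim 100\ \mathrm{J\,m^{-2}},\ E \sim 1\ \mathrm{GPa}) \Rightarrow \ell_{EA} \sim 0.1\ \mu\mathrm{m}. $$
Samples smaller than $\ell_{EA}$ deform over their whole volume before a crack can localize, which is why the ratio of sample size to Γ/E selects the fracture mechanism.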

507 citations


Journal ArticleDOI
TL;DR: In this paper, the authors review the motivations underlying the need to introduce such an interaction, its influence on the background dynamics, and how it modifies the evolution of linear perturbations; they test models against the most recent observational data and find that the interaction is compatible with current astronomical and cosmological data.
Abstract: Models where dark matter and dark energy interact with each other have been proposed to solve the coincidence problem. We review the motivations underlying the need to introduce such an interaction, its influence on the background dynamics, and how it modifies the evolution of linear perturbations. We test models using the most recent observational data and find that the interaction is compatible with the current astronomical and cosmological data. Finally, we describe the forthcoming data sets from current and future facilities that are being constructed or designed, which will allow a clearer understanding of the physics of the dark sector.
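
At the background level, such models are usually written as coupled continuity equations (a standard parametrization, shown for orientation):
$$ \dot{\rho}_{dm} + 3H\rho_{dm} = +Q, \qquad \dot{\rho}_{de} + 3H(1+w)\rho_{de} = -Q, $$
where Q encodes the energy-exchange rate between the two dark components; Q > 0 transfers energy from dark energy to dark matter, and Q = 0 recovers the uncoupled ΛCDM-like case.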

506 citations


Journal ArticleDOI
TL;DR: A review of the progress in the construction of modified gravity models as alternatives to dark energy as well as the development of cosmological tests of gravity can be found in this paper.
Abstract: We review recent progress in the construction of modified gravity models as alternatives to dark energy as well as the development of cosmological tests of gravity. Einstein's theory of general relativity (GR) has been tested accurately within the local universe, i.e. the Solar System, but this leaves the possibility open that it is not a good description of gravity at the largest scales in the Universe. This being said, the standard model of cosmology assumes GR on all scales. In 1998, astronomers made the surprising discovery that the expansion of the Universe is accelerating, not slowing down. This late-time acceleration of the Universe has become the most challenging problem in theoretical physics. Within the framework of GR, the acceleration would originate from an unknown dark energy. Alternatively, it could be that there is no dark energy and GR itself is in error on cosmological scales. In this review, we first give an overview of recent developments in modified gravity theories including f(R) gravity, braneworld gravity, Horndeski theory and massive/bigravity theory. We then focus on common properties these models share, such as screening mechanisms they use to evade the stringent Solar System tests. Once armed with a theoretical knowledge of modified gravity models, we move on to discuss how we can test modifications of gravity on cosmological scales. We present tests of gravity using linear cosmological perturbations and review the latest constraints on deviations from the standard ΛCDM model. Since screening mechanisms leave distinct signatures in the non-linear structure formation, we also review novel astrophysical tests of gravity using clusters, dwarf galaxies and stars. The last decade has seen a number of new constraints placed on gravity from astrophysical to cosmological scales. Thanks to on-going and future surveys, cosmological tests of gravity will enjoy another, possibly even more, exciting ten years.
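
A common way to confront such theories with linear-perturbation data is the phenomenological (μ, Σ) parametrization (conventions differ between surveys; this is one frequently used form):
$$ k^{2}\Psi = -4\pi G\, a^{2}\, \mu(a,k)\, \rho\Delta, \qquad k^{2}(\Phi + \Psi) = -8\pi G\, a^{2}\, \Sigma(a,k)\, \rho\Delta, $$
so that μ modifies the growth of structure felt by non-relativistic matter while Σ modifies lensing; GR with a cosmological constant corresponds to μ = Σ = 1.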

482 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide a systematic introduction to the open system Keldysh functional integral approach, which is the proper technical tool to accomplish a merger of quantum optics and many-body physics, and leverages the power of modern quantum field theory to driven open quantum systems.
Abstract: Recent experimental developments in diverse areas, ranging from cold atomic gases to light-driven semiconductors to microcavity arrays, bring into focus systems located at the interface of quantum optics, many-body physics and statistical mechanics. They share in common that coherent and driven-dissipative quantum dynamics occur on an equal footing, creating genuine non-equilibrium scenarios without immediate counterpart in equilibrium condensed matter physics. This concerns both their non-thermal stationary states and their many-body time evolution. It is a challenge to theory to identify novel instances of universal emergent macroscopic phenomena, which are tied unambiguously and in an observable way to the microscopic drive conditions. In this review, we discuss some recent results in this direction. Moreover, we provide a systematic introduction to the open system Keldysh functional integral approach, which is the proper technical tool to accomplish a merger of quantum optics and many-body physics, and brings the power of modern quantum field theory to driven open quantum systems.
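
The microscopic starting point for such driven open systems is typically a Markovian quantum master equation in Lindblad form (standard equation, quoted for reference), whose Keldysh functional-integral representation is what the review constructs:
$$ \partial_{t}\rho = -\mathrm{i}[H, \rho] + \sum_{\alpha}\left( L_{\alpha}\rho L_{\alpha}^{\dagger} - \tfrac{1}{2}\{ L_{\alpha}^{\dagger}L_{\alpha}, \rho \} \right), $$
with H the coherent Hamiltonian and the jump operators $L_{\alpha}$ describing drive and dissipation on an equal footing.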

444 citations


Journal ArticleDOI
TL;DR: In this article, the authors mainly focus on recent progress in the engineering of topologically nontrivial phases (such as topological insulators, quantum anomalous Hall effects, quantum valley Hall effects etc) in two-dimensional systems.
Abstract: Topological phases with insulating bulk and gapless surface or edge modes have attracted intensive attention because of their fundamental physics implications and potential applications in dissipationless electronics and spintronics. In this review, we mainly focus on recent progress in the engineering of topologically nontrivial phases (such as Z₂ topological insulators, quantum anomalous Hall effects, quantum valley Hall effects etc) in two-dimensional systems, including quantum wells, atomic crystal layers of elements from group III to group VII, and the transition metal compounds.

389 citations


Journal ArticleDOI
TL;DR: The linear and nonlinear evolution of the generated primordial fields through the radiation era, including viscous effects, is traced, and it is shown that after recombination primordial magnetic fields could strongly influence structure formation, especially on dwarf galaxy scales.
Abstract: The universe is magnetized on all scales probed so far. On the largest scales, galaxies and galaxy clusters host magnetic fields at the micro Gauss level coherent on scales up to ten kpc. Recent observational evidence suggests that even the intergalactic medium in voids could host a weak $\sim 10^{-16}$ Gauss magnetic field, coherent on Mpc scales. An intriguing possibility is that these observed magnetic fields are a relic from the early universe, albeit one which has been subsequently amplified and maintained by a dynamo in collapsed objects. We review here the origin, evolution and signatures of primordial magnetic fields. After a brief summary of magnetohydrodynamics in the expanding universe, we turn to magnetic field generation during inflation and phase transitions. We trace the linear and nonlinear evolution of the generated primordial fields through the radiation era, including viscous effects. Sensitive observational signatures of primordial magnetic fields on the cosmic microwave background, including current constraints from Planck, are discussed. After recombination, primordial magnetic fields could strongly influence structure formation, especially on dwarf galaxy scales. The resulting signatures on reionization, the redshifted 21 cm line, weak lensing and the Lyman-α forest are outlined. Constraints from radio and γ-ray astronomy are summarized. Astrophysical batteries and the role of dynamos in reshaping the primordial field are briefly considered. The review ends with some final thoughts on primordial magnetic fields.
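
A useful anchor for the cosmological evolution: in the ideal-MHD limit, flux freezing in the expanding universe implies the standard adiabatic scaling
$$ B(t) \propto a(t)^{-2}, \qquad \tilde{B} \equiv a^{2}B = \text{const}, $$
which is why primordial-field studies work with comoving field strengths; turbulent decay and dynamo amplification then modify this simple dilution law.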

383 citations


Journal ArticleDOI
TL;DR: The Hamiltonian formulation of lattice gauge theories is reviewed, and the recent progress in constructing the quantum simulation of Abelian and non-Abelian lattice gauge theories in 1 + 1 and 2 + 1 dimensions using ultracold atoms in optical lattices is described.
Abstract: Can high-energy physics be simulated by low-energy, non-relativistic, many-body systems such as ultracold atoms? Such ultracold atomic systems lack the type of symmetries and dynamical properties of high energy physics models: in particular, they manifest neither local gauge invariance nor Lorentz invariance, which are crucial properties of the quantum field theories which are the building blocks of the standard model of elementary particles. However, it turns out, surprisingly, that there are ways to configure an atomic system to manifest both local gauge invariance and Lorentz invariance. In particular, local gauge invariance can arise either as an effective low-energy symmetry, or as an exact symmetry, following from the conservation laws in atomic interactions. Hence, one could hope that such quantum simulators may lead to a new type of (table-top) experiments which will be used to study various QCD (quantum chromodynamics) phenomena, such as the confinement of dynamical quarks, phase transitions and other effects, which are inaccessible using the currently known computational methods. In this report, we review the Hamiltonian formulation of lattice gauge theories, and then describe our recent progress in constructing the quantum simulation of Abelian and non-Abelian lattice gauge theories in 1 + 1 and 2 + 1 dimensions using ultracold atoms in optical lattices.
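
For reference, the pure-gauge Kogut-Susskind Hamiltonian for compact U(1) on a spatial lattice reads (one common normalization; matter terms omitted):
$$ H = \frac{g^{2}}{2} \sum_{\text{links }\ell} E_{\ell}^{2} \;-\; \frac{1}{g^{2}} \sum_{\text{plaquettes }p} \cos\theta_{p}, $$
where $E_{\ell}$ is the electric field on a link and $\theta_{p}$ the sum of link phases around a plaquette; the local Gauss-law constraint (the lattice divergence of E equals the local charge) is precisely the gauge invariance an atomic simulator must engineer or inherit from its conservation laws.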

Journal ArticleDOI
Xu-Guang Huang1
TL;DR: A pedagogical review of various properties of the electromagnetic fields, the anomalous transport phenomena, and their experimental signatures in heavy-ion collisions is given.
Abstract: The hot and dense matter generated in heavy-ion collisions may contain domains which are not invariant under P and CP transformations. Moreover, heavy-ion collisions can generate extremely strong magnetic fields as well as electric fields. The interplay between the electromagnetic field and triangle anomaly leads to a number of macroscopic quantum phenomena in these P- and CP-odd domains known as anomalous transports. The purpose of this article is to give a pedagogical review of various properties of the electromagnetic fields, the anomalous transport phenomena, and their experimental signatures in heavy-ion collisions.
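
The flagship example of these anomalous transports is the chiral magnetic effect, in which a chirality imbalance, parametrized by a chiral chemical potential $\mu_5$, drives a current along the magnetic field (standard formula, quoted for reference):
$$ \mathbf{j} = \frac{e^{2}}{2\pi^{2}}\, \mu_{5}\, \mathbf{B}, $$
a dissipationless current whose coefficient is fixed by the triangle anomaly rather than by the details of the medium.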

Journal ArticleDOI
TL;DR: This review addresses the physics of shock formation, shock dynamics and particle acceleration, based on a close examination of available multi-wavelength and in situ observations and of analytical and numerical developments, and focuses on the different instabilities triggered during shock formation and in association with particle acceleration processes.
Abstract: Collisionless shocks, that is, shocks mediated by electromagnetic processes, are customary in space physics and in astrophysics. They are found in a great variety of objects and environments: magnetospheric and heliospheric shocks, supernova remnants, pulsar winds and their nebulae, active galactic nuclei, gamma-ray bursts, and shock waves in clusters of galaxies. Collisionless shock microphysics enters at different stages of shock formation, shock dynamics and particle energization and/or acceleration. It turns out that the shock phenomenon is a multi-scale non-linear problem in time and space, made more complex still by the impact of high-energy cosmic rays in astrophysical environments. This review addresses the physics of shock formation, shock dynamics and particle acceleration based on a close examination of available multi-wavelength and in situ observations, and of analytical and numerical developments. Particular emphasis is placed on the different instabilities triggered during shock formation and in association with particle acceleration processes, with regard to the properties of the background upstream medium. Among the most important parameters, the background magnetic field, through the magnetization and its obliquity, appears to be the dominant one. The shock velocity, which can reach relativistic speeds, also has a strong impact on the development of the micro-instabilities and the fate of particle acceleration. Recent developments in laboratory shock experiments have started to bring new insights into the physics of space plasma and astrophysical shock waves. A special section is dedicated to new laser plasma experiments probing shock physics.

Journal ArticleDOI
TL;DR: It is hypothesized that concepts of condensed-matter physics along with the new genomic knowledge and technologies and mechanistic mathematical modeling in conjunction with advances in experimental DNA repair and cell signaling have now provided us with unprecedented opportunities in radiation biophysics to address problems in targeted cancer therapy, and genetic risk estimation in humans.
Abstract: The purpose of this paper has been to review the current status and progress of the field of radiation biophysics, and draw attention to the fact that physics, in general, and radiation physics in particular, with the aid of mathematical modeling, can help elucidate biological mechanisms and cancer therapies. We hypothesize that concepts of condensed-matter physics, along with the new genomic knowledge and technologies and mechanistic mathematical modeling, in conjunction with advances in experimental DNA (deoxyribonucleic acid) repair and cell signaling, have now provided us with unprecedented opportunities in radiation biophysics to address problems in targeted cancer therapy and genetic risk estimation in humans. Obviously, one is not dealing with 'low-hanging fruit', but it will be a major scientific achievement if it becomes possible to state, in another decade or so, that we can link mechanistically the stages between the initial radiation-induced DNA damage, in particular at doses of radiation less than 2 Gy, and the structural changes in genomic DNA that act as a precursor to cell inactivation and/or mutations leading to genetic diseases. The paper presents recent developments in the physics of radiation track structure contained in the computer code system KURBUC, in particular for low-energy electrons in the condensed phase of water, for which we provide a comprehensive discussion of the dielectric response function approach. The state-of-the-art in the simulation of proton and carbon ion tracks in the Bragg peak region is also presented. The paper presents a critical discussion of the models used for elastic scattering, and the validity of the trajectory approach in low-energy electron transport. Brief discussions of mechanistic and quantitative aspects of microdosimetry, DNA damage and DNA repair are also included as developed by the authors' work.

Journal ArticleDOI
TL;DR: This review focuses on state-of-the-art high-performance GaN-based LED materials and devices on unconventional substrates, and speculates on the prospects for LEDs on such substrates.
Abstract: GaN and related III-nitrides have attracted considerable attention as promising materials for application in optoelectronic devices, in particular, light-emitting diodes (LEDs). At present, sapphire is still the most popular commercial substrate for epitaxial growth of GaN-based LEDs. However, due to its relatively large lattice mismatch with GaN and low thermal conductivity, sapphire is not the most ideal substrate for GaN-based LEDs. Therefore, in order to obtain high-performance and high-power LEDs with relatively low cost, unconventional substrates, which are of low lattice mismatch with GaN, high thermal conductivity and low cost, have been tried as substitutes for sapphire. As a matter of fact, it is not easy to obtain high-quality III-nitride films on those substrates for various reasons. However, through the development of a variety of techniques, distinct progress has been made during the past decade, with high-performance LEDs being successfully achieved on these unconventional substrates. This review focuses on state-of-the-art high-performance GaN-based LED materials and devices on unconventional substrates. The issues involved in the growth of GaN-based LED structures on each type of unconventional substrate are outlined, and the fundamental physics behind these issues is detailed. The corresponding solutions for III-nitride growth, defect control, and chip processing for each type of unconventional substrate are discussed in depth, together with a brief introduction to some newly developed techniques in order to realize LED structures on unconventional substrates. This is very useful for understanding the progress in this field of physics. In this review, we also speculate on the prospects for LEDs on unconventional substrates.

Journal ArticleDOI
TL;DR: Research in applied nuclear physics, including nuclear interactions, dosimetry, image guidance, range verification, novel accelerators and beam delivery technologies, can significantly improve the clinical outcome in particle therapy.
Abstract: Charged particle therapy has been largely driven and influenced by nuclear physics. The increase in energy deposition density along the ion path in the body allows a reduction of the dose to normal tissues during radiotherapy compared to photons. Clinical results of particle therapy support the physical rationale for this treatment, but the method remains controversial because of the high cost and the lack of comparative clinical trials proving the benefit compared to x-rays. Research in applied nuclear physics, including nuclear interactions, dosimetry, image guidance, range verification, novel accelerators and beam delivery technologies, can significantly improve the clinical outcome in particle therapy. Measurements of fragmentation cross-sections, including those for the production of positron-emitting fragments, and attenuation curves are needed for tuning Monte Carlo codes, whose use in clinical environments is rapidly increasing thanks to fast calculation methods. Existing cross sections and codes are indeed not very accurate in the energy and target regions of interest for particle therapy. These measurements are especially urgent for new ions to be used in therapy, such as helium. Furthermore, nuclear physics hardware developments are frequently finding applications in ion therapy due to similar requirements concerning sensors and real-time data processing. In this review we will briefly describe the physics bases, and concentrate on the open issues.
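
The physical rationale in the opening sentences is quantified by the stopping power; in its leading non-relativistic Bethe form (quoted for orientation, Gaussian units):
$$ -\left\langle \frac{dE}{dx} \right\rangle = \frac{4\pi e^{4} z^{2}\, n Z}{m_{e} v^{2}} \ln\!\frac{2 m_{e} v^{2}}{I}, $$
where z and v are the ion charge number and speed, nZ the target electron density and I the mean excitation energy; the $1/v^{2}$ growth as the ion slows produces the Bragg peak, concentrating dose near the end of range.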

Journal ArticleDOI
TL;DR: This review summarizes a variety of beam damage phenomena relating to oxides in (scanning) transmission electron microscopes, and underlines the shortcomings of currently popular mechanisms.
Abstract: This review summarizes a variety of beam damage phenomena relating to oxides in (scanning) transmission electron microscopes, and underlines the shortcomings of currently popular mechanisms. These phenomena include mass loss, valence state reduction, phase decomposition, precipitation, gas bubble formation, phase transformation, amorphization and crystallization. Moreover, beam damage is also dependent on specimen thickness, specimen orientation, beam voltage, beam current density and beam size. This article incorporates all of these damage phenomena and experimental dependences into a general description, interpreted by a unified mechanism of damage by induced electric field. The induced electric field is produced by positive charges, which are generated from excitation and ionization. The distribution of the induced electric fields inside a specimen is beam-illumination- and specimen-shape-dependent, and associated with the experimental dependence of beam damage. Broadly speaking, the mechanism operates differently in two types of material. In type I, damage increases the resistivity of the irradiated materials, and is thus divergent, resulting in phase separation. In type II, damage reduces the resistivity of the irradiated materials, and is thus convergent, resulting in phase transformation. Damage by this mechanism is dependent on electron-beam current density. The two experimental thresholds are current density and irradiation time. The mechanism comes into effect when these thresholds are exceeded, below which the conventional mechanisms of knock-on and radiolysis still dominate.

Journal ArticleDOI
TL;DR: The emergent view of liquids as a unique system with a mixed dynamical state is summarized, and several areas where interesting insights may appear are listed, continuing the extraordinary liquid story.
Abstract: Strongly interacting, dynamically disordered and with no small parameter, liquids took a theoretical status between gases and solids with the historical tradition of hydrodynamic description as the starting point. We review different approaches to liquids as well as recent experimental and theoretical work, and propose that liquids do not need classifying in terms of their proximity to gases and solids or any categorizing for that matter. Instead, they are a unique system in their own class with a notably mixed dynamical state in contrast to pure dynamical states of solids and gases. We start with explaining how the first-principles approach to liquids is an intractable, exponentially complex problem of coupled non-linear oscillators with bifurcations. This is followed by a reduction of the problem based on liquid relaxation time τ representing non-perturbative treatment of strong interactions. On the basis of τ, solid-like high-frequency modes are predicted and we review related recent experiments. We demonstrate how the propagation of these modes can be derived by generalizing either hydrodynamic or elasticity equations. We comment on the historical trend to approach liquids using hydrodynamics and compare it to an alternative solid-like approach. We subsequently discuss how collective modes evolve with temperature and how this evolution affects liquid energy and heat capacity as well as other properties such as fast sound. Here, our emphasis is on understanding experimental data in real, rather than model, liquids. Highlighting the dominant role of solid-like high-frequency modes for liquid energy and heat capacity, we review a wide range of liquids: subcritical low-viscous liquids, supercritical state with two different dynamical and thermodynamic regimes separated by the Frenkel line, highly-viscous liquids in the glass transformation range and liquid-glass transition. We subsequently discuss the fairly recent area of liquid–liquid phase transitions, the area where the solid-like properties of liquids have become further apparent. We then discuss gas-like and solid-like approaches to quantum liquids and theoretical issues that are similar to the classical case. Finally, we summarize the emergent view of liquids as a unique system with a mixed dynamical state, and list several areas where interesting insights may appear and continue the extraordinary liquid story.

Journal ArticleDOI
TL;DR: This paper identifies a number of different cases of vibrational mechanisms of NTE: some involve a small number of phonons that can be described as rotations of rigid polyhedral groups of atoms, in others large bands of phonons are involved, and in some the transverse acoustic modes provide the main contribution to NTE.
Abstract: Negative thermal expansion (NTE) is the phenomenon in which materials shrink rather than expand on heating. Although NTE had been previously observed in a few simple materials at low temperature, it was the realisation in 1996 that some materials have NTE over very wide ranges of temperature that kick-started current interest in this phenomenon. Now, nearly two decades later, a number of families of ceramic NTE materials have been identified. Increasingly quantitative studies focus on the mechanism of NTE, through techniques such as high-pressure diffraction, local structure probes, inelastic neutron scattering and atomistic simulation. In this paper we review our understanding of vibrational mechanisms of NTE for a range of materials. We identify a number of different cases, some of which involve a small number of phonons that can be described as involving rotations of rigid polyhedral groups of atoms, others where there are large bands of phonons involved, and some where the transverse acoustic modes provide the main contribution to NTE. In a few cases the elasticity of NTE materials has been studied under pressure, identifying an elastic softening under pressure. We propose that this property, called pressure-induced softening, is closely linked to NTE, which we can demonstrate using a simple model to describe NTE materials. There has also been recent interest in the role of intrinsic anharmonic interactions on NTE, particularly guided by calculations of the potential energy wells for relevant phonons. We review these effects, and show how anharmonicity affects the response of the properties of NTE materials to pressure.
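
These vibrational mechanisms are conveniently organized by the quasi-harmonic Grüneisen formula for the volumetric expansion coefficient (standard result, quoted for reference):
$$ \alpha_{V} = \frac{1}{BV} \sum_{i} \gamma_{i}\, c_{i}, \qquad \gamma_{i} = -\frac{\partial \ln \omega_{i}}{\partial \ln V}, $$
where B is the bulk modulus and $c_{i}$ the heat capacity of mode i; NTE requires the heat-capacity-weighted average of the mode Grüneisen parameters $\gamma_{i}$ to be negative, as for the rigid-unit rotations and transverse acoustic modes discussed above.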

Journal ArticleDOI
TL;DR: In this paper, the authors present a review of the most recent and representative results obtained for both spontaneous and induced fission, with the goal of emphasizing the coherence of the microscopic approaches employed, both in their general formulation and in their most common approximations.
Abstract: This article reviews how nuclear fission is described within nuclear density functional theory. A distinction should be made between spontaneous fission, where half-lives are the main observables and quantum tunnelling the essential concept, and induced fission, where the focus is on fragment properties and explicitly time-dependent approaches are often invoked. Overall, the cornerstone of the density functional theory approach to fission is the energy density functional formalism. The basic tenets of this method, including some well-known tools such as the Hartree-Fock-Bogoliubov (HFB) theory, effective two-body nuclear potentials such as the Skyrme and Gogny force, finite-temperature extensions and beyond mean-field corrections, are presented succinctly. The energy density functional approach is often combined with the hypothesis that the time-scale of the large amplitude collective motion driving the system to fission is slow compared to typical time-scales of nucleons inside the nucleus. In practice, this hypothesis of adiabaticity is implemented by introducing (a few) collective variables and mapping out the many-body Schrodinger equation into a collective Schrodinger-like equation for the nuclear wave-packet. The region of the collective space where the system transitions from one nucleus to two (or more) fragments defines what are called the scission configurations. The inertia tensor that enters the kinetic energy term of the collective Schrodinger-like equation is one of the most essential ingredients of the theory, since it includes the response of the system to small changes in the collective variables. For this reason, the two main approximations used to compute this inertia tensor, the adiabatic time-dependent HFB and the generator coordinate method, are presented in detail, both in their general formulation and in their most common approximations. The collective inertia tensor enters also the Wentzel-Kramers-Brillouin (WKB) formula used to extract spontaneous fission half-lives from multi-dimensional quantum tunnelling probabilities (For the sake of completeness, other approaches to tunnelling based on functional integrals are also briefly discussed, although there are very few applications.) It is also an important component of some of the time-dependent methods that have been used in fission studies. Concerning the latter, both the semi-classical approaches to time-dependent nuclear dynamics and more microscopic theories involving explicit quantum-many-body methods are presented. One of the hallmarks of the microscopic theory of fission is the tremendous amount of computing needed for practical applications. In particular, the successful implementation of the theories presented in this article requires a very precise numerical resolution of the HFB equations for large values of the collective variables. This aspect is often overlooked, and several sections are devoted to discussing the resolution of the HFB equations, especially in the context of very deformed nuclear shapes. In particular, the numerical precision and iterative methods employed to obtain the HFB solution are documented in detail. Finally, a selection of the most recent and representative results obtained for both spontaneous and induced fission is presented, with the goal of emphasizing the coherence of the microscopic approaches employed. Although impressive progress has been achieved over the last two decades to understand fission microscopically, much work remains to be done. 
Several possible lines of research are outlined in the conclusion.
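
For concreteness, the WKB expression referred to above evaluates the collective action along a fission path s through the barrier (schematic form; conventions vary between implementations):
$$ S(E) = \int_{s_{\rm in}}^{s_{\rm out}} \mathrm{d}s\, \sqrt{2\,\mathcal{M}(s)\,\left[V(s) - E\right]}, \qquad P = \frac{1}{1 + e^{2S}}, \qquad T_{1/2} = \frac{\ln 2}{n\,P}, $$
where $\mathcal{M}(s)$ is the collective inertia along the path, V the potential energy surface and n the number of assaults on the barrier per unit time; the half-life depends exponentially on the inertia, which is why its computation is so critical.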

Journal ArticleDOI
TL;DR: The Kepler Mission is a space observatory launched in 2009 by NASA to determine the frequency of Earth-size and larger planets in and near the habitable zone of Sun-like stars, the size and orbital distributions of these planets, and the types of stars they orbit.
Abstract: The Kepler Mission is a space observatory launched in 2009 by NASA to monitor 170,000 stars over a period of four years to determine the frequency of Earth-size and larger planets in and near the habitable zone of Sun-like stars, the size and orbital distributions of these planets, and the types of stars they orbit. Kepler is the tenth in the series of NASA Discovery Program missions that are competitively-selected, PI-directed, medium-cost missions. The Mission concept and various instrument prototypes were developed at the Ames Research Center over a period of 18 years starting in 1983. The development of techniques to do the 10 ppm photometry required for Mission success took years of experimentation, several workshops, and the exploration of many 'blind alleys' before the construction of the flight instrument. Beginning in 1992 at the start of the NASA Discovery Program, the Kepler Mission concept was proposed five times before its acceptance for mission development in 2001. During that period, the concept evolved from a photometer in an L2 orbit that monitored 6000 stars in a 50 sq deg field-of-view (FOV) to one that was in a heliocentric orbit that simultaneously monitored 170,000 stars with a 105 sq deg FOV. Analysis of the data to date has detected over 4600 planetary candidates which include several hundred Earth-size planetary candidates, over a thousand confirmed planets, and Earth-size planets in the habitable zone (HZ). These discoveries provide the information required for estimates of the frequency of planets in our galaxy. The Mission results show that most stars have planets, many of these planets are similar in size to the Earth, and that systems with several planets are common. Although planets in the HZ are common, many are substantially larger than Earth.
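
The 10 ppm photometry requirement follows from simple transit arithmetic (a back-of-the-envelope check, not from the paper): the fractional dimming is the area ratio
$$ \delta = \left( \frac{R_{p}}{R_{\star}} \right)^{2} = \left( \frac{6371\ \mathrm{km}}{6.96 \times 10^{5}\ \mathrm{km}} \right)^{2} \approx 8.4 \times 10^{-5} \approx 84\ \mathrm{ppm}, $$
so detecting an Earth-size transit of a Sun-like star at useful signal-to-noise demands photometric stability near the 10 ppm level over transit timescales of hours.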

Journal ArticleDOI
TL;DR: The key challenges in realizing optimum pixel dimensions in FPA design, including dark current, pixel hybridization, pixel delineation, and unit cell readout capacity, are outlined with a view to achieving an adequate modulation transfer function at the ultra-small pitches involved.
Abstract: In the last two decades, several new concepts for improving the performance of infrared detectors have been proposed. These new concepts particularly address the drive towards the so-called high operating temperature focal plane arrays (FPAs), aiming to increase detector operating temperatures, and as a consequence reduce the cost of infrared systems. In imaging systems with above-megapixel formats, pixel dimension plays a crucial role in determining critical system attributes such as system size, weight and power consumption (SWaP). The advent of smaller pixels has also resulted in the superior spatial and temperature resolution of these systems. Optimum pixel dimensions are limited by diffraction effects from the aperture, and are in turn wavelength-dependent. In this paper, the key challenges in realizing optimum pixel dimensions in FPA design, including dark current, pixel hybridization, pixel delineation, and unit cell readout capacity, are outlined with a view to achieving an adequate modulation transfer function at the ultra-small pitches involved. Both photon and thermal detectors have been considered. Concerning infrared photon detectors, the trade-offs between two competing technologies, HgCdTe material systems and III-V materials (mainly barrier detectors), have been investigated.
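
The diffraction argument can be summarized by the Nyquist-matched pitch (a commonly quoted design rule; the exact criterion adopted varies between analyses):
$$ d \approx \frac{F_{\#}\,\lambda}{2}, \qquad \text{e.g. } \lambda = 10\ \mu\mathrm{m},\ F/1 \;\Rightarrow\; d \approx 5\ \mu\mathrm{m}, $$
which is why LWIR pixels only a few micrometres across already approach the optical limit, while MWIR and SWIR systems can benefit from still smaller pitches.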

Journal ArticleDOI
TL;DR: An emerging new paradigm of critical current by design is discussed: a drive to achieve a quantitative correlation between the observed critical current density and mesoscale mixed pinning landscapes by using realistic input parameters in an innovative and powerful large-scale time dependent Ginzburg-Landau approach to simulating vortex dynamics.
Abstract: The behavior of vortex matter in high-temperature superconductors (HTS) controls the entire electromagnetic response of the material, including its current carrying capacity. Here, we review the basic concepts of vortex pinning and its application to a complex mixed pinning landscape to enhance the critical current and to reduce its anisotropy. We focus on recent scientific advances that have resulted in large enhancements of the in-field critical current in state-of-the-art second generation (2G) YBCO coated conductors and on the prospect of an isotropic, high-critical current superconductor in the iron-based superconductors. Lastly, we discuss an emerging new paradigm of critical current by design: a drive to achieve a quantitative correlation between the observed critical current density and mesoscale mixed pinning landscapes by using realistic input parameters in an innovative and powerful large-scale time dependent Ginzburg-Landau approach to simulating vortex dynamics.

Journal ArticleDOI
TL;DR: It is demonstrated that these basic ingredients lead to exotic phenomena, such as charge effects in Mott insulators, the stabilization of single magnetic vortices, as well as vortex and skyrmion crystals, and the emergence of different types of chiral liquids.
Abstract: The term frustration refers to lattice systems whose ground state cannot simultaneously satisfy all the interactions. Frustration is an important property of correlated electron systems, which stems from the sign of loop products (similar to Wilson products) of interactions on a lattice. It was early recognized that geometric frustration can produce rather exotic physical behaviors, such as macroscopic ground state degeneracy and helimagnetism. The interest in frustrated systems was renewed two decades later in the context of spin glasses and the emergence of magnetic superstructures. In particular, Phil Anderson's proposal of a quantum spin liquid ground state for a two-dimensional lattice S = 1/2 Heisenberg magnet generated a very active line of research that still continues. As a result of these early discoveries and conjectures, the study of frustrated models and materials exploded over the last two decades. Besides the large efforts triggered by the search of quantum spin liquids, it was also recognized that frustration plays a crucial role in a vast spectrum of physical phenomena arising from correlated electron materials. Here we review some of these phenomena with particular emphasis on the stabilization of chiral liquids and non-coplanar magnetic orderings. In particular, we focus on the ubiquitous interplay between magnetic and charge degrees of freedom in frustrated correlated electron systems and on the role of anisotropy. We demonstrate that these basic ingredients lead to exotic phenomena, such as charge effects in Mott insulators, the stabilization of single magnetic vortices, as well as vortex and skyrmion crystals, and the emergence of different types of chiral liquids. In particular, these orderings appear more naturally in itinerant magnets with the potential of inducing a very large anomalous Hall effect.

Journal ArticleDOI
TL;DR: The relevant concepts of geodetic theory, data analysis, and physical modeling for a myriad of processes at multiple spatial and temporal scales are reviewed, including the extensive global infrastructure that has been built to support GPS geodesy consisting of thousands of continuously operating stations.
Abstract: Geodesy, the oldest science, has become an important discipline in the geosciences, in large part by enhancing Global Positioning System (GPS) capabilities over the last 35 years well beyond the satellite constellation's original design. The ability of GPS geodesy to estimate 3D positions with millimeter-level precision with respect to a global terrestrial reference frame has contributed to significant advances in geophysics, seismology, atmospheric science, hydrology, and natural hazard science. Monitoring the changes in the positions or trajectories of GPS instruments on the Earth's land and water surfaces, in the atmosphere, or in space, is important for both theory and applications, from an improved understanding of tectonic and magmatic processes to developing systems for mitigating the impact of natural hazards on society and the environment. Besides accurate positioning, all disturbances in the propagation of the transmitted GPS radio signals from satellite to receiver are mined for information, from troposphere and ionosphere delays for weather, climate, and natural hazard applications, to disturbances in the signals due to multipath reflections from the solid ground, water, and ice for environmental applications. We review the relevant concepts of geodetic theory, data analysis, and physical modeling for a myriad of processes at multiple spatial and temporal scales, and discuss the extensive global infrastructure that has been built to support GPS geodesy consisting of thousands of continuously operating stations. We also discuss the integration of heterogeneous and complementary data sets from geodesy, seismology, and geology, focusing on crustal deformation applications and early warning systems for natural hazards.
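
The data mining described above starts from the carrier-phase observation equation (standard GNSS form in units of length, shown for orientation):
$$ \lambda\phi = \rho + c\,(\delta t_{r} - \delta t^{s}) + T - I + \lambda N + \epsilon, $$
where ρ is the geometric range, $\delta t_{r}$ and $\delta t^{s}$ the receiver and satellite clock errors, T and I the troposphere and ionosphere delays, N the integer carrier ambiguity and ε noise plus multipath; positioning solves for ρ, while the 'nuisance' terms T, I and the multipath content become the signals for weather, space weather and environmental sensing.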

Journal ArticleDOI
TL;DR: The results obtained so far from experimental and theoretical studies of silicene have shown sufficiently promising features to open a new direction for the silicon industry and for silicon-based nanostructures in spintronics and opto-electronic devices.
Abstract: Inspired by the success of graphene, various two dimensional (2D) structures in free standing (FS) (hypothetical) form and on different substrates have been proposed recently. Silicene, a silicon counterpart of graphene, is predicted to possess massless Dirac fermions and to exhibit an experimentally accessible quantum spin Hall effect. Since the effective spin-orbit interaction is quite significant compared to graphene, buckling in silicene opens a gap of 1.55 meV at the Dirac point. This band gap can be further tailored by applying in-plane stress, an external electric field, chemical functionalization and defects. In this topical theoretical review, we would like to explore the electronic, magnetic and optical properties, including Raman spectroscopy, of various important derivatives of monolayer and bilayer silicene (BLS) with different adatoms (doping). The magnetic properties can be tailored by chemical functionalization, such as hydrogenation and introducing vacancy into the pristine planar silicene. Apart from some universal features of optical absorption present in all these 2D materials, the study of reflectivity modulation with doping (Al and P) concentration in silicene has indicated the emergence of some strong peaks having the robust characteristic of a doped reflective surface for both polarizations of the electromagnetic (EM) field. Besides this, attempts will be made to understand the electronic properties of silicene from some simple tight-binding Hamiltonian. We also point out the importance of shape dependence and optical anisotropy properties in silicene nanodisks and establish that a zigzag trigonal nanodisk possesses the maximum magnetic moment. We also suggest future directions to be explored to make the synthesis of silicene and its various derivatives viable for verification of theoretical predictions. Although this is a fairly new route, the results obtained so far from experimental and theoretical studies of silicene have shown sufficiently promising features to open a new direction for the silicon industry and for silicon-based nanostructures in spintronics and in opto-electronic devices.
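
The 'simple tight-binding Hamiltonian' alluded to above is usually a Kane-Mele-type model adapted to the buckled silicene lattice; a sketch of one standard form (Rashba terms omitted; conventions and parameter values vary between papers):
$$ H = -t \sum_{\langle ij\rangle\alpha} c^{\dagger}_{i\alpha} c_{j\alpha} \;+\; \mathrm{i}\,\frac{\lambda_{SO}}{3\sqrt{3}} \sum_{\langle\langle ij\rangle\rangle\alpha\beta} \nu_{ij}\, c^{\dagger}_{i\alpha} \sigma^{z}_{\alpha\beta} c_{j\beta} \;+\; \ell \sum_{i\alpha} \mu_{i} E_{z}\, c^{\dagger}_{i\alpha} c_{i\alpha}, $$
with nearest-neighbour hopping t, an intrinsic spin-orbit term ($\nu_{ij} = \pm 1$ depending on the orientation of the next-nearest-neighbour hop) that opens the gap at the Dirac points, and a staggered sublattice potential ($\mu_{i} = \pm 1$, buckling height $\ell$) through which a perpendicular electric field $E_{z}$ tunes that gap.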

Journal ArticleDOI
TL;DR: In this article, the authors describe the features of μ-τ permutation and reflection symmetries, and explore their various consequences on model building and neutrino phenomenology, paying particular attention to soft symmetry breaking, which is crucial for our deeper understanding of the fine effects of flavor mixing and CP violation.
Abstract: Behind the observed pattern of lepton flavor mixing is a partial or approximate μ-τ flavor symmetry-a milestone on our road to the true origin of neutrino masses and flavor structures. In this review article we first describe the features of μ-τ permutation and reflection symmetries, and then explore their various consequences on model building and neutrino phenomenology. We pay particular attention to soft μ-τ symmetry breaking, which is crucial for our deeper understanding of the fine effects of flavor mixing and CP violation.
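
For reference, the two symmetries constrain the effective Majorana neutrino mass matrix $M_\nu$ in the flavor basis as follows (standard statements, quoted in one common sign convention):
$$ \text{permutation:}\quad (M_\nu)_{e\mu} = (M_\nu)_{e\tau},\quad (M_\nu)_{\mu\mu} = (M_\nu)_{\tau\tau} \;\Rightarrow\; \theta_{23} = 45^{\circ},\; \theta_{13} = 0; $$
$$ \text{reflection:}\quad (M_\nu)_{e\mu} = (M_\nu)_{e\tau}^{*},\quad (M_\nu)_{\mu\mu} = (M_\nu)_{\tau\tau}^{*},\quad (M_\nu)_{ee},\,(M_\nu)_{\mu\tau} \in \mathbb{R} \;\Rightarrow\; \theta_{23} = 45^{\circ},\; \delta = \pm 90^{\circ}. $$
The observed nonzero $\theta_{13}$ rules out the exact permutation symmetry, which is why soft breaking, and the reflection variant compatible with $\theta_{13} \neq 0$, receive particular attention.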

Journal ArticleDOI
TL;DR: This review argues for the creation of a physics of moving systems, a 'locomotion robophysics', which is defined as the pursuit of principles of self-generated motion, and discusses how such robophysical studies have begun to aid engineers in the creation of devices that have begun to achieve life-like locomotor abilities on and within complex environments.
Abstract: Discovery of fundamental principles which govern and limit effective locomotion (self-propulsion) is of intellectual interest and practical importance. Human technology has created robotic moving systems that excel in movement on and within environments of societal interest: paved roads, open air and water. However, such devices cannot yet robustly and efficiently navigate (as animals do) the enormous diversity of natural environments which might be of future interest for autonomous robots; examples include vertical surfaces like trees and cliffs, heterogeneous ground like desert rubble and brush, turbulent flows found near seashores, and deformable/flowable substrates like sand, mud and soil. In this review we argue for the creation of a physics of moving systems, a 'locomotion robophysics', which we define as the pursuit of principles of self-generated motion. Robophysics can provide an important intellectual complement to the discipline of robotics, largely the domain of researchers from engineering and computer science. The essential idea is that we must complement the study of complex robots in complex situations with systematic study of simplified robotic devices in controlled laboratory settings and in simplified theoretical models. We must thus use the methods of physics to examine both locomotor successes and failures using parameter space exploration, systematic control, and techniques from dynamical systems. Using examples from our and others' research, we will discuss how such robophysical studies have begun to aid engineers in the creation of devices that have begun to achieve life-like locomotor abilities on and within complex environments, have inspired interesting physics questions in low dimensional dynamical systems, geometric mechanics and soft matter physics, and have been useful to develop models for biological locomotion in complex terrain. The rapidly decreasing cost of constructing robot models with easy access to significant computational power bodes well for scientists and engineers to engage in a discipline which can readily integrate experiment, theory and computation.

Journal ArticleDOI
TL;DR: The requirements for a simple modern particle tracking microrheology experiment are introduced, along with the associated error analysis methods and the mathematical techniques required to calculate the linear viscoelasticity.
Abstract: New developments in the microrheology of complex fluids are considered. Firstly the requirements for a simple modern particle tracking microrheology experiment are introduced, the error analysis methods associated with it and the mathematical techniques required to calculate the linear viscoelasticity. Progress in microrheology instrumentation is then described with respect to detectors, light sources, colloidal probes, magnetic tweezers, optical tweezers, diffusing wave spectroscopy, optical coherence tomography, fluorescence correlation spectroscopy, elastic- and quasi-elastic scattering techniques, 3D tracking, single molecule methods, modern microscopy methods and microfluidics. New theoretical techniques are also reviewed such as Bayesian analysis, oversampling, inversion techniques, alternative statistical tools for tracks (angular correlations, first passage probabilities, the kurtosis, motor protein step segmentation etc), issues in micro/macro rheological agreement and two particle methodologies. Applications where microrheology has begun to make some impact are also considered including semi-flexible polymers, gels, microorganism biofilms, intracellular methods, high frequency viscoelasticity, comb polymers, active motile fluids, blood clots, colloids, granular materials, polymers, liquid crystals and foods. Two large emergent areas of microrheology, non-linear microrheology and surface microrheology are also discussed.
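
As an illustration of the first step in the pipeline described above, here is a minimal sketch (synthetic data and made-up bead parameters, not the authors' code) that extracts a mean-squared displacement from 2D particle tracks and converts the fitted diffusion coefficient into a viscosity via the Stokes-Einstein relation:

import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def msd(track, max_lag):
    """Time-averaged MSD of one 2D track of shape (n_frames, 2), in metres."""
    return np.array([np.mean(np.sum((track[lag:] - track[:-lag])**2, axis=1))
                     for lag in range(1, max_lag + 1)])

def viscosity_from_tracks(tracks, dt, radius, temperature, max_lag=20):
    """Stokes-Einstein viscosity estimate from freely diffusing tracer beads.

    For pure 2D Brownian motion MSD(t) = 4 D t, so D comes from a linear
    fit through the origin; then eta = kT / (6 pi a D).
    """
    curve = np.mean([msd(tr, max_lag) for tr in tracks], axis=0)
    lags = dt * np.arange(1, max_lag + 1)
    D = np.sum(lags * curve) / (4.0 * np.sum(lags**2))  # least squares, zero intercept
    return KB * temperature / (6.0 * np.pi * radius * D)

# Synthetic check: 0.5 um beads in water (eta ~ 1e-3 Pa s) at 293 K.
rng = np.random.default_rng(0)
eta_true, a, T, dt = 1e-3, 0.5e-6, 293.0, 0.01
D_true = KB * T / (6 * np.pi * eta_true * a)
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(50, 1000, 2))
tracks = np.cumsum(steps, axis=1)
print(viscosity_from_tracks(tracks, dt, a, T))  # should come out close to 1e-3

Real experiments add the static/dynamic error corrections and the Fourier-domain inversion to G*(ω) discussed in the review; this sketch covers only the purely viscous limit.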

Journal ArticleDOI
TL;DR: This paper reviews experiments documenting multiple sources of a Nernst signal, which, according to the Bridgman relation, measures the flow of transverse entropy caused by a longitudinal particle flow; the magnitude of this signal is linked to the quantum of thermoelectric conductance and a number of material-dependent length scales.
Abstract: The Nernst effect is the transverse electric field produced by a longitudinal thermal gradient in the presence of a magnetic field. At the beginning of this century, Nernst experiments on cuprates were analyzed assuming that: (i) the contribution of quasi-particles to the Nernst signal is negligible; and (ii) Gaussian superconducting fluctuations cannot produce a Nernst signal well above the critical temperature. Both these assumptions were contradicted by subsequent experiments. This paper reviews experiments documenting multiple sources of a Nernst signal, which, according to the Bridgman relation, measures the flow of transverse entropy caused by a longitudinal particle flow. Along the lines of Landauer's approach to transport phenomena, the magnitude of the transverse magneto-thermoelectric response is linked to the quantum of thermoelectric conductance and a number of material-dependent length scales: the mean free path, the Fermi wavelength, the de Broglie thermal wavelength and the superconducting coherence length. Extremely mobile quasi-particles in dilute metals generate a widely-documented Nernst signal. Fluctuating Cooper pairs in the normal state of superconductors have been found to produce a detectable Nernst signal with an amplitude conforming to the Gaussian theory, first conceived by Ussishkin, Sondhi and Huse. In addition to these microscopic sources, mobile Abrikosov vortices, mesoscopic objects simultaneously carrying entropy and magnetic flux, can produce a sizeable Nernst response. Finally, in metals subject to a magnetic field strong enough to truncate the Fermi surface to a few Landau tubes, each exiting tube generates a peak in the Nernst response. The survey of these well-established sources of the Nernst signal is a helpful guide to identify the origin of the Nernst signal in other controversial cases.
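
For reference, the quantities discussed here are (standard definitions, quoted for convenience):
$$ N = \left.\frac{E_{y}}{-\partial_{x}T}\right|_{B}, \qquad \varepsilon\,\kappa = N\,T \quad (\text{Bridgman relation}), $$
where N is the Nernst signal, ε the Ettingshausen coefficient and κ the thermal conductivity; the Bridgman relation is what ties the measured Nernst signal to a transverse entropy flow accompanying a longitudinal particle flow.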