scispace - formally typeset
Search or ask a question

Showing papers in "Philosophical Transactions of the Royal Society A in 2013"


Journal ArticleDOI
TL;DR: This paper discusses how domain knowledge influences design of the Gaussian process models and provides case examples to highlight the approaches.
Abstract: In this paper, we offer a gentle introduction to Gaussian processes for time-series data analysis. The conceptual framework of Bayesian modelling for time-series data is discussed and the foundations of Bayesian non-parametric modelling presented for Gaussian processes . We discuss how domain knowledge influences design of the Gaussian process models and provide case examples to highlight the approaches.

502 citations


Journal ArticleDOI
TL;DR: Recent work to image and control nanostructure in polymer-based solar cells is reviewed, and very recent progress is described using the unique properties of organic semiconductors to develop strategies that may allow the Shockley–Queisser limit to be broken in a simple photovoltaic cell.
Abstract: This article reviews the motivations for developing polymer-based photovoltaics and describes some of the material systems used. Current challenges are identified, and some recent developments in the field are outlined. In particular, recent work to image and control nanostructure in polymer-based solar cells is reviewed, and very recent progress is described using the unique properties of organic semiconductors to develop strategies that may allow the Shockley–Queisser limit to be broken in a simple photovoltaic cell.

484 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide insights into climate sensitivity to external forcings and sea-level sensitivity to climate change using Cenozoic temperature, sea level and CO2 covariations.
Abstract: Cenozoic temperature, sea level and CO2 covariations provide insights into climate sensitivity to external forcings and sea-level sensitivity to climate change. Climate sensitivity depends on the i...

392 citations


Journal ArticleDOI
TL;DR: A pCO2 record spanning the past 40 million years from a single marine locality, Ocean Drilling Program Site 925 located in the western equatorial Atlantic Ocean, shows that in the Neogene with low CO2 levels, algal carbon concentrating mechanisms and spontaneous biocarbonate–CO2 conversions are likely to play a more important role inAlgal carbon fixation, which provides a potential bias to the alkenone–pCO2 method.
Abstract: The alkenonepCO2 methodology has been used to reconstruct the partial pressure of ancient atmospheric carbon dioxide (pCO2) for the past 45 million years of Earth's history (Middle Eocene to Pleist...

389 citations


Journal ArticleDOI
TL;DR: An overview of some recent developments in the theory of independent component analysis is provided, including analysis of causal relations, testing independent components, analysing multiple datasets (three-way data), modelling dependencies between the components and improved methods for estimating the basic model.
Abstract: Independent component analysis is a probabilistic method for learning a linear transform of a random vector. The goal is to find components that are maximally independent and non-Gaussian (non-normal). Its fundamental difference to classical multi-variate statistical methods is in the assumption of non-Gaussianity, which enables the identification of original, underlying components, in contrast to classical methods. The basic theory of independent component analysis was mainly developed in the 1990s and summarized, for example, in our monograph in 2001. Here, we provide an overview of some recent developments in the theory since the year 2000. The main topics are: analysis of causal relations, testing independent components, analysing multiple datasets (three-way data), modelling dependencies between the components and improved methods for estimating the basic model.

311 citations


Journal ArticleDOI
TL;DR: The paper analyses the origin and appearance of blue as well as green water scarcity on different scales and with particular focus on risks to food production and water supply for municipalities and industry, and the importance of a paradigm shift in the further conceptual development of water security is stressed.
Abstract: As water is an essential component of the planetary life support system, water deficiency constitutes an insecurity that has to be overcome in the process of socio-economic development. The paper analyses the origin and appearance of blue as well as green water scarcity on different scales and with particular focus on risks to food production and water supply for municipalities and industry. It analyses water scarcity originating from both climatic phenomena and water partitioning disturbances on different scales: crop field, country level and the global circulation system. The implications by 2050 of water scarcity in terms of potential country-level water deficits for food self-reliance are analysed, and the compensating dependence on trade in virtual water for almost half the world population is noted. Planetary-scale conditions for sustainability of the global water circulation system are discussed in terms of a recently proposed Planetary Freshwater Boundary, and the consumptive water use reserve left to be shared between water requirements for global food production, fuelwood production and carbon sequestration is discussed. Finally, the importance of a paradigm shift in the further conceptual development of water security is stressed, so that adequate attention is paid to water's fundamental role in both natural and socio-economic systems.

224 citations


Journal ArticleDOI
TL;DR: It is shown how the development of the social Web has already helped trigger a ‘second wave’ of DEG2 changes, opening up an extensive agenda for future redesign of state organization and interventions.
Abstract: Widespread use of the Internet and the Web has transformed the public management ‘quasi-paradigm’ in advanced industrial countries. The toolkit for public management reform has shifted away from a ‘new public management’ (NPM) approach stressing fragmentation, competition and incentivization and towards a ‘digital-era governance’ (DEG) one, focusing on reintegrating services, providing holistic services for citizens and implementing thoroughgoing digital changes in administration. We review the current status of NPM and DEG approaches, showing how the development of the social Web has already helped trigger a ‘second wave’ of DEG 2 changes. Web science and organizational studies are converging swiftly in public management and public services, opening up an extensive agenda for future redesign of state organization and interventions. So far, DEG changes have survived austerity pressures well, whereas key NPM elements have been rolled back.

221 citations


Journal ArticleDOI
TL;DR: The frequency resolution and sensitivity have each been increased by an order of m agnitude, and this makes it possible to detect and resolve meteorological sources that have previously been out of reach.
Abstract: Almost fifteen years ago in the pages of this Journal, one of us presented power spectra of ocean waves and swell off Pendeen and Perranporth in north Cornwall (Barber & Ursell 1948). The outstanding feature of these spectra is the successive shift of peaks toward higher frequencies. This is the expected behaviour of dispersive wave trains from rather well-defined sources. Storms generate a broad spectrum of frequencies; the low frequencies are associated with the largest group velocity and accordingly are the first to arrive at distant stations. The time rate of increase in the frequency of peaks determines the distance and time of origin. In this way Barber & Ursell were able to identify the dispersive arrivals with a low pressure area in the North Atlantic, a tropical storm off Florida, and a storm off Cape Horn, at distances of 1200, 2800, and 6000 miles, respectively, from the Cornish stations. The measurements were consistent with the simple classical result that each frequency,/, is propagated with its appropriate group velocity, V = g/(47[/*). The present study is in a sense a refinement to the work of Barber & Ursell. The frequency resolution and sensitivity have each been increased by an order of m agnitude, and this makes it possible to detect and resolve meteorological sources that have previously been out of reach. The antipodal swell from the Indian Ocean is a case in point

217 citations


Journal ArticleDOI
TL;DR: This paper presents the results of modelling the heat transfer process in heterogeneous media with the assumption that part of the heat flux is dispersed in the air around the beam, and obtains theheat transfer equation in a new form.
Abstract: This paper presents the results of modelling the heat transfer process in heterogeneous media with the assumption that part of the heat flux is dispersed in the air around the beam. The heat transfer process in a solid material (beam) can be described by an integer order partial differential equation. However, in heterogeneous media, it can be described by a sub- or hyperdiffusion equation which results in a fractional order partial differential equation. Taking into consideration that part of the heat flux is dispersed into the neighbouring environment we additionally modify the main relation between heat flux and the temperature, and we obtain in this case the heat transfer equation in a new form. This leads to the transfer function that describes the dependency between the heat flux at the beginning of the beam and the temperature at a given distance. This article also presents the experimental results of modelling real plant in the frequency domain based on the obtained transfer function.

190 citations


Journal ArticleDOI
TL;DR: The initial work in performing large-eddy simulations of tidal turbine array flows found that staggering consecutive rows of turbines in the simulated configurations allows the greatest efficiency using the least downstream row spacing.
Abstract: This paper presents our initial work in performing large-eddy simulations of tidal turbine array flows. First, a horizontally periodic precursor simulation is performed to create turbulent flow data. Then those data are used as inflow into a tidal turbine array two rows deep and infinitely wide. The turbines are modelled using rotating actuator lines, and the finite-volume method is used to solve the governing equations. In studying the wakes created by the turbines, we observed that the vertical shear of the inflow combined with wake rotation causes lateral wake asymmetry. Also, various turbine configurations are simulated, and the total power production relative to isolated turbines is examined. We found that staggering consecutive rows of turbines in the simulated configurations allows the greatest efficiency using the least downstream row spacing. Counter-rotating consecutive downstream turbines in a non-staggered array shows a small benefit. This work has identified areas for improvement. For example, using a larger precursor domain would better capture elongated turbulent structures, and including salinity and temperature equations would account for density stratification and its effect on turbulence. Additionally, the wall shear stress modelling could be improved, and more array configurations could be examined.

185 citations


Journal ArticleDOI
TL;DR: By using fixed-point methods, the existence and uniqueness of a solution for the nonlinear fractional differential equation boundary-value problem Dαu(t)=f(t, u(t)) with a Riemann–Liouville fractional derivative via the different boundary- value problems u(0)=u(T), and the three-point boundary condition u( 0)=β1u(η) and u(T)=β2u( η).
Abstract: In this paper, by using fixed-point methods, we study the existence and uniqueness of a solution for the nonlinear fractional differential equation boundary-value problem D(α)u(t)=f(t,u(t)) with a Riemann-Liouville fractional derivative via the different boundary-value problems u(0)=u(T), and the three-point boundary condition u(0)=β(1)u(η) and u(T)=β(2)u(η), where T>0, t∈I=[0,T], 0<α<1, 0<η

Journal ArticleDOI
TL;DR: This review will discuss recent progress in the CZTSSe field, especially focusing on a direct comparison with analogous higher performing CIGSSe to probe the performance bottlenecks in Earth-abundant kesterite devices.
Abstract: While cadmium telluride and copper-indium-gallium-sulfide-selenide (CIGSSe) solar cells have either already surpassed (for CdTe) or reached (for CIGSSe) the 1 GW yr⁻¹ production level, highlighting the promise of these rapidly growing thin-film technologies, reliance on the heavy metal cadmium and scarce elements indium and tellurium has prompted concern about scalability towards the terawatt level. Despite recent advances in structurally related copper-zinc-tin-sulfide-selenide (CZTSSe) absorbers, in which indium from CIGSSe is replaced with more plentiful and lower cost zinc and tin, there is still a sizeable performance gap between the kesterite CZTSSe and the more mature CdTe and CIGSSe technologies. This review will discuss recent progress in the CZTSSe field, especially focusing on a direct comparison with analogous higher performing CIGSSe to probe the performance bottlenecks in Earth-abundant kesterite devices. Key limitations in the current generation of CZTSSe devices include a shortfall in open circuit voltage relative to the absorber band gap and secondarily a high series resistance, which contributes to a lower device fill factor. Understanding and addressing these performance issues should yield closer performance parity between CZTSSe and CdTe/CIGSSe absorbers and hopefully facilitate a successful launch of commercialization for the kesterite-based technology.

Journal ArticleDOI
TL;DR: This work proposes an encompassing definition rooted in risk science: water security is a tolerable level of water-related risk to society, and argues that water security policy questions need to be framed so that science can marshal interdisciplinary data and evidence to identify solutions.
Abstract: Water-related risks threaten society at the local, national and global scales in our inter-connected and rapidly changing world. Most of the world's poor are deeply water insecure and face intolera...

Journal ArticleDOI
TL;DR: A strategy to reduce demand by providing material services with less material (called ‘material efficiency’) is outlined as an approach to solving the energy intensity of material production dilemma.
Abstract: In this paper, we review the energy requirements to make materials on a global scale by focusing on the five construction materials that dominate energy used in material production: steel, cement, paper, plastics and aluminium. We then estimate the possibility of reducing absolute material production energy by half, while doubling production from the present to 2050. The goal therefore is a 75 per cent reduction in energy intensity. Four technology-based strategies are investigated, regardless of cost: (i) widespread application of best available technology (BAT), (ii) BAT to cutting-edge technologies, (iii) aggressive recycling and finally, and (iv) significant improvements in recycling technologies. Taken together, these aggressive strategies could produce impressive gains, of the order of a 50-56 per cent reduction in energy intensity, but this is still short of our goal of a 75 per cent reduction. Ultimately, we face fundamental thermodynamic as well as practical constraints on our ability to improve the energy intensity of material production. A strategy to reduce demand by providing material services with less material (called 'material efficiency') is outlined as an approach to solving this dilemma.

Journal ArticleDOI
TL;DR: This paper argues that a simple and convincing lever could accelerate theshift to a circular economy, and that this lever is the shift to a tax system based on the principles of sustainability: not taxing renewable resources including human labour—work—but taxing non-renewable resources instead is a powerful lever.
Abstract: The present economy is not sustainable with regard to its per capita material consumption. A dematerialization of the economy of industrialized countries can be achieved by a change in course, from an industrial economy built on throughput to a circular economy built on stock optimization, decoupling wealth and welfare from resource consumption while creating more work. The business models of a circular economy have been known since the mid-1970s and are now applied in a number of industrial sectors. This paper argues that a simple and convincing lever could accelerate the shift to a circular economy, and that this lever is the shift to a tax system based on the principles of sustainability: not taxing renewable resources including human labour--work--but taxing non-renewable resources instead is a powerful lever. Taxing materials and energies will promote low-carbon and low-resource solutions and a move towards a 'circular' regional economy as opposed to the 'linear' global economy requiring fuel-based transport for goods throughput. In addition to substantial improvements in material and energy efficiency, regional job creation and national greenhouse gas emission reductions, such a change will foster all activities based on 'caring', such as maintaining cultural heritage and natural wealth, health services, knowledge and know-how.

Journal ArticleDOI
TL;DR: Overall, the models simulate little global annual surface temperature change, while the proxy reconstructions suggest a global annual warming at LIG (as compared to the PI Holocene) of approximately 1°C, though with possible spatial sampling biases.
Abstract: A Community Climate System Model, Version 3 (CCSM3) simulation for 125 ka during the Last Interglacial (LIG) is compared to two recent proxy reconstructions to evaluate surface temperature changes from modern times. The dominant forcing change from modern, the orbital forcing, modified the incoming solar insolation at the top of the atmosphere, resulting in large positive anomalies in boreal summer. Greenhouse gas concentrations are similar to those of the pre-industrial (PI) Holocene. CCSM3 simulates an enhanced seasonal cycle over the Northern Hemisphere continents with warming most developed during boreal summer. In addition, year-round warming over the North Atlantic is associated with a seasonal memory of sea ice retreat in CCSM3, which extends the effects of positive summer insolation anomalies on the high-latitude oceans to winter months. The simulated Arctic terrestrial annual warming, though, is much less than the observational evidence, suggesting either missing feedbacks in the simulation and/or interpretation of the proxies. Over Antarctica, CCSM3 cannot reproduce the large LIG warming recorded by the Antarctic ice cores, even with simulations designed to consider observed evidence of early LIG warmth in Southern Ocean and Antarctica records and the possible disintegration of the West Antarctic Ice Sheet. Comparisons with a HadCM3 simulation indicate that sea ice is important for understanding model polar responses. Overall, the models simulate little global annual surface temperature change, while the proxy reconstructions suggest a global annual warming at LIG (as compared to the PI Holocene) of approximately 1(°)C, though with possible spatial sampling biases. The CCSM3 SRES B1 (low scenario) future projections suggest high-latitude warmth similar to that reconstructed for the LIG may be exceeded before the end of this century.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the influence of bounding surfaces on the structure of the wake of a single turbine and found that the rate of recovery of wake velocity is dependent on mixing between the wake and the surrounding flow.
Abstract: It is well known that a wake will develop downstream of a tidal stream turbine owing to extraction of axial momentum across the rotor plane. To select a suitable layout for an array of horizontal axis tidal stream turbines, it is important to understand the extent and structure of the wakes of each turbine. Studies of wind turbines and isolated tidal stream turbines have shown that the velocity reduction in the wake of a single device is a function of the rotor operating state (specifically thrust), and that the rate of recovery of wake velocity is dependent on mixing between the wake and the surrounding flow. For an unbounded flow, the velocity of the surrounding flow is similar to that of the incident flow. However, the velocity of the surrounding flow will be increased by the presence of bounding surfaces formed by the bed and free surface, and by the wake of adjacent devices. This paper presents the results of an experimental study investigating the influence of such bounding surfaces on the structure of the wake of tidal stream turbines.

Journal ArticleDOI
TL;DR: This short review highlights some of the exciting new experimental and theoretical developments in the field of photoactivatable metal complexes and their applications in biotechnology and medicine, many of which are featured in more detail in other articles in this issue.
Abstract: This short review highlights some of the exciting new experimental and theoretical developments in the field of photoactivatable metal complexes and their applications in biotechnology and medicine. The examples chosen are based on some of the presentations at the Royal Society Discussion Meeting in June 2012, many of which are featured in more detail in other articles in this issue. This is a young field. Even the photochemistry of well-known systems such as metal–carbonyl complexes is still being elucidated. Striking are the recent developments in theory and computation (e.g. time-dependent density functional theory) and in ultrafast-pulsed radiation techniques which allow photochemical reactions to be followed and their mechanisms to be revealed on picosecond/nanosecond time scales. Not only do some metal complexes (e.g. those of Ru and Ir) possess favourable emission properties which allow functional imaging of cells and tissues (e.g. DNA interactions), but metal complexes can also provide spatially controlled photorelease of bioactive small molecules (e.g. CO and NO)—a novel strategy for site-directed therapy. This extends to cancer therapy, where metal-based precursors offer the prospect of generating excited-state drugs with new mechanisms of action that complement and augment those of current organic photosensitizers.

Journal ArticleDOI
TL;DR: Concepts of transdimensional inference are introduced to a general readership and illustrate with particular seismological examples.
Abstract: Seismologists construct images of the Earth's interior structure using observations, derived from seismograms, collected at the surface. A common approach to such inverse problems is to build a single 'best' Earth model, in some sense. This is despite the fact that the observations by themselves often do not require, or even allow, a single best-fit Earth model to exist. Interpretation of optimal models can be fraught with difficulties, particularly when formal uncertainty estimates become heavily dependent on the regularization imposed. Similar issues occur across the physical sciences with model construction in ill-posed problems. An alternative approach is to embrace the non-uniqueness directly and employ an inference process based on parameter space sampling. Instead of seeking a best model within an optimization framework, one seeks an ensemble of solutions and derives properties of that ensemble for inspection. While this idea has itself been employed for more than 30 years, it is now receiving increasing attention in the geosciences. Recently, it has been shown that transdimensional and hierarchical sampling methods have some considerable benefits for problems involving multiple parameter types, uncertain data errors and/or uncertain model parametrizations, as are common in seismology. Rather than being forced to make decisions on parametrization, the level of data noise and the weights between data types in advance, as is often the case in an optimization framework, the choice can be informed by the data themselves. Despite the relatively high computational burden involved, the number of areas where sampling methods are now feasible is growing rapidly. The intention of this article is to introduce concepts of transdimensional inference to a general readership and illustrate with particular seismological examples. A growing body of references provide necessary detail.

Journal ArticleDOI
TL;DR: This review describes the approaches most commonly applied to detect direct and indirect couplings between time series, especially focusing on nonlinear approaches and gives their basic theoretical background, their basic requirements for application, their main features and their usefulness in different applications.
Abstract: Recently, methods have been developed to analyse couplings in dynamic systems. In the field of medical analysis of complex cardiovascular and cardiorespiratory systems, there is growing interest in how insights may be gained into the interaction between regulatory mechanisms in healthy and diseased persons. The couplings within and between these systems can be linear or nonlinear. However, the complex mechanisms involved in cardiovascular and cardiorespiratory regulation very likely interact with each other in a nonlinear way. Recent advances in nonlinear dynamics and information theory have allowed the multivariate study of information transfer between time series. They therefore might be able to provide additional diagnostic and prognostic information in medicine and might, in particular, be able to complement traditional linear coupling analysis techniques. In this review, we describe the approaches (Granger causality, nonlinear prediction, entropy, symbolization, phase synchronization) most commonly applied to detect direct and indirect couplings between time series, especially focusing on nonlinear approaches. We will discuss their capacity to quantify direct and indirect couplings and the direction (driver-response relationship) of the considered interaction between different biological time series. We also give their basic theoretical background, their basic requirements for application, their main features and demonstrate their usefulness in different applications in the field of cardiovascular and cardiorespiratory coupling analyses.

Journal ArticleDOI
TL;DR: This paper analyses a set of velocity time histories obtained at a fixed point in the bottom boundary layer of a tidal stream, 5 m from the seabed, and where the mean flow reached 2.5…m s−1.5, to increase the levels of confidence within the tidal energy industry of the characteristics of the higher frequency components of the onset flow, and subsequently lead to more realistic performance and loading predictions.
Abstract: This paper analyses a set of velocity time histories which were obtained at a fixed point in the bottom boundary layer of a tidal stream, 5 m from the seabed, and where the mean flow reached 2.5 m s −1 . Considering two complete tidal cycles near spring tide, the streamwise turbulence intensity during non-slack flow was found to be approximately 12–13%, varying slightly between flood and ebb tides. The ratio of the streamwise turbulence intensity to that of the transverse and vertical intensities is typically 1 : 0.75 : 0.56, respectively. Velocity autospectra computed near maximum flood tidal flow conditions exhibit an f −2/3 inertial subrange and conform reasonably well to atmospheric turbulence spectral models. Local isotropy is observed between the streamwise and transverse spectra at reduced frequencies of f >0.5. The streamwise integral time scales and length scales of turbulence at maximum flow are approximately 6 s and 11–14 m, respectively, and exhibit a relatively large degree of scatter. They are also typically much greater in magnitude than the transverse and vertical components. The findings are intended to increase the levels of confidence within the tidal energy industry of the characteristics of the higher frequency components of the onset flow, and subsequently lead to more realistic performance and loading predictions.

Journal ArticleDOI
TL;DR: The forecasting skill of the parametrizations was found to be linked to their ability to reproduce the climatology of the full model, important in a seamless prediction system, allowing the reliability of short-term forecasts to provide a quantitative constraint on the accuracy of climate predictions from the same system.
Abstract: Simple chaotic systems are useful tools for testing methods for use in numerical weather simulations owing to their transparency and computational cheapness. The Lorenz system was used here; the full system was defined as ‘truth’, whereas a truncated version was used as a testbed for parametrization schemes. Several stochastic parametrization schemes were investigated, including additive and multiplicative noise. The forecasts were started from perfect initial conditions, eliminating initial condition uncertainty. The stochastically generated ensembles were compared with perturbed parameter ensembles and deterministic schemes. The stochastic parametrizations showed an improvement in weather and climate forecasting skill over deterministic parametrizations. Including a temporal autocorrelation resulted in a significant improvement over white noise, challenging the standard idea that a parametrization should only represent sub-gridscale variability. The skill of the ensemble at representing model uncertainty was tested; the stochastic ensembles gave better estimates of model uncertainty than the perturbed parameter ensembles. The forecasting skill of the parametrizations was found to be linked to their ability to reproduce the climatology of the full model. This is important in a seamless prediction system, allowing the reliability of short-term forecasts to provide a quantitative constraint on the accuracy of climate predictions from the same system.

Journal ArticleDOI
TL;DR: This review examines recent debates regarding the governance dimensions of water security, including adaptive governance, polycentric governance, social learning and multi-level governance, and explores the relevance of social power.
Abstract: Water governance is critical to water security, and to the long-term sustainability of the Earth's freshwater systems. This review examines recent debates regarding the governance dimensions of wat...

Journal ArticleDOI
TL;DR: This paper aims to give an overview of current thinking on the topic of material efficiency, spanning environmental, engineering, economics, sociology and policy issues.
Abstract: Material efficiency, as discussed in this Meeting Issue, entails the pursuit of the technical strategies, business models, consumer preferences and policy instruments that would lead to a substantial reduction in the production of high-volume energy-intensive materials required to deliver human well-being. This paper, which introduces a Discussion Meeting Issue on the topic of material efficiency, aims to give an overview of current thinking on the topic, spanning environmental, engineering, economics, sociology and policy issues. The motivations for material efficiency include reducing energy demand, reducing the emissions and other environmental impacts of industry, and increasing national resource security. There are many technical strategies that might bring it about, and these could mainly be implemented today if preferred by customers or producers. However, current economic structures favour the substitution of material for labour, and consumer preferences for material consumption appear to continue even beyond the point at which increased consumption provides any increase in well-being. Therefore, policy will be required to stimulate material efficiency. A theoretically ideal policy measure, such as a carbon price, would internalize the externality of emissions associated with material production, and thus motivate change directly. However, implementation of such a measure has proved elusive, and instead the adjustment of existing government purchasing policies or existing regulations-- for instance to do with building design, planning or vehicle standards--is likely to have a more immediate effect.

Journal ArticleDOI
TL;DR: An identification algorithm is sketched that learns causal time-series structures in the presence of latent variables; the description is non-technical and thus accessible to applied scientists interested in adopting the method.
Abstract: I review the use of the concept of Granger causality for causal inference from time-series data. First, I give a theoretical justification by relating the concept to other theoretical causality measures. Second, I outline possible problems with spurious causality and approaches to tackle these problems. Finally, I sketch an identification algorithm that learns causal time-series structures in the presence of latent variables. The description of the algorithm is non-technical and thus accessible to applied scientists who are interested in adopting the method.
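The core idea of Granger causality, that x Granger-causes y if past values of x improve the prediction of y beyond what y's own past provides, can be illustrated with a minimal simulation. The sketch below is not the identification algorithm of the paper; the autoregressive coefficients, sample size and random seed are purely illustrative assumptions. It fits restricted and unrestricted autoregressions by ordinary least squares and compares residual sums of squares in both directions:

```python
import random

random.seed(0)

# Simulate a pair of AR(1) series in which x drives y with a one-step lag,
# while y has no influence on x (all coefficients are illustrative).
n = 2000
x, y = [0.0], [0.0]
for _ in range(n - 1):
    x.append(0.7 * x[-1] + random.gauss(0.0, 1.0))
    y.append(0.8 * y[-1] + 0.5 * x[-2] + random.gauss(0.0, 1.0))

def rss_pair(target, driver):
    """Residual sum of squares of an AR(1) fit to `target`, without and
    with one lag of `driver` added (OLS, no intercept: both series are
    zero-mean by construction)."""
    yt, ylag, xlag = target[1:], target[:-1], driver[:-1]
    s_yy = sum(a * a for a in ylag)
    s_xx = sum(a * a for a in xlag)
    s_yx = sum(a * c for a, c in zip(ylag, xlag))
    c_y = sum(a * c for a, c in zip(ylag, yt))
    c_x = sum(a * c for a, c in zip(xlag, yt))
    # Restricted model: target regressed on its own lag only.
    b = c_y / s_yy
    rss_r = sum((t - b * l) ** 2 for t, l in zip(yt, ylag))
    # Unrestricted model: two regressors, normal equations via Cramer's rule.
    det = s_yy * s_xx - s_yx ** 2
    b1 = (c_y * s_xx - c_x * s_yx) / det
    b2 = (s_yy * c_x - s_yx * c_y) / det
    rss_u = sum((t - b1 * l - b2 * m) ** 2
                for t, l, m in zip(yt, ylag, xlag))
    return rss_r, rss_u

r_fwd, u_fwd = rss_pair(y, x)   # does x improve prediction of y?
r_rev, u_rev = rss_pair(x, y)   # does y improve prediction of x?
gain_fwd = (r_fwd - u_fwd) / u_fwd
gain_rev = (r_rev - u_rev) / u_rev
print("x -> y gain:", round(gain_fwd, 3), " y -> x gain:", round(gain_rev, 3))
```

The relative RSS reduction is large in the causal direction and negligible in the reverse direction; a formal test would compare an F statistic built from these quantities against its null distribution.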

Journal ArticleDOI
TL;DR: This Theme Issue, including one review article and 12 research papers, can be regarded as a continuation of the first special issue of European Physical Journal Special Topics in 2011 and the second special issue of International Journal of Bifurcation and Chaos in 2012.
Abstract: Fractional calculus was formulated in 1695, shortly after the development of classical calculus. The earliest systematic studies are attributed to Liouville, Riemann, Leibniz and others [1,2]. For a long time, fractional calculus was regarded as a purely mathematical realm without real applications, but in recent decades this state of affairs has changed. Fractional calculus has been found to be useful and even powerful; an outline of its history, especially with regard to applications, can be found in Machado et al. [3]. Fractional calculus and its applications are now undergoing rapid development, with more and more convincing applications in the real world [4,5]. This Theme Issue, including one review article and 12 research papers, can be regarded as a continuation of our first special issue of European Physical Journal Special Topics in 2011 [4] and our second special issue of International Journal of Bifurcation and Chaos in 2012 [5]. The selected papers were mostly reported at The Fifth Symposium on Fractional Derivatives and Their Applications (FDTA'11), held in Washington DC, USA in 2011. The first paper presents an overview of chaos synchronization of coupled fractional differential systems. A range of coupling schemes is presented, including one-way coupling, Pecora–Carroll coupling, active–passive decomposition coupling, bidirectional coupling and other unidirectional coupling configurations. Several extended concepts of synchronization are also introduced, namely projective synchronization, hybrid projective synchronization, function projective synchronization, generalized synchronization and generalized projective synchronization. Corresponding to the different kinds of synchronization schemes, various analysis methods are presented and discussed [6].
The rest of the papers can be roughly grouped into three parts: three papers on fundamental theories of fractional calculus [7–9], five papers on fractional modelling with applications [10–14] and four papers on numerical approaches [15–18]. In the theory part, the three papers focus on the existence of solutions to the considered classes of nonlinear fractional systems, the equivalent system of the multiple-rational-order fractional system, and reflection symmetry with applications to the Euler–Lagrange equations [7–9]. Baleanu et al. [7] use fixed-point theorems to prove the existence and uniqueness of solutions to a class of nonlinear fractional differential equations with different boundary-value conditions. Li et al. [8] apply the properties of fractional derivatives to transform a multiple-rational-order system into a fractional system of a single common order; this reduction makes stability analysis and numerical simulation more convenient. Reflection symmetry and its applications to the Euler–Lagrange equations in fractional mechanics are investigated by Klimek [9], where an illustrative example is presented. The part on fractional modelling with applications consists of five papers [10–14]. Chen et al. [10] establish a fractional variational optical flow model for motion estimation from video sequences, with experiments demonstrating the value of generalizing the derivative order. A fractional model of heat transfer in heterogeneous media is studied by Sierociuk et al. [11]. In the following paper, two-particle dispersion is explored in the context of anomalous diffusion, where two modelling approaches related to time subordination are considered and unified in the framework of self-similar stochastic processes [12].
The last two papers in this part emphasize the applications of fractional calculus [13,14]: the former introduces a novel method for the solution of linear constant-coefficient fractional differential equations of any commensurate order, and the latter presents the CRONE control-system design toolbox for the control engineering community. The final four papers are devoted to numerical approaches [15–18]. Sun et al. [15] construct a semi-discrete finite-element method for a class of time-fractional diffusion equations. An implicit numerical algorithm for the space- and time-fractional Bloch–Torrey equation is then established, with stability and convergence also considered [16]. In Fukunaga & Shimizu [17], a high-speed scheme for the numerical evaluation of fractional derivatives and fractional integrals is proposed. In the last paper, Podlubny et al. [18] further develop Podlubny's matrix approach to the discretization of non-integer derivatives and integrals, considering non-equidistant grids, variable step lengths and distributed orders. We have organized this Theme Issue in the hope of offering fresh stimulus to the fractional calculus community, to further promote and develop cutting-edge research on fractional calculus and its applications.
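To give a concrete flavour of the numerical side of the subject, the Grünwald–Letnikov definition provides the simplest discretization of a fractional derivative. The sketch below is a generic textbook scheme, not any specific method from this issue; the order alpha and step size h are illustrative choices. It approximates the half derivative of f(t) = t and compares the result with the known closed form:

```python
import math

alpha = 0.5    # fractional order (illustrative choice)
h = 1.0e-3     # step size (illustrative choice)
T = 1.0
n = int(T / h)
f = [k * h for k in range(n + 1)]          # f(t) = t sampled on [0, T]

# Grunwald-Letnikov weights: w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k),
# i.e. the signed binomial coefficients (-1)^k C(alpha, k).
w = [1.0]
for k in range(1, n + 1):
    w.append(w[-1] * (1.0 - (alpha + 1.0) / k))

# D^alpha f(T) ~ h^(-alpha) * sum_k w_k * f(T - k h)
gl = sum(w[k] * f[n - k] for k in range(n + 1)) / h ** alpha

# Closed form for f(t) = t: D^alpha t = t^(1 - alpha) / Gamma(2 - alpha).
exact = T ** (1.0 - alpha) / math.gamma(2.0 - alpha)
print("GL approximation:", round(gl, 4), " exact:", round(exact, 4))
```

The scheme is first-order accurate in h, so refining the grid steadily reduces the discrepancy; Podlubny's matrix approach mentioned above assembles these same weights into triangular discretization matrices.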

Journal ArticleDOI
TL;DR: This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian non-parametrics for modelling unknown functions, density estimation, clustering, time-series modelling, and representing sparsity, hierarchies and covariance structure.
Abstract: Modelling is fundamental to many fields of science and engineering. A model can be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian non-parametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian non-parametrics. The survey covers the use of Bayesian non-parametrics for modelling unknown functions, density estimation, clustering, time-series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief non-technical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman’s coalescent, Dirichlet diffusion trees and Wishart processes.
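One of the tools surveyed, the Dirichlet process, can be simulated directly via its stick-breaking construction. The following sketch uses only the standard construction; the concentration parameter, truncation level and seed are chosen purely for illustration. It draws the weights of one random probability measure and shows the characteristic clustering behaviour of samples from it:

```python
import random

random.seed(1)

alpha = 2.0   # DP concentration parameter (illustrative choice)
K = 500       # truncation level for the stick-breaking construction

# Stick-breaking: v_k ~ Beta(1, alpha), pi_k = v_k * prod_{j<k} (1 - v_j).
remaining, weights = 1.0, []
for _ in range(K):
    v = random.betavariate(1.0, alpha)
    weights.append(remaining * v)
    remaining *= 1.0 - v

# The weights of a Dirichlet process sum to 1; a deep truncation gets close.
print("total weight:", sum(weights))

# Cluster assignments drawn from a DP realization concentrate on a small
# number of distinct atoms, which is what makes it useful for clustering.
draws = random.choices(range(K), weights=weights, k=1000)
print("distinct clusters among 1000 draws:", len(set(draws)))
```

A DP mixture model would attach a parameter (an "atom") to each index and generate data from the component a draw selects; the infinite hidden Markov model and Indian buffet process mentioned in the abstract extend this same non-parametric idea to sequences and to binary feature matrices.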

Journal ArticleDOI
TL;DR: The role of substrate nonlinearity in the stability of wrinkling of thin films bonded to compliant substrates is investigated within the initial post-bifurcation range when wrinkling first emerges, and two dimensionless parameters are identified that control the stability and mode shape evolution of the bilayer.
Abstract: The role of substrate nonlinearity in the stability of wrinkling of thin films bonded to compliant substrates is investigated within the initial post-bifurcation range when wrinkling first emerges. A fully nonlinear neo-Hookean bilayer composed of a thin film on a deep substrate is analysed for a wide range of the film–substrate stiffness ratio, from films that are very stiff compared with the substrate to those only slightly stiffer. Substrate pre-stretch prior to film attachment is shown to have a significant effect on the nonlinearity relevant to wrinkling. Two dimensionless parameters are identified that control the stability and mode shape evolution of the bilayer: one specifying arbitrary uniform substrate pre-stretch and the other a stretch-modified modulus ratio. For systems with film stiffness greater than about five times that of the substrate, the wrinkling bifurcation is stable, whereas for systems with smaller relative film stiffness the bifurcation can be unstable, especially if the substrate pre-stretch is not tensile.
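For orientation, the classical linear analysis of a stiff film on a compliant half-space with no pre-stretch (a standard small-strain result, not the fully nonlinear neo-Hookean analysis of this paper) gives the critical compressive strain and the wrinkle wavelength in terms of the plane-strain moduli and the film thickness:

```latex
% Stiff film (plane-strain modulus \bar{E}_f, thickness h) on a compliant
% half-space (\bar{E}_s), no substrate pre-stretch:
\varepsilon_c = \frac{1}{4}\left(\frac{3\,\bar{E}_s}{\bar{E}_f}\right)^{2/3},
\qquad
\lambda = 2\pi h \left(\frac{\bar{E}_f}{3\,\bar{E}_s}\right)^{1/3}
```

As the stiffness ratio drops towards unity, these small-strain formulas predict critical strains that are no longer small, which is precisely the regime where the finite-strain, pre-stretch-dependent analysis of the paper becomes necessary.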

Journal ArticleDOI
TL;DR: Comparisons show that the model is accurate and can predict up to 94 per cent of the variation in the experimental velocity data measured on the centreline of the wake, demonstrating that the actuator disc-RANS model is an accurate approach for modelling a turbine wake, and a conservative approach to predict performance and loads.
Abstract: The actuator disc-RANS model has widely been used in wind and tidal energy to predict the wake of a horizontal axis turbine. The model is appropriate where large-scale effects of the turbine on a flow are of interest, for example, when considering environmental impacts, or arrays of devices. The accuracy of the model for predicting the wake of tidal stream turbines has not previously been demonstrated, and flow predictions presented in the literature for similar modelled scenarios vary significantly. This paper compares the results of the actuator disc-RANS model, where the turbine forces have been derived using a blade-element approach, to experimental data measured in the wake of a scaled turbine. It also compares the results with those of a simpler uniform actuator disc model. The comparisons show that the model is accurate and can predict up to 94 per cent of the variation in the experimental velocity data measured on the centreline of the wake, thereby demonstrating that the actuator disc-RANS model is an accurate approach for modelling a turbine wake, and a conservative approach for predicting performance and loads. It can therefore be applied to similar scenarios with confidence.
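The actuator disc representation is grounded in classical one-dimensional momentum theory, which relates the thrust and power coefficients to the axial induction factor a. The sketch below covers only that classical theory, not the blade-element forcing or the RANS coupling used in the paper:

```python
# 1D momentum (actuator disc) theory: for axial induction factor a, the
# thrust and power coefficients are CT = 4a(1 - a) and CP = 4a(1 - a)^2.
def ct(a):
    return 4.0 * a * (1.0 - a)

def cp(a):
    return 4.0 * a * (1.0 - a) ** 2

# Scan induction factors on a fine grid; the optimum is the Betz limit,
# CP = 16/27 at a = 1/3.
grid = [k / 1000.0 for k in range(1, 500)]
best_a = max(grid, key=cp)

# Momentum theory also gives the far-wake velocity, U_wake = (1 - 2a) U_inf,
# which is the quantity the centreline wake comparisons probe.
wake_ratio = 1.0 - 2.0 * best_a
print("a:", best_a, " CP:", round(cp(best_a), 4), " U_wake/U_inf:", wake_ratio)
```

In an actuator disc-RANS computation, the disc applies the corresponding thrust as a momentum sink over the rotor area, and the solver resolves how this idealized far-wake deficit actually develops and recovers downstream.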

Journal ArticleDOI
TL;DR: A review is given of the field of mineral colloidal liquid crystals: liquid crystal phases formed by individual mineral particles within colloidal suspensions, covering developments from their discovery in the 1920s and highlighting some promising results from recent years.
Abstract: A review is given of the field of mineral colloidal liquid crystals: liquid crystal phases formed by individual mineral particles within colloidal suspensions. Starting from their discovery in the 1920s, we discuss developments on the levels of both fundamentals and applications. We conclude by highlighting some promising results from recent years, which may point the way towards future developments.