
Showing papers in "Philosophical Transactions of the Royal Society A in 2016"


Journal ArticleDOI
TL;DR: The basic ideas of PCA are introduced, discussing what it can and cannot do, along with some variants of the technique that are tailored to different data types and structures.
Abstract: Large datasets are increasingly common and are often difficult to interpret. Principal component analysis (PCA) is a technique for reducing the dimensionality of such datasets, increasing interpretability but at the same time minimizing information loss. It does so by creating new uncorrelated variables that successively maximize variance. Finding such new variables, the principal components, reduces to solving an eigenvalue/eigenvector problem, and the new variables are defined by the dataset at hand, not a priori, hence making PCA an adaptive data analysis technique. It is adaptive in another sense too, since variants of the technique have been developed that are tailored to various different data types and structures. This article will begin by introducing the basic ideas of PCA, discussing what it can and cannot do. It will then describe some variants of PCA and their application.

4,289 citations
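Since the PCA entry above reduces the problem to an eigenvalue/eigenvector problem on the data's covariance matrix, a minimal sketch may help make that concrete. The snippet below is an illustrative NumPy implementation, not code from the paper; the function name and toy data are my own.

```python
import numpy as np

def pca(X, n_components=2):
    """Minimal PCA sketch: eigendecomposition of the sample covariance matrix.

    X is an (n_samples, n_features) array; returns the scores (the new
    uncorrelated variables), the principal axes and their variances.
    """
    Xc = X - X.mean(axis=0)                      # centre each variable
    cov = np.cov(Xc, rowvar=False)               # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # symmetric eigenproblem
    order = np.argsort(eigvals)[::-1]            # sort by decreasing variance
    components = eigvecs[:, order[:n_components]]
    scores = Xc @ components                     # successively maximize variance
    return scores, components, eigvals[order[:n_components]]

# Toy example: 100 samples of 5 correlated variables reduced to 2 components.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 5))
scores, components, variances = pca(X, n_components=2)
```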


Journal ArticleDOI
TL;DR: Deep convolutional networks provide state-of-the-art classification and regression results over many high-dimensional problems, and a mathematical framework is introduced to analyse their properties.
Abstract: Deep convolutional networks provide state-of-the-art classification and regression results over many high-dimensional problems. We review their architecture, which scatters data with a cascade of linear filter weights and nonlinearities. A mathematical framework is introduced to analyse their properties. Computations of invariants involve multiscale contractions with wavelets, the linearization of hierarchical symmetries and sparse separations. Applications are discussed.

490 citations
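The "cascade of linear filter weights and nonlinearities" described above can be illustrated with a toy one-dimensional cascade: band-pass filtering, a pointwise modulus nonlinearity and an averaging contraction, iterated once. This is a hedged sketch with made-up Gabor-like filters, not the scattering framework of the paper.

```python
import numpy as np

def filter_bank(n, n_filters=4):
    """Toy bank of Gabor-like band-pass filters defined in the frequency domain."""
    freqs = np.fft.fftfreq(n)
    centres = 0.5 ** np.arange(1, n_filters + 1)          # dyadic centre frequencies
    return [np.exp(-(np.abs(freqs) - c) ** 2 / (2 * (c / 4) ** 2)) for c in centres]

def cascade_layer(x, bank):
    """One layer: linear filtering followed by a pointwise modulus nonlinearity."""
    X = np.fft.fft(x)
    return [np.abs(np.fft.ifft(X * h)) for h in bank]

# Two-layer cascade: filter -> modulus -> filter -> modulus -> average (contraction).
rng = np.random.default_rng(1)
x = rng.normal(size=1024)
bank = filter_bank(x.size)
first = cascade_layer(x, bank)
invariants = [u.mean() for u in first]                                   # first order
invariants += [v.mean() for u in first for v in cascade_layer(u, bank)]  # second order
```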


Journal ArticleDOI
TL;DR: Data ethics should be developed from the start as a macroethics, that is, as an overall framework that avoids narrow, ad hoc approaches and addresses the ethical impact and implications of data science and its applications in a consistent, holistic and inclusive way.
Abstract: This theme issue has the founding ambition of landscaping data ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation,...

250 citations


Journal ArticleDOI
Rob Kitchin
TL;DR: It is argued that smart city initiatives and urban science need to be re-cast in three ways: a re-orientation in how cities are conceived; a reconfiguring of the underlying epistemology to openly recognize the contingent and relational nature of urban systems, processes and science; and the adoption of ethical principles designed to realize benefits of smart cities and urban science while reducing pernicious effects.
Abstract: Software-enabled technologies and urban big data have become essential to the functioning of cities. Consequently, urban operational governance and city services are becoming highly responsive to a form of data-driven urbanism that is the key mode of production for smart cities. At the heart of data-driven urbanism is a computational understanding of city systems that reduces urban life to logic and calculative rules and procedures, which is underpinned by an instrumental rationality and realist epistemology. This rationality and epistemology are informed by and sustain urban science and urban informatics, which seek to make cities more knowable and controllable. This paper examines the forms, practices and ethics of smart cities and urban science, paying particular attention to: instrumental rationality and realist epistemology; privacy, datafication, dataveillance and geosurveillance; and data uses, such as social sorting and anticipatory governance. It argues that smart city initiatives and urban science need to be re-cast in three ways: a re-orientation in how cities are conceived; a reconfiguring of the underlying epistemology to openly recognize the contingent and relational nature of urban systems, processes and science; and the adoption of ethical principles designed to realize benefits of smart cities and urban science while reducing pernicious effects. This article is part of the themed issue 'The ethical impact of data science'.

244 citations


Journal ArticleDOI
TL;DR: This work discusses the fundamental aspects that can contribute to thermal hysteresis and the strategies being developed to at least partially overcome the hysteresis problem in some selected classes of magnetocaloric materials with large application potential.
Abstract: Hysteresis is more than just an interesting oddity that occurs in materials with a first-order transition. It is a real obstacle on the path from existing laboratory-scale prototypes of magnetic refrigerators towards commercialization of this potentially disruptive cooling technology. Indeed, the reversibility of the magnetocaloric effect, being essential for magnetic heat pumps, strongly depends on the width of the thermal hysteresis and, therefore, it is necessary to understand the mechanisms causing hysteresis and to find solutions to minimize losses associated with thermal hysteresis in order to maximize the efficiency of magnetic cooling devices. In this work, we discuss the fundamental aspects that can contribute to thermal hysteresis and the strategies that we are developing to at least partially overcome the hysteresis problem in some selected classes of magnetocaloric materials with large application potential. In doing so, we refer to the most relevant classes of magnetic refrigerants: La–Fe–Si-, Heusler- and Fe2P-type compounds. This article is part of the themed issue ‘Taking the temperature of phase transitions in cool materials’.

219 citations


Journal ArticleDOI
TL;DR: It is argued that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge in multiscale modelling.
Abstract: The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'.

178 citations


Journal ArticleDOI
TL;DR: This article summarizes and reviews recent progress in the development of catalysts for the ring-opening copolymerization of carbon dioxide and epoxides, highlighting and exemplifying some key recent findings and hypotheses.
Abstract: This article summarizes and reviews recent progress in the development of catalysts for the ring-opening copolymerization of carbon dioxide and epoxides. The copolymerization is an interesting method to add value to carbon dioxide, including from waste sources, and to reduce pollution associated with commodity polymer manufacture. The selection of the catalyst is of critical importance to control the composition, properties and applications of the resultant polymers. This review highlights and exemplifies some key recent findings and hypotheses, in particular using examples drawn from our own research.

161 citations


Journal ArticleDOI
TL;DR: A deeper understanding of the connections between autonomic cardiac control and brain dynamics through advanced signal and neuroimage processing may lead to invaluable tools for the early detection and treatment of pathological changes in the brain–heart interaction.
Abstract: The brain controls the heart directly through the sympathetic and parasympathetic branches of the autonomic nervous system, which consists of multi-synaptic pathways from myocardial cells back to peripheral ganglionic neurons and further to central preganglionic and premotor neurons. Cardiac function can be profoundly altered by the reflex activation of cardiac autonomic nerves in response to inputs from baro-, chemo-, nasopharyngeal and other receptors as well as by central autonomic commands, including those associated with stress, physical activity, arousal and sleep. In the clinical setting, slowly progressive autonomic failure frequently results from neurodegenerative disorders, whereas autonomic hyperactivity may result from vascular, inflammatory or traumatic lesions of the autonomic nervous system, adverse effects of drugs and chronic neurological disorders. Both acute and chronic manifestations of an imbalanced brain-heart interaction have a negative impact on health. Simple, widely available and reliable cardiovascular markers of the sympathetic tone and of the sympathetic-parasympathetic balance are lacking. A deeper understanding of the connections between autonomic cardiac control and brain dynamics through advanced signal and neuroimage processing may lead to invaluable tools for the early detection and treatment of pathological changes in the brain-heart interaction.

155 citations


Journal ArticleDOI
TL;DR: Surprisingly, no evidence for lasting superhydrophobicity in non-biological surfaces exists. Biomimetic applications are discussed: self-cleaning is established, drag reduction is becoming increasingly important, and a novel air-retaining grid technology is introduced.
Abstract: A comprehensive survey of the construction principles and occurrences of superhydrophobic surfaces in plants, animals and other organisms is provided and is based on our own scanning electron micro...

135 citations


Journal ArticleDOI
TL;DR: It is argued that a proper understanding of information in terms of prediction is key to a number of disciplines beyond engineering, such as physics and biology.
Abstract: Information is a precise concept that can be defined mathematically, but its relationship to what we call ‘knowledge’ is not always made clear. Furthermore, the concepts ‘entropy’ and ‘information’, while deeply related, are distinct and must be used with care, something that is not always achieved in the literature. In this elementary introduction, the concepts of entropy and information are laid out one by one, explained intuitively, but defined rigorously. I argue that a proper understanding of information in terms of prediction is key to a number of disciplines beyond engineering, such as physics and biology.

123 citations
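Because the entry above stresses that entropy and information are distinct but related quantities, a small worked example may help; the joint distribution below is hypothetical and the code only illustrates the standard definitions, not material from the article.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution p (entries summing to 1)."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical joint distribution p(x, y) over two binary variables.
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])
px = pxy.sum(axis=1)                                 # marginal of X
py = pxy.sum(axis=0)                                 # marginal of Y

H_X = entropy(px)                                    # uncertainty about X
H_X_given_Y = entropy(pxy.ravel()) - entropy(py)     # H(X|Y) = H(X,Y) - H(Y)
I_XY = H_X - H_X_given_Y                             # information Y provides for predicting X

print(f"H(X) = {H_X:.3f} bits, I(X;Y) = {I_XY:.3f} bits")
```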


Journal ArticleDOI
TL;DR: The potential of CO2 as an alternative working fluid, both in fracturing and re-stimulating activities, beyond its environmental advantages, is discussed as part of the themed issue ‘Energy and the subsurface’.
Abstract: Despite the impact that hydraulic fracturing has had on the energy sector, the physical mechanisms that control its efficiency and environmental impacts remain poorly understood in part because the length scales involved range from nanometres to kilometres. We characterize flow and transport in shale formations across and between these scales using integrated computational, theoretical and experimental methods. At the field scale, we use discrete fracture network modelling to simulate production of a hydraulically fractured well from a fracture network that is based on the site characterization of a shale gas reservoir. At the core scale, we use triaxial fracture experiments and a finite-discrete element model to study dynamic fracture/crack propagation in low permeability shale. We use lattice Boltzmann pore-scale simulations and microfluidic experiments in both synthetic and shale rock micromodels to study pore-scale flow and transport phenomena, including multi-phase flow and fluids mixing. A mechanistic description and integration of these multiple scales is required for accurate predictions of production and the eventual optimization of hydrocarbon extraction from unconventional reservoirs. Finally, we discuss the potential of CO2 as an alternative working fluid, both in fracturing and re-stimulating activities, beyond its environmental advantages. This article is part of the themed issue 'Energy and the subsurface'.

Journal ArticleDOI
TL;DR: This review will cover the state of the art of graphene-based membranes and also provide a materials guideline on future research directions suitable for practical membrane applications.
Abstract: Recently, graphene-based membranes have been extensively studied, represented by two distinct research directions: (i) creating pores in the graphene basal plane and (ii) engineering nanochannels in graphene layers. Most simulation results predict that porous graphene membranes can be much more selective and permeable than currently existing membranes, as also evidenced by some experimental results for gas separation and desalination. In addition, graphene oxide has been widely investigated in layered membranes with two-dimensional nanochannels, showing very intriguing separation properties. This review will cover the state of the art of graphene-based membranes and also provide a materials guideline on future research directions suitable for practical membrane applications.

Journal ArticleDOI
TL;DR: By studying how natural surfaces interact with liquids, new techniques can be developed to clean up oil spills and further protect the authors' most precious resource.
Abstract: Access to a safe supply of water is a human right. However, with growing populations, global warming and contamination due to human activity, it is one that is increasingly under threat. It is hoped that nature can inspire the creation of materials to aid in the supply and management of water, from water collection and purification to water source clean-up and rehabilitation from oil contamination. Many species thrive in even the driest places, with some surviving on water harvested from fog. By studying these species, new materials can be developed to provide a source of fresh water from fog for communities across the globe. The vast majority of water on the Earth is in the oceans. However, current desalination processes are energy-intensive. Systems in our own bodies have evolved to transport water efficiently while blocking other molecules and ions. Inspiration can be taken from such systems to improve the efficiency of desalination and help purify water containing other contaminants. Finally, oil contamination of water from spills or the fracking technique can be a devastating environmental disaster. By studying how natural surfaces interact with liquids, new techniques can be developed to clean up oil spills and further protect our most precious resource. This article is part of the themed issue 'Bioinspired hierarchically structured surfaces for green science'.

Journal ArticleDOI
TL;DR: This study uses molecular dynamics simulations to determine the permeability and salt rejection capabilities for membranes incorporating carbon nanotubes (CNTs) at a range of pore sizes, pressures and concentrations and finds that salt rejection is highly dependent on the applied hydrostatic pressure.
Abstract: Membranes made from nanomaterials such as nanotubes and graphene have been suggested to have a range of applications in water filtration and desalination, but determining their suitability for these purposes requires an accurate assessment of the properties of these novel materials. In this study, we use molecular dynamics simulations to determine the permeability and salt rejection capabilities for membranes incorporating carbon nanotubes (CNTs) at a range of pore sizes, pressures and concentrations. We include the influence of osmotic gradients and concentration build up and simulate at realistic pressures to improve the reliability of estimated membrane transport properties. We find that salt rejection is highly dependent on the applied hydrostatic pressure, meaning high rejection can be achieved with wider tubes than previously thought; while membrane permeability depends on salt concentration. The ideal size of the CNTs for desalination applications yielding high permeability and high salt rejection is found to be around 1.1 nm diameter. While there are limited energy gains to be achieved in using ultra-permeable CNT membranes in desalination by reverse osmosis, such membranes may allow for smaller plants to be built as is required when size or weight must be minimized. There are diminishing returns in further increasing membrane permeability, so efforts should focus on the fabrication of membranes containing narrow or functionalized CNTs that yield the desired rejection or selection properties rather than trying to optimize pore densities.

Journal ArticleDOI
TL;DR: The purpose of the meeting was to establish the nature of the capacity crunch, estimate the time scales associated with it and to begin to find solutions to enable continued growth in a post-crunch era.
Abstract: This issue of Philosophical Transactions of the Royal Society, Part A represents a summary of the recent discussion meeting 'Communication networks beyond the capacity crunch'. The purpose of the meeting was to establish the nature of the capacity crunch, estimate the time scales associated with it and to begin to find solutions to enable continued growth in a post-crunch era. The meeting confirmed that, in addition to a capacity shortage within a single optical fibre, many other 'crunches' are foreseen in the field of communications, both societal and technical. Technical crunches identified included the nonlinear Shannon limit, wireless spectrum, distribution of 5G signals (front haul and back haul), while societal influences included net neutrality, creative content generation and distribution and latency, and finally energy and cost. The meeting concluded with the observation that these many crunches are genuine and may influence our future use of technology, but encouragingly noted that research and business practice are already moving to alleviate many of the negative consequences.

Journal ArticleDOI
TL;DR: This work outlines a novel variational-based theory for the phase-field modelling of ductile fracture in elastic–plastic solids undergoing large strains that regularizes sharp crack surfaces within a pure continuum setting by a specific gradient damage modelling.
Abstract: This work outlines a novel variational-based theory for the phase-field modelling of ductile fracture in elastic-plastic solids undergoing large strains. The phase-field approach regularizes sharp crack surfaces within a pure continuum setting by a specific gradient damage modelling. It is linked to a formulation of gradient plasticity at finite strains. The framework includes two independent length scales which regularize both the plastic response as well as the crack discontinuities. This ensures that the damage zones of ductile fracture are inside of plastic zones, and guarantees on the computational side a mesh objectivity in post-critical ranges.
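For readers unfamiliar with the regularization mentioned above, the sketch below gives the generic crack surface density functional commonly used in phase-field fracture, with damage variable d ∈ [0, 1] and regularization length ℓ; the paper's specific ductile formulation (with its second, plastic length scale) may differ in detail.

```latex
% Generic regularized crack surface functional in phase-field fracture:
% the sharp crack surface \Gamma is approximated by a volume integral of a
% crack density \gamma_\ell over the body \mathcal{B}.
\Gamma_\ell(d) \;=\; \int_{\mathcal{B}} \gamma_\ell(d, \nabla d)\,\mathrm{d}V,
\qquad
\gamma_\ell(d, \nabla d) \;=\; \frac{1}{2\ell}\, d^{2} \;+\; \frac{\ell}{2}\,\lvert \nabla d \rvert^{2}.
```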

Journal ArticleDOI
TL;DR: A new method is proposed to determine the time–frequency content of time-dependent signals consisting of multiple oscillatory components, with time-varying amplitudes and instantaneous frequencies.
Abstract: A new method is proposed to determine the time-frequency content of time-dependent signals consisting of multiple oscillatory components, with time-varying amplitudes and instantaneous frequencies. Numerical experiments as well as a theoretical analysis are presented to assess its effectiveness.


Journal ArticleDOI
TL;DR: Zn and Ni isotope data are presented, and it is suggested that a similar, non-quantitative process operating in porewaters explains the data from organic carbon-rich sediments, at least for Zn.
Abstract: Isotopic data collected to date as part of the GEOTRACES and other programmes show that the oceanic dissolved pool is isotopically heavy relative to the inputs for zinc (Zn) and nickel (Ni). All Zn sinks measured until recently, and the only output yet measured for Ni, are isotopically heavier than the dissolved pool. This would require either a non-steady state ocean or other unidentified sinks. Recently, isotopically light Zn has been measured in organic carbon-rich sediments from productive upwelling margins, providing a potential resolution of this issue, at least for Zn. However, the origin of the isotopically light sedimentary Zn signal is uncertain. Cellular uptake of isotopically light Zn followed by transfer to sediment does not appear to be a quantitatively important process. Here, we present Zn and Ni isotope data for the water column and sediments of the Black Sea. These data demonstrate that isotopically light Zn and Ni are extracted from the water column, likely through an equilibrium fractionation between different dissolved species followed by sequestration of light Zn and Ni in sulphide species to particulates and the sediment. We suggest that a similar, non-quantitative process, operating in porewaters, explains the Zn data from organic carbon-rich sediments.

Journal ArticleDOI
TL;DR: This article analyses DMRs that are due to DMAs, and argues in favour of the allocation, by default and overridably, of full moral responsibility (faultless responsibility) to all the nodes/agents in the network causally relevant for bringing about the DMA in question, independently of intentionality.
Abstract: The concept of distributed moral responsibility (DMR) has a long history. When it is understood as being entirely reducible to the sum of (some) human, individual and already morally loaded actions, then the allocation of DMR, and hence of praise and reward or blame and punishment, may be pragmatically difficult, but not conceptually problematic. However, in distributed environments, it is increasingly possible that a network of agents, some human, some artificial (e.g. a program) and some hybrid (e.g. a group of people working as a team thanks to a software platform), may cause distributed moral actions (DMAs). These are morally good or evil (i.e. morally loaded) actions caused by local interactions that are in themselves neither good nor evil (morally neutral). In this article, I analyse DMRs that are due to DMAs, and argue in favour of the allocation, by default and overridably, of full moral responsibility (faultless responsibility) to all the nodes/agents in the network causally relevant for bringing about the DMA in question, independently of intentionality. The mechanism proposed is inspired by, and adapts, three concepts: back propagation from network theory, strict liability from jurisprudence and common knowledge from epistemic logic. This article is part of the themed issue 'The ethical impact of data science'.

Journal ArticleDOI
TL;DR: Evaluating high-resolution Nd sections for the western and eastern North Atlantic in the context of hydrography, nutrients and aluminium (Al) concentrations reveals that North Atlantic seawater Nd isotopes and concentrations generally follow the patterns of advection, as do Al concentrations.
Abstract: The neodymium (Nd) isotopic composition of seawater has been used extensively to reconstruct ocean circulation on a variety of time scales. However, dissolved neodymium concentrations and isotopes do not always behave conservatively, and quantitative deconvolution of this non-conservative component can be used to detect trace metal inputs and isotopic exchange at ocean–sediment interfaces. In order to facilitate such comparisons for historical datasets, we here provide an extended global database for Nd isotopes and concentrations in the context of hydrography and nutrients. Since 2010, combined datasets for a large range of trace elements and isotopes are collected on international GEOTRACES section cruises, alongside classical nutrient and hydrography measurements. Here, we take a first step towards exploiting these datasets by comparing high-resolution Nd sections for the western and eastern North Atlantic in the context of hydrography, nutrients and aluminium (Al) concentrations. Evaluating those data in tracer–tracer space reveals that North Atlantic seawater Nd isotopes and concentrations generally follow the patterns of advection, as do Al concentrations. Deviations from water mass mixing are observed locally, associated with the addition or removal of trace metals in benthic nepheloid layers, exchange with ocean margins (i.e. boundary exchange) and/or exchange with particulate phases (i.e. reversible scavenging). We emphasize that the complexity of some of the new datasets cautions against a quantitative interpretation of individual palaeo Nd isotope records, and indicates the importance of spatial reconstructions for a more balanced approach to deciphering past ocean changes. This article is part of the themed issue ‘Biological and climatic impacts of ocean trace element chemistry’.

Journal ArticleDOI
TL;DR: In this article, the capacity of the optical fibre channel is investigated in the nonlinear regime, where the intensity-dependent Kerr nonlinearity has been suggested as a fundamental limit to optical fibre capacity.
Abstract: Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity.
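For orientation only: the 'capacity' in question is the Shannon capacity, and the nonlinear limit arises because the Kerr effect makes the effective signal-to-noise ratio depend on launch power. The expressions below are standard textbook forms, not the paper's own analysis; the cubic noise scaling is a commonly used model and is an assumption here.

```latex
% Linear Shannon capacity for bandwidth B and signal-to-noise ratio SNR:
C \;=\; B \log_{2}\!\left(1 + \mathrm{SNR}\right).
% In the nonlinear fibre channel the effective SNR is often modelled as
% power-dependent, e.g. \mathrm{SNR}(P) \approx \frac{P}{P_{\mathrm{ASE}} + \eta P^{3}},
% so that capacity estimates peak at a finite launch power rather than
% growing without bound.
```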

Journal ArticleDOI
TL;DR: A new approach to privacy research and practical design is argued for, focused on the development of conceptual analytics that facilitate dissecting privacy’s multiple uses across multiple contexts.
Abstract: The meaning of privacy has been much disputed throughout its history in response to wave after wave of new technological capabilities and social configurations. The current round of disputes over privacy fuelled by data science has been a cause of despair for many commentators and a death knell for privacy itself for others. We argue that privacy's disputes are neither an accidental feature of the concept nor a lamentable condition of its applicability. Privacy is essentially contested. Because it is, privacy is transformable according to changing technological and social conditions. To make productive use of privacy's essential contestability, we argue for a new approach to privacy research and practical design, focused on the development of conceptual analytics that facilitate dissecting privacy's multiple uses across multiple contexts. This article is part of the themed issue 'The ethical impact of data science'.

Journal ArticleDOI
TL;DR: Challenges in the use of mass spectrometry (MS) as a quantitative tool in plant metabolomics experiments are discussed, and important criteria for the development and validation of MS-based analytical methods are provided.
Abstract: Metabolomics is a research field used to acquire comprehensive information on the composition of a metabolite pool to provide a functional screen of the cellular state. Studies of the plant metabol...

Journal ArticleDOI
TL;DR: Evidence for large-scale subglacial water flow in Antarctica is reviewed, including the discovery of ancient channels developed by former hydrological processes, and areas where future discoveries may be possible are predicted.
Abstract: It is now well documented that over 400 subglacial lakes exist across the bed of the Antarctic Ice Sheet. They comprise a variety of sizes and volumes (from the approx. 250 km long Lake Vostok to bodies of water less than 1 km in length), relate to a number of discrete topographic settings (from those contained within valleys to lakes that reside in broad flat terrain) and exhibit a range of dynamic behaviours (from ‘active’ lakes that periodically outburst some or all of their water to those isolated hydrologically for millions of years). Here we critique recent advances in our understanding of subglacial lakes, in particular since the last inventory in 2012. We show that within 3 years our knowledge of the hydrological processes at the ice-sheet base has advanced considerably. We describe evidence for further ‘active’ subglacial lakes, based on satellite observation of ice-surface changes, and discuss why detection of many ‘active’ lakes is not resolved in traditional radio-echo sounding methods. We go on to review evidence for large-scale subglacial water flow in Antarctica, including the discovery of ancient channels developed by former hydrological processes. We end by predicting areas where future discoveries may be possible, including the detection, measurement and significance of groundwater (i.e. water held beneath the ice-bed interface).

Journal ArticleDOI
TL;DR: The Holo-Hilbert spectral analysis method is introduced to cure the deficiencies of traditional spectral analysis and to give a full informational representation of nonlinear and non-stationary data using a nested empirical mode decomposition and Hilbert–Huang transform approach to identify intrinsic amplitude and frequency modulations often present in nonlinear systems.
Abstract: The Holo-Hilbert spectral analysis (HHSA) method is introduced to cure the deficiencies of traditional spectral analysis and to give a full informational representation of nonlinear and non-stationary data. It uses a nested empirical mode decomposition and Hilbert-Huang transform (HHT) approach to identify intrinsic amplitude and frequency modulations often present in nonlinear systems. Comparisons are first made with traditional spectrum analysis, which usually achieved its results through convolutional integral transforms based on additive expansions of an a priori determined basis, mostly under linear and stationary assumptions. Thus, for non-stationary processes, the best one could do historically was to use the time-frequency representations, in which the amplitude (or energy density) variation is still represented in terms of time. For nonlinear processes, the data can have both amplitude and frequency modulations (intra-mode and inter-mode) generated by two different mechanisms: linear additive or nonlinear multiplicative processes. As all existing spectral analysis methods are based on additive expansions, either a priori or adaptive, none of them could possibly represent the multiplicative processes. While the earlier adaptive HHT spectral analysis approach could accommodate the intra-wave nonlinearity quite remarkably, it remained that any inter-wave nonlinear multiplicative mechanisms that include cross-scale coupling and phase-lock modulations were left untreated. To resolve the multiplicative processes issue, additional dimensions in the spectrum result are needed to account for the variations in both the amplitude and frequency modulations simultaneously. HHSA accommodates all the processes: additive and multiplicative, intra-mode and inter-mode, stationary and non-stationary, linear and nonlinear interactions. The Holo prefix in HHSA denotes a multiple dimensional representation with both additive and multiplicative capabilities.
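The nested decomposition described above can be sketched as a two-layer procedure: a first empirical mode decomposition of the signal, then a second decomposition of each intrinsic mode function's amplitude envelope. The snippet assumes the third-party PyEMD package (distributed as 'EMD-signal') and SciPy's Hilbert transform; it only illustrates the nesting idea and is not the authors' implementation.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD  # assumed third-party package ('EMD-signal'), not part of the paper

# Toy amplitude- and frequency-modulated test signal.
t = np.linspace(0, 1, 2048)
signal = (1 + 0.5 * np.cos(2 * np.pi * 5 * t)) * np.cos(2 * np.pi * 40 * t)

# First layer: empirical mode decomposition into intrinsic mode functions (IMFs).
first_layer_imfs = EMD().emd(signal, t)

# Second layer: decompose the amplitude envelope of each IMF again, which is
# where the multiplicative (amplitude-modulation) structure is captured.
second_layer = []
for imf in first_layer_imfs:
    envelope = np.abs(hilbert(imf))          # instantaneous amplitude via Hilbert transform
    second_layer.append(EMD().emd(envelope, t))
```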

Journal ArticleDOI
TL;DR: The working hypothesis is that this may be a broadly applicable rule: behavioural and social systems are non-contextual, i.e. all ‘contextual effects’ in them result from the ubiquitous dependence of response distributions on the elements of contexts other than the ones to which the response is presumably or normatively directed.
Abstract: Most behavioural and social experiments aimed at revealing contextuality are confined to cyclic systems with binary outcomes. In quantum physics, this broad class of systems includes as special cases Klyachko–Can–Binicioglu–Shumovsky-type, Einstein–Podolsky–Rosen–Bell-type and Suppes–Zanotti–Leggett–Garg-type systems. The theory of contextuality known as contextuality-by-default allows one to define and measure contextuality in all such systems, even if there are context-dependent errors in measurements, or if something in the contexts directly interacts with the measurements. This makes the theory especially suitable for behavioural and social systems, where direct interactions of ‘everything with everything’ are ubiquitous. For cyclic systems with binary outcomes, the theory provides necessary and sufficient conditions for non-contextuality, and these conditions are known to be breached in certain quantum systems. We review several behavioural and social datasets (from polls of public opinion to visual illusions to conjoint choices to word combinations to psychophysical matching), and none of these data provides any evidence for contextuality. Our working hypothesis is that this may be a broadly applicable rule: behavioural and social systems are non-contextual, i.e. all ‘contextual effects’ in them result from the ubiquitous dependence of response distributions on the elements of contexts other than the ones to which the response is presumably or normatively directed.

Journal ArticleDOI
TL;DR: This paper advocates a participative, reflexive management of data practices that has the potential to improve not only the ethical oversight for data science initiatives, but also the quality and reliability of research outputs.
Abstract: The distributed and global nature of data science creates challenges for evaluating the quality, import and potential impact of the data and knowledge claims being produced. This has significant consequences for the management and oversight of responsibilities and accountabilities in data science. In particular, it makes it difficult to determine who is responsible for what output, and how such responsibilities relate to each other; what 'participation' means and which accountabilities it involves, with regard to data ownership, donation and sharing as well as data analysis, re-use and authorship; and whether the trust placed on automated tools for data mining and interpretation is warranted (especially as data processing strategies and tools are often developed separately from the situations of data use where ethical concerns typically emerge). To address these challenges, this paper advocates a participative, reflexive management of data practices. Regulatory structures should encourage data scientists to examine the historical lineages and ethical implications of their work at regular intervals. They should also foster awareness of the multitude of skills and perspectives involved in data science, highlighting how each perspective is partial and in need of confrontation with others. This approach has the potential to improve not only the ethical oversight for data science initiatives, but also the quality and reliability of research outputs. This article is part of the themed issue 'The ethical impact of data science'.

Journal ArticleDOI
TL;DR: To effectively reduce drag in turbulent flow, an SHS should have: preferentially streamwise-aligned features to enhance favourable slip, a capillary resistance of the order of megapascals, and a roughness no larger than 0.5, when non-dimensionalized by the viscous length scale.
Abstract: In this review, we discuss how superhydrophobic surfaces (SHSs) can provide friction drag reduction in turbulent flow. Whereas biomimetic SHSs are known to reduce drag in laminar flow, turbulence adds many new challenges. We first provide an overview on designing SHSs, and how these surfaces can cause slip in the laminar regime. We then discuss recent studies evaluating drag on SHSs in turbulent flow, both computationally and experimentally. The effects of streamwise and spanwise slip for canonical, structured surfaces are well characterized by direct numerical simulations, and several experimental studies have validated these results. However, the complex and hierarchical textures of scalable SHSs that can be applied over large areas generate additional complications. Many studies on such surfaces have measured no drag reduction, or even a drag increase in turbulent flow. We discuss how surface wettability, roughness effects and some newly found scaling laws can help explain these varied results. Overall, we discuss how, to effectively reduce drag in turbulent flow, an SHS should have: preferentially streamwise-aligned features to enhance favourable slip, a capillary resistance of the order of megapascals, and a roughness no larger than 0.5, when non-dimensionalized by the viscous length scale. This article is part of the themed issue 'Bioinspired hierarchically structured surfaces for green science'.

Journal ArticleDOI
TL;DR: The large range of uses of these biobased rod-like nanoparticles is demonstrated, extending their potential use to highly sophisticated formulations.
Abstract: Cellulose nanocrystals (CNCs) are negatively charged colloidal particles well known to form highly stable surfactant-free Pickering emulsions. These particles can vary in surface charge density depending on their preparation by acid hydrolysis or applying post-treatments. CNCs with three different surface charge densities were prepared, corresponding to 0.08, 0.16 and 0.64 e nm⁻², respectively. Post-treatment might also increase the surface charge density. The well-known TEMPO-mediated oxidation substitutes C6-hydroxyl groups by C6-carboxyl groups on the surface. We report that these different modified CNCs lead to stable oil-in-water emulsions. TEMPO-oxidized CNCs might be the basis of further modifications. It is shown that they can, for example, lead to hydrophobic CNCs with a simple method using quaternary ammonium salts that allow producing inverse water-in-oil emulsions. Different from CNC modification before emulsification, modification can be carried out on the droplets after emulsification. This approach allows the preparation of functional capsules according to the layer-by-layer process. As a result, the large range of uses of these biobased rod-like nanoparticles is demonstrated here, extending their potential use to highly sophisticated formulations. This article is part of the themed issue ‘Soft interfacial materials: from fundamentals to formulation’.