
Showing papers in "Philosophical Transactions of the Royal Society A in 2009"


Journal ArticleDOI
TL;DR: This paper provides a broad overview of the various objects and processes of interest found in nature and applications under development or available in the marketplace.
Abstract: Nature has developed materials, objects and processes that function from the macroscale to the nanoscale. These have gone through evolution over 3.8Gyr. The emerging field of biomimetics allows one...

1,087 citations


Journal ArticleDOI
TL;DR: In this paper, silicon surfaces patterned with pillars and deposited with a hydrophobic coating were studied to demonstrate how pitch value, droplet size and impact velocity influence the transition from a composite state to a wetted state.
Abstract: Superhydrophobic surfaces exhibit extreme water-repellent properties. These surfaces with high contact angle and low contact angle hysteresis also exhibit a self-cleaning effect and low drag for fluid flow. Certain plant leaves, such as lotus leaves, are known to be superhydrophobic and self-cleaning due to the hierarchical roughness of their leaf surfaces. The self-cleaning phenomenon is widely known as the ‘lotus effect’. Superhydrophobic and self-cleaning surfaces can be produced by using roughness combined with hydrophobic coatings. In this paper, the effect of micro- and nanopatterned polymers on hydrophobicity is reviewed. Silicon surfaces patterned with pillars and deposited with a hydrophobic coating were studied to demonstrate how the effects of pitch value, droplet size and impact velocity influence the transition from a composite state to a wetted state. In order to fabricate hierarchical structures, a low-cost and flexible technique that involves replication of microstructures and self-assembly of hydrophobic waxes is described. The influence of micro-, nano- and hierarchical structures on superhydrophobicity is discussed by the investigation of static contact angle, contact angle hysteresis, droplet evaporation and propensity for air pocket formation. In addition, their influence on adhesive force as well as efficiency of self-cleaning is discussed.
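The composite-to-wetted transition discussed in this abstract is conventionally analysed with the classical Wenzel and Cassie–Baxter relations. As a rough numerical illustration (this is not the authors' code, and the flat-surface angle, roughness factor and solid fraction below are invented values), the two predicted apparent contact angles can be computed directly:

```python
import math

def wenzel_angle(theta_flat_deg, roughness):
    """Wenzel (fully wetted) state: cos(theta_W) = r * cos(theta_flat)."""
    c = roughness * math.cos(math.radians(theta_flat_deg))
    c = max(-1.0, min(1.0, c))  # clamp: beyond this the relation saturates
    return math.degrees(math.acos(c))

def cassie_baxter_angle(theta_flat_deg, solid_fraction):
    """Cassie-Baxter (composite, air-pocket) state:
    cos(theta_CB) = f_s * (cos(theta_flat) + 1) - 1,
    with f_s the fractional solid-liquid contact area."""
    c = solid_fraction * (math.cos(math.radians(theta_flat_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(c))

# Hypothetical values: flat hydrophobic coating at 110 degrees,
# pillar roughness r = 2, solid fraction f_s = 0.1
flat = 110.0
print(round(wenzel_angle(flat, 2.0), 1))        # wetted state
print(round(cassie_baxter_angle(flat, 0.1), 1))  # composite state
```

For these illustrative numbers the composite state gives the larger apparent angle, which is the regime the pillar patterning aims to stabilize.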

722 citations


Journal ArticleDOI
TL;DR: The structural basics of superhydrophobic and superhydrophilic plant surfaces and their biological functions are introduced, and further types of plant surface structuring leading to superhydrophobicity and superhydrophilicity are presented.
Abstract: The diversity of plant surface structures, evolved over 460 million years, has led to a large variety of highly adapted functional structures. The plant cuticle provides structural and chemical modifications for surface wetting, ranging from superhydrophilic to superhydrophobic. In this paper, the structural basics of superhydrophobic and superhydrophilic plant surfaces and their biological functions are introduced. Wetting in plants is influenced by the sculptures of the cells and by the fine structure of the surfaces, such as folding of the cuticle, or by epicuticular waxes. Hierarchical structures in plant surfaces are shown and further types of plant surface structuring leading to superhydrophobicity and superhydrophilicity are presented. The existing and potential uses of superhydrophobic and superhydrophilic surfaces for self-cleaning, drag reduction during movement in water, capillary liquid transport and other biomimetic materials are shown.

654 citations


Journal ArticleDOI
TL;DR: In this article, a structural model for the left ventricular myocardium is proposed, based on the invariants associated with the three mutually orthogonal directions of the myocardium.
Abstract: In this paper, we first of all review the morphology and structure of the myocardium and discuss the main features of the mechanical response of passive myocardium tissue, which is an orthotropic material. Locally within the architecture of the myocardium three mutually orthogonal directions can be identified, forming planes with distinct material responses. We treat the left ventricular myocardium as a non-homogeneous, thick-walled, nonlinearly elastic and incompressible material and develop a general theoretical framework based on invariants associated with the three directions. Within this framework we review existing constitutive models and then develop a structurally based model that accounts for the muscle fibre direction and the myocyte sheet structure. The model is applied to simple shear and biaxial deformations and a specific form fitted to the existing (and somewhat limited) experimental data, emphasizing the orthotropy and the limitations of biaxial tests. The need for additional data is highlighted. A brief discussion of issues of convexity of the model and related matters concludes the paper.
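The invariant-based framework described above builds constitutive laws from quantities such as I1 = tr C and the squared fibre stretch I4f = f0 . C f0, where C is the right Cauchy-Green tensor. A minimal sketch (the shear amount and the fibre/sheet axes are illustrative choices of ours, not values from the paper) evaluates these invariants for a simple shear deformation:

```python
# Kinematic invariants used by invariant-based myocardium models, evaluated
# for a simple shear F = I + gamma * (outer product of f0 and s0).
# Pure-Python 3x3 linear algebra; gamma and the unit vectors are illustrative.

def outer(a, b):
    return [[a[i] * b[j] for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def deformation_gradient(gamma, f0, s0):
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    S = outer(f0, s0)
    return [[I[i][j] + gamma * S[i][j] for j in range(3)] for i in range(3)]

def invariants(F, f0):
    C = matmul(transpose(F), F)           # right Cauchy-Green tensor
    I1 = C[0][0] + C[1][1] + C[2][2]      # trace of C
    Cf = [sum(C[i][j] * f0[j] for j in range(3)) for i in range(3)]
    I4f = sum(f0[i] * Cf[i] for i in range(3))  # squared stretch along f0
    return I1, I4f

f0, s0 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]   # illustrative fibre/sheet axes
F = deformation_gradient(0.3, f0, s0)
I1, I4f = invariants(F, f0)
print(I1, I4f)   # for this shear mode, I1 = 3 + gamma^2
```

Note that in this particular shear mode the fibre direction itself is not stretched (I4f = 1), which hints at why different shear modes probe the orthotropic response differently.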

617 citations


Journal ArticleDOI
TL;DR: Methods from nonlinear dynamics have shown new insights into heart rate variability changes under various physiological and pathological conditions, providing additional prognostic information and complementing traditional time- and frequency-domain analyses.
Abstract: Methods from nonlinear dynamics (NLD) have shown new insights into heart rate (HR) variability changes under various physiological and pathological conditions, providing additional prognostic information and complementing traditional time- and frequency-domain analyses. In this review, some of the most prominent indices of nonlinear and fractal dynamics are summarized and their algorithmic implementations and applications in clinical trials are discussed. Several of those indices have been proven to be of diagnostic relevance or have contributed to risk stratification. In particular, techniques based on mono- and multifractal analyses and symbolic dynamics have been successfully applied to clinical studies. Further advances in HR variability analysis are expected through multidimensional and multivariate assessments. Today, the question is no longer whether methods from NLD should be applied, but rather which of the methods should be selected and under which basic and standardized conditions they should be applied.
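Symbolic dynamics, one of the index families mentioned, can be sketched as follows: quantize the RR-interval series into a small alphabet, form short overlapping words, and measure the Shannon entropy of the word distribution. The bin count, word length and test series below are arbitrary choices for illustration, not parameters from any clinical study:

```python
import math
from collections import Counter

def symbolic_word_entropy(rr_ms, n_symbols=4, word_len=3):
    """Toy symbolic-dynamics index: quantize an RR-interval series into
    n_symbols equal-width bins, then compute the Shannon entropy (bits)
    of the distribution of overlapping words of length word_len."""
    lo, hi = min(rr_ms), max(rr_ms)
    width = (hi - lo) / n_symbols or 1.0   # guard against a constant series
    symbols = [min(int((x - lo) / width), n_symbols - 1) for x in rr_ms]
    words = [tuple(symbols[i:i + word_len])
             for i in range(len(symbols) - word_len + 1)]
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A metronomic series carries no word-level information; a fluctuating one does.
steady = [800.0] * 300
varying = [800.0 + 50.0 * math.sin(0.7 * i) + 20.0 * math.sin(2.3 * i)
           for i in range(300)]
print(symbolic_word_entropy(steady), symbolic_word_entropy(varying))
```

The entropy of the word distribution is one of several statistics used on such symbol sequences; published schemes differ in quantization rule and word length.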

529 citations


Journal ArticleDOI
TL;DR: In this paper, the authors review connections between phase transitions in high-dimensional combinatorial geometry and phase transitions occurring in modern high-dimensional data analysis and signal processing, and show that the empirical phase transitions do not depend on the ensemble and agree extremely well with the asymptotic theory assuming Gaussianity.
Abstract: We review connections between phase transitions in high-dimensional combinatorial geometry and phase transitions occurring in modern high-dimensional data analysis and signal processing. In data analysis, such transitions arise as abrupt breakdown of linear model selection, robust data fitting or compressed sensing reconstructions, when the complexity of the model or the number of outliers increases beyond a threshold. In combinatorial geometry, these transitions appear as abrupt changes in the properties of face counts of convex polytopes when the dimensions are varied. The thresholds in these very different problems appear in the same critical locations after appropriate calibration of variables. These thresholds are important in each subject area: for linear modelling, they place hard limits on the degree to which the now ubiquitous high-throughput data analysis can be successful; for robustness, they place hard limits on the degree to which standard robust fitting methods can tolerate outliers before breaking down; for compressed sensing, they define the sharp boundary of the undersampling/sparsity trade-off curve in undersampling theorems. Existing derivations of phase transitions in combinatorial geometry assume that the underlying matrices have independent and identically distributed Gaussian elements. In applications, however, it often seems that Gaussianity is not required. We conducted an extensive computational experiment and formal inferential analysis to test the hypothesis that these phase transitions are universal across a range of underlying matrix ensembles. We ran millions of linear programs using random matrices spanning several matrix ensembles and problem sizes; visually, the empirical phase transitions do not depend on the ensemble, and they agree extremely well with the asymptotic theory assuming Gaussianity. Careful statistical analysis reveals discrepancies that can be explained as transient terms, decaying with problem size. 
The experimental results are thus consistent with an asymptotic large-n universality across matrix ensembles; finite-sample universality can be rejected.
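The linear programs referred to above are instances of basis pursuit, min ||x||_1 subject to Ax = b. For a toy two-row system the LP optimum lies at a basic solution, so it can be found by brute-force enumeration of column pairs; this illustrative solver and its tiny matrix are ours, not the authors' experimental code:

```python
from itertools import combinations

def basis_pursuit_2d(columns, b):
    """Minimise ||x||_1 subject to A x = b for a 2-row matrix A (columns
    given as (row0, row1) pairs), by brute force over basic solutions --
    an LP optimum lies at one of them. Exponential in general; purely
    illustrative of the optimisation problem being solved at scale."""
    n = len(columns)
    best, best_norm = None, float("inf")
    for i, j in combinations(range(n), 2):
        (a, c), (p, d) = columns[i], columns[j]  # 2x2 system [[a, p], [c, d]]
        det = a * d - p * c
        if abs(det) < 1e-12:
            continue                              # singular pair, skip
        xi = (b[0] * d - b[1] * p) / det          # Cramer's rule
        xj = (a * b[1] - c * b[0]) / det
        norm = abs(xi) + abs(xj)
        if norm < best_norm - 1e-12:
            best_norm = norm
            best = [0.0] * n
            best[i], best[j] = xi, xj
    return best, best_norm

# b is generated by a 1-sparse signal on the third column: x_true = (0, 0, 3, 0)
cols = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, -1.0)]
x, l1 = basis_pursuit_2d(cols, (3.0, 3.0))
print(x, l1)
```

Here l1 minimisation recovers the sparse generator exactly; the phase-transition question is for which undersampling/sparsity ratios this keeps happening as dimensions grow.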

369 citations


Journal ArticleDOI
TL;DR: Different mechanisms of organ movement in plants are reviewed; the design principles of such systems may be particularly useful for a biomimetic translation into active technical composites and moving devices.
Abstract: Plants have evolved a multitude of mechanisms to actuate organ movement. The osmotic influx and efflux of water in living cells can cause a rapid movement of organs in a predetermined direction. Even dead tissue can be actuated by a swelling or drying of the plant cell walls. The deformation of the organ is controlled at different levels of tissue hierarchy by geometrical constraints at the micrometre level (e.g. cell shape and size) and cell wall polymer composition at the nanoscale (e.g. cellulose fibril orientation). This paper reviews different mechanisms of organ movement in plants and highlights recent research in the field. Particular attention is paid to systems that are activated without any metabolism. The design principles of such systems may be particularly useful for a biomimetic translation into active technical composites and moving devices.

302 citations


Journal ArticleDOI
TL;DR: Although pre-ozonation may increase the formation of trihaloacetaldehydes or selected HNMs during post-chlorination or chloramination, biofiltration may reduce the formation potential of these by-products; optimization of the various treatment processes and disinfection scenarios can allow plants to control the formation of regulated and emerging DBPs to varying degrees.
Abstract: When drinking water treatment plants disinfect water, a wide range of disinfection by-products (DBPs) of health and regulatory concern are formed. Recent studies have identified emerging DBPs (e.g. iodinated trihalomethanes (THMs) and acids, haloacetonitriles, halonitromethanes (HNMs), haloacetaldehydes, nitrosamines) that may be more toxic than some of the regulated ones (e.g. chlorine- and bromine-containing THMs and haloacetic acids). Some of these emerging DBPs are associated with impaired drinking water supplies (e.g. impacted by treated wastewater, algae, iodide). In some cases, alternative primary or secondary disinfectants to chlorine (e.g. chloramines, chlorine dioxide, ozone, ultraviolet) that minimize the formation of some of the regulated DBPs may increase the formation of some of the emerging by-products. However, optimization of the various treatment processes and disinfection scenarios can allow plants to control to varying degrees the formation of regulated and emerging DBPs. For example, pre-disinfection with chlorine, chlorine dioxide or ozone can destroy precursors for N -nitrosodimethylamine, which is a chloramine by-product, whereas pre-oxidation with chlorine or ozone can oxidize iodide to iodate and minimize iodinated DBP formation during post-chloramination. Although pre-ozonation may increase the formation of trihaloacetaldehydes or selected HNMs during post-chlorination or chloramination, biofiltration may reduce the formation potential of these by-products.

265 citations


Journal ArticleDOI
TL;DR: It is shown that true yield stress materials indeed exist; in addition, shear banding, which is generically observed in yield stress fluids, is accounted for.
Abstract: We propose a new view on yield stress materials. Dense suspensions and many other materials have a yield stress—they flow only if a large enough shear stress is exerted on them. There has been an ongoing debate in the literature on whether true yield stress fluids exist, and even whether the concept is useful. This is mainly due to the experimental difficulties in determining the yield stress. We show that most if not all of these difficulties disappear when a clear distinction is made between two types of yield stress fluids: thixotropic and simple ones. For the former, adequate experimental protocols need to be employed that take into account the time evolution of these materials: ageing and shear rejuvenation. This solves the problem of experimental determination of the yield stress. Also, we show that true yield stress materials indeed exist, and in addition, we account for shear banding that is generically observed in yield stress fluids.

259 citations


Journal ArticleDOI
TL;DR: This overview introduces the difficulties that arise with high-dimensional data in the context of the very familiar linear statistical model and gives a taste of what can nevertheless be achieved when the parameter vector of interest is sparse, that is, contains many zero elements.
Abstract: Modern applications of statistical theory and methods can involve extremely large datasets, often with huge numbers of measurements on each of a comparatively small number of experimental units. New methodology and accompanying theory have emerged in response: the goal of this Theme Issue is to illustrate a number of these recent developments. This overview article introduces the difficulties that arise with high-dimensional data in the context of the very familiar linear statistical model: we give a taste of what can nevertheless be achieved when the parameter vector of interest is sparse, that is, contains many zero elements. We describe other ways of identifying low-dimensional subspaces of the data space that contain all useful information. The topic of classification is then reviewed along with the problem of identifying, from within a very large set, the variables that help to classify observations. Brief mention is made of the visualization of high-dimensional data and ways to handle computational problems in Bayesian analysis are described. At appropriate points, reference is made to the other papers in the issue.
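One concrete instance of what "can nevertheless be achieved" under sparsity: in the special case of an orthonormal design, the lasso estimate reduces to coordinatewise soft thresholding of the ordinary-least-squares coefficients, which sets small coefficients exactly to zero. The coefficients and threshold below are hypothetical:

```python
def soft_threshold(z, lam):
    """Soft-thresholding operator S(z, lam) = sign(z) * max(|z| - lam, 0).
    With an orthonormal design matrix, applying S to each OLS coefficient
    gives the lasso solution with penalty lam."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Hypothetical OLS coefficients: two strong effects among noise-level ones.
ols = [2.5, -0.3, 0.1, -1.8, 0.2]
lam = 0.5
sparse = [soft_threshold(b, lam) for b in ols]
print(sparse)   # small coefficients are set exactly to zero
```

The exact-zero behaviour is what makes such estimators perform simultaneous estimation and variable selection, the theme several papers in the issue develop.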

257 citations


Journal ArticleDOI
TL;DR: The chemical quality of sludge is continually improving and concentrations of potentially harmful and persistent organic compounds have declined to background values, so recycling sewage sludge on farmland is not constrained by concentrations of OCs found in contemporary sewage sludges.
Abstract: Organic chemicals discharged in urban wastewater from industrial and domestic sources, or those entering through atmospheric deposition onto paved areas via surface run-off, are predominantly lipophilic in nature and therefore become concentrated in sewage sludge, with potential implications for the agricultural use of sludge as a soil improver. Biodegradation occurs to varying degrees during wastewater and sludge treatment processes. However, residues will probably still be present in the resulting sludge and can vary from trace values of several micrograms per kilogram up to approximately 1 per cent in the dry solids for certain bulk chemicals, such as linear alkylbenzene sulphonate, which is widely used as a surfactant in detergent formulations. However, the review of the scientific literature on the potential environmental and health impacts of organic contaminants (OCs) in sludge indicates that the presence of a compound in sludge, or of seemingly large amounts of certain compounds used in bulk volumes domestically and by industry, does not necessarily constitute a hazard when the material is recycled to farmland. Furthermore, the chemical quality of sludge is continually improving and concentrations of potentially harmful and persistent organic compounds have declined to background values. Thus, recycling sewage sludge on farmland is not constrained by concentrations of OCs found in contemporary sewage sludges. A number of issues, while unlikely to be significant for agricultural utilization, require further investigation and include: (i) the impacts of chlorinated paraffins on the food chain and human health, (ii) the risk assessment of the plasticizer di(2-ethylhexyl)phthalate, a bulk chemical present in large amounts in sludge, (iii) the microbiological risk assessment of antibiotic-resistant micro-organisms in sewage sludge and sludge-amended agricultural soil, and (iv) the potential significance of personal-care products (e.g. triclosan), pharmaceuticals and endocrine-disrupting compounds in sludge on soil quality and human health.

Journal ArticleDOI
TL;DR: It is found that increasing the polydispersity at a given concentration slows down crystal nucleation, and reduces the supersaturation since it tends to stabilize the fluid but to destabilize the crystal.
Abstract: Motivated by old experiments on colloidal suspensions, we report molecular dynamics simulations of assemblies of hard spheres, addressing crystallization and glass formation. The simulations cover wide ranges of polydispersity s (standard deviation of the particle size distribution divided by its mean) and particle concentration. No crystallization is observed for s>0.07. For 0.02

Journal ArticleDOI
TL;DR: The micro-architecture of nacre has been classically illustrated as a ‘brick-and-mortar’ arrangement, but it is clear now that hierarchical organization and other structural features play an important role in the amazing mechanical properties of this natural nanocomposite.
Abstract: The micro-architecture of nacre (mother of pearl) has been classically illustrated as a 'brick-and-mortar' arrangement. It is clear now that hierarchical organization and other structural features play an important role in the amazing mechanical properties of this natural nanocomposite. The most important structural characteristics and mechanical properties of nacre are presented as a basis that has inspired scientists and engineers to develop biomimetic strategies that could be useful in areas such as materials science, biomaterials development and nanotechnology. A strong emphasis is placed on the latest advances in the synthetic design and production of nacre-inspired materials and coatings, in particular those to be used in biomedical applications.

Journal ArticleDOI
Serguei Semenov1
TL;DR: This paper presents a review of research results obtained by the author and his colleagues and focuses on various potential clinical applications of MWT.
Abstract: Microwave tomography (MWT) is an emerging biomedical imaging modality with great potential for non-invasive assessment of functional and pathological conditions of soft tissues. This paper presents a review of research results obtained by the author and his colleagues and focuses on various potential clinical applications of MWT. Most clinical applications of MWT imaging pose complicated, nonlinear, high-dielectric-contrast inverse problems of three-dimensional diffraction tomography. There is a very high dielectric contrast between bones and fatty areas compared with soft tissues. In most cases, the contrast between soft-tissue abnormalities (the target imaging areas) and normal soft tissue is less pronounced than between bone (fat) and soft tissue. This additionally complicates the imaging problem. In spite of the difficulties mentioned, it has been demonstrated that MWT is applicable for extremities imaging, breast cancer detection, diagnostics of lung cancer, brain imaging and cardiac imaging.

Journal ArticleDOI
TL;DR: Dynamic causal modelling (DCM) indicated the highest evidence for a system architecture featuring the insula in a serial position between BA 44 and two parallel nodes (cerebellum/basal ganglia), from which information converges onto the PMC and finally M1.
Abstract: The aim of this study was to provide a computational system model of effective connectivity in the human brain underlying overt speech production. Meta-analysis of neuroimaging studies and functional magnetic resonance imaging data acquired during a verbal fluency task revealed a core network consisting of Brodmann's area (BA) 44 in Broca's region, anterior insula, basal ganglia, cerebellum, premotor cortex (PMC, BA 6) and primary motor cortex (M1, areas 4a/4p). Dynamic causal modelling (DCM) indicated the highest evidence for a system architecture featuring the insula in a serial position between BA 44 and two parallel nodes (cerebellum/basal ganglia), from which information converges onto the PMC and finally M1. Parameter inference revealed that effective connectivity from the insular relay into the cerebellum/basal ganglia is primarily task driven (preparation) while the output into the cortical motor system strongly depends on the actual word production rate (execution). DCM hence allowed not only a quantitative characterization of the human speech production network, but also the distinction of a preparatory and an executive subsystem within it. The proposed model of physiological integration during speech production may now serve as a reference for investigations into the neurobiology of pathological states such as dysarthria and apraxia of speech.

Journal ArticleDOI
TL;DR: The application of model-based image reconstruction is reviewed, together with a numerical modelling approach to light propagation in tissue as well as generalized image reconstruction using boundary data, whereby the use of spectral and dual-modality systems can improve contrast and spatial resolution.
Abstract: The development of diffuse optical tomography as a functional imaging modality has relied largely on the use of model-based image reconstruction. The recovery of optical parameters from boundary measurements of light propagation within tissue is inherently a difficult one, because the problem is nonlinear, ill-posed and ill-conditioned. Additionally, although the measured near-infrared signals of light transmission through tissue provide high imaging contrast, the reconstructed images suffer from poor spatial resolution due to the diffuse propagation of light in biological tissue. The application of model-based image reconstruction is reviewed in this paper, together with a numerical modelling approach to light propagation in tissue as well as generalized image reconstruction using boundary data. A comprehensive review and details of the basis for using spatial and structural prior information are also discussed, whereby the use of spectral and dual-modality systems can improve contrast and spatial resolution.

Journal ArticleDOI
TL;DR: The results show that EEG and MEG background activities in AD patients are less complex and more regular than in healthy control subjects, which suggests that nonlinear analysis techniques could be useful in AD diagnosis.
Abstract: The aim of the present study is to show the usefulness of nonlinear methods to analyse the electroencephalogram (EEG) and magnetoencephalogram (MEG) in patients with Alzheimer's disease (AD). The following nonlinear methods have been applied to study the EEG and MEG background activity in AD patients and control subjects: approximate entropy, sample entropy, multiscale entropy, auto-mutual information and Lempel-Ziv complexity. We discuss why these nonlinear methods are appropriate to analyse the EEG and MEG. Furthermore, the performance of all these methods has been compared when applied to the same databases of EEG and MEG recordings. Our results show that EEG and MEG background activities in AD patients are less complex and more regular than in healthy control subjects. In line with previous studies, our work suggests that nonlinear analysis techniques could be useful in AD diagnosis.
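Of the listed indices, Lempel–Ziv complexity is perhaps the simplest to sketch: binarize the signal around its median and count the phrases in a left-to-right parse. Published EEG/MEG studies typically use the LZ76 parse with a normalization term; the toy version below uses an LZ78-style parse, which conveys the same idea of "fewer novel patterns means more regular activity":

```python
def lz_complexity(signal):
    """Toy Lempel-Ziv complexity: binarize a signal around its median and
    count phrases in an LZ78-style left-to-right parse. Real studies usually
    use the LZ76 parse and normalise by n / log2(n); this keeps the idea."""
    srt = sorted(signal)
    median = srt[len(srt) // 2]
    s = "".join("1" if x > median else "0" for x in signal)
    seen, phrase, count = set(), "", 0
    for ch in s:
        phrase += ch
        if phrase not in seen:   # a novel phrase ends here
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

# A regular (periodic) signal parses into few phrases; an irregular one into many.
regular = [0.0, 1.0] * 500
x, irregular = 0.4, []
for _ in range(1000):            # chaotic logistic map as an "irregular" signal
    x = 3.99 * x * (1.0 - x)
    irregular.append(x)
print(lz_complexity(regular), lz_complexity(irregular))
```

Lower counts for the regular signal mirror the paper's finding that AD background activity, being more regular, scores lower on complexity measures than that of healthy controls.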

Journal ArticleDOI
TL;DR: The state of the art of optical imaging is reviewed; the two main clinical applications (functional brain imaging and imaging for breast cancer) are covered in some detail, followed by a discussion of other issues such as imaging small animals and multimodality imaging.
Abstract: Diffuse optical imaging is a medical imaging technique that is beginning to move from the laboratory to the hospital. It is a natural extension of near-infrared spectroscopy (NIRS), which is now used in certain niche applications clinically and particularly for physiological and psychological research. Optical imaging uses sophisticated image reconstruction techniques to generate images from multiple NIRS measurements. The two main clinical applications (functional brain imaging and imaging for breast cancer) are reviewed in some detail, followed by a discussion of other issues such as imaging small animals and multimodality imaging. We aim to review the state of the art of optical imaging.

Journal ArticleDOI
TL;DR: A non-local theory is proposed to model dense granular flows that is applicable from the quasi-static regime up to the inertial regime and shows that many of the experimental observations are predicted within the self-activated model.
Abstract: A non-local theory is proposed to model dense granular flows. The idea is to describe the rearrangements occurring when a granular material is sheared as a self-activated process. A rearrangement at one position is triggered by the stress fluctuations induced by rearrangements elsewhere in the material. Within this framework, the constitutive law, which gives the relation between the shear rate and the stress distribution, is written as an integral over the entire flow. Taking into account the finite time of local rearrangements, the model is applicable from the quasi-static regime up to the inertial regime. We have checked the prediction of the model in two different configurations, namely granular flows down inclined planes and plane shear under gravity, and we show that many of the experimental observations are predicted within the self-activated model.

Journal ArticleDOI
TL;DR: Recent observational studies reviewed in this article suggest that some indices describing nonlinear heart rate dynamics, such as fractal scaling exponents, heart rate turbulence and deceleration capacity, may provide useful prognostic information in various clinical settings, and their reproducibility may be better than that of traditional indices.
Abstract: Heart rate variability (HRV) has been conventionally analysed with time- and frequency-domain methods, which measure the overall magnitude of RR interval fluctuations around its mean value or the magnitude of fluctuations in some predetermined frequencies. Analysis of heart rate dynamics by novel methods, such as heart rate turbulence after ventricular premature beats, deceleration capacity of heart rate and methods based on chaos theory and nonlinear system theory, have gained recent interest. Recent observational studies have suggested that some indices describing nonlinear heart rate dynamics, such as fractal scaling exponents, heart rate turbulence and deceleration capacity, may provide useful prognostic information in various clinical settings and their reproducibility may be better than that of traditional indices. For example, the short-term fractal scaling exponent measured by the detrended fluctuation analysis method has been shown to predict fatal cardiovascular events in various populations. Similarly, heart rate turbulence and deceleration capacity have performed better than traditional HRV measures in predicting mortality in post-infarction patients. Approximate entropy, a nonlinear index of heart rate dynamics, which describes the complexity of RR interval behaviour, has provided information on the vulnerability to atrial fibrillation. There are many other nonlinear indices which also give information on the characteristics of heart rate dynamics, but their clinical usefulness is not as well established. 
Although the concepts of nonlinear dynamics, fractal mathematics and complexity measures of heart rate behaviour, heart rate turbulence and deceleration capacity in relation to cardiovascular physiology or various cardiovascular events are still far from clinical medicine, they are a fruitful area of research that can expand our knowledge of the behaviour of cardiovascular oscillations in normal healthy conditions as well as in disease states.
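The short-term fractal scaling exponent mentioned above comes from detrended fluctuation analysis (DFA). A compact sketch follows; the box sizes and the white-noise test signal are our choices, and clinical implementations differ in detrending order and box ranges:

```python
import math
import random

def _linear_detrend_ssq(box):
    """Sum of squared residuals of box values around their least-squares line."""
    n = len(box)
    tm = (n - 1) / 2.0
    bm = sum(box) / n
    var_t = sum((t - tm) ** 2 for t in range(n))
    slope = sum((t - tm) * (b - bm) for t, b in enumerate(box)) / var_t
    return sum((b - (bm + slope * (t - tm))) ** 2 for t, b in enumerate(box))

def dfa_alpha(series, box_sizes):
    """Detrended fluctuation analysis: integrate the mean-removed series,
    detrend linearly inside non-overlapping boxes of each size n, and return
    the slope of log F(n) versus log n, i.e. the scaling exponent alpha."""
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for x in series:
        acc += x - mean
        profile.append(acc)                 # integrated profile
    pts = []
    for n in box_sizes:
        ssq, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            ssq += _linear_detrend_ssq(profile[start:start + n])
            count += n
        pts.append((math.log(n), 0.5 * math.log(ssq / count)))  # log F(n)
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    return sum((x - mx) * (y - my) for x, y in pts) / \
        sum((x - mx) ** 2 for x, y in pts)

random.seed(7)
white = [random.gauss(0.0, 1.0) for _ in range(4000)]
alpha = dfa_alpha(white, [4, 8, 16, 32, 64])
print(round(alpha, 2))   # close to 0.5 for uncorrelated noise
```

Uncorrelated noise gives alpha near 0.5, 1/f-like dynamics near 1; the prognostic index discussed in the abstract is the short-term exponent estimated over small box sizes of RR-interval series.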

Journal ArticleDOI
TL;DR: The design and implementation of the Sector storage cloud and the Sphere compute cloud are described and some experimental studies comparing Sector/Sphere and Hadoop using the Terasort benchmark are described.
Abstract: Cloud computing has demonstrated that processing very large datasets over commodity clusters can be done simply, given the right programming model and infrastructure. In this paper, we describe the design and implementation of the Sector storage cloud and the Sphere compute cloud. In contrast to existing storage and compute clouds, Sector can manage data not only within a data centre, but also across geographically distributed data centres. Similarly, the Sphere compute cloud supports user-defined functions (UDFs) over data both within and across data centres. As a special case, MapReduce-style programming can be implemented in Sphere by using a Map UDF followed by a Reduce UDF. We describe some experimental studies comparing Sector/Sphere and Hadoop using the Terasort benchmark. In these studies, Sector is approximately twice as fast as Hadoop. Sector/Sphere is open source.
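The Map-UDF-followed-by-Reduce-UDF special case can be sketched generically. This is only the programming pattern, not Sphere's actual API or Sector's distributed storage; in the real system the records, the shuffle and the UDF execution are spread across data centres:

```python
from collections import defaultdict

def run_map_reduce(records, map_udf, reduce_udf):
    """Generic MapReduce as two user-defined functions: map_udf turns a record
    into (key, value) pairs; values are grouped by key (the 'shuffle'); then
    reduce_udf folds each group into a result."""
    groups = defaultdict(list)
    for record in records:
        for key, value in map_udf(record):
            groups[key].append(value)
    return {key: reduce_udf(key, values) for key, values in groups.items()}

# Word count, the classic example
lines = ["the quick fox", "the lazy dog", "the fox"]
counts = run_map_reduce(
    lines,
    map_udf=lambda line: [(w, 1) for w in line.split()],
    reduce_udf=lambda key, values: sum(values),
)
print(counts)
```

Sphere's generalization is that a UDF need not fit this two-stage shape at all; MapReduce falls out as the special case of one map-like UDF chained to one reduce-like UDF.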

Journal ArticleDOI
TL;DR: This paper presents methods to build histo-anatomically detailed individualized cardiac models based on high-resolution three-dimensional anatomical and/or diffusion tensor magnetic resonance images, combined with serial histological sectioning data, which are used to investigate individualized heart function.
Abstract: This paper presents methods to build histo-anatomically detailed individualized cardiac models. The models are based on high-resolution three-dimensional anatomical and/or diffusion tensor magnetic resonance images, combined with serial histological sectioning data, and are used to investigate individualized cardiac function. The current state of the art is reviewed, and its limitations are discussed. We assess the challenges associated with the generation of histo-anatomically representative individualized in silico models of the heart. The entire processing pipeline including image acquisition, image processing, mesh generation, model set-up and execution of computer simulations, and the underlying methods are described. The multifaceted challenges associated with these goals are highlighted, suitable solutions are proposed, and an important application of developed high-resolution structure-function models in elucidating the effect of individual structural heterogeneity upon wavefront dynamics is demonstrated.

Journal ArticleDOI
TL;DR: The influence of time delay in systems of two coupled excitable neurons is studied in the framework of the FitzHugh–Nagumo model; the stochastic synchronization of instantaneously coupled neurons under the influence of white noise can be deliberately controlled by an appropriate choice of the delay time.
Abstract: The influence of time delay in systems of two coupled excitable neurons is studied in the framework of the FitzHugh–Nagumo model. A time delay can occur in the coupling between neurons or in a self-feedback loop. The stochastic synchronization of instantaneously coupled neurons under the influence of white noise can be deliberately controlled by local time-delayed feedback. By appropriate choice of the delay time, synchronization can be either enhanced or suppressed. In delay-coupled neurons, antiphase oscillations can be induced for sufficiently large delay and coupling strength. The additional application of time-delayed self-feedback leads to complex scenarios of synchronized in-phase or antiphase oscillations, bursting patterns or amplitude death.
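The delay-coupled setup can be sketched with an explicit-Euler integration that keeps a history buffer of length tau/dt for each neuron. All parameter values below are illustrative, not taken from the paper; whether sustained in-phase or antiphase oscillations emerge depends on the delay and coupling strength, as the abstract notes:

```python
def simulate_fhn_pair(tau=3.0, coupling=0.2, dt=0.005, steps=20000,
                      eps=0.05, a=1.05):
    """Explicit-Euler sketch of two delay-coupled FitzHugh-Nagumo neurons:
        eps * du_i/dt = u_i - u_i**3 / 3 - v_i + C * (u_j(t - tau) - u_i(t))
              dv_i/dt = u_i + a
    For a > 1 the uncoupled rest state is stable (excitable regime).
    All parameter values are illustrative, not taken from the paper."""
    delay = int(round(tau / dt))
    rest_u, rest_v = -a, -a + a ** 3 / 3.0
    # history buffers initialised at rest; neuron 1 gets a perturbation
    u = [[rest_u + 0.7] * (delay + 1), [rest_u] * (delay + 1)]
    v = [rest_v, rest_v]
    trace = []
    for _ in range(steps):
        u1, u2 = u[0][-1], u[1][-1]
        u1_del, u2_del = u[0][0], u[1][0]     # delayed values u_j(t - tau)
        du1 = (u1 - u1 ** 3 / 3.0 - v[0] + coupling * (u2_del - u1)) / eps
        du2 = (u2 - u2 ** 3 / 3.0 - v[1] + coupling * (u1_del - u2)) / eps
        v[0] += dt * (u1 + a)
        v[1] += dt * (u2 + a)
        u[0].append(u1 + dt * du1); u[0].pop(0)
        u[1].append(u2 + dt * du2); u[1].pop(0)
        trace.append((u[0][-1], u[1][-1]))
    return trace

trace = simulate_fhn_pair()
peak = max(max(abs(x), abs(y)) for x, y in trace)
print(len(trace), round(peak, 2))
```

A proper study would add the white-noise terms and scan tau and C to map out the enhancement/suppression regimes described in the abstract; the sketch only fixes the integration scheme with its delay buffer.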

Journal ArticleDOI
TL;DR: The data regarding the levels of pharmaceuticals and illicit drugs detected in wastewaters are reviewed and an overview is given of their removal by conventional treatment technologies as well as advanced treatments such as membrane bioreactors.
Abstract: Pharmaceutically active compounds (PhACs) and drugs of abuse (DAs) are two important groups of emerging environmental contaminants that have raised increasing interest in the scientific community. A number of studies have revealed their presence in the environment, mainly because some compounds are not efficiently removed during wastewater treatment and are therefore able to reach surface water and groundwater and, subsequently, drinking water. This paper reviews the data regarding the levels of pharmaceuticals and illicit drugs detected in wastewaters and gives an overview of their removal by conventional treatment technologies (applying activated sludge) as well as advanced treatments such as membrane bioreactors. The paper also gives an overview of bank filtration practices at managed aquifer recharge sites and discusses the potential of this approach to mitigate the contamination by PhACs and DAs.

Journal ArticleDOI
TL;DR: It is indicated that continental ice volume varied significantly during the Mid-Pliocene warm period and that at times there were considerable reductions of Antarctic ice.
Abstract: Ostracode magnesium/calcium (Mg/Ca)-based bottom-water temperatures were combined with benthic foraminiferal oxygen isotopes in order to quantify the oxygen isotopic composition of seawater, and estimate continental ice volume and sea-level variability during the Mid-Pliocene warm period, ca 3.3–3.0 Ma. Results indicate that, following a low stand of approximately 65 m below present at marine isotope stage (MIS) M2 (ca 3.3 Ma), sea level generally fluctuated by 20–30 m above and below a mean value similar to present-day sea level. In addition to the low-stand event at MIS M2, significant low stands occurred at MIS KM2 (−40 m), G22 (−40 m) and G16 (−60 m). Six high stands of +10 m or more above present day were also observed: four events (+10, +25, +15 and +30 m) from MIS M1 to KM3, a high stand of +15 m at MIS K1, and a high stand of +25 m at MIS G17. These results indicate that continental ice volume varied significantly during the Mid-Pliocene warm period and that at times there were considerable reductions of Antarctic ice.

Journal ArticleDOI
TL;DR: The analytical study of the simplified large-scale time-delayed models of balancing provides a Newtonian insight into the functioning of these organs that may also serve as a basis to support theories and hypotheses on balancing and vision.
Abstract: Mechanical models of human self-balancing often use the Newtonian equations of inverted pendula. While these mathematical models are precise enough on the mechanical side, the ways humans balance themselves are still quite unexplored on the control side. Time delays in the sensory and motor neural pathways place essential limitations on the stabilization of the human body as a multiple inverted pendulum. The sensory systems supporting each other provide the necessary signals for these control tasks, but the more complicated the system, the larger the delay it introduces. Human ageing as well as our actual physical and mental state affects the time delays in the neural system, and the mechanical structure of the human body also changes over a large range during our lives. The human balancing organ, the labyrinth, and the visual system have essentially adapted to the relatively large time delays and parameter regions that occur during balancing. The analytical study of simplified large-scale time-delayed models of balancing provides a Newtonian insight into the functioning of these organs that may also serve as a basis to support theories and hypotheses on balancing and vision.
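The destabilizing role of reflex delay can be sketched with the simplest version of such a model: a linearized inverted pendulum stabilized by delayed proportional-derivative feedback. The gains and delay values below are illustrative assumptions (not taken from the paper); they show that the same controller that balances the pendulum with a short delay fails once the delay grows beyond a critical value.

```python
import numpy as np

def balance(tau, kp=30.0, kd=8.0, w2=9.81, dt=0.005, T=10.0):
    """Linearized inverted pendulum (length 1 m, so w2 = g/L) under
    delayed PD feedback:

        theta'' = w2*theta - kp*theta(t - tau) - kd*theta'(t - tau)

    Explicit Euler with the delayed state read from the stored history.
    Returns the angle trajectory theta(t) in radians.
    """
    n = int(T / dt)
    d = int(round(tau / dt))
    th = np.zeros(n + 1)
    om = np.zeros(n + 1)
    th[0] = 0.01                       # small initial tilt (rad)
    for k in range(n):
        j = max(0, k - d)              # delayed sensory sample
        acc = w2 * th[k] - kp * th[j] - kd * om[j]
        th[k + 1] = th[k] + dt * om[k]
        om[k + 1] = om[k] + dt * acc
    return th

th_fast = balance(tau=0.02)   # short reflex delay: tilt decays to zero
th_slow = balance(tau=0.60)   # long delay: oscillations grow, pendulum falls
```

The short-delay run decays to the upright position, while the long-delay run develops growing oscillations: for this pendulum no PD gains can stabilize it once the delay exceeds a bound of the order of the pendulum's falling time, which is the kind of limitation the abstract refers to.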

Journal ArticleDOI
TL;DR: There appears to be good epidemiological evidence for a relationship between exposure to DBPs, as measured by trihalomethanes (THMs), in drinking water and bladder cancer, but the evidence for other cancers including colorectal cancer is inconclusive and inconsistent.
Abstract: This paper summarizes the epidemiological evidence for adverse health effects associated with disinfection by-products (DBPs) in drinking water and describes the potential mechanism of action. There appears to be good epidemiological evidence for a relationship between exposure to DBPs, as measured by trihalomethanes (THMs), in drinking water and bladder cancer, but the evidence for other cancers including colorectal cancer is inconclusive and inconsistent. There appears to be some evidence for an association between exposure to DBPs, specifically THMs, and small-for-gestational-age/intrauterine growth retardation and, to a lesser extent, pre-term delivery, but evidence for relationships with other outcomes such as low birth weight, stillbirth, congenital anomalies and semen quality is inconclusive and inconsistent. Major limitations in exposure assessment, small sample sizes and potential biases may account for the inconclusive and inconsistent results in epidemiological studies. Moreover, most studies have focused on total THMs as the exposure metric, whereas other DBPs appear to be more toxic than the THMs, albeit generally occurring at lower levels in the water. The mechanisms through which DBPs may cause adverse health effects including cancer and adverse reproductive effects have not been well investigated. Several mechanisms have been suggested, including genotoxicity, oxidative stress, disruption of folate metabolism, disruption of the synthesis and/or secretion of placental syncytiotrophoblast-derived chorionic gonadotropin and lowering of testosterone levels, but further work is required in this area.

Journal ArticleDOI
TL;DR: The newly developed PF cell model adds a new member to the family of human cardiac cell models developed previously for the SA node, atrial and ventricular cells, which can be incorporated into an anatomical model of the human heart with details of its electrophysiological heterogeneity and anatomical complexity.
Abstract: Early development of ionic models for cardiac myocytes, from the pioneering modification of the Hodgkin–Huxley giant squid axon model by Noble to the iconic DiFrancesco–Noble model integrating voltage-gated ionic currents, ion pumps and exchangers, Ca²⁺ sequestration and Ca²⁺-induced Ca²⁺ release, provided a general description for a mammalian Purkinje fibre (PF) and the framework for modern cardiac models. In the past two decades, development has focused on tissue-specific models with an emphasis on the sino-atrial (SA) node, atria and ventricles, while the PFs have largely been neglected. However, achieving the ultimate goal of creating a virtual human heart will require detailed models of all distinctive regions of the cardiac conduction system, including the PFs, which play an important role in conducting cardiac excitation and ensuring the synchronized timing and sequencing of ventricular contraction. In this paper, we present details of our newly developed model for the human PF cell including validation against experimental data. Ionic mechanisms underlying the heterogeneity between the PF and ventricular action potentials in humans and other species are analysed. The newly developed PF cell model adds a new member to the family of human cardiac cell models developed previously for the SA node, atrial and ventricular cells, which can be incorporated into an anatomical model of the human heart with details of its electrophysiological heterogeneity and anatomical complexity.
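The Hodgkin–Huxley formalism that the abstract traces these cardiac models back to can be sketched in a few dozen lines. The snippet below uses the classic textbook squid-axon parameters (not the human PF model presented in the paper) to show the voltage-gated current structure, gating variables governed by voltage-dependent rate functions, that modern cardiac cell models generalize.

```python
import numpy as np

def hh_spike(i_amp=10.0, dt=0.01, T=50.0):
    """Hodgkin-Huxley squid-axon equations with standard textbook
    parameters (units: mV, ms, uA/cm^2, mS/cm^2; C_m = 1 uF/cm^2).
    A 1 ms current pulse at t = 5 ms elicits an action potential."""
    g_na, g_k, g_l = 120.0, 36.0, 0.3          # maximal conductances
    e_na, e_k, e_l = 50.0, -77.0, -54.4        # reversal potentials

    # Voltage-dependent rate functions (with the V -> singularity limits)
    def a_m(v):
        x = v + 40.0
        return 1.0 if abs(x) < 1e-6 else 0.1 * x / (1.0 - np.exp(-x / 10.0))
    def b_m(v): return 4.0 * np.exp(-(v + 65.0) / 18.0)
    def a_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
    def b_h(v): return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    def a_n(v):
        x = v + 55.0
        return 0.1 if abs(x) < 1e-6 else 0.01 * x / (1.0 - np.exp(-x / 10.0))
    def b_n(v): return 0.125 * np.exp(-(v + 65.0) / 80.0)

    v = -65.0                                  # resting potential
    m = a_m(v) / (a_m(v) + b_m(v))             # gates at steady state
    h = a_h(v) / (a_h(v) + b_h(v))
    n = a_n(v) / (a_n(v) + b_n(v))
    steps = int(T / dt)
    trace = np.empty(steps)
    for k in range(steps):
        t = k * dt
        i_stim = i_amp if 5.0 <= t < 6.0 else 0.0
        i_ion = (g_na * m**3 * h * (v - e_na)
                 + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_stim - i_ion)
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        trace[k] = v
    return trace

v_trace = hh_spike()
```

Cardiac models such as the PF model described here keep this same structure but add many more currents, pumps, exchangers and intracellular Ca²⁺ handling.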

Journal ArticleDOI
TL;DR: The results provide a new standard for ion channel modelling to further the automation of model development, the validation process and the predictive power of these models.
Abstract: Markov models (MMs) represent a generalization of Hodgkin–Huxley models. They provide a versatile structure for modelling single channel data, gating currents, state-dependent drug interaction data, exchanger and pump dynamics, etc. This paper uses examples from cardiac electrophysiology to discuss aspects related to parameter estimation. (i) Parameter unidentifiability (found in 9 out of 13 of the considered models) results in an inability to determine the correct layout of a model, contradicting the idea that model structure and parameters provide insights into underlying molecular processes. (ii) The information content of experimental voltage step clamp data is discussed, and a short but sufficient protocol for parameter estimation is presented. (iii) MMs have been associated with high computational cost (owing to their large number of state variables), presenting an obstacle for multicellular whole organ simulations as well as parameter estimation. It is shown that the stiffness of models increases computation time more than the number of states. (iv) Algorithms and software programs are provided for steady-state analysis, analytical solutions for voltage steps and numerical derivation of parameter identifiability. The results provide a new standard for ion channel modelling to further the automation of model development, the validation process and the predictive power of these models.
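Point (i), structural unidentifiability, can be illustrated with a toy Markov model of my own construction (not one of the 13 models considered in the paper): an irreversible three-state chain C1 → C2 → O observed only through the open-state occupancy. The closed-form solution of the master equation is symmetric in the two rates, so swapping them produces an identical observable and no amount of data can determine which rate belongs to which transition, i.e. the layout is not recoverable from the recording.

```python
import numpy as np

def open_prob(a, b, t):
    """Occupancy of O in the chain C1 --a--> C2 --b--> O, starting with
    all channels in C1. Closed-form solution of the master equation
    (valid for a != b):

        P_O(t) = 1 - (b*exp(-a*t) - a*exp(-b*t)) / (b - a)
    """
    return 1.0 - (b * np.exp(-a * t) - a * np.exp(-b * t)) / (b - a)

t = np.linspace(0.0, 3.0, 301)
curve_ab = open_prob(2.0, 5.0, t)   # rates (a, b) = (2, 5)
curve_ba = open_prob(5.0, 2.0, t)   # rates swapped: (5, 2)
gap = np.max(np.abs(curve_ab - curve_ba))   # identically zero
```

Because `P_O(t)` is symmetric under a ↔ b, a fit to the open-probability data has (at least) two equally good optima, which is exactly why unidentifiability undermines mechanistic interpretation of fitted rate constants.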

Journal ArticleDOI
TL;DR: A broad overview is given of the ideas underlying a particular class of methods for dimension reduction that includes principal components, along with an introduction to the corresponding methodology.
Abstract: Dimension reduction for regression is a prominent issue today because technological advances now allow scientists to routinely formulate regressions in which the number of predictors is considerably larger than in the past. While several methods have been proposed to deal with such regressions, principal components (PCs) still seem to be the most widely used across the applied sciences. We give a broad overview of ideas underlying a particular class of methods for dimension reduction that includes PCs, along with an introduction to the corresponding methodology. New methods are proposed for prediction in regressions with many predictors.