
Showing papers by "University of Windsor published in 2016"


Journal ArticleDOI
07 May 2016
TL;DR: In this article, the efficiency of both conventional and advanced treatment methods for phenol and some common derivatives is discussed, and the applicability of these treatments to phenolic compounds is compared.
Abstract: Phenolic compounds are priority pollutants with high toxicity even at low concentrations. In this review, the efficiency of both conventional and advanced treatment methods is discussed. The applicability of these treatments to phenol and some common derivatives is compared. Conventional treatments such as distillation, adsorption, extraction, chemical oxidation, and electrochemical oxidation show high efficiencies with various phenolic compounds, while advanced treatments such as Fenton processes, ozonation, wet air oxidation, and photochemical treatment use fewer chemicals than the conventional ones but have high energy costs. Compared to physico-chemical treatment, biological treatment is environmentally friendly and energy saving, but it cannot treat pollutants at high concentrations. Enzymatic treatment has proven to be the best way to treat various phenolic compounds under mild conditions with different enzymes such as peroxidases, laccases, and tyrosinases. This review covers papers from 2013 through January 2016.

498 citations


Journal ArticleDOI
TL;DR: These findings demonstrate that Lp(a) induces monocyte trafficking to the arterial wall and mediates proinflammatory responses through its OxPL content, providing a novel mechanism by which Lp(a) mediates cardiovascular disease.
Abstract: BACKGROUND: Elevated lipoprotein(a) [Lp(a)] is a prevalent, independent cardiovascular risk factor, but the underlying mechanisms responsible for its pathogenicity are poorly defined. Because Lp(a) is the prominent carrier of proinflammatory oxidized phospholipids (OxPLs), part of its atherothrombotic risk might be mediated through this pathway. METHODS: In vivo imaging techniques including magnetic resonance imaging, 18F-fluorodeoxyglucose uptake positron emission tomography/computed tomography and single-photon emission computed tomography/computed tomography were used to measure, successively, atherosclerotic burden, arterial wall inflammation, and monocyte trafficking to the arterial wall. Ex vivo analysis of monocytes was performed with fluorescence-activated cell sorter analysis, inflammatory stimulation assays, and transendothelial migration assays. In vitro studies of the pathophysiology of Lp(a) on monocytes were performed with an in vitro model for trained immunity. RESULTS: We show that subjects with elevated Lp(a) (108 mg/dL [50-195 mg/dL]; n=30) have increased arterial inflammation and enhanced peripheral blood mononuclear cell trafficking to the arterial wall compared with subjects with normal Lp(a) (7 mg/dL [2-28 mg/dL]; n=30). In addition, monocytes isolated from subjects with elevated Lp(a) remain in a long-lasting primed state, as evidenced by an increased capacity to transmigrate and produce proinflammatory cytokines on stimulation (n=15). In vitro studies show that Lp(a) contains OxPL and augments the proinflammatory response in monocytes derived from healthy control subjects (n=6). This effect was markedly attenuated by inactivating OxPL on Lp(a) or removing OxPL on apolipoprotein(a). CONCLUSIONS: These findings demonstrate that Lp(a) induces monocyte trafficking to the arterial wall and mediates proinflammatory responses through its OxPL content. These findings provide a novel mechanism by which Lp(a) mediates cardiovascular disease. CLINICAL TRIAL REGISTRATION: URL: http://www.trialregister.nl. Unique identifier: NTR5006 (VIPER Study).

345 citations


Journal ArticleDOI
TL;DR: It is shown that Bayesian models are able to use prior information and model measurements with various distributions, and a range of deep neural networks can be integrated in multi-modal learning for capturing the complex mechanism of biological systems.
Abstract: Driven by high-throughput sequencing techniques, modern genomic and clinical studies are in strong need of integrative machine learning models for better use of vast volumes of heterogeneous information in the deep understanding of biological systems and the development of predictive models. How data from multiple sources (called multi-view data) are incorporated in a learning system is a key step for successful analysis. In this article, we provide a comprehensive review on omics and clinical data integration techniques, from a machine learning perspective, for various analyses such as prediction, clustering, dimension reduction and association. We shall show that Bayesian models are able to use prior information and model measurements with various distributions; tree-based methods can either build a tree with all features or collectively make a final decision based on trees learned from each view; kernel methods fuse the similarity matrices learned from individual views together for a final similarity matrix or learning model; network-based fusion methods are capable of inferring direct and indirect associations in a heterogeneous network; matrix factorization models have potential to learn interactions among features from different views; and a range of deep neural networks can be integrated in multi-modal learning for capturing the complex mechanism of biological systems.
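
Of the integration families listed above, kernel fusion is the most compact to illustrate: each view yields its own similarity (kernel) matrix over the same samples, and the matrices are combined into one fused kernel. The sketch below is a minimal illustration, not taken from the paper; the function names, the RBF kernel choice, and the uniform weights are assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma=0.1):
    """Gaussian (RBF) similarity matrix for one view (n_samples x n_features)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def fuse_kernels(views, weights=None):
    """Average the per-view similarity matrices into a single fused kernel."""
    kernels = [rbf_kernel(X) for X in views]
    weights = weights or [1.0 / len(kernels)] * len(kernels)
    return sum(w * K for w, K in zip(weights, kernels))

# Toy example: two "omics" views measured on the same five samples.
rng = np.random.default_rng(0)
expression = rng.normal(size=(5, 20))    # e.g. a gene-expression view
methylation = rng.normal(size=(5, 8))    # e.g. a methylation view
K_fused = fuse_kernels([expression, methylation])  # input to any kernel method
```

The fused matrix can then be passed to any kernel-based learner (e.g. spectral clustering or an SVM), which is the sense in which the review describes kernel methods as fusing per-view similarities.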

333 citations


Journal ArticleDOI
TL;DR: In this paper, the authors evaluated the relative merits of two types of basalt fibre (bundle dispersion fibres and minibars) in enhancing the mechanical behaviour of concrete.

267 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an actualistic tectonic division and evolution of the North China Craton based on the Wilson Cycle and comparative analysis that uses a multi-disciplinary approach to define sutures, their ages, and the nature of the rocks between them, to determine their mode of formation and means of accretion or exhumation.

253 citations


Journal ArticleDOI
TL;DR: In this article, sixteen HAR-type volatility models with structural breaks were introduced and their parameters were estimated by applying 5-min high-frequency transaction data for WTI crude oil futures.
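
The HAR family referenced here builds on Corsi's heterogeneous autoregression, in which next-day realized variance is regressed on daily, weekly (5-day), and monthly (22-day) averages of past realized variance; the paper's sixteen structural-break variants are not reproduced here. A minimal sketch on synthetic data (the 78-interval trading day and all names are illustrative assumptions):

```python
import numpy as np

def har_design(rv):
    """Build Corsi-style HAR regressors: daily RV plus weekly (5-day) and
    monthly (22-day) trailing averages, predicting next-day RV."""
    rv = np.asarray(rv, dtype=float)
    t_idx = range(21, len(rv) - 1)
    d = rv[21:-1]                                           # RV_t
    w = np.array([rv[t - 4:t + 1].mean() for t in t_idx])   # mean RV_{t-4..t}
    m = np.array([rv[t - 21:t + 1].mean() for t in t_idx])  # mean RV_{t-21..t}
    y = rv[22:]                                             # target: RV_{t+1}
    X = np.column_stack([np.ones_like(d), d, w, m])
    return X, y

# Daily realized variance = sum of squared 5-min returns within each day.
rng = np.random.default_rng(1)
five_min_returns = rng.normal(0.0, 0.001, size=(500, 78))  # 500 synthetic days
rv = (five_min_returns ** 2).sum(axis=1)
X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # [const, beta_d, beta_w, beta_m]
```

Structural-break variants, as studied in the paper, would add regressors or re-estimate these coefficients across detected break dates.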

223 citations


Journal ArticleDOI
TL;DR: The fact that most of these are precipitated by underlying atherosclerosis continues to confound the understanding of the true pathogenic roles of Lp(a) and, therefore, the most appropriate therapeutic target through which to mitigate the harmful effects of this lipoprotein.

166 citations



Journal ArticleDOI
TL;DR: Although complexity theory shows promise in health services research, particularly related to relationships and interactions, conceptual confusion and inconsistent application hinder the operationalization of this potentially important perspective.
Abstract: There are calls for better application of theory in health services research. Research exploring knowledge translation and interprofessional collaboration offers two examples, and in both areas complexity theory has been identified as potentially useful. However, how best to conceptualize and operationalize complexity theory in health services research is uncertain. The purpose of this scoping review was to explore how complexity theory has been incorporated in health services research focused on allied health, medicine, and nursing in order to offer guidance for future application. Given the many ways complexity theory could be conceptualized and ultimately operationalized within health services research, a scoping review of complexity theory in health services research is warranted. A scoping review of published research in English was conducted using CINAHL, EMBASE, Medline, Cochrane, and Web of Science databases. We searched terms synonymous with complexity theory. We included 44 studies in this review: 27 were qualitative, 14 were quantitative, and 3 were mixed methods. Case study was the most common method. Long-term care was the most studied setting. The majority of research was exploratory and focused on relationships between health care workers. Authors most commonly used complexity theory as a conceptual framework for their study. Authors described complexity theory in their research in a variety of ways. The most common attributes of complexity theory used in health services research included relationships, self-organization, and diversity. A common theme across descriptions of complexity theory is that authors incorporate aspects of the theory related to how diverse relationships and communication between individuals in a system can influence change. Complexity theory is incorporated in many ways across a variety of research designs to explore a multitude of phenomena. Although complexity theory shows promise in health services research, particularly related to relationships and interactions, conceptual confusion and inconsistent application hinder the operationalization of this potentially important perspective. Generalizability from studies that incorporate complexity theory is, therefore, difficult. Heterogeneous conceptualization and operationalization of complexity theory in health services research suggests there is no universally agreed-upon approach to using this theory in health services research. Future research should include clear definitions and descriptions of complexity and how it was used in studies. Clear reporting will aid in determining how best to use complexity theory in health services research.

127 citations


Journal ArticleDOI
TL;DR: A novel effective and efficient image copy detection method is proposed based on two global features extracted from rotation invariant partitions, which can effectively and efficiently resist rotations with arbitrary degrees.
Abstract: For detecting the image copies of a given original image generated by arbitrary rotation, the existing image copy detection methods cannot simultaneously achieve desirable performance in the aspects of both accuracy and efficiency. To address this challenge, a novel effective and efficient image copy detection method is proposed based on two global features extracted from rotation invariant partitions. Firstly, candidate images are preprocessed by an averaging operation to suppress noise. Secondly, the rotation invariant partitions of the preprocessed images are constructed based on pixel intensity orders. Thirdly, two global features are extracted from these partitions by utilizing image gradient magnitudes and orientations, respectively. Finally, the extracted features of images are compared to implement copy detection. Promising experimental results demonstrate that our proposed method can effectively and efficiently resist rotations of arbitrary degrees. Furthermore, the performance of the proposed method is also desirable for resisting other typical copy attacks, such as flipping, rescaling, illumination and contrast change, as well as Gaussian noising.
Keywords: image copy detection, copy attacks, arbitrary rotation, rotation invariant, intensity orders
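
The core intuition can be sketched simply: rotating an image permutes pixel positions but not their intensities, so partitions defined by intensity order (here, quantile bins) are rotation invariant, and per-partition gradient-magnitude statistics give a global descriptor. This is a deliberate simplification of the paper's two-feature method; all names and the decision threshold below are illustrative assumptions.

```python
import numpy as np

def intensity_order_descriptor(img, n_parts=8):
    """Global rotation-invariant feature: partition pixels into intensity-order
    (quantile) bins, then describe each bin by its mean gradient magnitude."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    edges = np.quantile(img, np.linspace(0, 1, n_parts + 1))
    bins = np.digitize(img, edges[1:-1])        # bin index 0..n_parts-1 per pixel
    feat = np.array([mag[bins == i].mean() for i in range(n_parts)])
    return feat / (np.linalg.norm(feat) + 1e-12)

rng = np.random.default_rng(2)
original = rng.random((64, 64))
rotated = np.rot90(original)                    # a 90-degree "copy attack"
dist = np.linalg.norm(intensity_order_descriptor(original)
                      - intensity_order_descriptor(rotated))
is_copy = dist < 0.05                           # illustrative decision threshold
```

Because the rotation leaves both the intensity ranks and the gradient magnitudes unchanged, the two descriptors here are essentially identical and the distance is near zero.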

117 citations


Journal ArticleDOI
03 Mar 2016
TL;DR: In this paper, a dc component-based current injection model considering VSI nonlinearity is proposed, which employs the dc components of dq-axis currents and voltages for PMSM parameter and VSI-distorted voltage estimation.
Abstract: To develop a high-performance and reliable permanent-magnet synchronous machine (PMSM) drive for electric vehicle (EV) applications, accurate knowledge of the PMSM parameters is of significance. This paper investigates online estimation of PMSM parameters and voltage source inverter (VSI) nonlinearity using a current injection method in which magnetic saturation is also considered. First, a novel dc component-based current injection model considering VSI nonlinearity is proposed, which employs the dc components of dq-axis currents and voltages for PMSM parameter and VSI-distorted voltage estimation. This method can eliminate the influence of rotor position error on VSI nonlinearity estimation. Second, a simplified linear equation is employed to model the cross- and self-saturation of the dq-axis inductances during current injection, which can facilitate the estimation of the inductance variations induced by magnetic saturation. Third, a novel current compensation strategy is proposed to minimize the torque ripples caused by current injection, which contributes to making our approach applicable to both surface and interior PMSMs. Therefore, the proposed online parameter estimation approach can estimate the winding resistance, rotor flux, VSI-distorted voltage, and the varying dq-axis inductances under different operating conditions. The proposed approach is experimentally validated on a down-scaled laboratory interior PMSM prototyped for a direct-drive EV powertrain.
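
For context, estimators of this kind build on the standard steady-state dq-axis voltage equations of a PMSM; the paper's dc-injection model additionally includes VSI-distortion terms, which are not reproduced here:

```latex
\begin{aligned}
v_d &= R_s\, i_d - \omega_e L_q\, i_q,\\
v_q &= R_s\, i_q + \omega_e \left( L_d\, i_d + \psi_m \right),
\end{aligned}
```

where R_s is the winding resistance, L_d and L_q the dq-axis inductances, ψ_m the rotor flux linkage, and ω_e the electrical angular speed. With currents and voltages measured at several injected operating points, these equations are solved for the unknown parameters.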

Journal ArticleDOI
TL;DR: A critical but forward-looking appraisal of the opportunities and challenges in using existing and emerging electronic sensor-tags for the study of fish energetics in the wild.
Abstract: The generalized energy budget for fish (i.e., Energy Consumed = Metabolism + Waste + Growth) is as relevant today as when it was first proposed decades ago and serves as a foundational concept in fish biology. Yet, generating accurate measurements of components of the bioenergetics equation in wild fish is a major challenge. How often does a fish eat and what does it consume? How much energy is expended on locomotion? How do human-induced stressors influence energy acquisition and expenditure? Generating answers to these questions is important to fisheries management and to our understanding of adaptation and evolutionary processes. The advent of electronic tags (transmitters and data loggers) has provided biologists with improved opportunities to understand bioenergetics in wild fish. Here, we review the growing diversity of electronic tags with a focus on sensor-equipped devices that are commercially available (e.g., heart rate/electrocardiogram, electromyogram, acceleration, image capture). Next, we discuss each component of the bioenergetics model, recognizing that most research to date has focused on quantifying the activity component of metabolism, and identify ways in which the other, less studied components (e.g., consumption, specific dynamic action component of metabolism, somatic growth, reproductive investment, waste) could be estimated remotely. We conclude with a critical but forward-looking appraisal of the opportunities and challenges in using existing and emerging electronic sensor-tags for the study of fish energetics in the wild. Electronic tagging has become a central and widespread tool in fish ecology and fisheries management; the growing and increasingly affordable toolbox of sensor tags will ensure this trend continues, which will lead to major advances in our understanding of fish biology over the coming decades.
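
One widely used way to estimate the activity component of metabolism from the acceleration sensor-tags discussed above (a common field approach, not claimed here as the authors' own protocol) is to calibrate oxygen consumption against overall dynamic body acceleration (ODBA) in the laboratory and then apply the fitted relationship to field tag data. A minimal sketch with hypothetical numbers:

```python
import numpy as np

def odba(acc, fs=25.0, window_s=2.0):
    """Overall dynamic body acceleration: per-sample sum over axes of
    |acceleration minus a running mean|; the running mean approximates
    the static (gravity) component."""
    n = int(fs * window_s)
    kernel = np.ones(n) / n
    static = np.apply_along_axis(
        lambda a: np.convolve(a, kernel, mode="same"), 0, acc)
    return np.abs(acc - static).sum(axis=1)

# Lab calibration (hypothetical data): regress measured oxygen consumption
# (MO2, mg O2 kg^-1 h^-1) on mean ODBA (g) across swim-speed trials.
mean_odba = np.array([0.05, 0.12, 0.21, 0.33, 0.48])
mo2 = np.array([95.0, 140.0, 210.0, 300.0, 420.0])
slope, intercept = np.polyfit(mean_odba, mo2, 1)

# Field application: tag accelerometry -> estimated metabolic rate.
rng = np.random.default_rng(3)
field_acc = rng.normal(0, 0.2, size=(25 * 600, 3))  # 10 min of 25 Hz triaxial data
est_mo2 = intercept + slope * odba(field_acc).mean()
```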

Journal ArticleDOI
TL;DR: This paper studies the general architecture of multilayer ELM (ML-ELM) with subnetwork nodes, showing that the proposed method provides a representation learning platform with unsupervised/supervised and compressed/sparse representation learning.
Abstract: The extreme learning machine (ELM), which was originally proposed for “generalized” single-hidden layer feedforward neural networks, provides efficient unified learning solutions for the applications of clustering, regression, and classification. It presents competitive accuracy with superb efficiency in many applications. However, the ELM with subnetwork nodes architecture has not attracted much research attention. Recently, many methods have been proposed for supervised/unsupervised dimension reduction or representation learning, but these methods normally only work for one type of problem. This paper studies the general architecture of multilayer ELM (ML-ELM) with subnetwork nodes, showing that: 1) the proposed method provides a representation learning platform with unsupervised/supervised and compressed/sparse representation learning and 2) experimental results on ten image datasets and 16 classification datasets show that, compared to other conventional feature learning methods, the proposed ML-ELM with subnetwork nodes performs competitively or much better than other feature learning methods.
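
For readers unfamiliar with the base algorithm: a basic single-hidden-layer ELM never trains its hidden weights; only the output weights are fitted, in closed form, via a pseudoinverse. The paper's ML-ELM stacks such modules with subnetwork nodes, which this minimal sketch (all names mine) does not attempt to reproduce.

```python
import numpy as np

def elm_train(X, T, n_hidden=200, seed=0):
    """Basic ELM: random hidden layer, output weights solved in closed form."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (fixed)
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                 # Moore-Penrose least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: one-hot targets for a 3-class problem.
rng = np.random.default_rng(1)
X = rng.normal(size=(150, 10))
y = rng.integers(0, 3, size=150)
T = np.eye(3)[y]
W, b, beta = elm_train(X, T)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
```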

Journal ArticleDOI
TL;DR: A public and patient involvement framework has been developed for implementation in a government agency’s HTA process and core elements of this framework may apply to other organizations responsible for HTA and health system quality improvement.
Abstract: Objective: As health technology assessment (HTA) organizations in Canada and around the world seek to involve the public and patients in their activities, frameworks to guide decisions about whom to involve, through which mechanisms, and at what stages of the HTA process have been lacking. The aim of this study was to describe the development and outputs of a comprehensive framework for involving the public and patients in a government agency's HTA process. Methods: The framework was informed by a synthesis of international practice and published literature, a dialogue with local, national and international stakeholders, and the deliberations of a government agency's public engagement subcommittee in Ontario, Canada. Results: The practice and literature synthesis failed to identify a single, optimal approach to involving the public and patients in HTA. Choice of methods should be considered in the context of each HTA stage, goals for incorporating societal and/or patient perspectives into the process, and relevant societal and/or patient values at stake. The resulting framework is structured around four actionable elements: (i) guiding principles and goals for public and patient involvement (PPI) in HTA, (ii) the establishment of a common language to support PPI efforts, (iii) a flexible array of PPI approaches, and (iv) on-going evaluation of PPI to inform adjustments over time. Conclusions: A public and patient involvement framework has been developed for implementation in a government agency's HTA process. Core elements of this framework may apply to other organizations responsible for HTA and health system quality improvement.

Journal ArticleDOI
TL;DR: A survey of the five most widely used in-vehicle networks is presented from three perspectives: system cost, data transmission capacity, and fault-tolerance capability, along with a proposed topology for the next-generation in-vehicle network.
Abstract: This paper presents a comprehensive survey of the five most widely used in-vehicle networks from three perspectives: system cost, data transmission capacity, and fault-tolerance capability. The paper reviews the pros and cons of each network and identifies possible approaches to improve the quality of service (QoS). In addition, two classifications of automotive gateways are presented, along with a brief discussion of constructing a comprehensive in-vehicle communication system with different networks and automotive gateways. Furthermore, security threats to in-vehicle networks are briefly discussed, along with the corresponding protective methods. The survey concludes by highlighting the trends in future development of in-vehicle network technology and proposing a topology for the next-generation in-vehicle network.

Journal ArticleDOI
TL;DR: It is argued that, given the dichotomous function of GCs, the current ‘reproduction vs. survival’ paradigm is unnecessarily restrictive and predicts only deleterious GC effects on fitness, so a broader set of hypotheses should be considered when testing the fitness effects of GC manipulations.
Abstract: Summary:
1. Experimental glucocorticoid (GC) manipulations can be useful for identifying the mechanisms that drive life-history and fitness variation in free-living animals, but predicting the effects of GC treatment can be complicated. Much of the uncertainty stems from the multifaceted role of GCs in organismal metabolism, and their variable influence with respect to life-history stage, ecological context, age, sex and individual variation.
2. Glucocorticoid hormones have been implicated in the regulation of parental care in many vertebrate taxa but in two seemingly contradictory ways, which sets up a potential GC-induced ‘reproductive conflict’. Circulating GCs mediate adaptive physiological and behavioural responses to stressful events, and elevated levels can lead to trade-offs between reproductive effort and survival (e.g. the current reproduction vs. survival hypothesis). The majority of studies examining the fitness effects of GC manipulations extend from this hypothesis. However, when animals are not stressed (likely most of the time) baseline GCs act as key metabolic regulators of daily energy balance, homoeostasis, osmoregulation and food acquisition, with pleiotropic effects on locomotor activity or foraging behaviour. Slight increases in circulating baseline levels can then have positive effects on reproductive effort (e.g. the ‘cort’ fitness/adaptation hypotheses), but comparatively few GC manipulation studies have targeted these small, non-stress-induced increases.
3. We review studies of GC manipulations and examine the specific hypotheses used to predict the effects of manipulations in wild, breeding vertebrates. We argue that given the dichotomous function of GCs the current ‘reproduction vs. survival’ paradigm is unnecessarily restrictive and predicts only deleterious GC effects on fitness. Therefore, a broader set of hypotheses should be considered when testing the fitness effects of GC manipulations.
4. When framing experimental manipulation studies, we urge researchers to consider three key points: life-history context (e.g. long vs. short lived, semelparous vs. iteroparous, etc.), ecological context and dose delivery.

Journal ArticleDOI
TL;DR: Model results indicated that the geometric centroid-distance-order spatial weight feature, introduced here to macro-level safety analysis for the first time, outperformed all the other spatial weight features; these findings can help transportation planners and managers understand the characteristics of pedestrian crashes and improve pedestrian safety.

Journal ArticleDOI
TL;DR: Empirical patterns in the structure of lake food webs are discussed, suggesting that ecosystems change consistently, from individual traits to the structure of whole food webs, under changing environmental conditions; embracing this adaptive architecture is necessary to understand the relationship between structure and function in the face of ongoing environmental change.
Abstract: Aquatic ecosystems support size structured food webs, wherein predator-prey body sizes span orders of magnitude. As such, these food webs are replete with extremely generalized feeding strategies, especially among the larger bodied, higher trophic position taxa. The movement scale of aquatic organisms also generally increases with body size and trophic position. Together, these body size, mobility, and foraging relationships suggest that organisms lower in the food web generate relatively distinct energetic pathways by feeding over smaller spatial areas. Concurrently, the potential capacity for generalist foraging and spatial coupling of these pathways often increases, on average, moving up the food web toward higher trophic levels. We argue that these attributes make for a food web architecture that is inherently ‘adaptive’ in its response to environmental conditions. This is because variation in lower trophic level dynamics is dampened by the capacity of predators to flexibly alter their foraging behavior. We argue that empirical, theoretical, and applied research needs to embrace this inherently adaptive architecture if we are to understand the relationship between structure and function in the face of ongoing environmental change. Toward this goal, we discuss empirical patterns in the structure of lake food webs to suggest that ecosystems change consistently, from individual traits to the structure of whole food webs, under changing environmental conditions. We then explore an empirical example to reveal that explicitly unfolding the mechanisms that drive these adaptive responses offers insight into how human-driven impacts, such as climate change, invasive species, and fisheries harvest, ought to influence ecosystem structure and function (e.g., stability, secondary productivity, maintenance of major energy pathways). We end by arguing that such a directed food web research program promises a powerful across-scale framework for more effective ecosystem monitoring and management.

Journal ArticleDOI
TL;DR: In this article, a conceptual model of workplace bullying incorporating factors at the individual-, dyadic-, group-, and organizational levels is presented, and a number of propositions are offered which emphasize an interactionist, multi-level approach.
Abstract: There has been increased interest in the “dark side” of organizational behavior in recent decades. Workplace bullying, in particular, has received growing attention in the social sciences literature. However, this literature has lacked an integrated approach. More specifically, few studies have investigated causes at levels beyond the individual, such as the group or organization. Extending victim precipitation theory, we present a conceptual model of workplace bullying incorporating factors at the individual, dyadic, group, and organizational levels. Based on our theoretical model, a number of propositions are offered which emphasize an interactionist, multi-level approach. This approach provides a valuable stepping stone and framework to guide future empirical research. Theoretical and practical implications are discussed.

Journal ArticleDOI
TL;DR: An emerging technique inspired by the natural and social tendency of individuals to learn from each other, referred to as Cohort Intelligence (CI), is tested by solving an NP-hard combinatorial problem, the Knapsack Problem (KP).
Abstract: The previous chapters discussed the Cohort Intelligence (CI) algorithm and its applicability to solving several unconstrained and constrained problems. In addition, CI was applied to several clustering problems. This validated the learning and self-supervising behavior of the cohort. This chapter further tests the ability of CI by solving an NP-hard combinatorial problem such as the Knapsack Problem (KP). Several cases of the 0–1 KP are solved. The effect of various parameters on the solution quality has been discussed. The advantages and limitations of the CI methodology are also discussed.
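
For reference, the 0–1 KP studied above also has an exact dynamic-programming solution for integer capacities; the sketch below is that standard DP baseline, not the CI metaheuristic itself (which the chapter describes but is not reproduced here).

```python
def knapsack_01(values, weights, capacity):
    """Exact DP for the 0-1 knapsack: dp[c] = best total value achievable
    with capacity c using the items processed so far."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # descend: each item used at most once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Classic toy instance: optimal value is 220 (items with weights 20 and 30).
assert knapsack_01([60, 100, 120], [10, 20, 30], 50) == 220
```

Metaheuristics such as CI trade this exactness for scalability on large instances, which is why solution quality versus parameter settings is the focus of the chapter.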

Journal ArticleDOI
TL;DR: In this article, a system dynamics model is proposed for analysing the behavior and relationships of the fast fashion apparel industry with three supply chain levels, and the Conditional Value at Risk measure is applied to quantify the risks associated with the supply chain of these products and also to determine the expected value of the losses and their corresponding probabilities.
Abstract: With the rapid progress of science and technology and continuously growing customer expectations, the share of merchandise exhibiting characteristics of perishability is on the rise, and a wide range of industries are affected by this phenomenon. This paper focuses on the fast fashion apparel industry due to its particular characteristics such as short life cycle products, volatile demand, low predictability, high level of impulse purchase, high level of price competition and global sourcing. A system dynamics model is proposed for analysing the behaviour and relationships of the fast fashion apparel industry with three supply chain levels. The Conditional Value at Risk measure is applied to quantify the risks associated with the supply chain of these products and also to determine the expected value of the losses and their corresponding probabilities. Multiple business situations for effective strategic planning and decision-making are generated. In particular, the impact of lead time and delivery delays on t...
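
The Conditional Value at Risk measure mentioned above has a simple empirical form: VaR is the α-quantile of the loss distribution, and CVaR is the expected loss beyond it. A minimal sketch on synthetic scenario losses (in practice the loss scenarios would come from runs of the system dynamics model; the distribution below is illustrative):

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical Value-at-Risk and Conditional Value-at-Risk:
    VaR is the alpha-quantile of losses; CVaR is the mean loss beyond VaR."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)
    return var, losses[losses >= var].mean()

rng = np.random.default_rng(4)
losses = rng.lognormal(mean=10, sigma=0.8, size=10_000)  # synthetic scenarios
var95, cvar95 = var_cvar(losses)   # CVaR >= VaR by construction
```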

Journal ArticleDOI
TL;DR: Zooplankton communities were distinct among coastlines and significantly divergent among marine, freshwater and estuarine habitats even at the family level, and biodiversity varied substantially across two seasons reaching a beta diversity of 0.9 in a sub-Arctic port exposed to high vessel traffic.
Abstract: Aim: The urgent need for large-scale spatio-temporal assessments of biodiversity in the face of rapid environmental change prompts technological advancements in species identification and biomonitoring such as metabarcoding. The high-throughput DNA sequencing of bulk samples offers many advantages over traditional morphological identification for describing community composition. Our objective was to evaluate the applicability of metabarcoding to identify species in taxonomically complex samples, evaluate biodiversity trends across broad geographical and temporal scales and facilitate cross-study comparisons. Location: Marine and freshwater ports along Canadian coastlines (Pacific, Arctic and Atlantic) and the Great Lakes. Methods: We used metabarcoding of bulk zooplankton samples to identify species and profile biodiversity across habitats and seasons in busy commercial ports. A taxonomic assignment approach circumventing sequence clustering was implemented to provide increased resolution and accuracy compared to pre-clustering. Results: Taxonomic classification of over seven million sequences identified organisms spanning around 400 metazoan families, complementing previous surveys based on morphological identification. Metabarcoding revealed over 30 orders that were previously not reported, while certain taxonomic groups were underrepresented because of depauperate reference databases. Despite the limitations of assigning metabarcoding data to the species level, zooplankton communities were distinct among coastlines and significantly divergent among marine, freshwater and estuarine habitats even at the family level. Furthermore, biodiversity varied substantially across two seasons reaching a beta diversity of 0.9 in a sub-Arctic port exposed to high vessel traffic. Main Conclusions: Metabarcoding offers a powerful and sensitive approach to conduct large-scale biodiversity surveys and allows comparability across studies when rooted in taxonomy. We highlight ways of overcoming current limitations of metabarcoding for identifying species and assessing biodiversity, which has important implications for detecting organisms at low abundance such as endangered species and early invaders. Our study conveys pertinent and timely considerations for future large-scale monitoring surveys in relationship to environmental change.
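
As an illustration of the kind of seasonal comparison reported above, beta diversity between two community samples can be computed as a Sørensen dissimilarity on presence/absence data. The abstract does not name the index used, so this choice, and the taxa below, are illustrative assumptions:

```python
def sorensen_beta(a, b):
    """Sorensen dissimilarity between two communities given as sets of taxa:
    1 - 2|A ∩ B| / (|A| + |B|); 0 = identical composition, 1 = no overlap."""
    a, b = set(a), set(b)
    return 1 - 2 * len(a & b) / (len(a) + len(b))

# Hypothetical genus lists from two seasonal samples of the same port.
spring = {"Acartia", "Calanus", "Oithona", "Pseudocalanus"}
autumn = {"Oithona", "Evadne", "Podon", "Fritillaria"}
print(sorensen_beta(spring, autumn))  # 0.75 -> high seasonal turnover
```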

Journal ArticleDOI
TL;DR: It is recommended that muscle tissue samples be treated with LE+DW to efficiently extract both urea and lipids and standardize isotopic values, as urea removal is required prior to SIA of pelagic sharks.
Abstract: Rationale: Stable isotope analysis (SIA) provides a powerful tool to investigate diverse ecological questions for marine species, but standardized values are required for comparative assessments. For elasmobranchs, their unique osmoregulatory strategy involves retention of 15N-depleted urea in body tissues and this may bias δ15N values. This may be a particular problem for large predatory species, where δ15N discrimination between predator and consumed prey can be small. Methods: We evaluated three treatments (deionized water rinsing [DW], chloroform/methanol [LE] and combined chloroform/methanol and deionized water rinsing [LE+DW]) applied to white muscle tissue of 125 individuals from seven pelagic shark species to (i) assess urea and lipid effects on stable isotope values determined by IRMS and (ii) investigate mathematical normalization of these values. Results: For all species examined, the δ15N values and C:N ratios increased significantly following all three treatments, identifying that urea removal is required prior to SIA of pelagic sharks. The more marked change in δ15N values following DW (1.3 ± 0.4‰) and LE+DW (1.2 ± 0.6‰) than following LE alone (0.7 ± 0.4‰) indicated that water rinsing was more effective at removing urea. The DW and LE+DW treatments lowered the %N values, resulting in an increase in C:N ratios from the unexpected low values of <2.6 in bulk samples to ~3.1 ± 0.1, the expected value of protein. The δ13C values of all species also increased significantly following LE and LE+DW treatments. Conclusions: Given the mean change in δ15N (1.2 ± 0.6‰) and δ13C values (0.7 ± 0.4‰) across pelagic shark species, it is recommended that muscle tissue samples be treated with LE+DW to efficiently extract both urea and lipids to standardize isotopic values. Mathematical normalization of the urea- and lipid-extracted δ15N(LE+DW) and δ13C(LE+DW) values from the lipid-extracted δ15N(LE) and δ13C(LE) data was established for all pelagic shark species.
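
The mathematical normalization described in the conclusions amounts to fitting, per species, a regression that predicts fully treated values from less-processed ones so that future samples need not undergo the full chemical extraction. A generic linear sketch with hypothetical paired data (the paper's fitted coefficients are not reproduced here):

```python
import numpy as np

# Hypothetical paired measurements for one species:
# lipid-extracted-only (LE) vs. urea- and lipid-extracted (LE+DW) values (permil).
d15n_le = np.array([12.1, 13.4, 14.0, 14.8, 15.5])
d15n_ledw = np.array([12.7, 14.0, 14.5, 15.4, 16.1])

# Fit delta15N_LE+DW = a * delta15N_LE + b, then normalize new LE-only samples.
a, b = np.polyfit(d15n_le, d15n_ledw, 1)
normalized = a * 13.0 + b   # estimated LE+DW-equivalent value for an LE value of 13.0
```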

Journal ArticleDOI
TL;DR: In this article, a 2 MW underwater compressed air energy storage (UWCAES) system was studied using both conventional and advanced exergy analyses, and the exergy efficiency of the proposed UWCAES system was found to be 53.6% under real conditions.
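
For context, the exergy (second-law) efficiency quoted here is the standard ratio of exergy recovered during discharge to exergy supplied during charge:

```latex
\eta_{\mathrm{ex}} \;=\; \frac{Ex_{\mathrm{recovered}}}{Ex_{\mathrm{supplied}}} \;\approx\; 53.6\%\quad\text{(real conditions)}
```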

Journal ArticleDOI
TL;DR: In this article, the tensile flow behavior of AA5182-O sheet was experimentally obtained in different material directions (RD, DD, and TD) at strain rates ranging from 0.001 to 1000 s − 1 and predicted by means of both phenomenological models and neural networks (NNs).

Journal ArticleDOI
TL;DR: It is suggested that reactive oxygen species were present in the Eoarchean surface environment, under a very low oxygen atmosphere, inducing oxidative elemental cycling during the deposition of the Isua BIFs and possibly supporting early aerobic biology.
Abstract: The Great Oxidation Event signals the first large-scale oxygenation of the atmosphere roughly 2.4 Gyr ago. Geochemical signals diagnostic of oxidative weathering, however, extend as far back as 3.3–2.9 Gyr ago. 3.8–3.7 Gyr old rocks from Isua, Greenland stand as a deep time outpost, recording information on Earth’s earliest surface chemistry and the low oxygen primordial biosphere. Here we find fractionated Cr isotopes, relative to the igneous silicate Earth reservoir, in metamorphosed banded iron formations (BIFs) from Isua that indicate oxidative Cr cycling 3.8–3.7 Gyr ago. Elevated U/Th ratios in these BIFs relative to the contemporary crust, also signal oxidative mobilization of U. We suggest that reactive oxygen species were present in the Eoarchean surface environment, under a very low oxygen atmosphere, inducing oxidative elemental cycling during the deposition of the Isua BIFs and possibly supporting early aerobic biology.

Journal ArticleDOI
TL;DR: The results reveal the power of the metabarcoding approach, whilst also highlighting the need to account for potentially low levels of genetic diversity when processing data, to use barcode markers that allow differentiation of closely related species and to continue building comprehensive sequence databases that allow reliable and fine-scale taxonomic designation.
Abstract: Aim: Invasive species represent one of the greatest threats to biodiversity. The ability to detect non-indigenous species (NIS), particularly those present at low abundance, is limited by difficulties in performing exhaustive sampling and in identifying species. Here we sample zooplankton from 16 major Canadian ports and apply a metabarcoding approach to detect NIS. Location: Marine and freshwater ports along Canadian coastlines (Pacific, Arctic, Atlantic) and the Great Lakes. Methods: We amplified the V4 region of the small subunit ribosomal DNA (18S) and used two distinct analytic protocols to identify species present at low abundance. Taxonomic assignment was conducted using BLAST searches against a local 18S sequence database of either (i) individual reads (totalling 7,733,541 reads) or (ii) operational taxonomic units (OTUs) generated by sequence clustering. Phylogenetic analyses were performed to confirm the identity of reads with ambiguous taxonomic assignment. Results: Taxonomic assignment of individual reads identified 379 zooplankton species at a minimum sequence identity of 97%. Of these, 24 species were identified as NIS, 11 of which were detected in previously unreported locations. When reads were clustered into OTUs prior to taxonomic assignment, six NIS were no longer detected and an additional NIS was falsely identified. Phylogenetic analyses revealed that sequences belonging to closely related species clustered together into shared OTUs as a result of low interspecific variation. NIS can thus be misidentified when their sequences join the OTUs of more abundant native species. Main conclusions: Our results reveal the power of the metabarcoding approach, whilst also highlighting the need to account for potentially low levels of genetic diversity when processing data, to use barcode markers that allow differentiation of closely related species and to continue building comprehensive sequence databases that allow reliable and fine-scale taxonomic designation.
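
The cluster-free protocol described above can be sketched as a best-hit filter over BLAST output: each read keeps its highest-scoring hit, accepted only at the 97% identity threshold reported in the Results. The record layout, names, and species below are illustrative:

```python
def assign_reads(blast_hits, min_identity=97.0):
    """Cluster-free taxonomic assignment: each read keeps its best-scoring
    accepted BLAST hit (identity >= threshold).
    blast_hits: iterable of (read_id, species, pct_identity, bitscore)."""
    best = {}
    for read_id, species, ident, score in blast_hits:
        if ident >= min_identity and (read_id not in best
                                      or score > best[read_id][1]):
            best[read_id] = (species, score)
    return {r: sp for r, (sp, _) in best.items()}

hits = [("r1", "Acartia tonsa", 99.3, 410.0),
        ("r1", "Acartia hudsonica", 96.8, 388.0),   # below threshold, ignored
        ("r2", "Botrylloides violaceus", 98.1, 402.0)]
print(assign_reads(hits))  # {'r1': 'Acartia tonsa', 'r2': 'Botrylloides violaceus'}
```

Because each read is assigned independently, a rare NIS read is not absorbed into an abundant native species' OTU, which is the failure mode the paper documents for pre-clustering.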

Journal ArticleDOI
TL;DR: A new protocol is reported, called spinning-on spinning-off (SOSO) acquisition, where MAS is applied during part of the polarization delay to increase the DNP enhancements and then the MAS rotation is stopped so that a wideline 35Cl NMR powder pattern free from the effects of spinning sidebands can be acquired under static conditions.
Abstract: In this work, we show how to obtain efficient dynamic nuclear polarization (DNP) enhanced 35Cl solid-state NMR (SSNMR) spectra at 9.4 T and demonstrate how they can be used to characterize the molecular-level structure of hydrochloride salts of active pharmaceutical ingredients (APIs) in both bulk and low wt% API dosage forms. 35Cl SSNMR central-transition powder patterns of chloride ions are typically tens to hundreds of kHz in breadth, and most cannot be excited uniformly with high-power rectangular pulses or acquired under conditions of magic-angle spinning (MAS). Herein, we demonstrate the combination of DNP and 1H–35Cl broadband adiabatic inversion cross polarization (BRAIN-CP) experiments for the acquisition of high quality wideline spectra of APIs under static sample conditions, and obtain signals up to 50 times greater than in spectra acquired without the use of DNP at 100 K. We report a new protocol, called spinning-on spinning-off (SOSO) acquisition, where MAS is applied during part of the polarization delay to increase the DNP enhancements and then the MAS rotation is stopped so that a wideline 35Cl NMR powder pattern free from the effects of spinning sidebands can be acquired under static conditions. This method provides an additional two-fold signal enhancement compared to DNP-enhanced SSNMR spectra acquired under purely static conditions. DNP-enhanced 35Cl experiments are used to characterize APIs in bulk and dosage forms with Cl contents as low as 0.45 wt%. These results are compared to DNP-enhanced 1H–13C CP/MAS spectra of APIs in dosage forms, which are often hindered by interfering signals arising from the binders, fillers and other excipient materials.

Journal ArticleDOI
TL;DR: In this paper, a novel sulfur and nitrogen co-doped graphene-like nanobubble and nanosheet hybridized architecture was developed by a cost-efficient strategy using keratin containing abundant N and S sources as the precursor and KOH as the activating agent.
Abstract: Heteroatom doped graphene-based materials generally offer great advantages towards constructing advanced catalysts. In this work, we develop a novel sulfur (S) and nitrogen (N) co-doped graphene-like nanobubble and nanosheet hybridized architecture prepared by a cost-efficient strategy using keratin containing abundant N and S sources as the precursor and KOH as the activating agent. After further graphitization and ammonia treatments at 1000 °C, it displays an ultrahigh surface area (1799 m2 g−1) as well as abundant active heteroatom dopants (graphitic-N, pyridinic-N and thiophene-S). Electrochemical measurements show that its onset potential is nearly 26 mV more positive than that of the commercial Pt/C catalyst towards the oxygen reduction reaction (ORR) in alkaline media, and it has higher electrochemical stability and fuel tolerance than Pt/C. To the best of our knowledge, such ORR activity is the best among the metal-free graphene-based catalysts in alkaline media, and much higher than that of the other reported biomass-derived carbon-based catalysts. Significantly, when employed as the air electrode for zinc–air batteries, this graphene-like hybrid catalyst displays an outstanding performance compared to the Pt/C catalyst. Moreover, compared with Pt/C, such a catalyst also exhibits comparable ORR activity and higher stability in acidic media. The outstanding ORR performance can be mainly attributed to its novel hybridized graphene-like architecture, which endows it with a similar thin layered property to graphene and an ultrahigh surface area as well as excellent hierarchical porous structures, and to the synergistic effect of the appropriate graphitization degree and high content of active heteroatoms.

Journal ArticleDOI
TL;DR: In the absence of serious neurocognitive disorder, FCR ≤14 is highly specific, but only moderately sensitive, to invalid responding: passing FCR does not rule out a non-credible presentation, but failing FCR rules it in with high accuracy.
Abstract: Objectives: The Forced Choice Recognition (FCR) trial of the California Verbal Learning Test, 2nd edition, was designed as an embedded performance validity test (PVT). To our knowledge, this is the first systematic review of classification accuracy against reference PVTs. Methods: Results from peer-reviewed studies with FCR data published since 2002 encompassing a variety of clinical, research, and forensic samples were summarized, including 37 studies with FCR failure rates (N=7575) and 17 with concordance rates with established PVTs (N=4432). Results: All healthy controls scored >14 on FCR. On average, 16.9% of the entire sample scored ≤14, while 25.9% failed reference PVTs. Presence or absence of external incentives to appear impaired (as identified by researchers) resulted in different failure rates (13.6% vs. 3.5%), as did failing or passing reference PVTs (49.0% vs. 6.4%). FCR ≤14 produced an overall classification accuracy of 72%, demonstrating higher specificity (.93) than sensitivity (.50) to invalid performance. Failure rates increased with the severity of cognitive impairment. Conclusions: In the absence of serious neurocognitive disorder, FCR ≤14 is highly specific, but only moderately sensitive to invalid responding. Passing FCR does not rule out a non-credible presentation, but failing FCR rules it in with high accuracy. The heterogeneity in sample characteristics and reference PVTs, as well as the quality of the criterion measure across studies, is a major limitation of this review and the basic methodology of PVT research in general. (JINS, 2016, 22, 851-858).
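
Using the pooled figures reported above (sensitivity .50, specificity .93, and a 25.9% base rate of reference-PVT failure), Bayes' rule gives the predictive values behind the "rules it in" conclusion. Note that the review's 72% accuracy figure aggregates across studies differently, so this arithmetic is an illustration at the pooled base rate, not a reproduction:

```python
sens, spec, base = 0.50, 0.93, 0.259   # pooled values reported above

# Positive/negative predictive values of FCR <=14 at this base rate.
ppv = sens * base / (sens * base + (1 - spec) * (1 - base))
npv = spec * (1 - base) / (spec * (1 - base) + (1 - sens) * base)
print(f"PPV={ppv:.2f}, NPV={npv:.2f}")  # ~0.71 and ~0.84
```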