
Journal ArticleDOI
TL;DR: In this paper, a recursive flexible window method was proposed for detecting and dating financial bubbles in real-time data, which is better suited for practical implementation with long historical time series.
Abstract: Recent work on econometric detection mechanisms has shown the effectiveness of recursive procedures in identifying and dating financial bubbles in real time. These procedures are useful as warning alerts in surveillance strategies conducted by central banks and fiscal regulators with real-time data. Use of these methods over long historical periods presents a more serious econometric challenge due to the complexity of the nonlinear structure and break mechanisms that are inherent in multiple-bubble phenomena within the same sample period. To meet this challenge, this article develops a new recursive flexible window method that is better suited for practical implementation with long historical time series. The method is a generalized version of the sup augmented Dickey–Fuller (ADF) test of Phillips et al. (“Explosive behavior in the 1990s NASDAQ: When did exuberance escalate asset values?” International Economic Review 52 (2011), 201–26; PWY) and delivers a consistent real-time date-stamping strategy for the origination and termination of multiple bubbles. Simulations show that the test significantly improves discriminatory power and leads to distinct power gains when multiple bubbles occur. An empirical application of the methodology is conducted on S&P 500 stock market data over a long historical period from January 1871 to December 2010. The new approach successfully identifies the well-known historical episodes of exuberance and collapses over this period, whereas the strategy of PWY and a related cumulative sum (CUSUM) dating procedure locate far fewer episodes in the same sample range.
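To make the recursion concrete, here is a minimal sketch (not the authors' code; the minimum window, lag order, and the simulated bubble are assumptions for the example): for each endpoint, the backward sup ADF (BSADF) value is the largest ADF statistic over all admissible start points, and the generalized sup ADF (GSADF) statistic is the supremum of that sequence.

```python
# Hypothetical sketch of the flexible-window sup ADF recursion (BSADF/GSADF).
# Critical values, lag selection, and the window rule are simplified.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def bsadf(y, min_window=30, lags=0):
    y = np.asarray(y, dtype=float)
    stats = np.full(len(y), np.nan)
    for end in range(min_window, len(y) + 1):
        # Flexible window: maximize the ADF statistic over all start points.
        stats[end - 1] = max(
            adfuller(y[start:end], maxlag=lags, regression="c", autolag=None)[0]
            for start in range(0, end - min_window + 1))
    return stats

# A random walk with a temporarily explosive (bubble) episode.
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))
for t in range(80, 110):
    y[t] = 1.05 * y[t - 1] + rng.normal()   # mildly explosive segment
seq = bsadf(y)
print("GSADF statistic:", np.nanmax(seq).round(2))
print("peak index:", int(np.nanargmax(seq)))  # typically falls in the episode
```

The paper's date-stamping rule additionally compares each BSADF value against a right-tail critical value sequence; the sketch omits that comparison.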

594 citations


Journal ArticleDOI
TL;DR: The regulatory mechanisms involved in melanogenesis are discussed, how intrinsic and extrinsic factors regulate melanin production is explained, and the regulatory roles of the different proteins involved in melanogenesis are described.
Abstract: Melanocytes are melanin-producing cells found in skin, hair follicles, eyes, inner ear, bones, heart and brain of humans. They arise from pluripotent neural crest cells and differentiate in response to a complex network of interacting regulatory pathways. Melanins are pigment molecules that are endogenously synthesized by melanocytes. The light absorption of melanin in skin and hair leads to photoreceptor shielding, thermoregulation, photoprotection, camouflage and display coloring. Melanins are also powerful cation chelators and may act as free radical sinks. Melanin formation is a product of complex biochemical events that starts from amino acid tyrosine and its metabolite, dopa. The types and amounts of melanin produced by melanocytes are determined genetically and are influenced by a variety of extrinsic and intrinsic factors such as hormonal changes, inflammation, age and exposure to UV light. These stimuli affect the different pathways in melanogenesis. In this review we will discuss the regulatory mechanisms involved in melanogenesis and explain how intrinsic and extrinsic factors regulate melanin production. We will also explain the regulatory roles of different proteins involved in melanogenesis.

594 citations


Journal ArticleDOI
03 Feb 2015-JAMA
TL;DR: Readmissions after surgery were associated with new postdischarge complications related to the procedure, not exacerbation of complications from the index hospitalization, suggesting that readmissions after surgery are a measure of postdischarge complications.
Abstract: Importance Financial penalties for readmission have been expanded beyond medical conditions to include surgical procedures. Hospitals are working to reduce readmissions; however, little is known about the reasons for surgical readmission. Objective To characterize the reasons, timing, and factors associated with unplanned postoperative readmissions. Design, Setting, and Participants Patients undergoing surgery at one of 346 continuously enrolled US hospitals participating in the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) between January 1, 2012, and December 31, 2012, had clinically abstracted information examined. Readmission rates and reasons (ascertained by clinical data abstractors at each hospital) were assessed for all surgical procedures and for 6 representative operations: bariatric procedures, colectomy or proctectomy, hysterectomy, total hip or knee arthroplasty, ventral hernia repair, and lower extremity vascular bypass. Main Outcomes and Measures Unplanned 30-day readmission and reason for readmission. Results The unplanned readmission rate for the 498 875 operations was 5.7%. For the individual procedures, the readmission rate ranged from 3.8% for hysterectomy to 14.9% for lower extremity vascular bypass. The most common reason for unplanned readmission was surgical site infection (SSI) overall (19.5%) and also after colectomy or proctectomy (25.8%), ventral hernia repair (26.5%), hysterectomy (28.8%), arthroplasty (18.8%), and lower extremity vascular bypass (36.4%). Obstruction or ileus was the most common reason for readmission after bariatric surgery (24.5%) and the second most common reason overall (10.3%), after colectomy or proctectomy (18.1%), ventral hernia repair (16.7%), and hysterectomy (13.4%). Only 2.3% of patients were readmitted for the same complication they had experienced during their index hospitalization. Only 3.3% of patients readmitted for SSIs had experienced an SSI during their index hospitalization. There was no time pattern for readmission, and early (≤7 days postdischarge) and late (>7 days postdischarge) readmissions were associated with the same 3 most common reasons: SSI, ileus or obstruction, and bleeding. Patient comorbidities, index surgical admission complications, non-home discharge (hazard ratio [HR], 1.40 [95% CI, 1.35-1.46]), teaching hospital status (HR, 1.14 [95% CI 1.07-1.21]), and higher surgical volume (HR, 1.15 [95% CI, 1.07-1.25]) were associated with a higher risk of hospital readmission. Conclusions and Relevance Readmissions after surgery were associated with new postdischarge complications related to the procedure and not exacerbation of prior index hospitalization complications, suggesting that readmissions after surgery are a measure of postdischarge complications. These data should be considered when developing quality indicators and any policies penalizing hospitals for surgical readmission.

594 citations


Journal ArticleDOI
TL;DR: For the first time, virus resistance has been developed in cucumber, non-transgenically, not visibly affecting plant development and without long-term backcrossing, via a new technology that can be expected to be applicable to a wide range of crop plants.
Abstract: Genome editing in plants has been boosted tremendously by the development of CRISPR/Cas9 (Clustered Regularly Interspaced Short Palindromic Repeats) technology. This powerful tool allows substantial improvement in plant traits in addition to those provided by classical breeding. Here, we demonstrate the development of virus resistance in cucumber (Cucumis sativus L.) using Cas9/subgenomic RNA (sgRNA) technology to disrupt the function of the recessive eIF4E (eukaryotic translation initiation factor 4E) gene. Cas9/sgRNA constructs were targeted to the N' and C' termini of the eIF4E gene. Small deletions and single nucleotide polymorphisms (SNPs) were observed in the eIF4E gene targeted sites of transformed T1 generation cucumber plants, but not in putative off-target sites. Non-transgenic heterozygous eif4e mutant plants were selected for the production of non-transgenic homozygous T3 generation plants. Homozygous T3 progeny following Cas9/sgRNA that had been targeted to both eif4e sites exhibited immunity to Cucumber vein yellowing virus (Ipomovirus) infection and resistance to the potyviruses Zucchini yellow mosaic virus and Papaya ring spot mosaic virus-W. In contrast, heterozygous mutant and non-mutant plants were highly susceptible to these viruses. For the first time, virus resistance has been developed in cucumber, non-transgenically, not visibly affecting plant development and without long-term backcrossing, via a new technology that can be expected to be applicable to a wide range of crop plants.

594 citations


Journal ArticleDOI
TL;DR: This data indicates that pre-emptive surgery is a viable option for the treatment of deep vein thrombosis in women with pre-operative indications and this work’s results support this view.
Abstract: Disclosures outside the scope of this work: Dr. Minei received grant support from Irrespet Corp. and AtoxBio. Dr. Laronga received compensation for lectures from Genomic Health Inc. and royalties from UpToDate. Dr. Jensen is a consultant and paid speaker for Ethicon, receives honoraria from CareFusion for their Speaker's Program, receives funds from Irrimax Corp. for consulting and research funding, and receives honoraria from Surgical Inc. for consultation. Dr. Itani received funding for a multi-institutional study from Sanofi-Pasteur and served as the Committee Chair. Dr. Dellinger is on the Advisory Boards of Melinta and Theravance and is a grant recipient from Motif for a trial of iclaprim vs. vancomycin for treatment of skin and soft-tissue infections. The remaining authors declare no conflicts. Presented at the Surgical Infection Society, Palm Beach, FL.

594 citations


Journal ArticleDOI
TL;DR: The concept that modulating these mechanisms may help to improve brain function in Alzheimer disease and related disorders is explored.
Abstract: The function of neural circuits and networks can be controlled, in part, by modulating the synchrony of their components' activities. Network hypersynchrony and altered oscillatory rhythmic activity may contribute to cognitive abnormalities in Alzheimer disease (AD). In this condition, network activities that support cognition are altered decades before clinical disease onset, and these alterations predict future pathology and brain atrophy. Although the precise causes and pathophysiological consequences of these network alterations remain to be defined, interneuron dysfunction and network abnormalities have emerged as potential mechanisms of cognitive dysfunction in AD and related disorders. Here, we explore the concept that modulating these mechanisms may help to improve brain function in these conditions.

594 citations


Journal ArticleDOI
TL;DR: In this article, an IRS-enhanced orthogonal frequency division multiplexing (OFDM) system under frequency-selective channels is considered and a practical transmission protocol with channel estimation is proposed.
Abstract: Intelligent reflecting surface (IRS) is a promising new technology for achieving both spectrum and energy efficient wireless communication systems in the future. However, existing works on IRS mainly consider frequency-flat channels and assume perfect knowledge of channel state information (CSI) at the transmitter. Motivated by the above, in this paper we study an IRS-enhanced orthogonal frequency division multiplexing (OFDM) system under frequency-selective channels and propose a practical transmission protocol with channel estimation. First, to reduce the overhead in channel training as well as exploit the channel spatial correlation, we propose a novel IRS elements grouping method, where each group consists of a set of adjacent IRS elements that share a common reflection coefficient. Based on this method, we propose a practical transmission protocol where only the combined channel of each group needs to be estimated, thus substantially reducing the training overhead. Next, with any given grouping and estimated CSI, we formulate the problem to maximize the achievable rate by jointly optimizing the transmit power allocation and the IRS passive array reflection coefficients. Although the formulated problem is non-convex and thus difficult to solve, we propose an efficient algorithm to obtain a high-quality suboptimal solution for it, by alternately optimizing the power allocation and the passive array coefficients in an iterative manner, along with a customized method for the initialization. Simulation results show that the proposed design significantly improves the OFDM link rate performance as compared to the case without using IRS. Moreover, it is shown that there exists an optimal size for IRS elements grouping which achieves the maximum achievable rate due to the practical trade-off between the training overhead and IRS passive beamforming flexibility.
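A rough illustrative sketch of the two ideas in the abstract, under strong simplifying assumptions (single-antenna links, unit noise power, random synthetic channels, and a heuristic per-group phase update rather than the paper's algorithm); all sizes and names below are invented for the example.

```python
# Invented toy model of IRS elements grouping + alternating optimization.
import numpy as np

rng = np.random.default_rng(0)
K, N, G = 64, 32, 8          # subcarriers, IRS elements, element groups
h_d = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)
h_c = (rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))) / np.sqrt(2)

# Elements grouping: adjacent elements share one reflection coefficient, so
# only the combined per-group channel (K x G instead of K x N) is needed.
h_g = h_c.reshape(K, G, N // G).sum(axis=2)

def water_fill(gains, p_total):
    """Classic water-filling power allocation over parallel subchannels."""
    mu_lo, mu_hi = 0.0, p_total + 1.0 / gains.min()
    for _ in range(60):                      # bisection on the water level
        mu = 0.5 * (mu_lo + mu_hi)
        p = np.maximum(mu - 1.0 / gains, 0.0)
        mu_lo, mu_hi = (mu, mu_hi) if p.sum() < p_total else (mu_lo, mu)
    return p

phi = np.zeros(G)                            # one phase shift per group
for _ in range(10):                          # alternating optimization
    h_eff = h_d + h_g @ np.exp(1j * phi)     # effective channel per subcarrier
    p = water_fill(np.abs(h_eff) ** 2, p_total=float(K))
    for g in range(G):                       # heuristic phase update:
        rest = h_eff - h_g[:, g] * np.exp(1j * phi[g])
        # align group g with the rest of the channel, weighted by power
        phi[g] = np.angle(np.vdot(h_g[:, g] * np.sqrt(p), rest * np.sqrt(p)))
        h_eff = rest + h_g[:, g] * np.exp(1j * phi[g])

rate = np.sum(np.log2(1.0 + p * np.abs(h_d + h_g @ np.exp(1j * phi)) ** 2))
print(f"achievable rate: {rate:.1f} bits per OFDM symbol")
```

Larger groups cut the training overhead (fewer combined channels to estimate) but constrain the passive beamforming, which is the trade-off behind the optimal group size reported in the abstract.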

594 citations


Journal ArticleDOI
TL;DR: A wearable and flexible sweat-sensing platform toward real-time multiplexed perspiration analysis is developed and an integrated iontophoresis module on a wearable sweat sensor could enable autonomous and programmed sweat extraction.
Abstract: Wearable sensors play a crucial role in realizing personalized medicine, as they can continuously collect data from the human body to capture meaningful health status changes in time for preventive intervention. However, motion artifacts and mechanical mismatches between conventional rigid electronic materials and soft skin often lead to substantial sensor errors during epidermal measurement. Because of its unique properties such as high flexibility and conformability, flexible electronics enables a natural interaction between electronics and the human body. In this Account, we summarize our recent studies on the design of flexible electronic devices and systems for physical and chemical monitoring. Material innovation, sensor design, device fabrication, system integration, and human studies employed toward continuous and noninvasive wearable sensing are discussed. A flexible electronic device typically contains several key components, including the substrate, the active layer, and the interface layer. The inorganic-nanomaterials-based active layer (prepared by a physical transfer or solution process) is shown to have good physicochemical properties, electron/hole mobility, and mechanical strength. Flexible electronics based on the printed and transferred active materials has shown great promise for physical sensing. For example, integrating a nanowire transistor array for the active matrix and a conductive pressure-sensitive rubber enables tactile pressure mapping; tactile-pressure-sensitive e-skin and organic light-emitting diodes can be integrated for instantaneous pressure visualization. Such printed sensors have been applied as wearable patches to monitor skin temperature, electrocardiograms, and human activities. In addition, liquid metals could serve as an attractive candidate for flexible electronics because of their excellent conductivity, flexibility, and stretchability. Liquid-metal-enabled electronics (based on liquid-liquid heterojunctions and embedded microchannels) have been utilized to monitor a wide range of physiological parameters (e.g., pulse and temperature). Despite the rapid growth in wearable sensing technologies, there is an urgent need for the development of flexible devices that can capture molecular data from the human body to retrieve more insightful health information. We have developed a wearable and flexible sweat-sensing platform toward real-time multiplexed perspiration analysis. An integrated iontophoresis module on a wearable sweat sensor could enable autonomous and programmed sweat extraction. A microfluidics-based sensing system was demonstrated for sweat sampling, sensing, and sweat rate analysis. Roll-to-roll gravure printing allows for mass production of high-performance flexible chemical sensors at low cost. These wearable and flexible sweat sensors have shown great promise in dehydration monitoring, cystic fibrosis diagnosis, drug monitoring, and noninvasive glucose monitoring. Future work in this field should focus on designing robust wearable sensing systems to accurately collect data from the human body and on large-scale human studies to determine how the measured physical and chemical information relates to the individual's specific health conditions. Further research in these directions, along with the large sets of data collected via these wearable and flexible sensing technologies, will have a significant impact on future personalized healthcare.

594 citations


Journal ArticleDOI
TL;DR: Current knowledge of how macronutrient metabolism by the gut microbiome influences human health is summarized, and knowledge gaps that could contribute to the understanding of overall human wellness are identified.
Abstract: The human gut microbiome is a critical component of digestion, breaking down complex carbohydrates, proteins, and to a lesser extent fats that reach the lower gastrointestinal tract. This process results in a multitude of microbial metabolites that can act both locally and systemically (after being absorbed into the bloodstream). The impact of these biochemicals on human health is complex, as both potentially beneficial and potentially toxic metabolites can be yielded from such microbial pathways, and in some cases, these effects are dependent upon the metabolite concentration or organ locality. The aim of this review is to summarize our current knowledge of how macronutrient metabolism by the gut microbiome influences human health. Metabolites to be discussed include short-chain fatty acids and alcohols (mainly yielded from monosaccharides); ammonia, branched-chain fatty acids, amines, sulfur compounds, phenols, and indoles (derived from amino acids); glycerol and choline derivatives (obtained from the breakdown of lipids); and tertiary cycling of carbon dioxide and hydrogen. Key microbial taxa and related disease states will be referred to in each case, and knowledge gaps that could contribute to our understanding of overall human wellness will be identified.

594 citations


Journal ArticleDOI
TL;DR: In this article, the authors review the recent development of high-entropy alloys and summarize their preparation methods, composition design, phase formation and microstructures, various properties, and modeling and simulation calculations.
Abstract: As humans have improved their ability to fabricate materials, alloys have evolved from simple to complex compositions, with correspondingly improved functions and performance, promoting the advancement of human civilization. In recent years, high-entropy alloys (HEAs) have attracted tremendous attention in various fields. With multiple principal components, they inherently possess unique microstructures and many impressive properties, such as high strength and hardness, excellent corrosion resistance, thermal stability, and fatigue, fracture, and irradiation resistance, in which they surpass traditional alloys. All these properties have endowed HEAs with many promising potential applications. An in-depth understanding of the essence of HEAs is important for further developing numerous HEAs with better properties and performance in the future. In this paper, we review the recent development of HEAs, and summarize their preparation methods, composition design, phase formation and microstructures, various properties, and modeling and simulation calculations. In addition, the future trends and prospects of HEAs are put forward.

594 citations


Journal ArticleDOI
TL;DR: This article reviews the history of research on chimera states and highlights major advances in understanding their behaviour.
Abstract: A chimera state is a spatio-temporal pattern in a network of identical coupled oscillators in which synchronous and asynchronous oscillation coexist. This state of broken symmetry, which usually coexists with a stable spatially symmetric state, has intrigued the nonlinear dynamics community since its discovery in the early 2000s. Recent experiments have led to increasing interest in the origin and dynamics of these states. Here we review the history of research on chimera states and highlight major advances in understanding their behaviour.
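To make the definition concrete, here is a hedged simulation sketch of the standard setting in which chimeras are studied: a ring of identical Kuramoto-type phase oscillators with nonlocal (top-hat) coupling and a phase lag near π/2. The parameter values are illustrative, and whether a chimera actually forms depends on the initial condition.

```python
# Invented parameters; chimera formation is initial-condition dependent.
import numpy as np

N, R, alpha = 128, 32, 1.46          # oscillators, coupling radius, phase lag
rng = np.random.default_rng(1)
theta = 2 * np.pi * rng.random(N)    # random initial phases

# Nonlocal top-hat coupling: each oscillator couples to its 2R nearest
# neighbours on the ring, with equal weight.
idx = np.arange(N)
dist = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(dist, N - dist)
K = (dist <= R).astype(float) / (2 * R)
np.fill_diagonal(K, 0.0)

dt = 0.05
for _ in range(20000):               # Euler integration in the rotating frame
    coupling = (K * np.sin(theta[None, :] - theta[:, None] - alpha)).sum(axis=1)
    theta = (theta + dt * coupling) % (2 * np.pi)

# Local order parameter: close to 1 in the coherent (synchronous) region and
# noticeably smaller in the incoherent region when a chimera is present.
z = np.abs((K > 0).astype(float) @ np.exp(1j * theta)) / (2 * R)
print("local order parameter min/max:", z.min().round(2), z.max().round(2))
```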

Journal ArticleDOI
TL;DR: In this article, the adaptive/improved droop control, network-based control methods, and cost-based droop schemes are compared and summarized for active power sharing for islanded microgrids.
Abstract: Microgrids consist of multiple parallel-connected distributed generation (DG) units with coordinated control strategies, which are able to operate in both grid-connected and islanded modes. Microgrids are attracting considerable attention since they can alleviate the stress of main transmission systems, reduce feeder losses, and improve system power quality. Where islanded microgrids are concerned, it is important to maintain system stability and achieve load power sharing among the multiple parallel-connected DG units. However, poor active and reactive power sharing due to the impedance mismatch of the DG feeders and the different ratings of the DG units is inevitable when the conventional droop control scheme is adopted. Therefore, the adaptive/improved droop control, network-based control methods, and cost-based droop schemes are compared and summarized in this paper for active power sharing. Moreover, nonlinear and unbalanced loads could further affect the reactive power sharing when regulating the active power, and it is difficult to share the reactive power accurately only by using the enhanced virtual impedance method. Therefore, hierarchical control strategies are utilized as supplements to the conventional droop controls and virtual impedance methods. Improved hierarchical control approaches such as algorithms based on graph theory, multi-agent systems, the gain scheduling method, and predictive control have been proposed to achieve proper reactive power sharing for islanded microgrids and to eliminate the effect of communication delays on hierarchical control. Finally, future research trends on islanded microgrids are also discussed in this paper.
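For reference, a minimal sketch of the conventional droop law that the surveyed schemes improve upon (gains and setpoints below are illustrative, not taken from the paper):

```python
# A minimal sketch of the conventional P-f / Q-V droop law (per-unit powers;
# the gains and setpoints are illustrative, not taken from the paper).
def droop_setpoints(P, Q, f0=50.0, V0=1.0, m_p=0.5, n_q=0.05):
    """Frequency and voltage references for one droop-controlled DG unit.

    Frequency sags with active power and voltage sags with reactive power,
    so parallel units share load without any communication link.
    """
    f = f0 - m_p * P      # P-f droop: more active power -> lower frequency
    V = V0 - n_q * Q      # Q-V droop: more reactive power -> lower voltage
    return f, V

# Mismatched feeder impedances distort the voltage each unit sees, breaking
# reactive power sharing; this motivates the virtual-impedance and
# hierarchical schemes the paper surveys.
print(droop_setpoints(P=0.8, Q=0.2))   # -> (49.6, 0.99)
```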

Journal ArticleDOI
TL;DR: Meta-analytic findings show reliably increased functional connectivity between the default mode network (DMN) and the subgenual prefrontal cortex (sgPFC), connectivity that often predicts levels of depressive rumination and reflects an integration of the self-referential processes supported by the DMN with the affectively laden, behavioral-withdrawal processes associated with the sgPFC.

Journal ArticleDOI
TL;DR: In three phase 3 trials involving patients with psoriasis, ixekizumab was effective through 60 weeks of treatment and the benefits need to be weighed against the risks of adverse events.
Abstract: Background Two phase 3 trials (UNCOVER-2 and UNCOVER-3) showed that at 12 weeks of treatment, ixekizumab, a monoclonal antibody against interleukin-17A, was superior to placebo and etanercept in the treatment of moderate-to-severe psoriasis. We report the 60-week data from the UNCOVER-2 and UNCOVER-3 trials, as well as 12-week and 60-week data from a third phase 3 trial, UNCOVER-1. Methods We randomly assigned 1296 patients in the UNCOVER-1 trial, 1224 patients in the UNCOVER-2 trial, and 1346 patients in the UNCOVER-3 trial to receive subcutaneous injections of placebo (placebo group), 80 mg of ixekizumab every 2 weeks after a starting dose of 160 mg (2-wk dosing group), or 80 mg of ixekizumab every 4 weeks after a starting dose of 160 mg (4-wk dosing group). Additional cohorts in the UNCOVER-2 and UNCOVER-3 trials were randomly assigned to receive 50 mg of etanercept twice weekly. At week 12 in the UNCOVER-3 trial, the patients entered a long-term extension period during which they received 80 mg of ixeki...

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the sustainability performance of the circular business models (CBM) and circular supply chains necessary to implement the concept on an organisational level and propose a framework to integrate circular business model and supply chain management towards sustainable development.

Journal ArticleDOI
Leilei Liang, Hui Ren, Ruilin Cao, Yueyang Hu, Zeying Qin, Chuanen Li, Songli Mei
TL;DR: This study assessed youth mental health two weeks after the coronavirus disease 2019 (COVID-19) outbreak occurred in China and investigated factors affecting mental health among youth groups, finding that nearly 40.4% of the youth group had a tendency toward psychological problems, remarkable evidence that infectious diseases may have an immense influence on youth mental health.
Abstract: The purpose of this study was to assess youth mental health two weeks after the coronavirus disease 2019 (COVID-19) outbreak occurred in China, and to investigate factors affecting mental health among youth groups. A cross-sectional study was conducted two weeks after the occurrence of COVID-19 in China. A total of 584 youth were enrolled in this study and completed questions about their cognition of COVID-19, the General Health Questionnaire (GHQ-12), the PTSD Checklist-Civilian Version (PCL-C) and the negative coping styles scale. Univariate analysis and univariate logistic regression were used to evaluate the effect of COVID-19 on youth mental health. The results of this cross-sectional study suggest that nearly 40.4% of the sampled youth were prone to psychological problems and 14.4% showed symptoms of post-traumatic stress disorder (PTSD). Univariate logistic regression revealed that youth mental health was significantly related to being less educated (OR = 8.71, 95%CI: 1.97–38.43), being an enterprise employee (OR = 2.36, 95%CI: 1.09–5.09), suffering from PTSD symptoms (OR = 1.05, 95%CI: 1.03–1.07) and using negative coping styles (OR = 1.03, 95%CI: 1.00–1.07). Results of this study suggest that nearly 40.4% of the youth group had a tendency toward psychological problems. This is remarkable evidence that infectious diseases, such as COVID-19, may have an immense influence on youth mental health. Therefore, local governments should develop effective psychological interventions for youth groups; moreover, it is important to consider the educational level and occupation of the youth during the interventions.

Journal ArticleDOI
30 Jun 2016-Nature
TL;DR: This work reports an alternative platform for the study of spin systems, using individual atoms trapped in tunable two-dimensional arrays of optical microtraps with arbitrary geometries, establishing arrays of single Rydberg atoms as a versatile platform for the study of quantum magnetism.
Abstract: Spin models are the prime example of simplified many-body Hamiltonians used to model complex, strongly correlated real-world materials. However, despite the simplified character of such models, their dynamics often cannot be simulated exactly on classical computers when the number of particles exceeds a few tens. For this reason, quantum simulation of spin Hamiltonians using the tools of atomic and molecular physics has become a very active field over the past years, using ultracold atoms or molecules in optical lattices, or trapped ions. All of these approaches have their own strengths and limitations. Here we report an alternative platform for the study of spin systems, using individual atoms trapped in tunable two-dimensional arrays of optical microtraps with arbitrary geometries, where filling fractions range from 60 to 100 per cent. When excited to high-energy Rydberg D states, the atoms undergo strong interactions whose anisotropic character opens the way to simulating exotic matter. We illustrate the versatility of our system by studying the dynamics of a quantum Ising-like spin-1/2 system in a transverse field with up to 30 spins, for a variety of geometries in one and two dimensions, and for a wide range of interaction strengths. For geometries where the anisotropy is expected to have small effects on the dynamics, we find excellent agreement with ab initio simulations of the spin-1/2 system, while for strongly anisotropic situations the multilevel structure of the D states has a measurable influence. Our findings establish arrays of single Rydberg atoms as a versatile platform for the study of quantum magnetism.


Journal ArticleDOI
TL;DR: The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on highly complex and realistic data sets, generated from ∼700 newly sequenced microorganisms and ∼600 novel viruses and plasmids and representing common experimental setups as discussed by the authors.
Abstract: Methods for assembly, taxonomic profiling and binning are key to interpreting metagenome data, but a lack of consensus about benchmarking complicates performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on highly complex and realistic data sets, generated from ∼700 newly sequenced microorganisms and ∼600 novel viruses and plasmids and representing common experimental setups. Assembly and genome binning programs performed well for species represented by individual genomes but were substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below family level. Parameter settings markedly affected performance, underscoring their importance for program reproducibility. The CAMI results highlight current challenges but also provide a roadmap for software selection to answer specific research questions.

Posted Content
TL;DR: In this paper, the authors propose to unify the tasks of instance segmentation and semantic segmentation at the architectural level, designing a single network for both tasks, which is called Panoptic FPN (Panoptic Feature Pyramid Network).
Abstract: The recently introduced panoptic segmentation task has renewed our community's interest in unifying the tasks of instance segmentation (for thing classes) and semantic segmentation (for stuff classes). However, current state-of-the-art methods for this joint task use separate and dissimilar networks for instance and semantic segmentation, without performing any shared computation. In this work, we aim to unify these methods at the architectural level, designing a single network for both tasks. Our approach is to endow Mask R-CNN, a popular instance segmentation method, with a semantic segmentation branch using a shared Feature Pyramid Network (FPN) backbone. Surprisingly, this simple baseline not only remains effective for instance segmentation, but also yields a lightweight, top-performing method for semantic segmentation. In this work, we perform a detailed study of this minimally extended version of Mask R-CNN with FPN, which we refer to as Panoptic FPN, and show it is a robust and accurate baseline for both tasks. Given its effectiveness and conceptual simplicity, we hope our method can serve as a strong baseline and aid future research in panoptic segmentation.
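A hedged sketch of the semantic branch design the abstract describes (tensor shapes, channel widths, and the class count are assumptions for the example, not the released implementation): each FPN level is upsampled to a common 1/4 scale with repeated conv/GroupNorm/ReLU/2x-upsample stages, the results are summed, and a 1x1 convolution predicts per-pixel class logits.

```python
# Illustrative shapes and class count; not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticFPNBranch(nn.Module):
    def __init__(self, in_channels=256, mid_channels=128, num_classes=54):
        super().__init__()
        def stage(n_ups):                      # one branch per FPN level
            layers, c = [], in_channels
            for _ in range(max(n_ups, 1)):
                layers += [nn.Conv2d(c, mid_channels, 3, padding=1),
                           nn.GroupNorm(32, mid_channels),
                           nn.ReLU(inplace=True)]
                if n_ups > 0:                  # upsample back to 1/4 scale
                    layers.append(nn.Upsample(scale_factor=2, mode="bilinear",
                                              align_corners=False))
                c = mid_channels
            return nn.Sequential(*layers)
        self.stages = nn.ModuleList([stage(n) for n in (0, 1, 2, 3)])  # P2..P5
        self.predictor = nn.Conv2d(mid_channels, num_classes, 1)

    def forward(self, fpn_feats):              # [P2, P3, P4, P5], strides 4..32
        fused = sum(s(f) for s, f in zip(self.stages, fpn_feats))
        logits = self.predictor(fused)         # class logits at 1/4 resolution
        return F.interpolate(logits, scale_factor=4, mode="bilinear",
                             align_corners=False)

# Shape check with dummy FPN features for a 256x256 image.
feats = [torch.randn(1, 256, 256 // s, 256 // s) for s in (4, 8, 16, 32)]
print(SemanticFPNBranch()(feats).shape)        # -> torch.Size([1, 54, 256, 256])
```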

Proceedings ArticleDOI
13 Nov 2016
TL;DR: This paper develops models describing LoRa communication behaviour and uses these models to parameterise a LoRa simulation to study scalability, showing that a typical smart city deployment can support 120 nodes per 3.8 ha, which is not sufficient for future IoT deployments.
Abstract: New Internet of Things (IoT) technologies such as Long Range (LoRa) are emerging which enable power efficient wireless communication over very long distances. Devices typically communicate directly to a sink node which removes the need of constructing and maintaining a complex multi-hop network. Given the fact that a wide area is covered and that all devices communicate directly to a few sink nodes a large number of nodes have to share the communication medium. LoRa provides for this reason a range of communication options (centre frequency, spreading factor, bandwidth, coding rates) from which a transmitter can choose. Many combination settings are orthogonal and provide simultaneous collision free communications. Nevertheless, there is a limit regarding the number of transmitters a LoRa system can support. In this paper we investigate the capacity limits of LoRa networks. Using experiments we develop models describing LoRa communication behaviour. We use these models to parameterise a LoRa simulation to study scalability. Our experiments show that a typical smart city deployment can support 120 nodes per 3.8 ha, which is not sufficient for future IoT deployments. LoRa networks can scale quite well, however, if they use dynamic communication parameter selection and/or multiple sinks.
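The models in such studies build on deterministic LoRa link behaviour; as one concrete piece, here is a sketch of the standard Semtech time-on-air formula as a function of the parameters the paper varies (spreading factor, bandwidth, coding rate). The payload size and defaults below are arbitrary example values.

```python
# Standard Semtech LoRa time-on-air formula (SX1272/76 datasheet form);
# payload size and defaults are example values.
from math import ceil

def lora_airtime(payload_bytes, sf=7, bw_hz=125_000, cr=1,
                 preamble_symbols=8, explicit_header=True, low_dr_optimize=False):
    """Time on air in seconds for one LoRa frame."""
    t_sym = (2 ** sf) / bw_hz
    t_preamble = (preamble_symbols + 4.25) * t_sym
    h = 0 if explicit_header else 1
    de = 1 if low_dr_optimize else 0
    n_payload = 8 + max(
        ceil((8 * payload_bytes - 4 * sf + 28 + 16 - 20 * h)
             / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return t_preamble + n_payload * t_sym

# Higher spreading factors dramatically increase airtime, hence collisions:
for sf in range(7, 13):
    print(f"SF{sf}: {1000 * lora_airtime(20, sf=sf):7.1f} ms")
```

Longer airtimes at high spreading factors mean each frame occupies the shared medium longer, which is the scalability pressure the simulation quantifies.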

Proceedings ArticleDOI
01 Jul 2017
TL;DR: In this paper, a fine discretization of the 3D space around the subject is proposed, with a ConvNet trained to predict per-voxel likelihoods for each joint.
Abstract: This paper addresses the challenge of 3D human pose estimation from a single color image. Despite the general success of the end-to-end learning paradigm, top performing approaches employ a two-step solution consisting of a Convolutional Network (ConvNet) for 2D joint localization and a subsequent optimization step to recover 3D pose. In this paper, we identify the representation of 3D pose as a critical issue with current ConvNet approaches and make two important contributions towards validating the value of end-to-end learning for this task. First, we propose a fine discretization of the 3D space around the subject and train a ConvNet to predict per voxel likelihoods for each joint. This creates a natural representation for 3D pose and greatly improves performance over the direct regression of joint coordinates. Second, to further improve upon initial estimates, we employ a coarse-to-fine prediction scheme. This step addresses the large dimensionality increase and enables iterative refinement and repeated processing of the image features. The proposed approach outperforms all state-of-the-art methods on standard benchmarks achieving a relative error reduction greater than 30% on average. Additionally, we investigate using our volumetric representation in a related architecture which is suboptimal compared to our end-to-end approach, but is of practical interest, since it enables training when no image with corresponding 3D groundtruth is available, and allows us to present compelling results for in-the-wild images.
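An illustrative sketch of the volumetric representation (grid size and joint position are invented, and the soft-argmax decoding shown is a common convenience rather than necessarily the authors' exact decoding step): the space around the subject is discretized into a voxel grid, the per-joint target is a Gaussian over voxel likelihoods, and a continuous 3D estimate can be read off the predicted volume.

```python
# Invented sizes; soft-argmax decoding is illustrative, not the paper's exact step.
import numpy as np

D = 64                                    # voxels per axis
axis = np.arange(D)
zz, yy, xx = np.meshgrid(axis, axis, axis, indexing="ij")

def gaussian_target(joint_vox, sigma=2.0):
    """Per-voxel likelihood target for one joint at voxel coords (z, y, x)."""
    d2 = (zz - joint_vox[0])**2 + (yy - joint_vox[1])**2 + (xx - joint_vox[2])**2
    g = np.exp(-d2 / (2 * sigma**2))
    return g / g.sum()

def soft_argmax(volume):
    """Expected voxel coordinate under the softmaxed likelihood volume."""
    p = np.exp(volume - volume.max())
    p /= p.sum()
    return np.array([(p * c).sum() for c in (zz, yy, xx)])

target = gaussian_target((31.0, 40.5, 12.25))
print(soft_argmax(np.log(target + 1e-12)))   # ~ (31.0, 40.5, 12.25)
```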

Journal ArticleDOI
TL;DR: This article provides an overview of the main issues and challenges associated with energy demand reduction, summarises how this challenge is framed by key academic disciplines, indicates how these can provide complementary insights for policymakers, and argues that a sociotechnical perspective can provide a deeper understanding of the nature of this challenge and the processes through which it can be achieved.
Abstract: Most commentators expect improved energy efficiency and reduced energy demand to provide the dominant contribution to tackling global climate change. But at the global level, the correlation between increased wealth and increased energy consumption is very strong and the impact of policies to reduce energy demand is both limited and contested. Different academic disciplines approach energy demand reduction in different ways: emphasising some mechanisms and neglecting others, being more or less optimistic about the potential for reducing energy demand and providing insights that are more or less useful for policymakers. This article provides an overview of the main issues and challenges associated with energy demand reduction, summarises how this challenge is ‘framed’ by key academic disciplines, indicates how these can provide complementary insights for policymakers and argues that a ‘sociotechnical’ perspective can provide a deeper understanding of the nature of this challenge and the processes through which it can be achieved. The article integrates ideas from the natural sciences, economics, psychology, innovation studies and sociology but does not give equal weight to each. It argues that reducing energy demand will prove more difficult than is commonly assumed and current approaches will be insufficient to deliver the transformation required.

Journal ArticleDOI
TL;DR: This comprehensive meta-analysis reports a significant protective effect of a vegetarian diet against the incidence and/or mortality from ischemic heart disease and the incidence of total cancer; a vegan diet conferred a significantly reduced risk of incidence of total cancer.
Abstract: Background: Beneficial effects of vegetarian and vegan diets on health outcomes have been supposed in previous studies. Objectives: The aim of this study was to clarify the association between vegetarian and vegan diets, risk factors for chronic diseases, risk of all-cause mortality, and incidence and mortality from cardio-cerebrovascular diseases, total cancer and specific types of cancer (colorectal, breast, prostate and lung), through meta-analysis. Methods: A comprehensive search of Medline, EMBASE, Scopus, The Cochrane Library, and Google Scholar was conducted. Results: Eighty-six cross-sectional and 10 prospective cohort studies were included. The overall analysis among cross-sectional studies reported significantly reduced levels of body mass index, total cholesterol, LDL-cholesterol, and glucose in vegetarians and vegans versus omnivores. With regard to prospective cohort studies, the analysis showed a significantly reduced risk of incidence and/or mortality from ischemic heart disease (RR 0.75; 9...

Posted Content
TL;DR: This paper proposes a novel training objective that enables the convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data, and produces state of the art results for monocular depth estimation on the KITTI driving dataset.
Abstract: Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Exploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.
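A compact sketch of the two losses the abstract describes, under assumed tensor shapes and a simplified warping convention (the published method adds SSIM, smoothness terms, and multi-scale estimation, which are omitted here):

```python
# Assumed shapes; simplified vs. the published loss (no SSIM/smoothness terms).
import torch
import torch.nn.functional as F

def warp(img, disp, sign):
    """Sample img along x, shifted by sign * disp (disp in fractions of width)."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2).clone()
    grid[..., 0] = grid[..., 0] + sign * 2 * disp.squeeze(1)
    return F.grid_sample(img, grid, align_corners=True)

def monodepth_losses(left, right, disp_l, disp_r):
    left_rec = warp(right, disp_l, sign=-1)    # reconstruct left from right
    right_rec = warp(left, disp_r, sign=+1)    # reconstruct right from left
    photometric = F.l1_loss(left_rec, left) + F.l1_loss(right_rec, right)
    # Left-right consistency: the left disparity should match the right
    # disparity map warped into the left view, and vice versa.
    lr = F.l1_loss(disp_l, warp(disp_r, disp_l, sign=-1)) \
       + F.l1_loss(disp_r, warp(disp_l, disp_r, sign=+1))
    return photometric + lr

left, right = torch.rand(2, 3, 64, 128), torch.rand(2, 3, 64, 128)
disp_l = 0.05 * torch.rand(2, 1, 64, 128)
disp_r = 0.05 * torch.rand(2, 1, 64, 128)
print(monodepth_losses(left, right, disp_l, disp_r).item())
```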

Journal ArticleDOI
TL;DR: This editorial distinguishes between qualitative and descriptive research in the field of second language teaching and learning and compares the two types of research, noting that descriptive research is more concerned with what rather than how or why something has happened.
Abstract: Qualitative and descriptive research methods have been very common procedures for conducting research in many disciplines, including education, psychology, and the social sciences. These types of research have also begun to be increasingly used in the field of second language teaching and learning. The interest in such methods, particularly in qualitative research, is motivated in part by the recognition that L2 teaching and learning is complex. To uncover this complexity, we need to not only examine how learning takes place in general or what factors affect it, but also provide more in-depth examination and understanding of individual learners and their behaviors and experiences. Qualitative and descriptive research is well suited to the study of L2 classroom teaching, where conducting tightly controlled experimental research is hardly possible, and even if controlled experimental research is conducted in such settings, the generalizability of its findings to real classroom contexts is questionable. Therefore, Language Teaching Research receives many manuscripts that report qualitative or descriptive research. The terms qualitative research and descriptive research are sometimes used interchangeably. However, a distinction can be made between the two. One fundamental characteristic of both types of research is that they involve naturalistic data. That is, they attempt to study language learning and teaching in their naturally occurring settings without any intervention or manipulation of variables. Nonetheless, these two types of research may differ in terms of their goal, degree of control, and the way the data are analyzed. The goal of descriptive research is to describe a phenomenon and its characteristics. This research is more concerned with what rather than how or why something has happened. Therefore, observation and survey tools are often used to gather data (Gall, Gall, & Borg, 2007). In such research, the data may be collected qualitatively, but it is often analyzed quantitatively, using frequencies, percentages, averages, or other statistical analyses to determine relationships. Qualitative research, however, is more holistic and often involves a rich collection of data from various sources to gain a deeper understanding of individual participants, including their opinions, perspectives, and attitudes. Qualitative research collects data qualitatively, and the method of analysis is also primarily qualitative.

Posted Content
TL;DR: It is demonstrated that a multi-scale hierarchical organization of VQ-VAE, augmented with powerful priors over the latent codes, is able to generate samples with quality that rivals that of state of the art Generative Adversarial Networks on multifaceted datasets such as ImageNet, while not suffering from GAN's known shortcomings such as mode collapse and lack of diversity.
Abstract: We explore the use of Vector Quantized Variational AutoEncoder (VQ-VAE) models for large scale image generation. To this end, we scale and enhance the autoregressive priors used in VQ-VAE to generate synthetic samples of much higher coherence and fidelity than possible before. We use simple feed-forward encoder and decoder networks, making our model an attractive candidate for applications where the encoding and/or decoding speed is critical. Additionally, VQ-VAE requires sampling an autoregressive model only in the compressed latent space, which is an order of magnitude faster than sampling in the pixel space, especially for large images. We demonstrate that a multi-scale hierarchical organization of VQ-VAE, augmented with powerful priors over the latent codes, is able to generate samples with quality that rivals that of state of the art Generative Adversarial Networks on multifaceted datasets such as ImageNet, while not suffering from GAN's known shortcomings such as mode collapse and lack of diversity.
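For context, a minimal sketch of the vector-quantization bottleneck that VQ-VAE builds on (codebook size and dimensions are illustrative; the paper stacks this hierarchically and learns autoregressive priors over the resulting discrete codes):

```python
# Illustrative sizes; the paper's model is a multi-scale hierarchy of these.
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=512, dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.beta = beta

    def forward(self, z):                     # z: (batch, dim) encoder outputs
        d = torch.cdist(z, self.codebook.weight)   # distances to all codes
        idx = d.argmin(dim=1)                      # nearest-code indices
        z_q = self.codebook(idx)
        # Codebook and commitment losses pull codes and encodings together.
        loss = ((z_q - z.detach()) ** 2).mean() \
             + self.beta * ((z - z_q.detach()) ** 2).mean()
        # Straight-through estimator: gradients flow to the encoder as if
        # the quantized vector were the encoder output itself.
        z_q = z + (z_q - z).detach()
        return z_q, idx, loss

vq = VectorQuantizer()
z_q, idx, loss = vq(torch.randn(8, 64))
print(z_q.shape, idx.shape, loss.item() >= 0)
```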

Proceedings ArticleDOI
21 Apr 2019
TL;DR: VoteNet, proposed in this paper, is an end-to-end 3D object detection network based on a synergy of deep point set networks and Hough voting that achieves state-of-the-art performance on two large datasets of real 3D scans.
Abstract: Current 3D object detection methods are heavily influenced by 2D detectors. In order to leverage architectures in 2D detectors, they often convert 3D point clouds to regular grids (i.e., to voxel grids or to bird's eye view images), or rely on detection in 2D images to propose 3D boxes. Few works have attempted to directly detect objects in point clouds. In this work, we return to first principles to construct a 3D detection pipeline for point cloud data and as generic as possible. However, due to the sparse nature of the data -- samples from 2D manifolds in 3D space -- we face a major challenge when directly predicting bounding box parameters from scene points: a 3D object centroid can be far from any surface point thus hard to regress accurately in one step. To address the challenge, we propose VoteNet, an end-to-end 3D object detection network based on a synergy of deep point set networks and Hough voting. Our model achieves state-of-the-art 3D detection on two large datasets of real 3D scans, ScanNet and SUN RGB-D with a simple design, compact model size and high efficiency. Remarkably, VoteNet outperforms previous methods by using purely geometric information without relying on color images.
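An illustrative toy sketch of the voting step (the clustering below is a simple radius grouping standing in for the learned vote aggregation in the paper; all numbers are invented): each seed point regresses an offset to its object centre, and clustered votes become detection proposals.

```python
# Toy Hough-voting sketch; radius grouping stands in for learned aggregation.
import numpy as np

def cluster_votes(seeds, offsets, radius=0.3):
    votes = seeds + offsets                  # each point votes for a centre
    unused = np.ones(len(votes), dtype=bool)
    proposals = []
    for i in range(len(votes)):
        if not unused[i]:
            continue
        group = unused & (np.linalg.norm(votes - votes[i], axis=1) < radius)
        proposals.append(votes[group].mean(axis=0))  # cluster mean = proposal
        unused &= ~group
    return np.array(proposals)

# Two objects: surface seeds vote toward their (noisy) centres.
rng = np.random.default_rng(0)
centres = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 1.0]])
seeds = np.repeat(centres, 50, axis=0) + rng.normal(0, 0.5, (100, 3))
offsets = np.repeat(centres, 50, axis=0) - seeds + rng.normal(0, 0.05, (100, 3))
print(cluster_votes(seeds, offsets).round(2))   # two proposals near the centres
```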

Proceedings ArticleDOI
18 Jun 2018
TL;DR: This paper introduces binary segmentation masks to construct synthetic RGB-Mask pairs as inputs, designs a mask-guided contrastive attention model (MGCAM) to learn features separately from the body and background regions, and proposes a novel region-level triplet loss to restrain the features learnt from different regions.
Abstract: Person Re-identification (ReID) is an important yet challenging task in computer vision. Due to the diverse background clutters, variations on viewpoints and body poses, it is far from solved. How to extract discriminative and robust features invariant to background clutters is the core problem. In this paper, we first introduce the binary segmentation masks to construct synthetic RGB-Mask pairs as inputs, then we design a mask-guided contrastive attention model (MGCAM) to learn features separately from the body and background regions. Moreover, we propose a novel region-level triplet loss to restrain the features learnt from different regions, i.e., pulling the features from the full image and body region close, whereas pushing the features from backgrounds away. We may be the first one to successfully introduce the binary mask into person ReID task and the first one to propose region-level contrastive learning. We evaluate the proposed method on three public datasets, including MARS, Market-1501 and CUHK03. Extensive experimental results show that the proposed method is effective and achieves the state-of-the-art results. Mask and code will be released upon request.
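A small sketch of the region-level triplet idea under assumed feature shapes (the margin, distance, and exact pulling/pushing formulation here are illustrative, not the paper's exact loss):

```python
# Illustrative loss form: pull full-image and body features together,
# push background features away.
import torch
import torch.nn.functional as F

def region_triplet_loss(f_full, f_body, f_bg, margin=0.3):
    d_pos = F.pairwise_distance(f_full, f_body)   # full image vs body region
    d_neg = F.pairwise_distance(f_full, f_bg)     # full image vs background
    return F.relu(d_pos - d_neg + margin).mean()

loss = region_triplet_loss(torch.randn(16, 128), torch.randn(16, 128),
                           torch.randn(16, 128))
print(loss.item())
```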

Posted Content
TL;DR: This is the first research effort to exploit the feature learning capabilities of deep neural networks to learn representative hash codes that address the domain adaptation problem; a novel deep learning framework is proposed that can exploit labeled source data and unlabeled target data to learn informative hash codes and accurately classify unseen target data.
Abstract: In recent years, deep neural networks have emerged as a dominant machine learning tool for a wide variety of application domains. However, training a deep neural network requires a large amount of labeled data, which is an expensive process in terms of time, labor and human expertise. Domain adaptation or transfer learning algorithms address this challenge by leveraging labeled data in a different, but related source domain, to develop a model for the target domain. Further, the explosive growth of digital data has posed a fundamental challenge concerning its storage and retrieval. Due to its storage and retrieval efficiency, recent years have witnessed a wide application of hashing in a variety of computer vision applications. In this paper, we first introduce a new dataset, Office-Home, to evaluate domain adaptation algorithms. The dataset contains images of a variety of everyday objects from multiple domains. We then propose a novel deep learning framework that can exploit labeled source data and unlabeled target data to learn informative hash codes, to accurately classify unseen target data. To the best of our knowledge, this is the first research effort to exploit the feature learning capabilities of deep neural networks to learn representative hash codes to address the domain adaptation problem. Our extensive empirical studies on multiple transfer tasks corroborate the usefulness of the framework in learning efficient hash codes which outperform existing competitive baselines for unsupervised domain adaptation.
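A rough sketch of the hashing ingredient (the loss form, relaxation, and sizes below are illustrative assumptions, not the paper's objective): a network emits real-valued codes, a tanh relaxation approximates binary bits during training, and a pairwise loss makes codes of same-class samples similar; unseen samples are then assigned codes by taking signs.

```python
# Illustrative pairwise hashing loss; not the paper's exact objective.
import torch
import torch.nn.functional as F

def pairwise_hash_loss(codes, labels):
    """codes: (N, K) pre-binarization outputs; labels: (N,) class ids."""
    b = torch.tanh(codes)                              # relaxed binary bits
    sim = (labels[:, None] == labels[None, :]).float() # 1 if same class
    inner = b @ b.t() / b.shape[1]                     # normalized, in [-1, 1]
    return F.mse_loss(inner, 2 * sim - 1)              # match +/-1 targets

codes = torch.randn(32, 64, requires_grad=True)
labels = torch.randint(0, 10, (32,))
print(pairwise_hash_loss(codes, labels).item())

# At retrieval time, hash codes are obtained by binarizing: torch.sign(codes).
```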