
Proceedings ArticleDOI
18 Jun 2018
TL;DR: Scale Normalization for Image Pyramids (SNIP), as discussed by the authors, is a training scheme that selectively back-propagates the gradients of object instances of different sizes as a function of the image scale, improving detection under extreme scale variation.
Abstract: An analysis of different techniques for recognizing and detecting objects under extreme scale variation is presented. Scale specific and scale invariant design of detectors are compared by training them with different configurations of input data. By evaluating the performance of different network architectures for classifying small objects on ImageNet, we show that CNNs are not robust to changes in scale. Based on this analysis, we propose to train and test detectors on the same scales of an image-pyramid. Since small and large objects are difficult to recognize at smaller and larger scales respectively, we present a novel training scheme called Scale Normalization for Image Pyramids (SNIP) which selectively back-propagates the gradients of object instances of different sizes as a function of the image scale. On the COCO dataset, our single model performance is 45.7% and an ensemble of 3 networks obtains an mAP of 48.3%. We use off-the-shelf ImageNet-1000 pre-trained models and only train with bounding box supervision. Our submission won the Best Student Entry in the COCO 2017 challenge. Code will be made available at http://bit.ly/2yXVg4c.
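The gradient-selection rule at the heart of SNIP can be illustrated in a few lines. The sketch below is an assumption-laden paraphrase, not the released code: the VALID_RANGES table, the use of sqrt-area as the size measure, and the mean reduction are all illustrative choices.

```python
import numpy as np

# Hypothetical per-scale valid size ranges (sqrt of box area, in pixels
# after resizing); the paper's actual ranges differ per resolution.
VALID_RANGES = {480: (0, 80), 800: (40, 160), 1400: (120, np.inf)}

def snip_instance_mask(box_sizes, scale):
    """Mark instances whose resized size falls inside the valid range
    for this image scale; the rest are excluded from back-propagation."""
    lo, hi = VALID_RANGES[scale]
    return (box_sizes >= lo) & (box_sizes <= hi)

# Usage: gate the per-instance detection losses before reduction.
sizes = np.array([25.0, 90.0, 300.0])   # sqrt(area) of boxes at scale 800
losses = np.array([0.7, 0.4, 1.2])      # per-instance loss values
mask = snip_instance_mask(sizes, 800)
loss = (losses * mask).sum() / max(mask.sum(), 1)  # only mid-size boxes count
```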

662 citations


Posted Content
TL;DR: It is shown that a variant of the stacked-sparse denoising autoencoder can learn from synthetically darkened and noise-added training examples to adaptively enhance images taken in naturally low-light environments and/or degraded by hardware.
Abstract: In surveillance, monitoring and tactical reconnaissance, gathering the right visual information from a dynamic environment and accurately processing such data are essential ingredients to making informed decisions which determine the success of an operation. Camera sensors are often cost-limited in their ability to clearly capture objects without defects from images or videos taken in a poorly-lit environment. The goal in many applications is to enhance the brightness and contrast and reduce the noise content of such images in an on-board, real-time manner. We propose a deep autoencoder-based approach to identify signal features from low-light images and adaptively brighten images without over-amplifying the lighter parts (i.e., without saturating image pixels) in high dynamic range. We show that a variant of the recently proposed stacked-sparse denoising autoencoder can learn to adaptively enhance and denoise from synthetically darkened and noisy training examples. The network can then be successfully applied to naturally low-light and/or hardware-degraded images. Results show significant credibility of deep learning based approaches both visually and by quantitative comparison with various popular enhancing, state-of-the-art denoising and hybrid enhancing-denoising techniques.
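A minimal sketch of that training idea, assuming a fully connected denoising autoencoder on flattened grayscale patches; the 17x17 patch size, layer widths, gamma darkening, and noise level are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingAE(nn.Module):
    """Autoencoder trained to invert synthetic low-light degradation."""
    def __init__(self, n=17 * 17):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n, 512), nn.ReLU(),
                                 nn.Linear(512, 128), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(128, 512), nn.ReLU(),
                                 nn.Linear(512, n), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

def degrade(x, gamma=3.0, sigma=0.05):
    """Synthetic darkening (gamma) plus additive Gaussian noise."""
    return (x ** gamma + sigma * torch.randn_like(x)).clamp(0, 1)

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(64, 17 * 17)            # stand-in for clean patches
loss = F.mse_loss(model(degrade(clean)), clean)
loss.backward()
opt.step()
```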

662 citations


Journal ArticleDOI
24 Apr 2020
TL;DR: In this article, the authors discuss the potential implications for daily travel patterns of social distancing, i.e., reducing interactions between individuals in order to slow down the spread of the virus, which has become the new norm.
Abstract: The spread of the COVID-19 virus has resulted in unprecedented measures restricting travel and activity participation in many countries. Social distancing, i.e., reducing interactions between individuals in order to slow down the spread of the virus, has become the new norm. In this viewpoint I will discuss the potential implications of social distancing on daily travel patterns. Avoiding social contact might completely change the number and types of out-of-home activities people perform, and how people reach these activities. It can be expected that the demand for travel will reduce and that people will travel less by public transport. Social distancing might negatively affect subjective well-being and health status, as it might result in social isolation and limited physical activity. As a result, walking and cycling, recreationally or utilitarian, can be important ways to maintain satisfactory levels of health and well-being. Policymakers and planners should consequently try to encourage active travel, while public transport operators should focus on creating ways to safely use public transport.

662 citations


Posted Content
TL;DR: This article showed that gradient descent converges at a global linear rate to the global optimum for two-layer fully connected ReLU-activated neural networks, where over-parameterization and random initialization jointly restrict every weight vector to be close to its initialization for all iterations.
Abstract: One of the mysteries in the success of neural networks is that randomly initialized first-order methods like gradient descent can achieve zero training loss even though the objective function is non-convex and non-smooth. This paper demystifies this surprising phenomenon for two-layer fully connected ReLU-activated neural networks. For an $m$ hidden node shallow neural network with ReLU activation and $n$ training data, we show that as long as $m$ is large enough and no two inputs are parallel, randomly initialized gradient descent converges to a globally optimal solution at a linear convergence rate for the quadratic loss function. Our analysis relies on the following observation: over-parameterization and random initialization jointly restrict every weight vector to be close to its initialization for all iterations, which allows us to exploit a strong convexity-like property to show that gradient descent converges at a global linear rate to the global optimum. We believe these insights are also useful in analyzing deep models and other first-order methods.
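The flavor of the result can be written compactly. The bound below is a schematic form of the kind of linear-rate guarantee the paper proves; the constants and the exact over-parameterization requirement on $m$ are omitted here, so treat it as an illustration rather than the precise theorem statement.

```latex
% Schematic linear-rate bound: u(k) are the network's predictions at
% GD iteration k, y the labels, \lambda_0 > 0 the least eigenvalue of
% the limiting Gram matrix, and \eta a sufficiently small step size.
\[
  \lVert u(k) - y \rVert_2^2 \;\le\;
  \left(1 - \tfrac{\eta \lambda_0}{2}\right)^{k}
  \lVert u(0) - y \rVert_2^2 .
\]
```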

662 citations


Journal ArticleDOI
TL;DR: The results indicate that the CFIR has been used across a wide range of studies, though more in-depth use of the CFIR may help advance implementation science.
Abstract: In 2009, Damschroder et al. developed the Consolidated Framework for Implementation Research (CFIR), which provides a comprehensive listing of constructs thought to influence implementation. This systematic review assesses the extent to which the CFIR's use in implementation research fulfills goals set forth by Damschroder et al. in terms of breadth of use, depth of application, and contribution to implementation research. We searched Scopus and Web of Science for publications that cited the original CFIR publication by Damschroder et al. (Implement Sci 4:50, 2009) and downloaded each unique result for review. After applying exclusion criteria, the final articles were empirical studies published in peer-reviewed journals that used the CFIR in a meaningful way (i.e., used the CFIR to guide data collection, measurement, coding, analysis, and/or reporting). A framework analysis approach was used to guide abstraction and synthesis of the included articles. Twenty-six of 429 unique articles (6%) met inclusion criteria. We found great breadth in CFIR application; the CFIR was applied across a wide variety of study objectives, settings, and units of analysis. There was also variation in the methods of the included studies (mixed methods (n = 13); qualitative (n = 10); quantitative (n = 3)). Depth of CFIR application revealed some areas for improvement. Few studies (n = 3) reported justification for the selection of the CFIR constructs used; the majority of studies (n = 14) used the CFIR to guide data analysis only; and few studies investigated any outcomes (n = 11). Finally, reflections on the contribution of the CFIR to implementation research were scarce. Our results indicate that the CFIR has been used across a wide range of studies, though more in-depth use of the CFIR may help advance implementation science. To harness its potential, researchers should consider how to most meaningfully use the CFIR. Specific recommendations for applying the CFIR include explicitly justifying the selection of CFIR constructs; integrating the CFIR throughout the research process (in study design, data collection, and analysis); and appropriately using the CFIR given the phase of implementation of the research (e.g., if the research is post-implementation, using the CFIR to link determinants of implementation to outcomes).

662 citations


Posted ContentDOI
23 Jan 2020-bioRxiv
TL;DR: This paper reports the identification and characterization of a novel coronavirus (nCoV-2019) that caused an epidemic of acute respiratory syndrome in humans in Wuhan, China, and confirms that this novel CoV uses the same cell entry receptor, ACE2, as SARS-CoV.
Abstract: Since the SARS outbreak 18 years ago, a large number of severe acute respiratory syndrome related coronaviruses (SARSr-CoV) have been discovered in their natural reservoir host, bats. Previous studies indicated that some of those bat SARSr-CoVs have the potential to infect humans. Here we report the identification and characterization of a novel coronavirus (nCoV-2019) which caused an epidemic of acute respiratory syndrome in humans in Wuhan, China. The epidemic, which started on December 12th, 2019, had caused 198 laboratory-confirmed infections with three fatal cases by January 20th, 2020. Full-length genome sequences were obtained from five patients at the early stage of the outbreak. They are almost identical to each other and share 79.5% sequence identity with SARS-CoV. Furthermore, it was found that nCoV-2019 is 96% identical at the whole-genome level to a bat coronavirus. Pairwise protein sequence analysis of seven conserved non-structural proteins shows that this virus belongs to the species of SARSr-CoV. The nCoV-2019 virus was then isolated from the bronchoalveolar lavage fluid of a critically ill patient, and could be neutralized by sera from several patients. Importantly, we have confirmed that this novel CoV uses the same cell entry receptor, ACE2, as SARS-CoV.
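For readers unfamiliar with figures like "79.5% sequence identity", here is a toy illustration of how percent identity is computed between two pre-aligned sequences. The function and the hard-coded example are hypothetical; real genome comparisons use alignment tools rather than a raw column count.

```python
def percent_identity(a: str, b: str) -> float:
    """Fraction of aligned columns with identical residues, as a
    percentage; gaps ('-') never count as matches."""
    assert len(a) == len(b), "sequences must be pre-aligned"
    matches = sum(x == y and x != "-" for x, y in zip(a, b))
    return 100.0 * matches / len(a)

print(percent_identity("ATGCGT-A", "ATGAGTTA"))  # 75.0
```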

662 citations



Journal ArticleDOI
21 Aug 2015-Science
TL;DR: This work states that defining forest health integrates utilitarian and ecosystem measures of forest condition and function, implemented across a range of spatial scales, and that it is particularly critical to identify thresholds for rapid forest decline.
Abstract: Humans rely on healthy forests to supply energy, building materials, and food and to provide services such as storing carbon, hosting biodiversity, and regulating climate. Defining forest health integrates utilitarian and ecosystem measures of forest condition and function, implemented across a range of spatial scales. Although native forests are adapted to some level of disturbance, all forests now face novel stresses in the form of climate change, air pollution, and invasive pests. Detecting how intensification of these stresses will affect the trajectory of forests is a major scientific challenge that requires developing systems to assess the health of global forests. It is particularly critical to identify thresholds for rapid forest decline, because it can take many decades for forests to restore the services that they provide.

662 citations


Book ChapterDOI
08 Oct 2016
TL;DR: The Spatial Memory Network, a novel spatial attention architecture that aligns words with image patches in the first hop, is proposed, and improved results are obtained over a strong deep baseline model that concatenates image and question features to predict the answer.
Abstract: We address the problem of Visual Question Answering (VQA), which requires joint image and language understanding to answer a question about a given photograph. Recent approaches have applied deep image captioning methods based on convolutional-recurrent networks to this problem, but have failed to model spatial inference. To remedy this, we propose a model we call the Spatial Memory Network and apply it to the VQA task. Memory networks are recurrent neural networks with an explicit attention mechanism that selects certain parts of the information stored in memory. Our Spatial Memory Network stores neuron activations from different spatial regions of the image in its memory, and uses attention to choose regions relevant for computing the answer. We propose a novel question-guided spatial attention architecture that looks for regions relevant to either individual words or the entire question, repeating the process over multiple recurrent steps, or “hops”. To better understand the inference process learned by the network, we design synthetic questions that specifically require spatial inference and visualize the network’s attention. We evaluate our model on two available visual question answering datasets and obtain improved results.
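A single question-guided attention "hop" of the kind the abstract describes can be sketched in a few lines. This is an illustrative reconstruction under assumed shapes, not the paper's architecture; the function name and all dimensions are made up for the example.

```python
import torch
import torch.nn.functional as F

def spatial_attention(regions, query):
    """One attention 'hop': score each spatial region against a question
    (or word) embedding, softmax over regions, and return the weighted
    evidence vector. regions: (R, d); query: (d,)."""
    scores = regions @ query                # (R,) relevance per region
    weights = F.softmax(scores, dim=0)      # attention over regions
    return weights @ regions                # (d,) attended evidence

regions = torch.randn(49, 256)              # e.g. features of a 7x7 grid
query = torch.randn(256)
evidence = spatial_attention(regions, query)  # would feed the next hop
```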

662 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the quantum computational complexity of a holographic state is given by the classical action of a region in the bulk (the "Wheeler-DeWitt" patch).
Abstract: Our earlier paper "Complexity Equals Action" conjectured that the quantum computational complexity of a holographic state is given by the classical action of a region in the bulk (the "Wheeler-DeWitt" patch). We provide calculations for the results quoted in that paper, explain how it fits into a broader (tensor) network of ideas, and elaborate on the hypothesis that black holes are the fastest computers in nature.
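In schematic form, the conjecture equates boundary-state complexity with the bulk action of the Wheeler-DeWitt patch; the normalization below follows the commonly quoted form of the proposal.

```latex
% Complexity = Action (CA) conjecture, schematic form: the complexity
% of the boundary state equals the classical action evaluated on the
% Wheeler-DeWitt patch, in units of \pi\hbar.
\[
  \mathcal{C}\bigl(|\psi\rangle\bigr) \;=\; \frac{A_{\mathrm{WDW}}}{\pi\hbar}.
\]
```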

662 citations


Patent
Samuli Laine, Timo Aila
29 Sep 2017
TL;DR: In this paper, a method, computer readable medium, and system are disclosed for implementing a temporal ensembling model for training a deep neural network; training comprises analyzing a plurality of input vectors with the deep neural network to generate prediction vectors, computing a loss term for each input vector by combining a supervised component and an unsupervised component according to a weighting function, and updating the target prediction vector associated with each input vector.
Abstract: A method, computer readable medium, and system are disclosed for implementing a temporal ensembling model for training a deep neural network. The method for training the deep neural network includes the steps of receiving a set of training data for a deep neural network and training the deep neural network utilizing the set of training data by: analyzing the plurality of input vectors by the deep neural network to generate a plurality of prediction vectors, and, for each prediction vector in the plurality of prediction vectors corresponding to the particular input vector, computing a loss term associated with the particular input vector by combining a supervised component and an unsupervised component according to a weighting function and updating the target prediction vector associated with the particular input vector.
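The claimed loss structure (a supervised term plus a weighted unsupervised consistency term against accumulated target predictions) can be sketched as follows. This is one plausible reading of the claims, not the patented implementation: the per-batch EMA update, the constant weight w (in practice a ramp-up schedule), and alpha = 0.6 are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def temporal_ensembling_loss(model, x, y, labeled, Z, epoch, alpha=0.6, w=1.0):
    """Supervised cross-entropy on labeled samples plus an unsupervised
    MSE term pulling current predictions toward a bias-corrected
    exponential moving average (Z) of past predictions."""
    logits = model(x)
    p = F.softmax(logits, dim=1)
    target = (Z / (1 - alpha ** (epoch + 1))).detach()  # startup-bias fix
    supervised = F.cross_entropy(logits[labeled], y[labeled])
    unsupervised = F.mse_loss(p, target)
    Z.mul_(alpha).add_((1 - alpha) * p.detach())        # update targets
    return supervised + w * unsupervised
```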

Posted Content
TL;DR: In this article, a discriminative convolutional network for generating object proposals is proposed; it is trained jointly with two objectives: given an image patch, the first part outputs a class-agnostic segmentation mask, while the second part outputs the likelihood of the patch being centered on a full object.
Abstract: Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier. Such approaches have been shown to be fast while achieving the state of the art in detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach based on a discriminative convolutional network. Our model is trained jointly with two objectives: given an image patch, the first part of the system outputs a class-agnostic segmentation mask, while the second part of the system outputs the likelihood of the patch being centered on a full object. At test time, the model is efficiently applied on the whole test image and generates a set of segmentation masks, each of them being assigned a corresponding object likelihood score. We show that our model yields significant improvements over state-of-the-art object proposal algorithms. In particular, compared to previous approaches, our model obtains substantially higher object recall using fewer proposals. We also show that our model is able to generalize to categories it has not seen during training. Unlike all previous approaches for generating object masks, we do not rely on edges, superpixels, or any other form of low-level segmentation.
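A schematic two-headed network with the joint objective described above might look like the sketch below; the trunk, head sizes, and the 56x56 mask resolution are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProposalNet(nn.Module):
    """Shared trunk with two heads: class-agnostic mask logits and a
    single objectness logit (is the patch centered on a full object?)."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(8), nn.Flatten())
        self.mask_head = nn.Linear(32 * 8 * 8, 56 * 56)
        self.score_head = nn.Linear(32 * 8 * 8, 1)

    def forward(self, x):
        h = self.trunk(x)
        return self.mask_head(h), self.score_head(h)

def joint_loss(mask_logits, score_logit, mask_gt, obj_gt, lam=1.0):
    """Mask loss only counts for positive patches (obj_gt == 1);
    the objectness loss is computed for every patch."""
    per_pixel = F.binary_cross_entropy_with_logits(
        mask_logits, mask_gt, reduction="none").mean(dim=1)
    mask_term = (per_pixel * obj_gt.squeeze(1)).mean()
    score_term = F.binary_cross_entropy_with_logits(score_logit, obj_gt)
    return mask_term + lam * score_term
```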

Journal ArticleDOI
12 Feb 2015-Cell
TL;DR: It is reported that, in the absence of focal adhesions and under conditions of confinement, mesenchymal cells can spontaneously switch to a fast amoeboid migration phenotype and, interestingly, transformed cells are more prone to adopt this fast migration mode.

Journal ArticleDOI
TL;DR: Shortcuts to adiabaticity (STA), as discussed in this paper, are systematic approaches to accomplish the same final state transfer as adiabatic evolution in a faster manner, and are used in atomic and molecular physics.
Abstract: Adiabatic evolution along the instantaneous eigenstate of a time-dependent Hamiltonian is used for robust and high fidelity state transfer in atomic and molecular physics. Shortcuts to adiabaticity (STA) are systematic approaches to accomplish the same final state transfer in a faster manner. This article presents an introduction to STA and reviews different theoretical approaches and applications of STA to a range of scientific and engineering tasks in quantum physics and beyond.
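One of the standard STA constructions such a review covers is counterdiabatic (transitionless) driving; the auxiliary Hamiltonian below is the textbook form of that term, included here as a representative example rather than a summary of the whole review.

```latex
% Counterdiabatic (transitionless) driving: adding H_1(t) to a reference
% Hamiltonian H_0(t) with instantaneous eigenstates |n(t)> makes the
% dynamics follow those eigenstates exactly, at arbitrary speed.
\[
  H_1(t) \;=\; i\hbar \sum_{n} \Bigl( |\partial_t n\rangle\langle n|
  \;-\; \langle n | \partial_t n \rangle\, |n\rangle\langle n| \Bigr).
\]
```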

Journal ArticleDOI
TL;DR: This paper systematically examines computational intelligence-based transfer learning techniques and clusters related technique developments into four main categories, providing state-of-the-art knowledge that will directly support researchers and practice-based professionals in understanding the developments in computational intelligence-based transfer learning research and applications.
Abstract: Transfer learning aims to provide a framework to utilize previously-acquired knowledge to solve new but similar problems much more quickly and effectively. In contrast to classical machine learning methods, transfer learning methods exploit the knowledge accumulated from data in auxiliary domains to facilitate predictive modeling consisting of different data patterns in the current domain. To improve the performance of existing transfer learning methods and handle the knowledge transfer process in real-world systems, computational intelligence has recently been applied in transfer learning. This paper systematically examines computational intelligence-based transfer learning techniques and clusters related technique developments into four main categories: (a) neural network-based transfer learning; (b) Bayes-based transfer learning; (c) fuzzy transfer learning, and (d) applications of computational intelligence-based transfer learning. By providing state-of-the-art knowledge, this survey will directly support researchers and practice-based professionals to understand the developments in computational intelligence-based transfer learning research and applications.

Journal ArticleDOI
TL;DR: This in vitro model replicated results from past animal and human studies, including the demonstration that probiotic and antibiotic therapies can suppress villus injury induced by pathogenic bacteria, and provides proof-of-principle that the microfluidic gut-on-a-chip device can be used to create human intestinal disease models and gain new insights into gut pathophysiology.
Abstract: A human gut-on-a-chip microdevice was used to coculture multiple commensal microbes in contact with living human intestinal epithelial cells for more than a week in vitro and to analyze how gut microbiome, inflammatory cells, and peristalsis-associated mechanical deformations independently contribute to intestinal bacterial overgrowth and inflammation. This in vitro model replicated results from past animal and human studies, including demonstration that probiotic and antibiotic therapies can suppress villus injury induced by pathogenic bacteria. By ceasing peristalsis-like motions while maintaining luminal flow, lack of epithelial deformation was shown to trigger bacterial overgrowth similar to that observed in patients with ileus and inflammatory bowel disease. Analysis of intestinal inflammation on-chip revealed that immune cells and lipopolysaccharide endotoxin together stimulate epithelial cells to produce four proinflammatory cytokines (IL-8, IL-6, IL-1β, and TNF-α) that are necessary and sufficient to induce villus injury and compromise intestinal barrier function. Thus, this human gut-on-a-chip can be used to analyze contributions of microbiome to intestinal pathophysiology and dissect disease mechanisms in a controlled manner that is not possible using existing in vitro systems or animal models.

Journal ArticleDOI
TL;DR: Cisplatin and radiotherapy should be used as the standard of care for HPV-positive low-risk patients who are able to tolerate cisplatin, and cetuximab showed significant detriment in terms of tumour control.

Journal ArticleDOI
TL;DR: The size of the ego-depletion effect was small with 95% confidence intervals (CIs) that encompassed zero (d = 0.04, 95% CI [−0.07, 0.15]), and implications of the findings for the ego-depletion effect and the resource depletion model of self-control are discussed.
Abstract: Good self-control has been linked to adaptive outcomes such as better health, cohesive personal relationships, success in the workplace and at school, and less susceptibility to crime and addictions. In contrast, self-control failure is linked to maladaptive outcomes. Understanding the mechanisms by which self-control predicts behavior may assist in promoting better regulation and outcomes. A popular approach to understanding self-control is the strength or resource depletion model. Self-control is conceptualized as a limited resource that becomes depleted after a period of exertion, resulting in self-control failure. The model has typically been tested using a sequential-task experimental paradigm, in which people completing an initial self-control task have reduced self-control capacity and poorer performance on a subsequent task, a state known as ego depletion. Although a meta-analysis of ego-depletion experiments found a medium-sized effect, subsequent meta-analyses have questioned the size and existence of the effect and identified instances of possible bias. The analyses served as a catalyst for the current Registered Replication Report of the ego-depletion effect. Multiple laboratories (k = 23, total N = 2,141) conducted replications of a standardized ego-depletion protocol based on a sequential-task paradigm by Sripada et al. Meta-analysis of the studies revealed that the size of the ego-depletion effect was small with 95% confidence intervals (CIs) that encompassed zero (d = 0.04, 95% CI [−0.07, 0.15]). We discuss implications of the findings for the ego-depletion effect and the resource depletion model of self-control.
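For orientation, the pooled estimate and CI quoted above come from a meta-analysis across laboratories. The sketch below shows inverse-variance pooling with invented per-lab numbers; the report's actual model and data differ.

```python
import numpy as np

def inverse_variance_meta(d, se):
    """Pooled standardized mean difference and 95% CI from per-study
    effects d and standard errors se (fixed-effect pooling; the RRR's
    published analysis may use a different model)."""
    d, w = np.asarray(d), 1.0 / np.asarray(se) ** 2
    pooled = np.sum(w * d) / np.sum(w)
    se_p = np.sqrt(1.0 / np.sum(w))
    return pooled, (pooled - 1.96 * se_p, pooled + 1.96 * se_p)

# Hypothetical per-lab effect sizes and standard errors:
print(inverse_variance_meta([0.10, -0.05, 0.08, 0.02],
                            [0.12, 0.15, 0.10, 0.13]))
```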

Proceedings ArticleDOI
01 May 2018
TL;DR: This work presents a simple but efficient unsupervised objective to train distributed representations of sentences, which outperforms the state-of-the-art unsupervised models on most benchmark tasks, highlighting the robustness of the produced general-purpose sentence embeddings.
Abstract: The recent tremendous success of unsupervised word embeddings in a multitude of applications raises the obvious question if similar methods could be derived to improve embeddings (i.e. semantic representations) of word sequences as well. We present a simple but efficient unsupervised objective to train distributed representations of sentences. Our method outperforms the state-of-the-art unsupervised models on most benchmark tasks, highlighting the robustness of the produced general-purpose sentence embeddings.
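At inference time, methods in this family compose a sentence embedding by averaging learned word (and word-n-gram) vectors. The sketch below illustrates that composition step with toy lookup tables; the function name, the 300-dimensional size, and the bigram-only n-grams are assumptions.

```python
import numpy as np

def sentence_embedding(tokens, word_vecs, bigram_vecs, dim=300):
    """Average of the word vectors and bigram vectors present in the
    sentence; the vectors would come from unsupervised training."""
    vecs = [word_vecs[t] for t in tokens if t in word_vecs]
    vecs += [bigram_vecs[b] for b in zip(tokens, tokens[1:])
             if b in bigram_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

rng = np.random.default_rng(0)
word_vecs = {w: rng.normal(size=300) for w in "the cat sat".split()}
bigram_vecs = {("the", "cat"): rng.normal(size=300)}
emb = sentence_embedding("the cat sat".split(), word_vecs, bigram_vecs)
```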

Journal ArticleDOI
TL;DR: In this article, the authors take a first step towards an extended, interdisciplinary policy mix concept based on a review of the bodies of literature on innovation studies, environmental economics and policy analysis.

Journal ArticleDOI
10 Jan 2018-Nature
TL;DR: In this article, it was shown that the lowest exciton in caesium lead halide perovskites (CsPbX₃, with X = Cl, Br or I) involves a highly emissive triplet state.
Abstract: Nanostructured semiconductors emit light from electronic states known as excitons. For organic materials, Hund's rules state that the lowest-energy exciton is a poorly emitting triplet state. For inorganic semiconductors, similar rules predict an analogue of this triplet state known as the 'dark exciton'. Because dark excitons release photons slowly, hindering emission from inorganic nanostructures, materials that disobey these rules have been sought. However, despite considerable experimental and theoretical efforts, no inorganic semiconductors have been identified in which the lowest exciton is bright. Here we show that the lowest exciton in caesium lead halide perovskites (CsPbX₃, with X = Cl, Br or I) involves a highly emissive triplet state. We first use an effective-mass model and group theory to demonstrate the possibility of such a state existing, which can occur when the strong spin–orbit coupling in the conduction band of a perovskite is combined with the Rashba effect. We then apply our model to CsPbX₃ nanocrystals, and measure size- and composition-dependent fluorescence at the single-nanocrystal level. The bright triplet character of the lowest exciton explains the anomalous photon-emission rates of these materials, which emit about 20 and 1,000 times faster than any other semiconductor nanocrystal at room and cryogenic temperatures, respectively. The existence of this bright triplet exciton is further confirmed by analysis of the fine structure in low-temperature fluorescence spectra. For semiconductor nanocrystals, which are already used in lighting, lasers and displays, these excitons could lead to materials with brighter emission. More generally, our results provide criteria for identifying other semiconductors that exhibit bright excitons, with potential implications for optoelectronic devices.

Journal ArticleDOI
TL;DR: This paper reviews selected capabilities of HYDRUS implemented since 2008; new standard and nonstandard specialized add-on modules have significantly expanded the capabilities of the software.
Abstract: The HYDRUS-1D and HYDRUS (2D/3D) computer software packages are widely used finite-element models for simulating the one- and two- or three-dimensional movement of water, heat, and multiple solutes in variably saturated media, respectively. In 2008, Simůnek et al. (2008b) described the entire history of the development of the various HYDRUS programs and related models and tools such as STANMOD, RETC, ROSETTA, UNSODA, UNSATCHEM, HP1, and others. The objective of this manuscript is to review selected capabilities of HYDRUS that have been implemented since 2008. Our review is not limited to listing additional processes that were implemented in the standard computational modules, but also describes many new standard and nonstandard specialized add-on modules that significantly expanded the capabilities of the two software packages. We also review additional capabilities that have been incorporated into the graphical user interface (GUI) that supports the use of HYDRUS (2D/3D). Another objective of this manuscript is to review selected applications of the HYDRUS models such as evaluation of various irrigation schemes, evaluation of the effects of plant water uptake on groundwater recharge, assessing the transport of particle-like substances in the subsurface, and using the models in conjunction with various geophysical methods.
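For context, the core process model behind HYDRUS-1D is the one-dimensional Richards equation for variably saturated flow; the form below is the standard statement with a root-water-uptake sink, included here for orientation rather than quoted from the review.

```latex
% 1D Richards equation for variably saturated flow: water content
% \theta, pressure head h, hydraulic conductivity K(h), depth z
% (positive upward), and sink term S for root water uptake.
\[
  \frac{\partial \theta(h)}{\partial t}
  \;=\;
  \frac{\partial}{\partial z}\!\left[ K(h)\left(
  \frac{\partial h}{\partial z} + 1 \right) \right] \;-\; S(z,t).
\]
```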

Journal ArticleDOI
TL;DR: It is argued that neuroscientific evidence plays a critical role in understanding the mechanisms by which motivation and cognitive control interact, and is advocated for a view of control function that treats it as a domain of reward-based decision making.
Abstract: Research on cognitive control and executive function has long recognized the relevance of motivational factors. Recently, however, the topic has come increasingly to center stage, with a surge of new studies examining the interface of motivation and cognitive control. In the present article we survey research situated at this interface, considering work from cognitive and social psychology and behavioral economics, but with a particular focus on neuroscience research. We organize existing findings into three core areas, considering them in the light of currently vying theoretical perspectives. Based on the accumulated evidence, we advocate for a view of control function that treats it as a domain of reward-based decision making. More broadly, we argue that neuroscientific evidence plays a critical role in understanding the mechanisms by which motivation and cognitive control interact. Opportunities for further cross-fertilization between behavioral and neuroscientific research are highlighted.

Journal ArticleDOI
TL;DR: This paper provides a review of the main PSI algorithms proposed in the literature, describing the main approaches and the most important works devoted to single aspects of PSI, and discusses the main open PSI problems and the associated future research lines.
Abstract: Persistent Scatterer Interferometry (PSI) is a powerful remote sensing technique able to measure and monitor displacements of the Earth’s surface over time. Specifically, PSI is a radar-based technique that belongs to the group of differential interferometric Synthetic Aperture Radar (SAR). This paper provides a review of such PSI technique. It firstly recalls the basic principles of SAR interferometry, differential SAR interferometry and PSI. Then, a review of the main PSI algorithms proposed in the literature is provided, describing the main approaches and the most important works devoted to single aspects of PSI. A central part of this paper is devoted to the discussion of different characteristics and technical aspects of PSI, e.g. SAR data availability, maximum deformation rates, deformation time series, thermal expansion component of PSI observations, etc. The paper then goes through the most important PSI validation activities, which have provided valuable inputs for the PSI development and its acceptability at scientific, technical and commercial level. This is followed by a description of the main PSI applications developed in the last fifteen years. The paper concludes with a discussion of the main open PSI problems and the associated future research lines.
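The basic observable PSI works with is the interferometric phase; the standard decomposition below, with the deformation term proportional to line-of-sight displacement, summarizes the principle the review builds on (the notation is chosen here, not taken from the paper).

```latex
% Differential-interferometric phase decomposition used in PSI, with
% the deformation term proportional to the line-of-sight displacement
% d and the radar wavelength \lambda:
\[
  \Delta\phi \;=\; \phi_{\mathrm{defo}} + \phi_{\mathrm{topo}}
  + \phi_{\mathrm{atmo}} + \phi_{\mathrm{orbit}} + \phi_{\mathrm{noise}},
  \qquad
  \phi_{\mathrm{defo}} \;=\; \frac{4\pi}{\lambda}\, d .
\]
```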

Journal ArticleDOI
TL;DR: Micrometer-thick MXene membranes demonstrated ultrafast water flux of 37.4 L/(bar·h·m²) and differential sieving of salts depending on both the hydration radius and charge of the ions.
Abstract: Nanometer-thin sheets of 2D Ti₃C₂Tₓ (MXene) have been assembled into freestanding or supported membranes for the charge- and size-selective rejection of ions and molecules. MXene membranes with controllable thicknesses ranging from hundreds of nanometers to several micrometers exhibited flexibility, high mechanical strength, hydrophilic surfaces, and electrical conductivity that render them promising for separation applications. Micrometer-thick MXene membranes demonstrated ultrafast water flux of 37.4 L/(bar·h·m²) and differential sieving of salts depending on both the hydration radius and charge of the ions. Cations with a larger charge and hydration radii smaller than the interlayer spacing of MXene (∼6 Å) demonstrate an order of magnitude slower permeation compared to single-charged cations. Our findings may open a door for developing efficient and highly selective separation membranes from 2D carbides.
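As a sanity check on the units, permeance in L/(bar·h·m²) is simply collected volume normalized by applied pressure, time, and membrane area. The numbers below are hypothetical, chosen only to reproduce the reported figure.

```python
# Hypothetical measurement that would yield the reported permeance:
volume_L, pressure_bar, time_h, area_m2 = 3.74, 1.0, 1.0, 0.1
permeance = volume_L / (pressure_bar * time_h * area_m2)
print(permeance)  # 37.4 L/(bar*h*m^2)
```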

Journal ArticleDOI
TL;DR: An F-RAN is presented as a promising paradigm for the fifth generation wireless communication system to provide high spectral and energy efficiency and key techniques and their corresponding solutions, including transmission mode selection and interference suppression, are discussed.
Abstract: An F-RAN is presented in this article as a promising paradigm for the fifth generation wireless communication system to provide high spectral and energy efficiency. The core idea is to take full advantage of local radio signal processing, cooperative radio resource management, and distributed storing capabilities in edge devices, which can decrease the heavy burden on fronthaul and avoid large-scale radio signal processing in the centralized baseband unit pool. This article comprehensively presents the system architecture and key techniques of F-RANs. In particular, key techniques and their corresponding solutions, including transmission mode selection and interference suppression, are discussed. Open issues in terms of edge caching, software-defined networking, and network function virtualization are also identified.

Journal ArticleDOI
TL;DR: The abundance of microplastic pollution in sea salts was significantly higher than that in lake salts and rock/well salts, which indicates that sea products, such as sea salts, are contaminated by microplastics.
Abstract: Microplastics have been found in seas all over the world. We hypothesize that sea salts might contain microplastics, because they are directly supplied by seawater. To test our hypothesis, we collected 15 brands of sea salts, lake salts, and rock/well salts from supermarkets throughout China. The microplastics content was 550–681 particles/kg in sea salts, 43–364 particles/kg in lake salts, and 7–204 particles/kg in rock/well salts. In sea salts, fragments and fibers were the prevalent types of particles compared with pellets and sheets. Microplastics measuring less than 200 μm represented the majority of the particles, accounting for 55% of the total microplastics, and the most common microplastics were polyethylene terephthalate, followed by polyethylene and cellophane in sea salts. The abundance of microplastics in sea salts was significantly higher than that in lake salts and rock/well salts. This result indicates that sea products, such as sea salts, are contaminated by microplastics. To the best of ...
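The reported concentrations translate directly into rough exposure estimates. The calculation below assumes a daily salt intake of 5 g (the WHO recommended maximum; an assumption, not a figure from the paper).

```python
# Particles ingested per year from sea salt at the reported range:
particles_per_kg = (550, 681)       # measured range in sea salts
intake_kg_per_day = 5 / 1000        # assumed 5 g/day salt consumption
for c in particles_per_kg:
    print(round(c * intake_kg_per_day * 365), "particles/year")
# -> 1004 and 1243 particles per year at the low and high ends
```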

Proceedings Article
30 Oct 2018
TL;DR: DropBlock, as discussed by the authors, is a form of structured dropout where units in a contiguous region of a feature map are dropped together; applying DropBlock in skip connections in addition to the convolution layers further increases accuracy.
Abstract: Deep neural networks often work well when they are over-parameterized and trained with a massive amount of noise and regularization, such as weight decay and dropout. Although dropout is widely used as a regularization technique for fully connected layers, it is often less effective for convolutional layers. This lack of success of dropout for convolutional layers is perhaps due to the fact that activation units in convolutional layers are spatially correlated, so information can still flow through convolutional networks despite dropout. Thus a structured form of dropout is needed to regularize convolutional networks. In this paper, we introduce DropBlock, a form of structured dropout, where units in a contiguous region of a feature map are dropped together. We found that applying DropBlock in skip connections in addition to the convolution layers increases the accuracy. Also, gradually increasing the number of dropped units during training leads to better accuracy and more robustness to hyperparameter choices. Extensive experiments show that DropBlock works better than dropout in regularizing convolutional networks. On ImageNet classification, a ResNet-50 architecture with DropBlock achieves 78.13% accuracy, which is more than a 1.6% improvement on the baseline. On COCO detection, DropBlock improves the Average Precision of RetinaNet from 36.8% to 38.4%.
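A compact functional version of the scheme looks like the sketch below; it follows the paper's idea (drop contiguous blocks, then rescale like dropout) but simplifies the seed probability gamma, which the paper additionally corrects for feature-map edges.

```python
import torch
import torch.nn.functional as F

def dropblock(x, drop_prob=0.1, block_size=7, training=True):
    """Zero out contiguous block_size x block_size regions of an
    (N, C, H, W) feature map instead of independent units, then rescale
    so the expected activation magnitude is preserved."""
    if not training or drop_prob == 0.0:
        return x
    n, c, h, w = x.shape
    gamma = drop_prob / (block_size ** 2)          # seed-point probability
    seeds = (torch.rand(n, c, h, w, device=x.device) < gamma).float()
    # Dilate each seed into a full block via max pooling, then invert.
    blocks = F.max_pool2d(seeds, block_size, stride=1,
                          padding=block_size // 2)
    mask = (blocks == 0).float()
    return x * mask * mask.numel() / mask.sum().clamp(min=1.0)
```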

Journal ArticleDOI
TL;DR: This work shows that the highly stable, non-toxic and earth-abundant material ZrSiS has an electronic band structure that hosts several Dirac cones which form a Fermi surface with a diamond-shaped line of Dirac nodes, making it a very promising candidate for studying Dirac electrons as well as the properties of lines of Dirac nodes.
Abstract: Materials harbouring exotic quasiparticles, such as massless Dirac and Weyl fermions, have garnered much attention from the physics and material science communities due to their exceptional physical properties, such as ultra-high mobility and extremely large magnetoresistances. Here, we show that the highly stable, non-toxic and earth-abundant material ZrSiS has an electronic band structure that hosts several Dirac cones that form a Fermi surface with a diamond-shaped line of Dirac nodes. We also show that the square Si lattice in ZrSiS is an excellent template for realizing new types of two-dimensional Dirac cones recently predicted by Young and Kane. Finally, we find that the energy range of the linearly dispersed bands is as high as 2 eV above and below the Fermi level, much larger than that of other known Dirac materials. This makes ZrSiS a very promising candidate to study Dirac electrons, as well as the properties of lines of Dirac nodes.

Journal ArticleDOI
TL;DR: There is no overall survival benefit for intensifying the follow-up of patients after curative surgery for colorectal cancer, and the limited data suggest that the cost of more intensive follow-up may be increased in comparison with less intense follow-up.
Abstract: Background: This is the fourth update of a Cochrane Review first published in 2002 and last updated in 2016. It is common clinical practice to follow patients with colorectal cancer for several years following their curative surgery or adjuvant therapy, or both. Despite this widespread practice, there is considerable controversy about how often patients should be seen, what tests should be performed, and whether these varying strategies have any significant impact on patient outcomes.
Objectives: To assess the effect of follow-up programmes (follow-up versus no follow-up, follow-up strategies of varying intensity, and follow-up in different healthcare settings) on overall survival for patients with colorectal cancer treated with curative intent. Secondary objectives are to assess relapse-free survival, salvage surgery, interval recurrences, quality of life, and the harms and costs of surveillance and investigations.
Search methods: For this update, on 5 April 2019 we searched CENTRAL, MEDLINE, Embase, CINAHL, and Science Citation Index. We also searched reference lists of articles, and handsearched the Proceedings of the American Society for Radiation Oncology. In addition, we searched the following trials registries: ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform. We contacted study authors. We applied no language or publication restrictions to the search strategies.
Selection criteria: We included only randomised controlled trials comparing different follow-up strategies for participants with non-metastatic colorectal cancer treated with curative intent.
Data collection and analysis: We used standard methodological procedures expected by Cochrane. Two review authors independently determined study eligibility, performed data extraction, and assessed risk of bias and methodological quality. We used GRADE to assess evidence quality.
Main results: We identified 19 studies, which enrolled 13,216 participants (we included four new studies in this update). Sixteen of the 19 studies were eligible for quantitative synthesis. Although the studies varied in setting (general practitioner (GP)-led, nurse-led, or surgeon-led) and 'intensity' of follow-up, there was very little inconsistency in the results.
Overall survival: we found intensive follow-up made little or no difference (hazard ratio (HR) 0.91, 95% confidence interval (CI) 0.80 to 1.04; I² = 18%; high-quality evidence). There were 1453 deaths among 12,528 participants in 15 studies. In absolute terms, the average effect of intensive follow-up on overall survival was 24 fewer deaths per 1000 patients, but the true effect could lie between 60 fewer and 9 more per 1000 patients.
Colorectal cancer-specific survival: we found intensive follow-up probably made little or no difference (HR 0.93, 95% CI 0.81 to 1.07; I² = 0%; moderate-quality evidence). There were 925 colorectal cancer deaths among 11,771 participants enrolled in 11 studies. In absolute terms, the average effect of intensive follow-up on colorectal cancer-specific survival was 15 fewer colorectal cancer-specific deaths per 1000 patients, but the true effect could lie between 47 fewer and 12 more per 1000 patients.
Relapse-free survival: we found intensive follow-up made little or no difference (HR 1.05, 95% CI 0.92 to 1.21; I² = 41%; high-quality evidence). There were 2254 relapses among 8047 participants enrolled in 16 studies. The average effect of intensive follow-up on relapse-free survival was 17 more relapses per 1000 patients, but the true effect could lie between 30 fewer and 66 more per 1000 patients.
Salvage surgery with curative intent: this was more frequent with intensive follow-up (risk ratio (RR) 1.98, 95% CI 1.53 to 2.56; I² = 31%; high-quality evidence). There were 457 episodes of salvage surgery in 5157 participants enrolled in 13 studies. In absolute terms, the effect of intensive follow-up on salvage surgery was 60 more episodes of salvage surgery per 1000 patients, but the true effect could lie between 33 and 96 more episodes per 1000 patients.
Interval (symptomatic) recurrences: these were less frequent with intensive follow-up (RR 0.59, 95% CI 0.41 to 0.86; I² = 66%; moderate-quality evidence). There were 376 interval recurrences reported in 3933 participants enrolled in seven studies. Intensive follow-up was associated with fewer interval recurrences (52 fewer per 1000 patients); the true effect is between 18 and 75 fewer per 1000 patients.
Intensive follow-up probably makes little or no difference to quality of life, anxiety, or depression (reported in 7 studies; moderate-quality evidence). The data were not available in a form that allowed analysis. Intensive follow-up may increase the complications (perforation or haemorrhage) from colonoscopies (OR 7.30, 95% CI 0.75 to 70.69; 1 study, 326 participants; very low-quality evidence). Two studies reported seven colonoscopic complications in 2292 colonoscopies: three perforations and four gastrointestinal haemorrhages requiring transfusion. We could not combine the data, as they were not reported by study arm in one study. The limited data on costs suggest that the cost of more intensive follow-up may be increased in comparison with less intense follow-up (low-quality evidence). The data were not available in a form that allowed analysis.
Authors' conclusions: The results of our review suggest that there is no overall survival benefit for intensifying the follow-up of patients after curative surgery for colorectal cancer. Although more participants were treated with salvage surgery with curative intent in the intensive follow-up groups, this was not associated with improved survival. Harms related to intensive follow-up and salvage therapy were not well reported.
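For readers wondering how "24 fewer deaths per 1000 patients" follows from HR 0.91, the standard conversion assumes exponential survival and an assumed control-group risk. The sketch below uses a control risk of 32%, chosen only so the output lands near the review's figure; the review's actual assumed control risks are not restated here.

```python
def absolute_effect_per_1000(hr, control_risk):
    """Convert a hazard ratio to an absolute risk difference per 1000
    patients, via risk_intervention = 1 - (1 - control_risk) ** hr
    (the usual exponential-survival assumption)."""
    intervention_risk = 1 - (1 - control_risk) ** hr
    return 1000 * (intervention_risk - control_risk)

print(round(absolute_effect_per_1000(0.91, 0.32)))  # about -24 (fewer deaths)
```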