
Journal ArticleDOI
TL;DR: This review addresses the anatomy of the breast, risk factors, epidemiology of breast cancer, pathogenesis of breast cancer, stages of breast cancer, diagnostic investigations, and treatments including chemotherapy, surgery, targeted therapies, hormone replacement therapy, radiation therapy, complementary therapies, gene therapy, and stem-cell therapy for breast cancer.
Abstract: Breast cancer remains a worldwide public health dilemma and is currently the most common tumour globally. Awareness of breast cancer, public attentiveness, and advances in breast imaging have made a positive impact on the recognition and screening of breast cancer. Breast cancer is a life-threatening disease in females and the leading cause of mortality among women. Over the previous two decades, studies of breast cancer have led to astonishing advances in our understanding of the disease, resulting in more proficient treatments. Among all malignant diseases, breast cancer is considered one of the leading causes of death in postmenopausal women, accounting for 23% of all cancer deaths. It is a global issue now, yet it is still often diagnosed at an advanced stage owing to women's neglect of breast self-examination and clinical examination. This review addresses the anatomy of the breast, risk factors, epidemiology of breast cancer, pathogenesis of breast cancer, stages of breast cancer, diagnostic investigations, and treatments including chemotherapy, surgery, targeted therapies, hormone replacement therapy, radiation therapy, complementary therapies, gene therapy, and stem-cell therapy for breast cancer.

635 citations


Journal ArticleDOI
TL;DR: A 12‐week regimen of DCV plus SOF achieved SVR12 in 96% of patients with genotype 3 infection without cirrhosis and was well tolerated; there were no adverse events leading to discontinuation and only 1 serious AE on‐treatment, which was unrelated to study medications.

635 citations


Proceedings ArticleDOI
31 Mar 2017
TL;DR: A novel method for 3D object detection and pose estimation from color images only that uses segmentation to detect the objects of interest in 2D even in the presence of partial occlusions and cluttered background; it is the first to report results on the Occlusion dataset using color images only.
Abstract: We introduce a novel method for 3D object detection and pose estimation from color images only. We first use segmentation to detect the objects of interest in 2D even in the presence of partial occlusions and cluttered background. By contrast with recent patch-based methods, we rely on a “holistic” approach: We apply to the detected objects a Convolutional Neural Network (CNN) trained to predict their 3D poses in the form of 2D projections of the corners of their 3D bounding boxes. This, however, is not sufficient for handling objects from the recent T-LESS dataset: These objects exhibit an axis of rotational symmetry, and the similarity of two images of such an object under two different poses makes training the CNN challenging. We solve this problem by restricting the range of poses used for training, and by introducing a classifier to identify the range of a pose at run-time before estimating it. We also use an optional additional step that refines the predicted poses. We improve the state-of-the-art on the LINEMOD dataset from 73.7% [2] to 89.3% of correctly registered RGB frames. We are also the first to report results on the Occlusion dataset [1] using color images only. We obtain 54% of frames passing the Pose 6D criterion on average on several sequences of the T-LESS dataset, compared to the 67% of the state-of-the-art [10] on the same sequences, which uses both color and depth. The full approach is also scalable, as a single network can be trained for multiple objects simultaneously.
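The final geometric step described above (recovering a 6D pose from the predicted 2D projections of the 3D bounding-box corners) reduces to a PnP problem. Below is a minimal sketch of that step only, using OpenCV's solvePnP; the corner predictions, box dimensions, and camera intrinsics are hypothetical placeholders, not the paper's values or code.

```python
# Sketch: recover an object pose from predicted 2D corner projections via PnP.
import numpy as np
import cv2

def box_corners(w, h, d):
    """8 corners of an axis-aligned 3D bounding box centered at the origin."""
    return np.array([[x, y, z]
                     for x in (-w / 2, w / 2)
                     for y in (-h / 2, h / 2)
                     for z in (-d / 2, d / 2)], dtype=np.float64)

K = np.array([[600.0, 0.0, 320.0],      # hypothetical camera intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

corners_3d = box_corners(0.1, 0.08, 0.05)         # object box in metres
corners_2d = np.random.rand(8, 2) * [640, 480]    # stand-in for CNN output

ok, rvec, tvec = cv2.solvePnP(corners_3d, corners_2d, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix of the recovered pose
    print("R =\n", R, "\nt =", tvec.ravel())
```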

635 citations


Journal ArticleDOI
TL;DR: Adoption of this staging classification provides a standardized taxonomy for type 1 diabetes and will aid the development of therapies and the design of clinical trials to prevent symptomatic disease, promote precision medicine, and provide a framework for an optimized benefit/risk ratio.
Abstract: Insights from prospective, longitudinal studies of individuals at risk for developing type 1 diabetes have demonstrated that the disease is a continuum that progresses sequentially at variable but predictable rates through distinct identifiable stages prior to the onset of symptoms. Stage 1 is defined as the presence of β-cell autoimmunity as evidenced by the presence of two or more islet autoantibodies with normoglycemia and is presymptomatic, stage 2 as the presence of β-cell autoimmunity with dysglycemia and is presymptomatic, and stage 3 as onset of symptomatic disease. Adoption of this staging classification provides a standardized taxonomy for type 1 diabetes and will aid the development of therapies and the design of clinical trials to prevent symptomatic disease, promote precision medicine, and provide a framework for an optimized benefit/risk ratio that will impact regulatory approval, reimbursement, and adoption of interventions in the early stages of type 1 diabetes to prevent symptomatic disease.
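For readers who want the staging rules above in operational form, here is a literal transcription as a small decision function; it is a simplification, since the clinical definitions involve specific autoantibody assays and glycemic thresholds not modeled here.

```python
# The staging taxonomy described above, transcribed as a decision function.
def t1d_stage(n_islet_autoantibodies, dysglycemia, symptomatic):
    if symptomatic:
        return 3                         # stage 3: symptomatic disease
    if n_islet_autoantibodies >= 2:      # beta-cell autoimmunity present
        return 2 if dysglycemia else 1   # presymptomatic stages 2 and 1
    return 0                             # not staged: no autoimmunity

print(t1d_stage(2, dysglycemia=False, symptomatic=False))   # -> 1
```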

634 citations


Journal ArticleDOI
TL;DR: Recently, the effects of spin-orbit coupling (SOC) in correlated materials have become one of the most actively studied subjects in condensed matter physics, as correlations and SOC together can lead to the discovery of new phases.
Abstract: Recently, the effects of spin-orbit coupling (SOC) in correlated materials have become one of the most actively studied subjects in condensed matter physics, as correlations and SOC together can lead to the discovery of new phases. Examples include unconventional magnetism, spin liquids, and strongly correlated topological phases such as topological superconductivity. Among candidate materials, iridium oxides (iridates) have been an excellent playground to uncover such novel phenomena. In this review, we discuss recent progress in iridates and related materials, focusing on the basic concepts, relevant microscopic Hamiltonians, and unusual properties of iridates in perovskite- and honeycomb-based structures. Perspectives on SOC and correlation physics beyond iridates are also discussed.

634 citations


Journal ArticleDOI
TL;DR: This paper provides additional background information on the checkCIF procedure and additional details for a number of ALERTS along with options for how to act on them.
Abstract: Authors of a paper that includes a new crystal-structure determination are expected not only to report the structural results of interest and their interpretation, but also to archive in computer-readable CIF format the experimental data on which the crystal-structure analysis is based. Additionally, an IUCr/checkCIF validation report will be required for the review of a submitted paper. Such a validation report, automatically created from the deposited CIF file, lists as ALERTS not only potential errors or unusual findings, but also suggestions for improvement along with interesting information on the structure at hand. Major ALERTS for issues are expected to have been acted on before submission for publication, or discussed in the associated paper and/or commented on in the CIF file. In addition, referees, readers and users of the data should be able to make their own judgment and interpretation of the underlying experimental data or perform their own calculations with the archived data. All the above is consistent with the FAIR (findable, accessible, interoperable, and reusable) initiative [Helliwell (2019). Struct. Dyn. 6, 05430]. Validation can also be helpful for less experienced authors by pointing out, and helping them avoid, crystal-structure determination and interpretation pitfalls. The IUCr web-based checkCIF server provides such a validation report, based on data uploaded in CIF format. Alternatively, a locally installable checkCIF version is available to be used iteratively during the structure-determination process. ALERTS come mostly as short single-line messages. A short explanation of each ALERT is also available through the IUCr web server or with the locally installed PLATON/checkCIF version. This paper provides additional background information on the checkCIF procedure and additional details for a number of ALERTS, along with options for how to act on them.

634 citations


Proceedings Article
06 Jul 2015
TL;DR: This work finds an advantage for correlation-based representation learning, while the best results on most tasks are obtained with the new variant, deep canonically correlated autoencoders (DCCAE).
Abstract: We consider learning representations (features) in the setting in which we have access to multiple unlabeled views of the data for representation learning while only one view is available at test time. Previous work on this problem has proposed several techniques based on deep neural networks, typically involving either autoencoder-like networks with a reconstruction objective or paired feedforward networks with a correlation-based objective. We analyze several techniques based on prior work, as well as new variants, and compare them experimentally on visual, speech, and language domains. To our knowledge this is the first head-to-head comparison of a variety of such techniques on multiple tasks. We find an advantage for correlation-based representation learning, while the best results on most tasks are obtained with our new variant, deep canonically correlated autoencoders (DCCAE).
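A minimal sketch of the DCCAE idea, under simplifying assumptions: one autoencoder per view, trained with reconstruction losses plus a correlation term on the latent codes. The paper's objective is the full CCA criterion on whitened projections; the per-dimension Pearson correlation below is a stand-in for it, and all layer and view sizes are illustrative.

```python
# Sketch of deep canonically correlated autoencoders (simplified objective).
import torch
import torch.nn as nn

def mlp(din, dout):
    return nn.Sequential(nn.Linear(din, 128), nn.ReLU(), nn.Linear(128, dout))

class ViewAE(nn.Module):
    def __init__(self, dim, latent):
        super().__init__()
        self.enc, self.dec = mlp(dim, latent), mlp(latent, dim)
    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def corr_loss(z1, z2, eps=1e-8):
    """Negative mean per-dimension Pearson correlation (a CCA surrogate)."""
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + eps)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + eps)
    return -(z1 * z2).mean()

ae1, ae2 = ViewAE(784, 10), ViewAE(112, 10)       # hypothetical view sizes
opt = torch.optim.Adam(list(ae1.parameters()) + list(ae2.parameters()), 1e-3)
x1, x2 = torch.randn(256, 784), torch.randn(256, 112)   # paired views
z1, r1 = ae1(x1)
z2, r2 = ae2(x2)
loss = (corr_loss(z1, z2)
        + nn.functional.mse_loss(r1, x1)
        + nn.functional.mse_loss(r2, x2))
opt.zero_grad()
loss.backward()
opt.step()
```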

634 citations


Journal ArticleDOI
TL;DR: No abstract is available for this article; the record lists only its funding sources (the US Department of Energy, the National Science Foundation, and the NOAA/GFDL-Princeton University Cooperative Institute for Climate Science).
Abstract: US Department of Energy; National Science Foundation (NSF) [DEB 1552747]; NSF [DEB 1552976, EF 1241881, EAR 125501, EAR 155489]; NOAA/GFDL-Princeton University Cooperative Institute for Climate Science

634 citations


Posted Content
TL;DR: This paper studies the impact of multitask and transfer learning for simple question answering; a setting for which the reasoning required to answer is quite easy, as long as one can retrieve the correct evidence given a question, which can be difficult in large-scale conditions.
Abstract: Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions. This paper studies the impact of multitask and transfer learning for simple question answering; a setting for which the reasoning required to answer is quite easy, as long as one can retrieve the correct evidence given a question, which can be difficult in large-scale conditions. To this end, we introduce a new dataset of 100k questions that we use in conjunction with existing benchmarks. We conduct our study within the framework of Memory Networks (Weston et al., 2015) because this perspective allows us to eventually scale up to more complex reasoning, and show that Memory Networks can be successfully trained to achieve excellent performance.
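The retrieval step that, per the abstract, simple question answering largely reduces to can be illustrated with a toy sketch: score stored facts against a question and answer from the best match. A real Memory Network learns these embeddings end-to-end; here they are random, and the vocabulary and facts are hypothetical placeholders.

```python
# Toy sketch of memory-based retrieval for simple question answering.
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: i for i, w in enumerate(
    "who wrote directed hamlet shakespeare kubrick 2001".split())}
E = rng.normal(size=(len(vocab), 16))        # word embedding matrix

def embed(text):
    """Bag-of-words embedding: sum of the word vectors."""
    return sum(E[vocab[w]] for w in text.split() if w in vocab)

facts = ["shakespeare wrote hamlet", "kubrick directed 2001"]
memory = np.stack([embed(f) for f in facts])

question = "who wrote hamlet"
scores = memory @ embed(question)            # dot-product matching
print(facts[int(np.argmax(scores))])         # fact with the highest score
```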

634 citations


Journal ArticleDOI
TL;DR: A deep learning based model that uses only sequence information of both targets and drugs to predict DT interaction binding affinities is proposed, outperforming the KronRLS algorithm and SimBoost, a state‐of‐the‐art method for DT binding affinity prediction.
Abstract: Motivation: The identification of novel drug-target (DT) interactions is a substantial part of the drug discovery process. Most of the computational methods that have been proposed to predict DT interactions have focused on binary classification, where the goal is to determine whether a DT pair interacts or not. However, protein-ligand interactions assume a continuum of binding strength values, also called binding affinity, and predicting this value still remains a challenge. The increase in the affinity data available in DT knowledge-bases allows the use of advanced learning techniques, such as deep learning architectures, in the prediction of binding affinities. In this study, we propose a deep-learning based model that uses only sequence information of both targets and drugs to predict DT interaction binding affinities. The few studies that focus on DT binding affinity prediction use either 3D structures of protein-ligand complexes or 2D features of compounds. One novel approach used in this work is the modeling of protein sequences and compound 1D representations with convolutional neural networks (CNNs). Results: The results show that the proposed deep learning based model that uses the 1D representations of targets and drugs is an effective approach for drug-target binding affinity prediction. The model in which high-level representations of a drug and a target are constructed via CNNs achieved the best Concordance Index (CI) performance in one of our larger benchmark datasets, outperforming the KronRLS algorithm and SimBoost, a state-of-the-art method for DT binding affinity prediction. Availability and implementation: https://github.com/hkmztrk/DeepDTA. Supplementary information: Supplementary data are available at Bioinformatics online.
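A sketch of the two-branch architecture the abstract describes: embedded SMILES and protein sequences pass through separate 1D-CNN stacks, are pooled, concatenated, and regressed to an affinity value. Layer and vocabulary sizes are illustrative assumptions, not the published hyperparameters; the linked repository holds the authors' implementation.

```python
# Sketch of a DeepDTA-style two-branch 1D-CNN affinity regressor.
import torch
import torch.nn as nn

class SeqBranch(nn.Module):
    def __init__(self, n_tokens, emb=128, ch=32):
        super().__init__()
        self.emb = nn.Embedding(n_tokens, emb)
        self.conv = nn.Sequential(
            nn.Conv1d(emb, ch, 4), nn.ReLU(),
            nn.Conv1d(ch, ch * 2, 6), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1))
    def forward(self, x):                   # x: (batch, seq_len) int tokens
        h = self.emb(x).transpose(1, 2)     # -> (batch, emb, seq_len)
        return self.conv(h).squeeze(-1)     # -> (batch, ch * 2)

class DeepDTALike(nn.Module):
    def __init__(self):
        super().__init__()
        self.drug = SeqBranch(n_tokens=64)      # SMILES character vocab
        self.target = SeqBranch(n_tokens=26)    # amino-acid vocab
        self.head = nn.Sequential(
            nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 1))
    def forward(self, smiles, protein):
        return self.head(torch.cat([self.drug(smiles),
                                    self.target(protein)], dim=1))

model = DeepDTALike()
affinity = model(torch.randint(0, 64, (8, 100)),    # 8 drugs, len-100 SMILES
                 torch.randint(0, 26, (8, 1000)))   # 8 proteins, length 1000
print(affinity.shape)                               # torch.Size([8, 1])
```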

634 citations


Journal ArticleDOI
TL;DR: In this paper, an emissions data set has been constructed using regional emission grid maps (annual and monthly) for SO2, NOx, CO, NMVOC, NH3, PM10, PM2.5, BC and OC for the years 2008 and 2010, with the purpose of providing consistent information to global and regional scale modelling efforts.
Abstract: The mandate of the Task Force Hemispheric Transport of Air Pollution (TF HTAP) under the Convention on Long-Range Transboundary Air Pollution (CLRTAP) is to improve the scientific understanding of intercontinental air pollution transport, to quantify impacts on human health, vegetation and climate, to identify emission mitigation options across the regions of the Northern Hemisphere, and to guide future policies on these aspects. The harmonization and improvement of regional emission inventories is imperative to obtain consolidated estimates on the formation of global-scale air pollution. An emissions data set has been constructed using regional emission grid maps (annual and monthly) for SO2, NOx, CO, NMVOC, NH3, PM10, PM2.5, BC and OC for the years 2008 and 2010, with the purpose of providing consistent information to global and regional scale modelling efforts. This compilation of different regional gridded inventories (including that of the Environmental Protection Agency (EPA) for the USA, the EPA and Environment Canada for Canada, the European Monitoring and Evaluation Programme (EMEP) and the Netherlands Organisation for Applied Scientific Research (TNO) for Europe, and the Model Inter-comparison Study for Asia (MICS-Asia III) for China, India and other Asian countries) was gap-filled with the emission grid maps of the Emissions Database for Global Atmospheric Research (EDGARv4.3) for the rest of the world (mainly South America, Africa, Russia and Oceania). Emissions from seven main categories of human activities (power, industry, residential, agriculture, ground transport, aviation and shipping) were estimated and spatially distributed on a common grid of 0.1° × 0.1° longitude-latitude, to yield monthly, global, sector-specific grid maps for each substance and year. The HTAP-v2.2 air pollutant grid maps are considered to combine the latest available regional information within a complete global data set. The disaggregation by sectors, high spatial and temporal resolution, and detailed information on the data sources and references used will provide the user the required transparency. Because HTAP-v2.2 contains primarily official and/or widely used regional emission grid maps, it can be recommended as a global baseline emission inventory, which is regionally accepted as a reference and from which different scenarios assessing emission reduction policies at a global scale could start. An analysis of country-specific implied emission factors shows a large difference between industrialised countries and developing countries for acidifying gaseous air pollutant emissions (SO2 and NOx) from the energy and industry sectors. This is not observed for the particulate matter emissions (PM10, PM2.5), which show large differences between countries in the residential sector instead. The per capita emissions of all world countries, classified from low to high income, reveal an increase in level and in variation for gaseous acidifying pollutants, but not for aerosols. For aerosols, an opposite trend is apparent, with higher per capita emissions of particulate matter for low income countries.
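The core spatial operation (distributing emissions onto a common 0.1° × 0.1° longitude-latitude grid) can be sketched in a few lines. The point sources below are random placeholders; HTAP-v2.2 itself merges already-gridded regional inventories rather than raw point data.

```python
# Sketch: aggregate emissions onto a global 0.1-degree lon/lat grid.
import numpy as np

lon_edges = np.linspace(-180, 180, 3601)   # 0.1-degree bin edges
lat_edges = np.linspace(-90, 90, 1801)

rng = np.random.default_rng(1)
lon = rng.uniform(-180, 180, 10_000)       # source longitudes
lat = rng.uniform(-90, 90, 10_000)         # source latitudes
so2 = rng.lognormal(0.0, 1.0, 10_000)      # emission per source (e.g. kt/yr)

grid, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges],
                            weights=so2)   # (3600, 1800) emission map
print(grid.shape, np.isclose(grid.sum(), so2.sum()))  # mass is conserved
```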

Journal ArticleDOI
TL;DR: The DanQ model, a novel hybrid convolutional and bi-directional long short-term memory recurrent neural network framework for predicting non-coding function de novo from sequence, improves considerably upon other models across several metrics.
Abstract: Modeling the properties and functions of DNA sequences is an important, but challenging task in the broad field of genomics. This task is particularly difficult for non-coding DNA, the vast majority of which is still poorly understood in terms of function. A powerful predictive model for the function of non-coding DNA can have enormous benefit for both basic science and translational research because over 98% of the human genome is non-coding and 93% of disease-associated variants lie in these regions. To address this need, we propose DanQ, a novel hybrid convolutional and bi-directional long short-term memory recurrent neural network framework for predicting non-coding function de novo from sequence. In the DanQ model, the convolution layer captures regulatory motifs, while the recurrent layer captures long-term dependencies between the motifs in order to learn a regulatory 'grammar' to improve predictions. DanQ improves considerably upon other models across several metrics. For some regulatory markers, DanQ can achieve over a 50% relative improvement in the area under the precision-recall curve metric compared to related models. We have made the source code available at the github repository http://github.com/uci-cbcl/DanQ.
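A minimal sketch of the hybrid architecture the abstract describes: a convolution layer to capture motifs, pooling, a bidirectional LSTM to capture dependencies between motifs, and a sigmoid multi-task head. The shapes follow the spirit of the paper (one-hot DNA in, 919 chromatin-feature probabilities out), but the exact hyperparameters here are assumptions, not the published ones.

```python
# Sketch of a DanQ-style CNN + BiLSTM model for non-coding DNA function.
import torch
import torch.nn as nn

class DanQLike(nn.Module):
    def __init__(self, n_targets=919):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, 320, kernel_size=26), nn.ReLU(),
            nn.MaxPool1d(13), nn.Dropout(0.2))
        self.lstm = nn.LSTM(320, 320, bidirectional=True, batch_first=True)
        self.head = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(925), nn.ReLU(),
            nn.Linear(925, n_targets), nn.Sigmoid())
    def forward(self, x):                    # x: (batch, 4, 1000) one-hot DNA
        h = self.conv(x)                     # -> (batch, 320, 75)
        h, _ = self.lstm(h.transpose(1, 2))  # -> (batch, 75, 640)
        return self.head(h)                  # -> (batch, 919) probabilities

probs = DanQLike()(torch.rand(2, 4, 1000))
print(probs.shape)                           # torch.Size([2, 919])
```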

Journal ArticleDOI
TL;DR: The purpose of this review is primarily to review the pathogen, clinical features, diagnosis, and treatment of COVID‐19, but also to comment briefly on the epidemiology and pathology based on the current evidence.
Abstract: In late December 2019, a cluster of unexplained pneumonia cases was reported in Wuhan, China. A few days later, the causative agent of this mysterious pneumonia was identified as a novel coronavirus. This causative virus has been temporarily named severe acute respiratory syndrome coronavirus 2, and the associated disease has been named coronavirus disease 2019 (COVID-19) by the World Health Organization. The COVID-19 epidemic is now spreading in China and all over the world. The purpose of this review is primarily to review the pathogen, clinical features, diagnosis, and treatment of COVID-19, but also to comment briefly on the epidemiology and pathology based on the current evidence.

Journal ArticleDOI
TL;DR: In this article, an etch pit observation revealed that the dislocation density was on the order of 10³ cm⁻², and the effective donor concentration (Nd − Na) was governed by the Si concentration.
Abstract: β-Ga2O3 bulk crystals were grown by the edge-defined film-fed growth (EFG) process and the floating zone process. Semiconductor substrates containing no twin boundaries with sizes up to 4 in. in diameter were fabricated. It was found that Si was the main residual impurity in the EFG-grown crystals and that the effective donor concentration (Nd − Na) of unintentionally doped crystals was governed by the Si concentration. Intentional n-type doping was shown to be possible. An etch pit observation revealed that the dislocation density was on the order of 10³ cm⁻². Nd − Na for the samples annealed in nitrogen ambient was almost the same as the Si concentration, while for the samples annealed in oxygen ambient, it was around 1 × 10¹⁷ cm⁻³ and independent of the Si concentration.

Journal ArticleDOI
TL;DR: A synthesis of knowledge at this stage is presented for the application of this new and powerful detection method, which can reduce impacts on sensitive species and increase the power of field surveys for rare and elusive species.
Abstract: Species detection using environmental DNA (eDNA) has tremendous potential for contributing to the understanding of the ecology and conservation of aquatic species. Detecting species using eDNA methods, rather than directly sampling the organisms, can reduce impacts on sensitive species and increase the power of field surveys for rare and elusive species. The sensitivity of eDNA methods, however, requires a heightened awareness and attention to quality assurance and quality control protocols. Additionally, the interpretation of eDNA data demands careful consideration of multiple factors. As eDNA methods have grown in application, diverse approaches have been implemented to address these issues. With interest in eDNA continuing to expand, supportive guidelines for undertaking eDNA studies are greatly needed. Environmental DNA researchers from around the world have collaborated to produce this set of guidelines and considerations for implementing eDNA methods to detect aquatic macroorganisms. Critical considerations for study design include preventing contamination in the field and the laboratory, choosing appropriate sample analysis methods, validating assays, testing for sample inhibition and following minimum reporting guidelines. Critical considerations for inference include temporal and spatial processes, limits of correlation of eDNA with abundance, uncertainty of positive and negative results, and potential sources of allochthonous DNA. We present a synthesis of knowledge at this stage for application of this new and powerful detection method.

Journal ArticleDOI
TL;DR: In this paper, the authors estimate that import competition from China, which surged after 2000, was a major force behind both recent reductions in US manufacturing employment and weak overall US job growth and suggest job losses from rising Chinese import competition over 1999-2011 in the range of 2.0-2.4 million.
Abstract: Even before the Great Recession, US employment growth was unimpressive. Between 2000 and 2007, the economy gave back the considerable employment gains achieved during the 1990s, with a historic contraction in manufacturing employment being a prime contributor to the slump. We estimate that import competition from China, which surged after 2000, was a major force behind both recent reductions in US manufacturing employment and—through input-output linkages and other general equilibrium channels—weak overall US job growth. Our central estimates suggest job losses from rising Chinese import competition over 1999–2011 in the range of 2.0–2.4 million.

Journal ArticleDOI
TL;DR: In this article, market reactions to the 2019 novel coronavirus disease (COVID-19) provide new insights into how real shocks and financial policies drive firm value, and the results illustrate how anticipated real effects from the health crisis, a rare disaster, were amplified through financial channels.
Abstract: Market reactions to the 2019 novel coronavirus disease (COVID-19) provide new insights into how real shocks and financial policies drive firm value. Initially, internationally oriented firms, especially those more exposed to trade with China, underperformed. As the virus spread to Europe and the United States, corporate debt and cash holdings emerged as important value drivers, relevant even after the Fed intervened in the bond market. The content and tone of conference calls mirror this development over time. Overall, the results illustrate how anticipated real effects from the health crisis, a rare disaster, were amplified through financial channels.

Proceedings ArticleDOI
27 Jun 2016
TL;DR: The authors propose exploiting 'patch-patch' context between image regions and 'patch-background' context, formulating conditional random fields (CRFs) with CNN-based pairwise potential functions to capture semantic correlations between neighboring patches.
Abstract: Recent advances in semantic image segmentation have mostly been achieved by training deep convolutional neural networks (CNNs). We show how to improve semantic segmentation through the use of contextual information, specifically, we explore 'patch-patch' context between image regions, and 'patch-background' context. For learning from the patch-patch context, we formulate Conditional Random Fields (CRFs) with CNN-based pairwise potential functions to capture semantic correlations between neighboring patches. Efficient piecewise training of the proposed deep structured model is then applied to avoid repeated expensive CRF inference for back propagation. For capturing the patch-background context, we show that a network design with traditional multi-scale image input and sliding pyramid pooling is effective for improving performance. Our experimental results set new state-of-the-art performance on a number of popular semantic segmentation datasets, including NYUDv2, PASCAL VOC 2012, PASCAL-Context, and SIFT-flow. In particular, we achieve an intersection-over-union score of 78.0 on the challenging PASCAL VOC 2012 dataset.

Proceedings ArticleDOI
17 Aug 2015
TL;DR: The authors built a centralized control mechanism based on a global configuration pushed to all datacenter switches; modular hardware design coupled with simple, robust software allowed the design to also support inter-cluster and wide-area networks.
Abstract: We present our approach for overcoming the cost, operational complexity, and limited scale endemic to datacenter networks a decade ago. Three themes unify the five generations of datacenter networks detailed in this paper. First, multi-stage Clos topologies built from commodity switch silicon can support cost-effective deployment of building-scale networks. Second, much of the general, but complex, decentralized network routing and management protocols supporting arbitrary deployment scenarios were overkill for single-operator, pre-planned datacenter networks. We built a centralized control mechanism based on a global configuration pushed to all datacenter switches. Third, modular hardware design coupled with simple, robust software allowed our design to also support inter-cluster and wide-area networks. Our datacenter networks run at dozens of sites across the planet, scaling in capacity by 100x over ten years to more than 1Pbps of bisection bandwidth.
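The scaling claim rests on standard Clos arithmetic, which a back-of-the-envelope sketch makes concrete: host count and bisection bandwidth of a classic three-stage fat-tree built from identical k-port switches. The formulas are the textbook fat-tree ones and the 40 Gbps link speed is a hypothetical choice, not Google's proprietary designs; note how k = 64 already lands in the petabit-per-second range the abstract cites.

```python
# Sketch: capacity of a 3-stage fat-tree (Clos) built from k-port switches.
def fat_tree(k, link_gbps=40):
    hosts = k ** 3 // 4                 # k pods, each with (k/2)^2 hosts
    core = (k // 2) ** 2                # number of core switches
    bisection = hosts * link_gbps / 2   # full bisection: half the hosts can
    return hosts, core, bisection       # talk to the other half at line rate

for k in (16, 32, 64):
    h, c, b = fat_tree(k)
    print(f"k={k:3d}: {h:6d} hosts, {c:5d} core switches, "
          f"{b / 1e3:7.1f} Tbps bisection")
```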

Journal ArticleDOI
TL;DR: TianQin is a proposal for a space-borne detector of gravitational waves in the millihertz frequencies, relying on a constellation of three drag-free spacecraft orbiting the Earth.
Abstract: TianQin is a proposal for a space-borne detector of gravitational waves in the millihertz frequencies. The experiment relies on a constellation of three drag-free spacecraft orbiting the Earth. Inter-spacecraft laser interferometry is used to monitor the distances between the test masses. The experiment is designed to be capable of detecting a signal with high confidence from a single source of gravitational waves within a few months of observing time. We describe the preliminary mission concept for TianQin, including the candidate source and experimental designs. We present estimates for the major constituents of the experiment's error budget and discuss the project's overall feasibility. Given the current level of technology readiness, we expect TianQin to be flown in the second half of the next decade.

Journal ArticleDOI
TL;DR: The Task Force on Thyroid Nodules of the KSThR has revised the recommendations for the ultrasound diagnosis and imaging-based management of thyroid nodules, based on a comprehensive analysis of the current literature and the consensus of experts.
Abstract: The rate of detection of thyroid nodules and carcinomas has increased with the widespread use of ultrasonography (US), which is the mainstay for the detection and risk stratification of thyroid nodules as well as for providing guidance for their biopsy and nonsurgical treatment. The Korean Society of Thyroid Radiology (KSThR) published their first recommendations for the US-based diagnosis and management of thyroid nodules in 2011. These recommendations have been used as the standard guidelines for the past several years in Korea. Lately, the application of US has been further emphasized for the personalized management of patients with thyroid nodules. The Task Force on Thyroid Nodules of the KSThR has revised the recommendations for the ultrasound diagnosis and imaging-based management of thyroid nodules. The review and recommendations in this report have been based on a comprehensive analysis of the current literature and the consensus of experts.

Proceedings ArticleDOI
Jian Ding, Nan Xue, Yang Long, Gui-Song Xia, Qikai Lu
01 Jun 2019
TL;DR: The core idea of RoI Transformer is to apply spatial transformations on RoIs and learn the transformation parameters under the supervision of oriented bounding box (OBB) annotations.
Abstract: Object detection in aerial images is an active yet challenging task in computer vision because of the bird's-eye view perspective, the highly complex backgrounds, and the variant appearances of objects. Especially when detecting densely packed objects in aerial images, methods relying on horizontal proposals for common object detection often introduce mismatches between the Regions of Interest (RoIs) and objects. This leads to the common misalignment between the final object classification confidence and localization accuracy. In this paper, we propose a RoI Transformer to address these problems. The core idea of the RoI Transformer is to apply spatial transformations on RoIs and learn the transformation parameters under the supervision of oriented bounding box (OBB) annotations. The RoI Transformer is lightweight and can be easily embedded into detectors for oriented object detection. Simply applying the RoI Transformer to Light-Head R-CNN achieves state-of-the-art performance on two common and challenging aerial datasets, i.e., DOTA and HRSC2016, with a negligible reduction in detection speed. Our RoI Transformer exceeds the deformable Position Sensitive RoI pooling when oriented bounding-box annotations are available. Extensive experiments have also validated the flexibility and effectiveness of our RoI Transformer.
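The geometric decoding at the heart of the approach (turning a horizontal RoI plus learned offsets into an oriented box) can be sketched as below. The (dx, dy, dw, dh, dtheta) parameterization is the common oriented-box-regression convention, assumed here rather than taken verbatim from the authors' code.

```python
# Sketch: decode a horizontal RoI + learned offsets into an oriented box.
import numpy as np

def decode_oriented(hroi, offsets):
    x1, y1, x2, y2 = hroi
    w, h = x2 - x1, y2 - y1
    cx, cy = x1 + w / 2, y1 + h / 2
    dx, dy, dw, dh, dt = offsets
    return (cx + dx * w,             # shifted centre x
            cy + dy * h,             # shifted centre y
            w * np.exp(dw),          # log-space scaling, as in box regression
            h * np.exp(dh),
            dt)                      # rotation angle in radians

print(decode_oriented((10, 20, 110, 80), (0.1, -0.05, 0.2, 0.0, 0.3)))
```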

Proceedings ArticleDOI
15 Jun 2019
TL;DR: A novel directed graph neural network is designed specially to extract the information of joints, bones and their relations and make prediction based on the extracted features and is tested on two large-scale datasets, NTU-RGBD and Skeleton-Kinetics, and exceeds state-of-the-art performance on both of them.
Abstract: The skeleton data have been widely used for the action recognition tasks since they can robustly accommodate dynamic circumstances and complex backgrounds. In existing methods, both the joint and bone information in skeleton data have been proved to be of great help for action recognition tasks. However, how to incorporate these two types of data to best take advantage of the relationship between joints and bones remains a problem to be solved. In this work, we represent the skeleton data as a directed acyclic graph based on the kinematic dependency between the joints and bones in the natural human body. A novel directed graph neural network is designed specially to extract the information of joints, bones and their relations and make prediction based on the extracted features. In addition, to better fit the action recognition task, the topological structure of the graph is made adaptive based on the training process, which brings notable improvement. Moreover, the motion information of the skeleton sequence is exploited and combined with the spatial information to further enhance the performance in a two-stream framework. Our final model is tested on two large-scale datasets, NTU-RGBD and Skeleton-Kinetics, and exceeds state-of-the-art performance on both of them.
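One directed-graph propagation step in the spirit of the model above can be sketched as follows: joint (node) features are updated from incident bone (edge) features using the graph's source/target structure, then edge features are updated from their endpoints. The update functions, feature sizes, and the two-bone toy skeleton are illustrative assumptions.

```python
# Sketch of one propagation layer over a directed joint/bone graph.
import torch
import torch.nn as nn

class DirectedGraphLayer(nn.Module):
    def __init__(self, node_dim, edge_dim, out_dim):
        super().__init__()
        self.node_fn = nn.Linear(node_dim + 2 * edge_dim, out_dim)
        self.edge_fn = nn.Linear(edge_dim + 2 * out_dim, out_dim)
    def forward(self, x, e, src, dst):
        # x: (N, node_dim) joints; e: (E, edge_dim) bones; src/dst: (E,) ids
        n = x.size(0)
        incoming = torch.zeros(n, e.size(1)).index_add(0, dst, e)
        outgoing = torch.zeros(n, e.size(1)).index_add(0, src, e)
        x2 = torch.relu(self.node_fn(torch.cat([x, incoming, outgoing], 1)))
        e2 = torch.relu(self.edge_fn(torch.cat([e, x2[src], x2[dst]], 1)))
        return x2, e2

# 3 joints, 2 bones (0 -> 1, 1 -> 2): a tiny kinematic chain
x, e = torch.randn(3, 8), torch.randn(2, 8)
src, dst = torch.tensor([0, 1]), torch.tensor([1, 2])
x2, e2 = DirectedGraphLayer(8, 8, 16)(x, e, src, dst)
print(x2.shape, e2.shape)       # torch.Size([3, 16]) torch.Size([2, 16])
```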

Journal ArticleDOI
TL;DR: DynaMut is presented, a web server implementing two distinct, well established normal mode approaches, which can be used to analyze and visualize protein dynamics by sampling conformations and assess the impact of mutations on protein dynamics and stability resulting from vibrational entropy changes.
Abstract: Proteins are highly dynamic molecules, whose function is intrinsically linked to their molecular motions. Despite the pivotal role of protein dynamics, their computational simulation cost has led to most structure-based approaches for assessing the impact of mutations on protein structure and function relying upon static structures. Here we present DynaMut, a web server implementing two distinct, well established normal mode approaches, which can be used to analyze and visualize protein dynamics by sampling conformations and assess the impact of mutations on protein dynamics and stability resulting from vibrational entropy changes. DynaMut integrates our graph-based signatures along with normal mode dynamics to generate a consensus prediction of the impact of a mutation on protein stability. We demonstrate our approach outperforms alternative approaches to predict the effects of mutations on protein stability and flexibility (P-value < 0.001), achieving a correlation of up to 0.70 on blind tests. DynaMut also provides a comprehensive suite for protein motion and flexibility analysis and visualization via a freely available, user friendly web server at http://biosig.unimelb.edu.au/dynamut/.
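A sketch of the kind of normal-mode calculation DynaMut builds on: an anisotropic network model over C-alpha coordinates, with a Hessian assembled from pairwise springs and diagonalized for the low-frequency modes. This is a generic ANM, not DynaMut's pipeline, and the coordinates are random placeholders for a real structure.

```python
# Sketch: anisotropic network model (ANM) normal modes from C-alpha coords.
import numpy as np

def anm_modes(coords, cutoff=15.0, gamma=1.0):
    n = len(coords)
    H = np.zeros((3 * n, 3 * n))             # Hessian of the spring network
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff ** 2:
                continue                      # no spring beyond the cutoff
            k = -gamma * np.outer(d, d) / r2  # 3x3 off-diagonal super-element
            H[3*i:3*i+3, 3*j:3*j+3] = k
            H[3*j:3*j+3, 3*i:3*i+3] = k
            H[3*i:3*i+3, 3*i:3*i+3] -= k      # diagonal balances the row
            H[3*j:3*j+3, 3*j:3*j+3] -= k
    vals, vecs = np.linalg.eigh(H)
    return vals[6:], vecs[:, 6:]              # drop 6 rigid-body zero modes

coords = np.random.default_rng(2).normal(scale=5, size=(50, 3))
eigvals, modes = anm_modes(coords)
print(eigvals[:3])    # softest internal modes (squared-frequency scale)
```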

Journal ArticleDOI
TL;DR: These guidelines are a working document that reflects the state of the field at the time of publication and any decision by practitioners to apply these guidelines must be made in light of local resources and individual patient circumstances.

Book
01 Mar 2021
TL;DR: A collection of papers on all theoretical and practical aspects of SAT solving will be extremely useful to both students and researchers and will lead to many further advances in the field.
Abstract: 'Satisfiability (SAT) related topics have attracted researchers from various disciplines: logic, applied areas such as planning, scheduling, operations research and combinatorial optimization, but also theoretical issues on the theme of complexity and much more; they all are connected through SAT. My personal interest in SAT stems from actual solving: the increase in power of modern SAT solvers over the past 15 years has been phenomenal. It has become the key enabling technology in automated verification of both computer hardware and software. Bounded Model Checking (BMC) of computer hardware is now probably the most widely used model checking technique. The counterexamples that it finds are just satisfying instances of a Boolean formula obtained by unwinding to some fixed depth a sequential circuit and its specification in linear temporal logic. Extending model checking to software verification is a much more difficult problem on the frontier of current research. One promising approach for languages like C with finite word-length integers is to use the same idea as in BMC but with a decision procedure for the theory of bit-vectors instead of SAT. All decision procedures for bit-vectors that I am familiar with ultimately make use of a fast SAT solver to handle complex formulas. Decision procedures for more complicated theories, like linear real and integer arithmetic, are also used in program verification. Most of them use powerful SAT solvers in an essential way. Clearly, efficient SAT solving is a key technology for 21st century computer science. I expect this collection of papers on all theoretical and practical aspects of SAT solving will be extremely useful to both students and researchers and will lead to many further advances in the field' - Edmund M. Clarke (FORE Systems University Professor of Computer Science and Professor of Electrical and Computer Engineering at Carnegie Mellon University).
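To make the book's subject concrete, here is a tiny DPLL-style solver over CNF clauses (lists of non-zero integer literals, negatives meaning negation). Modern solvers layer clause learning, watched literals, and decision heuristics on exactly this recursive skeleton; the example formula is illustrative.

```python
# Minimal DPLL SAT solver: unit propagation plus two-way branching.
def dpll(clauses, assignment=()):
    clauses = [c for c in clauses if not any(l in assignment for l in c)]
    clauses = [[l for l in c if -l not in assignment] for c in clauses]
    if not clauses:
        return assignment                 # all clauses satisfied
    if any(not c for c in clauses):
        return None                       # empty clause: conflict
    unit = next((c[0] for c in clauses if len(c) == 1), None)
    lit = unit if unit is not None else clauses[0][0]
    for choice in ((lit,) if unit else (lit, -lit)):
        result = dpll(clauses, assignment + (choice,))
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))  # a satisfying assignment: (1, 3, -2)
```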

Journal ArticleDOI
TL;DR: By providing a fully integrated framework and evaluation of the impacts of high VPD on plant function, improvements in forecasting and long-term projections of climate impacts can be made.
Abstract: Recent decades have been characterized by increasing temperatures worldwide, resulting in an exponential climb in vapor pressure deficit (VPD). VPD has been identified as an increasingly important driver of plant functioning in terrestrial biomes and has been established as a major contributor in recent drought-induced plant mortality independent of other drivers associated with climate change. Despite this, few studies have isolated the physiological response of plant functioning to high VPD, thus limiting our understanding and ability to predict future impacts on terrestrial ecosystems. An abundance of evidence suggests that stomatal conductance declines under high VPD and transpiration increases in most species up until a given VPD threshold, leading to a cascade of subsequent impacts including reduced photosynthesis and growth, and higher risks of carbon starvation and hydraulic failure. Incorporation of photosynthetic and hydraulic traits in 'next-generation' land-surface models has the greatest potential for improved prediction of VPD responses at the plant- and global-scale, and will yield more mechanistic simulations of plant responses to a changing climate. By providing a fully integrated framework and evaluation of the impacts of high VPD on plant function, improvements in forecasting and long-term projections of climate impacts can be made.
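The driving quantity of the review, made concrete: vapor pressure deficit computed from air temperature and relative humidity, using the Tetens approximation for saturation vapor pressure (one common choice; coefficients vary slightly between references). The example values simply show VPD climbing steeply with warming at constant relative humidity, as the abstract notes.

```python
# Vapor pressure deficit from temperature and relative humidity.
import math

def vpd_kpa(temp_c, rh_percent):
    # Tetens approximation for saturation vapor pressure (kPa)
    e_sat = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))
    return e_sat * (1.0 - rh_percent / 100.0)

# Warming at constant RH raises VPD roughly exponentially:
for t in (20, 25, 30, 35):
    print(f"{t} C, 50% RH -> VPD = {vpd_kpa(t, 50):.2f} kPa")
```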

Journal ArticleDOI
TL;DR: This article presents the network slicing concept, with a particular focus on its application to 5G systems, and analyzes a proposal from ETSI to incorporate the capabilities of SDN into the NFV architecture.
Abstract: The fifth generation of mobile communications is anticipated to open up innovation opportunities for new industries such as vertical markets. However, these verticals give rise to myriad use cases with diverging requirements that future 5G networks have to efficiently support. Network slicing may be a natural solution to simultaneously accommodate, over a common network infrastructure, the wide range of services that vertical-specific use cases will demand. In this article, we present the network slicing concept, with a particular focus on its application to 5G systems. We start by summarizing the key aspects that enable the realization of so-called network slices. Then we give a brief overview of the SDN architecture proposed by the ONF and show that it provides tools to support slicing. We argue that although such an architecture paves the way for network slicing implementation, it lacks some essential capabilities that can be supplied by NFV. Hence, we analyze a proposal from ETSI to incorporate the capabilities of SDN into the NFV architecture. Additionally, we present an example scenario that combines SDN and NFV technologies to address the realization of network slices. Finally, we summarize the open research issues with the purpose of motivating new advances in this field.

Journal ArticleDOI
TL;DR: Dietary therapy had independent and rapid effects on microbiota composition, distinct from other stressor-induced changes, and effectively reduced inflammation, shedding light on Crohn disease treatments.

Journal ArticleDOI
TL;DR: It is argued that a deep network is best understood in terms of components used to design it—objective functions, architecture and learning rules—rather than unit-by-unit computation.
Abstract: Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.