
Journal ArticleDOI
TL;DR: The main objective of this paper is to provide an overview of the Internet of Things, its architectures, and its vital technologies and their uses in daily life; the manuscript aims to give new researchers a good grounding in the field and to facilitate efficient knowledge accumulation.
Abstract: One of the buzzwords in information technology is the Internet of Things (IoT). The future is the Internet of Things, which will transform real-world objects into intelligent virtual objects. The IoT aims to unify everything in our world under a common infrastructure, not only giving us control of the things around us but also keeping us informed of their state. In light of this, the present study addresses IoT concepts through a systematic review of scholarly research papers, corporate white papers, professional discussions with experts, and online databases. Moreover, this research article focuses on the definitions, geneses, basic requirements, characteristics, and aliases of the Internet of Things. The main objective of this paper is to provide an overview of the Internet of Things, its architectures, and vital technologies and their uses in our daily life. This manuscript should give new researchers who want to work on the Internet of Things a good grounding and facilitate efficient knowledge accumulation.

1,062 citations


Journal ArticleDOI
TL;DR: In this paper, an oxide-selection method was proposed to balance the optimization between sulfide adsorption and diffusion on the oxides; the results showed that better surface diffusion leads to higher deposition efficiency of sulfide species on electrodes.
Abstract: Lithium-sulfur batteries have attracted attention due to their six-fold specific energy compared with conventional lithium-ion batteries. Dissolution of lithium polysulfides, volume expansion of sulfur and uncontrollable deposition of lithium sulfide are three of the main challenges for this technology. State-of-the-art sulfur cathodes based on metal-oxide nanostructures can suppress the shuttle-effect and enable controlled lithium sulfide deposition. However, a clear mechanistic understanding and corresponding selection criteria for the oxides are still lacking. Herein, various nonconductive metal-oxide nanoparticle-decorated carbon flakes are synthesized via a facile biotemplating method. The cathodes based on magnesium oxide, cerium oxide and lanthanum oxide show enhanced cycling performance. Adsorption experiments and theoretical calculations reveal that polysulfide capture by the oxides is via monolayered chemisorption. Moreover, we show that better surface diffusion leads to higher deposition efficiency of sulfide species on electrodes. Hence, oxide selection is proposed to balance optimization between sulfide-adsorption and diffusion on the oxides.

1,062 citations


Journal Article
TL;DR: This work introduces generic notions of complexity for the two dominant frameworks considered in the literature: fixed-budget and fixed-confidence settings, and provides the first known distribution-dependent lower bound on the complexity that involves information-theoretic quantities and holds when m ≥ 1 under general assumptions.
Abstract: The stochastic multi-armed bandit model is a simple abstraction that has proven useful in many different contexts in statistics and machine learning. Whereas the achievable limit in terms of regret minimization is now well known, our aim is to contribute to a better understanding of the performance in terms of identifying the m best arms. We introduce generic notions of complexity for the two dominant frameworks considered in the literature: fixed-budget and fixed-confidence settings. In the fixed-confidence setting, we provide the first known distribution-dependent lower bound on the complexity that involves information-theoretic quantities and holds when m ≥ 1 under general assumptions. In the specific case of two-armed bandits, we derive refined lower bounds in both the fixed-confidence and fixed-budget settings, along with matching algorithms for Gaussian and Bernoulli bandit models. These results show in particular that the complexity of the fixed-budget setting may be smaller than the complexity of the fixed-confidence setting, contradicting the familiar behavior observed when testing fully specified alternatives. In addition, we provide improved sequential stopping rules that have guaranteed error probabilities and shorter average running times. The proofs rely on two technical results that are of independent interest: a deviation lemma for self-normalized sums (Lemma 7) and a novel change of measure inequality for bandit models (Lemma 1).
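A standard fixed-confidence baseline in this setting is successive elimination, which the refined bounds above are compared against. The sketch below is that generic baseline, not the paper's own algorithm; the function names and the exact confidence radius are our choices.

```python
import numpy as np

def successive_elimination(pull, n_arms, delta=0.01, max_pulls=20000):
    """Fixed-confidence best-arm identification by successive elimination:
    pull every active arm once per round, then discard any arm whose upper
    confidence bound falls below the best arm's lower confidence bound.
    Stops when one arm remains (or the pull budget runs out)."""
    active = list(range(n_arms))
    counts = np.zeros(n_arms)
    sums = np.zeros(n_arms)
    t = 0
    while len(active) > 1 and t < max_pulls:
        for a in list(active):
            sums[a] += pull(a)
            counts[a] += 1
            t += 1
        means = sums[active] / counts[active]
        # Hoeffding-style radius, union-bounded over arms and rounds
        rad = np.sqrt(np.log(4 * n_arms * counts[active] ** 2 / delta)
                      / (2 * counts[active]))
        best_lcb = np.max(means - rad)
        active = [a for a, m, r in zip(active, means, rad) if m + r >= best_lcb]
    return max(active, key=lambda a: sums[a] / counts[a])

rng = np.random.default_rng(0)
p = [0.1, 0.2, 0.9]                       # Bernoulli arm means; arm 2 is best
best = successive_elimination(lambda a: rng.binomial(1, p[a]), 3)
assert best == 2
```

With this large a gap the suboptimal arms are eliminated after a few hundred pulls; the distribution-dependent lower bounds in the paper quantify how the required sample size grows as the gaps shrink.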

1,061 citations


Journal ArticleDOI
Elena Aprile1, Jelle Aalbers2, F. Agostini, M. Alfonsi3, F. D. Amaro4, M. Anthony1, F. Arneodo5, P. Barrow6, Laura Baudis6, Boris Bauermeister7, M. L. Benabderrahmane5, T. Berger8, P. A. Breur2, April S. Brown2, Ethan Brown8, S. Bruenner9, Giacomo Bruno, Ran Budnik10, L. Bütikofer11, J. Calvén7, João Cardoso4, M. Cervantes12, D. Cichon9, D. Coderre11, Auke-Pieter Colijn2, Jan Conrad7, Jean-Pierre Cussonneau13, M. P. Decowski2, P. de Perio1, P. Di Gangi14, A. Di Giovanni5, Sara Diglio13, G. Eurin9, J. Fei15, A. D. Ferella7, A. Fieguth16, W. Fulgione, A. Gallo Rosso, Michelle Galloway6, F. Gao1, M. Garbini14, Robert Gardner17, C. Geis3, Luke Goetzke1, L. Grandi17, Z. Greene1, C. Grignon3, C. Hasterok9, E. Hogenbirk2, J. Howlett1, R. Itay10, B. Kaminsky11, Shingo Kazama6, G. Kessler6, A. Kish6, H. Landsman10, R. F. Lang12, D. Lellouch10, L. Levinson10, Qing Lin1, Sebastian Lindemann9, Manfred Lindner9, F. Lombardi15, J. A. M. Lopes4, A. Manfredini10, I. Mariș5, T. Marrodán Undagoitia9, Julien Masbou13, F. V. Massoli14, D. Masson12, D. Mayani6, M. Messina1, K. Micheneau13, A. Molinario, K. Morâ7, M. Murra16, J. Naganoma18, Kaixuan Ni15, Uwe Oberlack3, P. Pakarha6, Bart Pelssers7, R. Persiani13, F. Piastra6, J. Pienaar12, V. Pizzella9, M.-C. Piro8, Guillaume Plante1, N. Priel10, L. Rauch9, S. Reichard6, C. Reuter12, B. Riedel17, A. Rizzo1, S. Rosendahl16, N. Rupp9, R. Saldanha17, J.M.F. dos Santos4, Gabriella Sartorelli14, M. Scheibelhut3, S. Schindler3, J. Schreiner9, Marc Schumann11, L. Scotto Lavina19, M. Selvi14, P. Shagin18, E. Shockley17, Manuel Gameiro da Silva4, H. Simgen9, M. V. Sivers11, A. Stein20, S. Thapa17, Dominique Thers13, A. Tiseni2, Gian Carlo Trinchero, C. Tunnell17, M. Vargas16, N. Upole17, Hui Wang20, Zirui Wang, Yuehuan Wei6, Ch. Weinheimer16, J. Wulf6, J. Ye15, Yanxi Zhang1, T. Zhu1 
TL;DR: The first dark matter search results from XENON1T, a ∼2000-kg-target-mass dual-phase (liquid-gas) xenon time projection chamber in operation at the Laboratori Nazionali del Gran Sasso in Italy, are reported and a profile likelihood analysis shows that the data are consistent with the background-only hypothesis.
Abstract: We report the first dark matter search results from XENON1T, a ∼2000-kg-target-mass dual-phase (liquid-gas) xenon time projection chamber in operation at the Laboratori Nazionali del Gran Sasso in Italy and the first ton-scale detector of this kind. The blinded search used 34.2 live days of data acquired between November 2016 and January 2017. Inside the (1042±12)-kg fiducial mass and in the [5,40] keVnr energy range of interest for weakly interacting massive particle (WIMP) dark matter searches, the electronic recoil background was (1.93±0.25)×10⁻⁴ events/(kg×day×keVee), the lowest ever achieved in such a dark matter detector. A profile likelihood analysis shows that the data are consistent with the background-only hypothesis. We derive the most stringent exclusion limits on the spin-independent WIMP-nucleon interaction cross section for WIMP masses above 10 GeV/c², with a minimum of 7.7×10⁻⁴⁷ cm² for 35-GeV/c² WIMPs at 90% CL.

1,061 citations


Proceedings ArticleDOI
07 Dec 2015
TL;DR: An in-depth study on the properties of CNN features offline pre-trained on massive image data and the classification task on ImageNet shows that the proposed tracker outperforms the state of the art significantly.
Abstract: We propose a new approach for general object tracking with fully convolutional neural network. Instead of treating convolutional neural network (CNN) as a black-box feature extractor, we conduct in-depth study on the properties of CNN features offline pre-trained on massive image data and classification task on ImageNet. The discoveries motivate the design of our tracking system. It is found that convolutional layers in different levels characterize the target from different perspectives. A top layer encodes more semantic features and serves as a category detector, while a lower layer carries more discriminative information and can better separate the target from distracters with similar appearance. Both layers are jointly used with a switch mechanism during tracking. It is also found that for a tracking target, only a subset of neurons are relevant. A feature map selection method is developed to remove noisy and irrelevant feature maps, which can reduce computation redundancy and improve tracking accuracy. Extensive evaluation on the widely used tracking benchmark [36] shows that the proposed tracker outperforms the state of the art significantly.

1,061 citations


Journal ArticleDOI
TL;DR: To figure out whether diabetes is a risk factor influencing the progression and prognosis of 2019 novel coronavirus disease (COVID‐19), a large number of patients with a history of diabetes will be recruited for this study.
Abstract: Background To figure out whether diabetes is a risk factor influencing the progression and prognosis of 2019 novel coronavirus disease (COVID-19). Methods A total of 174 consecutive patients confirmed with COVID-19 were studied. Demographic data, medical history, symptoms and signs, laboratory findings, chest computed tomography (CT) as well as the treatment measures were collected and analysed. Results We found that COVID-19 patients without other comorbidities but with diabetes (n = 24) were at higher risk of severe pneumonia, release of tissue injury-related enzymes, excessive uncontrolled inflammation responses and hypercoagulable state associated with dysregulation of glucose metabolism. Furthermore, serum levels of inflammation-related biomarkers such as IL-6, C-reactive protein, serum ferritin and the coagulation index D-dimer were significantly higher. Conclusions Our data support the notion that diabetes should be considered as a risk factor for rapid progression and bad prognosis of COVID-19. More intensive attention should be paid to patients with diabetes, in case of rapid deterioration.

1,061 citations



Journal ArticleDOI
TL;DR: It is shown that penta-graphene, composed of only carbon pentagons and resembling Cairo pentagonal tiling, is dynamically, thermally, and mechanically stable, and exhibits negative Poisson's ratio, a large band gap, and an ultrahigh mechanical strength.
Abstract: A 2D metastable carbon allotrope, penta-graphene, composed entirely of carbon pentagons and resembling the Cairo pentagonal tiling, is proposed. State-of-the-art theoretical calculations confirm that the new carbon polymorph is not only dynamically and mechanically stable, but also can withstand temperatures as high as 1000 K. Due to its unique atomic configuration, penta-graphene has an unusual negative Poisson’s ratio and ultrahigh ideal strength that can even outperform graphene. Furthermore, unlike graphene that needs to be functionalized for opening a band gap, penta-graphene possesses an intrinsic quasi-direct band gap as large as 3.25 eV, close to that of ZnO and GaN. Equally important, penta-graphene can be exfoliated from T12-carbon. When rolled up, it can form pentagon-based nanotubes which are semiconducting, regardless of their chirality. When stacked in different patterns, stable 3D twin structures of T12-carbon are generated with band gaps even larger than that of T12-carbon. The versatility of penta-graphene and its derivatives are expected to have broad applications in nanoelectronics and nanomechanics.

1,060 citations


Journal ArticleDOI
Heng Li1
TL;DR: A new mapper, minimap, and a de novo assembler, miniasm, are presented for efficiently mapping and assembling SMRT and ONT reads without an error correction stage.
Abstract: Motivation: Single Molecule Real-Time (SMRT) sequencing technology and Oxford Nanopore technologies (ONT) produce reads over 10 kb in length, which have enabled high-quality genome assembly at an affordable cost. However, at present, long reads have an error rate as high as 10–15%. Complex and computationally intensive pipelines are required to assemble such reads. Results: We present a new mapper, minimap, and a de novo assembler, miniasm, for efficiently mapping and assembling SMRT and ONT reads without an error correction stage. They can often assemble a sequencing run of bacterial data into a single contig in a few minutes, and assemble 45-fold Caenorhabditis elegans data in 9 min, orders of magnitude faster than the existing pipelines, though the consensus sequence error rate is as high as raw reads. We also introduce a pairwise read mapping format and a graphical fragment assembly format, and demonstrate the interoperability between ours and current tools. Availability and implementation: https://github.com/lh3/minimap and https://github.com/lh3/miniasm Contact: hengli@broadinstitute.org Supplementary information: Supplementary data are available at Bioinformatics online.

1,060 citations


Journal ArticleDOI
TL;DR: Although diarrheal diseases are known to be a major burden in children, these first global and regional estimates of the disease burden of the most important foodborne bacterial, protozoal, and viral diseases demonstrate for the first time the importance of contaminated food as a cause.
Abstract: BACKGROUND: Foodborne diseases are important worldwide, resulting in considerable morbidity and mortality. To our knowledge, we present the first global and regional estimates of the disease burden of the most important foodborne bacterial, protozoal, and viral diseases. METHODS AND FINDINGS: We synthesized data on the number of foodborne illnesses, sequelae, deaths, and Disability Adjusted Life Years (DALYs), for all diseases with sufficient data to support global and regional estimates, by age and region. The data sources varied by pathogen and included systematic reviews, cohort studies, surveillance studies and other burden of disease assessments. We sought relevant data circa 2010, and included sources from 1990-2012. The number of studies per pathogen ranged from as few as 5 studies for bacterial intoxications through to 494 studies for diarrheal pathogens. To estimate mortality for Mycobacterium bovis infections and morbidity and mortality for invasive non-typhoidal Salmonella enterica infections, we excluded cases attributed to HIV infection. We excluded stillbirths in our estimates. We estimate that the 22 diseases included in our study resulted in two billion (95% uncertainty interval [UI] 1.5-2.9 billion) cases, over one million (95% UI 0.89-1.4 million) deaths, and 78.7 million (95% UI 65.0-97.7 million) DALYs in 2010. To estimate the burden due to contaminated food, we then applied proportions of infections that were estimated to be foodborne from a global expert elicitation. Waterborne transmission of disease was not included. We estimate that 29% (95% UI 23-36%) of cases caused by diseases in our study, or 582 million (95% UI 401-922 million), were transmitted by contaminated food, resulting in 25.2 million (95% UI 17.5-37.0 million) DALYs. Norovirus was the leading cause of foodborne illness causing 125 million (95% UI 70-251 million) cases, while Campylobacter spp. caused 96 million (95% UI 52-177 million) foodborne illnesses.
Of all foodborne diseases, diarrheal and invasive infections due to non-typhoidal S. enterica infections resulted in the highest burden, causing 4.07 million (95% UI 2.49-6.27 million) DALYs. Regionally, DALYs per 100,000 population were highest in the African region followed by the South East Asian region. Considerable burden of foodborne disease is borne by children less than five years of age. Major limitations of our study include data gaps, particularly in middle- and high-mortality countries, and uncertainty around the proportion of diseases that were foodborne. CONCLUSIONS: Foodborne diseases result in a large disease burden, particularly in children. Although it is known that diarrheal diseases are a major burden in children, we have demonstrated for the first time the importance of contaminated food as a cause. There is a need to focus food safety interventions on preventing foodborne diseases, particularly in low- and middle-income settings.

1,060 citations


Journal ArticleDOI
TL;DR: Reduced-dimensionality (quasi-2D) perovskite films are reported that exhibit improved stability while retaining the high performance of conventional three-dimensionalperovskites, and are achieved by the choice of stoichiometry in materials synthesis.
Abstract: Metal halide perovskites have rapidly advanced thin-film photovoltaic performance; as a result, the materials’ observed instabilities urgently require a solution. Using density functional theory (DFT), we show that a low energy of formation, exacerbated in the presence of humidity, explains the propensity of perovskites to decompose back into their precursors. We find, also using DFT, that intercalation of phenylethylammonium between perovskite layers introduces quantitatively appreciable van der Waals interactions. These drive an increased formation energy and should therefore improve material stability. Here we report reduced-dimensionality (quasi-2D) perovskite films that exhibit improved stability while retaining the high performance of conventional three-dimensional perovskites. Continuous tuning of the dimensionality, as assessed using photophysical studies, is achieved by the choice of stoichiometry in materials synthesis. We achieve the first certified hysteresis-free solar power conversion in a p...

Proceedings Article
05 Dec 2016
TL;DR: The interaction network is introduced, a model which can reason about how objects in complex systems interact, supporting dynamical predictions, as well as inferences about the abstract properties of the system, and is implemented using deep neural networks.
Abstract: Reasoning about objects, relations, and physics is central to human intelligence, and a key goal of artificial intelligence. Here we introduce the interaction network, a model which can reason about how objects in complex systems interact, supporting dynamical predictions, as well as inferences about the abstract properties of the system. Our model takes graphs as input, performs object- and relation-centric reasoning in a way that is analogous to a simulation, and is implemented using deep neural networks. We evaluate its ability to reason about several challenging physical domains: n-body problems, rigid-body collision, and non-rigid dynamics. Our results show it can be trained to accurately simulate the physical trajectories of dozens of objects over thousands of time steps, estimate abstract quantities such as energy, and generalize automatically to systems with different numbers and configurations of objects and relations. Our interaction network implementation is the first general-purpose, learnable physics engine, and a powerful general framework for reasoning about object and relations in a wide variety of complex real-world domains.
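The object- and relation-centric pass described above can be sketched in a few lines: a relation function maps each (sender, receiver) pair to an effect, effects are summed per receiver, and an object function updates each state. The tiny `f_rel`/`f_obj` below are illustrative placeholders, not the paper's learned deep networks, and all names are ours.

```python
import numpy as np

def interaction_step(states, edges, f_rel, f_obj):
    """One interaction-network-style step: compute an effect for every
    directed relation, aggregate effects per receiving object, then update
    each object's state from (state, aggregated effect)."""
    effects = np.zeros_like(states)
    for sender, receiver in edges:               # relation-centric pass
        effects[receiver] += f_rel(states[sender], states[receiver])
    # object-centric pass: update each state from its summed incoming effect
    return np.array([f_obj(x, e) for x, e in zip(states, effects)])

# Toy "physics": each state is a scalar position; the relation pulls the
# receiver toward the sender, and the object model applies the summed pull.
f_rel = lambda sender, receiver: 0.1 * (sender - receiver)
f_obj = lambda state, effect: state + effect
states = np.array([0.0, 10.0])
edges = [(0, 1), (1, 0)]                         # fully connected pair
nxt = interaction_step(states, edges, f_rel, f_obj)
assert np.allclose(nxt, [1.0, 9.0])              # the two bodies move together
```

In the paper both functions are neural networks trained end to end, and states/effects are vectors rather than scalars; the graph-in, graph-out structure is the same.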

Journal ArticleDOI
TL;DR: In a senior seminar on Globalization, Human Rights, and Citizenship, this 1948 document surprises and attracts students with its broad, progressive vision.
Abstract: In a senior seminar on Globalization, Human Rights, and Citizenship, this 1948 document surprises and attracts students with its broad, progressive vision.

Journal ArticleDOI
TL;DR: Among patients with the Dravet syndrome, cannabidiol resulted in a greater reduction in convulsive‐seizure frequency than placebo and was associated with higher rates of adverse events.
Abstract: Background The Dravet syndrome is a complex childhood epilepsy disorder that is associated with drug-resistant seizures and a high mortality rate. We studied cannabidiol for the treatment of drug-resistant seizures in the Dravet syndrome. Methods In this double-blind, placebo-controlled trial, we randomly assigned 120 children and young adults with the Dravet syndrome and drug-resistant seizures to receive either cannabidiol oral solution at a dose of 20 mg per kilogram of body weight per day or placebo, in addition to standard antiepileptic treatment. The primary end point was the change in convulsive-seizure frequency over a 14-week treatment period, as compared with a 4-week baseline period. Results The median frequency of convulsive seizures per month decreased from 12.4 to 5.9 with cannabidiol, as compared with a decrease from 14.9 to 14.1 with placebo (adjusted median difference between the cannabidiol group and the placebo group in change in seizure frequency, −22.8 percentage points; 95% confidence interval...

Posted Content
TL;DR: This work develops a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors, and makes several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training.
Abstract: With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.
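Modification (iii) above, updating the discriminator with a history of refined images, amounts to a small replay buffer: each discriminator minibatch mixes freshly refined images with images drawn from a pool of earlier refiner outputs. The sketch below shows that bookkeeping only; the class and method names are ours, and the buffer size is arbitrary.

```python
import random

class RefinedImageHistory:
    """Buffer of previously refined images, so the discriminator sees half
    fresh and half historical refiner outputs, which stabilizes training."""
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.buffer = []

    def sample_batch(self, fresh, rng=random):
        half = len(fresh) // 2
        if len(self.buffer) < half:          # buffer not warm yet: all fresh
            batch = list(fresh)
        else:                                # half fresh, half from history
            batch = list(fresh[:half]) + rng.sample(self.buffer, half)
        # store current images, evicting random old entries once full
        for img in fresh:
            if len(self.buffer) < self.capacity:
                self.buffer.append(img)
            else:
                self.buffer[rng.randrange(self.capacity)] = img
        return batch

hist = RefinedImageHistory(capacity=8)
b1 = hist.sample_batch(["r1", "r2", "r3", "r4"])   # first call: all fresh
b2 = hist.sample_batch(["r5", "r6", "r7", "r8"])   # now mixed with history
assert len(b1) == 4 and len(b2) == 4
assert len(hist.buffer) == 8
```

Strings stand in for image tensors here; in practice the buffer would hold arrays and the discriminator update would consume the mixed batch.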

Posted Content
TL;DR: In this paper, the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability was shown.
Abstract: Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, albeit being quasi-imperceptible to the human eye. We further empirically analyze these universal perturbations and show, in particular, that they generalize very well across neural networks. The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers. It further outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.
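The algorithm's outer loop can be illustrated on a linear stand-in classifier sign(w·x + b), for which the minimal boundary-crossing step has a closed form (the paper uses deep networks and a DeepFool-style inner step instead). Everything below, including the one-class restriction that lets a single direction fool every sample, is our simplification.

```python
import numpy as np

def universal_perturbation(X, w, b, xi=2.0, n_epochs=5):
    """Sketch of the universal-perturbation loop: for each sample not yet
    fooled, add the minimal shift that pushes it across the decision
    boundary, then project the accumulated v onto the l2 ball of radius xi."""
    orig = np.sign(X @ w + b)
    v = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for x, y0 in zip(X, orig):
            if np.sign((x + v) @ w + b) == y0:     # not yet fooled
                f = (x + v) @ w + b
                v = v - 1.05 * f / (w @ w) * w     # minimal flip + 5% overshoot
                if np.linalg.norm(v) > xi:         # project onto the l2 ball
                    v *= xi / np.linalg.norm(v)
    fooling_rate = np.mean(np.sign((X + v) @ w + b) != orig)
    return v, fooling_rate

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
w, b = rng.normal(size=10), 0.0
X = X[(X @ w + b) > 0]        # one class only, so one direction can fool it
v, rate = universal_perturbation(X, w, b)
assert np.linalg.norm(v) <= 2.0 + 1e-9
assert rate > 0.5             # most samples misclassified by one shared v
```

The surprising result in the paper is that the same construction, run against deep networks on natural images, yields a quasi-imperceptible v that transfers across architectures.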

Proceedings ArticleDOI
01 Oct 2017
TL;DR: In this paper, a regional multi-person pose estimation (RMPE) framework is proposed to facilitate pose estimation in the presence of inaccurate human bounding boxes, which achieves state-of-the-art performance on the MPII dataset.
Abstract: Multi-person pose estimation in the wild is challenging. Although state-of-the-art human detectors have demonstrated good performance, small errors in localization and recognition are inevitable. These errors can cause failures for a single-person pose estimator (SPPE), especially for methods that solely depend on human detection results. In this paper, we propose a novel regional multi-person pose estimation (RMPE) framework to facilitate pose estimation in the presence of inaccurate human bounding boxes. Our framework consists of three components: Symmetric Spatial Transformer Network (SSTN), Parametric Pose Non-Maximum-Suppression (NMS), and Pose-Guided Proposals Generator (PGPG). Our method is able to handle inaccurate bounding boxes and redundant detections, allowing it to achieve 76.7 mAP on the MPII (multi-person) dataset [3]. Our model and source codes are made publicly available.

Journal ArticleDOI
TL;DR: After high-risk or moderate-risk exposure to Covid-19, hydroxychloroquine did not prevent illness compatible with Covid-19 or confirmed infection when used as postexposure prophylaxis within 4 days after exposure.
Abstract: Background Coronavirus disease 2019 (Covid-19) occurs after exposure to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). For persons who are exposed, the standard of care is o...

Book
16 Apr 2016
TL;DR: In this paper, the Kimberly Nixon Case Statement for Social Service Agencies and Transsexual/Transgendered Organisations on Service Delivery to Transsexual and Transvestite Prostitutes is discussed.
Abstract: Introduction Making the Lives of Transsexual People Visible: Addressing the Politics of Social Erasure Sex Change, Social Change: Reflections on Identity and Institutions Transsexuals Behind Bars Beyond Image Content: Examining Transsexuals' Access to the Media Inclusive Pedagogy in the Women's Studies Classroom: Teaching the Kimberly Nixon Case Statement for Social Service Agencies and Transsexual/Transgendered Organisations on Service Delivery to Transsexual and Transvestite Prostitutes Interview with Mirha-Soleil Ross Against Transgender Rights: Understanding the Imperialism of Contemporary Transgender Politics Conclusion.

Journal ArticleDOI
TL;DR: In this article, a prospective trial is reported involving women with hormone-receptor-positive, human epidermal growth factor receptor type 2 (HER2)-negative, axillary node-negative breast cancer with tumors of 1.1 to 5.0 cm in the greatest dimension (or 0.6 to 1.0 cm in the greatest dimension and intermediate or high tumor grade) who met established guidelines for the consideration of adjuvant chemotherapy on the basis of clinicopathologic features.
Abstract: Background Prior studies with the use of a prospective–retrospective design including archival tumor samples have shown that gene-expression assays provide clinically useful prognostic information. However, a prospectively conducted study in a uniformly treated population provides the highest level of evidence supporting the clinical validity and usefulness of a biomarker. Methods We performed a prospective trial involving women with hormone-receptor–positive, human epidermal growth factor receptor type 2 (HER2)–negative, axillary node–negative breast cancer with tumors of 1.1 to 5.0 cm in the greatest dimension (or 0.6 to 1.0 cm in the greatest dimension and intermediate or high tumor grade) who met established guidelines for the consideration of adjuvant chemotherapy on the basis of clinicopathologic features. A reverse-transcriptase–polymerase-chain-reaction assay of 21 genes was performed on the paraffin-embedded tumor tissue, and the results were used to calculate a score indicating the risk of breast-...

Journal ArticleDOI
TL;DR: Despite increased attention on assessment and management, pain continues to be a prevalent symptom in patients with cancer and in the upcoming decade, the authors need to overcome barriers toward effective pain treatment and develop and implement interventions to optimally manage pain in patients with cancer.

Journal ArticleDOI
TL;DR: This article reviews the works listed in the literature with recent updates regarding the toxicity of lead and focuses on toxic effects of lead on the renal, reproductive and nervous system.
Abstract: Lead toxicity is an important environmental disease and its effects on the human body are devastating. There is almost no function in the human body which is not affected by lead toxicity. Though in countries like US and Canada the use of lead has been controlled up to a certain extent, it is still used vehemently in the developing countries. This is primarily because lead bears unique physical and chemical properties that make it suitable for a large number of applications for which humans have exploited its benefits from historical times and thus it has become a common environmental pollutant. Lead is highly persistent in the environment and because of its continuous use its levels rise in almost every country, posing serious threats. This article reviews the works listed in the literature with recent updates regarding the toxicity of lead. Focus is also on toxic effects of lead on the renal, reproductive and nervous system. Finally the techniques available for treating lead toxicity are presented with some recent updates.

Proceedings Article
05 Nov 2016
TL;DR: The authors prune filters that are identified as having a small effect on a CNN's output accuracy; by removing whole filters together with their connecting feature maps, computation costs are reduced significantly.
Abstract: The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks.
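The core operation, ranking a layer's filters by magnitude and removing whole filters rather than individual weights, can be sketched in a few lines of NumPy. This is an illustrative stand-in (function names and the L1 ranking criterion as written here are ours, though L1-norm ranking is the natural magnitude criterion for whole filters); in a real network the corresponding input channels of the next layer must be pruned as well.

```python
import numpy as np

def prune_filters(weights, prune_ratio=0.25):
    """Rank the filters of one conv layer by L1 norm and keep the largest.

    weights: array of shape (n_filters, in_channels, kH, kW).
    Returns the pruned weight tensor and the sorted indices of kept filters.
    """
    n = weights.shape[0]
    l1 = np.abs(weights).reshape(n, -1).sum(axis=1)   # L1 norm per filter
    n_keep = n - int(n * prune_ratio)
    keep = np.sort(np.argsort(l1)[::-1][:n_keep])     # largest-norm filters
    return weights[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
w[2] *= 1e-3                       # make filter 2 near-zero so it is pruned
pruned, kept = prune_filters(w, prune_ratio=0.25)
assert pruned.shape == (6, 3, 3, 3)
assert 2 not in kept
```

Because entire filters disappear, the remaining tensor stays dense, which is exactly why this approach needs no sparse-convolution support and works with ordinary dense BLAS routines.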

Proceedings Article
01 Jan 2017
TL;DR: The Deep Generative Replay is proposed, a novel framework with a cooperative dual model architecture consisting of a deep generative model ("generator") and a task solving model ("solver"), with only these two models, training data for previous tasks can easily be sampled and interleaved with those for a new task.
Abstract: Attempts to train a comprehensive artificial intelligence capable of solving multiple tasks have been impeded by a chronic problem called catastrophic forgetting. Although simply replaying all previous data alleviates the problem, it requires large memory and, even worse, is often infeasible in real-world applications where access to past data is limited. Inspired by the generative nature of the hippocampus as a short-term memory system in primate brain, we propose the Deep Generative Replay, a novel framework with a cooperative dual model architecture consisting of a deep generative model (“generator”) and a task solving model (“solver”). With only these two models, training data for previous tasks can easily be sampled and interleaved with those for a new task. We test our methods in several sequential learning settings involving image classification tasks.
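The replay loop has a simple shape: before each new task, draw pseudo-samples of old tasks from the current generator, mix them into the new task's data, and retrain both solver and generator on the mixture. In the sketch below a stored-sample array stands in for the trained generative model (real Deep Generative Replay trains a GAN, and the replay ratio and names here are ours).

```python
import numpy as np

def train_with_replay(tasks, replay_ratio=0.5, rng=None):
    """Generative-replay training loop, with a stored-sample stand-in for
    the generator. Each task is an array of training examples."""
    rng = rng or np.random.default_rng(0)
    generator_memory = None      # stands in for the trained deep generator
    for data in tasks:
        if generator_memory is not None:
            n_replay = int(len(data) * replay_ratio)
            # "generate" pseudo-samples of past tasks from the generator
            idx = rng.integers(0, len(generator_memory), size=n_replay)
            mixed = np.concatenate([data, generator_memory[idx]])
        else:
            mixed = data
        # ... fit the solver on `mixed`, then refit the generator on it ...
        generator_memory = mixed
    return generator_memory

t1 = np.ones((10, 2))            # task 1 data
t2 = np.zeros((10, 2))           # task 2 data
final = train_with_replay([t1, t2])
assert final.shape == (15, 2)                # 10 new + 5 replayed examples
assert np.any(final.sum(axis=1) == 2.0)      # task-1 samples survive via replay
```

The point of the dual architecture is exactly what the assert shows: after training on task 2, data resembling task 1 is still present in the training stream without storing the original task-1 dataset.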

Journal ArticleDOI
24 Dec 2015-Nature
TL;DR: This demonstration could represent the beginning of an era of chip-scale electronic–photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers.
Abstract: An electronic–photonic microprocessor chip manufactured using a conventional microelectronics foundry process is demonstrated; the chip contains 70 million transistors and 850 photonic components and directly uses light to communicate to other chips. The rapid transfer of data between chips in computer systems and data centres has become one of the bottlenecks in modern information processing. One way of increasing speeds is to use optical connections rather than electrical wires and the past decade has seen significant efforts to develop silicon-based nanophotonic approaches to integrate such links within silicon chips, but incompatibility between the manufacturing processes used in electronics and photonics has proved a hindrance. Now Chen Sun et al. describe a 'system on a chip' microprocessor that successfully integrates electronics and photonics yet is produced using standard microelectronic chip fabrication techniques. The resulting microprocessor combines 70 million transistors and 850 photonic components and can communicate optically with the outside world. This result promises a way forward for new fast, low-power computing systems architectures. Data transport across short electrical wires is limited by both bandwidth and power density, which creates a performance bottleneck for semiconductor microchips in modern computer systems—from mobile phones to large-scale data centres. These limitations can be overcome [1,2,3] by using optical communications based on chip-scale electronic–photonic systems [4,5,6,7] enabled by silicon-based nanophotonic devices [8]. However, combining electronics and photonics on the same chip has proved challenging, owing to microchip manufacturing conflicts between electronics and photonics. Consequently, current electronic–photonic chips [9,10,11] are limited to niche manufacturing processes and include only a few optical devices alongside simple circuits.
Here we report an electronic–photonic system on a single chip integrating over 70 million transistors and 850 photonic components that work together to provide logic, memory, and interconnect functions. This system is a realization of a microprocessor that uses on-chip photonic devices to directly communicate with other chips using light. To integrate electronics and photonics at the scale of a microprocessor chip, we adopt a ‘zero-change’ approach to the integration of photonics. Instead of developing a custom process to enable the fabrication of photonics [12], which would complicate or eliminate the possibility of integration with state-of-the-art transistors at large scale and at high yield, we design optical devices using a standard microelectronics foundry process that is used for modern microprocessors [13,14,15,16]. This demonstration could represent the beginning of an era of chip-scale electronic–photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers.

Journal ArticleDOI
TL;DR: An overview of the miRNA pathway is presented, covering biogenesis routes, biological roles, and clinical approaches, including clinical diagnostics and therapeutic targets.

Proceedings Article
17 Jun 2020
TL;DR: In this paper, the authors propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives.
Abstract: Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives. We analyze Siren activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how Sirens can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine Sirens with hypernetworks to learn priors over the space of Siren functions.
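A single layer of such a network computes y = sin(ω₀·(Wx + b)). The sketch below is a minimal NumPy illustration of one sinusoidal layer with the frequency-scaled uniform initialization the abstract alludes to; ω₀ = 30 and the exact bounds follow the commonly cited Siren scheme, but treat the constants here as assumptions rather than a definitive implementation.

```python
import numpy as np

class SineLayer:
    """One Siren-style layer: y = sin(omega0 * (W x + b)).

    Hidden-layer weights drawn from U(-sqrt(6/fan_in)/omega0,
    sqrt(6/fan_in)/omega0); the first layer uses U(-1/fan_in, 1/fan_in).
    """
    def __init__(self, fan_in, fan_out, omega0=30.0, is_first=False, seed=0):
        rng = np.random.default_rng(seed)
        bound = 1.0 / fan_in if is_first else np.sqrt(6.0 / fan_in) / omega0
        self.W = rng.uniform(-bound, bound, size=(fan_out, fan_in))
        self.b = np.zeros(fan_out)
        self.omega0 = omega0

    def __call__(self, x):
        # x: (batch, fan_in) -> (batch, fan_out)
        return np.sin(self.omega0 * (x @ self.W.T + self.b))

layer = SineLayer(2, 16, is_first=True)
coords = np.random.default_rng(1).uniform(-1.0, 1.0, size=(5, 2))
features = layer(coords)
```

Because sin is smooth, derivatives of a stack of such layers are themselves Siren-like networks, which is what lets these models fit a signal's spatial and temporal derivatives.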

Posted Content
TL;DR: A new training methodology for generative adversarial networks is described: starting from a low resolution, new layers that model increasingly fine details are added as training progresses, allowing for images of unprecedented quality.
Abstract: We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
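When a new, higher-resolution layer is added, its output is typically blended smoothly with the upsampled output of the existing network so training is not destabilized. The snippet below is a hedged NumPy sketch of that fade-in blend (function names and the nearest-neighbour upsampling are illustrative assumptions, not the paper's code):

```python
import numpy as np

def upsample2x(img):
    """Nearest-neighbour 2x upsampling of an (H, W, C) image."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def fade_in(old_rgb, new_rgb, alpha):
    """Blend the old low-res output (upsampled) with the new layer's output.

    alpha ramps from 0 to 1 during training: at 0 only the established
    low-resolution path is used; at 1 the new high-resolution layers
    fully take over.
    """
    return (1.0 - alpha) * upsample2x(old_rgb) + alpha * new_rgb

old = np.ones((4, 4, 3))      # output of the established 4x4 stage
new = np.zeros((8, 8, 3))     # output of the freshly added 8x8 stage
blended = fade_in(old, new, alpha=0.25)
print(blended.shape)  # (8, 8, 3)
```

The same blending is mirrored in the discriminator, so both networks grow in lockstep as training progresses.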

Journal ArticleDOI
TL;DR: The data and modelling methods presented show potential as a means of performing ingredient safety assessments for personal care and cosmetics products; their robustness and ability to estimate aggregate consumer product exposure are demonstrated.

Book
19 Oct 2017
TL;DR: Chapters include an overview by Ralph B. D'Agostino and Michael A. Stephens, graphical analysis and tests of chi-squared type by David S. Moore, tests based on EDF statistics, and tests for the normal distribution.
Abstract: "a comprehensive book, well presented and clearly written." The New Zealand Statistician. "Chapters contain some theory but they are mainly directed toward application with some numerical illustrations which generally use simulated sets to be found in the Appendix. The level of exposition is generally clear and direct; the book should prove useful to practising statisticians." Short Book Reviews (International Statistical Institute). "the authors have carried out the helpful task of bringing together into a useful reference volume a large amount of material on an interesting topic which has previously been scattered throughout the literature." Royal Statistical Society.