
Showing papers by "Stevens Institute of Technology published in 2020"


Journal ArticleDOI
TL;DR: In this paper, the authors summarize the experimental findings for various classes of solid electrolytes and relate them to computational predictions, with the aim of providing a deeper understanding of the interfacial reactions and insight for the future design and engineering of interfaces in SSBs.
Abstract: Solid-state batteries (SSBs) using a solid electrolyte show potential for providing improved safety as well as higher energy and power density compared with conventional Li-ion batteries. However, two critical bottlenecks remain: the development of solid electrolytes with ionic conductivities comparable to or higher than those of conventional liquid electrolytes and the creation of stable interfaces between SSB components, including the active material, solid electrolyte and conductive additives. Although the first goal has been achieved in several solid ionic conductors, the high impedance at various solid/solid interfaces remains a challenge. Recently, computational models based on ab initio calculations have successfully predicted the stability of solid electrolytes in various systems. In addition, a large amount of experimental data has been accumulated for different interfaces in SSBs. In this Review, we summarize the experimental findings for various classes of solid electrolytes and relate them to computational predictions, with the aim of providing a deeper understanding of the interfacial reactions and insight for the future design and engineering of interfaces in SSBs. We find that, in general, the electrochemical stability and interfacial reaction products can be captured with a small set of chemical and physical principles. The reliable operation of solid-state batteries requires stable or passivating interfaces between solid components. In this Review, we discuss models for interfacial reactions and relate the predictions to experimental findings, aiming to provide a deeper understanding of interface stability.

521 citations


Journal ArticleDOI
TL;DR: Results show that intelligent reflecting surface (IRS) can help create effective virtual line-of-sight (LOS) paths and thus substantially improve robustness against blockages in mmWave communications.
Abstract: Millimeter wave (MmWave) communications is capable of supporting multi-gigabit wireless access thanks to its abundant spectrum resource. However, severe path loss and high directivity make it vulnerable to blockage events, which can be frequent in indoor and dense urban environments. To address this issue, in this paper, we introduce intelligent reflecting surface (IRS) as a new technology to provide effective reflected paths to enhance the coverage of mmWave signals. In this framework, we study joint active and passive precoding design for IRS-assisted mmWave systems, where multiple IRSs are deployed to assist the data transmission from a base station (BS) to a single-antenna receiver. Our objective is to maximize the received signal power by jointly optimizing the BS's transmit precoding vector and IRSs’ phase shift coefficients. Although such an optimization problem is generally non-convex, we show that, by exploiting some important characteristics of mmWave channels, an optimal closed-form solution can be derived for the single IRS case and a near-optimal analytical solution can be obtained for the multi-IRS case. Our analysis reveals that the received signal power increases quadratically with the number of reflecting elements for both the single IRS and multi-IRS cases. Simulation results are included to verify the optimality and near-optimality of our proposed solutions. Results also show that IRSs can help create effective virtual line-of-sight (LOS) paths and thus substantially improve robustness against blockages in mmWave communications.

391 citations
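For the single-IRS, single-antenna case above, the optimal passive beamforming has a simple closed form: each reflecting element's phase shift co-phases its cascaded channel, so the reflected paths add coherently and the received power grows as the square of the element count. A minimal sketch with an illustrative unit-magnitude channel model (not the paper's simulation setup):

```python
import cmath, random

random.seed(0)

def received_power(n_elements):
    """Received power through an IRS when each element's phase shift is
    chosen to co-phase its reflected path (the closed-form optimum for the
    single-IRS, single-antenna link). Channel gains here are illustrative
    unit-magnitude random phasors, not a full mmWave channel model."""
    # Cascaded per-element channels h_k = (BS -> IRS element k) * (element k -> receiver)
    h = [cmath.rect(1.0, random.uniform(0, 2 * cmath.pi)) for _ in range(n_elements)]
    # Optimal passive beamforming: each phase shift cancels its element's channel phase
    theta = [-cmath.phase(hk) for hk in h]
    combined = sum(hk * cmath.exp(1j * tk) for hk, tk in zip(h, theta))
    return abs(combined) ** 2

# With unit-magnitude element channels, co-phasing yields power exactly N^2,
# illustrating the quadratic scaling the paper derives
assert abs(received_power(8) - 64.0) < 1e-9
assert abs(received_power(32) - 1024.0) < 1e-9
```

Doubling the number of reflecting elements quadruples the received power in this regime, which is the scaling behavior the analysis in the paper establishes.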


Posted Content
TL;DR: A framework for tightly-coupled lidar inertial odometry via smoothing and mapping, LIO-SAM, that achieves highly accurate, real-time mobile robot trajectory estimation and map-building and an efficient sliding window approach that registers a new keyframe to a fixed-size set of prior "sub-keyframes."
Abstract: We propose a framework for tightly-coupled lidar inertial odometry via smoothing and mapping, LIO-SAM, that achieves highly accurate, real-time mobile robot trajectory estimation and map-building. LIO-SAM formulates lidar-inertial odometry atop a factor graph, allowing a multitude of relative and absolute measurements, including loop closures, to be incorporated from different sources as factors into the system. The estimated motion from inertial measurement unit (IMU) pre-integration de-skews point clouds and produces an initial guess for lidar odometry optimization. The obtained lidar odometry solution is used to estimate the bias of the IMU. To ensure high performance in real-time, we marginalize old lidar scans for pose optimization, rather than matching lidar scans to a global map. Scan-matching at a local scale instead of a global scale significantly improves the real-time performance of the system, as does the selective introduction of keyframes, and an efficient sliding window approach that registers a new keyframe to a fixed-size set of prior ``sub-keyframes.'' The proposed method is extensively evaluated on datasets gathered from three platforms over various scales and environments.

379 citations


Proceedings ArticleDOI
24 Oct 2020
TL;DR: In this article, a framework for tightly-coupled lidar inertial odometry via smoothing and mapping, LIO-SAM, is proposed for real-time mobile robot trajectory estimation and map-building.
Abstract: We propose a framework for tightly-coupled lidar inertial odometry via smoothing and mapping, LIO-SAM, that achieves highly accurate, real-time mobile robot trajectory estimation and map-building. LIO-SAM formulates lidar-inertial odometry atop a factor graph, allowing a multitude of relative and absolute measurements, including loop closures, to be incorporated from different sources as factors into the system. The estimated motion from inertial measurement unit (IMU) pre-integration de-skews point clouds and produces an initial guess for lidar odometry optimization. The obtained lidar odometry solution is used to estimate the bias of the IMU. To ensure high performance in real-time, we marginalize old lidar scans for pose optimization, rather than matching lidar scans to a global map. Scan-matching at a local scale instead of a global scale significantly improves the real-time performance of the system, as does the selective introduction of keyframes, and an efficient sliding window approach that registers a new keyframe to a fixed-size set of prior "sub-keyframes." The proposed method is extensively evaluated on datasets gathered from three platforms over various scales and environments.

337 citations
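The selective keyframing and fixed-size sub-keyframe window described above can be sketched in a few lines. The window size and the distance-based keyframe test below are illustrative stand-ins for LIO-SAM's actual thresholds, and poses are reduced to 1-D for brevity:

```python
from collections import deque

class KeyframeWindow:
    """Toy sketch of LIO-SAM's sliding window: a new keyframe is registered
    against a fixed-size set of prior sub-keyframes instead of a global map.
    Names and the distance-based keyframe test are illustrative, not taken
    from the paper's code."""
    def __init__(self, window_size=5, keyframe_gap=1.0):
        self.window = deque(maxlen=window_size)  # old keyframes fall out (marginalized)
        self.keyframe_gap = keyframe_gap

    def maybe_add(self, pose):
        # Selective keyframing: accept only if the platform moved far enough
        if not self.window or abs(pose - self.window[-1]) >= self.keyframe_gap:
            self.window.append(pose)
            return True
        return False

w = KeyframeWindow(window_size=3)
for p in [0.0, 0.4, 1.2, 2.5, 3.9]:
    w.maybe_add(p)
# Small motions are skipped, and the window keeps only the latest 3 keyframes
assert list(w.window) == [1.2, 2.5, 3.9]
```

Bounding the registration target to a fixed-size local window, rather than a growing global map, is what keeps per-scan cost constant and the system real-time.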


Journal ArticleDOI
TL;DR: Simulation results show that the proposed method can provide an accurate channel estimate and achieve a substantial training overhead reduction and the inherent sparsity in mmWave channels is exploited.
Abstract: In this letter, we consider channel estimation for intelligent reflecting surface (IRS)-assisted millimeter wave (mmWave) systems, where an IRS is deployed to assist the data transmission from the base station (BS) to a user. It is shown that for the purpose of joint active and passive beamforming, the knowledge of a large-size cascade channel matrix needs to be acquired. To reduce the training overhead, the inherent sparsity in mmWave channels is exploited. By utilizing properties of Khatri-Rao and Kronecker products, we find a sparse representation of the cascade channel and convert cascade channel estimation into a sparse signal recovery problem. Simulation results show that our proposed method can provide an accurate channel estimate and achieve a substantial training overhead reduction.

327 citations
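The Khatri-Rao (column-wise Kronecker) product at the heart of the sparse representation above is easy to state concretely. A minimal pure-Python sketch on small matrices (the channel-estimation pipeline itself is not reproduced here):

```python
def khatri_rao(A, B):
    """Khatri-Rao product: for two matrices with the same number of columns,
    column k of the result is the Kronecker product of column k of A with
    column k of B. Matrices are represented as lists of rows."""
    cols = len(A[0])
    assert len(B[0]) == cols, "column counts must match"
    # Row (i, j) of the result holds A[i][k] * B[j][k] in column k
    return [[ra[c] * rb[c] for c in range(cols)]
            for ra in A for rb in B]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
# Result has rows(A) * rows(B) rows and the same number of columns
assert khatri_rao(A, B) == [[0, 2], [1, 2], [0, 4], [3, 4]]
```

This structure is what lets the cascade channel be written as a known (Khatri-Rao/Kronecker-structured) dictionary times a sparse vector, turning estimation into standard sparse recovery.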


Proceedings Article
30 Apr 2020
TL;DR: In this paper, the authors analyzed the convergence of Federated Averaging on non-iid data and established a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the number of SGDs.
Abstract: Federated learning enables a large number of edge computing devices to jointly learn a model without data sharing. As a leading algorithm in this setting, Federated Averaging (FedAvg) runs Stochastic Gradient Descent (SGD) in parallel on a small subset of the total devices and averages the sequences only once in a while. Despite its simplicity, it lacks theoretical guarantees under realistic settings. In this paper, we analyze the convergence of FedAvg on non-iid data and establish a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the number of SGDs. Importantly, our bound demonstrates a trade-off between communication efficiency and convergence rate. As user devices may be disconnected from the server, we relax the assumption of full device participation to partial device participation and study different averaging schemes; a low device participation rate can be achieved without severely slowing down the learning. Our results indicate that heterogeneity of data slows down the convergence, which matches empirical observations. Furthermore, we provide a necessary condition for FedAvg on non-iid data: the learning rate $\eta$ must decay, even if full gradients are used; otherwise, the solution will be $\Omega(\eta)$ away from the optimal.

307 citations
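The FedAvg loop analyzed above (parallel local SGD steps, periodic server averaging, decaying learning rate) can be illustrated on a toy non-iid problem. This is a sketch of the setting, not the paper's experiments: each device's local objective is a 1-D quadratic with a different optimum, so the global optimum is the mean of the local ones.

```python
# Toy FedAvg on non-iid one-dimensional quadratics f_k(w) = (w - c_k)^2 / 2.
# Each device's local optimum c_k differs (non-iid data); the global
# optimum of the averaged objective is the mean of the c_k.
centers = [0.0, 2.0, 4.0, 6.0]
optimum = sum(centers) / len(centers)

def fedavg(rounds, local_steps):
    w = 10.0                                  # shared model, initialized far off
    for t in range(rounds):
        eta = 1.0 / (t + 2)                   # decaying learning rate, as the paper's necessary condition requires
        local_models = []
        for c in centers:                     # full device participation
            wk = w
            for _ in range(local_steps):      # local gradient steps on f_k
                wk -= eta * (wk - c)
            local_models.append(wk)
        w = sum(local_models) / len(local_models)  # server averages once in a while
    return w

# The averaged model approaches the global optimum mean(c_k) = 3.0
assert abs(fedavg(200, 3) - optimum) < 1e-2
```

More local steps per round trade communication for computation, which is the communication-efficiency/convergence trade-off the bound in the paper makes precise.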


Journal ArticleDOI
TL;DR: In this paper, the authors assess the benefits of decentralized finance, identify existing business models, and evaluate potential challenges and limits, highlighting both the promise and the challenges of decentralized business models.

275 citations


Journal ArticleDOI
TL;DR: Significant inter-patient variability is found in the composition and functional programs of ascites cells, including immunomodulatory fibroblast sub-populations and dichotomous macrophage populations, which contributes to resolving the HGSOC landscape and provides a resource for the development of novel therapeutic approaches.
Abstract: Malignant abdominal fluid (ascites) frequently develops in women with advanced high-grade serous ovarian cancer (HGSOC) and is associated with drug resistance and a poor prognosis1. To comprehensively characterize the HGSOC ascites ecosystem, we used single-cell RNA sequencing to profile ~11,000 cells from 22 ascites specimens from 11 patients with HGSOC. We found significant inter-patient variability in the composition and functional programs of ascites cells, including immunomodulatory fibroblast sub-populations and dichotomous macrophage populations. We found that the previously described immunoreactive and mesenchymal subtypes of HGSOC, which have prognostic implications, reflect the abundance of immune infiltrates and fibroblasts rather than distinct subsets of malignant cells2. Malignant cell variability was partly explained by heterogeneous copy number alteration patterns or expression of a stemness program. Malignant cells shared expression of inflammatory programs that were largely recapitulated in single-cell RNA sequencing of ~35,000 cells from additionally collected samples, including three ascites, two primary HGSOC tumors and three patient ascites-derived xenograft models. Inhibition of the JAK/STAT pathway, which was expressed in both malignant cells and cancer-associated fibroblasts, had potent anti-tumor activity in primary short-term cultures and patient-derived xenograft models. Our work contributes to resolving the HGSOC landscape3-5 and provides a resource for the development of novel therapeutic approaches.

221 citations


Journal ArticleDOI
TL;DR: Clinicians as the primary users of AI systems in health care are focused on and factors shaping trust between clinicians and AI are presented, highlighting critical challenges related to trust that should be considered during the development of any AI system for clinical use.
Abstract: Artificial intelligence (AI) can transform health care practices with its increasing ability to translate the uncertainty and complexity in data into actionable, though imperfect, clinical decisions or suggestions. In the evolving relationship between humans and AI, trust is the one mechanism that shapes clinicians' use and adoption of AI. Trust is a psychological mechanism to deal with the uncertainty between what is known and unknown. Several research studies have highlighted the need for improving AI-based systems and enhancing their capabilities to help clinicians. However, assessing the magnitude and impact of human trust in AI technology demands substantial attention. Will a clinician trust an AI-based system? What are the factors that influence human trust in AI? Can trust in AI be optimized to improve decision-making processes? In this paper, we focus on clinicians as the primary users of AI systems in health care and present factors shaping trust between clinicians and AI. We highlight critical challenges related to trust that should be considered during the development of any AI system for clinical use.

202 citations


Journal ArticleDOI
TL;DR: This study demonstrates the effectiveness of deep transfer learning techniques for the identification of COVID-19 cases using CXR images.
Abstract: Background The novel coronavirus disease 2019 (COVID-19) constitutes a public health emergency globally. The numbers of infected people and deaths are growing every day, putting tremendous pressure on social and healthcare systems. Rapid detection of COVID-19 cases is a significant step in fighting this virus as well as releasing pressure from the healthcare system. Objective One of the critical factors behind the rapid spread of the COVID-19 pandemic is the lengthy clinical testing time. Imaging tools, such as chest X-ray (CXR), can speed up the identification process. Therefore, our objective is to develop an automated computer-aided diagnosis (CAD) system for the detection of COVID-19 samples from healthy and pneumonia cases using CXR images. Methods Due to the scarcity of COVID-19 benchmark data, we employed deep transfer learning, examining 15 different pre-trained CNN models to find the most suitable one for this task. Results A total of 860 images (260 COVID-19, 300 healthy, and 300 pneumonia cases) were used to investigate the performance of the proposed algorithm; 70% of the images in each class were used for training, 15% for validation, and the rest for testing. VGG19 obtained the highest classification accuracy, 89.3%, with an average precision, recall, and F1 score of 0.90, 0.89, and 0.90, respectively. Conclusion This study demonstrates the effectiveness of deep transfer learning techniques for the identification of COVID-19 cases using CXR images.

192 citations
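The per-class 70/15/15 split described in the Results can be reproduced as arithmetic. The sketch below mirrors the paper's class counts (260 COVID-19, 300 healthy, 300 pneumonia); the function name and rounding rule are illustrative, not from the paper's code:

```python
# Stratified 70/15/15 split, applied per class so each subset preserves
# the class proportions of the 860-image dataset.
counts = {"covid": 260, "healthy": 300, "pneumonia": 300}

def stratified_split(counts, train=0.70, val=0.15):
    split = {}
    for label, n in counts.items():
        n_train = round(n * train)
        n_val = round(n * val)
        split[label] = (n_train, n_val, n - n_train - n_val)  # rest goes to test
    return split

s = stratified_split(counts)
assert s["covid"] == (182, 39, 39)       # 260 images: 182 train, 39 val, 39 test
assert s["healthy"] == (210, 45, 45)     # 300 images: 210 train, 45 val, 45 test
```

Splitting per class rather than over the pooled 860 images keeps the minority COVID-19 class represented in every subset, which matters when classes are imbalanced.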



Journal ArticleDOI
TL;DR: The threats, security requirements, challenges, and the attack vectors pertinent to IoT networks are reviewed, and a novel paradigm that combines a network-based deployment of IoT architecture through software-defined networking (SDN) is proposed.
Abstract: The Internet of Things (IoT) is transforming everyday life by enabling the control and monitoring of connected smart objects. IoT applications span a broad spectrum of services, including smart cities, homes, cars, manufacturing, e-healthcare, smart control systems, transportation, wearables, farming, and much more. The adoption of these devices is growing exponentially, which has resulted in the generation of a substantial amount of data for processing and analysis. Thus, besides bringing ease to human lives, these devices are susceptible to different threats and security challenges, which not only worry users about adopting them in sensitive environments, such as e-health and smart homes, but also pose hazards for the future advancement of IoT. This article thoroughly reviews the threats, security requirements, challenges, and attack vectors pertinent to IoT networks. Based on a gap analysis, a novel paradigm that combines a network-based deployment of IoT architecture through software-defined networking (SDN) is proposed. This article presents an overview of SDN along with a thorough discussion of SDN-based IoT deployment models, i.e., centralized and decentralized. We further elaborate on SDN-based IoT security solutions to present a comprehensive overview of software-defined security (SDSec) technology. Furthermore, based on the literature, we highlight the core issues that are the main hurdles in unifying all IoT stakeholders on one platform, and a few findings that emphasize a network-based security solution for the IoT paradigm. Finally, some future research directions for SDN-based IoT security technologies are discussed.

Proceedings ArticleDOI
14 Jun 2020
TL;DR: This paper proposes a local structure preserving module that explicitly accounts for the topological semantics of the teacher GCN, and achieves the state-of-the-art knowledge distillation performance for GCN models.
Abstract: Existing knowledge distillation methods focus on convolutional neural networks (CNNs), where input samples such as images lie in a grid domain, and have largely overlooked graph convolutional networks (GCNs) that handle non-grid data. In this paper, we propose, to the best of our knowledge, the first dedicated approach to distilling knowledge from a pre-trained GCN model. To enable the knowledge transfer from the teacher GCN to the student, we propose a local structure preserving module that explicitly accounts for the topological semantics of the teacher. In this module, the local structure information from both the teacher and the student is extracted as distributions, and minimizing the distance between these distributions enables topology-aware knowledge transfer from the teacher, yielding a compact yet high-performance student model. Moreover, the proposed approach is readily extendable to dynamic graph models, where the input graphs for the teacher and the student may differ. We evaluate the proposed method on two different datasets using GCN models of different architectures, and demonstrate that our method achieves state-of-the-art knowledge distillation performance for GCN models.
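The idea of extracting local structure "as distributions" and matching them can be sketched concretely. The softmax-over-distances kernel below is one plausible instantiation in the spirit of the module, not the paper's exact formulation:

```python
import math

def local_structure(center, neighbors):
    """Turn a node's local topology into a distribution: a softmax over
    negative Euclidean distances from the node's embedding to its
    neighbors' embeddings. The exact similarity kernel is an assumption
    here, illustrating the idea rather than reproducing the paper."""
    sims = [-math.dist(center, nb) for nb in neighbors]
    m = max(sims)                                  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in sims]
    z = sum(exps)
    return [e / z for e in exps]

def kl(p, q):
    """KL divergence between two distributions over the same neighbors."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Teacher and student embed the same node slightly differently; matching
# their local-structure distributions (minimizing KL) transfers topology.
teacher = local_structure([0.0, 0.0], [[1.0, 0.0], [0.0, 2.0]])
student = local_structure([0.1, 0.0], [[1.0, 0.0], [0.0, 2.0]])
assert kl(teacher, teacher) == 0.0
assert kl(teacher, student) > 0.0
```

The distillation loss would sum this divergence over all nodes, pushing the student's embedding geometry toward the teacher's without copying its weights.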

Journal ArticleDOI
TL;DR: Examination of positive and negative responses toward older adults in the United States during the pandemic and the consequences for older adults and society finds positive responses can reinforce the value of older adults, improve older adults' mental and physical health, reduce ageism, and improve intergenerational relations, whereas negative responses can have the opposite effects.
Abstract: The disproportionately high rates of coronavirus disease 2019 (COVID-19) health complications and mortality among older adults prompted supportive public responses, such as special senior early shopping hours and penpal programs. Simultaneously, some older adults faced neglect and blatant displays of ageism (e.g., #BoomerRemover) and were considered the lowest priority to receive health care. This article examines positive and negative responses toward older adults in the United States during the pandemic and the consequences for older adults and society using data from the pandemic in the United States (and informed by data from other countries) as well as past theorizing and empirical research on views and treatment of older adults. Specifically, positive responses can reinforce the value of older adults, improve older adults' mental and physical health, reduce ageism, and improve intergenerational relations, whereas negative responses can have the opposite effects. However, positive responses (social distancing to protect older adults from COVID-19 infection) can inadvertently increase loneliness, depression, health problems, and negative stereotyping of older adults (e.g., helpless, weak). Pressing policy issues evident from the treatment of older adults during the pandemic include health care (triaging, elder abuse), employment (layoffs, retirement), and education about ageism, as well as the intersection of ageism with other forms of prejudice (e.g., racism) that cuts across these policies. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: In vitro and in vivo results demonstrate that Au@Rh‐ICG‐CM is able to effectively convert endogenous hydrogen peroxide into oxygen and then elevate the production of tumor‐toxic singlet oxygen to significantly enhance PDT.
Abstract: In treatment of hypoxic tumors, oxygen-dependent photodynamic therapy (PDT) is considerably limited. Herein, a new bimetallic and biphasic Rh-based core-shell nanosystem (Au@Rh-ICG-CM) is developed to address tumor hypoxia while achieving high PDT efficacy. Such porous Au@Rh core-shell nanostructures are expected to exhibit catalase-like activity to efficiently catalyze oxygen generation from endogenous hydrogen peroxide in tumors. Coating Au@Rh nanostructures with tumor cell membrane (CM) enables tumor targeting via homologous binding. As a result of the large pores of Rh shells and the trapping ability of CM, the photosensitizer indocyanine green (ICG) is successfully loaded and retained in the cavity of Au@Rh-CM. Au@Rh-ICG-CM shows good biocompatibility, high tumor accumulation, and superior fluorescence and photoacoustic imaging properties. Both in vitro and in vivo results demonstrate that Au@Rh-ICG-CM is able to effectively convert endogenous hydrogen peroxide into oxygen and then elevate the production of tumor-toxic singlet oxygen to significantly enhance PDT. As noted, the mild photothermal effect of Au@Rh-ICG-CM also improves PDT efficacy. By integrating the superiorities of hypoxia regulation function, tumor accumulation capacity, bimodal imaging, and moderate photothermal effect into a single nanosystem, Au@Rh-ICG-CM can readily serve as a promising nanoplatform for enhanced cancer PDT.

Journal ArticleDOI
03 Apr 2020
TL;DR: The proposed Dynamic Instance Normalization (DIN) provides flexible support for state-of-the-art convolutional operations, and thus triggers novel functionalities, such as uniform-stroke placement for non-natural images and automatic spatial-stroke control.
Abstract: Prior normalization methods rely on affine transformations to produce arbitrary image style transfers, with parameters computed in a pre-defined way. This manually defined nature eventually results in high-cost encoders shared between style and content encoding, making style transfer systems cumbersome to deploy in resource-constrained environments such as mobile devices. In this paper, we propose a new and generalized normalization module, termed Dynamic Instance Normalization (DIN), that allows for flexible and more efficient arbitrary style transfer. Comprising an instance normalization and a dynamic convolution, DIN encodes a style image into learnable convolution parameters, upon which the content image is stylized. Unlike conventional methods that use shared complex encoders to encode content and style, the proposed DIN introduces a sophisticated style encoder, yet comes with a compact and lightweight content encoder for fast inference. Experimental results demonstrate that the proposed approach yields very encouraging results on challenging style patterns and, to the best of our knowledge, for the first time enables arbitrary style transfer with a MobileNet-based lightweight architecture, reducing computational cost by a factor of more than twenty compared to existing approaches. Furthermore, the proposed DIN provides flexible support for state-of-the-art convolutional operations, and thus enables novel functionalities, such as uniform-stroke placement for non-natural images and automatic spatial-stroke control.
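The two halves of DIN, an instance normalization followed by a convolution whose weights come from the style image, can be sketched on a single channel. The 1x1 "dynamic" weight below is a placeholder for the parameters a style encoder would predict, and the whole example is an illustration of the mechanism rather than the paper's implementation:

```python
import math

def instance_norm(channel, eps=1e-5):
    """Per-channel instance normalization: subtract the channel's own mean
    and divide by its standard deviation. This is the first half of DIN."""
    flat = [x for row in channel for x in row]
    mu = sum(flat) / len(flat)
    var = sum((x - mu) ** 2 for x in flat) / len(flat)
    return [[(x - mu) / math.sqrt(var + eps) for x in row] for row in channel]

def dynamic_1x1(channel, weight, bias):
    """Stand-in for the dynamic convolution: in DIN, weight and bias would
    be predicted from the style image by the style encoder."""
    return [[weight * x + bias for x in row] for row in channel]

feat = [[1.0, 3.0], [5.0, 7.0]]          # one 2x2 content feature channel
styled = dynamic_1x1(instance_norm(feat), weight=2.0, bias=0.5)
# After instance norm the channel has zero mean, so the style parameters
# alone determine the output statistics
mean_after = sum(x for r in instance_norm(feat) for x in r) / 4
assert abs(mean_after) < 1e-9
```

Because normalization strips the content's own statistics, the predicted convolution parameters fully control the output style, which is what makes a compact content encoder sufficient.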

Journal ArticleDOI
TL;DR: One-pass multi-task network (OM-Net) as discussed by the authors integrates the separate segmentation tasks into one deep model, which consists of shared parameters to learn joint features, as well as task-specific features to learn discriminative features.
Abstract: Class imbalance has emerged as one of the major challenges for medical image segmentation. The model cascade (MC) strategy, a popular scheme, significantly alleviates the class imbalance issue via running a set of individual deep models for coarse-to-fine segmentation. Despite its outstanding performance, however, this method leads to undesired system complexity and also ignores the correlation among the models. To handle these flaws in the MC approach, we propose in this paper a light-weight deep model, i.e., the One-pass Multi-task Network (OM-Net) to solve class imbalance better than MC does, while requiring only one-pass computation for brain tumor segmentation. First, OM-Net integrates the separate segmentation tasks into one deep model, which consists of shared parameters to learn joint features, as well as task-specific parameters to learn discriminative features. Second, to more effectively optimize OM-Net, we take advantage of the correlation among tasks to design both an online training data transfer strategy and a curriculum learning-based training strategy. Third, we further propose sharing prediction results between tasks, which enables us to design a cross-task guided attention (CGA) module. By following the guidance of the prediction results provided by the previous task, CGA can adaptively recalibrate channel-wise feature responses based on the category-specific statistics. Finally, a simple yet effective post-processing method is introduced to refine the segmentation results of the proposed attention network. Extensive experiments are conducted to demonstrate the effectiveness of the proposed techniques. Most impressively, we achieve state-of-the-art performance on the BraTS 2015 testing set and BraTS 2017 online validation set. Using these proposed approaches, we also won joint third place in the BraTS 2018 challenge among 64 participating teams. The code is publicly available at https://github.com/chenhong-zhou/OM-Net .

Journal ArticleDOI
TL;DR: An improvement of the existing stable election protocol (SEP) that implements a threshold-based cluster head (CH) selection for a heterogeneous network that outperforms SEP and DEEC protocols with an improvement of 300% in network lifetime and 56% in throughput.
Abstract: Wireless sensor networks (WSNs) form a virtual layer in the paradigm of the Internet of Things (IoT), relating information from the physical domain to IoT-driven computational systems. A WSN provides ubiquitous access to location, the status of different entities in the environment, and data acquisition for long-term IoT monitoring. Since energy is a major constraint in the design of a WSN, recent advances have led to the design of various energy-efficient protocols. Routing data involves considerable energy expenditure, and in recent times various heuristic clustering protocols have been proposed for this purpose. This article improves on the existing stable election protocol (SEP) by implementing threshold-based cluster head (CH) selection for a heterogeneous network. The threshold maintains uniform energy distribution between member and CH nodes. The sensor nodes are also categorized into three types, called normal, intermediate, and advanced, depending on their initial energy supply, to distribute the network load evenly. Simulation results show that the proposed scheme outperforms the SEP and DEEC protocols, with improvements of 300% in network lifetime and 56% in throughput.
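SEP-family protocols elect cluster heads with the classic LEACH-style rotating threshold; a quick sketch shows how the threshold guarantees every node serves once per epoch. Scaling the election probability by node class (normal/intermediate/advanced) is how SEP-style heterogeneity enters; the exact weights used by the proposed scheme are not reproduced here:

```python
def ch_threshold(p, r):
    """LEACH/SEP-style cluster-head threshold for round r:
        T(n) = p / (1 - p * (r mod 1/p))
    valid for nodes that have not yet served as CH in the current epoch.
    p is the desired fraction of cluster heads per round; SEP scales p
    according to the node's energy class (normal/intermediate/advanced)."""
    return p / (1 - p * (r % round(1 / p)))

# With p = 0.1, the epoch is 10 rounds long. The threshold starts at p and
# climbs to 1 by the epoch's last round, so every surviving node is
# guaranteed to serve as CH exactly once per epoch.
assert abs(ch_threshold(0.1, 0) - 0.1) < 1e-12
assert abs(ch_threshold(0.1, 9) - 1.0) < 1e-9
```

Each node draws a uniform random number per round and becomes CH when the draw falls below its threshold; rotating the role this way is what spreads the energy cost of routing across the network.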

Journal ArticleDOI
TL;DR: This review presents a comprehensive overview of the BHIA techniques based on ANNs, and categorizes the existing models into classical and deep neural networks for in-depth investigation.
Abstract: Breast cancer is one of the most common and deadliest cancers among women. Since histopathological images contain sufficient phenotypic information, they play an indispensable role in the diagnosis and treatment of breast cancers. To improve the accuracy and objectivity of Breast Histopathological Image Analysis (BHIA), Artificial Neural Network (ANN) approaches are widely used in the segmentation and classification tasks of breast histopathological images. In this review, we present a comprehensive overview of the BHIA techniques based on ANNs. First of all, we categorize the BHIA systems into classical and deep neural networks for in-depth investigation. Then, the relevant studies based on BHIA systems are presented. After that, we analyze the existing models to discover the most suitable algorithms. Finally, publicly accessible datasets, along with their download links, are provided for the convenience of future researchers.

Journal ArticleDOI
TL;DR: In this paper, a review of different results reported by different research groups and proposes new perspectives based on analyzing underlying mechanisms, considering different types of waste glass, including soda-lime, electric, lead, and borosilicate glass.

Proceedings ArticleDOI
18 May 2020
TL;DR: This work proposes SAVIOR, a new hybrid testing framework pioneering a bug-driven principle that outperforms mainstream automated testing techniques, including state-of-the-art hybrid testing systems driven by code coverage.
Abstract: Hybrid testing combines fuzz testing and concolic execution. It leverages fuzz testing to test easy-to-reach code regions and uses concolic execution to explore code blocks guarded by complex branch conditions. As a result, hybrid testing is able to reach deeper into program state space than fuzz testing or concolic execution alone. Recently, hybrid testing has seen significant advancement. However, its code-coverage-centric design is inefficient for vulnerability detection. First, it blindly selects seeds for concolic execution and aims to explore new code continuously. However, as statistics show, a large portion of the explored code is often bug-free. Therefore, giving equal attention to every part of the code during hybrid testing is a non-optimal strategy; it slows down the detection of real vulnerabilities by over 43%. Second, classic hybrid testing quickly moves on after reaching a chunk of code, rather than examining the hidden defects inside, so it may frequently miss subtle vulnerabilities even though it has already explored the vulnerable code paths. We propose SAVIOR, a new hybrid testing framework pioneering a bug-driven principle. Unlike existing hybrid testing tools, SAVIOR prioritizes the concolic execution of seeds that are likely to uncover more vulnerabilities. Moreover, SAVIOR verifies all vulnerable program locations along the executing program path. By modeling faulty situations using SMT constraints, SAVIOR reasons about the feasibility of vulnerabilities and generates concrete test cases as proofs. Our evaluation shows that the bug-driven approach outperforms mainstream automated testing techniques, including state-of-the-art hybrid testing systems driven by code coverage. On average, SAVIOR detects vulnerabilities 43.4% faster than DRILLER and 44.3% faster than QSYM, leading to the discovery of 88 and 76 more unique bugs, respectively.
According to the evaluation on 11 well fuzzed benchmark programs, within the first 24 hours, SAVIOR triggers 481 UBSAN violations, among which 243 are real bugs.

Journal ArticleDOI
TL;DR: It is shown that Fe:MoS2 monolayers remain magnetized even at ambient conditions, manifesting ferromagnetism at room temperature, which is highly desirable for practical spintronics applications.
Abstract: Two-dimensional semiconductors, including transition metal dichalcogenides, are of interest in electronics and photonics but remain nonmagnetic in their intrinsic form. Previous efforts to form two-dimensional dilute magnetic semiconductors utilized extrinsic doping techniques or bulk crystal growth, detrimentally affecting uniformity, scalability, or Curie temperature. Here, we demonstrate an in situ substitutional doping of Fe atoms into MoS2 monolayers in the chemical vapor deposition growth. The iron atoms substitute molybdenum sites in MoS2 crystals, as confirmed by transmission electron microscopy and Raman signatures. We uncover an Fe-related spectral transition of Fe:MoS2 monolayers that appears at 2.28 eV above the pristine bandgap and displays pronounced ferromagnetic hysteresis. The microscopic origin is further corroborated by density functional theory calculations of dipole-allowed transitions in Fe:MoS2. Using spatially integrating magnetization measurements and spatially resolving nitrogen-vacancy center magnetometry, we show that Fe:MoS2 monolayers remain magnetized even at ambient conditions, manifesting ferromagnetism at room temperature. Ferromagnetism with a Curie temperature above room temperature in 2D materials is highly desirable for practical spintronics applications. Here, the authors demonstrate such phenomenon in monolayer MoS2 via in situ iron-doping and measured local magnetic field strength up to 0.5 ± 0.1 mT.

Proceedings ArticleDOI
10 Dec 2020
TL;DR: In this article, the authors proposed an asynchronous online federated learning (ASO-Fed) framework, where the edge devices perform online learning with continuous streaming local data and a central server aggregates model parameters from clients.
Abstract: Federated learning (FL) is a machine learning paradigm where a shared central model is learned across distributed devices while the training data remains on these devices. Federated Averaging (FedAvg) is the leading optimization method for training non-convex models in this setting with a synchronized protocol. However, the assumptions made by FedAvg are not realistic given the heterogeneity of devices. First, the volume and distribution of collected data vary in the training process due to different sampling rates of edge devices. Second, the edge devices themselves also vary in latency and system configurations, such as memory, processor speed, and power requirements. This leads to vastly different computation times. Third, availability issues at edge devices can lead to a lack of contribution from specific edge devices to the federated model. In this paper, we present an Asynchronous Online Federated Learning (ASO-Fed) framework, where the edge devices perform online learning with continuous streaming local data and a central server aggregates model parameters from clients. Our framework updates the central model in an asynchronous manner to tackle the challenges associated with both varying computational loads at heterogeneous edge devices and edge devices that lag behind or drop out. We perform extensive experiments on a benchmark image dataset and three real-world datasets with non-IID streaming data. The results demonstrate that ASO-Fed converges quickly and maintains good prediction performance.
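The asynchronous server update can be sketched in a few lines. This is an illustrative simplification, not the paper's exact algorithm: whenever any single client's update arrives, the server mixes it into the central model immediately, and the assumed mixing rate decays with the update's staleness so that lagging devices cannot drag the model backward.

```python
# Minimal sketch of asynchronous federated aggregation (illustrative;
# the 1/(1+staleness) decay and base_mix value are assumptions, not
# the ASO-Fed paper's exact update rule).
def server_update(global_w, client_w, staleness, base_mix=0.5):
    """Mix one client's weights into the global model as they arrive.

    staleness -- number of global updates since this client last pulled
                 the model; stale contributions are discounted.
    """
    alpha = base_mix / (1.0 + staleness)
    return [(1 - alpha) * g + alpha * c for g, c in zip(global_w, client_w)]

w = [0.0, 0.0]
w = server_update(w, [1.0, 2.0], staleness=0)  # fresh client, alpha = 0.5
w = server_update(w, [1.0, 2.0], staleness=4)  # straggler, alpha = 0.1
print([round(x, 2) for x in w])
```

Unlike synchronous FedAvg, no round barrier exists here: the server never waits for slow or unavailable devices, which is the core of the asynchrony described above.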

Journal ArticleDOI
TL;DR: In the proposed work, the elliptic Galois cryptography protocol is introduced and discussed; a cryptography technique is used to encrypt confidential data from different medical sources, and the encrypted data is embedded into a low-complexity image.
Abstract: Internet of Things (IoT) is a domain in which the transfer of data takes place every single second. The security of these data is a challenging task; however, security challenges can be mitigated with cryptography and steganography techniques. These techniques are crucial when dealing with user authentication and data privacy. In the proposed work, the elliptic Galois cryptography protocol is introduced and discussed. In this protocol, a cryptography technique is used to encrypt confidential data from different medical sources. Next, a Matrix XOR encoding steganography technique is used to embed the encrypted data into a low-complexity image. The proposed work also uses an optimization algorithm called Adaptive Firefly to optimize the selection of cover blocks within the image. Based on the results, various parameters are evaluated and compared with the existing techniques. Finally, the data hidden in the image is recovered and then decrypted.
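The embed/recover round trip can be sketched as follows. This is an illustrative XOR-keyed least-significant-bit scheme under assumed conventions, not the paper's exact Matrix XOR encoding or its Adaptive Firefly cover-block selection: each ciphertext bit is XORed with a key bit and written into the LSB of one cover byte, and extraction reverses the XOR.

```python
# Illustrative sketch only (not the paper's exact scheme): hide one
# ciphertext bit per cover byte by setting the byte's least significant
# bit to (bit XOR key_bit); recovery applies the same XOR again.
def embed(cover, bits, key):
    stego = cover[:]
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | (b ^ key[i % len(key)])
    return stego

def extract(stego, n, key):
    return [(stego[i] & 1) ^ key[i % len(key)] for i in range(n)]

cover = [52, 200, 37, 94, 18, 133]   # toy "pixel" bytes
key = [1, 0]                          # toy XOR key matrix, flattened
secret = [1, 0, 1, 1]                 # already-encrypted bits
stego = embed(cover, secret, key)
print(extract(stego, len(secret), key))  # [1, 0, 1, 1]
```

Because only LSBs change, each cover byte moves by at most 1, which is why such embeddings keep the image visually unchanged.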

Journal ArticleDOI
TL;DR: The results show that the response time of the proposed system with blockchain technology is almost 50% shorter than that of conventional techniques, and the cost of storage is about 20% lower for the system with blockchain than for the existing techniques.
Abstract: Health record maintenance and sharing are among the essential tasks in the healthcare system. In this system, loss of confidentiality has a passive impact on the security of a health record, whereas loss of integrity can have a serious impact, such as the loss of a patient’s life. Therefore, it is of prime importance to secure electronic health records. Health records are represented by Fast Healthcare Interoperability Resources standards and managed by the Health Level Seven International healthcare standards organization. Centralized storage of health data is an attractive target for cyber-attacks, and constant monitoring of patient records is challenging. Therefore, it is necessary to design a cloud-based system that helps to ensure authentication and also provides integrity for health records. The keyless signature infrastructure used in the proposed system for ensuring the secrecy of digital signatures also covers aspects of authentication. Furthermore, data integrity is managed by the proposed blockchain technology. The performance of the proposed framework is evaluated by comparing parameters such as the average time, size, and cost of data storage and retrieval of the blockchain technology with conventional data storage techniques. The results show that the response time of the proposed system with blockchain technology is almost 50% shorter than that of the conventional techniques. Also, the cost of storage is about 20% lower for the system with blockchain in comparison with the existing techniques.
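The integrity property rests on hash chaining: each record entry stores the hash of its predecessor, so tampering with any earlier record invalidates every later link. A minimal sketch, with illustrative field names rather than the FHIR schema or the keyless signature infrastructure used in the paper:

```python
# Minimal hash-chain sketch of the integrity mechanism (field names are
# illustrative, not the FHIR resource schema).
import hashlib
import json

def make_block(prev_hash, record):
    body = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return {"prev": prev_hash, "record": record,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify_chain(chain):
    for i, blk in enumerate(chain):
        body = json.dumps({"prev": blk["prev"], "record": blk["record"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != blk["hash"]:
            return False          # block contents no longer match its hash
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False          # link to the previous block is broken
    return True

chain = [make_block("0" * 64, {"patient": "A", "bp": "120/80"})]
chain.append(make_block(chain[-1]["hash"], {"patient": "A", "bp": "118/76"}))
print(verify_chain(chain))            # True
chain[0]["record"]["bp"] = "200/99"   # tamper with an earlier record
print(verify_chain(chain))            # False
```

This is why loss of integrity is detectable in such a design: a modified record can no longer reproduce the hash that later blocks commit to.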

Journal ArticleDOI
TL;DR: The decihertz band is uniquely suited to observation of intermediate-mass ($\sim 10^2-10^4$ M$_\odot$) black holes, which may form the missing link between stellar-mass and massive black holes.
Abstract: The gravitational-wave astronomical revolution began in 2015 with LIGO's observation of the coalescence of two stellar-mass black holes. Over the coming decades, ground-based detectors like LIGO will extend their reach, discovering thousands of stellar-mass binaries. In the 2030s, the space-based LISA will enable gravitational-wave observations of the massive black holes in galactic centres. Between LISA and ground-based observatories lies the unexplored decihertz gravitational-wave frequency band. Here, we propose a Decihertz Observatory to cover this band and complement observations made by other gravitational-wave observatories. The decihertz band is uniquely suited to observation of intermediate-mass ($\sim 10^2-10^4$ M$_\odot$) black holes, which may form the missing link between stellar-mass and massive black holes, offering a unique opportunity to measure their properties. Decihertz observations will be able to detect stellar-mass binaries days to years before they merge and are observed by ground-based detectors, providing early warning of nearby binary neutron star mergers, and enabling measurements of the eccentricity of binary black holes, providing revealing insights into their formation. Observing decihertz gravitational waves also opens the possibility of testing fundamental physics in a new laboratory, permitting unique tests of general relativity and the Standard Model of particle physics. Overall, a Decihertz Observatory will answer key questions about how black holes form and evolve across cosmic time, open new avenues for multimessenger astronomy, and advance our understanding of gravitation, particle physics and cosmology.
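A back-of-the-envelope calculation shows why this band suits intermediate-mass black holes. Using the standard Schwarzschild innermost-stable-circular-orbit (ISCO) relation (a textbook estimate, not a result from the paper), the gravitational-wave frequency at the end of inspiral scales inversely with total mass, landing $10^3$-$10^4$ M$_\odot$ mergers in or just above the decihertz band:

```python
# Textbook estimate (not from the paper): GW frequency at the
# Schwarzschild ISCO for a nonspinning binary of total mass M,
# f_GW = c^3 / (6^(3/2) * pi * G * M)  ~ 4.4 kHz * (Msun / M).
import math

G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def f_isco_hz(total_mass_msun):
    """GW frequency (twice the orbital frequency) at the ISCO."""
    m = total_mass_msun * M_SUN
    return C**3 / (6**1.5 * math.pi * G * m)

for mass in (1e2, 1e3, 1e4):
    print(f"{mass:.0e} Msun -> {f_isco_hz(mass):.2f} Hz")
```

The output spans roughly 44 Hz down to 0.4 Hz across $10^2$-$10^4$ M$_\odot$, bracketing the gap between LISA's millihertz band and the ground-based detectors' tens of hertz.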

Journal ArticleDOI
TL;DR: In this article, the authors show that the band gap of reduced graphene oxide (rGO) can be increased and, importantly, tuned from 0.264 to 0.786 eV by controlling the surface concentration of epoxide groups using a developed mild oxidation treatment with nitric acid, HNO3.
Abstract: Reduced graphene oxide (rGO) is a material with a unique set of electrical and physical properties. The potential of rGO for numerous semiconductor applications, however, has not been fully realized because the dependence of its band gap on the chemical structure and, specifically, on the presence of terminal functional groups has not been systematically studied and, as a result, there are no efficient methods for tuning the band gap. Here we report that the band gap of rGO can be increased and, importantly, tuned from 0.264 to 0.786 eV by controlling the surface concentration of epoxide groups using a newly developed mild oxidation treatment with nitric acid, HNO3. Increasing the concentration of the HNO3 treatment solution gradually increases the surface concentration of epoxides without introducing microscopic defects or d-spacing changes and thus produces functionalized rGO materials with desirable properties for semiconductor applications. A combination of experimental measurements using infrared spectroscopy, ultraviolet-visible spectroscopy, X-ray diffraction, X-ray photoelectron spectroscopy, scanning electron microscopy and density functional theory calculations demonstrates that epoxides are unique among oxygen-containing functional groups in allowing the band gap to be tuned. Unlike epoxides, other oxygen-containing functional groups are not effective: hydroxyls do not change the band gap, while carbonyls and carboxyls break the hexagonal carbon-ring structure of rGO.

Journal ArticleDOI
TL;DR: In this paper, a low-cost, bi-stable piezoelectric energy harvester is proposed, analyzed, and experimentally tested for the purpose of broadband energy harvesting.

Proceedings Article
01 Jan 2020
TL;DR: This paper introduces a novel graph convolutional network (GCN), termed the factorizable graph convolutional network (FactorGCN), that explicitly disentangles the intertwined relations encoded in a graph.
Abstract: Graphs have been widely adopted to denote structural connections between entities. The relations are in many cases heterogeneous but entangled together and denoted merely as a single edge between a pair of nodes. For example, in a social network graph, users in different latent relationships, such as friends and colleagues, are usually connected via a bare edge that conceals such intrinsic connections. In this paper, we introduce a novel graph convolutional network (GCN), termed the factorizable graph convolutional network (FactorGCN), that explicitly disentangles such intertwined relations encoded in a graph. FactorGCN takes a simple graph as input and disentangles it into several factorized graphs, each of which represents a latent and disentangled relation among nodes. The features of the nodes are then aggregated separately in each factorized latent space to produce disentangled features, which further leads to better performance on downstream tasks. We evaluate the proposed FactorGCN both qualitatively and quantitatively on synthetic and real-world datasets, and demonstrate that it yields truly encouraging results in terms of both disentangling and feature aggregation. Code is publicly available at this https URL.
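The disentangling step can be shown in toy form. This is an illustrative simplification with made-up edge scores, not the trained FactorGCN: a softmax over per-edge factor logits turns one input graph into several factorized graphs, each holding that edge with a different weight, after which node features would be aggregated per factor.

```python
# Toy sketch of edge factorization (illustrative; in FactorGCN the
# per-edge factor logits are learned, here they are hand-picked).
import math

def factorize_edges(edge_scores, n_factors):
    """Split one graph into n_factors weighted factor graphs.

    edge_scores -- dict mapping edge (u, v) -> list of factor logits
    """
    graphs = [dict() for _ in range(n_factors)]
    for edge, logits in edge_scores.items():
        z = sum(math.exp(l) for l in logits)      # softmax normalizer
        for k in range(n_factors):
            graphs[k][edge] = math.exp(logits[k]) / z
    return graphs

# Edge (0,1) leans toward factor 0 ("friends"), edge (1,2) toward
# factor 1 ("colleagues"); the bare edges become soft factor members.
scores = {(0, 1): [2.0, -2.0], (1, 2): [-2.0, 2.0]}
g0, g1 = factorize_edges(scores, 2)
print(round(g0[(0, 1)], 2), round(g1[(1, 2)], 2))
```

Because the softmax weights for each edge sum to one across factors, the factor graphs are a soft partition of the original edge set rather than a hard split.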

Journal ArticleDOI
TL;DR: This survey provides a comprehensive study of the state-of-the-art approaches based on deep learning for the analysis of cervical cytology images and introduces deep learning and its simplified architectures that have been used in this field.
Abstract: Cervical cancer is one of the most common and deadliest cancers among women. Despite that, this cancer is entirely treatable if it is detected at a precancerous stage. The Pap smear test is the most extensively performed screening method for early detection of cervical cancer. However, this hand-operated screening approach suffers from a high false-positive rate because of human error. To improve on the accuracy of manual screening practice, computer-aided diagnosis methods based on deep learning have been widely developed to segment and classify cervical cytology images automatically. In this survey, we provide a comprehensive study of the state-of-the-art approaches based on deep learning for the analysis of cervical cytology images. Firstly, we introduce deep learning and its simplified architectures that have been used in this field. Secondly, we discuss the publicly available cervical cytopathology datasets and evaluation metrics for segmentation and classification tasks. Then, a thorough review of the recent development of deep learning for the segmentation and classification of cervical cytology images is presented. Finally, we investigate the existing methodology along with the most suitable techniques for the analysis of Pap smear cells.