
Showing papers on "Personal computer published in 2019"


Journal ArticleDOI
TL;DR: Harmony, for the integration of single-cell transcriptomic data, identifies broad and fine-grained populations, scales to large datasets, and can integrate sequencing- and imaging-based data.
Abstract: The emerging diversity of single-cell RNA-seq datasets allows for the full transcriptional characterization of cell types across a wide variety of biological and clinical conditions. However, it is challenging to analyze them together, particularly when datasets are assayed with different technologies, because biological and technical differences are interspersed. We present Harmony ( https://github.com/immunogenomics/harmony ), an algorithm that projects cells into a shared embedding in which cells group by cell type rather than dataset-specific conditions. Harmony simultaneously accounts for multiple experimental and biological factors. In six analyses, we demonstrate the superior performance of Harmony to previously published algorithms while requiring fewer computational resources. Harmony enables the integration of ~10^6 cells on a personal computer. We apply Harmony to peripheral blood mononuclear cells from datasets with large experimental differences, five studies of pancreatic islet cells, mouse embryogenesis datasets and the integration of scRNA-seq with spatial transcriptomics data. Harmony, for the integration of single-cell transcriptomic data, identifies broad and fine-grained populations, scales to large datasets, and can integrate sequencing- and imaging-based data.

2,459 citations


Journal ArticleDOI
TL;DR: NGPhylogeny.fr was developed to be more flexible in terms of tools and workflows, easily installable, and more scalable; its workflows are managed and run by an underlying Galaxy workflow system, which makes them more scalable in terms of number of jobs and size of data.
Abstract: Phylogeny.fr, created in 2008, has been designed to facilitate the execution of phylogenetic workflows, and is nowadays widely used. However, since its development, user needs have evolved, new tools and workflows have been published, and the number of jobs has increased dramatically, thus promoting new practices, which motivated its refactoring. We developed NGPhylogeny.fr to be more flexible in terms of tools and workflows, easily installable, and more scalable. It integrates numerous tools in their latest version (e.g. TNT, FastME, MrBayes, etc.) as well as new ones designed in the last ten years (e.g. PhyML, SMS, FastTree, trimAl, BOOSTER, etc.). These tools cover a large range of usage (sequence searching, multiple sequence alignment, model selection, tree inference and tree drawing) and a large panel of standard methods (distance, parsimony, maximum likelihood and Bayesian). They are integrated in workflows, which have been already configured ('One click'), can be customized ('Advanced'), or are built from scratch ('A la carte'). Workflows are managed and run by an underlying Galaxy workflow system, which makes workflows more scalable in terms of number of jobs and size of data. NGPhylogeny.fr is deployable on any server or personal computer, and is freely accessible at https://ngphylogeny.fr.

391 citations


Journal ArticleDOI
TL;DR: It is shown that, by using a powerful deep learning technique, the authors can predict new folds much more accurately than before, even with only a personal computer, and can accurately predict the inter-residue distance distribution of a protein even for proteins with ∼60 sequence homologs.
Abstract: Direct coupling analysis (DCA) for protein folding has made very good progress, but it is not effective for proteins that lack many sequence homologs, even coupled with time-consuming conformation sampling with fragments. We show that we can accurately predict the inter-residue distance distribution of a protein by deep learning, even for proteins with ∼60 sequence homologs. Using only the geometric constraints given by the resulting distance matrix we may construct 3D models without involving extensive conformation sampling. Our method successfully folded 21 of the 37 CASP12 hard targets with a median family size of 58 effective sequence homologs within 4 h on a Linux computer with 20 central processing units. In contrast, DCA-predicted contacts cannot be used to fold any of these hard targets in the absence of extensive conformation sampling, and the best CASP12 group folded only 11 of them by integrating DCA-predicted contacts into fragment-based conformation sampling. Rigorous experimental validation in CASP13 shows that our distance-based folding server successfully folded 17 of 32 hard targets (with a median family size of 36 sequence homologs) and obtained 70% precision on the top L/5 long-range predicted contacts. The latest experimental validation in CAMEO shows that our server predicted correct folds for 2 membrane proteins while all of the other servers failed. These results demonstrate that it is now feasible to predict the correct fold for many more proteins that lack similar structures in the Protein Data Bank, even on a personal computer.

319 citations


Journal ArticleDOI
Md. Zia Uddin
TL;DR: A wearable sensor-based system is proposed for activity prediction using a Recurrent Neural Network (RNN) on an edge device, and the experimental results show that the proposed approach outperforms other traditional methods.

120 citations


Journal ArticleDOI
TL;DR: The Open Global Glacier Model (OGGM) as discussed by the authors is a modular and open-source numerical model framework for simulating past and future change of any mountain glacier in the world.
Abstract: Despite their importance for sea-level rise, seasonal water availability, and as a source of geohazards, mountain glaciers are one of the few remaining subsystems of the global climate system for which no globally applicable, open source, community-driven model exists. Here we present the Open Global Glacier Model (OGGM), developed to provide a modular and open-source numerical model framework for simulating past and future change of any glacier in the world. The modeling chain comprises data downloading tools (glacier outlines, topography, climate, validation data), a preprocessing module, a mass-balance model, a distributed ice thickness estimation model, and an ice-flow model. The monthly mass balance is obtained from gridded climate data and a temperature index melt model. To our knowledge, OGGM is the first global model to explicitly simulate glacier dynamics: the model relies on the shallow-ice approximation to compute the depth-integrated flux of ice along multiple connected flow lines. In this paper, we describe and illustrate each processing step by applying the model to a selection of glaciers before running global simulations under idealized climate forcings. Even without an in-depth calibration, the model shows very realistic behavior. We are able to reproduce earlier estimates of global glacier volume by varying the ice dynamical parameters within a range of plausible values. At the same time, the increased complexity of OGGM compared to other prevalent global glacier models comes at a reasonable computational cost: several dozen glaciers can be simulated on a personal computer, whereas global simulations realized in a supercomputing environment take up to a few hours per century. Thanks to the modular framework, modules of various complexity can be added to the code base, which allows for new kinds of model intercomparison studies in a controlled environment. Future developments will add new physical processes to the model as well as automated calibration tools. Extensions or alternative parameterizations can be easily added by the community thanks to comprehensive documentation. OGGM spans a wide range of applications, from ice–climate interaction studies at millennial timescales to estimates of the contribution of glaciers to past and future sea-level change. It has the potential to become a self-sustained community-driven model for global and regional glacier evolution.
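The mass-balance step described above (gridded monthly climate plus a temperature-index melt model) can be made concrete in a few lines of Python. This is a generic sketch of a temperature-index scheme, not OGGM's implementation; the melt factor, temperature thresholds, and month length are placeholder assumptions.

```python
import numpy as np

def monthly_mass_balance(temp_c, precip_mm, melt_factor=6.5, t_melt=0.0, t_solid=2.0):
    """Toy temperature-index mass balance (mm w.e. per month).

    temp_c, precip_mm : arrays of monthly near-surface temperature (deg C) and total
    precipitation (mm). Accumulation is precipitation falling below `t_solid`;
    ablation is melt_factor * positive degrees above `t_melt` per day.
    All parameter values are illustrative, not OGGM defaults.
    """
    temp_c = np.asarray(temp_c, dtype=float)
    precip_mm = np.asarray(precip_mm, dtype=float)
    accumulation = np.where(temp_c <= t_solid, precip_mm, 0.0)
    ablation = melt_factor * np.maximum(temp_c - t_melt, 0.0) * 30.0  # ~30-day month
    return accumulation - ablation

# Example: a cold accumulation season followed by a warm melt season
temps = [-8, -5, -2, 1, 4, 8, 10, 9, 5, 1, -3, -6]
precs = [90, 80, 70, 60, 50, 40, 30, 40, 60, 70, 80, 90]
print(monthly_mass_balance(temps, precs).sum(), "mm w.e. over the year")
```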

115 citations


Posted Content
TL;DR: A capsule network that can detect various kinds of attacks, from presentation attacks using printed images and replayed videos to attacks using fake videos created using deep learning, uses many fewer parameters than traditional convolutional neural networks with similar performance.
Abstract: The revolution in computer hardware, especially in graphics processing units and tensor processing units, has enabled significant advances in computer graphics and artificial intelligence algorithms. In addition to their many beneficial applications in daily life and business, computer-generated/manipulated images and videos can be used for malicious purposes that violate security systems, privacy, and social trust. The deepfake phenomenon and its variations enable a normal user to use his or her personal computer to easily create fake videos of anybody from a short real online video. Several countermeasures have been introduced to deal with attacks using such videos. However, most of them are targeted at certain domains and are ineffective when applied to other domains or new attacks. In this paper, we introduce a capsule network that can detect various kinds of attacks, from presentation attacks using printed images and replayed videos to attacks using fake videos created using deep learning. It uses many fewer parameters than traditional convolutional neural networks with similar performance. Moreover, we explain, for the first time ever in the literature, the theory behind the application of capsule networks to the forensics problem through detailed analysis and visualization.

109 citations


Journal ArticleDOI
20 Jun 2019-Sensors
TL;DR: A prototype model of a smart digital-stethoscope system is proposed to monitor a patient's heart sounds and diagnose any abnormality in real time, and is tested to improve classification accuracy.
Abstract: One of the major causes of death all over the world is heart disease or cardiac dysfunction. These diseases can be identified easily from variations in the sound produced by heart activity. Such sophisticated auscultation requires substantial clinical experience and concentrated listening skills. Therefore, there is an unmet need for a portable system for the early detection of cardiac illnesses. This paper proposes a prototype model of a smart digital-stethoscope system to monitor a patient's heart sounds and diagnose any abnormality in real time. This system consists of two subsystems that communicate wirelessly using Bluetooth low energy technology: a portable digital stethoscope subsystem and a computer-based decision-making subsystem. The portable subsystem captures the heart sounds of the patient, filters and digitizes them, and sends them wirelessly to a personal computer for visualization and further processing to decide whether the heart sounds are normal or abnormal. Twenty-seven time-domain, frequency-domain, and Mel frequency cepstral coefficient (MFCC) features extracted from a public database were used to identify the best-performing algorithm for classifying abnormal and normal heart sounds (HS). Hyperparameter optimization, both with and without a feature reduction method, was tested to improve accuracy. The cost-adjusted optimized ensemble algorithm achieves 97% and 88% accuracy in classifying abnormal and normal HS, respectively.
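The classification stage described above (time-domain, frequency-domain, and MFCC features feeding an ensemble classifier) can be approximated with off-the-shelf Python tools. This is a hedged sketch only: the file names and labels are hypothetical placeholders, the feature set is reduced, and the paper's exact 27 features and cost-adjusted ensemble are not reproduced.

```python
import numpy as np
import librosa
from sklearn.ensemble import GradientBoostingClassifier

def heart_sound_features(path, sr=2000):
    """Extract a small time/frequency/MFCC feature vector from one recording."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)  # 13 MFCC means
    time_feats = [y.mean(), y.std(), np.abs(y).max()]                # simple t-domain stats
    spectrum = np.abs(np.fft.rfft(y))
    freq_feats = [spectrum.mean(), spectrum.std()]                   # simple f-domain stats
    return np.concatenate([time_feats, freq_feats, mfcc])

# `recordings` and `labels` (0 = normal, 1 = abnormal) are hypothetical placeholders
recordings = ["a0001.wav", "a0002.wav"]
labels = [0, 1]
X = np.vstack([heart_sound_features(p) for p in recordings])
clf = GradientBoostingClassifier().fit(X, labels)
print(clf.predict(X))   # 0 = normal, 1 = abnormal
```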

93 citations


Journal ArticleDOI
TL;DR: The results show that MySignals is successfully interfaced with the ECG, temperature, oxygen saturation, and pulse rate sensors; the communication with the hyper-terminal program using LoRa has been implemented, and an IoT based healthcare system is being developed on the MySignals platform with the expected results obtained from the sensors.
Abstract: The Internet of Things (IoT) based healthcare system now stands out among all other IoT applications because of its potential. Supporting sensors integrated with IoT healthcare can effectively gather and analyze patients' physical health data, which has made IoT based healthcare widely acceptable. A set of challenges, including the continuous presence of healthcare professionals and staff as well as proper amenities in remote areas during emergencies, needs to be addressed to develop a flexible IoT based healthcare system. Besides that, human-entered data are not as reliable as automatically generated data. The development of an IoT based health monitoring system allows personalized treatment in certain circumstances, which helps to reduce healthcare cost and wastage with continuously improving outcomes. We present an IoT based health monitoring system using the MySignals development shield with a LoRa (low-power, long-range) wireless network system. An electrocardiogram (ECG) sensor, body temperature sensor, pulse rate sensor, and oxygen saturation sensor have been used with MySignals and LoRa. The performance and effectiveness of the sensors and wireless platform devices are also analyzed in this paper by applying physiological data analysis and statistical analysis. MySignals enables the stated sensors to gather physical data. The aim is to transmit the gathered data from MySignals to a personal computer by implementing a wireless system with LoRa. The results show that MySignals is successfully interfaced with the ECG, temperature, oxygen saturation, and pulse rate sensors. The communication with the hyper-terminal program using LoRa has been implemented, and an IoT based healthcare system is being developed on the MySignals platform with the expected results obtained from the sensors.

85 citations


Journal ArticleDOI
TL;DR: A self-powered band that can recognize human identity through gait patterns, achieved by detecting muscle activity, is reported; it opens new frontiers for the development of self-powered electronics and inspires new thoughts in human-machine interfaces.

78 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a review of the ingredients and recipes required for constructing models and a new modeling strategy based on iterative thermodynamic models, integrated with quantitative compositional mapping.
Abstract: The astonishing progress of personal computer technology in the past 30 years as well as the availability of thermodynamic data and modeling programs have revolutionized our ability to investigate and quantify metamorphic processes. Equilibrium thermodynamics has played a central role in this revolution, providing simultaneously a physico-chemical framework and efficient modeling strategies to calculate mineral stability relations in the Earth’s lithosphere (and beyond) as well as thermobarometric results. This Perspectives contribution provides a review of the ingredients and recipes required for constructing models. A fundamental requirement to perform thermodynamic modeling is an internally consistent database containing standard state properties and activity–composition models of pure minerals, solid solutions, and fluids. We demonstrate how important internal consistency is to this database, and show some of the advantages and pitfalls of the two main modeling strategies (inverse and forward modeling). Both techniques are commonly applied to obtain thermobarometric estimates; that is, to derive P–T (pressure–temperature) information to quantify the conditions of metamorphism. In the last section, we describe a new modeling strategy based on iterative thermodynamic models, integrated with quantitative compositional mapping. This technique provides a powerful alternative to traditional modeling tools and permits use of local bulk compositions for testing the assumption of local equilibrium in rocks that were not fully re-equilibrated during their metamorphic history. We argue that this is the case for most natural samples, even at high-temperature conditions, and that this natural complexity must be taken into consideration when applying equilibrium models.

76 citations


Journal ArticleDOI
TL;DR: A subtype discovery tool called PINSPlus is developed that is robust against noise and unstable quantitative assays, able to integrate multiple types of omics data in a single analysis, and dramatically superior to established approaches in identifying known subtypes and novel subgroups with significant survival differences.
Abstract: Summary: Since cancer is a heterogeneous disease, tumor subtyping is crucial for improved treatment and prognosis. We have developed a subtype discovery tool, called PINSPlus, that is: (i) robust against noise and unstable quantitative assays, (ii) able to integrate multiple types of omics data in a single analysis and (iii) dramatically superior to established approaches in identifying known subtypes and novel subgroups with significant survival differences. Our validation on 12,158 samples from 44 datasets shows that PINSPlus vastly outperforms other approaches. The software is easy-to-use and can partition hundreds of patients in a few minutes on a personal computer. Availability and implementation: The package is available at https://cran.r-project.org/package=PINSPlus. Data and R script used in this manuscript are available at https://bioinformatics.cse.unr.edu/software/PINSPlus/. Supplementary information: Supplementary data are available at Bioinformatics online.

Journal ArticleDOI
TL;DR: This work introduces a non-enzymatic metal oxide glucose sensor that functions in neutral fluids by electronically inducing a reversible and localized pH change and demonstrates glucose monitoring at physiologically relevant levels in neutral fluids mimicking sweat, and wireless communication with a personal computer via an integrated circuit board.
Abstract: Continuous glucose monitoring from sweat and tears can improve the quality of life of diabetic patients and provide data for more accurate diagnosis and treatment. Current continuous glucose sensors use enzymes with a one-to-two week lifespan, which forces periodic replacement. Metal oxide sensors are an alternative to enzymatic sensors with a longer lifetime. However, metal oxide sensors do not operate in sweat and tears because they function at high pH (pH > 10), and sweat and tears are neutral (pH = 7). Here, we introduce a non-enzymatic metal oxide glucose sensor that functions in neutral fluids by electronically inducing a reversible and localized pH change. We demonstrate glucose monitoring at physiologically relevant levels in neutral fluids mimicking sweat, and wireless communication with a personal computer via an integrated circuit board.

Journal ArticleDOI
TL;DR: An automated landmark-predicting system based on a deep learning neural network is presented; angles and lengths in cephalometric analysis predicted by the neural network were not statistically different from those calculated from manually plotted points.
Abstract: Background: Cephalometric analysis has long been, and still is, one of the most important tools in evaluating the craniomaxillofacial skeletal profile. To perform this, manual tracing of x-ray film and plotting of landmarks have been required. This procedure is time-consuming and demands expertise. In recent years, computerized cephalometric systems have been introduced; however, tracing and plotting still have to be done on the monitor display. Artificial intelligence is developing rapidly, and deep learning is one of its most rapidly evolving areas. The authors built an automated landmark-predicting system based on a deep learning neural network. Methods: On a personal desktop computer, a convolutional network was built for regression analysis of cephalometric landmarks' coordinate values. Lateral cephalogram images were gathered through the internet and 219 images were obtained. Ten skeletal cephalometric landmarks were manually plotted and their coordinate values were listed. The images were randomly divided into 153 training images and 66 testing images. Training images were expanded 51-fold. The network was trained with the expanded training images. With the testing images, landmarks were predicted by the network. Prediction errors from manually plotted points were evaluated. Results: Average and median prediction errors were 17.02 and 16.22 pixels. Angles and lengths in cephalometric analysis predicted by the neural network were not statistically different from those calculated from manually plotted points. Conclusion: Despite the variety of image quality, using cephalogram images from the internet is a feasible approach for landmark prediction.

Journal ArticleDOI
TL;DR: A wearable electromyographic gesture recognition system based on the hyperdimensional computing paradigm and running on a programmable parallel ultra-low-power (PULP) platform is presented; efficient on-chip training leads to a fully embedded implementation with no need to perform any offline training on a personal computer.
Abstract: This paper presents a wearable electromyographic gesture recognition system based on the hyperdimensional computing paradigm, running on a programmable parallel ultra-low-power (PULP) platform. The processing chain includes efficient on-chip training, which leads to a fully embedded implementation with no need to perform any offline training on a personal computer. The proposed solution has been tested on 10 subjects in a typical gesture recognition scenario, achieving 85% average accuracy on 11-gesture recognition, which is aligned with the state-of-the-art, with the unique capability of performing online learning. Furthermore, by virtue of the hardware-friendly algorithm and of the efficient PULP system-on-chip (Mr. Wolf) used for prototyping and evaluation, the energy budget required to run the learning part with 11 gestures is 10.04 mJ, and 83.2 µJ per classification. The system works with an average power consumption of 10.4 mW in classification, ensuring around 29 h of autonomy with a 100 mAh battery. Finally, the scalability of the system is explored by increasing the number of channels (up to 256 electrodes), demonstrating the suitability of our approach as a universal, energy-efficient wearable biopotential recognition framework.
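The hyperdimensional computing idea behind the system (encode inputs as very high-dimensional vectors, bundle them into class prototypes, classify by similarity search) can be sketched generically in numpy. The encoding below uses stand-in random vectors rather than real EMG windows and is not the paper's PULP implementation.

```python
import numpy as np

D = 10000                      # hypervector dimensionality
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)

def bundle(hvs):
    """Bundle (element-wise sum then sign) a list of hypervectors into one prototype."""
    return np.sign(np.sum(hvs, axis=0))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical "training": each gesture class bundles noisy copies of a template vector.
# A real system would bundle encodings of recorded EMG windows instead.
class_templates = {g: random_hv() for g in ["fist", "open", "pinch"]}
prototypes = {g: bundle([t * rng.choice([1, 1, 1, -1], size=D)   # flip ~25% of components
                         for _ in range(20)])
              for g, t in class_templates.items()}

# Classification: nearest prototype by cosine similarity.
query = class_templates["pinch"] * rng.choice([1, 1, 1, -1], size=D)
pred = max(prototypes, key=lambda g: cosine(query, prototypes[g]))
print(pred)   # expected: "pinch" with high probability
```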

Journal ArticleDOI
TL;DR: The basis of the proposed analysis method is the connection of heart rate variability and oxygen saturation with apnea events; the method was transferred to a cloud-based system architecture to diagnose and warn remote patients.

Journal ArticleDOI
TL;DR: This paper investigates a 3-D extension of a classical mean-shift tracker whose greedy gradient ascent strategy is generally considered unreliable in conventional 2-D tracking, and proposes two important mechanisms to further boost the tracker's robustness.
Abstract: Depth cameras have recently become popular and many vision problems can be better solved with depth information. However, how to integrate depth information into a visual tracker to overcome challenges such as occlusion and background distraction is still under-investigated in the current literature on visual tracking. In this paper, we investigate a 3-D extension of a classical mean-shift tracker whose greedy gradient ascent strategy is generally considered unreliable in conventional 2-D tracking. However, through careful study of the physical properties of 3-D point clouds, we reveal that objects which may appear to be adjacent on a 2-D image will form distinctive modes in the 3-D probability distribution approximated by kernel density estimation, and finding the nearest mode using 3-D mean-shift can always work in tracking. Based on the understanding of 3-D mean-shift, we propose two important mechanisms to further boost the tracker's robustness: one is to enable the tracker to be aware of potential distractions and make corresponding adjustments to the appearance model; and the other is to enable the tracker to detect and recover from tracking failures caused by total occlusion. The proposed method is both effective and computationally efficient. On a conventional personal computer, it runs at more than 60 FPS without graphical processing unit acceleration.
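The mode-seeking primitive the tracker builds on is plain kernel mean-shift applied to 3-D points. A minimal numpy sketch of Gaussian-kernel mean-shift iterations follows; it illustrates only the mode-seeking step, not the paper's distraction-awareness or occlusion-recovery mechanisms, and the bandwidth and synthetic point cloud are arbitrary.

```python
import numpy as np

def mean_shift_mode(points, start, bandwidth=0.2, iters=30, tol=1e-5):
    """Follow the Gaussian-kernel mean-shift vector from `start` to a density mode.

    points : (N, 3) array of 3-D points (e.g. from a depth camera)
    start  : (3,) initial estimate (e.g. previous target position)
    """
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        d2 = np.sum((points - x) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))          # Gaussian kernel weights
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Two clusters that would overlap in a 2-D projection but are separated in depth
rng = np.random.default_rng(1)
target = rng.normal([0.0, 0.0, 1.0], 0.05, size=(300, 3))
distractor = rng.normal([0.1, 0.0, 1.6], 0.05, size=(300, 3))
cloud = np.vstack([target, distractor])
print(mean_shift_mode(cloud, start=[0.05, 0.0, 1.1]))      # converges near (0, 0, 1)
```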

Journal ArticleDOI
TL;DR: By optimizing the algorithms, this work finds that some transform methods are sufficiently fast to transform 1-megapixel images at more than 100 frames per second on a desktop personal computer.
Abstract: The Abel transform is a mathematical operation that transforms a cylindrically symmetric three-dimensional (3D) object into its two-dimensional (2D) projection. The inverse Abel transform reconstructs the 3D object from the 2D projection. Abel transforms have wide application across numerous fields of science, especially chemical physics, astronomy, and the study of laser-plasma plumes. Consequently, many numerical methods for the Abel transform have been developed, which makes it challenging to select the ideal method for a specific application. In this work, eight published transform methods have been incorporated into a single, open-source Python software package (PyAbel) to provide a direct comparison of the capabilities, advantages, and relative computational efficiency of each transform method. Most of the tested methods provide similar, high-quality results. However, the computational efficiency varies across several orders of magnitude. By optimizing the algorithms, we find that some transform methods are sufficiently fast to transform 1-megapixel images at more than 100 frames per second on a desktop personal computer. In addition, we demonstrate the transform of gigapixel images.
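For reference, the forward Abel transform of a radial profile f(r) is F(y) = 2 ∫_y^∞ f(r) r / sqrt(r² − y²) dr. The naive numpy discretization below (with a crude analytic treatment of the integrable singularity at r = y) is only meant to make the operation concrete; it is not one of the optimized PyAbel methods benchmarked in the paper.

```python
import numpy as np

def forward_abel(f, r):
    """Naive forward Abel transform F(y) = 2*int_y^inf f(r) r / sqrt(r^2 - y^2) dr."""
    f = np.asarray(f, dtype=float)
    F = np.zeros_like(f)
    dr = r[1] - r[0]
    for i in range(len(r) - 1):
        y, rr, fr = r[i], r[i + 1:], f[i + 1:]
        # singular first interval [y, y+dr], treating f as constant there
        F[i] = 2.0 * f[i] * np.sqrt(rr[0] ** 2 - y ** 2)
        # left-rectangle rule for the remaining, non-singular part
        F[i] += 2.0 * np.sum(fr * rr / np.sqrt(rr ** 2 - y ** 2)) * dr
    return F

r = np.linspace(0.0, 5.0, 1001)
F_num = forward_abel(np.exp(-r ** 2), r)
F_exact = np.sqrt(np.pi) * np.exp(-r ** 2)   # analytic Abel transform of a Gaussian
print(F_num[200], F_exact[200])              # close agreement away from the outer edge
```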

Journal ArticleDOI
TL;DR: A reinforcement learning approach is proposed to solve the problem of personalized learning within the Markov decision framework and achieves a balance between making the best possible recommendations based on the current knowledge and exploring new learning trajectories that may potentially pay off.
Abstract: Personalized learning refers to instruction in which the pace of learning and the instructional approach are optimized for the needs of each learner. With the latest advances in information technology and data science, personalized learning is becoming possible for anyone with a personal computer, supported by a data-driven recommendation system that automatically schedules the learning sequence. The engine of such a recommendation system is a recommendation strategy that, based on data from other learners and the performance of the current learner, recommends suitable learning materials to optimize certain learning outcomes. A powerful engine achieves a balance between making the best possible recommendations based on the current knowledge and exploring new learning trajectories that may potentially pay off. Building such an engine is a challenging task. We formulate this problem within the Markov decision framework and propose a reinforcement learning approach to solving the problem.
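A toy version of such an engine can be written as tabular Q-learning with epsilon-greedy exploration, which makes the exploit/explore balance mentioned above concrete. The states, actions, rewards, and advancement probabilities below are entirely hypothetical and are not the paper's model.

```python
import random

# Hypothetical MDP: states are coarse knowledge levels, actions are learning materials.
states = range(4)                                     # 0 = novice ... 3 = mastered (terminal)
actions = ["video", "quiz", "reading"]
advance_p = {"video": 0.5, "quiz": 0.3, "reading": 0.4}   # chance each material advances the learner

def step(s, a, rng):
    if rng.random() < advance_p[a] and s < 3:
        s += 1
    reward = 1.0 if s == 3 else 0.0                   # reward only on reaching mastery
    return s, reward

Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, eps = 0.1, 0.95, 0.2
rng = random.Random(0)

for _ in range(5000):                                 # training episodes
    s = 0
    while s != 3:
        a = (rng.choice(actions) if rng.random() < eps          # explore
             else max(actions, key=lambda act: Q[(s, act)]))    # exploit
        s2, r = step(s, a, rng)
        best_next = max(Q[(s2, a2)] for a2 in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# Greedy recommendation per non-terminal knowledge state
print({s: max(actions, key=lambda act: Q[(s, act)]) for s in states if s != 3})
```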

Journal ArticleDOI
TL;DR: As a scalable and open-source information management system, CropSight can be used to maintain and collate important crop performance and microclimate datasets captured by IoT sensors and distributed phenotyping installations.
Abstract: Background: High-quality plant phenotyping and climate data lay the foundation of phenotypic analysis and genotype-environment interaction, providing important evidence not only for plant scientists to understand the dynamics between crop performance, genotypes, and environmental factors, but also for agronomists and farmers to closely monitor crops in fluctuating agricultural conditions. With the rise of Internet of Things technologies (IoT) in recent years, many IoT-based remote sensing devices have been applied to plant phenotyping and crop monitoring, which are generating terabytes of biological datasets every day. However, it is still technically challenging to calibrate, annotate, and aggregate the big data effectively, especially when they are produced in multiple locations and at different scales. Findings: CropSight is a PHP and SQL based server platform, which provides automated data collation, storage, and information management through distributed IoT sensors and phenotyping workstations. It provides a two-component solution to monitor biological experiments through networked sensing devices, with interfaces specifically designed for distributed plant phenotyping and centralised data management. Data transfer and annotation are accomplished automatically through an HTTP-accessible RESTful API installed on both the device side and the server side of the CropSight system, which synchronises daily representative crop growth images for visual-based crop assessment and hourly microclimate readings for GxE studies. CropSight also supports the comparison of historical and ongoing crop performance whilst different experiments are being conducted. Conclusions: As a scalable and open-source information management system, CropSight can be used to maintain and collate important crop performance and microclimate datasets captured by IoT sensors and distributed phenotyping installations. It provides near real-time environmental and crop growth monitoring in addition to historical and current experiment comparison through an integrated cloud-ready server system. Accessible both locally in the field through smart devices and remotely in an office using a personal computer, CropSight has been applied to field experiments of bread wheat prebreeding since 2016 and speed breeding since 2017. We believe that the CropSight system could have a significant impact on scalable plant phenotyping and IoT-style crop management to enable smart agricultural practices in the near future.
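Device-side data transfer of the kind described (sensor readings pushed to the server over an HTTP RESTful API) reduces to a simple POST request. The endpoint URL and JSON fields below are invented for illustration only; a real CropSight deployment defines its own routes and schema.

```python
import requests  # hypothetical client-side sketch; endpoint and fields are placeholders

reading = {
    "station_id": "wheat-plot-07",          # hypothetical device identifier
    "timestamp": "2019-06-01T10:00:00Z",
    "air_temp_c": 21.4,
    "relative_humidity": 0.63,
}
# CropSight exposes an HTTP-accessible RESTful API; the exact route below is invented
# for illustration and will differ in a real deployment.
resp = requests.post("https://cropsight.example.org/api/readings", json=reading, timeout=10)
print(resp.status_code)
```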

Journal ArticleDOI
TL;DR: A learning-based Shack-Hartmann wavefront sensor (SHWS) is presented that achieves high-order aberration detection without image segmentation or centroid positioning, improving the wavefront sensing ability of the SHWS; it could be combined with an existing adaptive optics system and be further applied in biological applications.
Abstract: We present a learning-based Shack-Hartmann wavefront sensor (SHWS) to achieve high-order aberration detection without image segmentation or centroid positioning. Zernike coefficient amplitudes of aberrations measured from biological samples are used as a reference and expanded to generate the training datasets. With one SHWS pattern as input, up to the 120th Zernike mode could be predicted within 10.9 ms with 95.56% model accuracy on a personal computer. The statistical experimental results show that, compared with the traditional modal-based SHWS, the root mean squared error in phase residuals of this method is reduced by ∼40.54% and the Strehl ratio of the point spread functions is improved by ∼27.31%. The aberration detection performance of this method is also validated on a mouse brain slice of 300 µm thickness, and the median improvement in peak-to-background ratio is ∼30% to 40% compared with the traditional SHWS. With its high detection accuracy, simple processes, fast prediction speed and good compatibility, this work offers a potential approach to improve the wavefront sensing ability of the SHWS, which could be combined with an existing adaptive optics system and be further applied in biological applications.
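The core of the approach is a network that regresses Zernike coefficient amplitudes directly from a Shack-Hartmann spot pattern. The PyTorch sketch below shows that kind of regression with a deliberately tiny CNN and random stand-in data; the architecture, input size, and training data are hypothetical and not the authors' network.

```python
import torch
import torch.nn as nn

class ZernikeRegressor(nn.Module):
    """Tiny CNN mapping a SHWS spot pattern to Zernike coefficient amplitudes."""
    def __init__(self, n_modes=120):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, n_modes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Hypothetical training step on synthetic data (random patterns and coefficients)
model = ZernikeRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
patterns = torch.randn(8, 1, 128, 128)     # stand-in SHWS images
coeffs = torch.randn(8, 120)               # stand-in Zernike amplitudes
loss = nn.functional.mse_loss(model(patterns), coeffs)
loss.backward()
opt.step()
print(loss.item())
```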

Journal ArticleDOI
TL;DR: The role of the Universal Serial Bus (USB), the most widely accepted interface, in enabling communication between peripheral devices and a host controller such as a laptop, personal computer, smartphone, or tablet is examined, and, motivated by the benefits of USB, a secure three-factor authentication scheme for smart healthcare is proposed.
Abstract: Nowadays, society is witnessing a keen urge to enhance the quality of healthcare services through the intervention of technology in the health sector. The main focus in transforming traditional healthcare into smart healthcare is on facilitating both patients and medical professionals. However, this changeover is not easy due to the various issues of security and integrity associated with it. The security and privacy of patients' personal health records can be handled well by permitting only authorized access to confidential health data via a suitably designed authentication scheme. In pursuit of contributing in this direction, we examined the role of the Universal Serial Bus (USB), the most widely accepted interface, in enabling communication between peripheral devices and a host controller such as a laptop, personal computer, smartphone, or tablet. In the process, we analysed a recently proposed three-factor authentication scheme for consumer USB Mass Storage Devices (MSD) by He et al. In this paper, we demonstrate that He et al.'s scheme is vulnerable to leakage of temporary but session-specific information, late detection of message replay, forward secrecy attacks, and backward secrecy attacks. Then, motivated by the benefits of USB, we propose a secure three-factor authentication scheme for smart healthcare.

Journal ArticleDOI
TL;DR: Computational results demonstrate that the proposed approach can successfully identify both the blade crack locations and crack contours in UAV-taken images.
Abstract: A two-stage approach for precisely detecting wind turbine blade surface cracks via analyzing blade images captured by unmanned aerial vehicles (UAVs) is proposed in this paper. The proposed approach includes two main detection procedures, the crack location and crack contour detection. In locating cracks, a method for extracting crack windows based on extended Haar-like features is introduced. A parallel sliding window method is developed to scan images and the cascading classifier is developed to classify sliding windows into two classes, crack and noncrack. Based on detected windows containing cracks, a novel clustering algorithm, the parallel Jaya K-means algorithm, is developed to assign each pixel in crack windows into crack and noncrack segments. Crack contours are obtained based on boundaries of crack segments. The effectiveness and efficiency of the proposed crack detection approach are validated by executing it on a personal computer and an embedded device with UAV-taken images collected from a commercial wind farm. Computational results demonstrate that the proposed approach can successfully identify both the blade crack locations and crack contours in UAV-taken images.
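Haar-like features over sliding windows reduce to differences of rectangle sums, which an integral image makes O(1) per rectangle. The numpy sketch below shows that building block (a two-rectangle edge feature scanned over a synthetic image with a dark vertical band); it does not reproduce the paper's extended features, cascading classifier, or Jaya K-means stage.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended for easy lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] from the integral image in O(1)."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_edge_feature(ii, r, c, h, w):
    """Two-rectangle (left-minus-right) Haar-like edge feature."""
    return rect_sum(ii, r, c, h, w // 2) - rect_sum(ii, r, c + w // 2, h, w // 2)

# Synthetic grayscale image with a dark vertical "crack"
img = np.ones((64, 64))
img[:, 30:33] = 0.0
ii = integral_image(img)

# Slide a 16x16 window and report where the edge response is strongest
responses = {(r, c): abs(haar_edge_feature(ii, r, c, 16, 16))
             for r in range(0, 48, 4) for c in range(0, 48, 4)}
print(max(responses, key=responses.get))   # strongest response at windows covering the dark band
```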

Journal ArticleDOI
TL;DR: A method to fit time-trace data from a terahertz time-domain-spectroscopy system, enabling the extraction of physical parameters from a material or metamaterial, is reported.
Abstract: We report on a method to fit time-trace data from a terahertz time-domain-spectroscopy system enabling the extraction of physical parameters from a material or metamaterial. To accomplish this, we developed a Python-based open-source software called Fit@TDS that functions on a personal computer. This software includes commonly used methods where the refractive index is extracted from frequency-domain data. This method has limitations when the signal is too noisy or when an absorption peak saturates the spectrum. Thus, the software also includes a new method where the refractive indices are directly fitted from the time trace. The idea is to model a material or a metamaterial through parametric physical models (Drude–Lorentz model and time-domain coupled mode theory) and implement this in the propagation model to simulate the time trace. Then, an optimization algorithm is used to retrieve the parameters of the model corresponding to the studied material/metamaterial. In this paper, we explain the method and test it on fictitious samples to probe its feasibility and reliability. Finally, we used Fit@TDS on real samples of high-resistivity silicon, lactose, and gold metasurface on quartz to show the capacity of the method.
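Fitting a parametric physical model such as a Lorentz oscillator to the measured response is, at its core, nonlinear least squares. The scipy sketch below fits a single-oscillator permittivity to synthetic noisy data; the model form is the textbook Lorentz oscillator and all numbers are placeholders, so it illustrates the fitting idea rather than Fit@TDS itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentz_eps(omega, eps_inf, delta_eps, omega0, gamma):
    """Single Lorentz-oscillator permittivity (complex)."""
    return eps_inf + delta_eps * omega0**2 / (omega0**2 - omega**2 - 1j * gamma * omega)

def stacked(omega, eps_inf, delta_eps, omega0, gamma):
    e = lorentz_eps(omega, eps_inf, delta_eps, omega0, gamma)
    return np.concatenate([e.real, e.imag])      # curve_fit needs real-valued output

omega = 2 * np.pi * np.linspace(0.2e12, 2.0e12, 400)          # 0.2-2 THz angular frequency
true = (3.0, 0.5, 2 * np.pi * 0.53e12, 2 * np.pi * 0.03e12)   # placeholder oscillator parameters
rng = np.random.default_rng(2)
data = stacked(omega, *true) + 0.01 * rng.standard_normal(2 * omega.size)

p0 = (2.5, 0.3, 2 * np.pi * 0.5e12, 2 * np.pi * 0.05e12)      # rough initial guess
popt, _ = curve_fit(stacked, omega, data, p0=p0)
print(popt)   # recovered (eps_inf, delta_eps, omega0, gamma)
```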

Proceedings ArticleDOI
18 Jul 2019
TL;DR: An auto-correction simulator is introduced that uses knowledge of the stimulus to emulate statistical text decoding within constrained experimental parameters, and high-precision motion tracking hardware is used to visualise and detect fingertip interactions.
Abstract: Virtual and Augmented Reality deliver engaging interaction experiences that can transport and extend the capabilities of the user. To ensure these paradigms are more broadly usable and effective, however, it is necessary to also deliver many of the conventional functions of a smartphone or personal computer. It remains unclear how conventional input tasks, such as text entry, can best be translated into virtual and augmented reality. In this paper we examine the performance potential of four alternative text entry strategies in virtual reality (VR). These four strategies are selected to provide full coverage of two fundamental design dimensions: i) physical surface association; and ii) number of engaged fingers. Specifically, we examine typing with index fingers on a surface and in mid-air and typing using all ten fingers on a surface and in mid-air. The central objective is to evaluate the human performance potential of these four typing strategies without being constrained by current tracking and statistical text decoding limitations. To this end we introduce an auto-correction simulator that uses knowledge of the stimulus to emulate statistical text decoding within constrained experimental parameters and use high-precision motion tracking hardware to visualise and detect fingertip interactions. We find that alignment of the virtual keyboard with a physical surface delivers significantly faster entry rates over a mid-air keyboard. Also, users overwhelmingly fail to effectively engage all ten fingers in mid-air typing, resulting in slower entry rates and higher error rates compared to just using two index fingers. In addition to identifying the envelopes of human performance for the four strategies investigated, we also provide a detailed analysis of the underlying features that distinguish each strategy in terms of its performance and behaviour.

Proceedings ArticleDOI
15 Apr 2019
TL;DR: A new resource management framework, HeteroEdge, is developed to address the heterogeneity of SSEC by providing a uniform interface to abstract the device details (hardware, operating system, CPU) and by effectively allocating the social sensing tasks to the heterogeneous edge devices.
Abstract: Social sensing has emerged as a new sensing application paradigm where measurements about the physical world are collected from humans or devices on their behalf. The advent of edge computing pushes the frontier of computation, service, and data along the cloud-to-IoT continuum. The merge of these two technical trends (referred to as Social Sensing based Edge Computing or SSEC) generates a set of new research challenges. One critical issue in SSEC is the heterogeneity of the edge where the edge devices owned by human sensors often have diversified computational power, runtime environments, network interfaces, and hardware equipment. Such heterogeneity poses significant challenges in the resource management of SSEC systems. Examples include masking the pronounced heterogeneity across diverse platforms, allocating interdependent tasks with complex requirements on devices with different resources, and adapting to the dynamic and diversified context of the edge devices. In this paper, we develop a new resource management framework, HeteroEdge, to address the heterogeneity of SSEC by 1) providing a uniform interface to abstract the device details (hardware, operating system, CPU); and 2) effectively allocating the social sensing tasks to the heterogeneous edge devices. We implemented HeteroEdge on a real-world edge computing testbed that consists of heterogeneous edge devices (Jetson TX2, TK1, Raspberry Pi3, and personal computer). Evaluations based on two real-world social sensing applications show that the HeteroEdge achieved up to 42% decrease in end-to-end delay for the application and 22% more energy savings compared to the state-of-the-art baselines.

Posted Content
01 Sep 2019-viXra
TL;DR: This paper aims to propose an artificial neural network (ANN)-based formula to precisely compute the critical elastic buckling load of simply supported cellular beams under uniformly distributed vertical loads.
Abstract: Cellular beams are an attractive option for the steel construction industry due to their versatility in terms of strength, size, and weight. Further benefits are the integration of services thereby reducing ceiling-to-floor depth (thus, building’s height), which has a great economic impact. Moreover, the complex localised and global failures characterizing those members have led several scientists to focus their research on the development of more efficient design guidelines. This paper aims to propose an artificial neural network (ANN)-based formula to estimate the critical elastic buckling load of simply supported cellular beams under uniformly distributed vertical loads. The 3645-point dataset used in ANN design was obtained from an extensive parametric finite element analysis performed in ABAQUS. The independent variables adopted as ANN inputs are the following: beam’s length, opening diameter, web-post width, cross-section height, web thickness, flange width, flange thickness, and the distance between the last opening edge and the end support. The proposed model shows a strong potential as an effective design tool. The maximum and average relative errors among the 3645 data points were found to be 3.7% and 0.4%, respectively, whereas the average computing time per data point is smaller than a millisecond for any current personal computer.
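The surrogate described above is a feed-forward regression from eight geometric inputs to a critical buckling load. The scikit-learn sketch below shows the same kind of model on random placeholder data; it is not trained on the paper's 3645 ABAQUS results and does not reproduce the published formula or weights.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Eight inputs (placeholder ranges, in m): length, opening diameter, web-post width,
# section height, web thickness, flange width, flange thickness, end distance.
rng = np.random.default_rng(3)
X = rng.uniform([4.0, 0.3, 0.1, 0.4, 0.006, 0.15, 0.01, 0.1],
                [12.0, 0.6, 0.4, 0.8, 0.012, 0.30, 0.02, 0.5], size=(500, 8))
y = rng.uniform(100.0, 2000.0, size=500)    # stand-in buckling loads (kN), NOT FE results

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                   random_state=0))
model.fit(X, y)
print(model.predict(X[:3]))    # surrogate evaluation is effectively instantaneous
```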

Journal ArticleDOI
TL;DR: A novel data-adaptive iterative variant of the CF-DAS algorithm is presented, used for processing the recorded backscattered signals to reconstruct the image of the breast phantom and to identify the existence and locate the area of multiple breast tumors.
Abstract: In this paper, a system for microwave breast tumor detection is presented using an iteratively corrected coherence factor delay-and-sum (CF-DAS) algorithm. CF-DAS is data independent, which makes it stable in a noisy environment. However, data-adaptive techniques have made significant progress by enhancing image quality in microwave tomography. Thus, a novel data-adaptive iterative variant of CF-DAS is proposed in this paper to produce stable and accurate images. The microwave imaging (MI) system contains a rotatable array of nine modified antipodal Vivaldi antennas in a circular arrangement, a stepper-motor-based array-mounting stand, a flexible phantom mounting podium, a control system for RF switching of the transceivers, and a personal-computer-based signal processing unit involved in the reconstruction of the image. The impedance bandwidth of the modified antenna is recorded from 2.5 to 11 GHz with a stable directional radiation pattern. For transmission and reception of the microwave signals, an SP8T nine-port RF switch operating from 2.5 to 8.0 GHz is used, and the switching is controlled by MATLAB software. Several low-cost lab-based homogeneous and heterogeneous phantoms with the dielectric properties of human breast and tumor tissue are prepared to test the system efficiency. Since typical data-independent radar-based techniques are ill-equipped for multiple-reflection scenarios, an iteratively corrected variant of the CF-DAS algorithm is used for processing the recorded backscattered signals to reconstruct the image of the breast phantom and to identify the existence and locate the area of the multiple breast tumors. The proposed method achieves a more than 10-dB improvement over conventional CF-DAS in terms of signal-to-mean ratio for the four different phantoms measured in this study.
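Coherence-factor delay-and-sum imaging sums time-aligned backscattered signals for each pixel and weights the result by a coherence factor. The numpy sketch below reconstructs a single synthetic point scatterer with a monostatic nine-antenna ring; the geometry, pulse, and wave speed are placeholder assumptions, and the paper's iterative correction is not included.

```python
import numpy as np

c = 3e8 / np.sqrt(9.0)                       # wave speed for an assumed eps_r ~ 9 medium
fs = 50e9                                    # sampling rate of the time traces
t = np.arange(2048) / fs

# Monostatic antenna ring (placeholder geometry, in m) and one synthetic scatterer
angles = np.linspace(0, 2 * np.pi, 9, endpoint=False)
antennas = 0.07 * np.c_[np.cos(angles), np.sin(angles)]
target = np.array([0.015, 0.02])

def pulse(tt):                               # short Gaussian pulse centered at t = 0
    return np.exp(-(tt / 0.05e-9) ** 2)

signals = np.array([pulse(t - 2 * np.linalg.norm(a - target) / c) for a in antennas])

# CF-DAS image over a pixel grid
xs = ys = np.linspace(-0.05, 0.05, 101)
image = np.zeros((ys.size, xs.size))
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        delays = 2 * np.linalg.norm(antennas - [x, y], axis=1) / c
        idx = np.clip((delays * fs).astype(int), 0, t.size - 1)
        samples = signals[np.arange(len(antennas)), idx]     # time-aligned samples
        das = samples.sum()
        cf = das ** 2 / (len(samples) * np.sum(samples ** 2) + 1e-12)  # coherence factor
        image[iy, ix] = cf * das ** 2

peak = np.unravel_index(image.argmax(), image.shape)
print(xs[peak[1]], ys[peak[0]])              # near the synthetic target (0.015, 0.02)
```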

Posted Content
TL;DR: In this paper, the authors discuss how to compute conformal blocks, formulate the crossing equations as a semi-definite programming problem, solve this problem using SDPB on a personal computer, and interpret the results.
Abstract: These lectures were given at the Weizmann Institute in the spring of 2019. They are intended to familiarize students with the nuts and bolts of the numerical bootstrap as efficiently as possible. After a brief review of the basics of conformal field theory in $d>2$ spacetime dimensions, we discuss how to compute conformal blocks, formulate the crossing equations as a semi-definite programming problem, solve this problem using SDPB on a personal computer, and interpret the results. We include worked examples for all steps, including bounds for 3d CFTs with $\mathbb{Z}_2$ or $O(N)$ global symmetries. Each lecture includes a problem set, which culminate in a precise computation of the 3d Ising model critical exponents using the mixed correlator $\mathbb{Z}_2$ bootstrap. A Mathematica file is included that transforms crossing equations into the proper input form for SDPB.
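For orientation, the crossing equation that the numerical bootstrap turns into a semi-definite program reads, for a single scalar $\phi$ of dimension $\Delta_\phi$ and in one common convention (the sum runs over operators in the $\phi\times\phi$ OPE, including the identity with $\lambda_{\mathbf 1}=1$):

```latex
\sum_{\mathcal{O}\in\phi\times\phi} \lambda_{\mathcal{O}}^{2}\,
F^{\Delta_\phi}_{\Delta_{\mathcal{O}},\ell_{\mathcal{O}}}(u,v)=0,
\qquad
F^{\Delta_\phi}_{\Delta,\ell}(u,v)\equiv
v^{\Delta_\phi}\,g_{\Delta,\ell}(u,v)-u^{\Delta_\phi}\,g_{\Delta,\ell}(v,u).
```

SDPB is then used to search for a linear functional that is non-negative on every allowed $F^{\Delta_\phi}_{\Delta,\ell}$ and strictly positive on the identity contribution, which rules out the assumed spectrum.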

Book
13 Jun 2019
TL;DR: The history of the stormy romance between American business and Chinese communism through the experiences of American Motors and its operation in China, Beijing Jeep, a closely watched joint venture often visited by American politicians and Chinese leaders as discussed by the authors.
Abstract: When China opened its doors to the West in the late 1970s, Western businesses jumped at the chance to sell their products to the most populous nation in the world. Boardrooms everywhere buzzed with excitement: a Coke for every citizen, a television for every family, a personal computer for every office. At no other time have the institutions of Western capitalism tried to do business with a communist state to the extent that they did in China under Deng Xiaoping. Yet, over the decade leading up to the bloody events in and around Tiananmen Square, that experiment produced growing disappointment on both sides, and a vision of capturing the world's largest market faded. Picked as one of Fortune Magazine's "75 Smartest Books We Know," this updated version of Beijing Jeep traces the history of the stormy romance between American business and Chinese communism through the experiences of American Motors and its operation in China, Beijing Jeep, a closely watched joint venture often visited by American politicians and Chinese leaders. Jim Mann explains how some of the world's savviest executives completely misjudged the business climate and recounts how the Chinese, who acquired valuable new technology at virtually no expense to themselves, ultimately outcapitalized the capitalists. And, in a new epilogue, Mann revisits and updates the events which constituted the main issues of the first edition. Elegantly written, brilliantly reported, Beijing Jeep is a cautionary tale about the West's age-old quest to do business in the Middle Kingdom.

Journal ArticleDOI
TL;DR: The results show that ImmerTai can accelerate the learning process of students noticeably compared to non-immersive learning with the conventional PC setup, and there is a substantial difference in the quality of the learnt motion between CAVE and HMD compared to PC.