
Showing papers in "Sensors in 2017"


Journal ArticleDOI
10 Apr 2017-Sensors
TL;DR: This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with high accuracy.
Abstract: This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.

894 citations
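As a toy illustration of the "traffic as images" idea above, the sketch below converts a time-by-space table of speeds into a grayscale image a CNN could consume. It is illustrative only: the segment ordering, speed values, and normalization constant are assumptions, not the authors' code.

```python
import numpy as np

def to_time_space_image(speeds, v_max=80.0):
    """Convert a (time x space) speed table into a grayscale image in [0, 1].

    speeds[t][s] is the average speed (km/h) on road segment s during time
    interval t; time maps to one image axis and space (ordered segments)
    to the other, as in the paper's two-dimensional time-space matrix.
    """
    m = np.asarray(speeds, dtype=float)
    return np.clip(m / v_max, 0.0, 1.0)  # normalize so a CNN can consume it

# Two time intervals x two road segments (hypothetical readings).
image = to_time_space_image([[40.0, 80.0], [20.0, 60.0]])
```

A real pipeline would stack many such frames and feed them to the convolutional layers for feature extraction and speed prediction.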


Journal ArticleDOI
22 Feb 2017-Sensors
TL;DR: A novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN) that not only achieves 100% classification accuracy on normal signals, but also outperforms the state-of-the-art DNN model based on frequency features under different working loads and noisy environment conditions.
Abstract: Intelligent fault diagnosis techniques have replaced time-consuming and unreliable human analysis, increasing the efficiency of fault diagnosis. Deep learning models can improve the accuracy of intelligent fault diagnosis with the help of their multilayer nonlinear mapping ability. This paper proposes a novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN). The proposed method uses raw vibration signals as input (data augmentation is used to generate more inputs), and uses wide kernels in the first convolutional layer for extracting features and suppressing high-frequency noise. Small convolutional kernels in the subsequent layers are used for multilayer nonlinear mapping. AdaBN is implemented to improve the domain adaptation ability of the model. The proposed model addresses the problem that the accuracy of CNNs applied to fault diagnosis is currently not very high. WDCNN not only achieves 100% classification accuracy on normal signals, but also outperforms the state-of-the-art DNN model based on frequency features under different working loads and noisy environment conditions.

876 citations
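The noise-suppression role of a wide first-layer kernel can be seen with a simple stand-in: a wide averaging filter (here a fixed 64-tap average, standing in for a learned kernel; the signal, tap count, and noise level are assumptions, not taken from the paper) acts as a low-pass filter and reduces the error against the clean low-frequency component:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2048)
clean = np.sin(2 * np.pi * 5 * t)                   # low-frequency fault signature
noisy = clean + 0.5 * rng.standard_normal(t.size)   # added high-frequency noise

# A wide kernel averages many samples, so uncorrelated high-frequency
# noise largely cancels while the slow component passes through.
wide_kernel = np.ones(64) / 64
filtered = np.convolve(noisy, wide_kernel, mode="same")

err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((filtered - clean) ** 2)
```

In WDCNN the first-layer kernels are learned rather than fixed, but the low-pass intuition is the same.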


Journal ArticleDOI
04 Sep 2017-Sensors
TL;DR: A deep-learning-based approach to detect diseases and pests in tomato plants using images captured in-place by camera devices with various resolutions, and combines each of these meta-architectures with “deep feature extractors” such as VGG net and Residual Network.
Abstract: Plant Diseases and Pests are a major challenge in the agriculture sector. Accurate and fast detection of diseases and pests in plants could help develop early treatment techniques while substantially reducing economic losses. Recent developments in Deep Neural Networks have allowed researchers to drastically improve the accuracy of object detection and recognition systems. In this paper, we present a deep-learning-based approach to detect diseases and pests in tomato plants using images captured in-place by camera devices with various resolutions. Our goal is to find the most suitable deep-learning architecture for our task. Therefore, we consider three main families of detectors: Faster Region-based Convolutional Neural Network (Faster R-CNN), Region-based Fully Convolutional Network (R-FCN), and Single Shot Multibox Detector (SSD), which for the purpose of this work are called "deep learning meta-architectures". We combine each of these meta-architectures with "deep feature extractors" such as VGG net and Residual Network (ResNet). We demonstrate the performance of deep meta-architectures and feature extractors, and additionally propose a method for local and global class annotation and data augmentation to increase the accuracy and reduce the number of false positives during training. We train and test our systems end-to-end on our large Tomato Diseases and Pests Dataset, which contains challenging images with diseases and pests, including several inter- and extra-class variations, such as infection status and location in the plant. Experimental results show that our proposed system can effectively recognize nine different types of diseases and pests, with the ability to deal with complex scenarios from a plant's surrounding area.

832 citations


Journal ArticleDOI
12 Jan 2017-Sensors
TL;DR: This paper has presented and compared several low-cost and non-invasive health and activity monitoring systems that were reported in recent years and compatibility of several communication technologies as well as future perspectives and research challenges in remote monitoring systems will be discussed.
Abstract: Life expectancy in most countries has been increasing continually over the past few decades thanks to significant improvements in medicine, public health, as well as personal and environmental hygiene. However, increased life expectancy combined with falling birth rates are expected to engender a large aging demographic in the near future that would impose significant burdens on the socio-economic structure of these countries. Therefore, it is essential to develop cost-effective, easy-to-use systems for the sake of elderly healthcare and well-being. Remote health monitoring, based on non-invasive and wearable sensors, actuators and modern communication and information technologies offers an efficient and cost-effective solution that allows the elderly to continue to live in their comfortable home environment instead of expensive healthcare facilities. These systems will also allow healthcare personnel to monitor important physiological signs of their patients in real time, assess health conditions and provide feedback from distant facilities. In this paper, we have presented and compared several low-cost and non-invasive health and activity monitoring systems that were reported in recent years. A survey on textile-based sensors that can potentially be used in wearable systems is also presented. Finally, compatibility of several communication technologies as well as future perspectives and research challenges in remote monitoring systems will be discussed.

795 citations


Journal ArticleDOI
22 Dec 2017-Sensors
TL;DR: This study examined and compared the performances of the RF, kNN, and SVM classifiers for land use/cover classification using Sentinel-2 image data and found that SVM produced the highest OA with the least sensitivity to the training sample sizes.
Abstract: In previous classification studies, three non-parametric classifiers, Random Forest (RF), k-Nearest Neighbor (kNN), and Support Vector Machine (SVM), were reported as the foremost classifiers at producing high accuracies. However, only a few studies have compared the performances of these classifiers with different training sample sizes for the same remote sensing images, particularly the Sentinel-2 Multispectral Imager (MSI). In this study, we examined and compared the performances of the RF, kNN, and SVM classifiers for land use/cover classification using Sentinel-2 image data. An area of 900 km² (30 × 30 km) within the Red River Delta of Vietnam with six land use/cover types was classified using 14 different training sample sizes, including balanced and imbalanced, from 50 to over 1250 pixels/class. All classification results showed a high overall accuracy (OA) ranging from 90% to 95%. Among the three classifiers and 14 sub-datasets, SVM produced the highest OA with the least sensitivity to the training sample sizes, followed consecutively by RF and kNN. In relation to the sample size, all three classifiers showed a similar and high OA (over 93.85%) when the training sample size was large enough, i.e., greater than 750 pixels/class or representing an area of approximately 0.25% of the total study area. The high accuracy was achieved with both imbalanced and balanced datasets.

777 citations
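Of the three classifiers compared, kNN is the simplest to sketch. The toy example below classifies one pixel from two-band reflectances by majority vote among its nearest training pixels; the band values, class names, and k are invented for illustration and are not from the study.

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, pixel, k=3):
    """Classify one pixel by majority vote among its k nearest training pixels."""
    d = np.linalg.norm(np.asarray(train_X, float) - np.asarray(pixel, float), axis=1)
    nearest = np.argsort(d)[:k]                     # indices of the k closest pixels
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Hypothetical two-band reflectances: water is dark in both bands,
# vegetation is brighter in band 2.
train_X = [[0.10, 0.10], [0.15, 0.10], [0.10, 0.60], [0.20, 0.70], [0.15, 0.65]]
train_y = ["water", "water", "veg", "veg", "veg"]
label = knn_classify(train_X, train_y, [0.12, 0.62])
```

The study's sensitivity to training sample size follows directly from this design: with few labeled pixels per class, the nearest neighbors are less representative and votes become noisy.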


Journal ArticleDOI
29 Nov 2017-Sensors
TL;DR: This paper reviews the development of GelSight, with the emphasis in the sensing principle and sensor design, and introduces the design of the sensor’s optical system, the algorithm for shape, force and slip measurement, and the hardware designs and fabrication of different sensor versions.
Abstract: Tactile sensing is an important perception mode for robots, but the existing tactile technologies have multiple limitations. What kind of tactile information robots need, and how to use the information, remain open questions. We believe a soft sensor surface and high-resolution sensing of geometry should be important components of a competent tactile sensor. In this paper, we discuss the development of a vision-based optical tactile sensor, GelSight. Unlike the traditional tactile sensors which measure contact force, GelSight basically measures geometry, with very high spatial resolution. The sensor has a contact surface of soft elastomer, and it directly measures its deformation, both vertical and lateral, which corresponds to the exact object shape and the tension on the contact surface. The contact force, and slip can be inferred from the sensor’s deformation as well. Particularly, we focus on the hardware and software that support GelSight’s application on robot hands. This paper reviews the development of GelSight, with the emphasis in the sensing principle and sensor design. We introduce the design of the sensor’s optical system, the algorithm for shape, force and slip measurement, and the hardware designs and fabrication of different sensor versions. We also show the experimental evaluation on the GelSight’s performance on geometry and force measurement. With the high-resolution measurement of shape and contact force, the sensor has successfully assisted multiple robotic tasks, including material perception or recognition and in-hand localization for robot manipulation.

549 citations


Journal ArticleDOI
30 Jan 2017-Sensors
TL;DR: A deep neural network structure named Convolutional Bi-directional Long Short-Term Memory networks (CBLSTM) has been designed here to address raw sensory data and is able to outperform several state-of-the-art baseline methods.
Abstract: In modern manufacturing systems and industries, more and more research efforts have been made in developing effective machine health monitoring systems. Among various machine health monitoring approaches, data-driven methods are gaining in popularity due to the development of advanced sensing and data analytic techniques. However, considering the noise, varying length and irregular sampling behind sensory data, this kind of sequential data cannot be fed into classification and regression models directly. Therefore, previous work focuses on feature extraction/fusion methods requiring expensive human labor and high quality expert knowledge. With the development of deep learning methods in the last few years, which redefine representation learning from raw data, a deep neural network structure named Convolutional Bi-directional Long Short-Term Memory networks (CBLSTM) has been designed here to address raw sensory data. CBLSTM first uses a CNN to extract local features that are robust and informative from the sequential input. Then, a bi-directional LSTM is introduced to encode temporal information. Long Short-Term Memory networks (LSTMs) are able to capture long-term dependencies and model sequential data, and the bi-directional structure enables the capture of past and future contexts. Stacked, fully-connected layers and the linear regression layer are built on top of bi-directional LSTMs to predict the target value. Here, a real-life tool wear test is introduced, and our proposed CBLSTM is able to predict the actual tool wear based on raw sensory data. The experimental results have shown that our model is able to outperform several state-of-the-art baseline methods.

520 citations
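The LSTM building block that CBLSTM stacks on top of its convolutional front end can be written in a few lines. This is a generic textbook LSTM step in numpy, not the authors' implementation; the dimensions and random weights are placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM step: input, forget, and output gates plus a candidate update,
    all computed from the current input x and the previous hidden state h."""
    z = W @ np.concatenate([x, h]) + b      # stacked gate pre-activations
    n = h.size
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])                    # candidate cell update
    c_new = f * c + i * g                   # long-term memory
    h_new = o * np.tanh(c_new)              # short-term output
    return h_new, c_new

rng = np.random.default_rng(1)
n_in, n_h = 3, 4
W = rng.standard_normal((4 * n_h, n_in + n_h)) * 0.1
b = np.zeros(4 * n_h)
h = c = np.zeros(n_h)
for x in rng.standard_normal((5, n_in)):    # run over a length-5 sequence
    h, c = lstm_step(x, h, c, W, b)
```

The bi-directional variant in the paper runs one such recurrence forward and a second one backward over the sequence, then concatenates the two hidden states.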


Journal ArticleDOI
12 Aug 2017-Sensors
TL;DR: This review highlights recent advances towards non-invasive and continuous glucose monitoring devices, with a particular focus placed on monitoring glucose concentrations in alternative physiological fluids to blood.
Abstract: This review highlights recent advances towards non-invasive and continuous glucose monitoring devices, with a particular focus placed on monitoring glucose concentrations in alternative physiological fluids to blood.

520 citations


Journal ArticleDOI
03 Aug 2017-Sensors
TL;DR: This review outlines recent applications of WSNs in agriculture research, classifies and compares various wireless communication protocols, surveys the taxonomy of energy-efficient and energy-harvesting techniques for WSNs usable in agricultural monitoring systems, and compares early research works on agriculture-based WSNs.
Abstract: Wireless sensor networks (WSNs) can be used in agriculture to provide farmers with a large amount of information. Precision agriculture (PA) is a management strategy that employs information technology to improve quality and production. Utilizing wireless sensor technologies and management tools can lead to a highly effective, green agriculture. PA management avoids applying the same routine to a crop regardless of site conditions. From several perspectives, field management can improve PA, including the provision of adequate nutrients for crops and the reduction of pesticide waste in the effective control of weeds, pests, and diseases. This review outlines recent applications of WSNs in agriculture research, classifies and compares various wireless communication protocols, surveys the taxonomy of energy-efficient and energy-harvesting techniques for WSNs that can be used in agricultural monitoring systems, and compares early research works on agriculture-based WSNs. The challenges and limitations of WSNs in the agricultural domain are explored, and several power reduction and agricultural management techniques for long-term monitoring are highlighted. These approaches may also increase the number of opportunities for processing Internet of Things (IoT) data.

405 citations
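The power-reduction techniques the review surveys mostly come down to duty cycling: a node that sleeps most of the time draws far less average current. The back-of-envelope sketch below uses invented current and battery figures, not numbers from the review.

```python
# Illustrative field-node figures (assumptions, not from the paper).
I_ACTIVE_MA = 20.0    # radio on, sampling
I_SLEEP_MA = 0.005    # deep sleep
BATTERY_MAH = 2000.0

def avg_current_ma(duty):
    """Average current draw for a node active a fraction `duty` of the time."""
    return duty * I_ACTIVE_MA + (1.0 - duty) * I_SLEEP_MA

def lifetime_days(duty):
    """Battery lifetime in days at the given duty cycle."""
    return BATTERY_MAH / avg_current_ma(duty) / 24.0

always_on = lifetime_days(1.0)     # a few days
one_percent = lifetime_days(0.01)  # on the order of a year
```

The same arithmetic explains why energy harvesting (solar, in most agricultural deployments) pairs naturally with low duty cycles: the harvested trickle only needs to cover the small average draw.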


Journal ArticleDOI
Haiyang Yu1, Wu Zhihai1, Shuqin Wang, Yunpeng Wang1, Xiaolei Ma1 
26 Jun 2017-Sensors
TL;DR: This paper proposes spatiotemporal recurrent convolutional networks (SRCNs) for traffic forecasting, which inherit the advantages of deep CNNs and LSTM neural networks.
Abstract: Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction.

385 citations


Journal ArticleDOI
20 Apr 2017-Sensors
TL;DR: A deep convolutional neural network for yield estimation, trained entirely on synthetic data for robotic agriculture, that counts fruits efficiently even when they are in shadow, occluded by foliage or branches, or partially overlapping.
Abstract: Recent years have witnessed significant advancement in computer vision research based on deep learning. Success of these tasks largely depends on the availability of a large amount of training samples. Labeling the training samples is an expensive process. In this paper, we present a deep convolutional neural network for yield estimation that is trained on simulated data. Knowing the exact number of fruits, flowers, and trees helps farmers to make better decisions on cultivation practices, plant disease prevention, and the size of harvest labor force. The current practice of yield estimation based on the manual counting of fruits or flowers by workers is a very time consuming and expensive process and it is not practical for big fields. Automatic yield estimation based on robotic agriculture provides a viable solution in this regard. Our network is trained entirely on synthetic data and tested on real data. To capture features on multiple scales, we used a modified version of the Inception-ResNet architecture. Our algorithm counts efficiently even if fruits are under shadow, occluded by foliage, branches, or if there is some degree of overlap amongst fruits. Experimental results show a 91% average test accuracy on real images and 93% on synthetic images.

Journal ArticleDOI
31 Oct 2017-Sensors
TL;DR: A comprehensive review on the state-of-the-art research and development in smart home based remote healthcare technologies is presented.
Abstract: Advancements in medical science and technology, medicine and public health coupled with increased consciousness about nutrition and environmental and personal hygiene have paved the way for the dramatic increase in life expectancy globally in the past several decades. However, increased life expectancy has given rise to an increasing aging population, thus jeopardizing the socio-economic structure of many countries in terms of costs associated with elderly healthcare and wellbeing. In order to cope with the growing need for elderly healthcare services, it is essential to develop affordable, unobtrusive and easy-to-use healthcare solutions. Smart homes, which incorporate environmental and wearable medical sensors, actuators, and modern communication and information technologies, can enable continuous and remote monitoring of elderly health and wellbeing at a low cost. Smart homes may allow the elderly to stay in their comfortable home environments instead of expensive and limited healthcare facilities. Healthcare personnel can also keep track of the overall health condition of the elderly in real-time and provide feedback and support from distant facilities. In this paper, we have presented a comprehensive review on the state-of-the-art research and development in smart home based remote healthcare technologies.

Journal ArticleDOI
16 Mar 2017-Sensors
TL;DR: The experimental results show that the proposed person recognition method using the information extracted from body images is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.
Abstract: The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera, including a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the extracted image features from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.
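The final matching step the abstract describes, comparing an input feature vector against enrolled samples by distance, can be sketched as below. The 4-D feature vectors, names, and acceptance threshold are invented for illustration; real CNN features are far higher-dimensional.

```python
import numpy as np

def match(probe, gallery, threshold=0.5):
    """Match a probe feature vector against enrolled features by Euclidean
    distance; accept the closest enrollee only if the distance is under
    threshold, otherwise reject as unknown."""
    names = list(gallery)
    dists = [np.linalg.norm(probe - gallery[n]) for n in names]
    best = int(np.argmin(dists))
    if dists[best] < threshold:
        return names[best], dists[best]
    return None, dists[best]

# Hypothetical enrolled features (one vector per person).
gallery = {"alice": np.array([0.90, 0.10, 0.20, 0.70]),
           "bob":   np.array([0.10, 0.80, 0.90, 0.20])}
who, dist = match(np.array([0.85, 0.15, 0.25, 0.65]), gallery)
```

In the paper's two-camera setup, the visible-light and thermal branches each produce such a feature vector, and their distances are combined before the decision.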

Journal ArticleDOI
21 Dec 2017-Sensors
TL;DR: A novel implantable wireless neural interface system for simultaneous neural signal recording and stimulation using a single cuff electrode is suggested, which successfully recorded and stimulated the tibial and peroneal nerves while communicating with the external device.
Abstract: Recently, implantable devices have become widely used in neural prostheses because they eliminate endemic drawbacks of conventional percutaneous neural interface systems. However, there are still several issues to be considered: low-efficiency wireless power transmission; wireless data communication over restricted operating distance with high power consumption; and limited functionality, working either as a neural signal recorder or as a stimulator. To overcome these issues, we suggest a novel implantable wireless neural interface system for simultaneous neural signal recording and stimulation using a single cuff electrode. By using widely available commercial off-the-shelf (COTS) components, an easily reconfigurable implantable wireless neural interface system was implemented into one compact module. The implantable device includes a wireless power consortium (WPC)-compliant power transmission circuit, a medical implant communication service (MICS)-band-based radio link and a cuff-electrode path controller for simultaneous neural signal recording and stimulation. During in vivo experiments with rabbit models, the implantable device successfully recorded and stimulated the tibial and peroneal nerves while communicating with the external device. The proposed system can be modified for various implantable medical devices, especially closed-loop implantable neural prostheses that require neural signal recording and stimulation at the same time.

Journal ArticleDOI
21 Sep 2017-Sensors
TL;DR: This review presents perspectives on the intrinsic properties of graphene and its surface engineering as they relate to transduction mechanisms in biosensing applications, and explains practical synthesis techniques along with prospective properties of graphene-based materials.
Abstract: The advantages conferred by the physical, optical and electrochemical properties of graphene-based nanomaterials have contributed to the current variety of ultrasensitive and selective biosensor devices. In this review, we present our perspectives on the intrinsic properties of graphene and its surface engineering concerned with the transduction mechanisms in biosensing applications. We explain practical synthesis techniques along with prospective properties of the graphene-based materials, which include pristine graphene and functionalized graphene (i.e., graphene oxide (GO), reduced graphene oxide (RGO) and graphene quantum dots (GQDs)). The biosensing mechanisms based on the utilization of charge interactions with biomolecules and/or nanoparticle interactions and sensing platforms are also discussed, along with the importance of surface functionalization in recent biosensors for biological and medical applications.

Journal ArticleDOI
06 Nov 2017-Sensors
TL;DR: Experimental results show that the proposed deep recurrent neural networks (DRNNs) used for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.
Abstract: Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.

Journal ArticleDOI
20 Jan 2017-Sensors
TL;DR: A dataset of falls and activities of daily living acquired with a self-developed device composed of two types of accelerometer and one gyroscope is presented; it validates findings of other authors and encourages developing new strategies with this new dataset as the benchmark.
Abstract: Research on fall and movement detection with wearable devices has witnessed promising growth. However, there are few publicly available datasets, all recorded with smartphones, which are insufficient for testing new proposals due to their absence of objective population, lack of performed activities, and limited information. Here, we present a dataset of falls and activities of daily living (ADLs) acquired with a self-developed device composed of two types of accelerometer and one gyroscope. It consists of 19 ADLs and 15 fall types performed by 23 young adults, 15 ADL types performed by 14 healthy and independent participants over 62 years old, and data from one participant of 60 years old that performed all ADLs and falls. These activities were selected based on a survey and a literature analysis. We test the dataset with widely used feature extraction and a simple-to-implement threshold-based classification, achieving up to 96% accuracy in fall detection. An individual activity analysis demonstrates that most errors coincide in a small number of activities, where new approaches could be focused. Finally, validation tests with elderly people significantly reduced the fall detection performance of the tested features. This validates findings of other authors and encourages developing new strategies with this new dataset as the benchmark.
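A minimal threshold-based classifier of the kind the dataset was tested with can be sketched as follows. The threshold value and the accelerometer traces are invented for illustration; the paper evaluates several features and thresholds, not this exact rule.

```python
import numpy as np

G = 9.81  # m/s^2

def detect_fall(acc_xyz, threshold=2.5 * G):
    """Flag a fall when the acceleration magnitude exceeds a fixed threshold,
    since impacts produce a short, large spike in total acceleration."""
    mags = np.linalg.norm(np.asarray(acc_xyz, float), axis=1)
    return bool(np.max(mags) > threshold)

# Hypothetical 3-axis samples (m/s^2): walking stays near 1 g,
# a fall shows a large impact spike.
walking = [[0.0, 0.0, 9.8], [1.0, 0.5, 10.2], [0.5, 0.2, 9.5]]
fall = [[0.0, 0.0, 9.8], [5.0, 3.0, 30.0], [0.1, 0.0, 2.0]]
```

The paper's finding that performance drops for elderly participants fits this picture: their falls and vigorous ADLs produce less separable magnitude profiles, so a single threshold misclassifies more often.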

Journal ArticleDOI
07 Oct 2017-Sensors
TL;DR: This paper is an overview of current gyroscopes and their roles based on their applications, and gyroscope technologies commercially available, such as Mechanical Gyroscope, silicon MEMS Gyroscopes, Ring Laser Gyroscope (RLGs) and Fiber-OpticGyroscopes (FOGs), are discussed.
Abstract: This paper is an overview of current gyroscopes and their roles based on their applications. The considered gyroscopes include mechanical gyroscopes and optical gyroscopes at macro- and micro-scale. Particularly, gyroscope technologies commercially available, such as Mechanical Gyroscopes, silicon MEMS Gyroscopes, Ring Laser Gyroscopes (RLGs) and Fiber-Optic Gyroscopes (FOGs), are discussed. The main features of these gyroscopes and their technologies are linked to their performance.

Journal ArticleDOI
29 Jan 2017-Sensors
TL;DR: This paper highlights the challenges and state-of-the-art methods of passive RFID antenna sensors and systems in terms of sensing and communication from a system point of view, and discusses future trends.
Abstract: In recent years, the antenna and sensor communities have witnessed a considerable integration of radio frequency identification (RFID) tag antennas and sensors because of the impetus provided by internet of things (IoT) and cyber-physical systems (CPS). Such sensors can find potential applications in structural health monitoring (SHM) because of their passive, wireless, simple, compact, and multimodal nature, particularly in large-scale infrastructure over its lifecycle. The big data from these ubiquitous sensors are expected to generate a big impact for intelligent monitoring. A remarkable number of scientific papers demonstrate the possibility that objects can be remotely tracked and intelligently monitored for their physical/chemical/mechanical properties and environment conditions. Most of the work focuses on antenna design, and significant information has been generated to demonstrate feasibilities. Further information is needed to gain deep understanding of the passive RFID antenna sensor systems in order to make them reliable and practical. Nevertheless, this information is scattered over much literature. This paper comprehensively summarizes and clearly highlights the challenges and state-of-the-art methods of passive RFID antenna sensors and systems in terms of sensing and communication from a system point of view. Future trends are also discussed. Future research and development directions in the UK are suggested as well.

Journal ArticleDOI
05 Apr 2017-Sensors
TL;DR: This paper is the most comprehensive and detailed review of wetland remote sensing and it will be a good reference for wetland researchers.
Abstract: Wetlands are some of the most important ecosystems on Earth. They play a key role in alleviating floods and filtering polluted water and also provide habitats for many plants and animals. Wetlands also interact with climate change. Over the past 50 years, wetlands have been polluted and declined dramatically as land cover has changed in some regions. Remote sensing has been the most useful tool to acquire spatial and temporal information about wetlands. In this paper, seven types of sensors were reviewed: aerial photos, coarse-resolution, medium-resolution, and high-resolution multispectral imagery, hyperspectral imagery, radar, and Light Detection and Ranging (LiDAR) data. This study also discusses the advantages of each sensor for wetland research. Wetland research themes reviewed in this paper include wetland classification, habitat or biodiversity, biomass estimation, plant leaf chemistry, water quality, mangrove forest, and sea level rise. This study also gives an overview of the methods used in wetland research such as supervised and unsupervised classification and decision tree and object-based classification. Finally, this paper provides some advice on future wetland remote sensing. To our knowledge, this paper is the most comprehensive and detailed review of wetland remote sensing and it will be a good reference for wetland researchers.

Journal ArticleDOI
21 Feb 2017-Sensors
TL;DR: An adaptive multi-sensor data fusion method based on deep convolutional neural networks (DCNN) for fault diagnosis that can learn features from raw data and optimize a combination of different fusion levels adaptively to satisfy the requirements of any fault diagnosis task.
Abstract: A fault diagnosis approach based on multi-sensor data fusion is a promising tool to deal with complicated damage detection problems of mechanical systems. Nevertheless, this approach suffers from two challenges, which are (1) the feature extraction from various types of sensory data and (2) the selection of a suitable fusion level. It is usually difficult to choose an optimal feature or fusion level for a specific fault diagnosis task, and extensive domain expertise and human labor are also highly required during these selections. To address these two challenges, we propose an adaptive multi-sensor data fusion method based on deep convolutional neural networks (DCNN) for fault diagnosis. The proposed method can learn features from raw data and optimize a combination of different fusion levels adaptively to satisfy the requirements of any fault diagnosis task. The proposed method is tested on a planetary gearbox test rig. Handcrafted features, manually selected fusion levels, single sensory data, and two traditional intelligent models, back-propagation neural networks (BPNN) and a support vector machine (SVM), are used as comparisons in the experiment. The results demonstrate that the proposed method is able to detect the conditions of the planetary gearbox effectively, with the best diagnosis accuracy among all comparative methods in the experiment.

Journal ArticleDOI
28 Jun 2017-Sensors
TL;DR: A literature review of sensors for the monitoring of benzene in ambient air and other volatile organic compounds considers commercially available sensors, including PID-based sensors, semiconductor (resistive gas sensors) and portable on-line measuring devices as for example sensor arrays.
Abstract: This article presents a literature review of sensors for the monitoring of benzene and other volatile organic compounds in ambient air. Combined with information provided by stakeholders, manufacturers, and the literature, the review considers commercially available sensors, including PID-based sensors, semiconductor (resistive) gas sensors, and portable on-line measuring devices such as sensor arrays. The bibliographic collection covers the following topics: sensor description; fields of application at fixed sites, indoors, and in ambient air monitoring; range of concentration levels and limit of detection in air; model descriptions of the phenomena involved in the sensor detection process; gaseous interference selectivity of sensors in complex VOC matrices; and validation data from lab experiments and field conditions.

Journal ArticleDOI
01 Jun 2017-Sensors
TL;DR: Five techniques for motion reconstruction were selected and compared to reconstruct a human arm motion and results show that all but one of the selected models perform similarly (about 35 mm average position estimation error).
Abstract: Motion tracking based on commercial inertial measurement units (IMUs) has been widely studied in recent years, as it is a cost-effective enabling technology for applications in which motion tracking based on optical technologies is unsuitable. This measurement method has a high impact on human performance assessment and human-robot interaction. IMU motion tracking systems are indeed self-contained and wearable, allowing for long-lasting tracking of the user's motion in situated environments. After a survey of IMU-based human tracking, five techniques for motion reconstruction were selected and compared in reconstructing a human arm motion. IMU-based estimation was compared against the Vicon marker-based motion tracking system, taken as the ground truth. Results show that all but one of the selected models perform similarly (about 35 mm average position estimation error).
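The ~35 mm figure quoted above is a mean position estimation error against the Vicon reference. A minimal sketch of that metric, assuming time-synchronized 3D trajectories in millimetres (the function name and toy arrays are illustrative, not from the paper):

```python
import numpy as np

def mean_position_error(estimate: np.ndarray, ground_truth: np.ndarray) -> float:
    """Mean Euclidean distance between an estimated trajectory and a
    reference trajectory, each of shape (n_samples, 3), in the same units."""
    return float(np.mean(np.linalg.norm(estimate - ground_truth, axis=1)))

# Toy example: a constant 30 mm offset along x yields a 30 mm mean error.
gt = np.zeros((100, 3))
est = gt + np.array([30.0, 0.0, 0.0])
print(mean_position_error(est, gt))  # 30.0
```

In practice the two trajectories must first be resampled to a common rate and aligned in time and frame before such a comparison is meaningful.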

Journal ArticleDOI
30 Dec 2017-Sensors
TL;DR: The technical aspects of oil spill remote sensing are examined and the practical uses and drawbacks of each technology are given with a focus on unfolding technology.
Abstract: The technical aspects of oil spill remote sensing are examined, and the practical uses and drawbacks of each technology are given, with a focus on emerging technology. The use of visible techniques is ubiquitous but limited to certain observational conditions and simple applications. Infrared cameras offer some potential as oil spill sensors but have several limitations. Both techniques, although limited in capability, are widely used because they are increasingly economical. The laser fluorosensor uniquely detects oil on substrates that include shoreline, water, soil, plants, ice, and snow. New commercial units have come out in the last few years. Radar detects calm areas on water and thus oil on water, because oil will reduce capillary waves on a water surface given moderate winds. Radar provides a unique option for wide-area surveillance, by day or night and in rainy or cloudy weather. Satellite-carried radars, with their frequent overpasses and high spatial resolution, make these day-night and all-weather sensors essential for delineating large spills and for monitoring ship and platform oil discharges. Most strategic oil spill mapping is now being carried out using radar. Slick thickness measurements have been sought for many years; the operative technique at this time is the passive microwave sensor. New techniques for calibration and verification have made these instruments more reliable.

Journal ArticleDOI
07 Jul 2017-Sensors
TL;DR: A new setup is introduced that enables directly estimating the absolute positioning accuracy for dynamic experiments contrary to state-of-the art works that rely on inter-marker distances.
Abstract: Motion capture setups are used in numerous fields. Studies based on motion capture data can be found in biomechanical, sport, or animal science. Clinical science studies include gait analysis as well as balance, posture, and motor control. Robotic applications encompass object tracking, and everyday applications include entertainment and augmented reality. Still, few studies investigate the positioning performance of motion capture setups. In this paper, we study the positioning performance of one of the main players in marker-based optoelectronic motion capture: the Vicon system. Our protocol includes evaluations of static and dynamic performance. Mean error as well as positioning variability are studied with calibrated ground truth setups that are not based on other motion capture modalities. We introduce a new setup that enables directly estimating the absolute positioning accuracy for dynamic experiments, contrary to state-of-the-art works that rely on inter-marker distances. The system performs well in static experiments, with a mean absolute error of 0.15 mm and a variability lower than 0.025 mm. Our dynamic experiments were carried out at speeds found in real applications. Our work suggests that the system error is less than 2 mm. We also found that marker size and Vicon sampling rate must be carefully chosen with respect to the speeds encountered in the application in order to reach optimal positioning performance, which can reach 0.3 mm in our dynamic study.

Journal ArticleDOI
10 Feb 2017-Sensors
TL;DR: This article reviews state-of-the-art wearable technologies that can be used for elderly care and discusses a series of considerations and future trends with regard to the construction of “smart clothing” system.
Abstract: Rapid growth of the aged population has caused an immense increase in the demand for healthcare services. Generally, the elderly are more prone to health problems compared to other age groups. With effective monitoring and alarm systems, the adverse effects of unpredictable events such as sudden illnesses, falls, and so on can be ameliorated to some extent. Recently, advances in wearable and sensor technologies have improved the prospects of these service systems for assisting elderly people. In this article, we review state-of-the-art wearable technologies that can be used for elderly care. These technologies are categorized into three types: indoor positioning, activity recognition, and real-time vital sign monitoring. Positioning is the process of accurate localization and is particularly important for elderly people so that they can be found in a timely manner. Activity recognition not only helps ensure that sudden events (e.g., falls) will raise alarms but also functions as a feasible way to guide people's activities so that they avoid dangerous behaviors. Since most elderly people suffer from age-related problems, some vital signs that can be monitored comfortably and continuously via existing techniques are also summarized. Finally, we discuss a series of considerations and future trends with regard to the construction of "smart clothing" systems.

Journal ArticleDOI
23 May 2017-Sensors
TL;DR: This study investigates the scalability in terms of the number of end devices per gateway of single-gateway LoRaWAN deployments, and determines the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node.
Abstract: LoRa is a long-range, low-power, low-bit-rate, single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The number of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability of single-gateway LoRaWAN deployments in terms of the number of end devices per gateway. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single-gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, the losses will be up to 32%. In such a case, pure Aloha would incur around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data.
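The ~90% pure-Aloha loss figure quoted above follows from the classical collision model, in which a packet survives only if no other packet starts within a vulnerability window of two packet durations. A minimal sketch of that relation (assuming unslotted Aloha with Poisson-distributed traffic; `offered_load` is the normalized channel load G, an assumed parameter name):

```python
import math

def pure_aloha_loss(offered_load: float) -> float:
    """Fraction of packets lost under unslotted (pure) Aloha.

    With Poisson arrivals at normalized load G, a packet is received
    collision-free with probability exp(-2G), so the loss fraction
    is 1 - exp(-2G).
    """
    return 1.0 - math.exp(-2.0 * offered_load)

# At full offered load (G = 1), about 86% of packets collide,
# in line with the roughly 90% losses reported for dense networks.
print(f"{pure_aloha_loss(1.0):.2%}")
```

This simple model ignores LoRa's capture effect, under which one of two overlapping packets can still be decoded under certain conditions, which is precisely why the measured LoRaWAN losses stay well below the Aloha bound.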

Journal ArticleDOI
Yuanyuan Xu1, Xiao-yue Wu1, Guo Xiao1, Bin Kong1, Min Zhang1, Xiang Qian1, Shengli Mi1, Wei Sun 
19 May 2017-Sensors
TL;DR: These achievements demonstrate the successful application of 3D-printing technology in sensor fabrication, and the selected studies deeply explore the potential for creating sensors with higher performance.
Abstract: Future sensing applications will include high-performance features, such as toxin detection, real-time monitoring of physiological events, advanced diagnostics, and connected feedback. However, such multi-functional sensors require advancements in sensitivity, specificity, and throughput, with the simultaneous delivery of multiple detections in a short time. Recent advances in 3D printing and electronics have brought us closer to sensors with multiplex advantages, and additive manufacturing approaches offer a new scope for sensor fabrication. To this end, we review the recent advances in 3D-printed cutting-edge sensors. These achievements demonstrate the successful application of 3D-printing technology in sensor fabrication, and the selected studies deeply explore the potential for creating sensors with higher performance. Further development of multi-process 3D printing is expected to expand future sensor utility and availability.

Journal ArticleDOI
05 Jul 2017-Sensors
TL;DR: An overview on recent important achievements in breast screening methods and breast biomarkers along with biosensors for rapidly diagnosing breast cancer along with microwave imaging techniques is provided.
Abstract: Early-stage cancer detection could significantly reduce breast cancer death rates in the long term. The most critical point for the best prognosis is to identify early-stage cancer cells. Investigators have studied many breast diagnostic approaches, including mammography, magnetic resonance imaging, ultrasound, computerized tomography, positron emission tomography, and biopsy. However, these techniques have limitations: they can be expensive, time-consuming, and not suitable for young women. Developing a highly sensitive and rapid diagnostic method for early-stage breast cancer is therefore urgent. In recent years, investigators have turned their attention to the development of biosensors that detect breast cancer using different biomarkers. Apart from biosensors and biomarkers, microwave imaging techniques have also been intensely studied as a promising diagnostic tool for rapid and cost-effective early-stage breast cancer detection. This paper aims to provide an overview of recent important achievements in breast screening methods (particularly microwave imaging) and breast biomarkers, along with biosensors for rapidly diagnosing breast cancer.

Journal ArticleDOI
Yu Du1, Wenguang Jin1, Wentao Wei1, Yu Hu1, Weidong Geng1 
24 Feb 2017-Sensors
TL;DR: A benchmark database of HD-sEMG recordings of hand gestures performed by 23 participants is presented, and a deep-learning-based domain adaptation framework is proposed to enhance sEMG-based inter-session gesture recognition.
Abstract: High-density surface electromyography (HD-sEMG) records the electrical activity of muscles from a restricted area of the skin using two-dimensional arrays of closely spaced electrodes. This technique allows the analysis and modelling of sEMG signals in both the temporal and spatial domains, leading to new possibilities for studying next-generation muscle-computer interfaces (MCIs). sEMG-based gesture recognition has usually been investigated in an intra-session scenario, and the absence of a standard benchmark database limits the use of HD-sEMG in real-world MCIs. To address these problems, we present a benchmark database of HD-sEMG recordings of hand gestures performed by 23 participants, based on an 8 × 16 electrode array, and propose a deep-learning-based domain adaptation framework to enhance sEMG-based inter-session gesture recognition. Experiments on NinaPro, CSL-HDEMG, and our CapgMyo dataset validate that our approach outperforms state-of-the-art methods on intra-session recognition and effectively improves inter-session gesture recognition.
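The 8 × 16 electrode array described above is what lets each HD-sEMG time step be treated as a small single-channel image for a CNN. A minimal sketch of that framing (the function name and the assumed row-major channel ordering are illustrative, not the actual CapgMyo layout):

```python
import numpy as np

def frames_to_images(emg: np.ndarray, rows: int = 8, cols: int = 16) -> np.ndarray:
    """Reshape a (n_frames, rows * cols) HD-sEMG recording into
    (n_frames, rows, cols) images, one per time step, assuming the
    channels are ordered row by row across the electrode grid."""
    n_frames, n_channels = emg.shape
    assert n_channels == rows * cols, "channel count must match the grid"
    return emg.reshape(n_frames, rows, cols)

signal = np.random.randn(1000, 128)   # 1000 frames from a 128-electrode grid
images = frames_to_images(signal)
print(images.shape)  # (1000, 8, 16)
```

The resulting image sequence preserves the spatial neighborhood of the electrodes, which is what makes both the spatial-domain analysis and convolutional feature learning mentioned in the abstract possible.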