Multidisciplinary Digital Publishing Institute
About: Sensors is an academic journal published by the Multidisciplinary Digital Publishing Institute (MDPI). The journal publishes mainly in the areas of computer science and medicine, has the ISSN identifier 1424-8220, and is open access. Over its lifetime it has published 54,785 papers, which have received 954,999 citations.
Topics: Computer science, Medicine, Artificial intelligence, Wireless sensor network, Convolutional neural network
TL;DR: A brief review of how the sensitivity of conductometric semiconducting metal oxide gas sensors changes with five factors: the chemical composition, surface modification, and microstructure of the sensing layer, and the operating temperature and humidity.
Abstract: Conductometric semiconducting metal oxide gas sensors have been widely used and investigated for the detection of gases. Investigations have shown that the gas sensing process is strongly related to surface reactions, so the sensitivity of metal-oxide-based materials, one of the most important parameters of a gas sensor, changes with the factors that influence those surface reactions: the chemical composition, surface modification, and microstructure of the sensing layer, and the operating temperature and humidity. This brief review focuses on how the sensitivity of conductometric semiconducting metal oxide gas sensors changes with these five factors.
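Sensitivity in this family of sensors is usually reported as a resistance ratio between clean air and the target gas. The review itself does not fix a formula, so the convention sketched below (S = R_air / R_gas, common for n-type oxides exposed to reducing gases) is an illustrative assumption; other papers use R_gas / R_air or a relative resistance change.

```python
def sensor_response(r_air_ohm, r_gas_ohm):
    """Response of an n-type metal oxide sensor to a reducing gas under
    the common convention S = R_air / R_gas: surface reactions consume
    adsorbed oxygen, the layer's resistance drops, and S rises.
    This convention is an assumption; other ratios are also in use."""
    return r_air_ohm / r_gas_ohm

# A sensing layer at 100 kOhm in air that drops to 20 kOhm in the
# target gas gives a response of 5.
print(sensor_response(100e3, 20e3))  # -> 5.0
```

The five factors discussed in the review all act by shifting R_air, R_gas, or both, which is why they show up directly in the measured sensitivity.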
TL;DR: A generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which is suitable for multimodal wearable sensors, does not require expert knowledge in designing features, and explicitly models the temporal dynamics of feature activations is proposed.
Abstract: Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, and outperforms some previously reported results by up to 9%. The framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights into their optimisation.
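The convolutional-plus-recurrent pipeline the abstract describes can be sketched in miniature: a 1-D convolution extracts local features from a sliding window of one sensor channel, and a single toy recurrent unit (standing in for the LSTM layers) integrates them over time. Every concrete value here, including the filter weights, the window, and the one-unit RNN, is a hypothetical toy, not the paper's architecture.

```python
import math

def conv1d(signal, kernel):
    """Valid 1-D cross-correlation with a single filter."""
    k = len(kernel)
    return [sum(signal[t + i] * kernel[i] for i in range(k))
            for t in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def rnn_last_state(xs, w_in=0.5, w_rec=0.5):
    """Single tanh recurrent unit; the final hidden state summarises
    the whole window, modelling the temporal dynamics of the features."""
    h = 0.0
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h)
    return h

# One sliding window of a single accelerometer channel (toy data).
window = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
features = relu(conv1d(window, [0.25, 0.5, 0.25]))  # local feature extraction
score = rnn_last_state(features)                    # temporal modelling
```

In a full model, stacks of learned convolutional filters run over every sensor channel before multi-unit LSTM layers and a classifier over activity classes, which is what lets the same architecture fuse multimodal sensors naturally.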
TL;DR: The calibration of the Kinect sensor is discussed, and an analysis of the accuracy and resolution of its depth data is provided, based on a mathematical model of depth measurement from disparity.
Abstract: Consumer-grade range cameras such as the Kinect sensor have the potential to be used in mapping applications where accuracy requirements are less strict. To realize this potential, insight into the geometric quality of the data acquired by the sensor is essential. In this paper we discuss the calibration of the Kinect sensor, and provide an analysis of the accuracy and resolution of its depth data. Based on a mathematical model of depth measurement from disparity, a theoretical error analysis is presented, which provides insight into the factors influencing the accuracy of the data. Experimental results show that the random error of depth measurement increases with increasing distance to the sensor, and ranges from a few millimeters up to about 4 cm at the maximum range of the sensor. The quality of the data is also found to be influenced by the low resolution of the depth measurements.
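The growth of the random depth error with distance follows directly from the disparity model: if Z = f·b/d, then propagating a constant disparity noise σ_d gives σ_Z = (Z²/(f·b))·σ_d, i.e. quadratic growth in depth. The focal length, baseline, and noise figures below are illustrative assumptions, not calibration results from the paper.

```python
def depth_std(z_m, f_px=580.0, baseline_m=0.075, sigma_d_px=0.07):
    """Random depth error implied by constant disparity noise under the
    depth-from-disparity model Z = f*b/d:
        sigma_Z = Z**2 / (f * b) * sigma_d   (quadratic in depth)
    f, b, and sigma_d here are assumed illustrative values."""
    return z_m ** 2 / (f_px * baseline_m) * sigma_d_px

# Quadrupling the distance multiplies the random error by sixteen.
ratio = depth_std(4.0) / depth_std(1.0)
```

With these assumed parameters the model spans millimetres at close range to centimetres near the sensor's maximum range, consistent in order of magnitude with the reported few-millimetre to ~4 cm error band.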
TL;DR: An improved sparse convolution method for voxel-based 3D convolutional networks is investigated, which significantly increases the speed of both training and inference, and a new form of angle loss regression is introduced to improve orientation estimation performance.
Abstract: LiDAR-based or RGB-D-based object detection is used in numerous applications, ranging from autonomous driving to robot vision. Voxel-based 3D convolutional networks have been used for some time to enhance the retention of information when processing point cloud LiDAR data. However, problems remain, including a slow inference speed and low orientation estimation performance. We therefore investigate an improved sparse convolution method for such networks, which significantly increases the speed of both training and inference. We also introduce a new form of angle loss regression to improve the orientation estimation performance and a new data augmentation approach that can enhance the convergence speed and performance. The proposed network produces state-of-the-art results on the KITTI 3D object detection benchmarks while maintaining a fast inference speed.
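A sine-based angle regression residual of the kind the abstract refers to can be sketched as follows. The full loss there wraps this residual in a smooth-L1 term, so this bare version is a simplified assumption for illustration.

```python
import math

def angle_residual(theta_pred, theta_gt):
    """Sine of the angle difference as the orientation regression residual.
    Since sin(x + pi) = -sin(x), a box rotated by pi (an otherwise
    identical box pointing the opposite way) yields the same magnitude
    of residual, so the flip ambiguity no longer produces a huge loss."""
    return math.sin(theta_pred - theta_gt)

# Identical orientation: zero residual.  Opposite heading: also ~zero.
same = angle_residual(0.3, 0.3)
flipped = angle_residual(0.3 + math.pi, 0.3)
```

Because the heading flip collapses to zero loss, this kind of residual is typically paired with a separate direction classifier that recovers the sign of the heading.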
TL;DR: This review focuses on recent advances in the field of smart textiles and pays particular attention to the materials and their manufacturing processes, highlighting a possible trade-off between flexibility, ergonomics, low power consumption, integration and, ultimately, autonomy.
Abstract: Electronic textiles (e-textiles) are fabrics with electronics and interconnections woven into them, offering a physical flexibility and form factor that cannot be achieved with other existing electronic manufacturing techniques. Because components and interconnections are intrinsic to the fabric, they are less visible and less likely to become tangled or snagged on surrounding objects. E-textiles can also adapt more easily to fast changes in the computational and sensing requirements of a specific application, a useful feature for power management and context awareness. The vision behind wearable computing foresees future electronic systems as an integral part of our everyday outfits. Such electronic devices have to meet special requirements concerning wearability: wearable systems will be characterized by their ability to automatically recognize the activity and behavioral status of their user, as well as the situation around her/him, and to use this information to adjust the system's configuration and functionality. This review focuses on recent advances in the field of smart textiles and pays particular attention to the materials and their manufacturing processes. Each technique has advantages and disadvantages, and our aim is to highlight a possible trade-off between flexibility, ergonomics, low power consumption, integration and, ultimately, autonomy.