
Showing papers on "Data acquisition published in 2021"


Journal ArticleDOI
TL;DR: In this paper, the authors developed a data importance-aware scheduling algorithm for data acquisition in edge learning, which takes into account the informativeness of data samples, besides communication reliability.
Abstract: With the prevalence of intelligent mobile applications, edge learning is emerging as a promising technology for powering fast intelligence acquisition for edge devices from distributed data generated at the network edge. One critical task of edge learning is to efficiently utilize the limited radio resource to acquire data samples for model training at an edge server. In this paper, we develop a novel user scheduling algorithm for data acquisition in edge learning, called (data) importance-aware scheduling. A key feature of this scheduling algorithm is that it takes into account the informativeness of data samples, besides communication reliability. Specifically, the scheduling decision is based on a data importance indicator (DII), elegantly incorporating two "important" metrics from communication and learning perspectives, i.e., the signal-to-noise ratio (SNR) and data uncertainty. We first derive an explicit expression for this indicator targeting the classic classifier of support vector machine (SVM), where the uncertainty of a data sample is measured by its distance to the decision boundary. Then, the result is extended to convolutional neural networks (CNN) by replacing the distance-based uncertainty measure with the entropy. As demonstrated via experiments using real datasets, the proposed importance-aware scheduling can exploit the two-fold multi-user diversity, namely the diversity in both the multi-user channels and the distributed data samples. This leads to faster model convergence than conventional scheduling schemes that exploit only a single type of diversity.
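The scheduling rule described above can be sketched in a few lines. The multiplicative combination of SNR and entropy and the weighting exponent `alpha` below are illustrative assumptions, not the paper's derived DII expression:

```python
import numpy as np

def entropy_uncertainty(probs):
    """Uncertainty of a sample as the entropy of the classifier's
    softmax output (the CNN case described in the abstract)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def importance_aware_schedule(snrs, softmax_outputs, alpha=1.0):
    """Pick the user whose sample maximises a data importance indicator
    combining channel SNR and data uncertainty. The product form and
    the exponent `alpha` are illustrative, not the paper's formula."""
    dii = [snr * entropy_uncertainty(p) ** alpha
           for snr, p in zip(snrs, softmax_outputs)]
    return int(np.argmax(dii))
```

A user with a good channel but a confidently-classified (low-entropy) sample can thus lose to a user with a weaker channel but a highly informative sample.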

59 citations


Journal ArticleDOI
Keke Huang1, Shujie Wu1, Fanbiao Li1, Chunhua Yang1, Weihua Gui1 
TL;DR: In this paper, the authors proposed a deep learning model with multirate data samples, which can extract features from the multirate sampling data automatically without expertise, making it more suitable for industrial situations.
Abstract: Hydraulic systems are a class of typical complex nonlinear systems, which have been widely used in manufacturing, metallurgy, energy, and other industries. Nowadays, the intelligent fault diagnosis problem of hydraulic systems has received increasing attention because it can increase operational safety and reliability, reduce maintenance costs, and improve productivity. However, because of the high nonlinearity and strong fault concealment, the fault diagnosis of hydraulic systems is still a challenging task. Besides, the data samples collected from the hydraulic system are often at different sampling rates, and the coupling relationship between the components brings difficulties to accurate data acquisition. To solve the above issues, a deep learning model with multirate data samples is proposed in this article, which can extract features from the multirate sampling data automatically without expertise, making it more suitable for industrial situations. Experiment results demonstrate that the proposed method achieves high diagnostic and fault pattern recognition accuracy even when the imbalance degree of sample data is as large as 1:100. Moreover, the proposed method improves diagnosis accuracy by about 10% compared with some state-of-the-art methods.
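The paper's network ingests multirate data directly; as a simpler preprocessing view of the same problem, channels sampled at different rates can be aligned onto a common time grid by interpolation. This sketch is an illustrative assumption, not the paper's method:

```python
import numpy as np

def align_multirate(channels, target_rate, duration):
    """Resample sensor channels recorded at different rates onto one
    time grid by linear interpolation, so a single model can consume
    them together. `channels` is a list of (rate_hz, samples) pairs."""
    t_out = np.arange(0.0, duration, 1.0 / target_rate)
    rows = []
    for rate, values in channels:
        t_in = np.arange(len(values)) / rate  # timestamps of this channel
        rows.append(np.interp(t_out, t_in, values))
    return np.stack(rows)  # shape: (n_channels, len(t_out))
```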

52 citations


Journal ArticleDOI
TL;DR: A chaotic compressive sensing (CS) scheme for securely processing industrial big image data in the fog computing paradigm, called privacy-assured FogCS, using the sine logistic modulation map for secure image data collection in the sensor nodes.
Abstract: In the age of industrial big data, there are several significant problems, such as high-overhead data acquisition, data privacy leakage, and data tampering. Fog computing capability is rapidly expanding to address not only network congestion issues but also data security issues. This article presents a chaotic compressive sensing (CS) scheme for securely processing industrial big image data in the fog computing paradigm, called privacy-assured FogCS. Specifically, the sine logistic modulation map is used to drive the privacy-assured, authenticated, and block CS for secure image data collection in the sensor nodes. After sampling, the measurements are normalized in the fog nodes. The normalized measurements can achieve perfect secrecy and their energy values are further masked through the proposed permutation-diffusion architecture. Finally, these relevant data are transmitted to the clouds (data centers) for storage, reconstruction, and authentication if required. In addition, a hardware implementation reference on a field programmable gate array is designed. Simulation analyses show the feasibility and efficiency of the privacy-assured FogCS scheme.
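A core building block in chaos-based schemes like this is deriving a key-dependent permutation from a chaotic orbit. The sketch below uses a plain logistic map as a stand-in for the paper's sine logistic modulation map (whose exact form is not given in the abstract); the seed `x0` plays the role of a secret key:

```python
import numpy as np

def chaotic_permutation(n, x0=0.37, r=3.99):
    """Derive a permutation of n indices from a chaotic orbit.
    A logistic map is used here as an illustrative stand-in for the
    sine logistic modulation map; x0 acts as the secret key."""
    x = x0
    orbit = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)  # logistic map iteration
        orbit[i] = x
    return np.argsort(orbit)  # sort order of the orbit = permutation
```

Applying the permutation scrambles measurement positions; only a party knowing `x0` can regenerate the same orbit and invert it.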

44 citations


Journal ArticleDOI
01 Oct 2021
TL;DR: Gaussian process regression (GPR) is a powerful, non-parametric and robust technique for uncertainty quantification and function approximation that can be applied to optimal and autonomous data acquisition and several use cases from different fields are discussed.
Abstract: The execution and analysis of complex experiments are challenged by the vast dimensionality of the underlying parameter spaces. Although an increase in data-acquisition rates should allow broader querying of the parameter space, the complexity of experiments and the subtle dependence of the model function on input parameters remain daunting owing to the sheer number of variables. New strategies for autonomous data acquisition are being developed, with one promising direction being the use of Gaussian process regression (GPR). GPR is a quick, non-parametric and robust approximation and uncertainty quantification method that can be applied directly to autonomous data acquisition. We review GPR-driven autonomous experimentation and illustrate its functionality using real-world examples from large experimental facilities in the USA and France. We introduce the basics of a GPR-driven autonomous loop with a focus on Gaussian processes, and then shift the focus to the infrastructure that needs to be built around GPR to create a closed loop. Finally, the case studies we discuss show that Gaussian-process-based autonomous data acquisition is a widely applicable method that can facilitate the optimal use of instruments and facilities by enabling the efficient acquisition of high-value datasets.
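One acquisition step of such a loop can be sketched with a hand-rolled RBF-kernel GP posterior: fit to the points measured so far and propose the candidate with the largest predictive variance. The kernel, length scale, and max-variance acquisition rule are illustrative choices (note the GP posterior variance depends only on where data were taken, not on the measured values):

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def next_measurement(x_seen, candidates, noise=1e-6):
    """Propose the candidate point with the largest GP posterior
    variance given the locations already measured (1-D sketch)."""
    K = rbf(x_seen, x_seen) + noise * np.eye(len(x_seen))
    Ks = rbf(x_seen, candidates)
    Kinv = np.linalg.inv(K)
    # posterior variance: k(x*,x*) - k*^T K^-1 k*, with k(x*,x*) = 1
    var = 1.0 - np.sum(Ks * (Kinv @ Ks), axis=0)
    return float(candidates[int(np.argmax(var))])
```

In a closed loop, the instrument then measures at the proposed point, the result is appended to `x_seen`, and the step repeats.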

44 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed an augmented multidimensional convolutional neural network (CNN) for industrial soft sensing, which is applied to the f-CaO prediction of the cement clinker production process.
Abstract: In the era of industrial big data, data-driven soft-sensor models have become an important method to guide production and optimize control. However, due to the limitation of data acquisition methods, the industrial data obtained in practice have unavoidable defects that affect the performance of soft sensors. Aiming at the unbalanced sampling, inaccurate matching, and partial missing of industrial process data, this article proposes an augmented multidimensional convolutional neural network (CNN) for industrial soft sensing. For complete process data information, we stitch the fine-grained data to obtain coarse-grained data and then use a CNN to extract deep features. On the basis of analyzing the physical meaning of data, the structure of multidimensional convolution is designed to focus on different details of process information. In this framework, the problem of partial missing data is emphasized. Then, mix-grained data augmentation strategies are invented to solve this problem and improve the performance of multidimensional CNN (MDCNN) soft sensors. The proposed augmented MDCNN is applied to the f-CaO prediction of the cement clinker production process, results of which show superiority compared to existing methods.

37 citations


Journal ArticleDOI
Jiguo Liu, Jian Huang, Rui Sun, Haitao Yu, Randong Xiao
TL;DR: A multi-source data fusion model based on a GA-PSO-BP neural network, combining information from floating vehicles and microwave sensors, is proposed; by combining GA and PSO, it overcomes the estimation inaccuracy of traditional fusion models.
Abstract: The development of real-time road condition systems will better monitor road network operation status. However, the weak point of all these systems is their need for comprehensive and reliable data. For traffic data acquisition, two sources are currently available: 1) floating vehicles and 2) remote traffic microwave sensors (RTMS). The former consists of the use of mobile probe vehicles as mobile sensors, and the latter consists of a set of fixed point detectors installed in the roads. First, the structure of a three-layer BP neural network is designed to achieve the fusion of the floating car data (FCD) and the fixed detector data (FDD) efficiently. Second, in order to improve the accuracy of traffic speed estimation, a multi-source data fusion model based on a GA-PSO-BP neural network, combining information from floating vehicles and microwave sensors, is proposed. The proposed model combines GA and PSO. The hybrid model can not only overcome the estimation inaccuracy of the traditional fusion model, but also compensate for the shortcomings of the traditional BP algorithm. Finally, this system has been tested and implemented on actual roads, and the simulation results show that the data accuracy has reached 98%.

37 citations


Journal ArticleDOI
TL;DR: A thorough review of the state-of-the-art acquisition and processing techniques for building reconstruction using point clouds with particular focus on data acquisition and on the strengths and weaknesses of key processing techniques is provided.
Abstract: Nowadays, point clouds acquired through laser scanning and stereo matching have been deemed one of the best sources for mapping urban scenes. Spatial coordinates of 3-D points directly reflect the geometry of object surfaces, which significantly streamlines the 3-D reconstruction and modeling of objects. The construction industry has utilized point clouds in various tasks, including but not limited to, building reconstruction, field inspection, and construction progress tracking. However, it is mandatory to generate a high-level (i.e., geometrically accurate, semantically rich, and simply described) representation of 3-D objects from those 3-D measurements (i.e., points), so that the acquired information can be fully utilized. The reconstruction of 3-D objects in a scene of man-made infrastructure and buildings is one of the core tasks using point clouds, which involves both the 3-D data acquisition and processing. There are few systematic reviews summarizing the ways of acquiring 3-D points and the techniques for reconstructing 3-D objects from point clouds for application scenarios in a built environment or construction site. This article therefore intends to provide a thorough review of the state-of-the-art acquisition and processing techniques for building reconstruction using point clouds. It places particular focus on data acquisition and on the strengths and weaknesses of key processing techniques. This review work will discuss the limitations of current data acquisition and processing techniques, as well as the current research gap, ultimately providing recommendations on future research directions in order to fulfill the pressing needs of the intended construction applications in the foreseeable future.

34 citations


Proceedings ArticleDOI
20 Jan 2021
TL;DR: In this article, the authors present the integration of a control system with a data acquisition process, in which current weather data are derived from online providers to produce the displayed information; the second part covers hardware control for a smart display board.
Abstract: This article primarily presents the integration of a control system with a data acquisition process, in which current weather data are derived from online sources to produce the displayed information; the second part covers hardware control. All interactions are integrations that create a smart digital display board. It has powerful content integration, multiple-screen support, and easy access. These unique features make it useful for developing smart digital boards. Instead of using sensors to collect weather data, the project receives data from various weather stations available worldwide through international weather data providers such as OpenWeatherMap, the Weather Group API, Foreca, Dark Sky, and World Weather Online. The project is designed to build an efficient system to display the most accurate real-time weather data, whose chief manager can independently advertise information alongside the weather data. The Application Programming Interface (API) is used to retrieve weather data, including temperature, humidity, wind speed, and air pollution data.
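Pulling display fields out of a provider's current-weather response is the core of such a system. The nesting below matches OpenWeatherMap's documented current-weather JSON; the other providers named above use different schemas, and the units of `temp` depend on the request parameters:

```python
import json

def extract_display_fields(payload: str):
    """Pull the fields a display board needs from an
    OpenWeatherMap-style current-weather JSON response."""
    data = json.loads(payload)
    return {
        "temperature": data["main"]["temp"],
        "humidity": data["main"]["humidity"],
        "wind_speed": data["wind"]["speed"],
    }
```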

30 citations


Journal ArticleDOI
TL;DR: This work presents an architecture to acquire data for an Additive Manufacturing (3D printer) process, using a set of consolidated Internet of Things (IoT) technologies to collect, verify and store these data in a trustful and secure way.
Abstract: Online process control is a crucial task in modern production systems that use digital twin technology. The data acquisition from machines must provide reliable and on-the-fly data, reflecting the ...

28 citations


Journal ArticleDOI
TL;DR: This paper proposes the conceptualization, design, and initial development of a platform for utilizing data derived from industrial environments to optimize equipment design; the main aspects of the proposed framework are data acquisition, data processing, and simulation.

27 citations


Journal ArticleDOI
TL;DR: Three core technologies to realize a smart-factory platform for die-casting industry are developed: a novel cost-effective product-tracking technology to obtain high-quality process data providing individual product information, an advanced process data acquisition system that considers process failure, and a fault detection module based on an artificial neural network.

Journal ArticleDOI
TL;DR: The baseline design of the DAQ and HLT systems for the Phase-2 of CMS is described: a synchronous hardware-based Level-1 Trigger (L1), consisting of custom electronic boards and operating on dedicated data streams, and a second level, the High Level Trigger (HLT), using software algorithms running asynchronously on standard processors and making use of the full detector data to select events for offline storage and analysis.
Abstract: The High Luminosity LHC (HL-LHC) will start operating in 2027 after the third Long Shutdown (LS3), and is designed to provide an ultimate instantaneous luminosity of 7.5 × 10³⁴ cm⁻² s⁻¹, at the price of extreme pileup of up to 200 interactions per crossing. The number of overlapping interactions in HL-LHC collisions, their density, and the resulting intense radiation environment, warrant an almost complete upgrade of the CMS detector. The upgraded CMS detector will be read out by approximately fifty thousand high-speed front-end optical links at an unprecedented data rate of up to 80 Tb/s, for an average expected total event size of approximately 8–10 MB. Following the present established design, the CMS trigger and data acquisition system will continue to feature two trigger levels, with only one synchronous hardware-based Level-1 Trigger (L1), consisting of custom electronic boards and operating on dedicated data streams, and a second level, the High Level Trigger (HLT), using software algorithms running asynchronously on standard processors and making use of the full detector data to select events for offline storage and analysis. The upgraded CMS data acquisition system will collect data fragments for Level-1 accepted events from the detector back-end modules at a rate up to 750 kHz, aggregate fragments corresponding to individual Level-1 accepts into events, and distribute them to the HLT processors where they will be filtered further. Events accepted by the HLT will be stored permanently at a rate of up to 7.5 kHz. This paper describes the baseline design of the DAQ and HLT systems for the Phase-2 of CMS.
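The quoted figures pin down the event-building bandwidth as a back-of-envelope calculation (using the lower end of the 8–10 MB event-size range):

```python
# Throughput implied by the quoted Phase-2 figures
l1_accept_rate_hz = 750e3      # Level-1 accept rate
event_size_bytes = 8e6         # lower end of the 8-10 MB range
hlt_accept_rate_hz = 7.5e3     # rate of events stored permanently

# Bandwidth into the HLT farm, in terabits per second
hlt_input_tbps = l1_accept_rate_hz * event_size_bytes * 8 / 1e12

# Bandwidth to permanent storage after HLT filtering, in gigabits per second
storage_rate_gbps = hlt_accept_rate_hz * event_size_bytes * 8 / 1e9
```

So the event builder must sustain roughly 48 Tb/s into the HLT, while only about 480 Gb/s reaches storage after the 100-fold HLT reduction.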

Journal ArticleDOI
TL;DR: A monitoring system based on open-source hardware and software for tracking the temperature of the photovoltaic generator in such an SMG, which consists of a network of digital temperature sensors connected to an Arduino microcontroller, which feeds the acquired data to a Raspberry Pi microcomputer.
Abstract: Smart grids and smart microgrids (SMGs) require proper monitoring for their operation. To this end, measuring, data acquisition, and storage, as well as remote online visualization of real-time information, must be performed using suitable equipment. An experimental SMG is being deployed that combines photovoltaics and the energy carrier hydrogen through the interconnection of photovoltaic panels, electrolyser, fuel cell, and load around a voltage bus powered by a lithium battery. This paper presents a monitoring system based on open-source hardware and software for tracking the temperature of the photovoltaic generator in such an SMG. In fact, the increases in temperature in PV modules lead to a decrease in their efficiency, so this parameter needs to be measured in order to monitor and evaluate the operation. Specifically, the developed monitoring system consists of a network of digital temperature sensors connected to an Arduino microcontroller, which feeds the acquired data to a Raspberry Pi microcomputer. The latter is accessed by a cloud-enabled user/operator interface implemented in Grafana. The monitoring system is expounded and experimental results are reported to validate the proposal.
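In an Arduino-to-Raspberry-Pi pipeline like the one described, the microcontroller typically forwards readings as simple text lines over serial. The `sensor_id,temperature` CSV wire format below is an assumption for illustration; the abstract does not specify the system's actual format:

```python
def parse_sensor_line(line: str):
    """Parse one temperature reading forwarded over serial in an
    assumed 'sensor_id,temperature' CSV format."""
    sensor_id, temp = line.strip().split(",")
    return sensor_id, float(temp)
```

On the Raspberry Pi side, these parsed tuples would then be timestamped and pushed to the database behind the Grafana dashboard.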

DOI
25 Feb 2021
TL;DR: In 2020, the program continued to advance and conducted several experiments in three aspects, namely, Radar Cross-Section (RCS) calibration of radar targets, detection of sea clutter and targets under different sea conditions, and detection and tracking of maneuvering targets at sea.
Abstract: There is an urgent need for radar-measured data to tackle key technologies of radar maritime target detection. The ''Sea-detecting X-band Radar and Data Acquisition Program'', proposed in 2019, aims to obtain data through radar experiments and share them publicly. In 2020, the program continued to advance and conducted several experiments in three aspects, namely, Radar Cross-Section (RCS) calibration of radar targets, detection of sea clutter and targets under different sea conditions, and detection and tracking of maneuvering targets at sea. The measurement data of the stainless steel sphere calibrator at different distances in radar slow-scanning mode, sea clutter in radar staring mode in different directions, sea target in radar staring mode, and marine engine speedboat in radar scanning mode are obtained. In addition, wind and wave data, data from the Automatic Identification System (AIS) of targets, visible/infrared data, and other associated sensor data are synchronously obtained.

Journal ArticleDOI
Jianguo Duan1, Ma Tianyu1, Qinglei Zhang1, Liu Zhen1, Jiyun Qin1 
TL;DR: Three key technologies utilized to create the system including underlying equipment real-time communication, virtual space building and virtual reality interaction have been demonstrated in this paper.
Abstract: Digital twin technology is a key technology to realize cyber-physical systems. Owing to the problems of low visual monitoring of the blade-rotor test rig and poor equipment monitoring capabilities, this paper proposes a framework based on digital twin technology. The digital-twin based architecture and major function implementation have been carried out from five dimensions, i.e. Physical layer, Virtual layer, Data layer, Application layer and User layer. Three key technologies utilized to create the system, including underlying equipment real-time communication, virtual space building and virtual reality interaction, have been demonstrated in this paper. Based on RS-485 and other communication protocols, the data acquisition of the underlying devices has been successfully implemented, and then real-time data reading has been achieved. Finally, the rationality of the system has been validated by taking the blade-rotor test rig as the application object, which provides a reference for the monitoring and evaluation of equipment involved in manufacturing and experiment.

Journal ArticleDOI
10 May 2021-Sensors
TL;DR: In this article, the authors proposed a low-cost real-time internet of things system for micro and mini photovoltaic generation systems that can monitor DC voltage, DC current, AC power, and seven meteorological variables.
Abstract: Monitoring and data acquisition are essential to recognize the renewable resources available on-site, evaluate electrical conversion efficiency, detect failures, and optimize electrical production. Commercial monitoring systems for photovoltaic systems are generally expensive and closed for modifications. This work proposes a low-cost real-time internet of things system for micro and mini photovoltaic generation systems that can monitor DC voltage, DC current, AC power, and seven meteorological variables. The proposed system measures all relevant meteorological variables and directly acquires photovoltaic generation data from the plant (not from the inverter). The system is implemented using open software, connects to the internet without cables, stores data locally and in the cloud, and uses the network time protocol to synchronize the devices’ clocks. To the best of our knowledge, no work reported in the literature presents these features altogether. Furthermore, experiments carried out with the proposed system showed good effectiveness and reliability. This system enables fog and cloud computing in a photovoltaic system, creating a time series measurements data set, enabling the future use of machine learning to create smart photovoltaic systems.

Journal ArticleDOI
15 Sep 2021-Sensors
TL;DR: In this article, a Cost Hyper-Efficient Arduino Product (CHEAP) is developed to accurately measure structural accelerations, which is composed of five low-cost accelerometers that are connected to an Arduino microcontroller as their data acquisition system.
Abstract: Nowadays, engineers are widely using accelerometers to record the vibration of structures for structural verification purposes. The main obstacle to using these data acquisition systems is their high cost, which limits their use to unique structures with a relatively high structural health monitoring budget. In this paper, a Cost Hyper-Efficient Arduino Product (CHEAP) has been developed to accurately measure structural accelerations. CHEAP is a system composed of five low-cost accelerometers that are connected to an Arduino microcontroller as their data acquisition system. Test results show that CHEAP not only has a significantly lower price (14 times cheaper in the worst-case scenario) than the reference systems, but also shows better accuracy at low frequencies for low acceleration amplitudes. Moreover, the final output results of Fast Fourier Transform (FFT) assessments showed a better observable resolution for CHEAP than the studied control systems.
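The FFT assessment mentioned above typically amounts to locating a structure's dominant vibration frequency in the recorded acceleration spectrum, which can be sketched as:

```python
import numpy as np

def dominant_frequency(accel, fs):
    """Estimate the dominant vibration frequency of an acceleration
    record sampled at fs Hz, via the peak of its one-sided FFT
    magnitude spectrum (mean removed to suppress the DC bin)."""
    spectrum = np.abs(np.fft.rfft(accel - np.mean(accel)))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    return freqs[int(np.argmax(spectrum))]
```

The frequency resolution is fs / N, so longer records sharpen the "observable resolution" the abstract refers to.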

Journal ArticleDOI
TL;DR: In this article, a UAV-based sensor system called the Accurate and Speed Scanner (AS-Scanner) was developed for rapid acquisition and mapping of normalized difference vegetation index (NDVI) values.

Journal ArticleDOI
TL;DR: Real-time test results revealed that the suggested data acquisition system is appropriate, reliable, cost-effective, and suitable for harsh outdoor conditions for monitoring and gathering operational information of the solar PV system to assess its performance.
Abstract: The performance degradation due to environmental factors represents a major shortcoming of solar PV systems, making them untrustworthy for desert or remote plants. Real-time monitoring systems a...

Proceedings ArticleDOI
Ki Hyun Tae1, Steven Euijong Whang1
09 Jun 2021
TL;DR: Slice Tuner as mentioned in this paper maintains learning curves of slices that estimate the model accuracies given more data and uses convex optimization to find the best data acquisition strategy to optimize the model accuracy and fairness.
Abstract: As machine learning becomes democratized in the era of Software 2.0, a serious bottleneck is acquiring enough data to ensure accurate and fair models. Recent techniques including crowdsourcing provide cost-effective ways to gather such data. However, simply acquiring as much data as possible is not necessarily an effective strategy for optimizing accuracy and fairness. For example, if an online app store has enough training data for certain slices of data (say American customers), but not for others, obtaining more American customer data will only bias the model training. Instead, we contend that one needs to selectively acquire data and propose Slice Tuner, which acquires possibly-different amounts of data per slice such that the model accuracy and fairness on all slices are optimized. This problem is different from labeling existing data (as in active learning or weak supervision) because the goal is obtaining the right amounts of new data. At its core, Slice Tuner maintains learning curves of slices that estimate the model accuracies given more data and uses convex optimization to find the best data acquisition strategy. The key challenges of estimating learning curves are that they may be inaccurate if there is not enough data, and there may be dependencies among slices where acquiring data for one slice influences the learning curves of others. We solve these issues by iteratively and efficiently updating the learning curves as more data is acquired. We evaluate Slice Tuner on real datasets using crowdsourcing for data acquisition and show that Slice Tuner significantly outperforms baselines in terms of model accuracy and fairness, even when the learning curves cannot be reliably estimated.
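The core idea of allocating acquisition budget by learning curves can be sketched greedily: fit a power-law loss curve a · n^(−b) per slice and give each unit of budget to the slice whose next example reduces estimated loss most. Slice Tuner itself solves this with convex optimization and re-fits curves as data arrives; the greedy rule here is a simplification for illustration:

```python
def allocate(budget, curves, counts):
    """Greedy sketch of selective acquisition. `curves` maps slice ->
    (a, b) of a fitted power-law loss a * n**(-b); `counts` maps
    slice -> current number of examples. Returns examples bought
    per slice."""
    counts = dict(counts)
    bought = {s: 0 for s in curves}
    loss = lambda s, n: curves[s][0] * n ** (-curves[s][1])
    for _ in range(budget):
        # slice with the largest estimated marginal loss reduction
        best = max(curves,
                   key=lambda s: loss(s, counts[s]) - loss(s, counts[s] + 1))
        counts[best] += 1
        bought[best] += 1
    return bought
```

With identical curves, a data-poor slice has a much steeper marginal gain and absorbs the whole budget, which is exactly the selective behavior the abstract argues for.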

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate a denoising method that utilizes deep learning as an intelligent way to overcome the constraint of data acquisition in multidimensional phase space, owing to the large phase space volume to be covered.
Abstract: In spectroscopic experiments, data acquisition in multi-dimensional phase space may require long acquisition times, owing to the large phase space volume to be covered. In such cases, the limited time available for data acquisition can be a serious constraint for experiments in which multidimensional spectral data are acquired. Here, taking angle-resolved photoemission spectroscopy (ARPES) as an example, we demonstrate a denoising method that utilizes deep learning as an intelligent way to overcome the constraint. With readily available ARPES data and random generation of the training data set, we successfully trained the denoising neural network without overfitting. The denoising neural network can remove the noise in the data while preserving its intrinsic information. We show that the denoising neural network allows us to perform a similar level of second-derivative and line shape analysis on data taken with two orders of magnitude less acquisition time. The importance of our method lies in its applicability to any multidimensional spectral data that are susceptible to statistical noise.
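Generating (noisy, clean) training pairs from existing data is the key trick here. Poisson shot noise is the standard model for counting detectors, so a short-acquisition measurement can be mimicked by drawing a Poisson realisation of a clean spectrum at a reduced count level; the paper's exact recipe for randomising its training set may differ:

```python
import numpy as np

def make_training_pair(clean_spectrum, counts, rng):
    """Produce a (noisy, clean) pair for denoiser training: draw a
    Poisson realisation of the clean spectrum with the given total
    expected counts, mimicking a short acquisition time."""
    p = clean_spectrum / clean_spectrum.sum()  # normalised intensity
    noisy = rng.poisson(p * counts).astype(float)
    return noisy, clean_spectrum
```

Sweeping `counts` over a wide range teaches the network to denoise at many effective acquisition times.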

Journal ArticleDOI
TL;DR: In this paper, a 16-channel in elevation airborne X-band digital beamforming (DBF)-SAR system with 500-MHz bandwidth is presented as a test bed to provide the technical reserves and support for a future spaceborne DBF-SAR system in China.
Abstract: In the Earth observation mission of the synthetic aperture radar (SAR), wide swath can be used to complete global monitoring in a short time and high resolution can provide rich detailed information about the feature space and prominent structure and texture. However, the traditional single-channel classical SAR system cannot meet high-resolution and wide-swath (HRWS) imaging demand due to the constraint of minimum antenna area. Fortunately, this fundamental limitation can be overcome by using multiple receive subapertures in combination with advanced digital beamforming (DBF) technique. DBF in elevation can provide high gain and better system performance and has recently gained much attention in the field of SAR imaging. This article presents a 16-channel in elevation airborne X-band DBF-SAR system with 500-MHz bandwidth, characterized by high speed data acquisition and storage, as a test bed to provide the technical reserves and support for a future spaceborne DBF-SAR system in China. The hardware configuration of this system is designed according to a realistic flight mission. To verify the feasibility and operability of this advanced 16-channel DBF-SAR system, an outfield airborne flight experiment was successfully conducted in eastern Guangdong Province in November 2019. Meanwhile, considering the inevitable channel mismatch from airborne system, a precise strategy as well as the underlying signal processing is proposed to process the experiment data. In addition to the channel mismatch due to the topographic height, the Scan-On-Receive (SCORE) pattern loss (SPL) is also an inherent factor, which will deteriorate the output SNR in final SAR images. Therefore, this article also implements a quantitative assessment of SPL combined with the practical flight parameters and the real airborne data. Finally, the corresponding processing results are presented and analyzed in detail. 
The practical SNR improvement of 11.23 dB emphasizes that DBF technology can significantly improve the quality of SAR images and will make an essential contribution to the next generation of HRWS technology for environment monitoring.

Journal ArticleDOI
TL;DR: In this paper, a novel local processing mechanism (LPM) is proposed, which facilitates manifold data reduction at the data acquisition level of sensor nodes and estimates costs corresponding to non-Poisson and Poisson arrival of data packets at the local processor using the well-known queuing model.
Abstract: The extensive growth in popularity of the Internet of Things (IoT) has led to the generation of a massive amount of data from several heterogeneous sensory devices. This has also led to an increase in energy consumption by these connected devices. Smart buildings are one such platform, equipped with several micro-controllers and sensors and generating a huge amount of redundant information at their data acquisition level. As a result, real-time applications may not be efficiently executed due to latency delays at the cloud service end. This requires several devices at the cloud service end to process the massive amount of data generated by these sensors, which does not satisfy green computing criteria. In this context, a novel local processing mechanism (LPM) is proposed, which favors an improved IoT service architecture for smart buildings. From the perspective of green computing, the proposed LPM framework facilitates manifold data reduction at the data acquisition level of sensor nodes. This paper also addresses the concept of optimal use of sensors in a wireless sensor network (WSN) and estimates costs corresponding to non-Poisson and Poisson arrival of data packets at the local processor using the well-known queuing model. We also provide an efficient algorithm for smart buildings using our expert Markov switching (EMS) model, which is a well known probabilistic model in the field of artificial intelligence (AI), for subjectively validating real sensory data sets (viz., temperature, pressure, and humidity). Further, it has been analyzed that the proposed EMS algorithm outperforms several other algorithms conventionally used for determining the state of large-scale dynamic sensor networks. The service cost of the proposed model has been compared with the conventional model under various stress conditions, viz., arrival rate, service rate, and number of clusters. It is observed that the proposed model operates well by leveraging green computing criteria.
Thus, in the aforementioned context, this paper provides thing-centric, data-centric, and service-oriented IoT architecture.
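The cost estimation for Poisson packet arrivals mentioned above can be illustrated with the classic M/M/1 queuing model. This is only a minimal sketch under the assumption of a single local processor with exponential service times; the paper's exact cost model and function names here (`mm1_metrics`, `service_cost`) are illustrative, not taken from the paper.

```python
# Sketch: estimating service cost for Poisson packet arrivals at a local
# processor with the M/M/1 queuing model (an illustrative assumption; the
# paper's concrete cost model is not reproduced here).

def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Standard M/M/1 quantities for Poisson arrivals (lambda) served at
    exponential rate (mu); requires lambda < mu for a stable queue."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = arrival_rate / service_rate          # server utilization
    l_q = rho ** 2 / (1 - rho)                 # mean number waiting in queue
    w = 1.0 / (service_rate - arrival_rate)    # mean time in system
    return {"utilization": rho, "queue_length": l_q, "time_in_system": w}

def service_cost(arrival_rate, service_rate, cost_per_unit_time=1.0):
    """Mean cost per packet, proportional to time spent in the system."""
    return cost_per_unit_time * mm1_metrics(arrival_rate, service_rate)["time_in_system"]

print(service_cost(arrival_rate=4.0, service_rate=5.0))  # 1.0 time unit per packet
```

Sweeping `arrival_rate`, `service_rate`, or the number of clusters (each cluster contributing its own arrival stream) reproduces the kind of stress conditions under which the paper compares service costs.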

Journal ArticleDOI
TL;DR: The Corryvreckan framework architecture and user interface are introduced, a detailed overview of the event building algorithm is provided, and several algorithms for offline alignment, including an implementation of Millepede-II, are described.
Abstract: Corryvreckan is a versatile, highly configurable software with a modular structure designed to reconstruct and analyse test beam and laboratory data. It caters to the needs of the test beam community by providing a flexible offline event building facility to combine detectors with different read-out schemes, with or without trigger information, and includes the possibility to correlate data from multiple devices based on timestamps. Hit timing information, available with high precision from an increasing number of detectors, can be used in clustering and tracking to reduce combinatorics. Several algorithms, including an implementation of Millepede-II, are provided for offline alignment. A graphical user interface enables direct monitoring of the reconstruction progress and can be employed for quasi-online monitoring during data taking. This work introduces the Corryvreckan framework architecture and user interface, and provides a detailed overview of the event building algorithm. The reconstruction and analysis capabilities are demonstrated with data recorded at the DESY II Test Beam Facility using the EUDAQ2 data acquisition framework with an EUDET-type beam telescope, a Timepix3 timing reference, a fine-pitch planar silicon sensor with CLICpix2 readout and the AIDA Trigger Logic Unit. The individual steps of the reconstruction chain are presented in detail.
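The timestamp-based event building described above can be sketched as follows. This is a deliberately simplified illustration, not Corryvreckan's actual API: a reference detector defines event windows and hits from the other devices are assigned by timestamp; the function name `build_events` and the window logic are assumptions.

```python
# Sketch: timestamp-based correlation of hits from detectors with different
# read-out schemes (illustrative only; Corryvreckan's real event builder is
# far more general and handles triggers, frames and overlapping windows).

def build_events(hits_by_detector: dict, window: float):
    """Group hits into events: the first detector in the dict acts as the
    timing reference; hits from the remaining detectors are attached if
    their timestamp falls within +/- window/2 of the reference hit."""
    reference, *others = hits_by_detector
    events = []
    for t_ref in sorted(hits_by_detector[reference]):
        event = {reference: [t_ref]}
        for det in others:
            event[det] = [t for t in hits_by_detector[det]
                          if abs(t - t_ref) <= window / 2]
        events.append(event)
    return events

hits = {"telescope": [10.0, 50.0], "dut": [10.2, 49.9, 80.0]}
print(build_events(hits, window=1.0))
```

With a 1.0-unit window, the device-under-test hits at 10.2 and 49.9 are matched to the telescope hits at 10.0 and 50.0, while the uncorrelated hit at 80.0 is left out; precise hit timing shrinks the window and thus the combinatorics, as the abstract notes for clustering and tracking.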

Journal ArticleDOI
01 Dec 2021
TL;DR: The paper focuses on deep neural networks and discusses techniques supporting transfer learning and pruning, so as to reduce both the time needed to train the networks and the size of the networks deployed on IoT devices.
Abstract: This paper addresses the problem of efficient and effective data collection and analytics for applications such as civil infrastructure monitoring and emergency management. Such a problem requires the development of techniques by which data acquisition devices, such as IoT devices, can: (a) perform local analysis of collected data; and (b) based on the results of such analysis, autonomously decide on further data acquisition. The ability to perform local analysis is critical in order to reduce transmission costs and latency, as the results of an analysis are usually smaller in size than the original data. For example, under strict real-time requirements, the analysis results can be transmitted in real time, whereas the actual collected data can be uploaded later on. The ability to autonomously decide about further data acquisition enhances scalability and reduces the need for real-time human involvement in data acquisition processes, especially in contexts with critical real-time requirements. The paper focuses on deep neural networks and discusses techniques for supporting transfer learning and pruning, so as to reduce both the time needed to train the networks and the size of the networks deployed on IoT devices. We also discuss approaches based on reinforcement learning techniques that enhance the autonomy of IoT devices.
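One common way to shrink a network for deployment, in the spirit of the pruning discussed above, is magnitude-based weight pruning. The sketch below is a generic illustration of that idea on a flat weight list; it does not reproduce the paper's concrete technique, and the name `prune_weights` and the sample weights are invented.

```python
# Sketch: magnitude-based weight pruning, a standard way to reduce model
# size for resource-constrained IoT deployment (illustrative only; the
# paper's specific pruning method may differ).

def prune_weights(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest
    absolute value, returning a new flat weight list."""
    n_prune = int(len(weights) * sparsity)
    # indices of the n_prune smallest-magnitude weights
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    to_zero = set(order[:n_prune])
    return [0.0 if i in to_zero else w for i, w in enumerate(weights)]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.1]
print(prune_weights(w, sparsity=0.5))  # the three smallest magnitudes become 0.0
```

Zeroed weights can then be stored sparsely or dropped entirely, trading a small accuracy loss for a much smaller network on the device.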

Journal ArticleDOI
TL;DR: A framework for automated defect inspection of concrete structures is presented, made up of data collection, defect detection, scene reconstruction, defect assessment, and data integration stages, successfully demonstrating the joint application of advanced technologies in facilitating inspection programs for civil infrastructure.

Journal ArticleDOI
TL;DR: Methods and their implementations are provided in a Python package, named Algotom, not only for processing such data types but also for doing so at the highest quality possible, enabling extended applications of parallel-beam tomography systems.
Abstract: Parallel-beam tomography systems at synchrotron facilities have a limited field of view (FOV) determined by the available beam size and detector coverage. Scanning samples bigger than the FOV at full size requires various data acquisition schemes such as grid scan, 360-degree scan with offset center-of-rotation (COR), helical scan, or combinations of these schemes. Though straightforward to implement, these scanning techniques have not often been used due to the lack of software and methods to process such types of data in an easy and automated fashion. Ease of use and automation are critical at synchrotron facilities, where relying on visual inspection in data processing steps such as image stitching, COR determination, or helical data conversion is impractical due to the large size of the datasets. Here, we provide methods and their implementations in a Python package, named Algotom, for processing such data types not only conveniently but also at the highest quality possible. The efficiency and ease of use of these tools can help to extend the applications of parallel-beam tomography systems.
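The 360-degree offset-COR scheme mentioned above can be illustrated with a toy conversion to an equivalent 180-degree sinogram: each projection is stitched to the mirrored projection taken 180 degrees later, dropping the columns that overlap at the seam. This is a bare-bones sketch of the geometric idea, assuming a known integer overlap and omitting the blending and sub-pixel COR determination that Algotom automates; the function name is invented.

```python
# Sketch: converting a 360-degree scan with offset center-of-rotation into a
# 180-degree sinogram by stitching opposite projections (simplified; real
# implementations blend the overlap region and find the COR automatically).

def convert_360_to_180(sinogram, overlap):
    """sinogram: list of projection rows covering 360 degrees.
    Each row at angle a is joined with the horizontally flipped row at
    a + 180 degrees, dropping `overlap` duplicated columns at the seam."""
    n = len(sinogram)
    assert n % 2 == 0, "need an even number of projections over 360 degrees"
    half = n // 2
    stitched = []
    for i in range(half):
        left = sinogram[i]
        right = list(reversed(sinogram[i + half]))  # mirror the opposite view
        stitched.append(left + right[overlap:])     # drop overlapping columns
    return stitched

sino = [[1, 2, 3], [4, 5, 6], [3, 2, 1], [6, 5, 4]]
print(convert_360_to_180(sino, overlap=1))  # [[1, 2, 3, 2, 3], [4, 5, 6, 5, 6]]
```

The stitched rows span roughly twice the detector width, which is how an offset COR doubles the effective FOV of the scan.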

Journal ArticleDOI
TL;DR: In-field continuous evaluation over the past three years proves that the proposed solution, SPWAS'21, is not only reliable but also a robust and low-cost data acquisition device capable of gathering different parameters of interest in PA practices.
Abstract: Spatial and temporal variability characterization in Precision Agriculture (PA) practices is often accomplished by proximity data gathering devices, which acquire data from a wide variety of sensors installed in the vicinity of crops. Proximity data acquisition usually depends on a hardware solution to which sensors can be coupled, managed by software that may (or may not) store, process and send the acquired data to a back-end using some communication protocol. The sheer number of both proprietary and open hardware solutions, together with the diversity and characteristics of the available sensors, is enough to make designing a data acquisition device a complex task. Factoring in the harsh operational context, the multiple DIY solutions presented by an active online community, the available in-field power approaches and the different communication protocols, each proximity monitoring solution can be regarded as singular. Data acquisition devices should be increasingly flexible, not only by supporting a large number of heterogeneous sensors, but also by being able to resort to different communication protocols depending on the operational and functional contexts in which they are deployed. Furthermore, these small and unattended devices need to be sufficiently robust and cost-effective to allow greater in-field measurement granularity 365 days/year. This paper presents a low-cost, flexible and robust data acquisition device that can be deployed in different operational contexts, as it supports three different communication technologies: IEEE 802.15.4/ZigBee, LoRa/LoRaWAN and GPRS. Software and hardware features suitable for heat pulse sap flow measurement, leaf wetness sensors and other instruments are embedded. Its current draw is only 83 μA in sleep mode, and the cost of the basic unit was kept below the EUR 100 limit. In-field continuous evaluation over the past three years proves that the proposed solution, SPWAS'21, is not only reliable but also a robust and low-cost data acquisition device capable of gathering different parameters of interest in PA practices.
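A back-of-the-envelope calculation shows why the 83 μA sleep current quoted above matters for unattended, year-round deployment. The active current, duty cycle and battery capacity below are assumed values for illustration only; the paper does not report them in this abstract.

```python
# Sketch: how sleep current and duty cycle translate into battery life for a
# duty-cycled acquisition device. Only the 83 uA sleep figure comes from the
# paper; active current, duty cycle and battery capacity are assumptions.

def average_current_ua(sleep_ua, active_ua, active_fraction):
    """Time-weighted mean current draw in microamps."""
    return active_ua * active_fraction + sleep_ua * (1 - active_fraction)

def battery_life_days(capacity_mah, avg_ua):
    """Runtime of an ideal battery at the given mean current draw."""
    return capacity_mah * 1000 / avg_ua / 24

# Assume 20 mA while sampling/transmitting, active 0.1% of the time:
avg = average_current_ua(sleep_ua=83, active_ua=20_000, active_fraction=0.001)
print(round(avg, 1), "uA mean draw")                       # about 102.9 uA
print(round(battery_life_days(2600, avg)), "days on a 2600 mAh cell")
```

Even with these rough numbers, a single 2600 mAh cell lasts on the order of years, which is consistent with the abstract's emphasis on small, unattended devices measuring 365 days/year.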

Journal ArticleDOI
TL;DR: High accuracy and reliability were found between body position and glide variable data obtained from the two methods, with relative error ≤5.4% and correlation coefficients >0.95 for all variables; the approach could be applied to greatly reduce the time of kinematic analysis in sports.
Abstract: Video analysis is used in sport to derive kinematic variables of interest but often relies on time-consuming tracking operations. The purpose of this study was to determine speed, accuracy and reli...
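The agreement metrics quoted in the TL;DR, relative error and correlation coefficient, can be computed as sketched below. The sample values are invented purely for illustration; they are not data from the study.

```python
# Sketch: agreement metrics between manually tracked and automatically
# derived kinematic variables (Pearson correlation and mean relative error).
# The example values are fabricated for illustration.

from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mean_relative_error(reference, measured):
    """Mean of |measured - reference| / |reference| over paired values."""
    return sum(abs(m - r) / abs(r) for r, m in zip(reference, measured)) / len(reference)

manual = [1.50, 1.62, 1.48, 1.71]   # e.g. glide speeds from manual tracking
auto = [1.52, 1.60, 1.50, 1.69]     # same variables from automated analysis
print(pearson_r(manual, auto), mean_relative_error(manual, auto))
```

Reporting both metrics is the usual way to validate a faster measurement method against a trusted but slow one: correlation captures agreement in trend, relative error captures agreement in magnitude.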

Journal ArticleDOI
TL;DR: A new functional control chart is built on the residuals obtained from a function-on-function linear regression of the quality characteristic profile on the functional covariates, with an application to the shipping industry and particular regard to detecting a reduction after a specific energy efficiency initiative.
Abstract: The modern development of data acquisition technologies in many industrial processes is facilitating the collection of quality characteristics that are apt to be modeled as functions, which are usu...
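The idea of charting regression residuals can be illustrated with a scalar Shewhart-style sketch. The functional (profile-based) chart in the paper is considerably more elaborate, so the code below, with invented function names and data, only conveys the underlying residual-monitoring principle.

```python
# Sketch: Shewhart-style control limits on regression residuals (a scalar
# stand-in for the paper's functional control chart; data and names are
# illustrative assumptions).

from math import sqrt

def control_limits(residuals, k=3.0):
    """Center line and k-sigma limits estimated from in-control residuals."""
    n = len(residuals)
    mean = sum(residuals) / n
    sd = sqrt(sum((r - mean) ** 2 for r in residuals) / (n - 1))
    return mean - k * sd, mean, mean + k * sd

def out_of_control(residuals, lcl, ucl):
    """Indices of residuals signalling a shift in the monitored quantity."""
    return [i for i, r in enumerate(residuals) if not lcl <= r <= ucl]

phase1 = [0.1, -0.2, 0.05, 0.15, -0.1, 0.0, -0.05, 0.1]  # in-control residuals
lcl, cl, ucl = control_limits(phase1)
print(out_of_control([0.05, -0.9, 0.1], lcl, ucl))  # index 1 flags a shift
```

A sustained run of residuals below the lower limit after an intervention is exactly the kind of signal one would look for when checking whether an energy efficiency initiative had the intended effect.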