
Showing papers in "EURASIP Journal on Advances in Signal Processing in 2021"


Journal ArticleDOI
TL;DR: In this article, the authors presented state-of-the-art techniques in both areas of automated PPG analysis, i.e., motion artifact removal and heart rate tracking, and concluded that adaptive filtering and multi-resolution decomposition techniques are better suited for MA removal, while machine learning-based approaches are the future direction of heart rate tracking.
Abstract: Non-invasive photoplethysmography (PPG) technology was developed to track heart rate during motion. Automated analysis of PPG has made it useful in both clinical and non-clinical applications. However, PPG-based heart rate tracking is a challenging problem due to motion artifacts (MAs), which are the main contributors to signal degradation because they mask the location of the heart rate peak in the spectrum. A practical analysis system must perform well in both MA removal and tracking. In this article, we have presented state-of-the-art techniques in both areas of automated analysis, i.e., MA removal and heart rate tracking, and have concluded that adaptive filtering and multi-resolution decomposition techniques are better suited for MA removal, while machine learning-based approaches are the future direction of heart rate tracking. Hence, future systems will be composed of machine learning-based trackers fed with either an empirically decomposed signal or the output of an adaptive filter.
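As a concrete illustration of the adaptive-filtering approach the survey favours for MA removal, the sketch below implements a basic least-mean-squares (LMS) canceller in Python that uses an accelerometer channel as the motion reference; the signal names, filter order, and step size are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def lms_ma_removal(ppg, accel_ref, order=8, mu=0.01):
    """Cancel motion artifacts from a PPG signal with an LMS adaptive filter.

    ppg       : 1-D array, motion-corrupted PPG samples
    accel_ref : 1-D array, accelerometer channel used as the noise reference
    order     : number of filter taps (illustrative value)
    mu        : LMS step size (illustrative value)
    """
    n = len(ppg)
    w = np.zeros(order)                    # adaptive filter weights
    cleaned = np.zeros(n)
    for i in range(order, n):
        x = accel_ref[i - order:i][::-1]   # most recent reference samples
        y = w @ x                          # estimate of the motion artifact
        e = ppg[i] - y                     # error = artifact-free PPG estimate
        w += 2 * mu * e * x                # LMS weight update
        cleaned[i] = e
    return cleaned
```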

33 citations


Journal ArticleDOI
TL;DR: A comparative study of recent deep learning models for detecting and classifying coronavirus pneumonia against other pneumonia cases using chest X-ray images of COVID-19 and pneumonia patients shows that DenseNet121 performed better than the other models.
Abstract: Coronavirus disease of 2019, or COVID-19, is a rapidly spreading viral infection that has affected millions all over the world. With its rapid spread and increasing numbers, it is becoming overwhelming for healthcare workers to rapidly diagnose the condition and contain it from spreading. Hence it has become a necessity to automate the diagnostic procedure. This will improve work efficiency as well as keep healthcare workers safe from exposure to the virus. Medical image analysis is one of the rising research areas that can tackle this issue with higher accuracy. This paper conducts a comparative study of recent deep learning models (VGG16, VGG19, DenseNet121, Inception-ResNet-V2, InceptionV3, ResNet50, and Xception) for the detection and classification of coronavirus pneumonia from other pneumonia cases. The study uses 7165 chest X-ray images of COVID-19 (1536) and pneumonia (5629) patients. Confusion matrices and performance metrics were used to analyze each model. Results show that DenseNet121 (99.48% accuracy) performed better than the other models in this study.

31 citations


Journal ArticleDOI
TL;DR: This paper uses feature weighting and Laplace calibration to improve the naive Bayesian classification algorithm and obtain an improved naive Bayes classifier.
Abstract: The naive Bayesian classification algorithm is widely used in big data analysis and other fields because of its simple and fast structure. To address the shortcomings of the naive Bayes classification algorithm, this paper improves it using feature weighting and Laplace calibration, obtaining an improved naive Bayes classification algorithm. Numerical simulation shows that when the sample size is large, the accuracy of the improved algorithm exceeds 99% and is very stable; when the number of sample attributes is less than 400 and the number of categories is less than 24, its accuracy exceeds 95%. Empirical research shows that the improved algorithm raises the correct rate of discriminant analysis from 49.5% to 92%. Robustness analysis also shows that the improved naive Bayes classification algorithm has higher accuracy.
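Since the paper's exact weighting scheme is not spelled out in the abstract, the following Python sketch only illustrates the general idea of a naive Bayes classifier with Laplace (add-alpha) calibration and per-feature weights applied to the log-likelihood terms; the function names and the weighting form are assumptions.

```python
import numpy as np

def train_nb(X, y, alpha=1.0):
    """Categorical naive Bayes with Laplace (add-alpha) calibration.
    X: (n_samples, n_features) integer-coded categories; y: class labels."""
    classes = np.unique(y)
    n_values = X.max(axis=0) + 1                       # category count per feature
    priors = {c: np.mean(y == c) for c in classes}
    cond = {}                                          # P(feature value | class)
    for c in classes:
        Xc = X[y == c]
        cond[c] = [
            (np.bincount(Xc[:, j], minlength=n_values[j]) + alpha)
            / (len(Xc) + alpha * n_values[j])
            for j in range(X.shape[1])
        ]
    return priors, cond, classes

def predict_nb(x, priors, cond, classes, weights=None):
    """Predict one sample; `weights` are illustrative per-feature weights
    applied to the log-likelihood terms (one possible weighting scheme)."""
    if weights is None:
        weights = np.ones(len(x))
    scores = {}
    for c in classes:
        log_post = np.log(priors[c])
        for j, v in enumerate(x):
            log_post += weights[j] * np.log(cond[c][j][v])
        scores[c] = log_post
    return max(scores, key=scores.get)
```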

30 citations


Journal ArticleDOI
Yiwei Zhang1, Min Zhang, Caixia Fan1, Fuqiang Li1, Baofang Li1 
TL;DR: A computing resource allocation scheme based on a deep reinforcement learning network is proposed for mobile edge computing scenarios in IoV, and results show that the proposed scheme can effectively allocate the computing resources of IoV in an edge computing environment.
Abstract: With the emergence and development of 5G technology, Mobile Edge Computing (MEC) has been closely integrated with Internet of Vehicles (IoV) technology, which can effectively support and improve network performance in IoV. However, the high-speed mobility of vehicles and the diversity of communication quality make computing task offloading strategies more complex. To solve this problem, this paper proposes a computing resource allocation scheme based on a deep reinforcement learning network for mobile edge computing scenarios in IoV. First, the task resource allocation model for IoV in the corresponding edge computing scenario is determined, taking the computing capacity of the service nodes and the vehicle moving speed as constraints. The mathematical model for task offloading and resource allocation is then established with the minimum total computing cost as the objective function. Next, a deep Q-learning network based on deep reinforcement learning is proposed to solve this resource allocation model. Moreover, the experience replay method is used to address the training instability of the nonlinear function-approximating neural network, which avoids the curse of dimensionality and ensures the low-overhead and low-latency operation requirements of resource allocation. Finally, simulation results show that the proposed scheme can effectively allocate the computing resources of IoV in an edge computing environment. When the amount of data uploaded per user is 10 Kbits and the number of terminals is 15, the scheme still delivers excellent low-overhead and low-latency network performance.
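To make the deep Q-learning-with-experience-replay step concrete, here is a minimal PyTorch sketch of a Q-network, a replay buffer, and one temporal-difference update; the network sizes, discount factor, and state/action encodings are illustrative assumptions, not the paper's design.

```python
import random
from collections import deque

import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small Q-network mapping an offloading state to per-action values."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, s):
        return self.net(s)

class ReplayBuffer:
    """Experience replay used to de-correlate samples and stabilise training."""
    def __init__(self, capacity=10000):
        self.buf = deque(maxlen=capacity)

    def push(self, s, a, r, s_next):
        self.buf.append((s, a, r, s_next))

    def sample(self, batch_size):
        batch = random.sample(self.buf, batch_size)
        s, a, r, s_next = zip(*batch)
        return (torch.stack(s), torch.tensor(a),
                torch.tensor(r, dtype=torch.float32), torch.stack(s_next))

def dqn_update(qnet, target_net, buffer, optimizer, batch_size=32, gamma=0.95):
    """One TD update step; gamma and network sizes are illustrative choices."""
    if len(buffer.buf) < batch_size:
        return
    s, a, r, s_next = buffer.sample(batch_size)
    q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)        # Q(s, a)
    with torch.no_grad():
        q_next = target_net(s_next).max(dim=1).values       # max_a' Q_target(s', a')
    loss = nn.functional.mse_loss(q, r + gamma * q_next)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```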

22 citations


Journal ArticleDOI
Ming Yan1, Huimin Yuan1, Jie Xu, Ying Yu1, Libiao Jin1 
TL;DR: In this paper, an intelligent marine task allocation and route planning scheme for multiple UAVs based on improved particle swarm optimization combined with a genetic algorithm (GA-PSO) is proposed.
Abstract: Unmanned aerial vehicles (UAVs) are considered a promising platform for automated emergency tasks in a dynamic marine environment. However, the maritime communication performance between UAVs and offshore platforms has become a severe challenge. Due to the complex marine environment, the task allocation and route planning efficiency of multiple UAVs in an intelligent ocean are not satisfactory. To address these challenges, this paper proposes an intelligent marine task allocation and route planning scheme for multiple UAVs based on improved particle swarm optimization combined with a genetic algorithm (GA-PSO). Based on the simulation of an intelligent marine control system, the traditional particle swarm optimization (PSO) algorithm is improved by introducing partial matching crossover and secondary transposition mutation. The improved GA-PSO is used to solve the random task allocation problem of multiple UAVs and the two-dimensional route planning of a single UAV. The simulation results show that compared with the traditional scheme, the proposed scheme can significantly improve the task allocation efficiency, and the navigation path planned by the proposed scheme is also optimal.

17 citations


Journal ArticleDOI
TL;DR: In this paper, the estimation accuracy of different neural networks in modeling lower body joint angles in the sagittal plane using the kinematic records of a single IMU attached to the foot was investigated.
Abstract: Reliability and user compliance of the applied sensor system are two key issues of digital healthcare and biomedical informatics. For gait assessment applications, accurate joint angle measurements are important. Inertial measurement units (IMUs) have been used in a variety of applications and can also provide significant information on gait kinematics. However, the nonlinear mechanism of human locomotion results in only moderate estimation accuracy of the gait kinematics and thus of the joint angles. To develop "digital twins" as a digital counterpart of the lower-limb joint angles, three-dimensional gait kinematic data were collected. This work investigates the estimation accuracy of different neural networks in modeling lower body joint angles in the sagittal plane using the kinematic records of a single IMU attached to the foot. The evaluation results based on the root mean square error (RMSE) show that long short-term memory (LSTM) networks deliver superior performance in nonlinear modeling of the lower limb joint angles compared to other machine learning (ML) approaches. Accordingly, deep learning based on the LSTM architecture is a promising approach for modeling gait kinematics using a single IMU, and can thus reduce the number of physical IMUs attached to the subject and improve the practical application of the sensor system.
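A minimal PyTorch sketch of the kind of LSTM regressor described here, mapping a window of single-IMU channels to sagittal-plane joint angles; the channel count, window length, hidden size, and angle set are assumptions for illustration.

```python
import torch
import torch.nn as nn

class JointAngleLSTM(nn.Module):
    """LSTM regressor mapping a window of single-IMU kinematics
    (e.g. 3-axis accelerometer + 3-axis gyroscope = 6 channels)
    to lower-limb sagittal-plane joint angles (hip, knee, ankle)."""
    def __init__(self, n_channels=6, hidden_size=64, n_angles=3):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden_size,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_size, n_angles)

    def forward(self, x):                 # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out)             # angle estimate at every time step

# Illustrative usage with random data standing in for IMU windows.
model = JointAngleLSTM()
imu_window = torch.randn(8, 200, 6)       # batch of 8 windows of 200 samples
angles = model(imu_window)                # (8, 200, 3)
loss = nn.functional.mse_loss(angles, torch.zeros_like(angles))  # RMSE-style target
```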

14 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed visual navigation algorithm for agricultural robots based on deep learning image understanding can perform autonomous navigation in complex and noisy environments and has good practicability and applicability.
Abstract: In the development of modern agriculture, the intelligent use of mechanical equipment is one of the main hallmarks of agricultural modernization. Navigation technology is the key to enabling agricultural machinery to operate autonomously in its working environment, and it is a hotspot in research on intelligent agricultural machinery. To meet the accuracy requirements of autonomous navigation for intelligent agricultural robots, this paper proposes a visual navigation algorithm for agricultural robots based on deep learning image understanding. The method first uses a cascaded deep convolutional network and a hybrid dilated convolution fusion method to process images collected by the vision system. Then, it extracts the navigation route from the processed images using an improved Hough transform algorithm. At the same time, the posture of the agricultural robot is adjusted to realize autonomous navigation. Finally, the proposed method is verified using interference-free experimental scenes and noisy experimental scenes. Experimental results show that the method can perform autonomous navigation in complex and noisy environments and has good practicability and applicability.
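The route-extraction step can be illustrated with OpenCV's probabilistic Hough transform; the sketch below assumes an 8-bit binary segmentation mask as input and uses illustrative thresholds, and it implements the standard Hough line detector rather than the paper's improved variant.

```python
import cv2
import numpy as np

def extract_navigation_line(segmented):
    """Fit a navigation line to a binary (uint8) crop/path segmentation mask
    (the mask here stands in for the output of the paper's CNN stage)."""
    edges = cv2.Canny(segmented, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=10)
    if lines is None:
        return None
    # Keep the longest detected segment as the candidate navigation route.
    longest = max(lines[:, 0, :], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    x1, y1, x2, y2 = longest
    heading = np.degrees(np.arctan2(y2 - y1, x2 - x1))   # angle used to adjust posture
    return (x1, y1, x2, y2), heading
```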

13 citations


Journal ArticleDOI
TL;DR: Song et al. as discussed by the authors proposed a new method for recording device source identification based on the fusion of spatial feature information and temporal feature information using an end-to-end framework, where two kinds of networks were designed to extract recording device source spatial and temporal information.
Abstract: Deep learning techniques have achieved notable results in recording device source identification. Recording device source features include spatial information and certain temporal information. However, most recording device source identification methods based on deep learning only use spatial representation learning from recording device source features and therefore cannot make full use of the recording device source information. In this paper, to fully exploit the spatial and temporal information of the recording device source, we propose a new method for recording device source identification based on the fusion of spatial feature information and temporal feature information using an end-to-end framework. From a feature perspective, we design two kinds of networks to extract recording device source spatial and temporal information. We then use an attention mechanism to adaptively assign weights to the spatial and temporal information and obtain fused features. From a model perspective, our model uses an end-to-end framework to learn a deep representation from the spatial and temporal features and is trained with deep and shallow losses to jointly optimize the network. The method is compared with our previous work and a baseline system. The results show that the proposed method outperforms both our previous work and the baseline system under general conditions.
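A minimal PyTorch sketch of attention-based fusion in the spirit described here: a learned score adaptively weights a spatial embedding and a temporal embedding before they are combined. The embedding dimension and scoring layer are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Adaptively weight a spatial embedding and a temporal embedding
    and combine them into a single fused representation."""
    def __init__(self, dim=128):
        super().__init__()
        self.score = nn.Linear(dim, 1)    # scalar relevance score per branch

    def forward(self, spatial_feat, temporal_feat):   # both: (batch, dim)
        feats = torch.stack([spatial_feat, temporal_feat], dim=1)  # (batch, 2, dim)
        weights = torch.softmax(self.score(feats), dim=1)          # (batch, 2, 1)
        return (weights * feats).sum(dim=1)                        # (batch, dim)

# Illustrative usage with dummy branch outputs.
fusion = AttentionFusion(dim=128)
fused = fusion(torch.randn(4, 128), torch.randn(4, 128))   # (4, 128)
```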

10 citations


Journal ArticleDOI
Xi Cheng1
TL;DR: Experiments on a real data set from the Flickr social network showed that the proposed algorithm has higher accuracy and recall rates compared with the traditional algorithm that only considers the interest theme and the algorithm that only considers distance matching.
Abstract: To solve the problem of the low accuracy of traditional travel route recommendation algorithms, a travel route recommendation algorithm based on interest theme and distance matching is proposed in this paper. Firstly, the real historical travel footprints of users are obtained through analysis. Then, the user’s interest-theme and distance-matching preferences are derived from the user’s stays at each scenic spot. Finally, the optimal travel route calculation method is designed under a given travel time limit, starting point, and end point. Experiments on a real data set from the Flickr social network showed that the proposed algorithm has higher accuracy and recall rates compared with the traditional algorithm that only considers the interest theme and the algorithm that only considers distance matching.

9 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used frequency-modulated continuous wave (FMCW) radar to measure the vital sign signals of multiple targets, and a linear constrained minimum variance-based adaptive beamforming (LCMV-ADBF) technique was proposed to form spatially distributed beams in the directions of the targets of interest.
Abstract: Respiration and heartbeat rates are important physiological assessment indicators that provide valid prior knowledge for the diagnosis of numerous diseases. However, most current research focuses on vital signs measurement for a single target, and multi-target vital signs detection has not received much attention. In this paper, we use frequency-modulated continuous wave (FMCW) radar to measure the vital sign signals of multiple targets. First, we apply the three-dimensional fast Fourier transform (3D-FFT) method to separate multiple targets and obtain their distance and azimuth information. Subsequently, the linear constrained minimum variance-based adaptive beamforming (LCMV-ADBF) technique is proposed to form spatially distributed beams in the directions of the targets of interest. Finally, a compressive sensing method based on orthogonal matching pursuit (CS-OMP) and a rigrsure adaptive soft-threshold noise reduction method based on the discrete wavelet transform (RA-DWT) are presented to extract the respiratory and heartbeat signals. We perform tests in a real experimental environment and compare the proposed method with reference devices. The results show that the degrees of agreement for respiration and heartbeat are 89% and 87%, respectively, for two human targets, and 87% and 85%, respectively, for three human targets, proving the effectiveness of the proposed method.
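The 3D-FFT separation step can be sketched directly in NumPy: successive FFTs over fast time, slow time, and the antenna axis yield range, Doppler, and angle bins. The data layout and FFT sizes below are assumptions for illustration.

```python
import numpy as np

def radar_cube_3dfft(adc_data):
    """Form a range-Doppler-angle cube from raw FMCW ADC data.

    adc_data: complex array of shape (n_samples, n_chirps, n_rx_antennas).
    Axis 0 FFT -> range bins, axis 1 FFT -> Doppler bins, axis 2 FFT -> angle bins.
    """
    range_fft = np.fft.fft(adc_data, axis=0)
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=1), axes=1)
    angle_fft = np.fft.fftshift(np.fft.fft(doppler_fft, n=64, axis=2), axes=2)
    return angle_fft

# Peaks of |cube| over (range, angle) give each target's distance and azimuth;
# the slow-time phase at a target's range bin carries the chest-motion signal.
```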

8 citations


Journal ArticleDOI
Siling Feng1, Yinjie Chen1, Qianhao Zhai1, Mengxing Huang1, Feng Shu1 
TL;DR: This work investigated a model considering three objectives, improved it by normalizing each objective to eliminate the influence of dimensions, and proposed GWO-WOA, a hybrid of the whale optimization algorithm (WOA) and the grey wolf optimizer (GWO).
Abstract: As Internet of Things (IoT) and mobile edge computing (MEC) technologies develop, more and more tasks are offloaded to edge servers for computation. The offloading strategy plays an essential role in the process of computation offloading. In a general scenario, the offloading strategy should take enough factors into account and should be produced as quickly as possible. While most existing models consider only one or two factors, we investigate a model considering three objectives and improve it by normalizing each objective to eliminate the influence of dimensions. Then, the grey wolf optimizer (GWO) is introduced to solve the improved model. To obtain better performance, we propose GWO-WOA, a hybrid of the whale optimization algorithm (WOA) and GWO, and test the improved algorithm on our model. Finally, the results obtained by GWO-WOA, GWO, WOA, particle swarm optimization (PSO), and a genetic algorithm (GA) are discussed. The results show the advantages of GWO-WOA.

Posted ContentDOI
TL;DR: A new target attraction field function, segmented by search distance, is used to quickly search for moving targets; the proposed method significantly improves the number of targets found and the mission area coverage, and comparative experiments prove the necessity of the CS activation and cool-down mechanisms for improving search performance.
Abstract: Unmanned aerial vehicle (UAV) detection has the advantages of flexible deployment and no casualties, and has become a force that cannot be ignored on the battlefield. Scientific and efficient mission planning can help improve the survival rate and mission completion rate of UAV search in dynamic environments. To address the mission planning problem of UAVs collaboratively searching for multiple types of time-sensitive moving targets, a search algorithm based on a hybrid layered artificial potential field (HL-APF) is proposed. The method consists of two parts: a distributed artificial potential field algorithm and a centralized layered algorithm. In the improved artificial potential field (IAPF), this paper uses a new target attraction field function, segmented by search distance, to quickly search for dynamic targets. Moreover, to avoid repeated search by a UAV within a short time interval, a search repulsion field generated by the UAV's own search path is proposed. In addition, to handle the search for unknown targets and improve area coverage, a centralized layered scheduling algorithm controlled by a cloud server (CS) is added. The CS divides the mission area into several sub-areas and allocates UAVs according to a priority function based on the search map. The CS activation mechanism makes full use of prior information, and the UAV assignment cool-down mechanism avoids repeated assignment of the same UAV. Simulation results show that, compared with the hybrid artificial potential field with ant colony optimization and with IAPF, HL-APF can significantly improve the number of targets found and the mission area coverage. Moreover, comparative experiments on the CS mechanism prove the necessity of the CS activation and cool-down mechanisms for improving search performance. Finally, the robustness of the method under the failure of some UAVs is also verified.
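A minimal NumPy sketch of the artificial-potential-field idea: attraction towards a target plus repulsion from the UAV's own recent path to discourage repeated search. The gains, radius, and unit-speed step are illustrative assumptions, not the HL-APF parameters.

```python
import numpy as np

def apf_step(uav_pos, target_pos, visited, k_att=1.0, k_rep=5.0, radius=3.0):
    """One artificial-potential-field step for a searching UAV.

    Attraction pulls the UAV towards the (estimated) target position;
    a repulsive term generated by recently visited path points discourages
    re-searching the same area within a short time interval.
    Gains and radius are illustrative values.
    """
    force = k_att * (target_pos - uav_pos)              # attraction field
    for p in visited:                                    # repulsion from own path
        diff = uav_pos - p
        d = np.linalg.norm(diff)
        if 1e-6 < d < radius:
            force += k_rep * (1.0 / d - 1.0 / radius) * diff / d**3
    step = force / (np.linalg.norm(force) + 1e-9)        # unit-speed motion
    return uav_pos + step
```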

Posted ContentDOI
TL;DR: The advantage of the proposed speech enhancement method is that it uses the key information of the features to suppress noise in both matched and unmatched noise cases, and it outperforms other common speech enhancement methods.
Abstract: In practice, speech is easily corrupted by the external environment, which results in the loss of important features. Deep learning has become a popular speech enhancement method because of its superior potential for solving nonlinear mapping problems with complex features. However, traditional deep learning methods are weak at learning important information from previous time steps and long-term dependencies in time-series data. To overcome this problem, we propose a novel speech enhancement method based on the fused features of deep neural networks (DNNs) and a gated recurrent unit (GRU). The proposed method uses the GRU to reduce the number of parameters of the DNN and to acquire the context information of the speech, which improves the enhanced speech quality and intelligibility. Firstly, a DNN with multiple hidden layers is used to learn the mapping between the logarithmic power spectrum (LPS) features of noisy speech and clean speech. Secondly, the LPS features produced by the DNN are fused with the noisy speech as the input of the GRU network to compensate for the missing context information. Finally, the GRU network learns the mapping between these fused features and the LPS features of clean speech. The proposed model is experimentally compared with traditional speech enhancement models, including DNN, CNN, LSTM, and GRU. Experimental results demonstrate that the PESQ, SSNR, and STOI of the proposed algorithm are improved by 30.72%, 39.84%, and 5.53%, respectively, compared with the noisy signal under matched noise conditions. Under unmatched noise conditions, the PESQ and STOI of the algorithm are improved by 23.8% and 37.36%, respectively. The advantage of the proposed method is that it uses the key information of the features to suppress noise in both matched and unmatched noise cases, and it outperforms other common speech enhancement methods.
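The LPS feature pipeline can be sketched with SciPy's STFT: the log power spectrum feeds the networks, and the noisy phase is reused to resynthesise the enhanced waveform. The frame size and sampling rate below are assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def log_power_spectrum(x, fs=16000, n_fft=512):
    """Log-power-spectrum (LPS) features and the phase needed to resynthesise."""
    _, _, Z = stft(x, fs=fs, nperseg=n_fft)
    lps = np.log(np.abs(Z) ** 2 + 1e-10)      # network input/target features
    phase = np.angle(Z)                        # reused when reconstructing speech
    return lps, phase

def reconstruct(lps, phase, fs=16000, n_fft=512):
    """Rebuild a waveform from enhanced LPS features and the noisy phase."""
    Z = np.sqrt(np.exp(lps)) * np.exp(1j * phase)
    _, x = istft(Z, fs=fs, nperseg=n_fft)
    return x
```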

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a transfer learning-fused Inception-v3 model for dynasty-based classification, which adopted frozen fully connected and softmax layers for pretraining over ImageNet.
Abstract: It is difficult to identify the historical period in which some ancient murals were created because of damage due to artificial and/or natural factors; similarities in content, style, and color among murals; low image resolution; and other reasons. This study proposed a transfer learning-fused Inception-v3 model for dynasty-based classification. First, the model adopted Inception-v3 with frozen fully connected and softmax layers for pretraining over ImageNet. Second, the model fused Inception-v3 with transfer learning for parameter readjustment over small datasets. Third, the corresponding bottleneck files of the mural images were generated, and the deep-level features of the images were extracted. Fourth, the cross-entropy loss function was employed to calculate the loss value at each step of the training, and an algorithm for the adaptive learning rate on the stochastic gradient descent was applied to unify the learning rate. Finally, the updated softmax classifier was utilized for the dynasty-based classification of the images. On the constructed small datasets, the accuracy rate, recall rate, and F1 value of the proposed model were 88.4%, 88.36%, and 88.32%, respectively, which exhibited noticeable increases compared with those of typical deep learning models and modified convolutional neural networks. Comparisons of the classification outcomes for the mural dataset with those for other painting datasets and natural image datasets showed that the proposed model achieved stable classification outcomes with a powerful generalization capacity. The training time of the proposed model was only 0.7 s, and overfitting seldom occurred.
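A hedged Keras sketch of the frozen-base transfer-learning setup described here: an ImageNet-pretrained Inception-v3 backbone with a new softmax head trained on a small dataset. The class count, optimizer settings, and dataset objects are placeholders, not the paper's configuration.

```python
import tensorflow as tf

# Inception-v3 pretrained on ImageNet, with the convolutional base frozen
# so only the new classification head is trained on the small mural dataset.
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                          input_shape=(299, 299, 3), pooling="avg")
base.trainable = False

n_dynasties = 5                                   # illustrative number of classes
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(n_dynasties, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # train_ds/val_ds: image batches
```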

Journal ArticleDOI
TL;DR: Experiments show that the proposed VRKD method outperforms many state-of-the-art vehicle re-identification approaches in both accuracy and speed.
Abstract: Vehicle re-identification is a challenging task that matches vehicle images captured by different cameras. Recent vehicle re-identification approaches exploit complex deep networks to learn viewpoint-robust features for accurate re-identification, which incurs heavy computation in the testing phase and restricts re-identification speed. In this paper, we propose a viewpoint-robust knowledge distillation (VRKD) method for accelerating vehicle re-identification. The VRKD method consists of a complex teacher network and a simple student network. Specifically, the teacher network uses quadruple directional deep networks to learn viewpoint-robust features. The student network only contains a shallow backbone sub-network and a global average pooling layer. The student network distills viewpoint-robust knowledge from the teacher network by minimizing the Kullback-Leibler divergence between the posterior probability distributions produced by the student and teacher networks. As a result, the vehicle re-identification speed is significantly accelerated, since only the student network, with its small testing cost, is required. Experiments on the VeRi776 and VehicleID datasets show that the proposed VRKD method outperforms many state-of-the-art vehicle re-identification approaches in both accuracy and speed.
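The distillation objective can be illustrated with a standard PyTorch knowledge-distillation loss: a temperature-softened KL divergence between teacher and student outputs plus a cross-entropy term. The temperature, mixing weight, and class count are illustrative assumptions, not the VRKD settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Knowledge-distillation objective: KL divergence between the softened
    teacher and student distributions plus a standard cross-entropy term.
    Temperature T and mixing weight alpha are illustrative hyper-parameters."""
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Illustrative usage with random logits standing in for network outputs.
n_classes = 100   # placeholder class count
loss = distillation_loss(torch.randn(8, n_classes), torch.randn(8, n_classes),
                         torch.randint(0, n_classes, (8,)))
```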

Journal ArticleDOI
TL;DR: In this paper, an enhanced deep learning-based (DL) antenna selection approach for optimum sparse linear array selection for direction-of-arrival (DOA) estimation applications is introduced.
Abstract: This paper introduces an enhanced deep learning-based (DL) antenna selection approach for optimum sparse linear array selection in direction-of-arrival (DOA) estimation applications. Generally, the antenna selection problem yields a combination of subarrays as a solution. Previous DL-based methods designated these subarrays as classes, casting the task as a classification problem that a convolutional neural network (CNN) is then employed to solve. However, these methods sample the combination set randomly to reduce the computational cost of generating training data, which often leads to sub-optimal solutions due to ill-sampling issues. Hence, in this paper, we propose an improved DL-based method that constrains the combination set to retain only hole-free subarrays, enhancing both the method's performance and the sparsity of the rendered subarrays. Numerical examples show that the proposed method yields sparser subarrays with better beampattern properties and improved DOA estimation performance than conventional DL techniques.

Journal ArticleDOI
TL;DR: This work extends the state-of-the-art LADCF tracking algorithm with a re-detection component based on an SVM model, and introduces a robust confidence evaluation criterion that combines the maximum response criterion and the average peak-to-correlation energy (APCE) to judge the confidence level of the predicted target.
Abstract: Long-term visual tracking undergoes more challenges and is closer to realistic applications than short-term tracking. However, the performances of most existing methods have been limited in the long-term tracking tasks. In this work, we present a reliable yet simple long-term tracking method, which extends the state-of-the-art learning adaptive discriminative correlation filters (LADCF) tracking algorithm with a re-detection component based on the support vector machine (SVM) model. The LADCF tracking algorithm localizes the target in each frame, and the re-detector is able to efficiently re-detect the target in the whole image when the tracking fails. We further introduce a robust confidence degree evaluation criterion that combines the maximum response criterion and the average peak-to-correlation energy (APCE) to judge the confidence level of the predicted target. When the confidence degree is generally high, the SVM is updated accordingly. If the confidence drops sharply, the SVM re-detects the target. We perform extensive experiments on the OTB-2015 and UAV123 datasets. The experimental results demonstrate the effectiveness of our algorithm in long-term tracking.
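The APCE criterion has a standard closed form, sketched below in NumPy; the decision thresholds mentioned in the comment are assumptions, not the paper's values.

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy of a correlation-filter response map.
    A high APCE together with a high maximum response indicates a confident,
    sharply peaked detection; a sharp drop signals possible tracking failure."""
    f_max, f_min = response.max(), response.min()
    return (f_max - f_min) ** 2 / np.mean((response - f_min) ** 2)

# Illustrative decision rule (thresholds are assumptions):
# update the SVM re-detector when apce(resp) and resp.max() stay above their
# running averages, and trigger re-detection when they fall sharply below.
```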

Posted ContentDOI
TL;DR: This paper investigates a computing offloading policy and the allocation of computational resources for multiple user equipments (UEs) in device-to-device (D2D)-aided fog radio access networks (F-RANs), and formulates the problem of task offloading and resource optimization as a mixed-integer nonlinear programming problem to maximize the total utility of all UEs.
Abstract: This paper investigates a computing offloading policy and the allocation of computational resources for multiple user equipments (UEs) in device-to-device (D2D)-aided fog radio access networks (F-RANs). Concerning the dynamically changing wireless environment, where the channel state information (CSI) is difficult to predict and know exactly, we formulate the problem of task offloading and resource optimization as a mixed-integer nonlinear programming problem to maximize the total utility of all UEs. Given the non-convexity of the formulated problem, we decouple the original problem into two phases. Firstly, a centralized deep reinforcement learning (DRL) algorithm called dueling deep Q-network (DDQN) is utilized to obtain the most suitable offloading mode for each UE. In particular, to reduce the complexity of the proposed DDQN-based offloading scheme, a pre-processing procedure is adopted. Then, a distributed deep Q-network (DQN) algorithm based on the training result of the DDQN algorithm is further proposed to allocate the appropriate computational resources for each UE. Combining these two phases, the optimal offloading policy and resource allocation for each UE are finally achieved. Simulation results demonstrate the performance gains of the proposed scheme compared with other existing baseline schemes.

Journal ArticleDOI
TL;DR: The positioning algorithm in this paper can be used by a single base station to locate the target in an outdoor non-line-of-sight (NLOS) environment, and the accuracy is improved compared with the traditional positioning algorithm.
Abstract: This paper proposes a scattering area model that processes multipath parameters to achieve single base station positioning. First, we construct a scattering area model based on the spatial layout of obstacles near the base station, then collect the multipath signals needed for positioning and extract their parameters. Second, we use a joint clustering algorithm that combines k-means and mean shift clustering to process the parameters and extract useful information. Third, the processed information is combined with the spatial layout information of the scattering area model to construct a system of equations, and solving these equations is converted into a least-squares optimization problem. Finally, the Levenberg-Marquardt (LM) algorithm is used to find the optimal solution and estimate the mobile target position. The simulation results show that the proposed positioning algorithm can be used by a single base station to locate a target in an outdoor non-line-of-sight (NLOS) environment, and its accuracy is improved compared with traditional positioning algorithms.
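A hedged sketch of the final Levenberg-Marquardt step using scipy.optimize.least_squares: the residuals compare measured NLOS path lengths against those implied by a candidate target position given known scatterer locations. The geometry, residual definition, and all numeric values are invented for illustration and do not reproduce the paper's equations.

```python
import numpy as np
from scipy.optimize import least_squares

bs = np.array([0.0, 0.0])                      # base-station position (assumed known)
scatterers = np.array([[30.0, 10.0],           # scattering-area centres recovered by
                       [12.0, 40.0],           # the clustering stage (illustrative)
                       [-20.0, 25.0]])
measured_paths = np.array([75.0, 90.0, 82.0])  # BS -> scatterer -> target path lengths

def residuals(target):
    """Difference between measured NLOS path lengths and the lengths implied
    by a candidate target position (an assumed residual form)."""
    predicted = (np.linalg.norm(scatterers - bs, axis=1)
                 + np.linalg.norm(scatterers - target, axis=1))
    return predicted - measured_paths

# Levenberg-Marquardt solve for the mobile target position.
sol = least_squares(residuals, x0=np.array([10.0, 10.0]), method="lm")
print(sol.x)
```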

Journal ArticleDOI
TL;DR: In this paper, a new WVD associated with the linear canonical transform (WVDL) and an integration form of WVDL (IWVDL) are presented; the WVDL removes the coupling between time and time delay and lays the foundation for signal analysis and processing.
Abstract: The linear canonical transform, as a general integral transform, has been incorporated into the Wigner-Ville distribution (WVD) to provide more powerful tools for non-stationary signal processing. In this paper, a new WVD associated with the linear canonical transform (WVDL) and an integration form of WVDL (IWVDL) are presented. First, the definition of the WVDL is derived based on a new autocorrelation function and some of its properties are investigated in detail. It removes the coupling between time and time delay and lays the foundation for signal analysis and processing. Then, based on the characteristics of the WVDL over the time-frequency plane, a new parameter estimation method, IWVDL, is proposed for linear frequency modulation (LFM) signals. The two phase parameters of an LFM signal are estimated simultaneously, and the cross term can be suppressed well by the integration operator. Finally, simulation experiments are carried out to verify the improved estimation and cross-term suppression ability compared with the classical WVD. Error analysis and computational cost are discussed to show the superior performance compared with other WVDs in the linear canonical transform domain. Further application in the radar imaging field will be studied in future work.

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of reconstructing a signal from under-determined modulo observations (or measurements) and propose an approach to solve the signal recovery problem under sparsity constraints for the special case of modulo folding limited to two periods.
Abstract: We consider the problem of reconstructing a signal from under-determined modulo observations (or measurements). This observation model is inspired by a relatively new imaging mechanism called modulo imaging, which can be used to extend the dynamic range of imaging systems; variations of this model have also been studied under the category of phase unwrapping. Signal reconstruction in the under-determined regime with modulo observations is a challenging ill-posed problem, and existing reconstruction methods cannot be used directly. In this paper, we propose a novel approach to solving the signal recovery problem under sparsity constraints for the special case of modulo folding limited to two periods. We show that given a sufficient number of measurements, our algorithm perfectly recovers the underlying signal. We also provide experiments validating our approach on toy signal and image data and demonstrate its promising performance.

Journal ArticleDOI
TL;DR: In this article, the authors considered a double-IRS-assisted wireless communication system, where IRS1 and IRS2 are deployed near the base station and the user, respectively, and the transmitted signals reach the user via the cascaded BS-IRS1-IRS2-user channel only.
Abstract: Intelligent reflecting surface (IRS) has emerged as an innovative and disruptive solution to boost the spectral and energy efficiency and enlarge the coverage of wireless communication systems. However, the existing literature on IRS mainly concentrates on wireless communication systems assisted by single or multiple distributed IRSs, which are not always effective. In view of this issue, this paper considers a special double-IRS-assisted wireless communication system, where IRS1 and IRS2 are deployed near the base station (BS) and the user, respectively, and the transmitted signals reach the user via the cascaded BS-IRS1-IRS2-user channel only. We cooperatively optimize transmit and passive beamforming on the two IRSs based on the particle swarm optimization (PSO) algorithm to maximize the received signal power. Simulation indicates that despite no direct line-of-sight (LoS) path from the BS to the user, an excellent signal-to-noise ratio (SNR) is available at the receiver with the aid of two IRSs, which demonstrates that it is feasible to assist communication by double reflection links composed of two IRSs. Additionally, we unexpectedly find that when the positions of the two IRSs are fixed, by exchanging the positions of the BS and the user, the obtainable SNRs are similar.
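A small NumPy sketch of PSO over the stacked phase vectors of the two IRSs, maximizing the received power of a cascaded single-antenna BS-IRS1-IRS2-user link. The Rayleigh channel draws, element counts, swarm size, and PSO coefficients are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N1, N2 = 32, 32                                    # elements on IRS1 and IRS2 (assumed)
h1 = (rng.standard_normal(N1) + 1j * rng.standard_normal(N1)) / np.sqrt(2)  # BS -> IRS1
G  = (rng.standard_normal((N2, N1)) + 1j * rng.standard_normal((N2, N1))) / np.sqrt(2)
h2 = (rng.standard_normal(N2) + 1j * rng.standard_normal(N2)) / np.sqrt(2)  # IRS2 -> user

def received_power(theta):
    """Cascaded-channel power for stacked phases theta = [theta_IRS1, theta_IRS2]."""
    p1, p2 = np.exp(1j * theta[:N1]), np.exp(1j * theta[N1:])
    return np.abs(h2 @ (p2 * (G @ (p1 * h1)))) ** 2

# Plain PSO over the (N1 + N2)-dimensional phase vector (coefficients are illustrative).
dim, n_particles, iters = N1 + N2, 40, 200
x = rng.uniform(0, 2 * np.pi, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([received_power(p) for p in x])
gbest = pbest[pbest_val.argmax()].copy()
for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = (x + v) % (2 * np.pi)
    vals = np.array([received_power(p) for p in x])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = x[better], vals[better]
    gbest = pbest[pbest_val.argmax()].copy()
print(received_power(gbest))       # received power after phase optimization
```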

Journal ArticleDOI
TL;DR: The horizontal-to-vertical spectral ratio (HVSR) has been extensively used in site characterization utilizing recordings from microtremor and earthquake in recent years as mentioned in this paper.
Abstract: The horizontal-to-vertical spectral ratio (HVSR) has been extensively used in recent years for site characterization using recordings of microtremor and earthquakes. The method was originally proposed based on ground pulsation and has since been applied to both S-wave and ambient noise recordings, whose practical applications accordingly differ. The main applications of HVSR are site classification, site effect study, mineral exploration, and acquisition of the underground average shear-wave velocity structure. For site response estimation, the use of microtremors was introduced long ago in Japan, but it has remained controversial, as several studies report difficulties in separating source effects from pure site effects in noise recordings, as well as discrepancies between noise and earthquake recordings. In practice, borehole data remain the most reliable reference, and theoretical site response results have been compared with shear-wave HVSR to describe site response. This paper summarizes the applications of the HVSR method and concludes that HVSR is already well applied in many fields and, given its advantages, is expected to find even wider application.
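A minimal NumPy sketch of the HVSR computation itself: the ratio of a combined horizontal amplitude spectrum to the vertical spectrum. The windowing and the geometric-mean combination are common conventions assumed here, not prescriptions from the paper.

```python
import numpy as np

def hvsr(ew, ns, vert, fs, n_fft=4096):
    """Horizontal-to-vertical spectral ratio from a three-component record.

    ew, ns, vert : east-west, north-south, vertical arrays (>= n_fft samples)
    fs           : sampling rate in Hz
    The horizontal spectrum is the geometric mean of the two horizontal
    components (one common convention; a vector sum is also used in practice).
    """
    def amp_spectrum(x):
        return np.abs(np.fft.rfft(x[:n_fft] * np.hanning(n_fft)))

    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    h = np.sqrt(amp_spectrum(ew) * amp_spectrum(ns))   # geometric-mean horizontal
    v = amp_spectrum(vert)
    ratio = h[1:] / v[1:]                              # skip the DC bin
    return freqs[1:], ratio                            # the ratio peak marks the site's fundamental frequency
```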

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a dual-channel convolutional neural network (DCNN) for hand-drawn sketch recognition, where the contour is extracted by a contour extraction algorithm and the sketch and contour are then used as the input images of the CNN.
Abstract: In hand-drawn sketch recognition, traditional deep learning methods suffer from insufficient feature extraction and a low recognition rate. To solve this problem, a new algorithm based on a dual-channel convolutional neural network is proposed. Firstly, the sketch is preprocessed to obtain a smooth sketch, and its contour is obtained by a contour extraction algorithm. Then, the sketch and contour are used as the input images of the CNN. Finally, feature fusion is carried out in the fully connected layer, and the classification results are obtained using a softmax classifier. Experimental results show that this method can effectively improve the recognition rate of hand-drawn sketches.

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors used a support vector machine (SVM) and a Gaussian mixture model to obtain the action characteristics of the players in video sequences; the average recognition accuracy reached 95.9%, which verifies the feasibility of this technology for recognizing shooting actions and helps players follow up and improve their shooting technique.
Abstract: Computer vision recognition uses cameras and computers in place of human eyes for tasks such as target recognition, tracking, measurement, and further graphics processing, producing images that are more suitable for human interpretation. To combine basketball shooting technique with visual recognition and motion capture, this article studies basketball shooting technique based on computer vision recognition fused with motion capture technology. The proposed pipeline first performs preprocessing operations, such as background removal and denoising filtering, on the acquired shooting videos to obtain the action characteristics of the players in the video sequences, and then uses a support vector machine (SVM) and a Gaussian mixture model to model these characteristics. Part of the samples are drawn from the sample set to train the model, and the remaining samples are classified and recognized after training. Simulation tests on the action database and real shooting videos show that the SVM can quickly and effectively identify the actions appearing in the videos, with an average recognition accuracy of 95.9%, which verifies the feasibility of this technology for recognizing shooting actions and helps players follow up and improve their shooting technique.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a sub-graphic exchange method whose main idea is to achieve the overall aesthetic effect by exchanging the corresponding individual sub-graphics; the system stores the pattern library and can save pattern parameters into the pattern database at any time.
Abstract: With the improvement of people's living standards, people pay more and more attention to the indoor living environment. This research mainly discusses the design and realization of an innovative wall painting pattern design system based on a microprocessor system and evolutionary computation technology. Pattern design is an important field of art design; in modern design it encompasses all patterns, graphics, and even symbols that create visual beauty and convey information, whether flat or three-dimensional in form. Evolutionary computation is a highly parallel, random, and adaptive search approach developed from natural selection and evolutionary mechanisms in the biological world. This article proposes a sub-graphic exchange method whose main idea is to achieve the overall aesthetic effect by exchanging the corresponding individual sub-graphics, and the system naturally stores the pattern library. The wall painting selected by the user is merged with an image of the target environment to generate a simulated rendering of the painted wall. During the evolution of a wall painting pattern, a satisfactory pattern can be saved to the pattern database at any time. In the rendering simulation stage, if users import their own wall photos, the images should be in JPEG format and the camera should face the wall as directly as possible so that the wall painting pattern can be mapped onto the wall correctly. The processor correctly realized the multi-core JPEG decoding function, and the system's pattern processing efficiency reached 91%. The pattern design system designed in this study is highly innovative.

Journal ArticleDOI
TL;DR: In this article, the authors studied multi-sensor information fusion and intelligent optimization methods and their application to mobile robot technologies, with an in-depth study of mobile robot map construction from the perspective of multi-sensor information fusion.
Abstract: Research on mobile robots began in the late 1960s. Mobile robots are a typical autonomous intelligent system and a hot spot in the high-tech field, sitting at the intersection of multiple technical disciplines such as artificial intelligence, robotics, control theory, and electronic technology. Not only do such products have potentially very attractive application and commercial value, but research on them also challenges intelligent technology, and the development of mobile robots provides an excellent testbed for various intelligent technologies and solutions. This work studies multi-sensor information fusion and intelligent optimization methods and their application to mobile robot technologies, with an in-depth study of mobile robot map construction from the perspective of multi-sensor information fusion. To achieve this, autonomous exploration and other related theories and algorithms are combined with the Robot Operating System (ROS), and the area equalization method, the equalization method, a fuzzy neural network, and other methods are proposed to support the realization of the related technologies. A simulation study is conducted based on a comprehensive SLAM experiment with the JNPF-4WD mobile robot, on the basis of which high-precision and high-reliability robot positioning is further realized. The experimental results show that the maximum X-axis and Y-axis error of the FastSLAM algorithm is smaller than that of the EKF algorithm, and the error of the improved FastSLAM algorithm is further reduced compared with the original FastSLAM algorithm, to a value of less than 0.1.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a disentangled noise suppression method based on a generative adversarial network (GAN) for low-dose CT image restoration, which can effectively remove noise while recovering finer details and provides better visual perception than other state-of-the-art methods.
Abstract: Generative adversarial network (GAN) has been applied for low-dose CT images to predict normal-dose CT images. However, the undesired artifacts and details bring uncertainty to the clinical diagnosis. In order to improve the visual quality while suppressing the noise, in this paper, we mainly studied the two key components of deep learning based low-dose CT (LDCT) restoration models—network architecture and adversarial loss, and proposed a disentangled noise suppression method based on GAN (DNSGAN) for LDCT. Specifically, a generator network, which contains the noise suppression and structure recovery modules, is proposed. Furthermore, a multi-scaled relativistic adversarial loss is introduced to preserve the finer structures of generated images. Experiments on simulated and real LDCT datasets show that the proposed method can effectively remove noise while recovering finer details and provide better visual perception than other state-of-the-art methods.

Journal ArticleDOI
TL;DR: In this article, a phased linear antenna array instead of the planar array is proposed to circumvent the problem that two frequency squint steering main beams cannot cover any two beam directions simultaneously.
Abstract: This article investigates using a phased linear antenna array instead of the planar array to circumvent the problem that two frequency squint steering main beams cannot cover any two beam directions simultaneously. First, we approximate the donut-shaped main beam of the linear array by means of multiple pencil-shaped main beams of a virtual planar array for matching the steering main beam of the linear array with the multi-path sparse scattering channel model mathematically and give a method for calculating the number of antenna elements of the virtual array. Second, we cope with possible inter-user interference on a single squint main beam of the linear array in some scenarios by means of the power-domain non-orthogonal multiple access (PD-NOMA) technique, making it possible to support communication with two users on a single squint main beam at the base station (BS) side. The feasible domain of PD-NOMA is given when a single antenna is used for both the BS and the user end, assuming a two-user successive interference cancellation (SIC) decoding power ratio limit. Third, three algorithms are given for serving multi-user at the BS via squint beams of the linear array. Finally, numerical results show that the second proposed algorithm supporting PD-NOMA pairing within a single donut-shaped squint main beam significantly increases the number of simultaneous users served within a single cellular system.

Journal ArticleDOI
TL;DR: In this article, a pruning-based paradigm is proposed to reduce the computational cost of DNNs, by uncovering a more compact structure and learning the effective weights therein, on the basis of not compromising the expressive capability of deep neural networks.
Abstract: Nowadays, deep neural networks (DNNs) have been rapidly deployed to realize a number of functionalities such as sensing, imaging, classification, and recognition. However, the computation-intensive nature of DNNs makes them difficult to apply to resource-limited Internet of Things (IoT) devices. In this paper, we propose a novel pruning-based paradigm that aims to reduce the computational cost of DNNs by uncovering a more compact structure and learning the effective weights therein, without compromising the expressive capability of DNNs. In particular, our algorithm achieves efficient end-to-end training that transfers a redundant neural network to a compact one with a specifically targeted compression rate directly. We comprehensively evaluate our approach on various representative benchmark datasets and compare it with typical advanced convolutional neural network (CNN) architectures. The experimental results verify the superior performance and robust effectiveness of our scheme. For example, when pruning VGG on CIFAR-10, our proposed scheme is able to reduce the FLOPs (floating-point operations) and the number of parameters by 76.2% and 94.1%, respectively, while still maintaining a satisfactory accuracy. In summary, our scheme can facilitate the integration of DNNs into the common machine-learning-based IoT framework and enable distributed training of neural networks in both cloud and edge.
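As a hedged illustration of the general prune-then-fine-tune idea (not the paper's end-to-end, compression-rate-targeted algorithm), the PyTorch sketch below applies global L1-magnitude pruning to the conv and linear layers of a stand-in network.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A stand-in CNN; the paper prunes larger architectures such as VGG on CIFAR-10.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(64 * 32 * 32, 10),
)

# Simple global magnitude pruning: remove the smallest-magnitude weights across
# all conv/linear layers. This illustrates the generic pruning idea only.
to_prune = [(m, "weight") for m in model.modules()
            if isinstance(m, (nn.Conv2d, nn.Linear))]
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.9)

# After pruning, the masked network would be fine-tuned on the training data;
# the masks can then be made permanent before deployment on an IoT device.
for module, name in to_prune:
    prune.remove(module, name)
```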