
Showing papers in "Computer Engineering and Design in 2015"


Journal Article
TL;DR: In terms of advantages, disadvantages and application fields, the cuckoo search algorithm, particle swarm optimization, ant colony optimization and the bee colony algorithm were analyzed and compared.
Abstract: The cuckoo search algorithm is a metaheuristic swarm intelligence technique that combines the cuckoo's brood parasitism with Lévy flights. The principle and procedure flowchart of the cuckoo search algorithm were illustrated in detail. The research status of related improved algorithms and their applications was discussed. Moreover, in terms of advantages, disadvantages and application fields, the cuckoo search algorithm, particle swarm optimization, ant colony optimization and the bee colony algorithm were analyzed and compared. Finally, the existing problems in current research were summarized and some future research directions to address them were proposed.
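
The abstract describes rather than specifies the algorithm, so the following is a minimal sketch of a standard cuckoo search loop (Lévy flights via Mantegna's method plus abandonment of the worst nests), not the paper's particular improvements; the sphere objective, bounds and parameters are illustrative assumptions.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta=1.5, size=1):
    # Mantegna's algorithm for Levy-distributed step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, size)
    v = np.random.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim=2, n_nests=15, pa=0.25, iters=200, lb=-5.0, ub=5.0):
    nests = np.random.uniform(lb, ub, (n_nests, dim))
    fitness = np.apply_along_axis(f, 1, nests)
    best = nests[fitness.argmin()].copy()
    for _ in range(iters):
        # Generate new solutions via Levy flights around the current best.
        for i in range(n_nests):
            step = 0.01 * levy_step(size=dim) * (nests[i] - best)
            new = np.clip(nests[i] + step * np.random.randn(dim), lb, ub)
            if f(new) < fitness[i]:
                nests[i], fitness[i] = new, f(new)
        # A fraction pa of the worst nests is abandoned (egg discovered by host).
        n_abandon = int(pa * n_nests)
        worst = fitness.argsort()[-n_abandon:]
        nests[worst] = np.random.uniform(lb, ub, (n_abandon, dim))
        fitness[worst] = np.apply_along_axis(f, 1, nests[worst])
        best = nests[fitness.argmin()].copy()
    return best, fitness.min()

# Example: minimize the sphere function.
best, val = cuckoo_search(lambda x: np.sum(x ** 2))
```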

15 citations


Journal Article
TL;DR: The improved potential field approach is applicable to path planning of mobile robots in complicated environments, and it finds an optimized path.
Abstract: To solve the path planning problems of mobile robots using the traditional artificial potential field method, corresponding improvements were proposed. Narrowing the influence area of obstacles was applied to deal with the goal-unreachable problem caused by obstacles in the vicinity of the goal. In addition, a hierarchical treatment of the obstacle influence scope was proposed to help the robot escape from local minimum points and successfully reach the goal. The improved potential field approach is applicable to path planning of mobile robots in complicated environments, and it finds an optimized path. The effectiveness of this improved approach is verified by simulation results.
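
As a hedged illustration of the underlying idea, the sketch below implements a basic artificial potential field planner in which the repulsive force is scaled by the distance to the goal (one common fix for the goal-unreachable problem); the hierarchical escape from local minima described in the abstract is omitted, and all gains, obstacles and step sizes are assumed values.

```python
import numpy as np

def attractive_force(q, goal, k_att=1.0):
    return k_att * (goal - q)  # pulls the robot straight toward the goal

def repulsive_force(q, obstacles, goal, k_rep=100.0, d0=2.0):
    # Scaling the usual (1/d - 1/d0) term by the distance to the goal makes
    # repulsion vanish at the goal, avoiding the goal-unreachable problem.
    force = np.zeros(2)
    d_goal = np.linalg.norm(goal - q)
    for obs in obstacles:
        diff = q - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:
            force += k_rep * (1 / d - 1 / d0) / d ** 2 * (diff / d) * d_goal
    return force

def plan(start, goal, obstacles, step=0.05, max_iter=2000, tol=0.1):
    goal = np.array(goal, float)
    q = np.array(start, float)
    path = [q.copy()]
    for _ in range(max_iter):
        f = attractive_force(q, goal) + repulsive_force(q, obstacles, goal)
        q = q + step * f / (np.linalg.norm(f) + 1e-9)  # unit step along the force
        path.append(q.copy())
        if np.linalg.norm(goal - q) < tol:
            break
    return path

path = plan((0, 0), (10, 10), [np.array([5.0, 5.0]), np.array([9.5, 9.0])])
```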

5 citations


Journal Article
TL;DR: IDBN was compared with DBN on the MNIST, Binary Alphadigits and USPS datasets, and the results show that IDBN has a higher learning speed with reliable learning accuracy.
Abstract: A large amount of time is needed to optimize a whole deep belief network (DBN) using traditional global optimization algorithms, and the gradient-based fine-tuning used in DBN may get stuck in local optima. To accelerate the learning process of DBN, the extreme learning machine (ELM) was applied to the learning process of DBN. IDBN was compared with DBN on the MNIST, Binary Alphadigits and USPS datasets, and the results show that IDBN has a higher learning speed with reliable learning accuracy.
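
The paper's IDBN is not specified here, but the core of ELM, random untrained hidden weights with output weights solved by least squares, is standard and can be sketched as follows; the synthetic data, activation and hidden-layer size are illustrative assumptions.

```python
import numpy as np

def elm_train(X, y_onehot, n_hidden=100, seed=0):
    rng = np.random.default_rng(seed)
    # Hidden-layer weights are random and never trained.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                  # hidden activations
    beta = np.linalg.pinv(H) @ y_onehot     # output weights via least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy usage on random data.
X = np.random.randn(200, 20)
y = (X[:, 0] > 0).astype(int)
Y = np.eye(2)[y]                            # one-hot targets
W, b, beta = elm_train(X, Y)
acc = (elm_predict(X, W, b, beta) == y).mean()
```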

5 citations


Journal Article
Zhai Ya
TL;DR: To judge the relationship between a point and a complicated polygon correctly and rapidly when the traditional ray method fails to give a correct result, the concept of a virtual intersection point was introduced to modify the traditional ray method.
Abstract: To judge the relationship between a point and a complicated polygon correctly and rapidly when the traditional ray method fails to give a correct result, the concept of a virtual intersection point was introduced to modify the traditional ray method, and some typical examples were demonstrated to prove the correctness of the method. As to the relationship between a point and a complicated polyhedron, a section method was used to transform the judgment of the relationship between a point and a complicated polyhedron into that between a point and a complicated polygon. The verification of examples shows that the modified algorithm is fast and efficient. It can also be easily implemented in a program.
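
For reference, a minimal sketch of the conventional even-odd ray casting test is given below; the half-open vertex rule shown is one standard way of handling the ray-through-vertex degeneracy that the paper's virtual intersection point addresses, not the paper's own construction.

```python
def point_in_polygon(p, poly):
    """Even-odd ray casting: cast a ray from p in the +x direction and count
    edge crossings. The half-open test (y1 > py) != (y2 > py) counts a ray
    passing exactly through a vertex once, not twice."""
    px, py = p
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):  # edge straddles the horizontal ray
            # x-coordinate where the edge crosses the ray's supporting line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), square))  # True
print(point_in_polygon((5, 2), square))  # False
```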

4 citations


Journal Article
Liu Jian-ju
TL;DR: The numerical results show that the proposed algorithm ensures global optimality of the optimal value and improves the accuracy and efficiency of the algorithm.
Abstract: To achieve global optimality of the optimal value, fast convergence and high accuracy, an improved hybrid scale chaos optimization algorithm based on the Kent chaotic map was proposed, which consists of two stages. In the first stage, the Kent map was applied to generate the initial chaotic variables instead of the Logistic map. In the second stage, scale chaos optimization was performed first, and the Nelder-Mead simplex algorithm was then applied to obtain a more accurate solution. The numerical results show that the proposed algorithm ensures global optimality of the optimal value and improves the accuracy and efficiency of the algorithm.
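
A minimal sketch of the first stage, generating initial variables with the Kent map instead of the Logistic map, might look as follows; the map parameter, seed and search interval are assumed values.

```python
import numpy as np

def kent_map_sequence(n, x0=0.3, a=0.7):
    # Kent (tent-like) map: a piecewise-linear chaotic map on (0, 1). Its
    # iterates are more uniformly distributed than the Logistic map's, which
    # cluster near 0 and 1 -- hence more diverse initial populations.
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = x / a if x < a else (1 - x) / (1 - a)
        xs[i] = x
    return xs

# Map chaotic values in (0, 1) onto a search interval [lb, ub].
lb, ub = -10.0, 10.0
init = lb + (ub - lb) * kent_map_sequence(50)
```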

2 citations


Journal Article
TL;DR: Experimental results show that the proposed foreground detection algorithm, an improved GMM based on matching feedback, can deal with complicated changes in video monitoring scenes and extract the foreground target effectively, accurately and rapidly.
Abstract: The traditional Gaussian mixture model cannot effectively adapt to changes in complex scenes when detecting moving objects, because it uses fixed values for the background model threshold and the learning rate. A foreground detection algorithm using an improved GMM based on matching feedback was proposed. Through an analysis of the model's controlling parameters, a strategy was proposed that adjusts the background model threshold dynamically according to each pixel's matching case and adjusts the learning rate adaptively according to the feedback. Experimental results show that this method can deal with complicated changes in video monitoring scenes and extract the foreground target effectively, accurately and rapidly.
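
The paper's full mixture model is not reproduced here; the sketch below uses a single Gaussian per pixel just to illustrate the feedback idea of adapting the learning rate according to whether each pixel matches the background, with all thresholds and rates as assumed values.

```python
import numpy as np

class AdaptiveBackground:
    """Single-Gaussian-per-pixel sketch of the feedback idea: pixels that keep
    matching the background drift toward a small learning rate (stable scene),
    while frequently mismatching pixels get a larger one (scene change)."""
    def __init__(self, shape, alpha_min=0.005, alpha_max=0.05, k=2.5):
        self.mean = np.zeros(shape)
        self.var = np.full(shape, 225.0)          # initial variance (std ~ 15)
        self.alpha = np.full(shape, alpha_max)
        self.alpha_min, self.alpha_max, self.k = alpha_min, alpha_max, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        match = d2 < (self.k ** 2) * self.var     # does the pixel match?
        # Feedback: matched pixels shrink alpha, mismatched pixels grow it.
        self.alpha = np.where(match,
                              np.maximum(self.alpha * 0.95, self.alpha_min),
                              np.minimum(self.alpha * 1.5, self.alpha_max))
        a = np.where(match, self.alpha, 0.0)      # update matched pixels only
        self.mean += a * (frame - self.mean)
        self.var += a * (d2 - self.var)
        return ~match                             # foreground mask

# usage: bg = AdaptiveBackground(frame.shape); fg_mask = bg.apply(frame)
```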

2 citations


Journal Article
TL;DR: A defense model, ULDM (user loyalty defense model), for the application-layer DDoS (App-DDoS) attack was proposed based on user loyalty, and simulation results show the effectiveness of the defense model.
Abstract: A defense model, ULDM (user loyalty defense model), for the application-layer DDoS (App-DDoS) attack was proposed based on user loyalty. Normal users and attack users were identified based on their degree of loyalty. User loyalty consists of access frequency loyalty and behavior loyalty; behavior loyalty includes history behavior loyalty and current behavior loyalty. The performance of user behavior was evaluated through the request frequency and the ratio of high-load requests over a long period, yielding the user behavior loyalty. The access frequency loyalty was computed from each user's access frequency. Through policy dispatching, requests were scheduled based on user loyalty. Simulation results show the effectiveness of the defense model.

2 citations


Journal Article
TL;DR: The results show that, for some aircraft faults, the algorithm can effectively solve the outlier detection problem and find the fault data in QAR accurately.
Abstract: To find abnormal data in aircraft quick access recorder (QAR) data and predict potential problems for aircraft, the characteristics of the large amount of QAR data and the relatively stable flight parameter values were considered, and an outlier detection algorithm for QAR data was proposed. In the first stage of the algorithm, the K-means method was used to cluster the QAR data streams and generate reference points. In the second stage, the least squares method was used to fit the reference points. The distances from the reference points to the fitted aircraft parameter curve were computed to determine and identify the possible outliers. The results show that, for some aircraft faults, the algorithm can effectively solve the outlier detection problem and find the fault data in QAR accurately.
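
A hedged sketch of the described two-stage pipeline (K-means reference points, then least-squares curve fitting and distance-based flagging) on synthetic one-dimensional data might look like this; the cluster count, polynomial degree and z-score threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def qar_outliers(t, x, n_clusters=20, degree=3, z_thresh=3.0):
    """Stage 1: K-means compresses the parameter stream into reference points.
    Stage 2: a least-squares polynomial is fitted through them, and samples
    far from the fitted curve are flagged as outliers."""
    pts = np.column_stack([t, x])
    centers = KMeans(n_clusters=n_clusters, n_init=10).fit(pts).cluster_centers_
    centers = centers[np.argsort(centers[:, 0])]               # order by time
    coeffs = np.polyfit(centers[:, 0], centers[:, 1], degree)  # least squares
    resid = x - np.polyval(coeffs, t)
    z = (resid - resid.mean()) / (resid.std() + 1e-12)
    return np.abs(z) > z_thresh                                # outlier mask

# Example: a smooth flight parameter with two injected fault spikes.
t = np.linspace(0, 100, 1000)
x = 30 + 0.1 * t + np.random.normal(0, 0.5, t.size)
x[[200, 640]] += 15
mask = qar_outliers(t, x)
```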

2 citations


Journal Article
TL;DR: To decrease the false detection rate of faces in images captured from a long distance, a face detection method based on the Adaboost algorithm and a false report reduction technique was put forward.
Abstract: To decrease the false detection rate of faces in images captured from a long distance, a face detection method based on the Adaboost algorithm and a false report reduction technique was put forward. The false report reduction technique was made up of skin color detection and a variable edge board. The average red, green and blue components of the object detection window were used in the skin color detection, followed by the generation of a binary cluster image. The size of the edge board was determined by the ellipse covering the binary cluster shape. The edge board was used to filter out false reports by evaluating the outline shape of the object in the detection window. Results show that the false report rate of this method is lower than that of the plain Adaboost and Adaboost-SkinColor algorithms, and that the method is effective for face detection in images captured at a long distance.

2 citations


Journal Article
TL;DR: Compared with commonly used segmentation algorithms, the proposed pavement crack detection algorithm performs better and keeps most of the details, and both the precision and recall of crack extraction exceed 92%.
Abstract: To extract pavement cracks effectively, a pavement crack detection algorithm combining texture feature fusion and saliency detection was proposed. A morphological method was used to remove part of the lighting. The LBP, Laws and contrast texture description factors were adopted to describe pavement cracks, and the resulting descriptions were weighted and fused. The crack was then enhanced using a saliency algorithm. Through threshold segmentation, median filtering and connected-domain denoising, the final result was obtained. The experimental results show that, compared with commonly used segmentation algorithms, the proposed algorithm performs better and keeps most of the details, and both the precision and recall of crack extraction exceed 92%.

2 citations


Journal Article
TL;DR: Experimental results show that the improved localization algorithm based on the inertial sensor, the somatosensory sensor (Kinect) and the robot's motion state can improve the localization accuracy of the robot and the mapping quality of the RGBD-SLAM algorithm.
Abstract: Localization based on RGBD-SLAM may fail and generate large mapping errors when the Kinect image feature points collected are rare or absent. To solve this problem, an improved localization algorithm was proposed based on the inertial measurement unit (IMU), the somatosensory sensor (Kinect) and the motion state of the robot itself. Attitude estimates were compared and fused; at the same time, the prediction model and the observation model for position were constructed using IMU measurement data and the results of Kinect pose estimation respectively, and the robot's movement and motion command restrictions were taken as constraints in an extended Kalman filter (EKF) integration for robot localization and mapping. Experimental results show that the method can improve the localization accuracy of the robot and the mapping quality of the RGBD-SLAM algorithm.

Journal Article
Dai Le-y
TL;DR: The result of the comparison shows that the task-level data allocation mechanism performs better and is more flexible than the algorithm mapping mechanism.
Abstract: To solve the problems of cipher algorithm mapping on a multi-core cipher processor (MCP), including single-algorithm high-speed mapping, multi-algorithm parallel mapping and complex information security protocol mapping, the algorithm mapping mechanism of the MCP was analyzed, the cipher service of the MCP was partitioned into tasks, and a task-level data allocation mechanism was proposed. The comparison shows that the task-level data allocation mechanism performs better and is more flexible.

Journal Article
TL;DR: The maximum between-cluster variance algorithm was adopted to search for the optimal threshold in the gray space, overcoming the defects of the traditional algorithm, improving search efficiency and reducing the number of variance calculations.
Abstract: As it is difficult to select the optimal threshold in image segmentation and the amount of calculation is large, an improved image segmentation algorithm based on the maximum between-cluster variance was put forward. The maximum between-cluster variance algorithm was adopted to search for the optimal threshold in the gray space, overcoming the defects of the traditional algorithm. The search principle of dichotomy was applied to the Otsu algorithm, improving search efficiency and reducing the number of variance calculations. Compared with the traditional Otsu method, the improved method speeds up the calculation, reducing the amount of computation by more than a factor of ten. The experimental results show that the algorithm is well suited to applications with high real-time requirements and large amounts of data, and that it has high efficiency and practical value.
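
As an illustration of the speed-up idea, the sketch below evaluates the Otsu between-class variance on a coarse threshold grid and repeatedly halves the step around the best candidate; this assumes the criterion is smooth enough for coarse-to-fine search and is not necessarily the paper's exact dichotomy scheme.

```python
import numpy as np

def between_class_variance(p, t):
    # Otsu criterion for threshold t on a normalized 256-bin histogram p.
    w0 = p[:t].sum()
    w1 = 1.0 - w0
    if w0 == 0.0 or w1 == 0.0:
        return 0.0
    mu0 = (np.arange(t) * p[:t]).sum() / w0
    mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def otsu_coarse_to_fine(img, step0=32):
    p = np.bincount(img.ravel(), minlength=256).astype(float)
    p /= p.sum()
    lo, hi, step, best_t = 0, 256, step0, 128
    while step >= 1:
        # Evaluate only every `step`-th threshold, then zoom in on the best.
        candidates = range(max(1, lo), min(256, hi), step)
        best_t = max(candidates, key=lambda t: between_class_variance(p, t))
        lo, hi, step = best_t - step, best_t + step + 1, step // 2
    return best_t

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(otsu_coarse_to_fine(img))
```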

Journal Article
TL;DR: The test results show that the Android-based ECG information management system meets the functional and performance requirements of the design and meets physicians' demand to view and analyze ECG information.
Abstract: ECG information management systems generally run on client computers, so it is not easy for physicians to check ECG information and analyze ECG signals anywhere. To solve this problem, an Android-based ECG information management system was proposed. According to actual demands, the system was divided into a client and a server, which communicate over the HTTP protocol. Regarding the implementation of the system's client and server, some key algorithms and designs used in the implementation were discussed, such as ECG data transmission, R-wave detection, arrhythmia identification and diagnostic report generation. The test results show that the system meets the functional and performance requirements of the design and meets physicians' demand to view and analyze ECG information.

Journal Article
TL;DR: Experimental results show that the recall and precision of service discovery are better than before and are not affected when the maximum dispersion of parameter set size between service and query changes.
Abstract: To resolve the problems of the ontology-based bipartite graph matching algorithm for semantic Web service discovery presented in previous research, a new algorithm was studied. In previous algorithms, optimal bipartite graph matching was used to find the augmenting path, leading to low recall and precision in Web service discovery. To solve these problems, an ontology-based bipartite graph matching algorithm for semantic Web service discovery was proposed, which extends the optimal bipartite graph matching algorithm. A slack function was used to find the augmenting path in the equality subgraph. Experimental results show that the recall and precision of service discovery are better than before and are not affected when the maximum dispersion of parameter set size between service and query changes.

Journal Article
TL;DR: The experimental results show that the proposed method based on word combination with word-order statistics contributes greatly to Chinese text keyword extraction.
Abstract: To improve the effect of keyword extraction, a method based on word combination with word-order statistics was proposed. Through steps including word-order statistics, POS tagging, stop-word filtering and word combination, phrases or word combinations were constructed and the keyword candidates were filtered. In addition, the accuracy of the final keyword extraction was greatly improved by introducing other features. The experimental results show that the method contributes greatly to Chinese text keyword extraction.

Journal Article
TL;DR: Because 1080P video requires 6-10 Mbps of bandwidth, the Ralink RT5350 was adopted to design a wireless router based on embedded Linux that meets the bandwidth requirements and can transmit multiple videos simultaneously.
Abstract: Due to the inconvenience or high cost of laying cables for HD video monitoring points, network cameras equipped with a wireless router card based on embedded Linux were used. Videos were passed over WiFi to another camera where cables could easily be laid, and then transferred to the monitoring center via cables. Because 1080P video requires 6-10 Mbps of bandwidth, the Ralink RT5350 was adopted to design a wireless router based on embedded Linux that meets the bandwidth requirements and can transmit multiple videos simultaneously. Using Linux routing cards in this way reduces the amount of construction and its costs, allowing high-definition video monitoring points to be put into use quickly.

Journal Article
TL;DR: Experimental results show that the algorithm can generate high-accuracy dense disparity maps for different scene images, makes effective use of the disparity search space, and determines the disparity range automatically.
Abstract: To solve the fast stereo matching problem, a Bayesian probability model was proposed. Firstly, two-dimensional triangular meshes were generated from a set of robust sparse feature matching points. Then, a prior distribution of the disparity value based on the triangular planes was established to reduce ambiguity in the matching process. The algorithm makes effective use of the disparity search space and determines the disparity range automatically. It can be executed in parallel and generates accurate dense disparity maps without global optimization. Experimental results show that the algorithm can generate high-accuracy dense disparity maps for different scene images. For the Cones image at 900×750 resolution, the running time of the algorithm is 0.3228 s.

Journal Article
Wang Yi-yan
TL;DR: To eliminate differences in medical terminologies among health care organizations and achieve unified management of medical terminology, an ontology storage model was presented, which follows the MDA framework in accordance with the CTS specification.
Abstract: To eliminate differences in medical terminologies among health care organizations and achieve unified management of medical terminology, and considering that medical thesaurus information can be preserved through an ontology, an ontology storage model was presented. The model follows the MDA framework in accordance with the CTS specification to achieve effective storage of the medical terminology ontology. Ultimately, a common medical-terminology service based on the ontology was designed using the Java EE architecture. The actual deployment of the system verifies that the medical ontology can be persisted in a relational database using the ontology storage model, and that the service system can provide unified terminology management and retrieval services for the medical field.

Journal Article
Chen Tie-ju
TL;DR: Experimental results show that, compared with the earlier naive Bayes algorithm and the SVM algorithm, the RSSI algorithm can significantly reduce the classification time and the probability of misjudging legitimate emails.
Abstract: When the Bayesian algorithm is applied to spam filtering, the Bernoulli model's accuracy is low and it cannot distinguish the importance of text features, while the multinomial model requires more computation. In addition, time is wasted calculating unrelated feature elements, and the model is sensitive to low-frequency elements. To address these shortcomings, an improved feature extraction algorithm named RSSI was proposed, which not only reduces the amount of computation but also improves classification performance by calculating and comparing the occurrence frequencies of feature items, thereby reducing overfitting. Experimental results show that, compared with the earlier naive Bayes algorithm and the SVM algorithm, the RSSI algorithm can significantly reduce the classification time and the probability of misjudging legitimate emails.
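
RSSI itself is not specified in the abstract, so the sketch below substitutes a generic frequency-based feature selection (keeping terms whose document rates differ most between classes and dropping rare terms) in front of a multinomial naive Bayes classifier; the toy corpus and all parameters are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def select_by_class_frequency(X, y, top_k=1000, min_df=2):
    # Keep terms whose document frequency differs most between spam (y == 1)
    # and ham (y == 0); zero out rare terms, which invite overfitting.
    present = (X.toarray() > 0)
    score = np.abs(present[y == 1].mean(axis=0) - present[y == 0].mean(axis=0))
    score[present.sum(axis=0) < min_df] = 0.0
    return np.argsort(score)[::-1][:top_k]

texts = ["win cash prize now", "free prize click now now",
         "meeting agenda attached", "lunch at noon tomorrow"]
y = np.array([1, 1, 0, 0])                  # 1 = spam, 0 = ham
X = CountVectorizer().fit_transform(texts)
keep = select_by_class_frequency(X, y, top_k=5, min_df=1)
clf = MultinomialNB().fit(X[:, keep], y)
print(clf.predict(X[:, keep]))
```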

Journal Article
XU Jian-mi
TL;DR: The experimental results show that the proposed user reliability evaluation model can not only recognize traditional fake users (zombie fans) but also has a high recognition rate for new types of fake users.
Abstract: To address the problem of fake users, the behavioral characteristics of Sina micro-blog users were analyzed, and feature variables for distinguishing user categories were extracted according to online time, posting time, interaction degree and so on. Applying the logistic regression algorithm, a user reliability evaluation model was put forward. The experimental results show that the proposed model can not only recognize traditional fake users (zombie fans) but also has a high recognition rate for new types of fake users. Sina micro-blog users can be roughly classified according to the confidence score. The proposed model is highly practical.
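
A minimal sketch of such a model with scikit-learn follows; the feature columns and the toy data are hypothetical stand-ins for the paper's behavioral variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature matrix: one row per account, with columns such as
# [online hours/day, posts/day, fraction of posts at night,
#  follower/followee ratio, comments received per post].
X = np.array([[5.1, 3.0, 0.2, 1.80, 2.5],   # normal user
              [0.2, 40.0, 0.9, 0.01, 0.0],  # suspected zombie fan
              [4.0, 2.0, 0.3, 1.20, 1.9],
              [0.1, 55.0, 0.8, 0.02, 0.1]])
y = np.array([0, 1, 0, 1])                  # 1 = fake user

model = LogisticRegression().fit(X, y)
# predict_proba gives the confidence score used to rank/classify accounts.
print(model.predict_proba(X)[:, 1])
```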

Journal Article
TL;DR: The results show that the variable density topological optimization method can be used to increase the stiffness-to-mass ratio of the product in engineering practice and provide guidance for the design of the cockpit.
Abstract: To solve the problem that the cockpit design was too conservative and weighed too much, a variable density topological optimization method was developed to reduce the mass of the cockpit. A finite element model of the cockpit was established in the ANSYS software, and the ANSYS parametric design language was used to program the topology optimization. Topological optimization was applied to the roof, sides and bottom plate under bending and torsional working conditions to minimize the mass of the cockpit. After optimization, the stiffness of the cockpit was analyzed and compared with the original design. Through several design schemes and comparisons, the best spatial layout of the cockpit skeleton structure was obtained, minimizing the mass of the cockpit while ensuring bending and torsional stiffness. The results show that this method can be used to increase the stiffness-to-mass ratio of the product in engineering practice and provide guidance for the design of the cockpit.

Journal Article
TL;DR: Experimental results show that this method can evaluate the importance of each node in the network more objectively and accurately; it distinguishes the differences of edge nodes effectively, and its results are consistent with the facts.
Abstract: Aiming at the problem that the measurement of node importance in a network is influenced by multiple factors, a comprehensive measuring method for node importance based on the data field model was proposed. Firstly, the data field model was adopted to describe the interactions among all nodes, and each node in the network was viewed as a material particle that creates a potential field around itself. Secondly, the importance of each node was quantified according to the value of the node's assets and the relationships between interconnected nodes. At the same time, a security policy factor was introduced, and the scope of each node's potential field was revised using the accessibility matrix. Experimental results show that this method can evaluate the importance of each node in the network more objectively and accurately. It distinguishes the differences of edge nodes effectively, and its results are consistent with the facts.
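
As a hedged sketch, a data-field potential can be computed by letting each node radiate a Gaussian potential weighted by its asset value ("mass") and summing over hop distances; the security policy factor and accessibility matrix revision from the paper are omitted, and the graph, masses and influence factor sigma are assumptions.

```python
import numpy as np
import networkx as nx

def node_potentials(G, mass, sigma=1.5):
    """Each node j contributes mass[j] * exp(-(d_ij / sigma)^2) to node i's
    potential; a node's importance is the total potential it receives."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    phi = {}
    for i in G:
        phi[i] = sum(mass[j] * np.exp(-(dist[i][j] / sigma) ** 2)
                     for j in G if j != i and j in dist[i])
    return phi

G = nx.karate_club_graph()
mass = {n: 1.0 for n in G}          # equal asset value for the demo
ranking = sorted(node_potentials(G, mass).items(), key=lambda kv: -kv[1])
```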

Journal Article
TL;DR: Results show that, compared with the asynchronous clock mechanism commonly used in industry, the energy-saving effect of the ZigBee network is significantly improved using the proposed clock synchronization mechanism.
Abstract: To save energy efficiently in a wireless sensor network, a novel ZigBee-WiFi cooperative synchronous power-saving scheme was presented. The ZigBee wireless sensor node senses the periodic beacon frames of WiFi, which operates in the same radio spectrum, through the built-in received signal strength indicator (RSSI) register, and uses them as a reference clock to adjust its local clock. However, the clock deviation was large when using this method alone. A clock model based on the state space was therefore proposed, and a Kalman filter and a DLQR regulator were adopted to track and regulate the state variables. Finally, the trade-off between the clock synchronization error and the correction cycle was analyzed. Considering these factors comprehensively, the results show that, compared with the asynchronous clock mechanism commonly used in industry, the energy-saving effect of the ZigBee network is significantly improved using the proposed clock synchronization mechanism.
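
A scalar Kalman filter over the clock offset, the core of the tracking step, can be sketched as follows; the DLQR regulation stage is omitted, and the noise parameters and simulated measurements are assumed values.

```python
import numpy as np

def kalman_clock_sync(offsets, q=1e-6, r=1e-3):
    """Scalar-state sketch: track the ZigBee node's clock offset from noisy
    per-beacon measurements (WiFi beacon arrival vs. local clock). State x is
    the offset; q is process noise (oscillator drift), r is measurement noise."""
    x, p = 0.0, 1.0                   # initial offset estimate and covariance
    estimates = []
    for z in offsets:
        p = p + q                     # predict: the offset drifts randomly
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)           # update with the beacon-derived measurement
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Noisy offset measurements around a slowly drifting true offset.
true = 0.002 + 1e-5 * np.arange(200)
meas = true + np.random.normal(0, 0.03, 200)
est = kalman_clock_sync(meas)
```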

Journal Article
TL;DR: Experimental results show that the method can effectively restore the planar and sharp features of buildings and can improve the modeling quality of the surface reconstruction.
Abstract: To improve the 3D reconstruction quality of city buildings, a method was presented for surface reconstruction based on an MVS point cloud. Through statistical filtering, noise points on and around the point cloud surface were effectively removed. Point cloud partitioning and efficient RANSAC were used to complete the plane model fitting. Experimental results show that the method can effectively restore the planar and sharp features of buildings and can improve the modeling quality of the surface reconstruction. The method was applied to large-scene modeling of buildings. The results verify that it has good applicability and contributes to the construction of a virtual 3D city environment.

Journal Article
TL;DR: An improved Fourier basis consisting of symmetric frequency points was presented; through accumulation between conjugate atoms and the observation vector, the anti-noise performance of signal reconstruction with compressed sensing was enhanced, while the computation of the reconstruction algorithm was reduced.
Abstract: An improved Fourier basis consisting of symmetric frequency points was presented. Through accumulation between conjugate atoms and the observation vector, the anti-noise performance of signal reconstruction with compressed sensing was enhanced, while the computation of the reconstruction algorithm was reduced. Building on the study of traditional approaches to spectrum measurement of multi-band signals, a sub-band sparse dictionary was presented. It was proved that the sub-band sparse dictionary satisfies orthogonality, and the restricted isometry property was also proved for reconstructing the signal. The simulation results show that, compared with traditional wavelet edge detection, the approach based on the frequency sub-band dictionary performs better in anti-noise capability and spectrum sensing.

Journal Article
TL;DR: Experimental results show that the approach can enhance the classification accuracy of feature words and improve the clustering effect, taking advantage of both the global optimization of the genetic algorithm and the efficient local search of the K-means algorithm.
Abstract: To address the limitation of feature word weights in representing a text and the inefficiency of genetic K-means, a text clustering method including text preprocessing and an improved algorithm was presented. According to the weight factor and feature vector, texts were preprocessed in a way that reflects their diversity. On this basis, a genetic control factor was used to control individuals in the crossover and mutation operations, and the crossover and mutation probabilities were adaptively controlled, so that high-quality individuals were more easily passed to the next generation. The method takes advantage of both the global optimization of the genetic algorithm and the efficient local search of the K-means algorithm. Experimental results show that the approach can enhance the classification accuracy of feature words and improve the clustering effect.

Journal Article
TL;DR: Experimental results show that when the method is applied to micro-blog topic detection, the average missing rate and false detection rate of topic discovery are effectively reduced, improving the quality of topic discovery.
Abstract: To address the inaccuracy of micro-blog short text similarity calculation caused by sparse features, a multi-view method for micro-blog short text similarity was proposed. Common blocks between short texts were found according to words that are identical in form or similar in meaning, and a short text semantic similarity model based on common block sequences was established by combining the total number of words within common blocks with the order between common blocks. The creation time of micro-blog short texts and structured information such as forwarding and commenting were used to revise the semantic similarity model, yielding a novel measure of the similarity between micro-blog short texts. The algorithm was combined with the Single-Pass clustering algorithm to detect micro-blog topics. Experimental results show that when the method is applied to micro-blog topic detection, the average missing rate and false detection rate of topic discovery are effectively reduced, improving the quality of topic discovery.

Journal Article
XU Yue-zho
TL;DR: Results of analysis and simulation show that the complexity, convergence rate and convergence precision of the chaotic fruit fly algorithm are far superior to those of the leapfrog and virtual force algorithms.
Abstract: To address the low coverage achieved by intelligent algorithms in wireless sensor networks and their high complexity, a simple and efficient chaotic fruit fly algorithm was proposed. The optimization ability of the fruit fly algorithm was used to guide node layout for the sensor network; a chaotic disturbance factor was randomly generated based on the ergodicity of chaotic search, and chaotic perturbations were applied before each evolution of the fruit fly swarm so that the swarm could quickly jump out of local optima and search globally. Results of analysis and simulation show that the complexity, convergence rate and convergence precision of the chaotic fruit fly algorithm are far superior to those of the leapfrog and virtual force algorithms. It achieves better network coverage, closer to the theoretical extreme value.
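
The coverage objective is problem-specific, so the sketch below shows only a generic fruit fly search loop with a logistic-map chaotic disturbance applied before each generation; the objective, search radius and chaos parameters are illustrative assumptions.

```python
import numpy as np

def logistic_chaos(x):
    return 4.0 * x * (1.0 - x)   # chaotic sequence in (0, 1)

def chaotic_foa(f, dim=2, n_flies=20, iters=200, radius=1.0):
    """Generic fruit fly optimization loop: flies search randomly around the
    swarm center; a chaotic perturbation before each generation helps the
    swarm jump out of local optima."""
    best = np.random.uniform(-5, 5, dim)
    best_val = f(best)
    c = 0.7                                     # chaos state
    for _ in range(iters):
        c = logistic_chaos(c)
        jitter = radius * (2 * c - 1)           # chaotic disturbance factor
        flies = best + jitter + np.random.uniform(-radius, radius, (n_flies, dim))
        vals = np.apply_along_axis(f, 1, flies)
        if vals.min() < best_val:               # swarm flies to the best smell
            best, best_val = flies[vals.argmin()].copy(), vals.min()
    return best, best_val

best, val = chaotic_foa(lambda x: np.sum(x ** 2))
```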

Journal Article
TL;DR: A trajectory outlier detection ensemble algorithm based on multi-factors (TRODEM) was proposed, and a novel cumulative sum method with weighted values was used as the combination function to combine the scores.
Abstract: Most existing trajectory outlier detection algorithms fail to deal effectively with the multiple factors of a trajectory. Aiming at this problem, a trajectory outlier detection ensemble algorithm based on multi-factors (TRODEM) was proposed. The ensemble learning idea was adopted, and a novel data-centered ensemble framework was introduced into the multi-factor ensemble process. Each factor was analyzed separately, and an outlier score was assigned to each. In the ensemble step, a novel cumulative sum method with weighted values was used as the combination function to combine the scores. Experimental results show that TRODEM is robust and effective.
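
The combination step can be sketched generically: normalize each factor's outlier scores and merge them with a weighted sum; the factor scores, weights and min-max normalization here are assumptions, not TRODEM's exact definitions.

```python
import numpy as np

def ensemble_outlier_scores(factor_scores, weights):
    """Each trajectory already has one outlier score per factor (e.g. speed,
    direction, position); normalize each factor's scores so they are
    comparable, then combine them as a weighted sum."""
    S = np.asarray(factor_scores, float)   # shape: (n_factors, n_trajectories)
    mn = S.min(axis=1, keepdims=True)
    mx = S.max(axis=1, keepdims=True)
    S = (S - mn) / np.where(mx > mn, mx - mn, 1.0)   # min-max per factor
    w = np.asarray(weights, float)
    return w @ S / w.sum()                 # combined score per trajectory

scores = ensemble_outlier_scores(
    [[0.1, 0.9, 0.2],                      # factor 1, three trajectories
     [0.3, 0.8, 0.1]],                     # factor 2
    weights=[0.6, 0.4])
```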