
Showing papers in "Computer Technology and Development in 2009"


Journal Article
TL;DR: Presents a fast fingerprint enhancement and minutiae extraction algorithm that improves the clarity of the ridge and valley structures of input fingerprint images, based on the frequency and orientation of the local ridges, and thereby extracts correct minutiae.
Abstract: Automatic and reliable extraction of minutiae from fingerprint images is a critical step in fingerprint matching. The quality of the input fingerprint images plays an important role in the performance of automatic identification and verification algorithms. This paper presents a fast fingerprint enhancement and minutiae extraction algorithm that improves the clarity of the ridge and valley structures of the input fingerprint images based on the frequency and orientation of the local ridges, and thereby extracts correct minutiae. Experimental results show that the method performs well.

66 citations
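The local ridge orientation that drives this kind of enhancement is commonly estimated block-wise from image gradients (the standard least-squares orientation estimate). The sketch below is illustrative of that textbook step, not the paper's exact method; the function name and block size are assumptions.

```python
import numpy as np

def block_ridge_orientation(img, block=16):
    """Estimate the dominant ridge orientation of each block of a
    grayscale fingerprint image via the gradient (least-squares)
    method used in typical enhancement pipelines."""
    gy, gx = np.gradient(img.astype(float))   # row- and column-wise gradients
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            sl = (slice(i * block, (i + 1) * block),
                  slice(j * block, (j + 1) * block))
            vx = 2 * np.sum(gx[sl] * gy[sl])
            vy = np.sum(gx[sl] ** 2 - gy[sl] ** 2)
            # Ridge orientation is perpendicular to the mean gradient direction.
            theta[i, j] = 0.5 * np.arctan2(vx, vy) + np.pi / 2
    return theta
```

On a synthetic image of vertical ridges, every block's estimate comes out as pi/2 (ridges parallel to the y axis), which is the sanity check usually applied before the Gabor filtering stage.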


Journal Article
XU Hong-wei
TL;DR: Describes RBAC with a time dimension: a temporal authorization scheme and a temporal role-based access control model built on periodic constraints, and constructs a role-based access control system with time restrictions.
Abstract: Research on RBAC has received much attention in recent years. However, most of that work focuses on characteristics unrelated to time, so many real-world access control problems that involve time, especially those requiring high timeliness or strongly periodic access rules, have no good solution. This article describes RBAC with a time dimension: a temporal authorization scheme and a temporal role-based access control model are presented based on periodic constraints. A role-based access control system with time restrictions is then constructed, solving a series of problems concerning time and roles in access control and strengthening its effectiveness.

30 citations
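A minimal sketch of the time-constrained-role idea: a role is usable only inside a daily activation window, a much simplified stand-in for the paper's periodic constraints. The names `TemporalRole` and `check_access` and the daily-window model are illustrative assumptions, not the paper's model.

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class TemporalRole:
    """A role active only during a daily periodic window, a
    simplified form of a temporal RBAC periodic constraint."""
    name: str
    permissions: set
    start: time   # window opens
    end: time     # window closes

def check_access(user_roles, permission, now: datetime) -> bool:
    """Grant access only if some role of the user both holds the
    permission and is active at the current time."""
    t = now.time()
    for role in user_roles:
        if permission in role.permissions and role.start <= t <= role.end:
            return True
    return False

# Example: a hypothetical 'auditor' role valid only during office hours.
auditor = TemporalRole("auditor", {"read_logs"}, time(9, 0), time(17, 0))
```

The same check succeeds at 10:00 but fails at 20:00, which is exactly the behavior a purely role-based (time-blind) RBAC model cannot express.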


Journal Article
TL;DR: Compares and analyses several typical grid resource scheduling algorithms, points out their remaining performance deficiencies, and suggests directions for improvement.
Abstract: Grid resource scheduling is one of the key technologies on which the success of grid computing depends. This paper first classifies grid resource scheduling methods from different angles and addresses scheduling performance metrics from three aspects. It then compares and analyses several typical grid resource scheduling algorithms, including the Min-min algorithm, the Max-min algorithm, scheduling algorithms based on economic models, genetic algorithms, and simulated annealing. It points out the remaining performance deficiencies and possible next steps for improvement, and finally discusses future research directions. The survey provides a useful reference for the study of grid resource scheduling algorithms.

15 citations
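The Min-min heuristic named above fits in a few lines. This is the textbook version over an ETC (expected time to compute) matrix, not an implementation from the paper: repeatedly take the task whose earliest possible completion time is smallest and commit it to that machine.

```python
def min_min_schedule(etc):
    """Min-min scheduling: etc[t][m] is task t's runtime on machine m.
    Returns (task -> machine assignment, makespan)."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines            # machine availability times
    unassigned = set(range(n_tasks))
    assignment = {}
    while unassigned:
        best = None                       # (completion_time, task, machine)
        for t in unassigned:
            # earliest completion of task t over all machines
            ct, m = min((ready[m] + etc[t][m], m) for m in range(n_machines))
            if best is None or ct < best[0]:
                best = (ct, t, m)
        ct, t, m = best
        assignment[t] = m
        ready[m] = ct
        unassigned.remove(t)
    return assignment, max(ready)         # schedule and makespan
```

Max-min is the same loop with the outer `min` over tasks replaced by a `max`, which is why the two are usually surveyed together.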


Journal Article
TL;DR: Comparative results show that the proposed algorithm extracts corners very effectively and outperforms the Harris algorithm in corner detection.
Abstract: A study of the Harris corner detection algorithm shows that when corners are extracted from some images, several problems arise: false corners are extracted, corner information is missed, and corner positions are offset. It is also difficult to choose the threshold for non-maximum suppression. This paper proposes a dual-threshold method: one threshold is relatively large and the other relatively small when performing non-maximum suppression, yielding corner information at two thresholds for the same image. By comparing these two corner sets, missing corner information and location offsets can be largely resolved and some false corners eliminated; the remaining false corners are then removed using the SUSAN idea. Comparative results show that the algorithm extracts corners effectively and outperforms the Harris algorithm in corner detection.

14 citations


Journal Article
TL;DR: The improved simulated annealing algorithm adds a memory function that records the current best state, making it a more intelligent algorithm; it is applied to a nonlinear combinatorial optimization problem.
Abstract: This paper introduces the traditional simulated annealing algorithm by discussing its theory and procedure, analyzes its shortcomings in detail, briefly describes the influence of the key parameters, and proposes feasible improvements. An improved simulated annealing algorithm is then presented. To avoid losing the current optimal solution, the improved algorithm adds a memory function that records the best state found so far, making it a more intelligent algorithm. An adaptive temperature update function is also designed, and a dual threshold is introduced to reduce the amount of computation. Finally, both algorithms are applied to a nonlinear problem of searching for an optimal combination. Testing shows that the improved simulated annealing algorithm outperforms the traditional one.

14 citations
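The memory function is the easy part to show concretely: standard simulated annealing with a separate best-so-far record, so a late uphill move can never lose the current optimum. The cost function, neighbor generator, and schedule parameters below are illustrative placeholders, and the adaptive temperature update and dual threshold from the paper are not reproduced here.

```python
import math, random

def annealing_with_memory(cost, neighbor, x0,
                          t0=1.0, t_min=1e-3, alpha=0.95, iters=50):
    """Simulated annealing with a memory of the best state ever
    visited (the 'memory function' of the improved algorithm)."""
    x = x0
    best_x, best_c = x0, cost(x0)
    t = t0
    while t > t_min:
        for _ in range(iters):
            y = neighbor(x)
            d = cost(y) - cost(x)
            # Metropolis rule: always accept downhill, sometimes uphill.
            if d < 0 or random.random() < math.exp(-d / t):
                x = y
                if cost(x) < best_c:          # memory function
                    best_x, best_c = x, cost(x)
        t *= alpha                            # geometric cooling
    return best_x, best_c
```

Returning `best_x` instead of the final `x` is the whole point: the walk may wander uphill as it cools, but the memorized optimum is monotone non-increasing.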


Journal Article
HU Yan-jun
TL;DR: Simulation results prove that the modified algorithm greatly improves network lifetime and node energy consumption.
Abstract: Wireless sensor networks are a tool for monitoring and controlling remote environments. Due to limited power and memory, a routing protocol for wireless sensor networks must maintain little routing information and reduce power usage as much as possible. This paper improves LEACH from the standpoint of energy: the cluster-head selection of the classic LEACH algorithm is modified so that a node's remaining energy is taken into account when selecting cluster heads. Both algorithms were simulated, and the results prove that the modified algorithm greatly improves network lifetime and node energy consumption.

10 citations
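One common way to fold residual energy into LEACH's cluster-head election is to scale the classic round threshold T(n) by each node's remaining-to-initial energy ratio, so well-charged nodes are more likely to be elected. That scaling is a widely used variant and an assumption here, not necessarily the paper's exact modification.

```python
import random

def leach_threshold(p, r):
    """Classic LEACH election threshold T(n) for round r, with
    target cluster-head fraction p."""
    return p / (1 - p * (r % int(1 / p)))

def elect_cluster_heads(nodes, p, r):
    """Energy-aware election sketch: scale T(n) by residual energy.
    `nodes` maps node id -> (residual_energy, initial_energy)."""
    heads = []
    t = leach_threshold(p, r)
    for nid, (e_res, e_init) in nodes.items():
        # A depleted node (e_res == 0) can never become cluster head.
        if random.random() < t * (e_res / e_init):
            heads.append(nid)
    return heads
```

In classic LEACH the election probability ignores energy entirely, which is exactly why nodes that happen to serve as cluster heads repeatedly die early and shorten network lifetime.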


Journal Article
TL;DR: Presents an EM algorithm for missing-data problems, together with Kalman-smoothing-based parameter estimation methods for linear state space models.
Abstract: Following a description of traditional maximum likelihood estimation methods and a discussion of their disadvantages, this paper presents an EM algorithm that can be used to deal with missing-data problems; the details of the algorithm and its implementation are analyzed. EM is an iterative algorithm in which every iteration is guaranteed not to decrease the likelihood function, and it converges to a local maximum. The algorithm is so named because each iteration consists of two steps: the first seeks an expectation (the Expectation step, or E step), and the second performs a maximization (the Maximization step, or M step). The EM algorithm is mainly used to compute maximum likelihood estimates from incomplete data. The proposed EM algorithm is then applied to parameter estimation of state space models. The paper also presents Kalman-smoothing-based parameter estimation methods for linear state space models.

7 citations
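The E step/M step alternation is easiest to see on the classic missing-data example: a two-component 1-D Gaussian mixture, where the "missing data" are the component labels. This is a standard textbook illustration of EM, not the paper's state space application.

```python
import math

def em_gmm_1d(data, iters=100):
    """Minimal EM fit of a two-component 1-D Gaussian mixture.
    E step: posterior responsibilities (expectation over the missing
    component labels).  M step: closed-form maximization."""
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    weights = [0.5, 0.5]
    for _ in range(iters):
        # E step: responsibility of each component for each point.
        resp = []
        for x in data:
            w = [weights[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = w[0] + w[1]
            resp.append([w[0] / s, w[1] / s])
        # M step: re-estimate mixing weights, means, variances.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            weights[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)   # guard against variance collapse
    return mu, var, weights
```

Each pass through the loop is one EM iteration, and (as the abstract notes) the data log-likelihood never decreases from one iteration to the next.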


Journal Article
TL;DR: Some limitations of the BP neural network are analyzed and optimization methods proposed; the effect is verified through practical simulation experiments in Matlab.
Abstract: The BP (back propagation) algorithm, one of the most widely used neural network algorithms, has very high nonlinear fitting ability and can be used to predict trends in time series data in practical applications and simulations. But problems and exceptions can arise from the limitations and deficiencies of the algorithm itself, such as abnormal termination, long training time, and low accuracy. Aiming at improving performance, the algorithm is analyzed and simulated to find the corresponding causes and remedies. In this paper some limitations of the BP neural network are analyzed and optimization methods are proposed. Finally, the effect is verified through practical simulation experiments in Matlab.

7 citations


Journal Article
TL;DR: A consumption forecast model for furnace gas based on a BP neural network is designed according to the production process and simulated in Matlab; its prediction error meets the design precision requirements, so the model can serve as a reference for gas scheduling and gas balancing.
Abstract: Furnace gas is an important secondary energy source in iron and steel enterprises; if the furnace gas balance can be assessed reasonably, the goals of energy saving and sustainable development can be achieved. A consumption forecast model for furnace gas based on a BP neural network is designed according to the production process and simulated with Matlab software. The model's prediction error is reduced to meet the design precision requirements, so the prediction model can serve as a reference for gas scheduling and gas balancing. A comprehensive optimization and management method for furnace gas is also proposed. As a result, the work helps increase the utilization efficiency of furnace gas and rationalize energy consumption, achieving energy savings and reduced emissions for iron and steel enterprises.

6 citations


Journal Article
TL;DR: An improved anti-collision algorithm is proposed that recognizes more tags in less time than pure ALOHA, slotted ALOHA, and the binary search algorithm.
Abstract: Collision is a familiar problem in RFID systems, and solving it effectively is very important to the whole system. An RFID system always consists of two indispensable components: the tag and the reader. Collisions occur when many tags within the interrogation zone of a reader communicate with it simultaneously. In this paper, an improved anti-collision algorithm is proposed; using it, more tags can be recognized in less time than with pure ALOHA, slotted ALOHA, or the binary search algorithm.

6 citations
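The slotted-ALOHA baseline the paper compares against can be simulated in a few lines: in each read round every unread tag picks a slot at random, a slot with exactly one tag is a successful read, and a slot with more is a collision. This sketches the baseline only, not the paper's improved algorithm; the fixed frame size is an assumption.

```python
import random

def framed_slotted_aloha(n_tags, frame_size):
    """One read round of framed slotted ALOHA; returns the number of
    tags read (slots occupied by exactly one tag)."""
    slots = [0] * frame_size
    for _ in range(n_tags):
        slots[random.randrange(frame_size)] += 1
    return sum(1 for s in slots if s == 1)

def identify_all(n_tags, frame_size):
    """Repeat read rounds until every tag is identified; returns the
    number of rounds used, the usual figure of merit for comparing
    anti-collision schemes."""
    rounds = 0
    while n_tags > 0:
        n_tags -= framed_slotted_aloha(n_tags, frame_size)
        rounds += 1
    return rounds
```

Improved schemes typically beat this baseline by adapting the frame size to an estimate of the remaining tag population, since throughput peaks when the frame size matches the number of unread tags.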


Journal Article
TL;DR: The system can recover the visible 3D surface geometry of an object from two images, makes full use of the OpenCV library functions, and basically meets the objectives of three-dimensional reconstruction.
Abstract: Based on binocular stereo vision methods, this paper introduces a three-dimensional vision system built from a pair of industrial CCD cameras and designs a practical three-dimensional reconstruction scheme comprising several modules: image acquisition, camera calibration based on OpenCV, feature extraction and matching based on the SIFT algorithm, depth computation, and 3D model reconstruction based on OpenGL. Testing verifies that the system can recover the visible 3D surface geometry of an object from two images, makes full use of the OpenCV library functions, and basically meets the objectives of three-dimensional reconstruction; it is of particular value in urban landscape reconstruction.

Journal Article
TL;DR: An effective lossless compression algorithm from the data compression field, LZW, is studied; an improved version is designed and implemented, and test results indicate that it achieves a good compression ratio and ideal compression efficiency.
Abstract: This paper studies LZW, an effective lossless compression algorithm in the field of data compression. The principle of LZW is to replace strings of characters in the data with dictionary entry codes; the longer and more numerous the dictionary entries, the higher the compression ratio. Enlarging the dictionary capacity can therefore improve the compression ratio, but dictionary capacity is limited by computer memory, and the dictionary may fill up. Once the dictionary can no longer accept new entries, an overly stale dictionary cannot guarantee a high compression ratio. Addressing this problem with the LZW algorithm, an improved algorithm was designed and implemented. After analyzing the effect of the improvement on complexity, some typical files were selected to test the algorithm in application. The test results indicate that the improved algorithm achieves a good compression ratio and ideal compression efficiency.
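For reference, plain LZW compression (the starting point the paper improves on) looks like this: the dictionary starts with all 256 single bytes and grows by one entry, previous match plus next byte, per output code. This sketch lets the dictionary grow without bound, which is precisely the dictionary-full situation the paper's improvement addresses.

```python
def lzw_compress(data: bytes):
    """Classic LZW: emit a list of integer codes for `data`."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                      # extend the current match
        else:
            out.append(dictionary[w])   # emit code for the match so far
            dictionary[wc] = next_code  # learn the new string
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out
```

On repetitive input the learned entries quickly pay off: `b"ABABABA"` (7 bytes) compresses to four codes, two of which refer to learned multi-byte entries.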

Journal Article
TL;DR: Reviews the definition of image segmentation, studies the major approaches (thresholding, edge-based, region-based, model-based, and artificial-intelligence-based methods), and discusses their merits and drawbacks.
Abstract: Image segmentation is critical to image processing and pattern recognition. All the typical approaches are presented and discussed in this paper. The definition of image segmentation is reviewed first, followed by a study of the major approaches, including thresholding, edge-based, region-based, model-based, and artificial-intelligence-based methods. The merits and drawbacks of each method are discussed as well. In practice, these methods are often combined to achieve segmentation quality and efficiency that no single method can attain.

Journal Article
TL;DR: The method uses the maximum deviation value and mean squared deviation to filter the judgment information given by experts; an example shows that a judgment matrix constructed from the preprocessed information has better consistency.
Abstract: Constructing a judgment matrix that meets the consistency requirement is one of the key issues in AHP. To improve the consistency of the judgment matrix, the factors affecting its consistency are studied and analyzed, and a method is designed to preprocess the experts' judgment results. The method uses the maximum deviation value and mean squared deviation to filter the judgment information given by experts. If the judgment matrix deviates too far from consistency, the method either asks the experts for revised opinions or deletes the offending judgment information directly. The method not only improves the consistency of the judgment matrix but also respects and makes good use of the experts' initial judgment information. An example shows that a judgment matrix constructed from the preprocessed judgment information has better consistency.
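The consistency requirement referred to above is usually Saaty's test: compute the consistency index CI = (lambda_max - n) / (n - 1) from the matrix's principal eigenvalue, divide by the random index RI for that order, and accept the matrix when CR = CI / RI <= 0.1. A minimal sketch of that check (the acceptance rule, not the paper's preprocessing):

```python
import numpy as np

# Saaty's random consistency index RI for matrix orders 1..9.
RI = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45]

def consistency_ratio(A):
    """CR = CI / RI for an AHP judgment matrix A (order 3..9).
    CR <= 0.1 is the conventional acceptance threshold."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    lam_max = max(np.linalg.eigvals(A).real)   # principal eigenvalue
    ci = (lam_max - n) / (n - 1)
    return ci / RI[n - 1]
```

A perfectly consistent reciprocal matrix (every a_ij = w_i / w_j) has lambda_max = n exactly, so its CR is zero; preprocessing expert judgments pushes CR toward that ideal.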

Journal Article
TL;DR: For the multiple-QoS-constrained unicast routing problem, a new QoS routing algorithm combining a modified ant colony algorithm (MACA) with the artificial fish swarm algorithm (AFSA) is proposed, exploiting AFSA's fast global convergence.
Abstract: For the multiple-QoS-constrained unicast routing problem, a new QoS routing algorithm combining a modified ant colony algorithm (MACA) with the artificial fish swarm algorithm (AFSA) is proposed. The algorithm adopts hybrid ant behavior to produce diverse initial paths and optimizes the set of candidate nodes according to the multiple QoS constraints. AFSA is applied in every generation of MACA: its fast global convergence speeds up the convergence of the ant colony algorithm, while its preying behavior improves MACA's ability to avoid premature convergence. The feasibility and effectiveness of the algorithm are validated by a series of simulation results.

Journal Article
TL;DR: A dual membership function is introduced to reduce algorithm complexity and shorten training time compared with density-based fuzzy support vector machines, while also improving the SVM's accuracy.
Abstract: In this paper, an improved fuzzy membership function is proposed for training the fuzzy support vector machine (FSVM) for classification, since real-world sample sets keep growing and often contain much noise and many outliers. In the improved algorithm, sample points have different types of membership in different regions: the membership of a sample point near a class center is determined by its distance to that center, while the membership of a sample point far from the class centers is determined by the ratio of congeneric to heterogeneous points in its neighborhood. This dual membership reduces algorithm complexity and shortens training time compared with density-based fuzzy support vector machines, while also improving the SVM's accuracy.

Journal Article
TL;DR: Method one is very slow and cannot meet the demands of practical image processing; method two is more than 60 times faster than method one; method three is the fastest, about 2.3 times faster than method two.
Abstract: With ever-higher speed demands in digital image processing, fast and effective methods for image gray processing are urgently needed. This paper introduces three programming methods for gray processing based on GDI+: directly reading/writing pixel data, applying a color-transform matrix, and directly reading/writing image data in memory. The performance and programming complexity of the three methods are compared. The main conclusions are: method one is very slow and cannot meet the demands of practical image processing; method two is more than 60 times faster than method one; method three is the fastest, about 2.3 times faster than method two, and its gray processing can meet the demands of practical image processing in most cases.

Journal Article
TL;DR: The critical step of "subtracting the smallest surplus weight" is omitted in the new algorithm of this article, greatly raising its efficiency.
Abstract: Building on a point-to-point shortest path algorithm based on a cellular automaton model, it is pointed out that the critical step of "subtracting the smallest surplus weight" in the algorithm's evolution rule departs from the basic idea of classic point-to-point shortest path algorithms and should be removed. The new algorithm presented in this article omits that step, greatly raising efficiency. Finally, an example comparing the evolution steps of the two algorithms demonstrates the correctness and efficiency of the new algorithm.

Journal Article
Dong Jian-guo
TL;DR: A modified algorithm that efficiently detects the model image within a test image is proposed, with the same accuracy as the unmodified one; it is applied to image matching on a single image and a video sequence with good visual and objective results.
Abstract: In image processing and pattern recognition, detecting regions of interest in a test image and extracting them is a key technique. Because the Hausdorff distance is robust to object occlusion, image noise, clutter, etc., model-based matching using the Hausdorff distance is one of the most common approaches. To reduce the computational complexity of this approach and improve efficiency, a modified algorithm that efficiently detects the model image within a test image is proposed in this paper, with the same accuracy as the unmodified one. Finally, the proposed algorithm is applied to image matching on a single image and on a video sequence, with good visual and objective matching efficiency.
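The primitive behind this kind of matching is the Hausdorff distance between two point sets: the directed distance h(A, B) is the largest nearest-neighbor distance from a point of A to the set B, and the symmetric distance takes the maximum of both directions. A brute-force sketch (the quadratic cost of which is exactly what such modified algorithms try to reduce):

```python
def directed_hausdorff(A, B):
    """Directed Hausdorff distance h(A, B) between 2-D point sets:
    max over a in A of the Euclidean distance to its nearest b in B."""
    return max(min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                   for bx, by in B)
               for ax, ay in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance H(A, B) = max(h(A, B), h(B, A))."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))
```

In model-based matching one typically slides the model's edge points over the test image and keeps positions where the (partial or directed) Hausdorff distance falls below a threshold; robustness to occlusion comes from the fact that only the worst-matched model point matters.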

Journal Article
TL;DR: Introduces signal sparse representation, observation matrices, and recovery algorithms, focuses on the theoretical framework of compressed sensing, and discusses the remaining difficult problems.
Abstract: With the development of information technology, demands for information are increasing dramatically, posing a series of challenges in signal sampling, transmission, and storage. The emerging theory of compressed sensing (CS), presented in recent years, provides a new approach to this problem. CS first projects a signal into a lower dimension; then, using nonlinear recovery algorithms (based on convex optimization), super-resolved signals and images can be reconstructed from what appears to be highly incomplete data. This paper introduces the processing of signal sparse representation, observation matrices, and recovery algorithms, focuses on the theoretical framework of compressed sensing, and discusses the remaining difficult problems. The theory is applied to one-dimensional data and two-dimensional images, and simulation results are given. Experiments show that CS achieves a higher compression ratio and smaller compression error than traditional data compression algorithms.
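A recovery step can also be sketched greedily: orthogonal matching pursuit (OMP) is a common stand-in for the convex-optimization recovery the abstract mentions, repeatedly picking the measurement-matrix column most correlated with the residual and re-fitting by least squares. This is an illustrative substitute, not the paper's algorithm.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: recover a `sparsity`-sparse x
    from measurements y = Phi @ x, Phi being the m x n matrix."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(sparsity):          # requires sparsity >= 1
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(idx)
        # Least-squares fit restricted to the chosen support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat
```

With far fewer measurements than signal dimensions (m much smaller than n) and a suitably incoherent Phi, OMP still recovers the sparse signal exactly with high probability, which is the core promise of compressed sensing.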

Journal Article
TL;DR: A training algorithm for neural networks based on particle swarm optimization is investigated and shown to be effective through comparison with the least squares method.
Abstract: A training algorithm for neural networks based on particle swarm optimization is investigated: a parameter optimization method for the radial basis function (RBF) neural network based on the particle swarm optimization (PSO) algorithm. First, subtractive clustering is used to determine the number of units in the RBF layer. Second, the center positions and widths are optimized with the PSO algorithm. Third, the connection weights between the RBF layer and the output layer are optimized with the PSO algorithm. Comparison with the least squares method shows that the approach is effective.

Journal Article
TL;DR: Addressing the disorder of BBS network information, a high-performance method is presented for BBS sentiment classification; it helps people locate the reviews they need on a BBS and identify whether a comment is affirmative or negative.
Abstract: Addressing the disorder of BBS network information, this paper presents a high-performance method for BBS sentiment classification. It can help people locate the reviews they need on a BBS and identify whether a comment is affirmative or negative. Based on the differing probabilities of words carrying polarity, maximum entropy is used to identify polar words as features. An SVM classifier then processes the texts to judge whether each is positive or negative. Experiments show that the method achieves high performance.

Journal Article
YU Shi-peng
TL;DR: The architecture and details of the two solutions are analyzed, and their respective strengths and weaknesses are summed up through comparison.
Abstract: The dual status of IP addresses (as both identifier and locator) has made the core routing of the present Internet increasingly complex. To resolve this problem, this paper discusses separating the identifier from the locator and introduces two solutions to this question, LISP and HIP. LISP uses the host's IP address as its identifier and the router's ID as its locator. HIP extends the existing namespace and uses a global name to separate identifier and locator. The architecture and details of the two solutions are analyzed, and their respective strengths and weaknesses are summed up through comparison at the end.

Journal Article
TL;DR: To advance the production efficiency of automobile assembly lines and optimize resource allocation, the integrated optimization problem of production planning and scheduling on assembly lines is studied, and a mixed integer programming model of the problem is presented.
Abstract: To advance the production efficiency of automobile assembly lines and optimize the allocation of resources, this paper studies the integrated optimization problem of production planning and scheduling on automobile assembly lines and presents a mixed integer programming model of the problem. A rough production plan is obtained using the branch-and-bound algorithm and the simplex method. A heuristic algorithm combining simulated annealing with fast schedule simulation is then explored, and, starting from the rough production plan, its implementation is presented under three different combinations of optimizing search. Finally, the algorithm is applied to practical examples. Simulations show that it solves the problem effectively.

Journal Article
HU Hong-zhi
TL;DR: Experimental results demonstrate that optimal or nearly optimal solutions to the logistics distribution routing problem can easily be obtained using a genetic algorithm.
Abstract: To improve the market competitiveness of small and medium-sized enterprises, an intelligent solution based on a genetic algorithm is discussed in this paper. The mathematical model and solution flow of the logistics distribution routing problem are established, and the design and implementation of a logistics vehicle dispatching system based on the genetic algorithm are discussed. Key technologies, such as implementing the genetic algorithm with natural-number encoding in logistics vehicle dispatching, are also expounded. Simulation testing gives good results: the experimental calculations demonstrate that optimal or nearly optimal solutions to the logistics distribution routing problem can easily be obtained with the genetic algorithm.

Journal Article
TL;DR: The method retains most of the important characteristics of the tooth marks, making tooth-mark recognition possible, and helps lay the foundation for digitizing traditional Chinese medicine tongue diagnosis.
Abstract: This paper summarizes predecessors' experience in tongue-body segmentation. After analyzing the characteristics of tooth-marked tongues, the image is converted from the RGB model to the HSI model, and segmentation of the tooth-marked tongue image is completed using Otsu thresholding supplemented by the H, S, and V components. The method retains most of the important characteristics of the tooth marks, making tooth-mark recognition possible. It helps lay the foundation for digitizing traditional Chinese medicine (TCM) tongue diagnosis.

Journal Article
TL;DR: The paper first presents QR code technology, then gives a solution for 2-D bar code image recognition based on QR Code that combines image graying, denoising, binarization, edge detection, and image rotation to complete the preprocessing, localization, segmentation, and data extraction of bar codes.
Abstract: In modern commodity commerce, recognition technologies based on bar codes have been widely used in various applications to greatly improve work efficiency. The one-dimensional bar code is limited by its capacity: it can only identify merchandise, not describe it. In contrast, the two-dimensional bar code solves the capacity problem; it has many advantages, such as large information capacity, good reliability, secrecy, and anti-counterfeiting. This thesis first presents the technologies of QR code and then gives a solution for 2-D bar code image recognition based on QR Code. The solution combines image graying, denoising, binarization, edge detection, image rotation, etc. to complete the preprocessing, localization, segmentation, and data extraction of bar codes. Experimental results show that the solution greatly enhances the flexibility and reliability of reading.

Journal Article
TL;DR: The new component model is implemented with more modular components, distills the aspect element out of the traditional component model, and satisfactorily resolves the system disorder caused by cross-cutting concerns in traditional development methods.
Abstract: Integrating aspect-oriented programming technology into traditional component-based software development, this paper proposes an aspect-based component model. The relevant assembly strategy is discussed, and XML is used to describe the processing logic for assembling and weaving aspect-based components. The new component model is implemented with more modular components, distills the aspect element out of the traditional component model, and satisfactorily resolves the system disorder caused by cross-cutting concerns in traditional development methods, enhancing software reusability and development efficiency.

Journal Article
TL;DR: The system uses the vector space model to represent a text, a fast KNN algorithm to classify it, and reverse maximum matching to segment words, improving the accuracy of medical information classification and the efficiency of information processing.
Abstract: This paper designs and implements a medical information text categorization system based on the KNN algorithm. The system uses the vector space model to represent a text, a fast KNN algorithm to classify it, and reverse maximum matching to segment words, thereby improving the accuracy of medical information classification and the efficiency of information processing. In addition, a dataset of 582 medical documents is constructed and randomly divided into a training set of 433 documents and a test set of 149 documents. The text classification system is tested on this dataset and obtains an F1 score of 74.83%, showing good classification performance on medical information.
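The vector space model plus KNN pipeline described above reduces, at its core, to cosine similarity over bag-of-words vectors and a majority vote among the k nearest training documents. A minimal sketch under those textbook assumptions (segmentation and the paper's "fast" KNN variant are omitted; terms arrive pre-segmented):

```python
import math
from collections import Counter

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity between two bag-of-words term vectors."""
    dot = sum(u[t] * v[t] for t in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_classify(doc_terms, training, k=3):
    """KNN over the vector space model: rank training documents by
    cosine similarity to the query and majority-vote among the k
    nearest.  `training` is a list of (Counter, label) pairs."""
    q = Counter(doc_terms)
    nearest = sorted(training, key=lambda tl: cosine(q, tl[0]),
                     reverse=True)[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

Production systems usually replace raw term counts with TF-IDF weights and precompute norms, which is where the "fast KNN" refinements come in.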

Journal Article
TL;DR: Describes the main forms of user rights management, analyses early implementations of user rights, and designs flexible management and dynamic allocation of user rights that adapt to changing business needs.
Abstract: To support multi-user collaboration and the dynamic allocation of user rights, and to meet the increasingly refined division of labor so that an enterprise information platform can coordinate operations conveniently and quickly, this paper first describes the main forms of user rights management and analyses early implementations of user rights. Then, contrasting the user rights forms of general application management systems and drawing on the management models of enterprise-class information management systems, it focuses on the classification design and role allocation of users in a workflow system. It then researches and designs flexible management and dynamic allocation of user rights that adapt to changing business needs. Finally, it successfully realizes design patterns for the dynamic allocation of user rights.