
Showing papers in "Computer Science in 2013"


Journal ArticleDOI
TL;DR: This paper shows what tools, APIs and interfaces are available in WS-PGRADE/gUSE to customize it into an application-specific gateway.
Abstract: Enabling scientists to use remote distributed infrastructures, and to parametrize and execute common science-domain applications transparently, is a topical and highly relevant field of distributed computing. A general solution for this purpose is the concept of Science Gateways. The WS-PGRADE/gUSE system offers a transparent and web-based interface to access distributed resources (grids, clusters or clouds), extended by a powerful general-purpose workflow editor and enactment system, which can be used to compose scientific applications into data-flow based workflow structures. It is a generic web-based portal solution to organize scientific applications in a workflow structure and execute them on remote computational resources. As the portal defines nodes as black-box applications uploaded by the users, it does not provide any application-specific interface by default. In this paper we show what tools, APIs and interfaces are available in WS-PGRADE/gUSE to customize it into an application-specific gateway.

50 citations


Journal Article
TL;DR: A tight-frame based regularization approach was developed for such blind inpainting problems, and the resulting minimization problem is solved by the split Bregman algorithm.
Abstract: Image inpainting has been widely used in practice to repair damaged/missing pixels of given images. Most of the existing inpainting techniques require a priori information about the pixels. However, in certain applications, such information neither is available nor can be reliably pre-detected. This paper introduces a blind inpainting model to solve this type of problem. A tight-frame based regularization approach was developed for such blind inpainting problems, and the resulting minimization problem is solved by the split Bregman algorithm. The experiments show that our method is efficient in image inpainting.
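
A minimal sketch of the frame-based inpainting iteration solved with split Bregman, assuming an orthonormal 2-D DCT in place of a general tight frame and a known damage mask; the paper's blind model additionally estimates the damaged pixels, which this sketch does not attempt.

import numpy as np
from scipy.fft import dctn, idctn

def soft(x, t):
    """Soft-thresholding (shrinkage) operator used in the d-update."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def inpaint_split_bregman(f, mask, lam=0.05, mu=1.0, n_iter=100):
    """f: observed image; mask: 1.0 where pixels are trusted, 0.0 where damaged."""
    u = f.copy()
    d = np.zeros_like(f)          # auxiliary frame coefficients
    b = np.zeros_like(f)          # Bregman variable
    for _ in range(n_iter):
        # u-update: data fidelity on trusted pixels; with an orthonormal
        # transform W the normal equations decouple pixel-wise.
        rhs = mask * f + mu * idctn(d - b, norm='ortho')
        u = rhs / (mask + mu)
        # d-update: shrink the transform coefficients.
        Wu = dctn(u, norm='ortho')
        d = soft(Wu + b, lam / mu)
        # Bregman update.
        b = b + Wu - d
    return u

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mask = (rng.random((64, 64)) > 0.3).astype(float)   # 30% of pixels damaged
restored = inpaint_split_bregman(img * mask, mask)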

39 citations


Journal Article
TL;DR: A method which improves text similarity calculation by using the LDA model is presented; it can improve text similarity accuracy and clustering quality effectively.
Abstract: Latent Dirichlet Allocation (LDA) is an unsupervised model which has shown superiority in latent topic modeling of text data in recent research. This paper presents a method which improves text similarity calculation by using the LDA model. The method models the corpus and texts with LDA. Parameters are estimated with Gibbs sampling of MCMC, and the word probability is represented. It can mine the hidden relationships between different topics and the words in texts, obtain the topic distribution, and compute the similarity between texts. Finally, clustering experiments on the text similarity matrix are carried out to assess the clustering effect. Experimental results show that the method can improve text similarity accuracy and clustering quality effectively.
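
A compact collapsed Gibbs sampler for LDA plus cosine similarity over the inferred per-document topic mixtures gives the flavor of the pipeline; hyperparameters are illustrative and the corpus is assumed to be preprocessed into word-id lists.

import numpy as np

def lda_gibbs(docs, n_topics, n_words, alpha=0.1, beta=0.01, n_iter=200,
              rng=np.random.default_rng(0)):
    # docs: list of word-id lists. Count tables for the sampler:
    ndk = np.zeros((len(docs), n_topics))      # doc-topic counts
    nkw = np.zeros((n_topics, n_words))        # topic-word counts
    nk = np.zeros(n_topics)                    # topic totals
    z = [rng.integers(n_topics, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            ndk[d, z[d][i]] += 1; nkw[z[d][i], w] += 1; nk[z[d][i]] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                    # remove current assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_words * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k                    # resample and restore counts
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    # Per-document topic mixtures (theta):
    return (ndk + alpha) / (ndk.sum(1, keepdims=True) + n_topics * alpha)

def topic_cosine(theta):
    t = theta / np.linalg.norm(theta, axis=1, keepdims=True)
    return t @ t.T                             # document similarity matrix

docs = [[0, 1, 2, 1], [2, 3, 3, 4], [0, 1, 4, 0]]
print(topic_cosine(lda_gibbs(docs, n_topics=2, n_words=5)))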

19 citations


Journal ArticleDOI
TL;DR: Automated algorithms for detecting drifting concepts in a probability distribution of the data are presented, and the detected concept drifts are used to label the data for subsequent learning of classification models by a supervised learner.
Abstract: The task of identifying changes and irregularities in medical insurance claim payments is a difficult process, of which the traditional practice involves querying historical claims databases and flagging potential claims as normal or abnormal. Because what is considered a normal payment is usually unknown and may change over time, abnormal payments often pass undetected, only to be discovered when the payment period has passed. This paper presents the problem of on-line unsupervised learning from data streams when the distribution that generates the data changes or drifts over time. Automated algorithms for detecting drifting concepts in a probability distribution of the data are presented. The idea behind the presented drift detection methods is to transform the distribution of the data within a sliding window into a more convenient distribution. Then, the p-value of a test statistic at a given significance level can be used to infer the drift rate, adjust the window size and decide on the status of the drift. The detected concept drifts are used to label the data for subsequent learning of classification models by a supervised learner. The algorithms were tested on several synthetic and real medical claims data sets.
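
A minimal sketch of window-based drift detection in this spirit: compare a reference window with the most recent window using a two-sample Kolmogorov-Smirnov test and declare drift when the p-value falls below the significance level. Window size and alpha are illustrative; the paper additionally adapts the window size from the test statistic.

import numpy as np
from scipy.stats import ks_2samp

def detect_drifts(stream, window=200, alpha=0.01):
    """Yield indices at which a distribution change is declared."""
    ref = list(stream[:window])                 # reference window
    recent = []
    for i, x in enumerate(stream[window:], start=window):
        recent.append(x)
        if len(recent) > window:
            recent.pop(0)                       # slide the recent window
        if len(recent) == window:
            stat, p = ks_2samp(ref, recent)
            if p < alpha:                       # reject "same distribution"
                yield i
                ref = recent[:]                 # re-anchor after a drift
                recent = []

# Example: mean shift halfway through a synthetic stream.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 2000), rng.normal(2, 1, 2000)])
print(list(detect_drifts(data)))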

17 citations


Journal ArticleDOI
TL;DR: This paper recalls the basics of EMAS and describes the considered problem (Step and Flash Imprint Lithography), and shows convincing results that EMAS is more effective than a classical evolutionary algorithm.
Abstract: This paper tackles the application of evolutionary multi-agent computing to solving inverse problems. The high cost of fitness-function calls becomes a major difficulty when approaching these problems with population-based heuristics. However, evolutionary multi-agent systems (EMAS) turn out to reduce the number of fitness-function calls, which makes them a possible weapon of choice against such problems. This paper recalls the basics of EMAS, describes the considered problem (Step and Flash Imprint Lithography), and later shows convincing results that EMAS is more effective than a classical evolutionary algorithm.
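
As a concrete reference, a minimal EMAS loop: agents carry energy, lose it to better-fit peers at meetings, reproduce above an energy threshold and die when drained. Constants, the mutation operator and the quadratic stand-in objective are illustrative assumptions, not the paper's inverse-problem setup (where the gain is fewer calls to an expensive fitness function).

import random

def fitness(x):                 # stand-in objective (minimized)
    return sum(v * v for v in x)

def mutate(x, sigma=0.1):
    return [v + random.gauss(0, sigma) for v in x]

def emas(dim=5, pop=30, steps=2000, e0=10.0, transfer=1.0, repro=20.0):
    agents = [{'x': [random.uniform(-5, 5) for _ in range(dim)], 'e': e0}
              for _ in range(pop)]
    for _ in range(steps):
        if len(agents) < 2:
            break
        a, b = random.sample(agents, 2)         # agents meet pairwise
        loser, winner = sorted((a, b), key=lambda g: fitness(g['x']))[::-1]
        amount = min(transfer, loser['e'])      # energy flows to the better
        loser['e'] -= amount; winner['e'] += amount
        if winner['e'] >= repro:                # reproduction costs energy
            winner['e'] -= e0
            agents.append({'x': mutate(winner['x']), 'e': e0})
        agents = [g for g in agents if g['e'] > 0]  # death at zero energy
    return min(agents, key=lambda g: fitness(g['x']))

print(emas()['x'])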

16 citations


Journal Article
TL;DR: A new method of IDP k-means clustering is presented and proved to satisfy ε-differential privacy; it achieves much higher clustering availability than the differential-privacy k-means clustering method.
Abstract: We studied k-means privacy-preserving clustering within the framework of differential privacy. We first introduce the research status of privacy-preserving data mining and privacy-preserving clustering, briefly presenting the basic principle and methods of differential privacy. To improve the poor clustering availability of differentially private k-means, we present a new method, IDP k-means clustering, and prove that it satisfies ε-differential privacy. Our experiments show that, at the same level of privacy preservation, IDP k-means clustering achieves much higher clustering availability than the differential-privacy k-means clustering method.
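
For orientation, a sketch of the baseline differentially private k-means the abstract improves on: each iteration perturbs the per-cluster counts and coordinate sums with Laplace noise before recomputing centroids. The budget split, sensitivity bounds and [0, 1] feature scaling are illustrative assumptions; the paper's IDP refinement is not reproduced here.

import numpy as np

def dp_kmeans(X, k, epsilon, n_iter=10, rng=np.random.default_rng(0)):
    n, d = X.shape                      # assumes features scaled to [0, 1]
    eps_it = epsilon / n_iter           # split the privacy budget per iteration
    centers = X[rng.choice(n, k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(2), axis=1)
        for j in range(k):
            pts = X[labels == j]
            # Half of eps_it for the count (sensitivity 1), half for the
            # coordinate sums (L1 sensitivity d with data in [0, 1]).
            noisy_count = len(pts) + rng.laplace(0, 2 / eps_it)
            noisy_sum = pts.sum(0) + rng.laplace(0, 2 * d / eps_it, d)
            if noisy_count > 1:
                centers[j] = np.clip(noisy_sum / noisy_count, 0, 1)
    return centers, labels

X = np.random.default_rng(1).random((500, 2))
print(dp_kmeans(X, k=3, epsilon=1.0)[0])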

16 citations


Journal ArticleDOI
TL;DR: An Evolutionary Multi-agent system based computing process is subjected to a detailed analysis of its parameters in order to lay a foundation for better understanding this meta-heuristic from the practitioner's point of view.
Abstract: In this paper an Evolutionary Multi-agent system based computing process is subjected to a detailed analysis of its parameters in order to lay a foundation for better understanding this meta-heuristic from the practitioner's point of view. After reviewing the concepts of EMAS and its immunological variant, a series of experiments is shown and the influence of certain parameters on search outcomes is discussed.

15 citations


Journal Article
TL;DR: The strategic utilisation of e-learning is of utmost significance as it plays a pivotal role in the improvement of healthcare learning and knowledge transfer, especially in developing countries and in pursuit of the Millennium Development Goals.
Abstract: This study aims to explicate that the strategic utilisation of e-learning is of utmost significance, as e-learning plays a pivotal role in the improvement of healthcare learning and knowledge transfer, especially in developing countries and in pursuit of the Millennium Development Goals (MDGs). Rapid technology changes mark the learning and knowledge transfer landscape, and the swift pace of e-learning leaves healthcare providers no choice if they want to remain competitive. Human capital, an important element in the contemporary employee relations scenario, has become the most significant competitive advantage in healthcare delivery systems. As such, healthcare providers need a new strategy for the learning and training of their employees. Besides, the knowledge and competencies of healthcare providers are not only a vital component but also essential to the quality of care and the health of society. Thus, these rationales explain why today's healthcare providers are embracing e-learning. The benefits of e-learning are extremely compelling. They include a reduction in costs associated with employee travel, reduced time spent away from patients and reduced learning times. Also, this study describes the United Nations University International Institute for Global Health (UNU-IIGH) strategies, best practices and experiences in delivering e-learning to the healthcare workforce of developing countries.

14 citations


Journal ArticleDOI
TL;DR: The paper presents a new, cost-effective, volunteer-computing based platform that utilizes volunteers' web browsers as computational nodes, making use of the HTML5 web worker technology.
Abstract: The paper presents a new, cost-effective, volunteer-computing based platform. It utilizes volunteers' web browsers as computational nodes. The computational tasks are delegated to the browsers and executed in the background (independently of any user interface scripts), making use of the HTML5 web workers technology. The capabilities of the platform have been proved by experiments performed over a wide range of numbers of computational nodes (1–400).

13 citations


Journal ArticleDOI
TL;DR: The paper presents a prototype implementation of CLUO – an Open Source Intelligence (OSINT) system, which extracts and analyzes significant quantities of openly available information.
Abstract: The amount of textual information published on the Internet is considered to be in the billions of web pages, blog posts, comments, social media updates and others. Analyzing such quantities of data requires a high level of distribution, of both data and computing. This is especially true in the case of complex algorithms, often used in text mining tasks. The paper presents a prototype implementation of CLUO – an Open Source Intelligence (OSINT) system, which extracts and analyzes significant quantities of openly available information.

13 citations


Journal Article
TL;DR: It is proved that this identity-based broadcast signcryption scheme with proxy re-signature has indistinguishability against chosen multiple identities and adaptive chosen ciphertext attacks, and existential unforgeability against chosen multiple identities and message attacks, in terms of the hardness of the CBDH (computational bilinear Diffie-Hellman) and CDH (computational Diffie-Hellman) problems.
Abstract: To protect data confidentiality and integrity in cloud data sharing and other applications, an identity-based broadcast signcryption scheme with proxy re-signature is proposed. The scheme can transform a broadcast signcryption from the initial signcrypter to the re-signcrypter by executing a proxy re-signature. It is proved that the scheme has indistinguishability against chosen multiple identities and adaptive chosen ciphertext attacks, and existential unforgeability against chosen multiple identities and message attacks, in terms of the hardness of the CBDH (computational bilinear Diffie-Hellman) and CDH (computational Diffie-Hellman) problems. Finally, its application in cloud data sharing is introduced.

Journal Article
TL;DR: The paper comprehensively reviewed the issues and challenges involved in information fusion from the perspectives of both information processing requirements and fusion system design considerations, as well as the existing major fusion models and approaches.
Abstract: In recent years, a new research boom in multisource information fusion technologies has arisen both in China and abroad. This paper presents a comprehensive review of the state of the art in information fusion. Firstly, the paper explores the conceptualizations and essence of information fusion, then comprehensively reviews the issues and challenges involved in information fusion from the perspectives of both information processing requirements and fusion system design considerations, as well as the existing major fusion models and approaches; at the same time it especially analyzes future trends in research on information fusion methodologies, including some relatively new areas. Next, a wide range of information fusion applications is surveyed and some emerging application areas are specially discussed. Finally, a summary and outlook of information fusion research routes is presented.

Journal Article
TL;DR: The results show that the proposed method can significantly reduce positioning error compared with the original DV-Hop algorithm without increasing the hardware overhead of sensor nodes.
Abstract: To address the low localization accuracy of DV-Hop, the typical range-free localization algorithm for wireless sensor networks, the artificial bee colony algorithm, which has good robustness, fast convergence and outstanding performance on global optimization problems, was applied to the design of the DV-Hop algorithm, and an improved algorithm named ABDV-Hop (Artificial Bee Colony DV-Hop) is proposed. Based on the original DV-Hop algorithm, the improved algorithm uses the information of distances between nodes as well as the locations of beacon nodes. By establishing an optimization function, the locations of unknown nodes are estimated at the final stage of the algorithm. The results show that the proposed method can significantly reduce positioning error compared with the original DV-Hop algorithm without increasing the hardware overhead of sensor nodes.
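
A sketch of the underlying DV-Hop pipeline may help: hop counts flooded from beacons are converted to distances via each beacon's average hop size, and the unknown position minimizes the squared range residuals. The paper performs this minimization with an artificial bee colony; a generic Nelder-Mead optimizer stands in here, and all numbers are hypothetical.

import numpy as np
from scipy.optimize import minimize

def hop_size(beacons, hops_between):
    """Average metres-per-hop for each beacon (step 2 of DV-Hop)."""
    dists = np.linalg.norm(beacons[:, None] - beacons[None, :], axis=2)
    return dists.sum(1) / np.maximum(hops_between.sum(1), 1)

def locate(beacons, hop_sizes, hops_to_unknown):
    est_dist = hop_sizes * hops_to_unknown      # hop-count ranges
    def residual(p):
        return ((np.linalg.norm(beacons - p, axis=1) - est_dist) ** 2).sum()
    x0 = beacons.mean(0)                        # start from the centroid
    return minimize(residual, x0, method='Nelder-Mead').x

beacons = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
hops = np.array([[0, 4, 4], [4, 0, 6], [4, 6, 0]])     # hypothetical hop counts
hs = hop_size(beacons, hops)
print(locate(beacons, hs, hops_to_unknown=np.array([2, 3, 3])))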

Journal ArticleDOI
TL;DR: In this paper, a root extraction approach for Arabic words is presented, which tries to assign each Arabic word a unique root without having a database of word roots, a list of word patterns, or even a list of all the prefixes and suffixes of Arabic words.
Abstract: In this paper we present a new root-extraction approach for Arabic words. The approach tries to assign each Arabic word a unique root without having a database of word roots, a list of word patterns, or even a list of all the prefixes and suffixes of Arabic words. Unlike most Arabic rule-based stemmers, it tries to predict the letter positions that may form the word root, one by one, using rules based on the relations among the letters of an Arabic word and their placement in the word. This paper focuses on two parts of the approach. The first deals with the rules that distinguish between the Arabic definite article "ال - AL" and the permanent component "ال - AL" that may be found in any Arabic word. The second part adopts the segmentation of the word into three parts and classifies Arabic letters into groups according to their positions in each segment. The proposed approach is a system composed of several modules that cooperate to extract the word root. The approach has been tested and evaluated using the words of the Holy Quran. The results of the evaluation show a promising root extraction algorithm.

Journal Article
TL;DR: This paper describes the results from the implementation of a few different parallel sorting algorithms on GPU cards and multi-core processors, and presents a hybrid algorithm, consisting of parts executed on both platforms.
Abstract: Sorting is a common problem in computer science. There are a lot of well-known sorting algorithms created for sequential execution on a single processor. Recently, many-core and multi-core platforms have enabled the creation of wide parallel algorithms. We have standard processors that consist of multiple cores and hardware accelerators, like the GPU. Graphic cards, with their parallel architecture, provide new opportunities to speed up many algorithms. In this paper, we describe the results from the implementation of a few different parallel sorting algorithms on GPU cards and multi-core processors. Then, a hybrid algorithm will be presented, consisting of parts executed on both platforms (a standard CPU and GPU). In recent literature about the implementation of sorting algorithms in the GPU, a fair comparison between many-core and multi-core platforms is lacking. In most cases, these describe the resulting time of sorting algorithm executions on the GPU platform and a single CPU core.
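
The structure of such hybrid sorters (partition, sort runs in parallel, merge) can be sketched CPU-only in Python: worker processes stand in for the GPU's chunk sorting and a k-way merge runs on the host. Illustrative only; the paper's GPU kernels are not reproduced.

import heapq
import random
from multiprocessing import Pool

def parallel_sort(data, workers=4):
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        runs = pool.map(sorted, chunks)         # parallel chunk sort
    return list(heapq.merge(*runs))             # sequential k-way merge

if __name__ == '__main__':
    xs = [random.random() for _ in range(1_000_000)]
    assert parallel_sort(xs) == sorted(xs)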

Journal Article
TL;DR: A cloud computing security assessment index system was built up through the Delphi method, and the weight of each index was calculated with the analytic hierarchy process to assess the security level of a cloud platform.
Abstract: Security in the application and development of cloud computing is one of the greatest concerns. Aiming at the requirement of security-level quantification in cloud computing services, based on classified protection in our country and drawing on the cloud computing risk control and security assessment frameworks designed by European and American institutions, a cloud computing security assessment index system was built up through the Delphi method, and the weight of each index was calculated with the analytic hierarchy process. On top of this index system, the fuzzy comprehensive analysis method was introduced to evaluate a cloud computing instance. The case study shows that this model can effectively quantify and assess the security level of a cloud platform.
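
The two numerical steps the abstract combines are easy to sketch: AHP weights as the principal eigenvector of a pairwise-comparison matrix, then a fuzzy comprehensive evaluation composing those weights with an expert membership matrix. All matrix values below are hypothetical.

import numpy as np

def ahp_weights(A):
    """Principal right eigenvector of the comparison matrix, normalized."""
    vals, vecs = np.linalg.eig(A)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

# Three indexes compared pairwise (reciprocal matrix), e.g. data security,
# access control, compliance -- hypothetical judgments.
A = np.array([[1, 3, 5],
              [1/3, 1, 2],
              [1/5, 1/2, 1]])
w = ahp_weights(A)

# Membership of each index in the grades (high, medium, low) as judged
# by experts -- hypothetical values.
R = np.array([[0.6, 0.3, 0.1],
              [0.4, 0.4, 0.2],
              [0.2, 0.5, 0.3]])
evaluation = w @ R          # fuzzy comprehensive evaluation vector
print(evaluation)           # pick the grade with the largest membership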

Journal ArticleDOI
TL;DR: In this article, the performance of the working DG-SG DCI was measured, and a normal distribution of host performances and signs of log-normal distributions of other characteristics (CPUs, RAM, and HDD per host) were found.
Abstract: The distributed computing infrastructure (DCI) on the basis of BOINC and EDGeS-bridge technologies for high-performance distributed computing is used for porting a sequential molecular dynamics (MD) application to its parallel version for DCI with Desktop Grids (DGs) and Service Grids (SGs). The actual metrics of the working DG-SG DCI were measured, and a normal distribution of host performances and signs of log-normal distributions of other characteristics (CPUs, RAM, and HDD per host) were found. The practical feasibility and high efficiency of MD simulations on the basis of DG-SG DCI were demonstrated in an experiment with massive MD simulations for a large number of aluminum nanocrystals (~10²–10³). Statistical analysis (Kolmogorov-Smirnov test, moment analysis, and bootstrapping analysis) of the defect density distribution over the ensemble of nanocrystals showed that a change of plastic deformation mode is followed by a qualitative change of the type of defect density distribution over the ensemble of nanocrystals. Some limitations (fluctuating performance, unpredictable availability of resources, etc.) of the typical DG-SG DCI were outlined, and some advantages (high efficiency, high speedup, and low cost) were demonstrated. Deploying on DG DCI allows one to obtain new scientific quality from the simulated quantity of numerous configurations by harnessing sufficient computational power to undertake MD simulations over a wider range of physical parameters (configurations) in a much shorter timeframe.

Journal Article
TL;DR: Considering the complexity and diversity of the real environment, the paper introduces multiple path-quality constraints to improve the pheromone update rules, and shows that the improved ant colony algorithm has a good effect in dynamic path planning.
Abstract: In view of the slow convergence of the traditional ant colony algorithm and its tendency to fall into local optimal solutions, this paper proposes an improved distance heuristic factor that increases its effect on the choice of the next node, so as to enhance global search ability, avoid being trapped in local optima and improve the rate of convergence. Considering the complexity and diversity of the real environment, the paper introduces multiple path-quality constraints to improve the pheromone update rules. The simulation results show that the improved ant colony algorithm has a good effect in dynamic path planning.

Journal Article
HU Li-ru
TL;DR: Experimental results show that the algorithm is convergent and that, relative to Non-negative Matrix Factorization (NMF) and related methods, the orthogonality and sparseness of the basis matrix are better; in face recognition, it achieves higher recognition accuracy.
Abstract: To address the complexity of the iterative method for Linear Projection-Based Non-negative Matrix Factorization (LPBNMF), a method called Linear Projective Non-negative Matrix Factorization (LP-NMF) is proposed. In LP-NMF, from the viewpoint of projection and linear transformation, an objective function based on the Frobenius norm is considered. Using a Taylor series expansion, an iterative algorithm for the basis matrix and the linear transformation matrix is rigorously derived, and a proof of algorithm convergence is provided. Experimental results show that the algorithm is convergent and that, relative to Non-negative Matrix Factorization (NMF) and related methods, the orthogonality and sparseness of the basis matrix are better; in face recognition, it achieves higher recognition accuracy. The LP-NMF method is effective.
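
For contrast with LP-NMF, the standard multiplicative-update NMF baseline mentioned in the abstract can be sketched in a few lines; the projective/linear-transformation structure of LP-NMF itself is not reproduced here.

import numpy as np

def nmf(V, r, n_iter=500, eps=1e-9, rng=np.random.default_rng(0)):
    """Factor non-negative V ~ W @ H with Lee-Seung multiplicative updates."""
    n, m = V.shape
    W = rng.random((n, r)); H = rng.random((r, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # H-update (never increases error)
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # W-update
    return W, H

V = np.abs(np.random.default_rng(1).random((40, 25)))
W, H = nmf(V, r=5)
print(np.linalg.norm(V - W @ H))               # reconstruction error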

Journal Article
TL;DR: The simulation results confirm that the SRGM established by GA-DFNN has steady long-period prediction, its short-period prediction error is small, and it has some versatility.
Abstract: The parameters of a dynamic fuzzy neural network were dynamically adjusted by a genetic algorithm (GA-DFNN), and GA-DFNN was used to study the software reliability growth model (SRGM). The optimal DFNN parameters were resolved by the genetic algorithm during the DFNN training process, and a software failure data prediction model was established from the DFNN with the optimal parameters. On 3 groups of software defect data, we compared the predictive ability of the SRGM established by GA-DFNN with that of SRGMs established by a fuzzy neural network (FNN) and a BP neural network (BPN). The simulation results confirm that the SRGM established by GA-DFNN has steady long-period prediction, its short-period prediction error is small, and it has some versatility.

Journal ArticleDOI
TL;DR: Results of the implementation of a few different sorting algorithms on GPU cards and multicore processors are described, and a hybrid algorithm is presented which consists of parts executed on both platforms, a standard CPU and a GPU.
Abstract: Sorting is a common problem in computer science. There are a lot of well-known sorting algorithms created for sequential execution on a single processor. Recently, hardware platforms have enabled the creation of wide parallel algorithms. We have standard processors consisting of multiple cores and hardware accelerators like the GPU. Graphic cards, with their parallel architecture, give new possibilities to speed up many algorithms. In this paper we describe the results of implementing a few different sorting algorithms on GPU cards and multicore processors. Then a hybrid algorithm is presented, consisting of parts executed on both platforms, a standard CPU and a GPU.

Journal Article
TL;DR: In this paper, a novel way of signal processing using a group of transformations within the framework of group theory is proposed, so that instead of one specific transformation for every type of signal, different transformation combinations can be chosen for different types of signal.
Abstract: Different signal processing transforms provide us with unique decomposition capabilities. Instead of using a specific transformation for every type of signal, we propose in this paper a novel way of signal processing using a group of transformations within the framework of group theory. For different types of signal, different transformation combinations can be chosen. It is found that it is possible to process a signal at multiple resolutions and to extend this to perform edge detection, denoising, face recognition, etc., by filtering the local features. For a finite signal there should be a natural existence of a basis in its vector space. It is seen that, without any approximation, group theory lets one get close to this finite basis from different viewpoints. Dihedral groups have been demonstrated for this purpose. General Terms: signal processing, algorithm, group theory.
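
To make the group-of-transformations idea concrete, the small numpy sketch below applies a filter in every frame of the dihedral group D4 (the rotations and reflections of a square image) and averages the mapped-back responses, which makes the output invariant under those symmetries. This is an illustration of processing with a transformation group, not the paper's construction; the filter and image are hypothetical.

import numpy as np

def d4_symmetrize(img, f):
    """Average f over D4: apply each group element, filter, map back."""
    acc = np.zeros_like(img, dtype=float)
    for k in range(4):
        for flip in (False, True):
            t = np.rot90(img, k)                # forward transform g
            if flip:
                t = np.fliplr(t)
            r = f(t)                            # filter in the g-frame
            if flip:                            # inverse transform g^-1
                r = np.fliplr(r)
            acc += np.rot90(r, -k)
    return acc / 8

img = np.random.default_rng(0).random((64, 64))
# Vertical-gradient magnitude as a stand-in local feature filter:
edge = d4_symmetrize(img, lambda a: np.abs(np.gradient(a, axis=0)))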

Journal Article
TL;DR: In this paper, the authors propose model-driven techniques to extend IBM's SOMA method towards migrating legacy systems into Service-Oriented Architectures (SOA) and explore how graph-based querying and transformation techniques enable the integration of legacy assets into a new SOA and how these techniques can be integrated into the overall migration process.
Abstract: This paper proposes model-driven techniques to extend IBM’s SOMA method towards migrating legacy systems into Service-Oriented Architectures (SOA). The proposal explores how graph-based querying and transformation techniques enable the integration of legacy assets into a new SOA and how these techniques can be integrated into the overall migration process. The presented approach is applied to the identification and migration of services in an open source Java software system.

Journal ArticleDOI
TL;DR: This paper utilizes the concept of the L² and H¹ projections to adaptively generate a continuous approximation of input material data in the finite element (FE) basis, which can be used as material data for FE solvers.
Abstract: In this paper we utilize the concept of the L² and H¹ projections to adaptively generate a continuous approximation of input material data in the finite element (FE) basis. This approximation, along with a corresponding FE mesh, can be used as material data for FE solvers. We begin with a brief theoretical background, followed by a description of the hp-adaptive algorithm adopted here to gradually improve the quality of the projections. We also investigate a few distinct sample problems, apply the aforementioned algorithms and conclude with an evaluation of the numerical results.
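
For reference, a minimal statement of the L² projection the abstract relies on, in generic FE notation rather than the paper's exact formulation; the H¹ projection is obtained by adding the gradient term to the matrix entries, as noted after the display.

% L^2 projection of material data f onto V_h = span{v_1, ..., v_n}:
% find u_h in V_h such that (u_h - f, v_i)_{L^2(\Omega)} = 0 for all i.
u_h = \sum_{j=1}^{n} c_j \, v_j ,
\qquad
\sum_{j=1}^{n} \left( \int_\Omega v_j \, v_i \, \mathrm{d}\Omega \right) c_j
  = \int_\Omega f \, v_i \, \mathrm{d}\Omega ,
\qquad i = 1, \dots, n ,

i.e. the linear system M c = b with mass matrix M_{ij} = (v_j, v_i) and load vector b_i = (f, v_i). For the H¹ projection, M_{ij} becomes \int_\Omega ( v_j v_i + \nabla v_j \cdot \nabla v_i ) \, \mathrm{d}\Omega, with the matching gradient term added to b_i.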

Journal Article
Li Y
TL;DR: The presented algorithm is inspired by fundamentals of echolocation of micro bats, and intends to combine the advantages of existing algorithms into the new bat algorithm.
Abstract: The study of bionics bridges the functions, biological structures and organizational principles found in nature with our modern technologies, and numerous mathematical and meta-heuristic algorithms have been developed along with this process of transferring knowledge from life forms to human technologies. Recently, a new global optimization algorithm, called the Bat-inspired Algorithm (BA), has been developed by Yang. The algorithm is inspired by the fundamentals of the echolocation of microbats, and intends to combine the advantages of existing algorithms into the new bat algorithm. The first part of the paper is devoted to a detailed description of the existing algorithm. Subsequent sections concentrate on the performed experimental parameter studies and a comparison with an efficient particle swarm optimizer on existing benchmark functions. Finally, the implications of the results and potential topics for further research are discussed.
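
A compact sketch of the standard bat algorithm the abstract describes: frequencies tune each bat's pull toward the current best, local random walks are scaled by the mean loudness, and loudness A and pulse rate r are updated on accepted moves. Parameter values are common textbook defaults, not the paper's.

import numpy as np

def bat_algorithm(obj, dim, n=20, iters=500, fmin=0.0, fmax=2.0,
                  alpha=0.9, gamma=0.9, rng=np.random.default_rng(0)):
    x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim))
    A = np.full(n, 1.0); r = np.full(n, 0.5); r0 = r.copy()
    fit = np.apply_along_axis(obj, 1, x)
    best = x[fit.argmin()].copy()
    for t in range(1, iters + 1):
        freq = fmin + (fmax - fmin) * rng.random(n)   # per-bat frequency
        v += (x - best) * freq[:, None]
        cand = x + v
        walk = rng.random(n) > r                # local search around best
        cand[walk] = best + 0.01 * A.mean() * rng.normal(size=(walk.sum(), dim))
        cf = np.apply_along_axis(obj, 1, cand)
        accept = (cf < fit) & (rng.random(n) < A)
        x[accept], fit[accept] = cand[accept], cf[accept]
        A[accept] *= alpha                      # accepted bats get quieter ...
        r[accept] = r0[accept] * (1 - np.exp(-gamma * t))   # ... and chattier
        if fit.min() < obj(best):
            best = x[fit.argmin()].copy()
    return best, obj(best)

print(bat_algorithm(lambda z: (z ** 2).sum(), dim=5))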

Journal Article
Shao Pen
TL;DR: An improved particle swarm optimization algorithm (PSO-R) is proposed which introduces a trigonometric factor with the periodic oscillations of trigonometric functions, so that each particle undergoes strong oscillation, expanding its search space and guiding particles close to the optimal value while avoiding premature convergence and local optima.
Abstract: Rosenbrock function optimization is an unconstrained optimization problem whose global minimum lies at the bottom of a smooth, narrow valley of parabolic shape. The global minimum is very difficult to find because the function provides little information to guide an optimization algorithm. According to the characteristics of the Rosenbrock function, this paper proposes an improved particle swarm optimization (PSO) algorithm (PSO-R) which introduces a trigonometric factor with the periodic oscillations of trigonometric functions, so that each particle undergoes strong oscillation, expanding its search space, guiding particles close to the optimal value, avoiding premature convergence and local optima, and finding the global minimum of the Rosenbrock function. A large number of experimental results show that the algorithm has good function optimization performance, and it provides a new idea for optimization problems similar to the Rosenbrock function in some special fields.
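
A runnable sketch of global-best PSO on the Rosenbrock function; the sinusoidally oscillating inertia weight is one plausible reading of the paper's "trigonometric factor" (the exact PSO-R formulation is not reproduced), and all coefficients are common defaults.

import numpy as np

def rosenbrock(x):
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

def pso_r(dim=10, n=40, iters=2000, c1=2.0, c2=2.0,
          rng=np.random.default_rng(0)):
    x = rng.uniform(-2, 2, (n, dim)); v = np.zeros((n, dim))
    pbest = x.copy(); pfit = np.apply_along_axis(rosenbrock, 1, x)
    g = pbest[pfit.argmin()].copy()
    for t in range(iters):
        w = 0.5 + 0.3 * np.sin(2 * np.pi * t / 100)   # oscillating inertia
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fit = np.apply_along_axis(rosenbrock, 1, x)
        better = fit < pfit                     # update personal bests
        pbest[better], pfit[better] = x[better], fit[better]
        g = pbest[pfit.argmin()].copy()         # update global best
    return g, rosenbrock(g)

print(pso_r())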

Journal ArticleDOI
TL;DR: Some aspects of the procedures for creating and employing a mesh control space based on a discrete set of points are presented, and several methods of metric interpolation between these discrete points are inspected.
Abstract: The article concerns the problem of defining the control space from a set of discrete data (a metric description gathered from different sources) and its influence on the efficiency of the generation process with respect to 2D and 3D surface meshes. Several methods of metric interpolation between these discrete points are inspected, including an automated selection of the proper method. Some aspects of the procedures for creating and employing the mesh control space based on the discrete set of points are presented. The results of using different variations of these methods are also included.

Journal Article
TL;DR: Experimental results on FERET, CAS-PEAL-R1 and a real-scene database demonstrate that the proposed approach not only significantly raises the recognition rate and reduces computing time but also has a certain robustness to the influence of lighting.
Abstract: A novel approach to face recognition, based on HOG multi-feature fusion and Random Forest, is proposed to address the problem of low face recognition rates in complex environments. The approach introduces the HOG (Histograms of Oriented Gradients) descriptor to extract facial feature information. Firstly, a grid is set over the face image to extract holistic HOG features of the entire face; the face image is then divided into homogeneous sub-blocks, and local HOG features are extracted in the sub-blocks that contain key components of the face. After that, the dimensions of the holistic and local HOG features are reduced using 2D Principal Component Analysis (2DPCA) and Linear Discriminant Analysis (LDA), and the final classification features are formed by feature-level fusion. Finally, a random forest classifier is employed to classify the final features. Experimental results on FERET, CAS-PEAL-R1 and a real-scene database demonstrate that the proposed approach not only significantly raises the recognition rate and reduces computing time but also has a certain robustness to the influence of lighting.
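
The holistic part of the pipeline can be sketched with scikit-image and scikit-learn: HOG features fed to a random forest. The paper's local per-block HOG, 2DPCA/LDA reduction and feature-level fusion are not reproduced, and the data below is hypothetical.

import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier

def hog_features(images):
    """images: iterable of equal-sized grayscale arrays."""
    return np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for im in images])

# Hypothetical data: replace with aligned face crops and identity labels.
rng = np.random.default_rng(0)
X_img = rng.random((100, 64, 64))
y = rng.integers(0, 5, 100)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(hog_features(X_img[:80]), y[:80])
print(clf.score(hog_features(X_img[80:]), y[80:]))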

Journal ArticleDOI
TL;DR: The paper presents a concept of communication that relies on the exchange of messages between wireless pervasive devices available in the environment, and shows that commonly used standards and protocols for sharing services are not practical because they do not account for the problems that arise in this class of devices.
Abstract: An increasing number of electronic devices in our environment are equipped with radio interfaces used for exposing their functionality to, and using that of, other devices and applications. Wireless communication in this class of devices is exposed to a number of situations that may occur, including limited energy resources, equipment failures, node mobility and loss of communication between nodes. As a result, commonly used standards and protocols for sharing services are not practical and do not take the occurrence of these problems into account. The paper presents a concept of communication that relies on the exchange of messages between the wireless pervasive devices available in the environment.

Journal Article
TL;DR: This paper presents how shared memory in a GPU can be used to improve performance for Cellular Automata models by introducing modifications that take advantage of fast shared memory.
Abstract: Graphics processors (GPU – Graphic Processor Units) have recently gained a lot of interest as an efficient platform for general-purpose computation. The Cellular Automata approach, which is inherently parallel, gives the opportunity to implement high-performance simulations. This paper presents how shared memory in a GPU can be used to improve performance for Cellular Automata models. In our previous works, we proposed algorithms for a Cellular Automata model that use only the GPU's global memory. Using a profiling tool, we found bottlenecks in our approach. In this paper, we introduce modifications that take advantage of fast shared memory. The modified algorithm is presented in detail, and the results of profiling and performance tests are demonstrated. Our unique contribution is a comparison of the efficiency of the same algorithm working with global and with shared memory.
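
A sketch of the shared-memory idea using Numba's CUDA dialect, with Conway's Game of Life standing in for the paper's CA model: each block caches its tile plus a one-cell halo in on-chip shared memory, so each global cell is read once per block instead of up to nine times. The tile size, the toroidal wrap-around and the assumption that the grid dimensions are exact multiples of TPB are all illustrative.

import numpy as np
from numba import cuda, uint8

TPB = 16            # threads per block (square tile)
TILE = TPB + 2      # tile plus one-cell halo

@cuda.jit
def life_step(grid, out):
    sh = cuda.shared.array((TILE, TILE), dtype=uint8)
    x, y = cuda.grid(2)
    tx = cuda.threadIdx.x + 1
    ty = cuda.threadIdx.y + 1
    n, m = grid.shape
    sh[tx, ty] = grid[x, y]                                 # own cell
    if tx == 1:   sh[0, ty] = grid[(x - 1) % n, y]          # halo rows
    if tx == TPB: sh[TPB + 1, ty] = grid[(x + 1) % n, y]
    if ty == 1:   sh[tx, 0] = grid[x, (y - 1) % m]          # halo columns
    if ty == TPB: sh[tx, TPB + 1] = grid[x, (y + 1) % m]
    if tx == 1 and ty == 1:     sh[0, 0] = grid[(x - 1) % n, (y - 1) % m]
    if tx == 1 and ty == TPB:   sh[0, TPB + 1] = grid[(x - 1) % n, (y + 1) % m]
    if tx == TPB and ty == 1:   sh[TPB + 1, 0] = grid[(x + 1) % n, (y - 1) % m]
    if tx == TPB and ty == TPB: sh[TPB + 1, TPB + 1] = grid[(x + 1) % n, (y + 1) % m]
    cuda.syncthreads()                                      # tile fully loaded
    s = (sh[tx - 1, ty - 1] + sh[tx - 1, ty] + sh[tx - 1, ty + 1] +
         sh[tx, ty - 1] + sh[tx, ty + 1] +
         sh[tx + 1, ty - 1] + sh[tx + 1, ty] + sh[tx + 1, ty + 1])
    out[x, y] = 1 if s == 3 or (s == 2 and sh[tx, ty] == 1) else 0

n = 256
state = (np.random.default_rng(0).random((n, n)) < 0.3).astype(np.uint8)
d_a = cuda.to_device(state)
d_b = cuda.device_array_like(d_a)
for _ in range(100):
    life_step[(n // TPB, n // TPB), (TPB, TPB)](d_a, d_b)
    d_a, d_b = d_b, d_a                                     # ping-pong buffers
print(d_a.copy_to_host().sum())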