
Showing papers in "Journal of Software in 2009"


Journal Article
TL;DR: From this paper, readers will capture the current status of cloud computing, including the systems and current research, as well as its future trends.
Abstract: This paper surveys the current technologies adopted in cloud computing as well as the systems in enterprises. Cloud computing can be viewed from two different aspects. One is the cloud infrastructure, which is the building block for the upper-layer cloud applications; the other is, of course, the cloud applications themselves. This paper focuses on the cloud infrastructure, including the systems and current research. Some attractive cloud applications are also discussed. Cloud computing infrastructure has three distinct characteristics. First, the infrastructure is built on top of large-scale clusters which contain a large number of cheap PC servers. Second, the applications are co-designed with the fundamental infrastructure so that the computing resources can be maximally utilized. Third, the reliability of the whole system is achieved by software built on top of redundant hardware, instead of by hardware alone. All these technologies serve the two important goals of distributed systems: high scalability and high availability. Scalability means that the cloud infrastructure can be expanded to very large scale, even to thousands of nodes. Availability means that the services remain available even when quite a number of nodes fail. From this paper, readers will capture the current status of cloud computing as well as its future trends.

223 citations


Journal ArticleDOI
TL;DR: The attitudes and perceptions of the citizens of Kuwait, a developing country, towards the adoption of e-government services are identified and factors related to usefulness, ease of use, reforming bureaucracy, cultural and social influences, technology issues and lack of awareness are investigated.
Abstract: E-government initiatives are in their infancy in many developing countries. The success of these initiatives is dependent on government support as well as citizens’ adoption of e-government services. This research identified the attitudes and perceptions of the citizens of Kuwait, a developing country, towards the adoption of e-government services. Based on previous research exploring the determinants of the adoption of e-government services using an amended version of the UTAUT model, the study reported here investigates the factors that influence the take-up of such services. These factors are related to usefulness, ease of use, reforming bureaucracy, cultural and social influences, technology issues and lack of awareness. Conclusions and implication for decision makers are also considered in this paper.

202 citations



Journal Article
TL;DR: The essential characteristics of multi-objective optimization problems are deeply investigated, and an experimental comparison of several representative algorithms is given.
Abstract: Evolutionary multi-objective optimization (EMO), whose main task is to deal with multi-objective optimization problems by evolutionary computation, has become a hot topic in the evolutionary computation community. After briefly summarizing the EMO algorithms before 2003, the recent advances in EMO are discussed in detail, and the current research directions are summarized. On the one hand, new evolutionary paradigms have been introduced into the EMO community, such as particle swarm optimization, artificial immune systems, and estimation of distribution algorithms. On the other hand, in order to deal with many-objective optimization problems, many new dominance schemes different from traditional Pareto dominance have come forth. Furthermore, the essential characteristics of multi-objective optimization problems are deeply investigated. This paper also gives an experimental comparison of several representative algorithms. Finally, several viewpoints for the future research of EMO are proposed.
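The Pareto-dominance relation that these algorithms build on can be sketched in a few lines (an illustrative minimal example, not code from the paper; minimization of all objectives is assumed):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Five candidate solutions in a 2-objective space.
pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(pareto_front(pts))  # the three mutually non-dominated points survive
```

Many-objective dominance schemes such as epsilon-dominance modify exactly this comparison to impose a stronger ordering when the number of objectives grows.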

111 citations


Journal Article
TL;DR: A comprehensive survey of recommender system research is made to help readers understand this field, and the main difficulties and future directions are summarized.
Abstract: This paper makes a comprehensive survey of recommender system research, aiming to help readers understand this field. First, the research background is introduced, including commercial application demands, academic institutes, conferences, and journals. After formally and informally describing the recommendation problem, a comparison study is conducted based on categorized algorithms. In addition, the commonly adopted benchmark datasets and evaluation methods are exhibited, and the main difficulties and future directions are summarized.

111 citations



Journal ArticleDOI
TL;DR: The Gaussian function is found to perform better than the trapezoidal function, as it demonstrates a smoother transition in its intervals, and the achieved results were closer to the actual effort.
Abstract: In software industry Constructive Cost Model (COCOMO) is considered to be the most widely used model for effort estimation. Cost drivers have significant influence on the COCOMO and this research investigates the role of cost drivers in improving the precision of effort estimation. It is important to stress that uncertainty at the input level of the COCOMO yields uncertainty at the output, which leads to gross estimation error in the effort estimation. Fuzzy logic has been applied to the COCOMO using the symmetrical triangles and trapezoidal membership functions to represent the cost drivers. Using Trapezoidal Membership Function (TMF), a few attributes are assigned the maximum degree of compatibility when they should be assigned lower degrees. To overcome the above limitation, in this paper, it is proposed to use Gaussian Membership Function (GMF) for the cost drivers by studying the behavior of COCOMO cost drivers. The present work is based on COCOMO dataset and the experimental part of the study illustrates the approach and compares it with the standard version of the COCOMO. It has been found that Gaussian function is performing better than the trapezoidal function, as it demonstrates a smoother transition in its intervals, and the achieved results were closer to the actual effort.
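The limitation described above can be illustrated with a small sketch (the parameter values are hypothetical, not the paper's calibrated cost-driver ranges): on a trapezoid's plateau every value receives full membership, while a Gaussian assigns degree 1 only at the nominal value and decays smoothly around it.

```python
import math

def trapezoidal(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], flat at 1 on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def gaussian(x, center, sigma):
    """Gaussian membership centered at `center` with spread `sigma`."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

# A cost-driver value slightly off its nominal rating of 1.0:
# the trapezoid still reports full compatibility, the Gaussian does not.
print(trapezoidal(1.08, 0.8, 0.9, 1.1, 1.2))      # 1.0
print(round(gaussian(1.08, 1.0, 0.05), 3))        # well below 1.0
```

This is the behavior the abstract refers to: the Gaussian's smoother transition avoids assigning the maximum degree of compatibility to values that should receive less.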

90 citations


Journal ArticleDOI
TL;DR: The purpose is to use the constructed theory and practice in order to enable anywhere and anytime adaptive e-learning environments and to integrate and extend fundamental and promising theoretical and technical aspects found in the literature.
Abstract: In this article we present a review of selected literature of context-aware pervasive computing while integrating theory and practice from various disciplines in order to construct a theoretical grounding and a technical follow-up path for our future research. This paper is not meant to provide an extensive review of the literature, but rather to integrate and extend fundamental and promising theoretical and technical aspects found in the literature. Our purpose is to use the constructed theory and practice in order to enable anywhere and anytime adaptive e-learning environments. We particularly elaborate on context, adaptivity, context-aware systems, ontologies and software development issues. Furthermore, we present our viewpoint on context-aware pervasive application development, based particularly on higher abstraction, where ontologies, semantic web activities, and the web itself are of crucial importance.

86 citations


Journal ArticleDOI
TL;DR: This work introduces a pragmatic and instrumented approach to define a translational semantics and to validate it against a reference operational semantics expressed by the DSL designer and applies this approach to the XSPEM process description language in order to verify process models.
Abstract: In the context of MDE (Model-Driven Engineering), our objective is to define the semantics for a given DSL (Domain Specific Language) either to simulate its models or to check properties on them using model-checking techniques. In both cases, the purpose is to formalize the DSL semantics as it is known by the DSL designer but often in an informal way. After several experiments to define operational semantics on the one hand, and translational semantics on the other hand, we discuss both approaches and we specify in which cases these semantics seem to be judicious. As a second step, we introduce a pragmatic and instrumented approach to define a translational semantics and to validate it against a reference operational semantics expressed by the DSL designer. We apply this approach to the XSPEM process description language in order to verify process models.

75 citations



Journal ArticleDOI
TL;DR: A fuzzy number is simply an ordinary number whose precise value is somewhat uncertain; operations on fuzzy numbers can be generalized from those on crisp intervals and are defined by the extension principle.
Abstract: A fuzzy number is simply an ordinary number whose precise value is somewhat uncertain. Fuzzy numbers are used in statistics, computer programming, engineering, and experimental science. The arithmetic operators on fuzzy numbers are basic content in fuzzy mathematics. Operations on fuzzy numbers can be generalized from those on crisp intervals, and the interval operations are discussed. Multiplication of fuzzy numbers is defined by the extension principle. Based on the extension principle, the nonlinear programming method, the analytical method, the computer drawing method, and the computer simulation method are used for solving the multiplication of two fuzzy numbers. The nonlinear programming method is also precise, but it yields only a membership value for a given number, and the underlying nonlinear program is difficult to solve. The analytical method is the most precise, but deriving the α-cut intervals is hard when the membership function is complicated. The computer drawing method is simple, but it still requires calculating the α-cut intervals. The computer simulation method is the simplest and has the widest applicability, but the resulting membership function is rough. Each method is illustrated by examples.
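The α-cut route the abstract describes can be sketched for triangular fuzzy numbers (a minimal illustration; the paper's methods cover general membership functions): at each α level, both fuzzy numbers reduce to crisp intervals, which are multiplied with ordinary interval arithmetic.

```python
def interval_mul(a, b):
    """Multiply two crisp intervals: the result spans the min and max
    of the four endpoint products."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def alpha_cut(tri, alpha):
    """Alpha-cut interval of a triangular fuzzy number (left, mode, right)."""
    l, m, r = tri
    return (l + alpha * (m - l), r - alpha * (r - m))

def fuzzy_mul(t1, t2, levels=(0.0, 0.5, 1.0)):
    """Approximate the product of two triangular fuzzy numbers by
    interval multiplication at each alpha level."""
    return {a: interval_mul(alpha_cut(t1, a), alpha_cut(t2, a)) for a in levels}

# Product of "about 2" and "about 3": crisp 6 at alpha = 1,
# widening to [2, 12] at alpha = 0.
print(fuzzy_mul((1, 2, 3), (2, 3, 4)))
```

Note that the resulting α-cuts are no longer triangular (the product's sides are curved), which is exactly why the paper compares several numerical methods for recovering the membership function.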

Journal ArticleDOI
TL;DR: Experimental results show that this semi-supervised clustering method based on affinity propagation reaches its goal for complex datasets, and this method outperforms the comparative methods when there are a large number of pairwise constraints.

Journal ArticleDOI
TL;DR: A new multi-objective method for hiding sensitive association rules based on the concept of genetic algorithms is introduced, fully supporting the security of the database and keeping the utility and certainty of mined rules at the highest level.
Abstract: Extracting knowledge from large amounts of data is an important issue in data mining systems. One of the most important activities in data mining is association rule mining, and a new direction in the data mining research area is the privacy of mining. Today, association rule mining is a hot research topic in the data mining and security areas. A lot of research has been done in this area, but most of it has focused on perturbing the original database heuristically, so the final accuracy of the released database drops sharply. Besides accuracy, the main security concern in this area is the privacy of the database, which most heuristic approaches do not perfectly guarantee. In this paper, we introduce a new multi-objective method for hiding sensitive association rules based on the concept of genetic algorithms. The main purpose of this method is to fully support the security of the database while keeping the utility and certainty of the mined rules at the highest level.

Journal Article
TL;DR: A hash function with lower rate but higher efficiency is proposed and it can be built on insecure compression functions and it is shown that key schedule is a more important factor affecting the efficiency of a block-cipher-based hash function than rate.
Abstract: In this paper, a hash function with a lower rate but higher efficiency is proposed, and it can be built on insecure compression functions. The security of this scheme is proved under the black-box model, and some compression functions based on block ciphers are given to build this scheme. It is also shown that the key schedule is a more important factor affecting the efficiency of a block-cipher-based hash function than the rate. The new scheme needs only 2 keys, and its key schedule can be pre-computed. This means the new scheme need not re-schedule the keys at every step during the iterations, so its efficiency is improved.


Journal Article
TL;DR: The experiments on some real-world networks show that the algorithm requires no input parameters and can discover the intrinsic or even overlapping community structure in networks.
Abstract: Inspired by the idea of data fields, a community discovery algorithm based on topological potential is proposed. The basic idea is that a topological potential function is introduced to analytically model the virtual interaction among all nodes in a network; by regarding each community as a local high-potential area, the community structure in the network can be uncovered by detecting all local high-potential areas margined by low-potential nodes. The experiments on some real-world networks show that the algorithm requires no input parameters and can discover the intrinsic or even overlapping community structure in networks. The time complexity of the algorithm is O(m + n^(3/γ)) ~ O(n^2), where n is the number of nodes to be explored, m is the number of edges, and γ (2 < γ < 3) is a constant.

Journal ArticleDOI
TL;DR: The proposed approach provides a very abstract and generic way of programming service composition thanks to the high-order property of HOCL, and is illustrated by applying it to a simple example that aims at providing a travel organizer service based on the composition of several basic and smaller services.
Abstract: Service-based infrastructures are shaping tomorrow’s distributed computing systems by allowing the design of loosely coupled distributed applications based on the composition of services spread over a set of resources available on the Internet. Compared to previous approaches such as remote procedure call, distributed objects or components, this new paradigm makes feasible the loose coupling of software modules, encapsulated into services, by allowing a late binding to them at runtime. In this context, an important issue is how to express the composition of services while keeping this loosely coupled property. Different approaches have been proposed to express service composition, mostly using specialized languages. This article presents and explores an unconventional new approach for service composition based on a programming language, inspired by a chemical metaphor, called the Higher-Order Chemical Language (HOCL). The proposed approach provides a very abstract and generic way of programming service composition thanks to the higher-order property of HOCL. We illustrate this approach by applying it to a simple example that aims at providing a travel organizer service based on the composition of several basic and smaller services.

Journal Article
TL;DR: This article reviews the state of the art of biometric template protection technology at home and abroad, and then systematizes almost all the related research directions.
Abstract: This paper reviews the state of the art of biometric template protection technology at home and abroad, and then systematizes almost all the related research directions. First, the underlying disadvantages of traditional biometric systems and the attacks they are vulnerable to are clarified; from these, the necessity and difficulty of protecting biometric templates follow naturally. Afterwards, this paper classifies the methods and algorithms into various categories based on their manner of operation, and elaborates on some representative ones, such as Biohashing and Fuzzy Vault. In the experiments, evaluations of Biohashing and Fuzzy Vault are carried out on different fingerprint databases, and the results show the advantages and disadvantages of the Biohashing method, as well as our improved Fuzzy Fingerprint Vault's better performance and security on FVC2002 DB2.


Journal Article
TL;DR: This paper presents the basic concepts of topology properties and modeling metrics; categorizes and analyzes both AS-level and router-level models; and identifies future directions and open problems of topology modeling research.
Abstract: This paper presents the basic concepts of topology properties and modeling metrics, and categorizes and analyzes both AS-level and router-level models. Moreover, this paper summarizes current research achievements in Internet topology modeling, especially at the router level. Finally, it identifies future directions and open problems of topology modeling research.

Journal Article
TL;DR: In this paper, a new method based on C4.5 decision tree is proposed to handle the problem of connatural instability in Internet traffic classification using machine learning methods, which builds a classification model using information entropy in training data and classifies flows just by a simple search of the decision tree.
Abstract: In recent years, Internet traffic classification using machine learning has become a new direction in network measurement. Being simple and efficient, Naïve Bayes and its improved methods have been widely used in this area. But these methods depend too much on the probability distribution of the sample space, so they have connatural instability. To handle this problem, a new method based on the C4.5 decision tree is proposed in this paper. This method builds a classification model using the information entropy of the training data and classifies flows by a simple search of the decision tree. The theoretical analysis and experimental results show that there are obvious advantages in classification stability when the C4.5 decision tree method is used to classify Internet traffic.
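The entropy criterion at the heart of C4.5 can be sketched as follows (the flow feature and labels are illustrative, not the paper's data set): the tree chooses the split that most reduces the entropy of the class labels.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, feature, threshold):
    """Information gain of splitting numeric `feature` at `threshold` --
    the quantity C4.5-style trees maximize when choosing a split."""
    left = [l for r, l in zip(rows, labels) if r[feature] <= threshold]
    right = [l for r, l in zip(rows, labels) if r[feature] > threshold]
    weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
    return entropy(labels) - weighted

# Toy flows: mean packet size cleanly separates the two traffic classes,
# so splitting at 100 bytes yields the maximum gain of 1 bit.
rows = [{"pkt": 60}, {"pkt": 80}, {"pkt": 1400}, {"pkt": 1500}]
labels = ["web", "web", "p2p", "p2p"]
print(info_gain(rows, labels, "pkt", 100))  # 1.0
```

Classifying a new flow is then just a walk down the tree of such threshold tests, which is why the method avoids the distributional assumptions that make Naïve Bayes unstable.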

Journal ArticleDOI
TL;DR: This paper presents how the areas of improvement identified from assessment can be mapped to best knowledge-based story card practices for agile software development environments.
Abstract: This paper describes an ongoing process to define a suitable process improvement model for the story-card-based requirements engineering process and practices in agile software development environments. Key features of the SMM (Story card Maturity Model) process are: it solves problems related to story cards, such as requirements conflicts, missing requirements, and ambiguous requirements; it defines a standard structure for story cards; it addresses non-functional requirements from the exploration phase; and it uses a simplified and tailored assessment method for story-card-based requirements engineering practices based on the CMM, where these practices are poorly addressed. The CMM does not cover how the quality of the requirements engineering process should be secured or what activities should be present for the requirements engineering process to achieve a certain maturity level, so it is difficult to know what is not addressed or what could be done to improve the process. We also present how the areas of improvement identified from assessment can be mapped to best knowledge-based story card practices for agile software development environments.

Journal ArticleDOI
TL;DR: An algorithm for web session clustering is proposed which not only overcomes the shortcoming of traditional clustering algorithms that merely focus on partial similarities, but also decreases time and space complexity.
Abstract: The task of clustering web sessions is to group web sessions based on similarity, maximizing the intra-group similarity while minimizing the inter-group similarity. The results of web session clustering can be used in personalization, system improvement, site modification, business intelligence, usage characterization and so forth. This paper first proposes a framework for web session clustering. Then several data preparation techniques that can improve the performance of data preprocessing are presented. A new method for measuring similarity between web pages is introduced that takes into account not only the URL but also the viewing time of the visited page, and a new method to measure the similarity of web sessions using sequence alignment and web page access similarity is given in detail. Finally, an algorithm for web session clustering is proposed. This algorithm defines the number of clusters according to knowledge of the application field, takes advantage of ROCK to decide the initial data points of each cluster, and determines the criterion function according to the contributions to the overall increase in similarity made by dividing web sessions into different clusters. It not only overcomes the shortcoming of traditional clustering algorithms that merely focus on partial similarities, but also decreases time and space complexity.
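The sequence-alignment flavor of session similarity can be sketched with a longest-common-subsequence measure (an illustrative stand-in; the paper's measure also weights page viewing time and page-level similarity):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two page sequences,
    via the standard dynamic-programming table."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def session_similarity(s1, s2):
    """Order-aware similarity of two sessions (lists of page URLs):
    LCS length normalized by the longer session."""
    if not s1 or not s2:
        return 0.0
    return lcs_len(s1, s2) / max(len(s1), len(s2))

# Two sessions sharing the ordered path /home -> /item -> /cart.
print(session_similarity(["/home", "/cat", "/item", "/cart"],
                         ["/home", "/item", "/cart", "/pay"]))  # 0.75
```

Unlike a set-overlap measure, this respects the order in which pages were visited, which is the point of using sequence alignment for sessions.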

Journal ArticleDOI
TL;DR: A new collaborative filtering personalized recommendation algorithm is proposed which employs user attribute information and item attribute information, alleviating the sparsity issue in recommender systems.
Abstract: Recommender systems are web-based systems that aim at predicting a customer's interest in available products and services by relying on previously rated products and dealing with the problem of information and product overload. Collaborative filtering is the most popular recommendation technique nowadays, and it mainly employs the user-item rating data set. Traditional collaborative filtering approaches compute a similarity value between the target user and each other user from the relativity of their ratings, i.e., the set of ratings given on the same items. Based on the ratings of the most similar users, commonly referred to as neighbors, the algorithms compute recommendations for the target user. They only consider the rating information. User attribute information associated with a user's personality, and item attribute information associated with an item's content, are rarely considered in the collaborative filtering recommendation process. In this paper, a new collaborative filtering personalized recommendation algorithm is proposed which employs both user attribute information and item attribute information. This approach combines the user rating similarity and the user attribute similarity in the user-based collaborative filtering process to fill the vacant ratings where necessary, and then combines the item rating similarity and the item attribute similarity in the item-based collaborative filtering process to produce recommendations. This hybrid collaborative filtering, employing user and item attributes, can alleviate the sparsity issue in recommender systems.
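The blended similarity the abstract describes can be sketched as follows (the weight and the attribute scheme are illustrative assumptions, not the paper's formulas): when two users share few or no co-rated items, the attribute term still yields a usable similarity.

```python
import math

def cosine(u, v):
    """Cosine similarity over the co-rated items of two rating dicts;
    0.0 when the users share no rated items (the sparsity case)."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(u[i] ** 2 for i in common))
           * math.sqrt(sum(v[i] ** 2 for i in common)))
    return num / den

def attribute_similarity(a, b):
    """Fraction of demographic attributes (e.g. age band, gender) two users share."""
    return sum(a[k] == b[k] for k in a) / len(a)

def hybrid_similarity(ratings_u, ratings_v, attrs_u, attrs_v, w=0.6):
    """Weighted blend of rating similarity and attribute similarity;
    the weight w is a tunable assumption."""
    return (w * cosine(ratings_u, ratings_v)
            + (1 - w) * attribute_similarity(attrs_u, attrs_v))

# No co-rated items, but matching attributes: pure CF would give 0,
# the hybrid measure still finds the users similar.
ru, rv = {"m1": 5, "m2": 3}, {"m3": 4}
au = av = {"age": "18-24", "gender": "F"}
print(hybrid_similarity(ru, rv, au, av))
```

The same blending idea applies symmetrically on the item side, combining item rating similarity with item attribute similarity.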

Journal ArticleDOI
TL;DR: A new routing mechanism to combat the common selective packet dropping attack is proposed and Associations between nodes are used to identify and isolate the malicious nodes.
Abstract: A mobile ad hoc network (MANET) is a self-organizing, self-configuring confederation of wireless systems. MANET devices join and leave the network asynchronously at will, and there are no predefined clients or servers. The dynamic topologies, mobile communications structure, decentralized control, and anonymity create many challenges to the security of systems and network infrastructure in a MANET environment. Consequently, this extreme form of dynamic and distributed model requires a re-evaluation of conventional approaches to security enforcement. In this paper, we propose a new routing mechanism to combat the common selective packet dropping attack. Associations between nodes are used to identify and isolate the malicious nodes. Simulation results show the effectiveness of our scheme compared with a conventional scheme.

Journal ArticleDOI
TL;DR: A Chinese handwriting education system for distance education that allows students to practice anytime and anywhere, and can handle more handwriting error cases than existing methods with higher accuracy.
Abstract: We build a distance-education application, a Chinese handwriting education system, that allows students to practice anytime and anywhere. As an intelligent tutor, the system can automatically check handwriting errors, such as stroke production errors, stroke sequence errors, and stroke relationship errors, and then provide useful feedback to the student. In this paper, attributed relational graph matching is used to locate the handwriting errors, and a pruning strategy is applied to reduce the computational time. The experimental results show that our proposal can handle more handwriting error cases than existing methods, with higher accuracy.


Journal Article
TL;DR: The state-of-the-art of constrained optimization evolutionary algorithms (COEAs) is surveyed from two basic aspects of COEAs: constraint-handling techniques and evolutionary algorithms.
Abstract: Constrained optimization problems (COPs) are mathematical programming problems frequently encountered in science and engineering applications. Solving COPs has become an important research area of evolutionary computation in recent years. In this paper, the state of the art of constrained optimization evolutionary algorithms (COEAs) is surveyed from the two basic aspects of COEAs (i.e., constraint-handling techniques and evolutionary algorithms). In addition, this paper discusses some important issues of COEAs; more specifically, several typical algorithms are analyzed in detail. Based on the analyses, it is concluded that to obtain competitive results, a proper constraint-handling technique needs to be considered in conjunction with an appropriate search algorithm. Finally, the open research issues in this field are also pointed out.
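One classic constraint-handling technique that such surveys cover, the static penalty function, can be sketched as follows (the penalty coefficient and the toy problem are illustrative assumptions): infeasible solutions are charged a fitness penalty proportional to their total constraint violation, so the evolutionary search is steered back toward the feasible region.

```python
def penalized_fitness(x, objective, constraints, penalty=1e3):
    """Static-penalty constraint handling for minimization: add a penalty
    proportional to the total violation of constraints of the form g(x) <= 0."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + penalty * violation

# Toy problem: minimize f(x) = x^2 subject to x >= 1 (i.e., g(x) = 1 - x <= 0).
f = lambda x: x * x
cons = [lambda x: 1 - x]

print(penalized_fitness(2.0, f, cons))  # feasible: plain objective, 4.0
print(penalized_fitness(0.0, f, cons))  # infeasible: heavily penalized
```

The survey's conclusion is visible even in this sketch: the penalty coefficient must be matched to the search algorithm, since too small a value lets infeasible solutions win while too large a value flattens the feasible landscape.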

Journal ArticleDOI
TL;DR: Based on classic fuzzy theory, trust evaluation and dynamic routing protocols for MANETs are presented: MANETs are modeled with fuzzy inference rules, and the routing protocols are improved with fuzzy dynamic programming.
Abstract: As a kind of typical embedded system, a MANET is a multi-hop self-configuring network whose topology changes dynamically. To model the security, mobility, and dynamic changes of MANETs, trust has recently been used as a novel concept. In this paper, based on classic fuzzy theory, trust evaluation and dynamic routing protocols for MANETs are presented: MANETs are modeled with fuzzy inference rules, and the routing protocols are improved with fuzzy dynamic programming. The experiments with OPNET show that the novel fuzzy trusted DSR protocol can reduce the packet drop ratio and enhance the throughput with an acceptable end-to-end delay in MANETs.

Journal Article
TL;DR: Experimental results show that with this approach the preciseness of QoS prediction for Web services can be improved significantly.
Abstract: Consumers need to make prediction on the quality of unused Web services before selecting. Usually, this prediction is made based on other consumers’ experiences. Being aware of different QoS (quality of service) experiences of consumers, this paper proposes a QoS prediction approach. This approach makes similarity mining among consumers and QoS data, and then predicts the QoS of the unused Web services from other consumers’ experiences. Experimental results show that with this approach the preciseness of QoS prediction for Web services can be improved significantly.