
Showing papers in "Computing in 2015"


Journal ArticleDOI
TL;DR: In this article, the authors identify and discuss the major research dimensions and design issues related to engineering cloud monitoring tools and further discuss how the aforementioned research dimensions are handled by current academic research as well as by commercial monitoring tools.
Abstract: Cloud monitoring involves dynamically tracking the Quality of Service (QoS) parameters related to virtualized resources (e.g., VMs, storage, network, appliances), the physical resources they share, the applications running on them, and the data hosted on them. Configuring applications and resources in a cloud computing environment is challenging given the large number of heterogeneous cloud resources. Moreover, at any given point in time the cloud resource configuration (number of VMs, types of VMs, number of appliance instances, etc.) may need to change to meet application QoS requirements under uncertainties such as resource failures, resource overload, and workload spikes. Hence, cloud monitoring tools can assist cloud providers or application developers in: (i) keeping their resources and applications operating at peak efficiency, (ii) detecting variations in resource and application performance, (iii) accounting for service level agreement violations of certain QoS parameters, and (iv) tracking the leave and join operations of cloud resources due to failures and other dynamic configuration changes. In this paper, we identify and discuss the major research dimensions and design issues related to engineering cloud monitoring tools. We further discuss how these research dimensions and design issues are handled by current academic research as well as by commercial monitoring tools.

150 citations


Journal ArticleDOI
TL;DR: This survey aims to encompass the state-of-the-art context-aware recommender systems based on Computational Intelligence (CI) techniques, and discusses the strengths and weaknesses of each of the CI techniques used in context-aware recommender systems.
Abstract: The demand for ubiquitous information processing over the Web has called for the development of context-aware recommender systems capable of dealing with the problems of information overload and information filtering. Contemporary recommender systems combine context-awareness with personalization to offer the most accurate recommendations about different products, services, and resources. However, such systems face issues such as sparsity, cold start, and scalability that lead to imprecise recommendations. Computational Intelligence (CI) techniques not only improve recommendation accuracy but also substantially mitigate the aforementioned issues. A large number of context-aware recommender systems are based on CI techniques such as: (a) fuzzy sets, (b) artificial neural networks, (c) evolutionary computing, (d) swarm intelligence, and (e) artificial immune systems. This survey aims to encompass the state-of-the-art context-aware recommender systems based on CI techniques. A taxonomy of the CI techniques is presented, and challenges particular to context-aware recommender systems are also discussed. Moreover, the ability of each of the CI techniques to deal with the aforesaid challenges is highlighted. Furthermore, the strengths and weaknesses of each of the CI techniques used in context-aware recommender systems are discussed, and a comparison of the techniques is presented.

96 citations


Journal ArticleDOI
TL;DR: Experimental results on real-world datasets show that the average relative performance between anonymized data and original data is best with the proposed approach.
Abstract: Privacy is one of the most important issues in social network data sharing. Structural anonymization is an effective method to protect users from being re-identified, achieved through graph modifications. However, the data utility of the distorted graph structure after anonymization is a serious concern. Reducing the utility loss therefore becomes an additional objective, while k-anonymity serves as the criterion to guarantee privacy protection. Existing anonymization algorithms that modify vertex degrees usually introduce a large amount of distortion to the original social network graph. In this paper, we present a k-degree anonymity algorithm based on vertex and edge modification that includes two phases: first, finding the optimal target degree of each vertex; second, selecting the candidate vertices whose degree will be increased and adding edges between vertices to satisfy the requirement. The community structure of the social network and the path length between vertices are used to evaluate the anonymization methods. Experimental results on real-world datasets show that the average relative performance between anonymized data and original data is best with our approach.
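
To make the two-phase idea concrete, here is a minimal Python sketch; it is not the authors' exact procedure. The grouping heuristic used for phase one and the greedy edge addition used for phase two are illustrative stand-ins, and the names (k_degree_targets, add_edges, degrees, targets) are hypothetical.

```python
def k_degree_targets(degrees, k):
    """Phase 1 (greedy sketch): assign each vertex a target degree so that
    every target value is shared by at least k vertices."""
    order = sorted(degrees, key=degrees.get, reverse=True)   # vertices by degree, descending
    targets = {}
    for i in range(0, len(order), k):
        group = order[i:i + k]
        if len(group) < k and targets:      # fold a too-small tail group
            group = order[i - k:]           # into the previous one
        top = max(degrees[v] for v in group)
        for v in group:
            targets[v] = top                # group target = group's maximum degree
    return targets

def add_edges(edges, degrees, targets):
    """Phase 2 (greedy sketch): add edges between vertices whose degree is
    still below their target, never duplicating an existing edge."""
    need = {v: targets[v] - degrees[v] for v in degrees if targets[v] > degrees[v]}
    existing = {frozenset(e) for e in edges}
    new_edges = []
    todo = sorted(need, key=need.get, reverse=True)
    for i, u in enumerate(todo):
        for v in todo[i + 1:]:
            if need[u] == 0:
                break
            if need[v] > 0 and frozenset((u, v)) not in existing:
                new_edges.append((u, v))
                existing.add(frozenset((u, v)))
                need[u] -= 1
                need[v] -= 1
    return new_edges
```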

82 citations


Journal ArticleDOI
TL;DR: A product ranking system that facilitates the online shopping experience by analyzing the reviews for sentiments, evaluating their usefulness, extracting and weighing different product features and aspects, ranking it among similar comparable products, and finally creating a unified rank for each product is proposed.
Abstract: Online shopping generates billions of dollars in revenues, including both physical goods and online services. Product images and the associated descriptions are the two main sources of information used by shoppers to gain knowledge about a product. However, these two pieces of information may not always present a true picture of the product. Images can be deceiving, and descriptions can be overwhelming or cryptic. Moreover, the relative rank of these products among their peers may lead to inconsistencies. Hence, a useful and widely used piece of information is "user reviews". A number of vendors like Amazon have created whole ecosystems around user reviews, thereby boosting their revenues. However, extracting the relevant and useful information out of the plethora of reviews is not straightforward and is a very tedious job. In this paper we propose a product ranking system that facilitates the online shopping experience by analyzing the reviews for sentiments, evaluating their usefulness, extracting and weighing different product features and aspects, ranking each product among similar comparable products, and finally creating a unified rank for each product. Experimental results show the usefulness of our proposed approach in providing an effective and reliable online shopping experience in comparison with similar approaches.
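
As an illustration of how such a pipeline could combine its signals, here is a hedged Python sketch that weights per-aspect sentiment scores by review helpfulness and aggregates them into a unified product rank. The input format, the helpfulness weighting, and the names (unified_score, rank_products, aspect_weights) are assumptions made for the example, not the paper's actual model.

```python
def unified_score(reviews, aspect_weights):
    """Combine per-aspect sentiment, weighted by review helpfulness, into one score.

    reviews: list of dicts like
      {"helpful_votes": 12, "aspects": {"battery": 0.8, "screen": -0.2}}
    aspect_weights: relative importance of each aspect, e.g. {"battery": 0.6, ...}
    """
    aspect_scores = {a: [] for a in aspect_weights}
    for r in reviews:
        w = 1 + r.get("helpful_votes", 0)            # helpfulness acts as a vote weight
        for aspect, sentiment in r["aspects"].items():
            if aspect in aspect_scores:
                aspect_scores[aspect].append((sentiment, w))
    score = 0.0
    for aspect, pairs in aspect_scores.items():
        if pairs:
            weighted = sum(s * w for s, w in pairs) / sum(w for _, w in pairs)
            score += aspect_weights[aspect] * weighted
    return score

def rank_products(product_reviews, aspect_weights):
    """Rank comparable products by their unified score (higher is better)."""
    scores = {p: unified_score(rs, aspect_weights) for p, rs in product_reviews.items()}
    return sorted(scores, key=scores.get, reverse=True)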

71 citations


Journal ArticleDOI
TL;DR: This paper applies CEP2U (Complex Event Processing under Uncertainty), a novel model for dealing with uncertainty in CEP, to an existing CEP language, TESLA, showing how it seamlessly integrates with modern rule languages by supporting all the operators they commonly offer.
Abstract: Several application domains involve detecting complex situations and reacting to them. This calls for a Complex Event Processing (CEP) engine specifically designed to process low-level event notifications in a timely manner and identify higher-level composite events according to a set of user-defined rules. Several CEP engines and accompanying rule languages have been proposed. Their primary focus on performance often led to an oversimplified modeling of the external world where events happen, which is not suited to satisfy the demands of real-life applications. In particular, they are unable to consider, model, and propagate the uncertainty that exists in most scenarios. Moving from this premise, we present CEP2U (Complex Event Processing under Uncertainty), a novel model for dealing with uncertainty in CEP. We apply CEP2U to an existing CEP language, TESLA, showing how it seamlessly integrates with modern rule languages by supporting all the operators they commonly offer. Moreover, we implement CEP2U on top of the T-Rex CEP engine and perform a detailed study of its performance, measuring a limited overhead that demonstrates its practical applicability. The discussion presented in this paper, together with the experiments we conducted, shows how CEP2U provides a valuable combination of expressiveness, efficiency, and ease of use.

62 citations


Journal ArticleDOI
TL;DR: A survey of the literature on software assurance practices for mobile applications, describing them and assessing their contribution and success is presented, opening an area for further research that may determine if practitioners have followed the proposed assurance paradigms.
Abstract: Mobile software applications have to cope with a particular environment that involves small size, limited resources, high autonomy requirements, competitive business models, and many other challenges. To provide development guidelines that respond to these needs, several practices have been introduced; however, it is not clear how these guidelines may contribute to solving the issues present in the mobile domain. Furthermore, the rapid evolution of the mobile ecosystem challenges many of the premises upon which the proposed practices were designed. In this paper, we present a survey of the literature on software assurance practices for mobile applications, with the objective of describing them and assessing their contribution and success. We identified, organized, and reviewed a body of research that spans three levels: software development processes, software product assurance practices, and software implementation practices. By carrying out this literature survey, we reviewed the different approaches that Software Engineering researchers have provided to address the needs that arise in the mobile software development arena. Moreover, we review the evolution of these practices, identifying how the constant changes and modernization of the mobile execution environment have impacted the methods proposed in the literature. Finally, we introduce a discussion on the application of these practices in a real productive setting, opening an area for further research that may determine whether practitioners have followed the proposed assurance paradigms.

58 citations


Journal ArticleDOI
TL;DR: The study concludes that Node.js offers client-server development integration, aiding code reusability in web applications, and is the perfect tool for developing fast, scalable network applications.
Abstract: We examine the implications of end-to-end web application development in the social web era. The paper describes a distributed architecture suitable for modern web application development, as well as the interactivity components associated with it. Furthermore, we conducted a series of stress tests on popular server-side technologies. The PHP/Apache stack was found inefficient in addressing the increasing demand in network traffic. Nginx was found more than 2.5 times faster in input/output (I/O) operations than Apache, whereas Node.js outperformed both. Node.js, although excellent in I/O operations and resource utilization, was found lacking in serving static files using its built-in HTTP server, while Nginx performed well at this task. So, in order to address efficiency, an Nginx server could be placed in front to proxy static file requests, allowing the Node.js processes to handle only dynamic content. Such a configuration can offer a better infrastructure in terms of efficiency and scalability, replacing the aged PHP/Apache stack. Furthermore, we have found that building cross-platform applications based on web technologies is both feasible and highly productive, especially when addressing stationary and mobile devices and the fragmentation among them. Our study concludes that Node.js offers client-server development integration, aiding code reusability in web applications, and is the perfect tool for developing fast, scalable network applications.

58 citations


Journal ArticleDOI
TL;DR: The results highlight that improvements in availability are not significant when increasing the system to more than two clusters, and that it takes 18 months, on average, for these private cloud architectures to pay off the cost equivalent to the computational capacity rented from a public cloud.
Abstract: High availability in cloud computing services is essential for maintaining customer confidence and avoiding revenue losses due to SLA violation penalties. Since the software and hardware components of cloud infrastructures may have limited reliability, the use of redundant components and multiple clusters may be required to achieve the expected level of dependability while also increasing the computational capacity. A drawback of such improvements is the corresponding impact on capital expenses and the increase in acquisition and operational costs. This paper presents availability models for private cloud architectures based on the Eucalyptus platform, and presents a comparison of costs between these architectures and a similar infrastructure rented from a public cloud provider. Metrics for capacity-oriented availability and system steady-state availability are used to compare architectures with distinct numbers of clusters. A heterogeneous hierarchical modeling approach is employed to represent the systems, considering both hardware and software failures. The results highlight that improvements in availability are not significant when increasing the system to more than two clusters. The analysis also shows that the average available capacity is close to the maximum possible capacity in all architectures, and that it takes 18 months, on average, for these private cloud architectures to pay off the cost equivalent to the computational capacity rented from a public cloud.
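
The following Python sketch illustrates the kind of quantities involved, under simple independence assumptions rather than the paper's hierarchical models: steady-state availability from MTTF/MTTR, series/parallel composition, and a capacity-oriented availability for two equal clusters. All numeric values are made up for illustration and are not taken from the paper.

```python
def availability(mttf, mttr):
    """Steady-state availability of a single component."""
    return mttf / (mttf + mttr)

def series(*avails):
    """All components required (series composition)."""
    p = 1.0
    for a in avails:
        p *= a
    return p

def parallel(*avails):
    """At least one component required (redundant composition)."""
    q = 1.0
    for a in avails:
        q *= (1.0 - a)
    return 1.0 - q

# Illustrative numbers only: a front-end node in series with two redundant
# clusters, each cluster being hardware and software in series.
cluster = series(availability(8760.0, 8.0),     # hardware: MTTF 1 year, MTTR 8 h
                 availability(2880.0, 1.0))     # software: MTTF 4 months, MTTR 1 h
front_end = availability(8760.0, 4.0)
system_two_clusters = series(front_end, parallel(cluster, cluster))

# Capacity-oriented availability: expected fraction of the maximum capacity
# actually available, enumerating the states of the two clusters.
coa = front_end * (2 * cluster * (1 - cluster) * 0.5 + cluster ** 2 * 1.0)
print(round(system_two_clusters, 6), round(coa, 6))
```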

51 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel set-based summarization approach that directly summarizes naturally occurring sets of similar nodes in a graph, along with a heuristic based on the similarity among degrees of candidate nodes and a parameter-free pruning technique to effectively identify the subset of highly similar nodes within the candidate nodes.
Abstract: Graph summarization is a valuable approach to analyze various real-life phenomena, like communities, influential nodes, and information flow, in a big graph. To summarize a graph, nodes having similar neighbors are merged into super-nodes and their corresponding edges are compressed into super-edges. Existing methods find similar nodes either by node ordering or by performing pairwise similarity computations. Compression-by-node-ordering approaches are scalable but provide less compression than their counterparts that perform exhaustive similarity computations. In this paper, we propose a novel set-based summarization approach that directly summarizes naturally occurring sets of similar nodes in a graph. Our approach is scalable since we avoid explicit similarity computations with non-similar nodes and merge sets of nodes in each iteration. At the same time, we provide a good compression ratio, as each set consists of highly similar nodes. To locate sets of similar nodes, we find candidate sets of similar nodes using locality sensitive hashing. However, the member nodes of every candidate set have varying similarities with each other. Therefore, we propose a heuristic based on the similarity among degrees of candidate nodes, and a parameter-free pruning technique, to effectively identify the subset of highly similar nodes within the candidate nodes. Through experiments on real-world graphs, our approach requires less execution time than pairwise graph summarization, by a margin of an order of magnitude on graphs containing nodes with highly diverse neighborhoods, and produces summaries of similar accuracy. Similarly, we observe scalability comparable to the compression-by-node-ordering method, while providing a better compression ratio.
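
A minimal sketch of the candidate-set step, assuming the graph is given as an adjacency dictionary: MinHash signatures of neighbor sets are banded in the usual LSH fashion so that nodes with similar neighborhoods collide in at least one bucket. The parameters (num_hashes, bands) and the use of Python's built-in hash are illustrative choices, not the paper's implementation; each resulting candidate set would then be refined (e.g., by the degree-similarity heuristic the abstract mentions) before merging nodes into super-nodes.

```python
from collections import defaultdict

def minhash_signature(neighbors, seeds):
    """MinHash signature of a node's neighbor set: nodes with similar
    neighborhoods get similar signatures (Jaccard-similarity sketch)."""
    return tuple(min(hash((s, n)) for n in neighbors) for s in seeds)

def candidate_sets(adjacency, num_hashes=8, bands=4):
    """Group nodes whose signatures collide in at least one LSH band; every
    bucket with more than one node is a candidate set of similar nodes."""
    seeds = list(range(num_hashes))
    rows = num_hashes // bands
    buckets = defaultdict(set)
    for node, neigh in adjacency.items():
        if not neigh:
            continue
        sig = minhash_signature(neigh, seeds)
        for b in range(bands):
            band = sig[b * rows:(b + 1) * rows]
            buckets[(b, band)].add(node)
    return [s for s in buckets.values() if len(s) > 1]
```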

45 citations


Journal ArticleDOI
TL;DR: A strategy to increase the performance of the firefly algorithm by replacing the worst firefly with a newly constructed firefly, inspired by the concept of opposition-based learning.
Abstract: The firefly algorithm (FA) has been shown to yield good performance for solving various optimization problems. However, under some conditions, FA may converge prematurely and thus may be trapped in local optima due to loss of population diversity. To overcome this defect, inspired by the concept of opposition-based learning, a strategy to increase the performance of the firefly algorithm is proposed. The idea is to replace the worst firefly with a newly constructed firefly. This new firefly is created by taking some elements from the opposition number of the worst firefly or from the position of the brightest firefly. This operation forces the worst firefly off its normal path and helps it escape from local optima. Experiments on 16 standard benchmark functions show that our method can improve the accuracy of the basic firefly algorithm.
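
A hedged sketch of the replacement step for a minimization problem follows; the fifty-fifty mixing rule and the greedy acceptance test are illustrative choices, and lower/upper denote the per-dimension search bounds.

```python
import random

def replace_worst(population, fitness, lower, upper):
    """Replacement step (sketch): build a new firefly by mixing, dimension by
    dimension, the opposite of the worst firefly with the position of the
    brightest one, then overwrite the worst firefly if the new one is better.

    population: list of position vectors; fitness: lower is better (minimization).
    """
    scores = [fitness(x) for x in population]
    worst_i = max(range(len(population)), key=scores.__getitem__)
    best_i = min(range(len(population)), key=scores.__getitem__)
    worst, best = population[worst_i], population[best_i]

    # opposite point of the worst firefly within the search bounds
    opposite = [lower[d] + upper[d] - worst[d] for d in range(len(worst))]
    new_fly = [opposite[d] if random.random() < 0.5 else best[d]
               for d in range(len(worst))]

    if fitness(new_fly) < scores[worst_i]:      # keep it only if it improves
        population[worst_i] = new_fly
    return population
```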

41 citations


Journal ArticleDOI
TL;DR: This article presents how most schemes in WSNs have been approached using the traditional and enhanced RL models and algorithms and presents performance enhancements brought about by the RL algorithms, and open issues associated with the application of RL in WSNs.
Abstract: A wireless sensor network (WSN) consists of a large number of sensors and sink nodes which are used to monitor events or environmental parameters, such as movement, temperature, humidity, etc. Reinforcement learning (RL) has been applied in a wide range of schemes in WSNs, such as cooperative communication, routing, and rate control, so that the sensors and sink nodes are able to observe and carry out optimal actions on their respective operating environment for network and application performance enhancements. This article provides an extensive review of the application of RL to WSNs. This covers many components and features of RL, such as state, action, and reward. This article presents how most schemes in WSNs have been approached using the traditional and enhanced RL models and algorithms. It also presents performance enhancements brought about by the RL algorithms, and open issues associated with the application of RL in WSNs. This article aims to establish a foundation in order to spark new research interests in this area. Our discussion has been presented in a tutorial manner so that it is comprehensive and applicable to readers outside the specialty of both RL and WSNs.
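
As a concrete illustration of the RL building blocks the survey covers, here is a minimal tabular Q-learning step phrased as a next-hop (routing) decision; the state/action/reward mapping and the link_cost function are assumptions made for the example, not a scheme taken from the article.

```python
import random

def q_route_step(q, node, neighbors, link_cost, alpha=0.5, gamma=0.9, eps=0.1):
    """One tabular Q-learning step, as used by many RL-based WSN routing schemes:
    the state is the node holding the packet, the action is the next hop, and
    the reward is the negative link cost (energy, delay, ...).

    q:         dict mapping (node, next_hop) -> Q-value
    neighbors: dict mapping node -> list of neighbor ids
    link_cost: function (node, next_hop) -> cost, supplied by the caller (assumed)
    """
    if random.random() < eps:                                   # explore
        nxt = random.choice(neighbors[node])
    else:                                                       # exploit
        nxt = max(neighbors[node], key=lambda n: q.get((node, n), 0.0))
    reward = -link_cost(node, nxt)
    best_next = max((q.get((nxt, m), 0.0) for m in neighbors[nxt]), default=0.0)
    old = q.get((node, nxt), 0.0)
    q[(node, nxt)] = old + alpha * (reward + gamma * best_next - old)
    return nxt
```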

Journal ArticleDOI
TL;DR: The experimental results on artificial and real-world complex networks illustrate that the proposed community detection algorithm provides a useful paradigm for robustly discovering community structures.
Abstract: One of the challenging problems when studying complex networks is the detection of sub-structures, called communities. Network communities emerge as dense parts, while they may have a few relationships to each other. Indeed, communities are latent among a mass of nodes and edges in a sparse network. This characteristic makes the community detection process more difficult. Among community detection approaches, modularity maximization has attracted much attention in recent years. In this paper, modularity density (D value) has been employed to discover real community structures. Due to the inadequacy of previous mathematical models in finding the correct number of communities, this paper first formulates a mixed integer non-linear program to detect communities without any need of prior knowledge about their number. Moreover, the mathematical models often suffer from NP-Hardness. In order to overcome this limitation, a new hybrid artificial immune network (HAIN) has been proposed in this paper. HAIN aims to use a network's properties in an efficient way. To do so, this algorithm employs major components of the pure artificial immune network, hybridized with a well-known heuristic, to provide a powerful and parallel search mechanism. The combination of cloning and affinity maturation components, a strong local search routine, and the presence of network suppression and diversity are the main components. The experimental results on artificial and real-world complex networks illustrate that the proposed community detection algorithm provides a useful paradigm for robustly discovering community structures.
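
For reference, a short Python sketch of modularity density in one common formulation (twice the internal edges minus the external edges, divided by the community size, summed over communities); the HAIN search itself is not sketched here.

```python
def modularity_density(adjacency, communities):
    """Modularity density (D value), one common formulation.

    adjacency:   dict node -> set of neighbors (undirected graph)
    communities: list of sets of nodes (a partition of the node set)
    """
    d = 0.0
    for com in communities:
        internal = sum(1 for u in com for v in adjacency[u] if v in com) / 2
        external = sum(1 for u in com for v in adjacency[u] if v not in com)
        d += (2 * internal - external) / len(com)
    return d

# Tiny example: two triangles joined by a single bridge edge.
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
print(modularity_density(g, [{1, 2, 3}, {4, 5, 6}]))   # (2*3 - 1)/3 twice = 3.33...
```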

Journal ArticleDOI
TL;DR: A novel hybrid approach is introduced that combines model-driven performance forecasting techniques and on-line exploration in order to take the best of the two techniques, namely enhancing robustness despite model’s inaccuracies, and maximizing convergence speed towards optimum solutions.
Abstract: In this paper we investigate the issue of automatically identifying the "natural" degree of parallelism of an application using software transactional memory (STM), i.e., the workload-specific multiprogramming level that maximizes the application's performance. We discuss the importance of adapting the concurrency level in two different scenarios, a shared-memory and a distributed STM infrastructure. We propose and evaluate two alternative self-tuning methodologies, explicitly tailored for the considered scenarios. In shared-memory STM, we show that lightweight, black-box approaches relying solely on on-line exploration can be extremely effective. For distributed STMs, we introduce a novel hybrid approach that combines model-driven performance forecasting techniques and on-line exploration in order to take the best of the two techniques, namely enhancing robustness despite the model's inaccuracies, and maximizing convergence speed towards optimum solutions.
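
For the shared-memory case, the on-line exploration idea can be illustrated with a simple hill-climbing loop over the multiprogramming level. This is a generic sketch, not the paper's self-tuning algorithm, and measure_throughput is an assumed callback provided by the runtime or benchmark harness.

```python
def tune_concurrency(measure_throughput, start=4, lo=1, hi=64, rounds=20):
    """Black-box on-line exploration (simple hill climbing): probe the
    throughput at neighbouring multiprogramming levels and move towards the
    better one, approaching the workload's "natural" degree of parallelism.

    measure_throughput: function (num_threads) -> observed throughput (assumed).
    """
    level = start
    best = measure_throughput(level)
    for _ in range(rounds):
        for candidate in (max(lo, level - 1), min(hi, level + 1)):
            t = measure_throughput(candidate)
            if t > best:
                level, best = candidate, t
        # if neither neighbour improves, we stay at a (local) optimum
    return level
```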

Journal ArticleDOI
TL;DR: A conversational agent aiming to replace human counselors assisting IT personnel in software development ecosystems, toward future deployment to social ecosystems; knowledge focused on a single domain, such as the IT counseling domain, together with context-based/context-respectful reflection, allows the agent to work properly without having to acquire and manage a huge amount of knowledge.
Abstract: Counseling for information technology (IT) personnel lies at the intersection between the software development ecosystem, where IT employees collaborate professionally, and the social ecosystem, where they communicate with each other to share the success or handle the failure of software development. Today, counseling has become a major issue in the IT industry, since the success rate of IT system development projects is as low as 30 %, and more than 60 % of IT professionals suffer from anxiety or other emotional problems. This paper describes a conversational agent aiming to replace human counselors assisting IT personnel in software development ecosystems, toward future deployment to social ecosystems. Utilizing IT domain ontology knowledge, our agent automatically adapts the vocabulary used in its responses according to the context and to the current phase of the conversation. Using context-based reflection support knowledge, the agent generates responses consisting of (1) chatterbot-like mirroring/rewording for context sharing and (2) a newly proposed context-respectful mechanism of prompts for "context narrowing/digging" to help a client discover problems and become aware of their solutions via the deep reflections of IT personnel undergoing counseling. Knowledge focused on a single domain, such as the IT counseling domain, and context-based/context-respectful reflection allow our counseling agent to work properly without having to acquire and manage a huge amount of knowledge. Experimental results show that clients interact with our agent on average two times longer than they do with ELIZA-style conversational agents; a questionnaire-based validation has also shown that the average questionnaire result was on the "agree" side for our agent but on the "disagree" side for ELIZA-style conversational agents. Therefore, the user acceptance level of our agent is much higher than that of conventional chatterbots.

Journal ArticleDOI
TL;DR: The state-of-the-art related to security specification, verification, and quantification for software and systems that are modeled by using UML or SysML language are reviewed.
Abstract: Modern systems are more and more complex, and security has become a key component in the success of software and systems development. The main challenge encountered in industry as well as in academia is to develop secure products, prove their security correctness, measure their resilience to attacks, and check whether vulnerabilities exist. In this paper, we review the state of the art related to security specification, verification, and quantification for software and systems that are modeled using the UML or SysML language. The reviewed work falls into the field of secure software and systems engineering, which aims at addressing security in the development of secure systems rather than treating it as an afterthought.

Journal ArticleDOI
TL;DR: The solution, named MRSUB, is developed to run over the Hadoop framework; it uses a novel, load-balanced parallel subgraph enumeration algorithm and fits it into the MapReduce framework.
Abstract: Discovery of frequent subgraphs of a network is a challenging and time-consuming process. Several heuristics and improvements have been proposed before. However, when the size of the subgraphs or the size of the network is large, the process cannot be completed in feasible time on a single machine. One of the promising solutions is to use the processing power of available parallel and distributed systems. In this paper, we present a distributed solution for the discovery of frequent subgraphs of a network using the MapReduce framework. The solution is named MRSUB and is developed to run over the Hadoop framework. MRSUB uses a novel and load-balanced parallel subgraph enumeration algorithm and fits it into the MapReduce framework. Also, a fast subgraph isomorphism detection heuristic is used, which accelerates the whole process further. We executed MRSUB on a private cloud infrastructure with 40 machines and performed several experiments with different networks. Experimental results show that MRSUB scales well and offers an effective solution for the discovery of frequent subgraphs of networks for which this is not possible on a single machine in feasible time.
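
To illustrate the general map/reduce decomposition (not MRSUB's actual enumeration or isomorphism heuristic), the sketch below counts the two connected three-node subgraph shapes of an undirected graph: the map phase emits one canonical record per subgraph occurrence, and the reduce phase aggregates frequencies.

```python
from collections import Counter
from itertools import combinations

def map_phase(node, adjacency):
    """Map: enumerate connected size-3 subgraphs that have `node` as a hub and
    emit (canonical_label, node_triple) records."""
    out = []
    for u, v in combinations(sorted(adjacency[node]), 2):
        key = tuple(sorted((node, u, v)))
        is_triangle = u in adjacency[v]
        if is_triangle and node != key[0]:
            continue                  # a triangle is seen from all 3 nodes; emit it once
        out.append(("triangle" if is_triangle else "path", key))
    return out

def reduce_phase(records):
    """Reduce: group by subgraph label and count occurrences (frequencies)."""
    return dict(Counter(label for label, _ in records))

g = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2}}
records = [rec for n in g for rec in map_phase(n, g)]
print(reduce_phase(records))          # {'triangle': 1, 'path': 2}
```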

Journal ArticleDOI
TL;DR: A novel adaptive control approach is proposed which is based on continuous reinforcement learning and provides dynamic resource provisioning while dealing with uncertainty in the Cloud’s environment.
Abstract: Because of the numerous parameters existing in the Cloud's environment, it is helpful to introduce a general solution for dynamic resource provisioning in the Cloud that is able to handle uncertainty. In this paper, a novel adaptive control approach is proposed which is based on continuous reinforcement learning and provides dynamic resource provisioning while dealing with uncertainty in the Cloud's environment. The proposed dynamic resource provisioner is a goal-directed controller which provides the ability to handle uncertainty, specifically in Cloud spot markets, where competition between Cloud providers requires optimal policies for attracting and retaining clients. This controller is aimed at preventing job rejection as far as possible (the primary goal) and minimizing energy consumption (the secondary goal). Although these two goals largely conflict (because job rejection is a common event in the process of energy consumption optimization), the results demonstrate the ability of the proposed method to reduce job rejection down to near 0 % and energy consumption down to 95.5 %.

Journal ArticleDOI
TL;DR: The paper describes Hamming weight counters/comparators built on counting networks that incorporate two distinctive and important features: the propagation delays of signals passing through data independent segments within the circuit are shortened, which allows faster pipelined implementations.
Abstract: The paper describes Hamming weight counters/comparators built on counting networks that incorporate two distinctive and important features. The counting networks are composed of simple logic (core) elements with incrementally reducing numbers of elements from the inputs to the outputs. This feature provides the same performance as the best known sorting networks with radically reduced complexity. Compared to a competitive design based on parallel counters, the propagation delays of signals passing through data independent segments within the circuit are shortened, which allows faster pipelined implementations. Several types of counting networks are elaborated, namely pure combinational, partially sequential with reusable fragments, and pipelined. The correctness of the proposed concept and scalability of the networks are proven. Formal expressions to estimate the complexity and throughput of the network are given. Finally, the results of extensive experiments, evaluations and comparisons are reported that demonstrate that the solutions proposed offer better characteristics than the best known alternatives.

Journal ArticleDOI
TL;DR: The main contribution of this work is a detailed description and evaluation of ParVec, a vectorized version of the PARSEC benchmark suite (as a case study of a commonly used application set), available for the research community to serve as a new baseline for evaluation of future computer systems.
Abstract: Energy efficiency has recently replaced performance as the main design goal for microprocessors across all market segments. Vectorization, parallelization, specialization and heterogeneity are the key approaches that both academia and industry embrace to make energy efficiency a reality. New architectural proposals are validated against real applications in order to ensure correctness and perform performance and energy evaluations. However, keeping up with architectural changes while maintaining similar workloads and algorithms (for comparative purposes) becomes a real challenge. If benchmarks are optimized for certain features and not for others, architects may end up overestimating the impact of certain techniques and underestimating others. The main contribution of this work is a detailed description and evaluation of ParVec, a vectorized version of the PARSEC benchmark suite (as a case study of a commonly used application set). ParVec can target SSE, AVX and NEON™ SIMD architectures by means of custom vectorization and math libraries. The performance and energy efficiency improvements from vectorization depend greatly on the fraction of code that can be vectorized. Vectorization-friendly benchmarks obtain up to 10× energy improvements per thread. The ParVec benchmark suite is available for the research community to serve as a new baseline for evaluation of future computer systems.

Journal ArticleDOI
TL;DR: This paper proposes a Greedy Randomized Adaptive Search Procedure (GRASP) based algorithm for dependable virtual network mapping, considering availability as the adopted metric, and demonstrates the impacts whenever dependability is taken into account in the allocation.
Abstract: Virtualized networks are a promising approach to deal with the current ossification problem of the Internet. A major challenge is related to virtual network (VN) mapping, as it is an NP-hard problem. Thus, several heuristics have been proposed aiming to achieve efficient allocation, but they do not consider dependability issues, which directly impact quality of service. This paper proposes a Greedy Randomized Adaptive Search Procedure (GRASP) based algorithm for dependable virtual network mapping, considering availability as the adopted metric. Redundancy policies are also adopted in order to meet VN requests with high availability constraints. Experiments demonstrate the impacts whenever dependability is taken into account in the allocation, as well as the trade-off between VN availability and cost.
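
The GRASP metaheuristic itself follows a standard construct-then-improve loop; the sketch below is a generic skeleton under assumed callbacks (options, greedy_cost, total_cost, local_search). For VN mapping the items could be virtual nodes, the options the substrate nodes with sufficient capacity, and the greedy cost could penalize low-availability substrate nodes. It is not the paper's specific algorithm.

```python
import random

def grasp_construct(items, options, greedy_cost, alpha=0.3):
    """Greedy-randomized construction: for each item, rank the feasible options
    by greedy cost and pick at random from the restricted candidate list (RCL)
    containing the best alpha fraction."""
    solution = {}
    for item in items:
        ranked = sorted(options(item, solution),
                        key=lambda o: greedy_cost(item, o, solution))
        if not ranked:
            return None                                    # construction failed
        rcl = ranked[:max(1, int(len(ranked) * alpha))]
        solution[item] = random.choice(rcl)
    return solution

def grasp(items, options, greedy_cost, total_cost, local_search, iterations=50):
    """GRASP main loop: construct, improve, and keep the best solution found."""
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        sol = grasp_construct(items, options, greedy_cost)
        if sol is None:
            continue
        sol = local_search(sol)
        c = total_cost(sol)
        if c < best_cost:
            best, best_cost = sol, c
    return best, best_cost
```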

Journal ArticleDOI
TL;DR: An analysis of the trade-off between the number of test cases for MCC and MC/DC in a case study from the automotive domain concludes with the strong recommendation to use MCC as a coverage metric for testing safety-relevant software implemented in programming languages with short-circuit evaluation.
Abstract: Modified condition/decision coverage (MC/DC) is a structural code coverage metric, originally defined in the standard DO-178B, intended to be an efficient coverage metric for the evaluation of the testing process of software incorporating decisions with complex Boolean expressions. The upcoming standard ISO 26262 for safety-relevant automotive systems prescribes MC/DC for ASIL D as a highly recommended coverage metric. One assumed benefit of MC/DC is that it requires a much smaller number of test cases in comparison to multiple condition coverage (MCC), while sustaining a quite high error-detection probability. Programming languages like C, commonly used for implementing software in the automotive domain, use short-circuit evaluation. With short-circuit evaluation, the number of test cases for MCC is much smaller than in a non-short-circuit environment because many test cases become redundant. We evaluated the trade-off between the number of test cases for MCC and MC/DC for a case study from the automotive domain and observed a very low overhead (only 5 %) in the number of test cases necessary for MCC compared to MC/DC. This motivated an analysis of programs containing decisions where the number and structure of the referring Boolean expressions vary. Our results show that the overhead of a test suite for MCC is on average only about 35 % compared to MC/DC, and the maximum overhead is approximately 100 % (for decisions with up to 5 conditions). This means that a test set for MCC is in the worst case around twice as big as a test set for MC/DC for a program with short-circuit evaluation and at most 5 conditions per decision. Considering the lower error-detection effectiveness of MC/DC compared to MCC, we conclude with the strong recommendation to use MCC as a coverage metric for testing safety-relevant software (with a limited number of conditions) implemented in programming languages with short-circuit evaluation.
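
The effect of short-circuit evaluation on MCC can be illustrated by enumerating the distinct evaluation paths of a decision, since each path corresponds to one MCC test case. For A && (B || C) the sketch below finds 4 paths, which happens to match the commonly cited minimum MC/DC test-set size of n + 1, i.e., zero overhead for this particular decision; the instrumentation style is just one way to count paths and is not taken from the paper.

```python
from itertools import product

def short_circuit_paths(n_conds, decision):
    """Enumerate the distinct evaluation paths of `decision` under short-circuit
    semantics; the number of distinct paths equals the MCC test-set size."""
    paths = set()
    for combo in product([False, True], repeat=n_conds):
        evaluated = []
        def cond(i):
            evaluated.append((i, combo[i]))   # record which conditions are reached
            return combo[i]
        decision(cond)
        paths.add(tuple(evaluated))           # only the evaluated conditions matter
    return paths

# Decision A && (B || C), written with Python's short-circuiting `and`/`or`.
decision = lambda c: c(0) and (c(1) or c(2))
mcc_tests = short_circuit_paths(3, decision)
print(len(mcc_tests))   # 4 distinct short-circuit paths -> 4 MCC test cases
print(3 + 1)            # commonly cited minimum MC/DC test-set size: n + 1 = 4
```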

Journal ArticleDOI
TL;DR: A filter-based address autoconfiguration protocol (FAACP) with duplicate address detection and a recovery scheme is proposed, which uses a sequence filtering technique for address space management.
Abstract: Assigning unique addresses to the nodes in mobile ad-hoc networks is a challenging issue due to dynamic topology, resource constraints, and network merging and partitioning. The existing address autoconfiguration protocols designed to provide unique addresses to the nodes address only one or two of the challenges, such as efficiency of duplicate detection, address space management, and scalability. In this paper, a scheme addressing most of the issues challenging autoconfiguration is presented. A filter-based address autoconfiguration protocol (FAACP) with duplicate address detection and a recovery scheme is proposed, which uses a sequence filtering technique for address space management. The scheme presents a grid-structured network topology which manages network merging and partitioning in an effective manner. Specialized nodes called "Unique IP Address Verification Agents" are dynamically selected to improve the efficiency and reliability of distributed duplicate detection. The scheme uses a significantly smaller number of control packets, and hence incurs less address acquisition delay and communication overhead. The FAACP scheme is simulated in Network Simulator-2 and has proven to be scalable without significant change in performance. The scheme performs well in spite of the packet losses that occur due to high node mobility.

Journal ArticleDOI
TL;DR: The proposed algorithm is able to tolerate both faulty routers and links with negligible impact on performance; it avoids taking unnecessarily long paths, and the shortest paths are always taken as long as a path exists.
Abstract: Faults at either the link or the router level may result in the failure of the system. Fault-tolerant routing algorithms attempt to tolerate faults by rerouting packets around the faulty region. This rerouting may come at the cost of significant performance loss. The algorithm proposed in this paper is able to tolerate both faulty routers and links with negligible impact on performance. In fact, the proposed algorithm avoids taking unnecessarily long paths, and the shortest paths are always taken as long as a path exists. On the other hand, fault-tolerant routing algorithms might be based on deterministic routing, in which all packets use a single path between each pair of source and destination routers. Using deterministic routing, packets reach the destination in the same order they were delivered from the source, so that no reordering buffer is needed at the destination. To improve performance, fault-tolerant algorithms might instead be based on adaptive routing, in which packets are delivered through multiple paths to destinations. In this case, packets must be reordered at the destinations, requiring reordering buffers. The proposed algorithm can be configured in either working mode, so that it can be based on deterministic or adaptive routing.

Journal ArticleDOI
TL;DR: This paper defines a certification scheme that makes it possible to verify the dependability properties of services and business processes and awards machine-readable dependability certificates to services, whose validity is continuously verified using run-time monitoring.
Abstract: The advances and success of the Service-Oriented Architecture (SOA) paradigm have produced a revolution in ICT, particularly in the way in which software applications are implemented and distributed. Today, applications are increasingly provisioned and consumed as web services over the Internet, and business processes are implemented by dynamically composing loosely coupled applications provided by different suppliers. In this highly dynamic context, clients (e.g., business owners or users selecting a service) are concerned about the dependability of their services and business processes. In this paper, we define a certification scheme that makes it possible to verify the dependability properties of services and business processes. Our certification scheme relies on discrete-time Markov chains and awards machine-readable dependability certificates to services, whose validity is continuously verified using run-time monitoring. Our solution can be integrated within existing SOAs to extend the discovery and selection process with dependability requirements and certificates, and to support dependability-aware service composition.
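
As a small illustration of the discrete-time Markov chain machinery such a scheme can rely on, the sketch below computes the steady-state distribution of a three-state service model and checks an availability property against a threshold; the states, transition probabilities, and threshold are invented for the example and are not the paper's certification model.

```python
import numpy as np

def steady_state(P):
    """Steady-state distribution pi of an ergodic discrete-time Markov chain
    with transition matrix P: solve pi P = pi together with sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Illustrative 3-state service model (states: up, degraded, failed).
P = np.array([[0.98, 0.015, 0.005],
              [0.60, 0.35,  0.05 ],
              [0.70, 0.00,  0.30 ]])
pi = steady_state(P)
availability = pi[0] + pi[1]       # service usable in the "up" and "degraded" states
print(availability >= 0.995)       # does the monitored model still satisfy
                                   # the certified dependability property?
```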

Journal ArticleDOI
TL;DR: FreeRec’s decentralized approach makes it independent of any entity wishing to collect personal data about users, and its onion-routing-like gossip-based overlay protocols effectively hide the association between users and their interest profiles without affecting the quality of personalization.
Abstract: We present and evaluate FreeRec, an anonymous decentralized peer-to-peer architecture, designed to bring personalization while protecting the privacy of its users. FreeRec's decentralized approach makes it independent of any entity wishing to collect personal data about users. At the same time, its onion-routing-like gossip-based overlay protocols effectively hide the association between users and their interest profiles without affecting the quality of personalization. The core of FreeRec consists of three layers of overlay protocols: the bottom layer, rps, consists of a standard random peer sampling protocol ensuring connectivity; the middle layer, PRPS, introduces anonymity by hiding users behind anonymous proxy chains, providing mutual anonymity; finally, the top clustering layer identifies for each anonymous user, a set of anonymous nearest neighbors. We demonstrate the effectiveness of FreeRec by building a decentralized and anonymous content dissemination system. Our evaluation by simulation, our PlanetLab experiments, and our probabilistic analysis show that FreeRec effectively decouples users from their profiles without hampering the quality of personalized content delivery.

Journal ArticleDOI
TL;DR: A new parallel coordinates visualization method that uses arced axes for high-dimensional data representation is proposed; it can be efficiently used to identify the main features of network attacks by displaying recognizable visual patterns, and a prototype system for network scan detection is designed.
Abstract: With the rapid growth in size and complexity of the data available on shared cloud computing platforms, the threat of malicious activities and computer crimes has increased accordingly. Thus, investigating efficient data visualization techniques for visual analytics of such big data and for visual intrusion detection over data-intensive cloud computing is urgently required. In this paper, we first propose a new parallel coordinates visualization method that uses arced axes for high-dimensional data representation. This new geometrical scheme can be efficiently used to identify the main features of network attacks by displaying recognizable visual patterns. In addition, with the aim of visualizing the clear and detailed structure of the dataset according to the contribution of each attribute, we propose a meaningful layout for the new method based on the singular value decomposition algorithm, which possesses statistical properties and can overcome the curse of dimensionality. Finally, we design a prototype system for network scan detection based on our visualization approach. The experiments have shown that our approach is effective in visualizing multivariate datasets and detecting attacks from a variety of networking patterns, such as the features of DDoS attacks.
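
One simple way to derive an attribute (axis) ordering from the SVD, offered purely as an illustration and not necessarily the paper's exact layout scheme, is to rank attributes by their singular-value-weighted loadings:

```python
import numpy as np

def svd_axis_order(X):
    """Rank attributes (parallel-coordinates axes) by their contribution to the
    dominant singular directions of the centered data matrix."""
    Z = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(Z, full_matrices=False)
    contribution = (s[:, None] * np.abs(Vt)).sum(axis=0)   # weighted loadings
    return np.argsort(-contribution), contribution

# Toy data: 200 samples, 5 attributes; attribute 0 has much larger variance,
# so it should come first (leftmost axis).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 0] *= 5.0
order, _ = svd_axis_order(X)
print(order)    # axis drawing order, most "important" attribute first
```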

Journal ArticleDOI
TL;DR: This work proposes an arc-based method to divide the search space into disjoint partitions and proposes a dynamic strategy to assign partitions to reducers so that the estimated load difference is within a threshold.
Abstract: Given a set of facility objects and a set of client objects, where each client is served by her nearest facility and each facility is constrained by a service capacity, we study how to find all the locations on which if a new facility with a given capacity is established, the number of served clients is maximized (in other words, the utility of the facilities is maximized). This problem is intrinsically difficult. An existing algorithm with an exponential complexity is not scalable and cannot handle this problem on large data sets. Therefore, we propose to solve the problem through parallel computing, in particular using MapReduce. We propose an arc-based method to divide the search space into disjoint partitions. For load balancing, we propose a dynamic strategy to assign partitions to reducers so that the estimated load difference is within a threshold. We conduct extensive experiments using both real and synthetic data sets of large sizes. The results demonstrate the efficiency and scalability of the algorithm.
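
A common way to keep the estimated load difference between reducers small is a greedy heaviest-first assignment; the sketch below uses that as a stand-in for the paper's dynamic strategy, with partition loads assumed to be pre-estimated (e.g., proportional to the number of arcs in each partition).

```python
import heapq

def assign_partitions(partition_loads, num_reducers, threshold=0.10):
    """Greedy load-balanced assignment: give each partition, heaviest first, to
    the currently least-loaded reducer, then check the relative load difference.

    partition_loads: dict partition_id -> estimated load (e.g. number of arcs)
    """
    heap = [(0.0, r, []) for r in range(num_reducers)]     # (load, reducer, partitions)
    heapq.heapify(heap)
    for pid, load in sorted(partition_loads.items(), key=lambda kv: -kv[1]):
        total, r, parts = heapq.heappop(heap)              # least-loaded reducer
        parts.append(pid)
        heapq.heappush(heap, (total + load, r, parts))
    loads = sorted(heap)
    max_load, min_load = loads[-1][0], loads[0][0]
    balanced = (max_load - min_load) <= threshold * max_load
    return {r: parts for _, r, parts in heap}, balanced

parts = {"p0": 120, "p1": 80, "p2": 75, "p3": 40, "p4": 35, "p5": 30}
assignment, ok = assign_partitions(parts, num_reducers=3)
print(assignment, ok)
```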

Journal ArticleDOI
TL;DR: This paper proposes a new lightweight architecture for the Web of Things, based on RESTful approaches, and shows how the proposed architecture supports the concept of a mobile wallet, enabling users to make secure payments employing NFC technology with their mobile devices.
Abstract: The Web (and Internet) of Things has seen the rapid emergence of new protocols and standards, which provide for innovative models of interaction for applications. One such model fostered by the Web of Things (WoT) ecosystem is that of contactless interaction between devices. Near Field Communication (NFC) technology is one such enabler of contactless interactions. Contactless technology for the WoT requires all parties to agree on one common definition and implementation and, in this paper, we propose a new lightweight architecture for the WoT, based on RESTful approaches. We show how the proposed architecture supports the concept of a mobile wallet, enabling users to make secure payments employing NFC technology with their mobile devices. In so doing, we argue that the vision of the WoT is brought a step closer to fruition.

Journal ArticleDOI
TL;DR: It is demonstrated that generation and display of tree-populated scenes with shadows at interactive rates can be achieved by utilizing the massively parallel GPU architecture to accelerate the computationally intensive steps of the method.
Abstract: Real-time synthesis of realistic tree models is a desirable functionality for computer games, simulators, and landscape design software. Self-organizing tree models that adapt to the environment are a welcome addition and central to various 3D design tools but present a challenging task for interactive use even on modern commodity hardware. The paper describes the implementation of a complete self-organizing tree synthesis method running on a contemporary graphics processing unit using OpenCL. We demonstrate that generation and display of tree-populated scenes with shadows at interactive rates can be achieved by utilizing the massively parallel GPU architecture to accelerate the computationally intensive steps of the method. A comparison with the performance of single-threaded and CPU-based OpenCL implementation of the same method is reported.

Journal ArticleDOI
TL;DR: This paper presents a distributed protocol for privacy-preserving aggregation to enable computing a class of aggregation functions that can be expressed as Abelian group operations, and preserves data privacy such that a participant's data is known only to its owner with a given probability.
Abstract: Recent advances in techniques that combine and analyze data collected from multiple partners have led to many new promising distributed collaborative applications. Such collaborative computations could occur between trusted partners, between partially trusted partners, or between competitors. Therefore, preserving privacy is an important issue in this context. This paper presents a distributed protocol for privacy-preserving aggregation that enables computing a class of aggregation functions that can be expressed as Abelian group operations. The proposed protocol is based on an overlay structure that enables secret sharing without the need for any central authority or heavyweight cryptography. It preserves data privacy such that a participant's data is known only to its owner with a given probability. The aggregation result is computed by the participants themselves without interacting with a specific aggregator. The aggregation result is accurate when there is no data loss. A strategy to handle the problem of node failures is given, along with a study of the privacy ensured by the suggested protocol.
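
The core idea of computing an Abelian-group aggregate without a central aggregator can be illustrated with plain additive secret sharing modulo a public prime; this sketch collapses the overlay exchange into one function for brevity and is not the paper's protocol.

```python
import random

P = 2**61 - 1          # public prime; shares and sums live in Z_P (an Abelian group)

def make_shares(value, num_peers):
    """Split a private value into additive shares that sum to the value mod P."""
    shares = [random.randrange(P) for _ in range(num_peers - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def aggregate(all_values, num_peers=3):
    """Privacy-preserving summation without a central aggregator: every
    participant splits its value into shares, sends one share to each peer,
    peers add the shares they hold, and the partial sums are then combined.
    No single peer ever sees an individual value."""
    partial = [0] * num_peers
    for v in all_values:
        for i, s in enumerate(make_shares(v, num_peers)):
            partial[i] = (partial[i] + s) % P    # each peer adds what it receives
    return sum(partial) % P                       # combining partials yields the sum

print(aggregate([12, 7, 30, 1]))   # 50
```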