
Showing papers in "International Journal of Distributed Systems and Technologies in 2014"


Journal ArticleDOI
TL;DR: This paper analyzes issues related to collaboration between customers and developers in a distributed, Agile setting and proposes a framework that defines practices and tools for handling project information and communication activities.
Abstract: The traditional, in-house software development process is progressively losing its appeal in favor of distributed, multi-site development: this is mainly due to the well-known advantages of the latter, such as higher productivity and lower costs. However, this practice also has some documented disadvantages that are inherent to distance: geographical, temporal and socio-cultural distances between stakeholders can affect communication, coordination and control activities, thus making collaboration very difficult. This would not immediately look like the ideal scenario for applying Agile methodologies, which rely on continuous collaboration between all stakeholders, including (with a very important role) the customers. This paper analyzes issues related to collaboration between customers and developers in a distributed, Agile setting and proposes a framework that defines practices and tools for handling project information and communication activities.

10 citations


Journal ArticleDOI
TL;DR: This paper introduces eight adaptation strategies that provide a higher abstraction, modularity and reconfigurability in the tree self-organization process, and illustrates the performance trade-offs and properties of adaptation strategies.
Abstract: Tree topologies are often deployed in large-scale distributed systems to structure a hierarchical communication. Building and maintaining overlay networks self-organized in tree topologies is challenging to achieve in dynamic environments. Performance trade-offs between resilience to failures and message overhead need to be considered. This paper introduces eight adaptation strategies that provide a higher abstraction, modularity and reconfigurability in the tree self-organization process. Performance can be further enhanced by dynamically changing strategies during system runtime. Experimental evaluation illustrates the performance trade-offs and properties of adaptation strategies.

10 citations


Journal ArticleDOI
TL;DR: In this article a newly defined reference process for the Digital Factory is used as the basis for a software classification schema, and the results of a software market analysis for this domain have been structured and interpreted from a process and functionality coverage point of view.
Abstract: Today, industrial enterprises are increasingly facing the need to achieve shorter lifecycles for their products, which at the same time are getting more and more complex. One approach to face this challenge is to introduce the Digital Factory. The Digital Factory is defined as an IT system capable of digitally planning, controlling and optimizing all resources and activities related to a product, performed beginning with its development and ending in the order processing, prior to the start of the real production of the product. In this article a newly defined reference process for the Digital Factory is used as the basis for a software classification schema. Using this schema, the results of a software market analysis for the Digital Factory domain have been structured and interpreted from a process and functionality coverage point of view. With this approach, the authors were able to show that the number of software solutions available within this domain is very large. This generates a need for IT support within the software selection process for the Digital Factory domain. To achieve this, a first concept for a structured software selection process is introduced. The results of this article can be used as a basis for future research on this topic.

8 citations


Journal ArticleDOI
TL;DR: This paper proposes subcarrier allocation with power priority selection (PSS) and adaptive clipping with a peak reduction signal to improve the system and PAPR performance.
Abstract: Recently, orthogonal frequency division multiple access (OFDMA) has been used for multiuser wireless communication. In a wireless network, the transmitted signal of each user has an independent channel fluctuation characteristic. By using this characteristic, OFDMA can achieve multiuser diversity (MUDiv). To date, the adaptive subcarrier block (ASB) and frequency symbol spreading (FSS) methods have been proposed to achieve low complexity and improved performance. However, the system performance at low Eb/N0 is worse than that of maximal sum capacity (MSC), and the peak-to-average power ratio (PAPR) does not decrease greatly. To solve these problems, in this paper we propose subcarrier allocation with power priority selection (PSS) and adaptive clipping (AC) with a peak reduction signal to improve both the system and PAPR performance.
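
The effect of amplitude clipping on PAPR can be illustrated with a minimal, generic sketch (this is plain clipping, not the authors' AC scheme with a peak reduction signal; the subcarrier count and threshold are assumptions):

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# QPSK symbols on 256 subcarriers -> one OFDM symbol via IFFT
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(256, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
ofdm = np.fft.ifft(qpsk)

# Simple amplitude clipping at 1.5x the RMS level (illustrative threshold)
rms = np.sqrt(np.mean(np.abs(ofdm) ** 2))
limit = 1.5 * rms
mag = np.abs(ofdm)
clipped = ofdm * np.minimum(1.0, limit / np.maximum(mag, 1e-12))

print(f"PAPR before clipping: {papr_db(ofdm):.2f} dB")
print(f"PAPR after clipping:  {papr_db(clipped):.2f} dB")
```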

7 citations


Journal ArticleDOI
TL;DR: This is a unique attempt that integrates support for security features, topology generation and key management in a single secure data aggregation protocol, substantiating the proposal with an elaborate experimental evaluation.
Abstract: Data aggregation based on in-network processing is useful in reducing the communication overhead in Wireless Sensor Networks (WSNs) by reducing the overall number of packets communicated to the base station. However, due to the fusion of data items sourced at different nodes into a single one, the security of the aggregated data, as well as that of the aggregating node, demands critical investigation. This paper observes that, although there have been a substantial number of recent research attempts proposing techniques for secure data aggregation, there is still a need for a coherent, integrated framework for secure data aggregation, and it proposes such an integrated framework. The proposed framework implements a secure data aggregation protocol offering confidentiality, privacy, authentication, robustness and data integrity over a defined data aggregation topology. In addition, the framework is based on a zero configuration protocol that supports the generation of the defined aggregation topology and a key exchange procedure. This work supports the framework with a detailed security analysis and a performance evaluation on the TinyOS platform using TOSSIM as the base simulator. To the best of the authors' knowledge, this is a unique attempt that integrates support for security features, topology generation and key management in a single secure data aggregation protocol, substantiating the proposal with an elaborate experimental evaluation.
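
The confidentiality side of additive in-network aggregation can be illustrated with a minimal sketch of a stream-cipher-style additively homomorphic scheme (this is not necessarily the paper's protocol; keys and modulus are illustrative values):

```python
M = 2 ** 32   # modulus large enough to hold the aggregated sum

def encrypt(value, key):
    return (value + key) % M          # additively homomorphic, stream-cipher style

def aggregate(ciphertexts):
    return sum(ciphertexts) % M       # in-network addition of encrypted readings

def decrypt(agg_ciphertext, keys):
    return (agg_ciphertext - sum(keys)) % M

# Each sensor shares a pairwise key with the base station (illustrative values)
readings = [21, 19, 25]
keys = [900123, 77456, 31337]

c = [encrypt(v, k) for v, k in zip(readings, keys)]
print(decrypt(aggregate(c), keys))    # 65 == 21 + 19 + 25
```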

5 citations


Journal ArticleDOI
TL;DR: A new adaptive dynamic replication is proposed that combines replication based on availability with replication based on popularity; together with other services, it is grouped into one fault tolerance box named Collaborative Services for Fault Tolerance (CSFT).
Abstract: Fault tolerance is the ability of a system to perform its function correctly even in the presence of faults. Therefore, different fault tolerance techniques are critical for improving the efficient utilization of expensive resources in high performance data grid systems. One of the most popular fault tolerance strategies is replication: it creates multiple copies of resources in the system and has been proved to be an effective way to achieve data availability and system reliability. In this paper the authors propose a new adaptive dynamic replication that combines replication based on availability with replication based on popularity. The adaptive dynamic replication uses two types of replicas, primary and ordinary, and two types of placement nodes, best client and best responsible, for the new replicas. In addition to replication, other strategies such as fault detection, fault prediction, dynamicity management and self-stabilization are used. All these services are grouped into one fault tolerance box named Collaborative Services for Fault Tolerance (CSFT), which structures them into hierarchical services and organizes the relationships between them.
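
A minimal sketch of the kind of decision logic described, combining availability and popularity (the thresholds and the "best client" placement rule below are illustrative assumptions; the paper's CSFT services and primary/ordinary replica handling are more elaborate):

```python
from collections import Counter
from dataclasses import dataclass, field

# Illustrative thresholds, not taken from the paper
MIN_AVAILABILITY = 0.6
POPULARITY_THRESHOLD = 20

@dataclass
class FileState:
    replicas: set            # nodes currently holding a copy
    alive_nodes: set         # nodes currently reachable
    accesses: Counter = field(default_factory=Counter)  # requests per client node

    def availability(self):
        return len(self.replicas & self.alive_nodes) / max(len(self.replicas), 1)

def maybe_replicate(state: FileState):
    """Return the placement node for a new replica, or None if none is needed."""
    popular = sum(state.accesses.values()) >= POPULARITY_THRESHOLD
    fragile = state.availability() < MIN_AVAILABILITY
    if not (popular or fragile):
        return None
    # Place on the most frequent requester that has no copy yet ("best client")
    for node, _ in state.accesses.most_common():
        if node not in state.replicas:
            return node
    return None
```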

5 citations


Journal ArticleDOI
TL;DR: The authors explore the potential of using context-aware information in CSCW applications in order to support collaboration in pervasive environments, and describe the approach used in the design and development of a context-aware framework that uses the interpretation of users' context information for behaviour adaptation of collaborative applications in pervasive communities.
Abstract: Pervasive environments involve the interaction of users with the objects that surround them and also with other participants. In this way, pervasive communities can lead the user to participate beyond traditional pervasive spaces, enabling cooperation among groups while taking into account not only individual interests, but also the collective and social context. In this study, the authors explore the potential of using context-aware information in CSCW applications in order to support collaboration in pervasive environments. In particular, this paper describes the approach used in the design and development of a context-aware framework that uses the interpretation of users' context information for behaviour adaptation of collaborative applications in pervasive communities.

4 citations


Journal ArticleDOI
TL;DR: This paper observes that the secret sharing scheme is more efficient than the one based on Paillier homomorphic encryption; however, the overhead associated with Shamir's secret sharing scheme increases considerably as the number of parties grows.
Abstract: With digital data being abundant in today's world, competing organizations desire to gain insights about the market without putting the privacy of their confidential data at risk. This paper provides a new dimension to the problem of Privacy Preserving Distributed Association Rule Mining (PPDARM) by extending it to a distributed temporal setup. It proposes extensions of public key based and non-public key based additively homomorphic techniques, based on efficient private matching and Shamir's secret sharing, to privately detect global cycles in cyclic association rules. Along with the theoretical analysis, it presents experimental results to substantiate it. This paper observes that the secret sharing scheme is more efficient than the one based on Paillier homomorphic encryption. However, it observes a considerable increase in the overhead associated with Shamir's secret sharing scheme as a result of the increase in the number of parties. To reduce this overhead, it extends the secret sharing scheme without mediators to a novel model with a Fully Trusted and a Semi Trusted Third Party. The experimental results establish this functioning for global cycle detection in a temporal setup as a case study. The novel constructions proposed can also be applied to other scenarios that require Secure Multiparty Computation (SMC) for PPDARM.
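
The additive homomorphism that the non-public-key variant builds on can be shown with a small Shamir secret sharing sketch over a toy prime field (the prime, counts and number of parties are illustrative): each party shares its private value, the parties add the shares they hold, and reconstructing the summed shares yields only the global sum.

```python
import random

P = 2_147_483_647  # a Mersenne prime, large enough for this toy example

def share(secret, n, t):
    """Split `secret` into n Shamir shares with threshold t (degree t-1 polynomial)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# Three parties, each sharing a private support count; threshold 3
counts = [17, 42, 8]
all_shares = [share(c, n=3, t=3) for c in counts]

# Each party sums the shares it holds (additive homomorphism), then reconstruct
summed = [(x, sum(s[k][1] for s in all_shares) % P)
          for k, (x, _) in enumerate(all_shares[0])]
print(reconstruct(summed))  # 67 == 17 + 42 + 8
```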

4 citations


Journal ArticleDOI
TL;DR: A centralized dynamic load balancing strategy using adaptive thresholds has been proposed for a multiprocessor system; it ensures a uniform distribution of the load on the processing elements in a dynamic load environment.
Abstract: The primary objective of scheduling is to minimize the job execution time and maximize the resource utilization. Scheduling of 'm' jobs to 'n' resources with the objective of optimizing the QoS parameters has been proven to be an NP-hard problem. Two broad approaches defined for dealing with NP-hard problems are the approximate and the heuristic approach. In this paper, a centralized dynamic load balancing strategy using adaptive thresholds is proposed for a multiprocessor system. The scheduler continuously monitors the load on the system and takes corrective measures as the load changes. The threshold values considered are adaptive in nature and are readjusted to suit the changing load on the system according to the mean of the available load. Effectively, the load is leveraged towards the mean, transferring only the appropriate number of jobs from heavily loaded nodes to lightly loaded nodes. In addition, the threshold values are designed in such a way that the scheduler avoids excessive load balancing. Therefore, the scheduler always ensures a uniform distribution of the load on the processing elements in a dynamic load environment. A simulation study reveals the effectiveness of the model under various conditions.
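
A minimal sketch of the mean-centred threshold idea with made-up node loads (the paper's exact threshold adaptation and scheduling policy are richer than this): thresholds are set as a band around the current mean load, and only nodes outside that band exchange just enough jobs to move toward the mean.

```python
def rebalance(loads, band=0.2):
    """Move jobs from nodes above mean*(1+band) to nodes below mean*(1-band)."""
    mean = sum(loads) / len(loads)
    upper, lower = mean * (1 + band), mean * (1 - band)
    heavy = [i for i, l in enumerate(loads) if l > upper]
    light = [i for i, l in enumerate(loads) if l < lower]
    moves = []
    for h in heavy:
        for l in light:
            if loads[h] <= upper:
                break
            # Transfer only enough jobs to pull both nodes toward the mean
            qty = int(min(loads[h] - mean, mean - loads[l]))
            if qty <= 0:
                continue
            loads[h] -= qty
            loads[l] += qty
            moves.append((h, l, qty))
    return moves

loads = [30, 5, 12, 9, 44]   # jobs per node (illustrative)
print(rebalance(loads), loads)
```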

4 citations


Journal ArticleDOI
TL;DR: The authors use Pedersen's Verifiable Secret Sharing scheme, ensuring privacy through its additively homomorphic secret sharing, and show that the proposed approach is scalable in terms of dataset size.
Abstract: In this article, the authors propose an approach for privacy preserving distributed clustering that assumes a malicious model. In the literature, there exist numerous approaches that assume a semi-honest model. However, such an assumption is, at best, reasonable in experimentation and rarely true in the real world. Hence, it is essential to investigate approaches for privacy preservation using a malicious model. The authors use Pedersen's Verifiable Secret Sharing scheme, ensuring privacy through its additively homomorphic secret sharing. The trustworthiness of the data is assured using the homomorphic commitments in Pedersen's scheme. In addition, the authors propose two variants of the proposed approach: one for horizontally partitioned datasets and the other for vertically partitioned datasets. The experimental results show that the proposed approach is scalable in terms of dataset size. The authors also carry out experiments to highlight the effectiveness of the Verifiable Secret Sharing scheme against a Zero Knowledge Proof scheme.
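
The additive homomorphism of Pedersen commitments that such verification builds on can be seen in a toy sketch (the group parameters are illustrative and far too small for real use): the product of two commitments is a valid commitment to the sum of the committed values.

```python
import random

# Toy parameters: p = 2q + 1 with q prime; g and h generate the order-q subgroup.
q = 1019
p = 2 * q + 1          # 2039, prime
g = pow(3, 2, p)       # squaring maps into the order-q subgroup
h = pow(7, 2, p)       # second generator; log_g(h) must be unknown in a real setup

def commit(value, r):
    """Pedersen commitment C = g^value * h^r mod p."""
    return (pow(g, value, p) * pow(h, r, p)) % p

v1, r1 = 123, random.randrange(q)
v2, r2 = 456, random.randrange(q)
c1, c2 = commit(v1, r1), commit(v2, r2)

# Homomorphic property: c1*c2 commits to v1+v2 under randomness r1+r2
assert (c1 * c2) % p == commit((v1 + v2) % q, (r1 + r2) % q)
print("Pedersen commitments combine additively")
```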

4 citations


Journal ArticleDOI
TL;DR: A new category system is defined in this paper that can be applied to many different video sharing websites for content analysis, and machine learning is applied to re-classify content according to the newly defined category system.
Abstract: This article studies the characteristics of content on video sharing websites. A better understanding of online video content can help to analyse Internet users' behaviour and improve the video-sharing service. We improved an existing graph-sampling algorithm so that it could be better adapted to sampling over video sharing websites. A new category system is defined in this paper, which can be applied to many different video sharing websites for content analysis. We also apply machine learning to re-classify the content according to the newly defined category system, reaching an efficiency of 90%. From the classified content analysis, we find that the content category distribution is not constant and that, nowadays, cultural goods content accounts for about 70% of all the sampled videos.
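
The abstract does not spell out the classifier; a hypothetical scikit-learn sketch in the same spirit (the categories, texts and pipeline below are assumptions, not the authors' setup) would train on video titles or tags and re-assign videos to the new category system.

```python
# A hypothetical sketch: category labels and training texts are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "official music video new single",
    "movie trailer official teaser",
    "how to repair a bike tutorial",
    "funny cats compilation",
]
train_labels = ["music", "film", "how-to", "entertainment"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)

# Re-classify an unseen video title; expected to map to the "music" category
print(clf.predict(["new official music video for the single"]))
```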

Journal ArticleDOI
TL;DR: This paper proposes an effective methodology to discover drug-related semantic relationships over large-scale distributed web data in medicine, pharmacology and biotechnology; by utilizing semantic web and distributed system technologies, it develops a novel hierarchical knowledge abstraction and an efficient relation discovery protocol.
Abstract: Drug discovery is a lengthy, expensive and difficult process. Identifying and understanding the hidden relationships among drugs, genes, proteins, and diseases will expedite the process of drug discovery. In this paper, we propose an effective methodology to discover drug-related semantic relationships over large-scale distributed web data in medicine, pharmacology and biotechnology. By utilizing semantic web and distributed system technologies, we developed a novel hierarchical knowledge abstraction and an efficient relation discovery protocol. Our approach effectively facilitates the realization of the full potential of harnessing the collective power and utilization of the drug-related knowledge scattered over the Internet.

Journal ArticleDOI
TL;DR: The experimental results show that the hybrid algorithm significantly improves the performance under different system states and under a wide range of workloads compared to a traditional decentralized algorithm.
Abstract: Dynamic Load Balancing (DLB) is a sine qua non in modern distributed systems to ensure the efficient utilization of the computing resources therein. This paper proposes a novel framework for hybrid dynamic load balancing. The framework uses a Genetic Algorithm (GA) based supernode selection approach. The GA-based approach is useful in choosing optimally loaded nodes as the supernodes directly from the data set, thereby essentially improving the speed of the load balancing process. Applying the proposed GA-based approach, this work analyzes the performance of the hybrid DLB algorithm under different system states such as lightly loaded, moderately loaded, and highly loaded. The performance is measured with respect to three parameters: the average response time, the average round trip time, and the average completion time of the users. Further, it also evaluates the performance of the hybrid algorithm using the OnLine Transaction Processing (OLTP) benchmark and the Sparse Matrix Vector Multiplication (SPMV) benchmark applications to analyze its adaptability to I/O-intensive, memory-intensive, and/or CPU-intensive applications. The experimental results show that the hybrid algorithm significantly improves the performance under different system states and under a wide range of workloads compared to a traditional decentralized algorithm.
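
A compact sketch of GA-based supernode selection under assumptions not taken from the paper (the fitness here simply prefers candidate supernodes whose loads are close to the overall mean; the population size, generation count and mutation rate are arbitrary):

```python
import random

def select_supernodes(loads, k=3, pop_size=20, generations=50, seed=1):
    """Pick k node indices whose loads are closest to the mean load (toy fitness)."""
    rng = random.Random(seed)
    n, mean = len(loads), sum(loads) / len(loads)
    fitness = lambda chrom: -sum(abs(loads[i] - mean) for i in chrom)

    pop = [rng.sample(range(n), k) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = rng.sample(sorted(set(a) | set(b)), k)      # crossover
            if rng.random() < 0.2:                              # mutation
                new_node = rng.randrange(n)
                if new_node not in child:
                    child[rng.randrange(k)] = new_node
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

loads = [55, 48, 90, 12, 51, 47, 73, 50]   # illustrative per-node loads
print(select_supernodes(loads))             # indices of the k selected supernodes
```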

Journal ArticleDOI
TL;DR: This paper builds time domain replica signals from the detected signals and the excess channel impulse responses beyond the GI, then recalculates and updates the channel state information (CSI) to obtain accurately compensated signals.
Abstract: In mobile communications, MIMO-OFDM transmission performance suffers severe degradation caused by large delay spread channels exceeding the guard interval (GI). This is because the excess delay results in considerable inter-symbol interference (ISI) between adjacent symbols and inter-carrier interference (ICI) among subcarriers in the same symbol. In the case of scattered pilots (SP), the interference on pilot signals degrades channel estimation. To mitigate this problem, in this paper we propose an interference compensation scheme using time domain replica signals. We build the time domain replica signals from the detected signals and the excess channel impulse responses beyond the GI. After compensating the received signals with the time domain replica signals, we recalculate the channel state information (CSI) and update it. Finally, we carry out channel compensation with the updated CSI to obtain accurately compensated signals.
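
A single-antenna sketch of the ISI-replica step, under simplifying assumptions not stated in the abstract (only the leakage of the previous symbol through taps beyond the GI is rebuilt and subtracted; the ICI term, the MIMO dimension and the CSI update are omitted, and all signal values below are illustrative):

```python
import numpy as np

def cancel_isi(rx_symbol, prev_detected_td, h_excess, gi_len):
    """
    Subtract a time-domain replica of the ISI that the tail of the previous OFDM
    symbol leaks into the current one through channel taps beyond the GI.
    rx_symbol        : received samples of the current symbol (GI already removed)
    prev_detected_td : time-domain samples of the previously detected symbol
    h_excess         : estimated taps at delays gi_len+1, gi_len+2, ...
    """
    n = len(rx_symbol)
    replica = np.zeros(n, dtype=complex)
    for k, tap in enumerate(h_excess):
        spill = k + 1    # samples of the previous symbol spilling past the GI
        # the last `spill` samples of the previous symbol land at the symbol start
        replica[:spill] += tap * prev_detected_td[-spill:]
    return rx_symbol - replica

rng = np.random.default_rng(0)
n, gi = 64, 16
prev = rng.standard_normal(n) + 1j * rng.standard_normal(n)
rx = rng.standard_normal(n) + 1j * rng.standard_normal(n)
h_excess = np.array([0.3 + 0.1j, 0.1 - 0.05j])   # taps at delays gi+1, gi+2
print(cancel_isi(rx, prev, h_excess, gi)[:4])
```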

Journal ArticleDOI
TL;DR: The new DSN, a network with a carefully refined routing logic, performs significantly better than the basic DSN in terms of communication latency while providing strengths in fault tolerance as well as load balance.
Abstract: We aim at creating a robust and efficient custom routing mechanism for Distributed Shortcut Networks (Nguyen et al., 2013), which address new challenging issues posed by recent advances in massively parallel computing and large-scale data centers. We follow the design principles of Distributed Shortcut Networks (DSN), which construct non-random topologies with the creation of long-range shortcuts inspired by observations in small-world networks. However, we focus on designing a powerful custom routing mechanism which smartly exploits some precious properties of the topology. As a result, our new DSN, a network with a carefully refined routing logic, performs significantly better than the basic DSN in terms of communication latency while providing strengths in fault tolerance as well as load balance. These help the network remain robust against link failures or bursts of traffic demand, situations in which topology-agnostic deadlock-free routing (e.g. the well-known up*/down* algorithm) suffers considerably.
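
The DSN routing logic itself is specific to the paper's topology; a generic greedy shortcut-routing sketch on a ring with long-range links (in the small-world spirit the authors cite, with illustrative shortcut placement) shows the basic principle of exploiting shortcuts to cut path length.

```python
def greedy_route(src, dst, n, shortcuts):
    """Greedy routing on a ring of n nodes augmented with long-range shortcuts."""
    def ring_dist(a, b):
        d = abs(a - b)
        return min(d, n - d)

    path, cur = [src], src
    while cur != dst:
        # Neighbours: the two ring links plus any shortcuts owned by this node
        neighbours = [(cur - 1) % n, (cur + 1) % n] + shortcuts.get(cur, [])
        cur = min(neighbours, key=lambda v: ring_dist(v, dst))   # greedy step
        path.append(cur)
    return path

# 64-node ring; nodes 0 and 16 own illustrative long-range shortcuts
shortcuts = {0: [16, 32], 16: [40]}
print(greedy_route(0, 45, 64, shortcuts))
```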

Journal ArticleDOI
TL;DR: A technique for developing service-oriented Semantic Reasoning applications based on the workflow concept is introduced, and the Large Knowledge Collider, a software platform for developing workflow-based Semantic Web applications that takes advantage of on-demand high performance computing and cloud infrastructures, is presented.
Abstract: Modern Semantic Web scenarios require reasoning algorithms to be flexible, modular, and highly configurable. The monolithic approach followed in the design of most currently existing reasoners is not sufficient when dealing with today's challenges of data analysis across multiple sources of heterogeneous data, or when the data amount grows to "Big Data" sizes. The "reasoning as a workflow" concept has attracted a lot of attention in the design of new-generation Semantic Web applications, offering many opportunities to improve both the flexibility and the scalability of the reasoning process. Considering a single workflow component as a service allows a reasoning algorithm to target a much wider range of Semantic Web use cases by taking advantage of a service-oriented and component-based implementation. We introduce a technique for developing service-oriented Semantic Reasoning applications based on the workflow concept. We also present the Large Knowledge Collider, a software platform for developing workflow-based Semantic Web applications that takes advantage of on-demand high performance computing and cloud infrastructures.