
Showing papers presented at "Parallel and Distributed Computing: Applications and Technologies in 2013"


Proceedings ArticleDOI
16 Dec 2013
TL;DR: This paper presents a simulated-annealing load balancing algorithm for solving the resource allocation and scheduling problem in a cloud computing environment, and shows that this method is able to achieve load balancing and performs better than the round-robin and basic simulated-annealing algorithms.
Abstract: Recently, the development of cloud computing has received considerable attention. For cloud service providers, packing VMs onto a small number of servers is an effective way to reduce energy costs and improve the efficiency of the data center. However, allocating too many VMs on a physical machine may cause hot spots which violate the SLA of applications. Load balancing of the entire system is hence needed to guarantee the SLA. In this paper, we present a simulated-annealing load balancing algorithm for solving the resource allocation and scheduling problem in a cloud computing environment. Experimental results show that this method is able to achieve load balancing and performs better than the round-robin and basic simulated-annealing algorithms.
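The abstract does not give the algorithm's details, but the general shape of a simulated-annealing VM load balancer can be sketched as follows. The cost function (standard deviation of per-server load), the single-VM move neighborhood, the cooling schedule, and every parameter value are illustrative assumptions, not the authors' design:

```python
import math
import random

def load_stddev(assignment, loads, n_servers):
    """Standard deviation of total load across servers (lower = better balanced)."""
    totals = [0.0] * n_servers
    for vm, server in enumerate(assignment):
        totals[server] += loads[vm]
    mean = sum(totals) / n_servers
    return math.sqrt(sum((t - mean) ** 2 for t in totals) / n_servers)

def anneal(loads, n_servers, t0=1.0, cooling=0.95, steps=2000, seed=1):
    """Anneal a random VM-to-server assignment toward a balanced one."""
    rng = random.Random(seed)
    assignment = [rng.randrange(n_servers) for _ in loads]
    cost = load_stddev(assignment, loads, n_servers)
    t = t0
    for _ in range(steps):
        vm = rng.randrange(len(loads))
        old = assignment[vm]
        assignment[vm] = rng.randrange(n_servers)   # propose moving one VM
        new_cost = load_stddev(assignment, loads, n_servers)
        # Always accept improvements; accept worse moves with prob. e^(-delta/t)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
        else:
            assignment[vm] = old                    # revert the move
        t *= cooling
    return assignment, cost
```

With equal loads and two servers, the sketch quickly converges to an even split (cost near zero).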

25 citations


Proceedings ArticleDOI
16 Dec 2013
TL;DR: This study presents an algorithm that preserves privacy in scheduling of workflows, whilst still considering customers' deadlines and cost, and evaluated the efficiency of the approach using real workflows running on a private HTCondor-based hybrid cloud.
Abstract: Cloud computing has been developed in response to demand from companies seeking to deal with the execution cost of their complex distributed applications. Introducing the notion of hybrid clouds to the cloud computing paradigm brings out many challenges in resource provisioning for workflows. Hybrid clouds encounter the following two main obstacles in reaching their full potential: (1) customers' dissatisfaction due to the conflicting nature of the constraints (budget and deadline), and (2) exposure of customers' private data/jobs in hybrid cloud infrastructures. We believe that too little attention is paid to privacy issues for workflow scheduling under deadline constraint. Many algorithms exist to address the cost and time trade-off, however, they suffer from insufficient consideration of privacy. In this study, we present an algorithm that preserves privacy in scheduling of workflows, whilst still considering customers' deadlines and cost. We evaluated our approach using real workflows running on a private HTCondor-based hybrid cloud. Results were promising and demonstrated the efficiency of our approach in not only reducing the cost of executing workflows, but also satisfying both the privacy and deadline constraints of the submitted workflows.

23 citations


Proceedings ArticleDOI
16 Dec 2013
TL;DR: A task scheduling algorithm for multi-core processors, which is based on priority queue and task duplication, which can decrease the communication cost, improve the processor utilization rate and shorten the schedule length is presented.
Abstract: With the widespread use of multi-core processors, task scheduling for multi-core processors has become a hot issue. Much research has been done on task scheduling from various perspectives. However, the existing task scheduling algorithms still have some drawbacks, such as a low processor utilization rate, high complexity, and so on. This paper presents a task scheduling algorithm for multi-core processors based on a priority queue and task duplication. In the proposed algorithm, a Directed Acyclic Graph (DAG) is used to build the task model. Based on the model, the task critical degree, task reminder, task execution time and average communication time are all considered as priority metrics. A priority-based task dispatching list is set up by comprehensively analyzing and calculating the priority of each task. Then interval insertion and task duplication strategies are employed to map tasks to processors, which can decrease the communication cost, improve the processor utilization rate and shorten the schedule length. Our experiments show that the proposed algorithm has better performance and lower complexity than existing scheduling algorithms.
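A minimal sketch of the list-scheduling core described above: tasks are dispatched in priority order and each is placed on the core giving the earliest finish time. The paper's specific priority metrics, interval insertion, and task duplication are omitted, and zero communication cost is assumed; this is a generic baseline, not the authors' full algorithm:

```python
def list_schedule(tasks, deps, cost, n_cores):
    """Greedy list scheduling.

    tasks: task ids in topological, priority order
    deps:  task -> set of predecessor tasks
    cost:  task -> execution time
    Assumes zero communication cost between cores.
    """
    finish = {}                    # task -> finish time
    core_free = [0.0] * n_cores    # time each core becomes free
    placement = {}
    for t in tasks:
        ready = max((finish[p] for p in deps.get(t, ())), default=0.0)
        # pick the core yielding the earliest finish time for this task
        best = min(range(n_cores), key=lambda c: max(core_free[c], ready) + cost[t])
        start = max(core_free[best], ready)
        finish[t] = start + cost[t]
        core_free[best] = finish[t]
        placement[t] = best
    return placement, max(finish.values())
```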

20 citations


Proceedings ArticleDOI
16 Dec 2013
TL;DR: The simulation results confirm that not only is the proposed method a high-capacity image steganography method, but also that, using this method, the cover image data remains unchanged.
Abstract: In this paper, a Discrete Wavelet Transform (DWT) based, perfectly secure and high-capacity image steganography method is presented. This method is used for steganography of pictorial messages in a cover image (the carrier data). First, the message data and the cover image data are transformed using a Haar-filter-based DWT, and then the message DWT coefficients are embedded into the cover image DWT coefficients. Several different DWT-based message embedding approaches are studied. Robustness and image brightness are considered the main criteria: the PSNR serves as an objective criterion and the image brightness as a subjective criterion for evaluation. The simulation results confirm that not only is the proposed method a high-capacity image steganography method, but also that, using this method, the cover image data remains unchanged. Furthermore, even if the message data size is increased, the cover image data still exhibits a high PSNR after the message data is extracted.
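To make the pipeline concrete, here is a 1-D sketch of the two building blocks the abstract names: a Haar DWT and additive embedding of scaled message detail coefficients into the cover's detail coefficients. The additive-with-strength-`alpha` embedding rule is an assumption for illustration; the paper studies several embedding approaches and works on 2-D images:

```python
def haar_1d(x):
    """One level of the 1-D Haar transform: (approximation, detail) coefficients."""
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return a, d

def inv_haar_1d(a, d):
    """Inverse of haar_1d; perfectly reconstructs the even-length input."""
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def embed(cover, message, alpha=0.1):
    """Embed scaled message detail coefficients into the cover's details
    (hypothetical additive rule, not the paper's exact scheme)."""
    ca, cd = haar_1d(cover)
    _, md = haar_1d(message)
    stego_d = [c + alpha * m for c, m in zip(cd, md)]
    return inv_haar_1d(ca, stego_d)
```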

19 citations


Proceedings ArticleDOI
16 Dec 2013
TL;DR: It is shown that performance can be improved significantly without any change in the complexity of the algorithm (as compared to HEFT), and the new algorithm HEFT-NC (No-Cross) is compared with HEFT both in terms of speedup and schedule length.
Abstract: Scheduling applications efficiently on a network of computing systems is crucial for high performance. This problem is known to be NP-hard and is further complicated when applied to a CPU-GPU heterogeneous environment. Heuristic algorithms like Heterogeneous Earliest Finish Time (HEFT) have been shown to produce good results for other heterogeneous environments like Grids and Clusters. In this paper, we propose a novel optimization of this algorithm that takes advantage of the dissimilar execution times of the processors in the chosen environment. We optimize both the task ranking and the processor selection steps of the HEFT algorithm. By balancing the locally optimal result with the globally optimal result, we show that performance can be improved significantly without any change in the complexity of the algorithm (as compared to HEFT). Using randomly generated Directed Acyclic Graphs (DAGs), the new algorithm, HEFT-NC (No-Cross), is compared with HEFT both in terms of speedup and schedule length. We show that HEFT-NC outperforms the HEFT algorithm and is consistent across different graph shapes and task sizes.
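For reference, the task-ranking step of standard HEFT (the baseline being optimized here) computes an "upward rank" for each task: its average cost plus the most expensive path to an exit task. A sketch of that baseline ranking, not of the HEFT-NC modification itself:

```python
def upward_rank(succ, avg_cost, avg_comm):
    """HEFT upward rank: rank(t) = avg_cost[t] + max over successors s of
    (avg_comm[(t, s)] + rank(s)). Tasks are scheduled in decreasing rank order."""
    memo = {}
    def rank(t):
        if t not in memo:
            memo[t] = avg_cost[t] + max(
                (avg_comm.get((t, s), 0) + rank(s) for s in succ.get(t, ())),
                default=0,   # exit tasks have no successors
            )
        return memo[t]
    for t in avg_cost:
        rank(t)
    return memo
```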

18 citations


Proceedings ArticleDOI
16 Dec 2013
TL;DR: This work implemented a virtualization environment and wrote an algorithm to solve the traditional migration problem of having to manually determine whether a machine's load is too heavy; as a result, the purpose of automatic dynamic migration to balance server resources is achieved.
Abstract: In this work, we particularly focus on the use of free open-source software, so that end users do not need to spend a huge amount on software license fees. For cloud computing, virtualization technology delivers numerous benefits in addition to being one of the basic building blocks of a cloud environment. Through virtualization, enterprises can maximize working efficiency without the need to install more facilities in the computer room. In this study, we implemented a virtualization environment and performed experiments on it. Its main subject is how to use the OpenStack open-source software to build a cloud infrastructure with high availability and a dynamic resource allocation mechanism. It provides a private cloud solution for businesses and organizations, and belongs to Infrastructure as a Service (IaaS), one of the three service models of the cloud. For the user interface, a web interface was used to reduce the complexity of access to cloud resources. We measured the performance of live migration of virtual machines with different specifications and analyzed the data. Also, based on the live migration modes, we wrote an algorithm to solve the traditional migration problem of having to manually determine whether a machine's load is too heavy: the virtual machine load level is detected automatically, and the purpose of automatic dynamic migration to balance server resources is achieved.

18 citations


Proceedings ArticleDOI
16 Dec 2013
TL;DR: In this article, a novel parallel method based on MapReduce for large-scale attribute reduction is proposed, and several representative heuristic attribute reduction algorithms in rough set theory have been parallelized.
Abstract: Attribute reduction for big data is viewed as an important preprocessing step in the areas of pattern recognition, machine learning and data mining. In this paper, a novel parallel method based on MapReduce for large-scale attribute reduction is proposed. Using this method, several representative heuristic attribute reduction algorithms in rough set theory have been parallelized. Further, each of the improved parallel algorithms selects the same attribute reduct as its sequential version and therefore has the same classification accuracy. An extensive experimental evaluation shows that these parallel algorithms are effective for big data.
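As a loose illustration of the map/reduce decomposition (not the authors' algorithms), rough-set attribute reduction is driven by a dependency degree: the fraction of objects whose condition-attribute class is pure with respect to the decision. The map step can emit (condition-class, decision) pairs per data block and the reduce step groups them; the block layout and the `"d"` decision key are assumptions:

```python
from collections import defaultdict

def map_block(block, attrs):
    """Map step: emit (condition-class key, decision) pairs for one data block."""
    return [(tuple(row[a] for a in attrs), row["d"]) for row in block]

def reduce_pairs(all_pairs):
    """Reduce step: group decisions and object counts per condition class."""
    groups = defaultdict(set)
    counts = defaultdict(int)
    for key, d in all_pairs:
        groups[key].add(d)
        counts[key] += 1
    return groups, counts

def dependency(blocks, attrs):
    """Dependency degree: fraction of objects in consistent (pure) classes."""
    pairs = [p for b in blocks for p in map_block(b, attrs)]
    groups, counts = reduce_pairs(pairs)
    total = sum(counts.values())
    pure = sum(counts[k] for k, ds in groups.items() if len(ds) == 1)
    return pure / total
```

A heuristic reducer would then greedily add the attribute that raises this dependency the most.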

17 citations


Proceedings ArticleDOI
Chao-Tung Yang1, Jung-Chun Liu1, Wen-Hung Hsu1, Hsin-Wen Lu1, William C. Chu1 
16 Dec 2013
TL;DR: The purpose of this paper is to construct complete import tools and solutions based on HBase, a non-relational database, to facilitate easy access to data in HBase and to further integrate data.
Abstract: Currently, most health care systems used among divisions in medical centers still adopt the Excel file format for a variety of scale statistics, such as the clinical self-care ability scale of the Functional Independence Measure. Although people can further analyze Excel files using other statistical analysis software, such as SAS, SPSS, and STATA, they cannot effectively share the archived data in Excel among divisions. We propose to convert the format of these data and store them in a database. As a collection of Excel files cannot be shared with ease, we plan to use HBase, a non-relational database, to further integrate the data. The purpose of this paper is to construct complete import tools and solutions based on HBase to facilitate easy access to data in HBase. In addition, a visual interface is used to manage HBase, implementing user-friendly client connection tools for the HBase database.

15 citations


Proceedings ArticleDOI
16 Dec 2013
TL;DR: A structure capable of performing this task by means of triggers is proposed, along with a tool developed for the automatic generation of the SQL (Structured Query Language) code to create these triggers; its performance is shown and compared to other techniques.
Abstract: Nowadays large corporations require integrated data from diverse sources, leading to the use of data warehouse architectures for this purpose. To bypass problems related to the use of computational resources to process large volumes of data, an ETL (Extract, Transform and Load) technique with zero latency can be used, which works by constantly processing small data loads. Among the extraction techniques of zero-latency ETL are the use of logs, triggers, materialized views and timestamps. This paper proposes a structure capable of performing this task by means of triggers, along with a tool developed for the automatic generation of the SQL (Structured Query Language) code to create these triggers, and shows its performance compared to other techniques. This method is relevant for the extraction of portions of selected information, as it permits combining conventional and real-time ETL techniques.
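A toy version of such a generator might emit, for each source table, a trigger that copies new rows into a staging table for the zero-latency loader. The PostgreSQL-flavored SQL, the `staging_` naming convention, and the table/column names are all hypothetical; the paper does not specify its target DBMS or schema:

```python
def make_capture_trigger(table, columns):
    """Generate (hypothetical, PostgreSQL-style) SQL for an AFTER-INSERT trigger
    that copies each new row of `table` into `staging_<table>` for ETL pickup."""
    cols = ", ".join(columns)
    vals = ", ".join(f"NEW.{c}" for c in columns)
    return (
        f"CREATE OR REPLACE FUNCTION etl_capture_{table}() RETURNS trigger AS $$\n"
        f"BEGIN\n"
        f"  INSERT INTO staging_{table} ({cols}) VALUES ({vals});\n"
        f"  RETURN NEW;\n"
        f"END;\n"
        f"$$ LANGUAGE plpgsql;\n"
        f"CREATE TRIGGER trg_etl_{table} AFTER INSERT ON {table}\n"
        f"FOR EACH ROW EXECUTE FUNCTION etl_capture_{table}();"
    )
```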

13 citations


Proceedings ArticleDOI
16 Dec 2013
TL;DR: This paper defines and solves the problem of privacy-preserving ranked fuzzy keyword search over encrypted cloud data, and exploits the edit distance to quantify keyword similarity and dictionary-based fuzzy set construction to construct fuzzy keyword sets, which greatly reduces the index size, storage and communication costs.
Abstract: As Cloud Computing becomes popular, more and more data owners prefer to store their data into the cloud for great flexibility and economic savings. In order to protect the data privacy, sensitive data usually have to be encrypted before outsourcing, which makes effective data utilization a challenging task. Although traditional searchable symmetric encryption schemes allow users to securely search over encrypted data through keywords and selectively retrieve files of interest without capturing any relevance of data files or search keywords, and fuzzy keyword search on encrypted data allows minor typos and format inconsistencies, secure ranked keyword search captures the relevance of data files and returns the results that are wanted most by users. These techniques function unilaterally, which greatly reduces the system usability and efficiency. In this paper, for the first time, we define and solve the problem of privacy-preserving ranked fuzzy keyword search over encrypted cloud data. Ranked fuzzy keyword search greatly enhances system usability and efficiency when exact match fails. It returns the matching files in a ranked order with respect to certain relevance criteria (e.g., keyword frequency) based on keyword similarity semantics. In our solution, we exploit the edit distance to quantify keyword similarity and dictionary-based fuzzy set construction to construct fuzzy keyword sets, which greatly reduces the index size, storage and communication costs. We choose the efficient similarity measure of "coordinate matching", i.e., as many matches as possible, to obtain the relevance of data files to the search keywords.
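The two ingredients named in the abstract are standard and easy to sketch: Levenshtein edit distance to quantify keyword similarity, and a dictionary-based fuzzy set that keeps only real dictionary words within distance d (far smaller than enumerating every string at that distance). This is a plain-text sketch; the paper's index is built over encrypted data:

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def fuzzy_set(keyword, dictionary, d=1):
    """Dictionary-based fuzzy keyword set: dictionary words within distance d."""
    return {w for w in dictionary if edit_distance(keyword, w) <= d}
```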

13 citations


Proceedings ArticleDOI
16 Dec 2013
TL;DR: This paper proposes a new dynamic data race detector called SimpleLock that accurately detects data races in a scheduling insensitive manner with low execution overhead and results indicate that the execution overhead of SimpleLock is not much higher than that of FastTrack, the fastest happens-before race detector.
Abstract: Data races are one of the major causes of concurrency bugs in multithreaded programs, but they are hard to find due to nondeterministic thread scheduling. Data race detectors are essential tools that help long-suffering programmers to locate data races in multithreaded programs. One type of detector precisely detects data races but is sensitive to thread scheduling, whereas another type is less sensitive to thread scheduling but reports a considerable number of false positives. In this paper, we propose a new dynamic data race detector called SimpleLock that accurately detects data races in a scheduling-insensitive manner with low execution overhead. We reduce execution overhead by using two assumptions. The first is that most data races are caused by accesses to shared variables made without locks. The second is that the two accesses that cause a data race are not far apart from each other in an execution trace. The results of experiments conducted on the RoadRunner framework confirm that these assumptions are valid and that our SimpleLock detector can efficiently and accurately detect real and potential data races in one execution trace. The results also indicate that the execution overhead of SimpleLock is not much higher than that of FastTrack, the fastest happens-before race detector.
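The two assumptions suggest a simple windowed scan over an execution trace: flag pairs of nearby accesses to the same variable from different threads, at least one a write, holding no common lock. This is a lockset-style illustration of the assumptions, not the authors' actual SimpleLock algorithm; the event tuple layout and window size are invented:

```python
def find_races(trace, window=10):
    """Scan a trace of events (thread, var, locks_held, is_write) and report
    index pairs that look like races under the paper's two assumptions:
    unlocked access, and the two accesses are close together in the trace."""
    races = []
    for i, (t1, v1, l1, w1) in enumerate(trace):
        for j in range(i + 1, min(i + 1 + window, len(trace))):
            t2, v2, l2, w2 = trace[j]
            if (v1 == v2 and t1 != t2        # same variable, different threads
                    and (w1 or w2)           # at least one write
                    and not (l1 & l2)):      # no common lock held
                races.append((i, j))
    return races
```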

Proceedings ArticleDOI
16 Dec 2013
TL;DR: This work summarizes and classifies the research effort conducted on fault management in cloud computing, providing a good starting point for further research in this area.
Abstract: Background: The large computational infrastructures required to provide the on-demand services that most users are now used to are more prone to failures than any single computational device. Thus, fault management is an essential activity in the realization of the cloud computing model. Aims: This work aims at identifying well-explored topics on fault management in cloud computing as well as pinpointing gaps in the scientific literature that may represent opportunities for further research and development in this area. Method: We conducted a systematic mapping study to collect, filter and classify scientific works in this area. The 4535 scientific papers found on major search engines were filtered, and the remaining 166 papers were classified according to a taxonomy described in this work. Results: We found that IaaS is the most explored service model in the selected studies. The main dependability functions explored were Tolerance and Removal, and the main attributes were Reliability and Availability. Most papers were classified by research type as Solution Proposal. Conclusion: This work summarizes and classifies the research effort conducted on fault management in cloud computing, providing a good starting point for further research in this area.

Proceedings ArticleDOI
16 Dec 2013
TL;DR: The objective of this work is to evaluate the solution regarding the performance gains achieved with the use of the framework, showing that it is possible to improve the software testing process with very little configuration overhead and low costs.
Abstract: This work presents the Cloud Testing framework, a solution to parallelize the execution of a test suite over a distributed cloud infrastructure. The use of a cloud as the runtime environment for automated software testing provides a more efficient and effective solution compared to traditional methods regarding the exploration of diversity and heterogeneity for testing coverage. The objective of this work is to evaluate our solution regarding the performance gains achieved with the use of the framework, showing that it is possible to improve the software testing process with very little configuration overhead and low costs.

Proceedings ArticleDOI
16 Dec 2013
TL;DR: A heuristic is proposed for planning a set of tours (closed paths) through a network such that every node is at most l-hops away from at least one tour, and all tours are connected.
Abstract: We consider the problem of planning a set of tours (closed paths) through a network such that every node is at most l hops away from at least one tour, and all tours are connected. A set of tours is called connected in this work if there exists a path between any two nodes on the tours that lies completely within the set of tours. In other words, in a connected set of tours, we do not have to leave the tours to travel between any two tour nodes. The problem naturally involves steps related to finding extended dominating sets, travelling salesman tours and forwarding trees such that the cost of data gathering is minimized. We propose a heuristic for this problem that considers as costs the tour length and the multi-hop forwarding traffic. We evaluate the new heuristic experimentally in various settings, and also compare it against previously proposed approaches for related data gathering problems.

Proceedings ArticleDOI
16 Dec 2013
TL;DR: This paper proposes a novel key-value storage that keeps all data in RAM at all times by aggregating the resources of potentially many nodes in a data center, and provides persistence by a novel SSD-aware logging approach that allows failed nodes to be recovered very quickly.
Abstract: Large-scale interactive applications and real-time data processing are facing problems with traditional disk-based storage solutions. Because of their often irregular access patterns, they must keep almost all data in RAM caches, which need to be manually synchronized with secondary storage and need a lot of time to be re-loaded in case of power outages. In this paper we propose a novel key-value storage that keeps all data in RAM at all times by aggregating the resources of potentially many nodes in a data center. We aim at supporting the management of billions of small data objects (16-64 bytes), as needed, for example, for storing graphs. Scalable low-overhead meta-data management is realized using a novel range-based ID approach combined with a super-overlay network. Furthermore, we provide persistence by a novel SSD-aware logging approach that allows failed nodes to be recovered very quickly.

Proceedings ArticleDOI
16 Dec 2013
TL;DR: Every learner is assigned to at least one group and, instead of learners studying by interacting directly with other learners, a group leader is responsible for raising the members' knowledge, i.e., for exploring for the optimal solution.
Abstract: Teaching-learning based optimization (TLBO), inspired by the teaching-learning process in a classroom, is a newly developed population-based algorithm. Apart from the population size and the maximum number of iterations, it does not require any specific parameters. TLBO consists of two search phases, the teacher phase and the learner phase. In this paper, every learner is assigned to at least one group and, instead of a learner studying by interacting directly with other learners, a group leader is responsible for raising the members' knowledge, i.e., for exploring for the optimal solution. The idea is analogous to a group discussion in which the group leader always dominates the direction and performance of the discussion. For simplicity, the proposed algorithm is denoted LTLBO. The effectiveness of the method is tested on many benchmark problems with different characteristics, and the results are compared with the original TLBO and particle swarm optimization (PSO).
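For context, plain TLBO (the baseline the paper modifies) can be sketched as below: a teacher phase that pulls each learner toward the best solution and away from the population mean, and a learner phase where a learner improves by comparing with a random peer. The LTLBO group-leader variant would replace the direct peer interaction with leaders; parameter values here are illustrative:

```python
import random

def tlbo(f, dim, bounds, pop=20, iters=100, seed=7):
    """Minimize f with plain TLBO (teacher phase + learner phase).
    `bounds` is only used to initialize the population in this sketch."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        teacher = min(X, key=f)
        mean = [sum(x[d] for x in X) / pop for d in range(dim)]
        for i in range(pop):
            # Teacher phase: move toward teacher, away from the class mean.
            tf = rng.choice((1, 2))  # teaching factor
            cand = [X[i][d] + rng.random() * (teacher[d] - tf * mean[d])
                    for d in range(dim)]
            if f(cand) < f(X[i]):
                X[i] = cand
            # Learner phase: learn from (or move away from) a random peer.
            j = rng.randrange(pop)
            if j != i:
                sign = 1 if f(X[j]) < f(X[i]) else -1
                cand = [X[i][d] + sign * rng.random() * (X[j][d] - X[i][d])
                        for d in range(dim)]
                if f(cand) < f(X[i]):
                    X[i] = cand
    return min(X, key=f)
```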

Proceedings ArticleDOI
16 Dec 2013
TL;DR: The study results show that the Approx-real model can better approximate the most accurate existing antenna models than other simplified antenna models, while introducing no high computational overhead.
Abstract: This paper concerns the impact of various antenna models on the network connectivity of wireless ad hoc networks. Existing antenna models have their pros and cons in terms of how accurately they reflect realistic antennas and their computational complexity. We therefore propose a new directional antenna model called Approx-real to balance accuracy against complexity. We then run extensive simulations to compare the existing models and the Approx-real model in terms of network connectivity. The study results show that the Approx-real model approximates the most accurate existing antenna models better than other simplified antenna models do, while introducing no high computational overhead.

Proceedings ArticleDOI
16 Dec 2013
TL;DR: This work describes a framework proposed for automatically indexing information based on semantics and on text mining techniques, which works in parallel with the insertion of data into electronic records.
Abstract: The volume of digital information has grown considerably in the last two decades, and there is currently huge concern about obtaining this content quickly and effectively. The health sector is no different: retrieving from medical records the relevant information about treatments and the progression of clinical conditions may speed up the diagnosis of new patients. This work describes a framework proposed for automatically indexing information based on semantics and on text mining techniques. This task works in parallel with the insertion of data into electronic records. The original contribution is a search engine over texts organized so as to maximize the number of relevant results obtained, as evidenced by the experiments carried out. The stored information is automatically fragmented into words, which are backed by a semantic dictionary that enables information retrieval through semantics.

Proceedings ArticleDOI
Chao-Tung Yang1, Jung-Chun Liu1, Chi-Jui Liao1, Chia-Cheng Wu1, Fang-Yie Le1 
16 Dec 2013
TL;DR: An intelligent environment monitoring feedback system is built to collect data on the physical condition of employees and the air quality of the working environment; it displays the collected data in a real-time interface and sends out warning messages to prevent accidents.
Abstract: Along with the improvement of sanitary conditions and changes in lifestyle, people have begun to pay attention to the modern concepts of health promotion and preventive medicine. Therefore, we built an intelligent environment monitoring feedback system to collect data on the physical condition of employees and the air quality of the working environment, display the collected data in a real-time interface, and send out warning messages to prevent accidents. We hope that, based on these real-time data, the proposed system can help people make correct and timely decisions, and act in time to maintain a beneficial environment in the monitored area.

Proceedings ArticleDOI
16 Dec 2013
TL;DR: Experimental results indicate that the training speed of P-PDIPM on GPU is almost 40x faster than that of the serial version (S-PDIPM) on CPU, and that without extensive optimization, P-PDIPM can obtain about an 8x speedup over the state-of-the-art tool LIBSVM while maintaining high prediction accuracy.
Abstract: The training of an SVM can be viewed as a Convex Quadratic Programming (CQP) problem, which becomes difficult to solve when dealing with large-scale data sets. Traditional methods for SVM training, such as Sequential Minimal Optimization (SMO), solve a sequence of small-scale sub-problems, which costs a large amount of computation time and is hard to accelerate using the computational power of a GPU. Although an Interior Point Method (IPM) such as the primal-dual interior point method (PDIPM) can also address SVM training well and has favourable potential for parallelization on a GPU, it has comparatively high time complexity O(l^3) and space complexity O(l^2), where l is the number of training instances. Fortunately, by invoking low-rank approximation methods such as Incomplete Cholesky Factorization (ICF) and the Sherman-Morrison-Woodbury formula (SMW), the storage and computation requirements of PDIPM can be reduced significantly. In this paper, a parallel PDIPM method (P-PDIPM) along with a parallel ICF method (P-ICF) is proposed to accelerate SVM training on the GPU. Experimental results indicate that the training speed of P-PDIPM on the GPU is almost 40x faster than that of the serial version (S-PDIPM) on the CPU. Besides, without extensive optimization, P-PDIPM can obtain about an 8x speedup over the state-of-the-art tool LIBSVM while maintaining high prediction accuracy.

Proceedings ArticleDOI
16 Dec 2013
TL;DR: A NoC implementation of the Midimew-connected Mesh Network (MMN) is proposed; MMN is a Minimal Distance Mesh with Wrap-around (Midimew) links network of multiple basic modules, in which the basic modules are 2D-mesh networks that are hierarchically interconnected to form higher-level networks.
Abstract: The architecture of the interconnection network plays a significant role in the performance and energy consumption of Network-on-Chip (NoC) systems. In this paper we propose a NoC implementation of the Midimew-connected Mesh Network (MMN). MMN is a Minimal Distance Mesh with Wrap-around (Midimew) links network of multiple basic modules, in which the basic modules are 2D-mesh networks that are hierarchically interconnected to form higher-level networks. For implementing all the links of a level-3 MMN, at least 4 layers are needed, which is feasible with current and future VLSI technologies. With an innovative combination of diagonal and hierarchical structure, MMN possesses several attractive features, including constant node degree, small diameter, low cost, small average distance, and moderate bisection width compared with other conventional and hierarchical interconnection networks.

Proceedings ArticleDOI
16 Dec 2013
TL;DR: A dynamic thread scheduler which co-optimizes both the data locality and load balance on a GPGPU is proposed and the results show that the proposed approach reduces the overall execution cycles by up to 16% when compared with other approaches concerning only one objective.
Abstract: Modern GPGPUs implement on-chip shared cache to better exploit the data reuse of various general purpose applications. Given the massive amount of concurrent threads in a GPGPU, striking the balance between Data Locality and Load Balance has become a critical design concern. To achieve the best performance, the trade-off between these two factors needs to be performed concurrently. This paper proposes a dynamic thread scheduler which co-optimizes both the data locality and load balance on a GPGPU. The proposed approach is evaluated using three applications with various input datasets. The results show that the proposed approach reduces the overall execution cycles by up to 16% when compared with other approaches concerning only one objective.

Proceedings ArticleDOI
16 Dec 2013
TL;DR: This paper proposes a new parallel method for Ant Colony Optimization (ACO) for Traveling Salesman Problems (TSP) on GPUs, called the Transition Condition Method, which achieves a maximal speed-up factor of 4.74 over the previous parallel method.
Abstract: Graphics Processing Units (GPUs) have recently evolved into super multi-core, fully programmable architectures. In the CUDA programming model, programmers can simply implement the parallelism of a task on GPUs. The purpose of this paper is to accelerate Ant Colony Optimization (ACO) for Traveling Salesman Problems (TSP) with GPUs. In this paper, we propose a new parallel method called the Transition Condition Method. Experimental results are extensively compared and evaluated in terms of both performance and solution quality. TSP problems are used as a standard benchmark for our experiments. In terms of experimental results, our new parallel method achieves a maximal speed-up factor of 4.74 over the previous parallel method. On the other hand, the quality of the solutions is similar to that of the original sequential ACO algorithm, which proves that solution quality is not sacrificed for the sake of speed-up.
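For readers unfamiliar with the transition step being parallelized, the standard (sequential) ACO state-transition rule picks the next city by roulette-wheel selection with probability proportional to pheromone^alpha * (1/distance)^beta. This sketch shows that standard rule, not the paper's GPU Transition Condition Method:

```python
import random

def next_city(current, unvisited, pheromone, dist, alpha=1.0, beta=2.0, rng=random):
    """Roulette-wheel selection: p(j) proportional to tau^alpha * (1/d)^beta."""
    weights = [(j, (pheromone[current][j] ** alpha)
                   * ((1.0 / dist[current][j]) ** beta))
               for j in unvisited]
    total = sum(w for _, w in weights)
    r = rng.random() * total
    for j, w in weights:
        r -= w
        if r <= 0:
            return j
    return weights[-1][0]   # guard against floating-point rounding
```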

Proceedings ArticleDOI
16 Dec 2013
TL;DR: A new parallel skyline query algorithm is presented that can eliminate redundant computations and improve parallelism of the skyline query and successfully exploits the features of multiple cores to improve the performance of skyline computation for large high-dimensional datasets.
Abstract: The skyline query is an efficient data analysis tool for multi-criteria decision making that has received significant attention in the database community. As multi-core architectures have gone mainstream, we present a new parallel skyline query algorithm that can be applied to multi-core and multiprocessor systems to progressively return skyline points as they are identified. In this paper, we propose a parallel skyline algorithm which can eliminate redundant computations and improve the parallelism of the skyline query. Experimental results show that our algorithm successfully exploits the features of multiple cores to improve the performance of skyline computation for large high-dimensional datasets.
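The core of any skyline algorithm is the dominance test. A minimal sequential block-nested-loop version is shown below; a parallel version along the lines the abstract describes would partition the points across cores, compute local skylines, and merge them with the same test. This baseline is for illustration and is not the authors' optimized algorithm:

```python
def dominates(p, q):
    """p dominates q if p <= q in every dimension and p < q in at least one
    (assuming smaller values are better)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Block-nested-loop skyline: keep points dominated by no other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```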

Proceedings ArticleDOI
16 Dec 2013
TL;DR: A type of game theoretic dynamics in a network model where all nodes act selfishly and will forward packets only if it is to their benefit is considered, which shows the rather surprising fact that a non-negligible amount of non-trivial flow is successfully transmitted.
Abstract: We consider a type of game theoretic dynamics in a network model where all nodes act selfishly and will forward packets only if it is to their benefit. The model we present assumes that each node receives utility from successfully sending its own flow to its destination(s) and from receiving flow, while it pays a cost (e.g., battery energy) for its transmissions. Each node has to decide whether to relay flow as an intermediate node from other sources, as relaying incurs only costs. To induce nodes into acting as intermediaries, the model implements a reputation-based mechanism which punishes non-cooperative nodes by cutting off links to them, a decision that is made in a very local fashion. In our setting, the nodes know only the state of the network in their local neighborhood, and can only decide on the amount of the flow on their outgoing edges, unlike the previously considered models where users have full knowledge of the network and can also decide the routing of flow originating from them. Given the opportunistic nature of the nodes and their very limited knowledge of the network, our simulations show the rather surprising fact that a non-negligible amount of non-trivial flow (flow over at least two hops) is successfully transmitted.

Proceedings ArticleDOI
16 Dec 2013
TL;DR: A novel random key revocation protocol is presented that is suitable for large-scale networks and removes compromised information efficiently; it guarantees network security with low memory consumption and communication overhead.
Abstract: In recent years, several random key pre-distribution schemes have been proposed for wireless sensor networks. However, the problem of key and node revocation has received less attention. In this paper, we present a novel random key revocation protocol that is suitable for large-scale networks and removes compromised information efficiently. The proposed revocation protocol guarantees network security with low memory consumption and communication overhead. By combining centralized and distributed revocation, the protocol achieves both timeliness and accuracy. The simulation results show that our protocol outperforms existing protocols in terms of revocation validity and revocation velocity, and prolongs the network lifetime.
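The distributed side of such a scheme can be sketched as follows (a hypothetical simplification, not the paper's protocol): when a node learns the IDs of compromised keys, e.g. via a signed broadcast from the base station, it drops them from its local key ring, and any link secured only by a revoked key is cut.

```java
import java.util.*;

public class KeyRevocation {
    // Remove broadcast-revoked key IDs from one node's key ring.
    // Returned as a sorted set only to make the result deterministic.
    static SortedSet<Integer> revoke(Set<Integer> keyRing, Set<Integer> compromised) {
        SortedSet<Integer> remaining = new TreeSet<>(keyRing);
        remaining.removeAll(compromised);
        return remaining;
    }

    public static void main(String[] args) {
        Set<Integer> ring = Set.of(3, 7, 12, 19); // keys held by one node
        Set<Integer> revoked = Set.of(7, 19);     // IDs announced as compromised
        System.out.println(revoke(ring, revoked)); // prints [3, 12]
    }
}
```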

Proceedings ArticleDOI
16 Dec 2013
TL;DR: This work uses the building files of ISG to construct the file system of a web-based system, with each attribute of an object specified for translation into Java, avoiding cooperating processes that share data and the inconsistencies that result.
Abstract: We previously developed two tools, ISG and DWL: ISG generates information systems, and DWL generates web systems. We have used ISG and DWL to develop a customized web-based system for the 7th Ubiquitous-Home Conference (UHC2013), as well as other web-based systems. The advantage of these web-based systems is that they use an object serialization mechanism to fill objects with data, which saves CPU time. We used the building files of ISG to build the file system of a web-based system, specifying each attribute of an object so that these settings can be translated into Java. We wrote input/output programs to read and write data, and the data objects thus created can be stored and retrieved efficiently. Our web-based systems avoid running cooperating processes that share data, which would risk inconsistencies in the shared data. Companies produce new products frequently, and the web pages for those products must be updated quickly; ISG addresses this problem.
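The serialization idea can be illustrated with a minimal Java sketch. The `Product` class here is a hypothetical stand-in for an ISG-generated data object; the real attribute layout would come from the ISG building files.

```java
import java.io.*;

public class ProductStore {
    // Hypothetical data object; in ISG the attributes would be
    // specified in building files and translated to Java.
    static class Product implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        double price;
        Product(String name, double price) { this.name = name; this.price = price; }
    }

    // One writeObject call persists the whole object, so no
    // field-by-field parsing is needed when a page is rebuilt.
    static void save(Product p, File f) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(f))) {
            out.writeObject(p);
        }
    }

    static Product load(File f) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(f))) {
            return (Product) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("product", ".ser");
        f.deleteOnExit();
        save(new Product("widget", 9.99), f);
        System.out.println(load(f).name); // prints widget
    }
}
```

Because each object graph lives in its own file, no two processes need to share mutable data, which is the consistency property the abstract highlights.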

Proceedings ArticleDOI
16 Dec 2013
TL;DR: This paper first formulates the problem with a popular background-knowledge attack model named the rank attack, then provides an anonymization-based method to protect the released data from such attacks, and takes data utility into consideration by defining specific information loss metrics.
Abstract: Cloud computing is becoming increasingly popular due to its power in providing high-performance and flexible service capabilities. More and more internet users have accepted this innovative service model and use various cloud-based services every day. This service-usage data is quite valuable for marketing purposes, as it can reflect a user's interests and service-usage patterns; this raises privacy issues. Recently, many studies have focused on access control and other traditional security problems in the cloud, but little work has addressed the publishing of private service data. In this paper, we study the private service data publishing problem by representing the data as a hypergraph, which efficiently captures complex relationships among users. We first formulate the problem with a popular background-knowledge attack model named the rank attack, and then provide an anonymization-based method to protect the released data from such attacks. We also take data utility into consideration by defining specific information loss metrics. The performance of the methods has been validated on two synthetic datasets.
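The intuition behind defending a rank attack can be sketched as a k-anonymity condition on ranks (our own illustrative simplification, not the paper's method): if a user's rank, say the number of services they use, is shared by at least k users, an attacker who knows the victim's rank cannot single them out.

```java
import java.util.*;

public class RankAnonymity {
    // Check whether a published rank (degree) sequence is k-anonymous:
    // every rank value must occur for at least k users.
    static boolean isRankAnonymous(int[] ranks, int k) {
        Map<Integer, Integer> count = new HashMap<>();
        for (int r : ranks) count.merge(r, 1, Integer::sum);
        return count.values().stream().allMatch(c -> c >= k);
    }

    public static void main(String[] args) {
        System.out.println(isRankAnonymous(new int[]{2, 2, 3, 3}, 2)); // prints true
        System.out.println(isRankAnonymous(new int[]{2, 2, 5, 3}, 2)); // prints false: ranks 5 and 3 are unique
    }
}
```

An anonymization algorithm would then modify the hypergraph (merging or generalizing users) until this predicate holds, while minimizing an information loss metric.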

Proceedings ArticleDOI
16 Dec 2013
TL;DR: The data service provisioning framework that builds up the data infrastructure, focusing on the deployment of data integration services in the hybrid Cloud, the associated mechanism for securing access to patient-specific datasets, and performance results for different deployment scenarios relevant within the scope of the project are presented.
Abstract: The European VPH-Share project develops a comprehensive service framework with the objective of sharing clinical data, information, models and workflows focusing on the analysis of the human physiopathology within the Virtual Physiological Human (VPH) community. The project envisions an extensive and dynamic data infrastructure built on top of a secure hybrid Cloud environment. This paper presents the data service provisioning framework that builds up the data infrastructure, focusing on the deployment of data integration services in the hybrid Cloud, the associated mechanism for securing access to patient-specific datasets, and performance results for different deployment scenarios relevant within the scope of the project.

Proceedings ArticleDOI
16 Dec 2013
TL;DR: This work considers a simple imperative language with fork/join parallelism and lexically scoped nested atomic sections and contributes a formal operational semantics of this language that satisfies a specification on execution traces designed in a companion paper.
Abstract: We consider a simple imperative language with fork/join parallelism and lexically scoped nested atomic sections from which threads can escape. In this context, our contribution is a formal operational semantics of this language that satisfies a specification on execution traces designed in a companion paper.