Showing papers presented at "Parallel and Distributed Processing Techniques and Applications in 2015"
01 Jan 2015
TL;DR: A decision-making strategy for global adaptive offloading that distributes application components to community-based clouds formed from multiple collaborating peers is formulated; a max-min technique maximizes the minimum TTF to balance energy consumption across collaborating devices.
Abstract: Adaptive offloading systems achieve context-specific optimization on mobile and pervasive devices by offloading computational components to a resource-rich remote server or cloud. However, with the recent advancement in the computational capacity of mobile and pervasive devices, adaptive offloading could facilitate the formation of ad-hoc cloud-like environments using collections of mobile and pervasive devices, with reduced reliance on centralized infrastructure. Therefore, in this paper, we formulate a decision-making strategy for global adaptive offloading that distributes application components to community-based clouds formed from multiple collaborating peers. The goal is to extend the collaboration and application lifetime by optimizing the Time to Failure (TTF) of devices due to energy depletion, while meeting application-specific performance constraints. Specifically, a max-min technique is used to maximize the minimum TTF in order to balance energy consumption across collaborating devices. The efficacy, performance and scalability of the formulated model were evaluated; the proposed algorithm, using integer linear programming, produced an optimal solution to the specified model in affordable time and energy for a range of application and collaboration sizes.
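The max-min placement objective described in this abstract can be illustrated with a toy brute-force sketch. The device names, battery budgets, component costs, and the simple TTF model below are all hypothetical, and the paper itself solves the model with integer linear programming rather than enumeration:

```python
# Hedged sketch of the max-min TTF placement idea: assign application
# components to collaborating devices so that the minimum Time to Failure
# (TTF) across devices is maximized. All numbers are illustrative.
from itertools import product

# Hypothetical per-device battery budgets (J) and per-component energy
# costs (J) when a component runs on a given device.
budgets = {"dev0": 100.0, "dev1": 80.0}
cost = {  # cost[component][device]
    "c0": {"dev0": 30.0, "dev1": 20.0},
    "c1": {"dev0": 40.0, "dev1": 25.0},
    "c2": {"dev0": 20.0, "dev1": 35.0},
}

def min_ttf(assignment):
    """TTF of a device ~ remaining budget / total load placed on it.
    The collaboration fails when the first device dies, so the objective
    is the minimum TTF across all devices."""
    load = {d: 0.0 for d in budgets}
    for comp, dev in assignment.items():
        load[dev] += cost[comp][dev]
    # An idle device never fails from this load; treat it as long-lived.
    return min(budgets[d] / load[d] if load[d] > 0 else float("inf")
               for d in budgets)

def max_min_assignment():
    """Enumerate every placement and keep the one that maximizes the
    minimum TTF (tractable only at toy sizes; ILP scales much further)."""
    comps, devs = list(cost), list(budgets)
    best, best_val = None, float("-inf")
    for choice in product(devs, repeat=len(comps)):
        assignment = dict(zip(comps, choice))
        val = min_ttf(assignment)
        if val > best_val:
            best, best_val = assignment, val
    return best, best_val
```

With these numbers the balanced placement splits the load so that neither device's TTF drops far below the other's, which is exactly the energy-balancing effect the max-min objective targets.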
1 citation
•
01 Jan 2015
TL;DR: A significantly improved implementation of a parallel SVM algorithm (PSVM) together with a comprehensive experimental study shows that there exists a threshold between the number of computational cores and the training time, and that choosing an appropriate value of p affects the choice of the C and gamma parameters as well as the accuracy.
Abstract: We present a significantly improved implementation of a parallel SVM algorithm (PSVM) together with a comprehensive experimental study. The Support Vector Machine (SVM) is one of the best-known machine learning classification techniques. PSVM employs the Interior Point Method, a solver for SVM problems with high potential for parallelism. We improve PSVM's structure and memory management for contemporary processor architectures. We perform a number of experiments and study the impact of the reduced column size p and other important parameters, such as C and gamma, on the class-prediction accuracy and training time. The experimental results show that there exists a threshold between the number of computational cores and the training time, and that choosing an appropriate value of p affects the choice of the C and gamma parameters as well as the accuracy.
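The reduced column size p refers to a low-rank (pivoted incomplete Cholesky) factorisation of the kernel matrix, which is what lets PSVM distribute the problem across workers. A minimal pure-Python sketch of that factorisation follows; the 1-D data points, gamma, and p values are made up for illustration and are not from the paper's experiments:

```python
# Hedged sketch: greedy pivoted (incomplete) Cholesky factorisation of an
# RBF kernel matrix, the kind of n x p reduction behind PSVM's parameter p.
import math

def rbf_kernel(xs, gamma):
    """Full RBF kernel matrix K[i][j] = exp(-gamma * (x_i - x_j)^2)
    for 1-D points (illustrative; real SVM data is multi-dimensional)."""
    return [[math.exp(-gamma * (a - b) ** 2) for b in xs] for a in xs]

def pivoted_cholesky(K, p):
    """Return an n x p factor G with K ~= G G^T, pivoting greedily on the
    largest residual diagonal entry at each step."""
    n = len(K)
    G = [[0.0] * p for _ in range(n)]
    d = [K[i][i] for i in range(n)]              # residual diagonal
    for k in range(p):
        j = max(range(n), key=lambda i: d[i])    # pivot index
        pj = math.sqrt(max(d[j], 0.0))
        if pj == 0.0:
            break                                # matrix fully captured
        G[j][k] = pj
        for i in range(n):
            if i != j:
                s = sum(G[i][t] * G[j][t] for t in range(k))
                G[i][k] = (K[i][j] - s) / pj
            d[i] -= G[i][k] ** 2
    return G

def frob_error(K, G):
    """Frobenius norm of K - G G^T, to see how error shrinks with p."""
    n, p = len(K), len(G[0])
    err = 0.0
    for i in range(n):
        for j in range(n):
            approx = sum(G[i][t] * G[j][t] for t in range(p))
            err += (K[i][j] - approx) ** 2
    return math.sqrt(err)

xs = [0.0, 0.5, 1.0, 2.0, 3.0]
K = rbf_kernel(xs, 1.0)
# Approximation error is non-increasing in p, and exact at p = n.
errs = [frob_error(K, pivoted_cholesky(K, p)) for p in (1, 3, 5)]
```

Larger p captures more of the kernel matrix but costs more memory and time per worker, which is the accuracy/training-time trade-off the abstract's experiments explore.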
1 citation