
Showing papers on "Software portability published in 2008"


Book
24 Nov 2008
TL;DR: Professional Android Application Development gives you the grounding and knowledge you need to write applications using the current SDK, along with the flexibility to quickly adapt to future enhancements, so you can construct increasingly complex, useful, and innovative mobile applications for Android phones.
Abstract: A hands-on guide to building mobile applications, Professional Android Application Development features concise and compelling examples that show you how to quickly construct real-world mobile applications for Android phones. Fully up to date for version 1.0 of the Android software development kit, it covers all the essential features and explores the advanced capabilities of Android (including GPS, accelerometers, and background Services) to help you construct increasingly complex, useful, and innovative mobile applications for Android phones. What this book includes: An introduction to mobile development, Android, and how to get started. An in-depth look at Android applications and their life cycle, the application manifest, Intents, and using external resources. Details for creating complex and compelling user interfaces by using, extending, and creating your own layouts and Views and using Menus. A detailed look at data storage, retrieval, and sharing using preferences, files, databases, and Content Providers. Instructions for making the most of mobile portability by creating rich map-based applications as well as using location-based services and the geocoder. A look at the power of background Services, using threads, and a detailed look at Notifications. Coverage of Android's communication abilities, including SMS, the telephony APIs, network management, and a guide to using Internet resources. Details for using Android hardware, including media recording and playback, using the camera, accelerometers, and compass sensors. Advanced development topics, including security, IPC, advanced 2D/3D graphics techniques, and user-hardware interaction. Who this book is for: This book is for anyone interested in creating applications for the Android mobile phone platform. It includes information that will be valuable whether you're an experienced mobile developer or making your first foray, via Android, into writing mobile applications.
It will give the grounding and knowledge you need to write applications using the current SDK, along with the flexibility to quickly adapt to future enhancements.

366 citations


Journal ArticleDOI
TL;DR: Stage’s scalability is examined to suggest that it may be useful for swarm robotics researchers who would otherwise use custom simulators, with their attendant disadvantages in terms of code reuse and transparency.
Abstract: Stage is a C++ software library that simulates multiple mobile robots. Stage version 2, as the simulation backend for the Player/Stage system, may be the most commonly used robot simulator in research and university teaching today. Development of Stage version 3 has focused on improving scalability, usability, and portability. This paper examines Stage’s scalability.

343 citations


Proceedings Article
07 Dec 2008
TL;DR: This analysis shows that a model based on OS utilization metrics and CPU performance counters is generally most accurate across the machines and workloads tested, and is particularly useful for machines whose dynamic power consumption is not dominated by the CPU, as well as machines with aggressively power-managed CPUs.
Abstract: Dynamic power management in enterprise environments requires an understanding of the relationship between resource utilization and system-level power consumption. Power models based on resource utilization have been proposed in the context of enabling specific energy-efficiency optimizations on specific machines, but the accuracy and portability of different approaches to modeling have not been systematically compared. In this work, we use a common infrastructure to fit a family of high-level full-system power models, and we compare these models over a wide variation of workloads and machines, from a laptop to a server. This analysis shows that a model based on OS utilization metrics and CPU performance counters is generally most accurate across the machines and workloads tested. It is particularly useful for machines whose dynamic power consumption is not dominated by the CPU, as well as machines with aggressively power-managed CPUs, two classes of systems that are increasingly prevalent.
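The modeling approach the paper compares can be sketched as a simple regression: fit a linear full-system power model over OS utilization metrics and performance-counter readings, then predict power from live measurements. The features, coefficients, and sample data below are invented for illustration and do not reproduce the paper's actual model family or fitting infrastructure.

```python
# Hedged sketch: a linear full-system power model P ≈ c0 + c1*util + c2*mem,
# fit by ordinary least squares. The samples are synthetic, generated from a
# known linear relation so the fit recovers it exactly; real training data
# would come from instrumented workload runs.
import numpy as np

# Columns: [cpu_utilization, memory_accesses_per_sec]; targets: measured watts.
features = np.array([
    [0.10, 1e6],
    [0.50, 5e6],
    [0.90, 9e6],
    [0.30, 8e6],
    [0.70, 2e6],
])
watts = np.array([55.0, 75.0, 95.0, 75.0, 75.0])  # = 50 + 30*util + 2e-6*mem

# Prepend an intercept column and solve the least-squares problem.
X = np.column_stack([np.ones(len(features)), features])
coeffs, *_ = np.linalg.lstsq(X, watts, rcond=None)

def predict_power(util, mem):
    """Predict full-system watts from OS utilization and a counter reading."""
    return coeffs[0] + coeffs[1] * util + coeffs[2] * mem
```

Comparing such models across machines then reduces to comparing prediction error of different feature sets, which is essentially what the paper's common infrastructure automates.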

315 citations


Proceedings ArticleDOI
07 Dec 2008
TL;DR: This work examines appliance contextualization needs and presents an architecture for secure, consistent, and dynamic contextualization, in particular for groups of appliances that must work together in a shared security context, and introduces the concept of a standalone context broker.
Abstract: As virtual appliances become more prevalent, we encounter the need to stop manually adapting them to their deployment context each time they are deployed. We examine appliance contextualization needs and present an architecture for secure, consistent, and dynamic contextualization, in particular for groups of appliances that must work together in a shared security context. This architecture allows for programmatic cluster creation and use, and mitigates potential errors and unnecessary charges during setup time. For portability across many deployment mechanisms, we introduce the concept of a standalone context broker. We describe the current implementation of the entire architecture using the virtual workspaces toolkit, showing real-life examples of dynamically contextualized Grid clusters.

197 citations


Proceedings ArticleDOI
23 Jun 2008
TL;DR: Harmony, a runtime supported programming and execution model that provides semantics for simplifying parallelism management, dynamic scheduling of compute intensive kernels to heterogeneous processor resources, and online monitoring driven performance optimization for heterogeneous many core systems is proposed.
Abstract: The emergence of heterogeneous many core architectures presents a unique opportunity for delivering order of magnitude performance increases to high performance applications by matching certain classes of algorithms to specifically tailored architectures. Their ubiquitous adoption, however, has been limited by a lack of programming models and management frameworks designed to reduce the high degree of complexity of software development intrinsic to heterogeneous architectures. This paper proposes Harmony, a runtime supported programming and execution model that provides: (1) semantics for simplifying parallelism management, (2) dynamic scheduling of compute intensive kernels to heterogeneous processor resources, and (3) online monitoring driven performance optimization for heterogeneous many core systems. We are particularly concerned with simplifying development and ensuring binary portability and scalability across system configurations and sizes. Initial results from ongoing development demonstrate binary compatibility with a variable number of cores, as well as dynamic adaptation of schedules to data sets. We present preliminary results of key features for some benchmark applications.

194 citations


Proceedings Article
22 Jun 2008
TL;DR: Vx32 is evaluated using microbenchmarks and whole system benchmarks, and four applications based on vx32 are examined: an archival storage system, an extensible public-key infrastructure, an experimental user-level operating system running atop another host OS, and a Linux system call jail.
Abstract: Code sandboxing is useful for many purposes, but most sandboxing techniques require kernel modifications, do not completely isolate guest code, or incur substantial performance costs. Vx32 is a multipurpose user-level sandbox that enables any application to load and safely execute one or more guest plug-ins, confining each guest to a system call API controlled by the host application and to a restricted memory region within the host's address space. Vx32 runs guest code efficiently on several widespread operating systems without kernel extensions or special privileges; it protects the host program from both reads and writes by its guests; and it allows the host to restrict the instruction set available to guests. The key to vx32's combination of portability, flexibility, and efficiency is its use of x86 segmentation hardware to sandbox the guest's data accesses, along with a lightweight instruction translator to sandbox guest instructions. We evaluate vx32 using microbenchmarks and whole system benchmarks, and we examine four applications based on vx32: an archival storage system, an extensible public-key infrastructure, an experimental user-level operating system running atop another host OS, and a Linux system call jail. The first three applications export custom APIs independent of the host OS to their guests, making their plug-ins binary-portable across host systems. Compute-intensive workloads for the first two applications exhibit between a 30% slowdown and a 30% speedup on vx32 relative to native execution; speedups result from vx32's instruction translator improving the cache locality of guest code. The experimental user-level operating system allows the use of the guest OS's applications alongside the host's native applications and runs faster than whole-system virtual machine monitors such as VMware and QEMU. 
The Linux system call jail incurs up to 80% overhead but requires no kernel modifications and is delegation-based, avoiding concurrency vulnerabilities present in other interposition mechanisms.

164 citations


Proceedings Article
01 Jun 2008
TL;DR: A sentiment annotation system that integrates a corpus-based classifier trained on a small set of annotated in-domain data and a lexicon-based system trained on WordNet, and a new system consisting of the ensemble of two classifiers with precision-based vote weighting that provides significant gains in accuracy and recall.
Abstract: This study presents a novel approach to the problem of system portability across different domains: a sentiment annotation system that integrates a corpus-based classifier trained on a small set of annotated in-domain data and a lexicon-based system trained on WordNet. The paper explores the challenges of system portability across domains and text genres (movie reviews, news, blogs, and product reviews), highlights the factors affecting system performance on out-of-domain and small-set in-domain data, and presents a new system consisting of the ensemble of two classifiers with precision-based vote weighting, which provides significant gains in accuracy and recall over the corpus-based classifier and the lexicon-based system taken individually.
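The ensemble's combination rule can be sketched as precision-weighted voting: each classifier's vote counts in proportion to its measured precision. The labels and precision figures below are hypothetical; the actual system combines a corpus-trained classifier with a WordNet-based lexicon system and measures precision on held-out data.

```python
# Hedged sketch of precision-based vote weighting between two sentiment
# classifiers emitting +1 (positive) / -1 (negative) labels. The precision
# values are illustrative placeholders, not figures from the paper.

def ensemble_label(corpus_label, lexicon_label,
                   corpus_precision, lexicon_precision):
    """Combine two votes, each weighted by its classifier's precision."""
    score = corpus_precision * corpus_label + lexicon_precision * lexicon_label
    return 1 if score >= 0 else -1

# When the two classifiers disagree, the more precise one wins:
label = ensemble_label(corpus_label=1, lexicon_label=-1,
                       corpus_precision=0.70, lexicon_precision=0.85)  # -> -1
```

The appeal for domain portability is that the weights adapt cheaply: re-measuring each classifier's precision on a small in-domain sample re-balances the ensemble without retraining either component.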

164 citations


01 Jan 2008
TL;DR: The design of an abstraction layer and API designed to support portability between vendor platforms, isolation between switchlets and both the platform and other switchlets, high performance, and programming simplicity are discussed.
Abstract: Most switch vendors have launched “open” platform designs for routers and switches, allowing code from customers or third-party vendors to run on their proprietary hardware. An open platform needs a programming interface, to provide switchlets sufficient access to platform features without exposing too much detail. We discuss the design of an abstraction layer and API designed to support portability between vendor platforms, isolation between switchlets and both the platform and other switchlets, high performance, and programming simplicity. The API would also support resource-management abstractions; for example, to allow policy-based allocation of TCAM entries among multiple switchlets.

133 citations


Journal ArticleDOI
01 Aug 2008
TL;DR: Clustera is designed for extensibility, enabling the system to be easily extended to handle a wide variety of job types ranging from computationally-intensive, long-running jobs with minimal I/O requirements to complex SQL queries over massive relational tables.
Abstract: This paper introduces Clustera, an integrated computation and data management system. In contrast to traditional cluster-management systems that target specific types of workloads, Clustera is designed for extensibility, enabling the system to be easily extended to handle a wide variety of job types ranging from computationally-intensive, long-running jobs with minimal I/O requirements to complex SQL queries over massive relational tables. Another unique feature of Clustera is the way in which the system architecture exploits modern software building blocks including application servers and relational database systems in order to realize important performance, scalability, portability and usability benefits. Finally, experimental evaluation suggests that Clustera has good scale-up properties for SQL processing, that Clustera delivers performance comparable to Hadoop for MapReduce processing and that Clustera can support higher job throughput rates than previously published results for the Condor and CondorJ2 batch computing systems.

113 citations


Journal ArticleDOI
01 Jul 2008
TL;DR: Intensive testing and troubleshooting made it possible to dramatically increase both the job submission rate and service stability; future developments of the gLite WMS will focus on reducing external software dependencies and improving portability, robustness, and usability.
Abstract: The gLite Workload Management System (WMS) is a collection of components that provide the service responsible for distributing and managing tasks across computing and storage resources available on a Grid. The WMS receives job execution requests from a client, finds the appropriate resources, then dispatches and follows the jobs until completion, handling failure whenever possible. Beyond single batch-like jobs, compound job types handled by the WMS are Directed Acyclic Graphs (a set of jobs where the input/output/execution of one or more jobs may depend on one or more other jobs), Parametric Jobs (multiple jobs with one parametrized description), and Collections (multiple jobs with a common description). Jobs are described via a flexible, high-level Job Definition Language (JDL). New functionality was recently added to the system (use of Service Discovery for obtaining new service endpoints to be contacted, automatic sandbox file archival/compression and sharing, and support for bulk submission and bulk matchmaking). Intensive testing and troubleshooting made it possible to dramatically increase both the job submission rate and service stability. Future developments of the gLite WMS will focus on reducing external software dependencies and improving portability, robustness, and usability.

104 citations


01 Jan 2008
TL;DR: It is shown that auto-tuning consistently delivers speedups in excess of 3× across all multicore computers except the memory-bound Intel Clovertown, where the benefit was as little as 1.5×.
Abstract: For the last decade, the exponential potential of Moore's Law has been squandered in the effort to increase single-thread performance, which is now limited by the memory, instruction, and power walls. In response, the computing industry has boldly placed its hopes on the multicore gambit: abandon instruction-level parallelism and frequency scaling in favor of exponential scaling of the number of compute cores per microprocessor. The massive thread-level parallelism results in tremendous potential performance, but demands efficient parallel programming, a task existing software tools are ill-equipped for. We desire performance portability: the ability to write a program once and not only have it deliver good performance on the development computer, but on all multicore computers today and tomorrow. This thesis accepts as fact that multicore is the basis for all future computers. Furthermore, we regiment our study by organizing it around the computational patterns and motifs set forth in the Berkeley View. Although domain experts may be extremely knowledgeable about the mathematics and algorithms of their fields, they often lack the detailed computer architecture knowledge required to achieve high performance, and forthcoming heterogeneous architectures will exacerbate the problem for everyone. Thus, we extend the auto-tuning approach to program optimization and performance portability to the menagerie of multicore computers. In an automated fashion, an auto-tuner explores the optimization space for a particular computational kernel of a motif on a particular computer. In doing so, it determines the best combination of algorithm, implementation, and data structure for the combination of architecture and input data. We implement and evaluate auto-tuners for two important kernels: Lattice Boltzmann Magnetohydrodynamics (LBMHD) and sparse matrix-vector multiplication (SpMV). They are representative of two of the computational motifs: structured grids and sparse linear algebra. To demonstrate the performance portability that our auto-tuners deliver, we selected an extremely wide range of architectures as an experimental test bed. These include conventional dual- and quad-core superscalar x86 processors, both with and without integrated memory controllers. We also include the rather unconventional chip-multithreaded (CMT) Sun Niagara2 (Victoria Falls) and the heterogeneous, local-store-based IBM Cell Broadband Engine. In some experiments we sacrifice the performance portability of a common C representation by creating ISA-specific auto-tuned versions of these kernels to gain architectural insight. To quantify our success, we created the Roofline model to perform a bound-and-bottleneck analysis for each kernel-architecture combination. Despite the common wisdom that LBMHD and SpMV are memory-bandwidth-bound, and thus nothing can be done to improve performance, we show that auto-tuning consistently delivers speedups in excess of 3× across all multicore computers except the memory-bound Intel Clovertown, where the benefit was as little as 1.5×. The Cell processor, with its explicitly managed memory hierarchy, showed far more dramatic speedups of between 20× and 130×. The auto-tuners include both architecture-independent optimizations, based solely on source-code transformations and high-level kernel knowledge, and architecture-specific optimizations such as the explicit use of single instruction, multiple data (SIMD) extensions or the use of Cell's DMA-based memory operations. We observe that these ISA-specific optimizations are becoming increasingly important as architectures evolve.
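The Roofline model referenced above bounds a kernel's attainable performance by the minimum of peak compute throughput and peak memory bandwidth times arithmetic intensity. A minimal sketch, using illustrative peak numbers rather than measurements from the thesis:

```python
# Hedged sketch of the Roofline bound: attainable GFLOP/s is capped either by
# the machine's peak compute rate or by how fast memory can feed the kernel.
# The peak figures below are placeholders, not data from the study.

def roofline_gflops(peak_gflops, peak_bw_gb_s, flops_per_byte):
    """Upper bound on performance at a given arithmetic intensity (flops/byte)."""
    return min(peak_gflops, peak_bw_gb_s * flops_per_byte)

# A low-intensity kernel like SpMV sits on the bandwidth-limited slope:
spmv_bound = roofline_gflops(peak_gflops=75.0, peak_bw_gb_s=20.0,
                             flops_per_byte=0.25)   # 20.0 * 0.25 = 5.0 GFLOP/s
# A compute-heavy kernel instead hits the flat compute ceiling:
dense_bound = roofline_gflops(peak_gflops=75.0, peak_bw_gb_s=20.0,
                              flops_per_byte=10.0)  # capped at 75.0 GFLOP/s
```

The model makes the thesis's point quantitative: for bandwidth-bound kernels, any optimization that raises effective bandwidth or arithmetic intensity moves the bound, which is why auto-tuning still pays off on "memory-bound" kernels.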

Proceedings ArticleDOI
14 Apr 2008
TL;DR: This work presents an implementation of a map-reduce library supporting parallel field programmable gate arrays (FPGAs) and graphics processing units (GPUs) and describes the experience in developing a number of benchmark problems in signal processing, Monte Carlo simulation and scientific computing.
Abstract: The map-reduce model requires users to express their problem in terms of a map function that processes single records in a stream, and a reduce function that merges all mapped outputs to produce a final result. By exposing structural similarity in this way, a number of key issues associated with the design of custom computing machines, including parallelisation, design complexity, software-hardware partitioning, hardware dependency, portability, and scalability, can be easily addressed. We present an implementation of a map-reduce library supporting parallel field programmable gate arrays (FPGAs) and graphics processing units (GPUs). Parallelisation due to pipelining, multiple data paths and concurrent execution of FPGA/GPU hardware is automatically achieved. Users first specify the map and reduce steps for the problem in ANSI C, and no knowledge of the underlying hardware or parallelisation is needed. The source code is then manually translated into a pipelined data path which, along with the map-reduce library, is compiled into appropriate binary configurations for the processing units. We describe our experience in developing a number of benchmark problems in signal processing, Monte Carlo simulation and scientific computing, as well as report on the performance of FPGA, GPU and heterogeneous systems.
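The programming model itself is easy to state in plain software terms: the user supplies only a per-record map function and a pairwise reduce function. Below is a minimal software analogue; the paper's library, of course, translates such specifications into pipelined FPGA/GPU data paths rather than executing them like this.

```python
# Hedged sketch of the map-reduce contract described above: map processes
# single records in a stream, reduce merges mapped outputs pairwise.
from functools import reduce

def map_reduce(records, map_fn, reduce_fn):
    """Apply map_fn to each record, then fold the results with reduce_fn."""
    return reduce(reduce_fn, (map_fn(r) for r in records))

# Example: a Monte Carlo-style accumulation, here a sum of squares.
total = map_reduce([1, 2, 3, 4],
                   map_fn=lambda x: x * x,
                   reduce_fn=lambda a, b: a + b)  # 1 + 4 + 9 + 16 = 30
```

Because map applications are independent and reduce is a pairwise merge, a compiler is free to replicate the map stage across data paths and tree-reduce the outputs, which is the structural similarity the paper exploits for hardware parallelisation.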

Journal ArticleDOI
TL;DR: The proposed methodology combines a high-level algorithmic design that captures the machine-independent aspects, to guarantee portability with performance to future processors, with an implementation that embeds processor-specific optimizations.
Abstract: Multi-core processors are a paradigm shift in computer architecture that promises a dramatic increase in performance. But they also bring an unprecedented level of complexity in algorithmic design and software development. In this paper we describe the challenges involved in designing a breadth-first search (BFS) algorithm for the Cell/B.E. processor. The proposed methodology combines a high-level algorithmic design that captures the machine-independent aspects, to guarantee portability with performance to future processors, with an implementation that embeds processor-specific optimizations. Using a fine-grained global coordination strategy derived from the bulk-synchronous parallel (BSP) model, we have determined an accurate performance model that has guided the implementation and the optimization of our algorithm. Our experiments on a pre-production Cell/B.E. board running at 3.2 GHz show almost linear speedups when using multiple synergistic processing elements, and an impressive level of performance when compared to other processors. On graphs which offer sufficient parallelism, the Cell/B.E. is typically an order of magnitude faster than conventional processors, such as the AMD Opteron and the Intel Pentium 4 and Woodcrest, and custom-designed architectures, such as the MTA-2 and BlueGene/L.
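At a high level, a BSP-style BFS proceeds in level-synchronous supersteps: expand the current frontier, synchronize, repeat. The serial sketch below only illustrates that superstep structure; the paper's Cell/B.E. implementation partitions each frontier across the synergistic processing elements and layers on processor-specific optimizations not shown here.

```python
# Hedged sketch of level-synchronous BFS, the machine-independent core of a
# BSP-coordinated design. The graph is a plain adjacency-list dict.

def bfs_levels(adj, source):
    """Return a dict mapping vertex -> BFS level, one frontier per superstep."""
    level = {source: 0}
    frontier = {source}
    depth = 0
    while frontier:
        depth += 1
        next_frontier = set()
        for u in frontier:              # in the parallel version the frontier
            for v in adj[u]:            # is partitioned among processing elements
                if v not in level:
                    level[v] = depth
                    next_frontier.add(v)
        frontier = next_frontier        # barrier: end of a BSP superstep
    return level

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
levels = bfs_levels(adj, 0)  # {0: 0, 1: 1, 2: 1, 3: 2}
```

The BSP framing is what makes the performance model tractable: each superstep's cost is expansion work plus one global synchronization, so total cost can be predicted from frontier sizes.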

Book ChapterDOI
29 Jul 2008
TL;DR: News@hand is presented, a news recommender system which applies semantic-based technologies to describe and relate news contents and user preferences in order to produce enhanced recommendations.
Abstract: We present News@hand, a news recommender system which applies semantic-based technologies to describe and relate news contents and user preferences in order to produce enhanced recommendations. The key distinctive aspects of the system are the exploitation of conceptual information describing contents and user profiles, along with the capability of inferring knowledge from the semantic relations defined in the ontologies, which together enable different content-based and collaborative recommendation models. Multi-domain portability, multi-media source applicability, and the addressing of some limitations of current recommender systems are the main benefits of our proposed approach.

Book ChapterDOI
11 Mar 2008
TL;DR: An efficient and portable TPM emulator for Unix that enables not only the implementation of flexible and low-cost test-beds and simulators but, in addition, provides programmers of trusted systems with a powerful testing and debugging tool that can also be used for educational purposes.
Abstract: When developing and researching new trusted computing technologies, appropriate tools to investigate their behavior and to evaluate their performance are of paramount importance. In this paper, we present an efficient and portable TPM emulator for Unix. Our emulator enables not only the implementation of flexible and low-cost test-beds and simulators but, in addition, provides programmers of trusted systems with a powerful testing and debugging tool that can also be used for educational purposes. Thanks to its portability and interoperability, the TPM emulator runs on a variety of platforms and is compatible with the most relevant software packages and interfaces.

Journal ArticleDOI
TL;DR: Based on a content analysis of online user reviews, four factors are found to be significantly related to overall user evaluation of mobile devices: functionality, portability, performance, and usability.
Abstract: Advanced mobile technology continues to shape professional environments. Smart cell phones, pocket computers and laptop computers reduce the need of users to remain close to a wired information system infrastructure and allow for task performance in many different contexts. Among the consequences are changes in technology requirements, such as the need to limit weight and size of the devices. In the current paper, we focus on the factors that users find important in mobile devices. Based on a content analysis of online user reviews that was followed by structural equation modeling, we found four factors to be significantly related with overall user evaluation, namely functionality, portability, performance, and usability. Besides the practical relevance for technology developers and managers, our research results contribute to the discussion about the extent to which previously established theories of technology adoption and use are applicable to mobile technology. We also discuss the methodological suitability of online user reviews for the assessment of user requirements, and the complementarity of automated and non-automated forms of content analysis.

Journal ArticleDOI
TL;DR: The hardware thread interface (HWTI) component is created to provide an abstract, platform-independent compilation target for hardware-resident computations and enables the use of standard thread communication and synchronization operations across the software/hardware boundary.
Abstract: This paper introduces hthreads, a unifying programming model for specifying application threads running within a hybrid central processing unit (CPU)/field-programmable gate-array (FPGA) system. Presently accepted hybrid CPU/FPGA computational models, and access to these computational models via high-level languages, focus on programming language extensions to increase accessibility and portability. However, this paper argues that new high-level programming models built on common software abstractions better address these goals. The hthreads system, in general, is unique within the reconfigurable computing community as it includes operating system and middleware layer abstractions that extend across the CPU/FPGA boundary. This enables all platform components to be abstracted into a unified multiprocessor architecture platform. Application programmers can then express their computations using threads specified from a single POSIX threads (pthreads) multithreaded application program and can then compile the threads to either run on the CPU or synthesize them to run within an FPGA. To enable this seamless framework, we have created the hardware thread interface (HWTI) component to provide an abstract, platform-independent compilation target for hardware-resident computations. The HWTI enables the use of standard thread communication and synchronization operations across the software/hardware boundary. Key operating system primitives have been mapped into hardware to provide threads running in both hardware and software uniform access to a set of sub-microsecond, minimal-jitter services. Migrating the operating system into hardware removes the potential bottleneck of routing all system service requests through a central CPU.

Journal ArticleDOI
TL;DR: This paper presents the specifications for an intermediate layer between the stream program and the target architecture that provides a common level of abstraction that facilitates efficient execution of stream programs by making it easier for compilers to manage computation, and by providing automatic orchestration and optimization of communication when appropriate.
Abstract: As multicore architectures gain widespread use, it becomes increasingly important to be able to harness their additional processing power to achieve higher performance. However, exploiting parallel cores to improve single-program performance is difficult from a programmer's perspective because most existing programming languages dictate a sequential method of execution. Stream programming, which organizes programs by independent filters communicating over explicit data channels, exposes useful types of parallelism that can be exploited. However, there is still the burden of mapping high-level stream programs to specific multicore architectures. The complexities of each architecture's underlying details make it difficult to schedule the execution of a stream program with high performance. In this paper, we present the specifications for an intermediate layer between the stream program and the target architecture. This multicore streaming layer (MSL) provides a common level of abstraction that facilitates efficient execution of stream programs by making it easier for compilers to manage computation, and by providing automatic orchestration and optimization of communication when appropriate. We implemented a framework for one such instance of the MSL targeted to the Cell processor and the StreamIt language and achieved greater than 88% utilization on all benchmarks with relatively small amounts of code. The framework can also be applied to other architectures and stream programming languages to enhance generality and portability.
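The stream abstraction the MSL targets, independent filters connected by explicit data channels, can be mimicked with generator pipelines. This is an illustrative analogue only, not the StreamIt language or the MSL itself.

```python
# Hedged sketch of a stream program: each filter consumes a channel (here, an
# iterator) and yields onto the next. Filter names are invented for the example.

def source(n):
    """Produce a stream of n integers."""
    for i in range(n):
        yield i

def scale(stream, k):
    """A stateless filter: multiply every item by k."""
    for x in stream:
        yield x * k

def running_sum(stream):
    """A stateful filter: emit the prefix sum of the stream."""
    total = 0
    for x in stream:
        total += x
        yield total

# Wire filters into a pipeline over explicit channels.
pipeline = running_sum(scale(source(5), 2))
out = list(pipeline)  # [0, 2, 6, 12, 20]
```

Because each filter only touches its own channels, a compiler is free to place filters on different cores and insert the inter-core communication itself, which is exactly the orchestration the MSL automates.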

Patent
12 Sep 2008
TL;DR: In this article, a user viewing a web page opens a bookmark including a short first instruction configured to load at least one other instruction, which can be dynamically injected into the viewed web page.
Abstract: Devices and processes allowing users to identify media content suitable for mobilizing to a mobile device and facilitating selection and portability of such media content to the mobile device. A user viewing a web page opens a bookmark including a short first instruction configured to load at least one other instruction. The loaded instruction can be dynamically injected into the viewed web page. The injected instruction allows for the mobilization of media content.

Journal ArticleDOI
TL;DR: The research demonstrates the applicability of the MDA approach to health care information systems development, which has the potential to overcome the challenges of platform (vendor) dependency, lack of open standards, interoperability, portability, scalability, and the high cost of implementation.

Book ChapterDOI
04 Jun 2008
TL;DR: Blite, a lightweight language for web services orchestration designed around some of WS-BPEL's peculiar features like partner links, process termination, message correlation, long-running business transactions and compensation handlers, is introduced.
Abstract: We introduce Blite, a lightweight language for web services orchestration designed around some of WS-BPEL's peculiar features like partner links, process termination, message correlation, long-running business transactions and compensation handlers. Blite's formal presentation helps clarify some ambiguous aspects of the WS-BPEL specification, which have led to engines implementing different semantics and, thus, have undermined portability of WS-BPEL programs across different platforms. We illustrate the main features of Blite by means of many examples, some of which are also exploited to test and compare the behaviour of three of the best-known free WS-BPEL engines.

Patent
29 May 2008
TL;DR: A portable data management system may be easily employed with multiple processing devices by eliminating the need to pre-install additional programs, agents, device drivers, or other software components on the hosts as mentioned in this paper.
Abstract: A portable data-management system may be easily employed with multiple processing devices by eliminating the need to pre-install additional programs, agents, device drivers, or other software components on the hosts. A portable storage device contains software for a data-management application, which receives and processes test data from a meter that measures an analyte. The portable device may employ an interface protocol that makes the portable device immediately compatible with different operating systems and hardware configurations. Once the portable device is connected to the host, the data-management application can be automatically launched. The convenience and portability of a data-management system may be enhanced by integrating advanced data processing and display features with the portable device. The users may access some advanced presentations of health data without having to launch the data-management application on a separate host.

Book ChapterDOI
15 Oct 2008
TL;DR: A thorough performance study, using RAxML (a widely used Bioinformatics application for large-scale phylogenetic inference under the Maximum Likelihood criterion) as an example, suggests that the ML function should be parallelized with MPI and Pthreads, both on software engineering grounds and to enforce data locality.
Abstract: Emerging multi- and many-core computer architectures pose new challenges with respect to efficient exploitation of parallelism. In addition, it is currently not clear which might be the most appropriate parallel programming paradigm to exploit such architectures, both from the efficiency as well as the software engineering point of view. Beyond that, the application of high performance computing techniques and the use of supercomputers will be essential to deal with the explosive accumulation of sequence data. We address these issues via a thorough performance study by example of RAxML, which is a widely used Bioinformatics application for large-scale phylogenetic inference under the Maximum Likelihood criterion. We provide an overview of the respective parallelization strategies with MPI, Pthreads, and OpenMP and assess performance for these approaches on a large variety of parallel architectures. Results indicate that there is no universally best-suited paradigm with respect to efficiency and portability of the ML function. Therefore, we suggest that the ML function should be parallelized with MPI and Pthreads, both on software engineering grounds and to enforce data locality.
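The data-locality argument for a Pthreads-style decomposition of the likelihood kernel can be hinted at with a short sketch (Python for brevity; the actual RAxML kernels are written in C): each thread owns a contiguous slice of the per-site values and computes a thread-local partial sum, followed by a single reduction.

```python
import threading

# Hypothetical sketch of a Pthreads-style likelihood reduction:
# each thread works on a contiguous slice of per-site values
# (good data locality), then the partials are summed once.

def parallel_sum(site_values, num_threads=4):
    partial = [0.0] * num_threads
    chunk = (len(site_values) + num_threads - 1) // num_threads

    def worker(tid):
        lo = tid * chunk
        hi = min(lo + chunk, len(site_values))
        s = 0.0
        for v in site_values[lo:hi]:  # thread-local, contiguous slice
            s += v
        partial[tid] = s

    threads = [threading.Thread(target=worker, args=(t,))
               for t in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partial)  # final reduction over per-thread partials

assert parallel_sum([1.0, 2.0, 3.0, 4.0]) == 10.0
```

In the MPI variant the same decomposition is applied across address spaces, with the final reduction performed by a collective operation instead of a shared array.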

Book
01 Jan 2008
TL;DR: The aim of the conference is to give an overview of the state-of-the-art of the developments, applications and future trends in high-performance computing for all platforms.
Abstract: ParCo2007 marks a quarter of a century of the international conferences on parallel computing that started in Berlin in 1983. The aim of the conference is to give an overview of the state-of-the-art of the developments, applications and future trends in high-performance computing for all platforms. The conference addresses all aspects of parallel computing, including applications, hardware and software technologies as well as languages and development environments. Special emphasis was placed on the role of high-performance processing to solve real-life problems in all areas, including scientific, engineering and multidisciplinary applications and strategies, experiences and conclusions made with respect to parallel computing.The book contains papers covering: Applications - the application of parallel computers to solve computationally challenging problems in the physical and life sciences, engineering, industry and commerce; Algorithms - design, analysis and implementation of generic parallel algorithms, including their scalability, in particular to a large number of processors (MPP), portability and adaptability; and, Software and Architectures - software engineering for developing and maintaining parallel software, including parallel programming models and paradigms, development environments, compile-time and run-time tools. A number of symposia on specialized topics formed part of the scientific program. The following topics were covered: Parallel Computing with FPGA's, The Future of OpenMP in the Multi-Core Era, Scalability and Usability of HPC Programming Tools, DEISA: Extreme Computing in an Advanced Supercomputing Environment, and, Scaling Science Applications on Blue Gene. The conference was organized by the renowned research and teaching institutions Forschungszentrum Julich and the RWTH Aachen University in Germany.

Book
01 Jan 2008
TL;DR: This paper investigates how several innovative techniques, not all initially intended for fault-tolerance, can be applied in providing fault tolerance of complex mobile agent systems, and proposes several possible solutions for error recovery at the application level.
Abstract: The purpose of this paper is to investigate how several innovative techniques, not all initially intended for fault tolerance, can be applied to provide fault tolerance in complex mobile agent systems. Due to their roaming nature, mobile agents usually run on Java-based platforms, which ensures full portability of mobile code. The first part of the paper discusses specific characteristics of mobile systems, outlines the application areas benefiting from code mobility, and shows why existing error recovery techniques are not suitable for mobile systems. In the next part of the paper we present evaluation criteria for fault tolerance techniques and propose several possible solutions for error recovery at the application level: meta-agent, Coordinated Atomic actions, asynchronous resolution, self-repair, and proof-carrying code. The intention is to allow system developers to choose the approach best suited to the characteristics of the mobile agent application to be designed. To this end we discuss the advantages and disadvantages of each technique, as well as the situations in which it provides the most benefit. A simple example, based on Internet shopping, is used throughout the paper to demonstrate the techniques.

Journal ArticleDOI
01 Jun 2008
TL;DR: An ontology-based model for multilingual knowledge management in information systems is proposed, built on a lightweight context mechanism that is associated with ontological concepts and specified in multiple languages; the authors show that news items in different languages can be identified by a single ontology concept using contexts.
Abstract: Information systems in multilingual environments, such as the EU, suffer from low portability and high deployment costs. In this paper we propose an ontology-based model for multilingual knowledge management in information systems. Our unique feature is a lightweight mechanism, dubbed context, that is associated with ontological concepts and specified in multiple languages. We use contexts to assist in resolving cross-language and local variation ambiguities. Equipped with such a model, we next provide a four-step procedure for overcoming the language barrier in deploying a new information system. We also show that our proposed solution can overcome differences that stem from local variations that may accompany multilingual information systems deployment. The proposed mechanism was tested in an actual multilingual eGovernment environment and by using real-world news syndication traces. Our empirical results serve as a proof-of-concept of the viability of the proposed model. Also, our experiments show that news items in different languages can be identified by a single ontology concept using contexts. We also evaluated the local interpretations of concepts of a language in different geographical locations.
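The "context" mechanism can be sketched in miniature (hypothetical data and names, not the paper's model): each ontology concept carries lightweight per-language context terms, and an item is mapped to the concept whose context for that language it best overlaps.

```python
# Hypothetical sketch of context-based concept resolution: a concept
# is annotated with context terms in several languages, and an item
# (a bag of words) is resolved to the best-overlapping concept.

ontology = {
    "Tax":    {"en": {"tax", "vat", "levy"},     "de": {"steuer", "abgabe"}},
    "Health": {"en": {"health", "clinic"},       "de": {"gesundheit", "klinik"}},
}

def resolve(words, lang):
    words = {w.lower() for w in words}
    best, best_score = None, 0
    for concept, contexts in ontology.items():
        # Score = overlap between the item's words and the concept's
        # context terms for this language.
        score = len(words & contexts.get(lang, set()))
        if score > best_score:
            best, best_score = concept, score
    return best

# News items in different languages reach the same concept:
assert resolve(["New", "VAT", "levy", "announced"], "en") == "Tax"
assert resolve(["Neue", "Steuer", "beschlossen"], "de") == "Tax"
```

This is the essence of the cross-language identification result reported in the experiments: the concept, not the language, is the unit of indexing.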

Patent
19 Aug 2008
TL;DR: In this paper, a method for real-time audio and video transmission and reception using a web-based instant-messaging apparatus dedicated to visual telecommunication as a video phone is presented.
Abstract: The present invention provides a method for real-time audio and video transmission and reception using a web-based instant-messaging apparatus dedicated to visual telecommunication as a video phone. The device integrates all necessary hardware into a single portable body, embeds all network communication software, and preloads instant-messaging carriers' application programs. The advantages of the invented apparatus are low manufacturing cost, simple operation, remote communication over the Internet, high mobility/portability, versatile video functions, and ease of use for personal or business purposes.

Proceedings ArticleDOI
19 Oct 2008
TL;DR: A set of language extensions is presented that allows the programmer to introduce pipeline parallelism into sequential programs, manage distributed memories, and express the desired mapping of tasks to resources.
Abstract: The architectures of system-on-chip (SoC) platforms found in high-end consumer devices are getting more and more complex as designers strive to deliver increasingly compute-intensive applications on near-constant energy budgets. Workloads running on these platforms require the exploitation of heterogeneous parallelism and increasingly irregular memory hierarchies. The conventional approach to programming such hardware is very low-level, but this yields software which is intimately and inseparably tied to the details of the platform it was originally designed for, limiting the software's portability and, ultimately, the architectural choices available to designers of future platform generations. The key insight of this paper is that many of the problems experienced in mapping applications onto SoC platforms come not from deciding how to map a program onto the hardware but from the need to restructure the program and the number of interdependencies introduced in the process of implementing those decisions. We tackle this complexity with a set of language extensions which allows the programmer to introduce pipeline parallelism into sequential programs, manage distributed memories, and express the desired mapping of tasks to resources. The compiler takes care of the complex, error-prone details required to implement that mapping. We demonstrate the effectiveness of SoC-C and its compiler with a "software defined radio" example (the PHY layer of a Digital Video Broadcast receiver) achieving a 3.4x speedup on 4 cores.
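The pipeline parallelism these extensions express can be sketched under the assumption of FIFO-connected stages (hypothetical code, not SoC-C syntax): a sequential loop is split into stages that run concurrently, one per core or engine, with queues carrying items between them.

```python
import queue
import threading

# Hypothetical sketch of pipeline decomposition: each stage runs in
# its own thread (standing in for a core/accelerator), and stages
# are connected by FIFOs, so items flow through them concurrently.

def run_pipeline(inputs, stages):
    qs = [queue.Queue() for _ in range(len(stages) + 1)]
    DONE = object()  # sentinel that flows through to shut stages down

    def stage_worker(fn, q_in, q_out):
        while True:
            item = q_in.get()
            if item is DONE:
                q_out.put(DONE)
                return
            q_out.put(fn(item))

    threads = [threading.Thread(target=stage_worker, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(stages)]
    for t in threads:
        t.start()
    for x in inputs:
        qs[0].put(x)
    qs[0].put(DONE)

    out = []
    while True:
        item = qs[-1].get()
        if item is DONE:
            break
        out.append(item)
    for t in threads:
        t.join()
    return out

# e.g. demodulate -> decode as two concurrently running stages:
assert run_pipeline([1, 2, 3], [lambda x: x * 2, lambda x: x + 1]) == [3, 5, 7]
```

What SoC-C adds over a hand-rolled version like this is that the compiler derives the queues, threads, and memory placement from annotations on the sequential program, so the restructuring does not leak into the source.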

Journal ArticleDOI
TL;DR: This paper identifies potential issues and problems with the use of mobile information systems by examining both personal and organizational perspectives of mobile devices and applications and provides a set of guidelines that can assist organizations in making decisions about the design and implementation of mobile technologies and applications in organizations.
Abstract: While mobile computing provides organizations with many information systems implementation alternatives, it is often difficult to predict the potential benefits, limitations, and problems with mobile applications. Given the inherent portability of mobile devices, many design and use issues can arise which do not exist with desktop systems. While many existing rules of thumb for design of stationary systems apply to mobile systems, many new ones emerge. Issues such as the security and privacy of information take on new dimensions, and potential conflicts can develop when a single mobile device serves both personal and business needs. This paper identifies potential issues and problems with the use of mobile information systems by examining both personal and organizational perspectives of mobile devices and applications. It provides a set of guidelines that can assist organizations in making decisions about the design and implementation of mobile technologies and applications in organizations.

Proceedings ArticleDOI
10 Mar 2008
TL;DR: To what degree the existing AUTOSAR standard can support the development of safety- and time-critical software and what is required to move toward the desirable goal of timing isolation when integrating multiple applications into the same execution platform are discussed.
Abstract: System-level integration requires an overall understanding of the interplay of the sub-systems to enable component- based development with portability, reconfigurability and extensibility, together with guaranteed reliability and performance levels. Integration by simple interfaces and plug- and-play of sub-systems, which is the main objective of AUTOSAR, requires solving essential technical problems. We discuss to what degree the existing AUTOSAR standard can support the development of safety- and time-critical software and what is required to move toward the desirable goal of timing isolation when integrating multiple applications into the same execution platform.