
Showing papers on "Software portability published in 2009"


Proceedings ArticleDOI
17 May 2009
TL;DR: The Native Client project as mentioned in this paper is a sandbox for untrusted x86 native code that uses software fault isolation and a secure runtime to direct system interaction and side effects through interfaces managed by Native Client.
Abstract: This paper describes the design, implementation and evaluation of Native Client, a sandbox for untrusted x86 native code. Native Client aims to give browser-based applications the computational performance of native applications without compromising safety. Native Client uses software fault isolation and a secure runtime to direct system interaction and side effects through interfaces managed by Native Client. Native Client provides operating system portability for binary code while supporting performance-oriented features generally absent from web application programming environments, such as thread support, instruction set extensions such as SSE, and use of compiler intrinsics and hand-coded assembler. We combine these properties in an open architecture that encourages community review and 3rd-party tools.
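
The core mechanism, software fault isolation, is enforced by a static validator rather than at run time. As a rough sketch of the flavor of such checks, here is a toy Python model of two of the sandbox's rules (instruction tuples here are hypothetical stand-ins; the actual NaCl validator decodes real x86 machine code):

```python
# Toy model of two Native Client sandbox rules (not the real validator):
# 1) no instruction may straddle a 32-byte bundle boundary;
# 2) jumps must target known instruction starts, never mid-instruction.
BUNDLE = 32

def validate(instructions):
    """instructions: list of (addr, size, jump_target_or_None) tuples."""
    starts = {addr for addr, _, _ in instructions}
    for addr, size, target in instructions:
        if addr // BUNDLE != (addr + size - 1) // BUNDLE:
            return False, f"instruction at {addr:#x} straddles a bundle"
        if target is not None and target not in starts:
            return False, f"jump at {addr:#x} targets mid-instruction"
    return True, "ok"

# A 4-byte instruction at offset 30 crosses the 32-byte bundle boundary.
print(validate([(0, 2, None), (30, 4, None)]))
```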

560 citations


Journal ArticleDOI
20 Jan 2009-Sensors
TL;DR: Following an overview of the state-of-art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications.
Abstract: 3D imaging sensors for the acquisition of three-dimensional (3D) shapes have generated considerable interest in recent years for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in achieving compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a “sensor fusion” approach. Of equal importance to physical miniaturization is the portability of the measurements, via suitable interfaces, into the software environments designed to process them, e.g., CAD-CAM systems, virtual renderers, and rapid prototyping tools. In this paper, following an overview of the state of the art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications.

511 citations


Journal ArticleDOI
01 Jul 2009
TL;DR: A comparative study of PLASMA's performance against established linear algebra packages and some preliminary results of MAGMA on hybrid multi-core and GPU systems is presented.
Abstract: The emergence and continuing use of multi-core architectures and graphics processing units require changes in existing software, and sometimes even a redesign of established algorithms, in order to take advantage of the now prevailing parallelism. Parallel Linear Algebra for Scalable Multi-core Architectures (PLASMA) and Matrix Algebra on GPU and Multicore Architectures (MAGMA) are two projects that aim to achieve high performance and portability across a wide range of multi-core architectures and hybrid systems, respectively. We present in this document a comparative study of PLASMA's performance against established linear algebra packages and some preliminary results of MAGMA on hybrid multi-core and GPU systems.
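
PLASMA's redesign centers on decomposing matrices into small tiles whose updates can be scheduled as independent tasks. Below is a sequential numpy sketch of the tile decomposition alone (illustrative only, not PLASMA code; `nb` is an assumed tile size):

```python
import numpy as np

def tiled_matmul(A, B, nb=64):
    """Compute A @ B by iterating over nb x nb tiles -- the decomposition
    that tile-based libraries such as PLASMA schedule as parallel tasks.
    Sequential illustration only."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, nb):
        for j in range(0, n, nb):
            for k in range(0, n, nb):
                # Each tile update is an independent task once its
                # input tiles have been produced.
                C[i:i+nb, j:j+nb] += A[i:i+nb, k:k+nb] @ B[k:k+nb, j:j+nb]
    return C

A = np.random.rand(128, 128); B = np.random.rand(128, 128)
assert np.allclose(tiled_matmul(A, B), A @ B)
```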

460 citations


Journal ArticleDOI
01 Oct 2009
TL;DR: The FLASH3 architecture is described, with emphasis on solutions to the more challenging conflicts arising from solver complexity, portable performance requirements, and legacy codes.
Abstract: FLASH is a publicly available high performance application code which has evolved into a modular, extensible software system from a collection of unconnected legacy codes. FLASH has been successful because its capabilities have been driven by the needs of scientific applications, without compromising maintainability, performance, and usability. In its newest incarnation, FLASH3 consists of interoperable modules that can be combined to generate different applications. The FLASH architecture allows arbitrarily many alternative implementations of its components to co-exist and be interchanged with each other, resulting in greater flexibility. Further, a simple and elegant mechanism exists for customizing code functionality without the need to modify the core implementation of the source. A built-in unit test framework providing verifiability, combined with a rigorous software maintenance process, allows the code to operate simultaneously in the dual modes of production and development. In this paper we describe the FLASH3 architecture, with emphasis on solutions to the more challenging conflicts arising from solver complexity, portable performance requirements, and legacy codes. We also include results from user surveys conducted in 2005 and 2007, which highlight the success of the code.

291 citations


01 Jan 2009
TL;DR: FLAIR is an advanced graphical user interface for FLUKA that enables the user to start and control FLUKA jobs completely from a GUI environment without the need for command-line interactions, and contains a fully featured editor for editing the input files in a human-readable way with syntax highlighting.

Abstract: FLAIR is an advanced graphical user interface for FLUKA that enables the user to start and control FLUKA jobs completely from a GUI environment without the need for command-line interactions. It is written entirely in Python and Tkinter, allowing easy portability across various operating systems and great programming flexibility, with a focus on serving as an Application Programming Interface (API) for FLUKA. FLAIR is an integrated development environment (IDE) for FLUKA: it not only provides means for post-processing the output, but also places strong emphasis on the creation and checking of error-free input files. It contains a fully featured editor for editing the input files in a human-readable way with syntax highlighting, without hiding the inner functionality of FLUKA from the users. It also provides means for building the executable, debugging the geometry, running the code, monitoring the status of one or many runs, inspecting the output files, post-processing the binary files (data merging), and interfacing with plotting utilities like gnuplot and POV-Ray for high-quality plots or photorealistic images. The program also includes a database of selected properties of all known nuclides and their known isotopic compositions, as well as a reference database of ~300 predefined materials together with their Sternheimer parameters.
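
Since FLAIR is a Tkinter application wrapping a command-line code, its basic pattern is a GUI that launches and polls an external process without blocking the interface. A minimal hypothetical sketch of that pattern (not FLAIR source; a Python one-liner stands in for the FLUKA executable):

```python
import subprocess
import sys
import tkinter as tk

# Hypothetical sketch of a GUI launching and polling a command-line job,
# the pattern FLAIR applies to FLUKA. A Python one-liner stands in for
# the actual solver executable.
def start_job():
    proc = subprocess.Popen(
        [sys.executable, "-c", "print('simulated FLUKA output')"],
        stdout=subprocess.PIPE, text=True)
    status.set("running...")
    def poll():
        if proc.poll() is None:           # still running: re-check later
            root.after(200, poll)
        else:                             # finished: display its output
            status.set("done: " + proc.stdout.read().strip())
    poll()

root = tk.Tk()
status = tk.StringVar(value="idle")
tk.Button(root, text="Run", command=start_job).pack()
tk.Label(root, textvariable=status).pack()
root.mainloop()
```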

259 citations


Proceedings Article
10 Aug 2009
TL;DR: The design and implementation of a system that fully automates the process of constructing instruction sequences that can be used by an attacker for malicious computations are presented and a practical attack that can bypass existing kernel integrity protection mechanisms is described.
Abstract: Protecting the kernel of an operating system against attacks, especially injection of malicious code, is an important factor in implementing secure operating systems. Several kernel integrity protection mechanisms were proposed recently that all have a particular shortcoming: they cannot protect against attacks in which the attacker re-uses existing code within the kernel to perform malicious computations. In this paper, we present the design and implementation of a system that fully automates the process of constructing instruction sequences that can be used by an attacker for malicious computations. We evaluate the system on different commodity operating systems and show the portability and universality of our approach. Finally, we describe the implementation of a practical attack that can bypass existing kernel integrity protection mechanisms.
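
The first step of such return-oriented attacks is harvesting short instruction sequences ("gadgets") that end in a return. A much-simplified Python sketch of that harvesting step (toy byte buffer; real systems disassemble and classify each candidate):

```python
RET = 0xC3  # x86 'ret' opcode

def find_gadget_offsets(code: bytes, max_len: int = 5):
    """Return (start, end) offsets of byte runs ending in 'ret'.
    A vastly simplified first step of gadget discovery: real tools
    also disassemble each candidate and classify what it computes."""
    gadgets = []
    for i, b in enumerate(code):
        if b == RET:
            for start in range(max(0, i - max_len), i):
                gadgets.append((start, i + 1))
    return gadgets

sample = bytes.fromhex("4889c3c3909058c3")   # toy byte buffer
for s, e in find_gadget_offsets(sample):
    print(sample[s:e].hex())
```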

256 citations


Journal ArticleDOI
01 Feb 2009
TL;DR: This paper presents the design and implementation issues of a “mobile tourism” research prototype, which brings together the main assets of the two aforementioned approaches, and enables the creation of portable tourist applications with rich content that matches user preferences.
Abstract: "Mobile tourism" represents a relatively new trend in the field of tourism and involves the use of mobile devices as electronic tourist guides. While much of the underlying technology is already available, there are still open challenges with respect to design, usability, portability, functionality and implementation aspects. Most existing "mobile tourism" solutions either represent of-the-shelf applications with rigidly defined content or involve portable devices with networking capabilities that access tourist content with the requirement of constant airtime, i.e., continuous wireless network coverage. This paper presents the design and implementation issues of a "mobile tourism" research prototype, which brings together the main assets of the two aforementioned approaches. Namely, it enables the creation of portable tourist applications with rich content that matches user preferences. The users may download these personalized applications (optimized for their specific device's model) either directly to their mobile device or first to a PC and then to a mobile terminal (through infrared or bluetooth). Thereafter, network coverage is not further required as the applications execute in standalone mode and may be updated when the user returns online. The dynamically created tourist applications also incorporate a "push model", wherein new tourist content is forwarded to the mobile terminal with minimal user intervention as soon as it is added or updated by the administrator. Our prototype has been developed on the top of Java 2 Micro Edition (J2ME) which offers an ideal platform for the development of full-fledged, interactive and portable applications tailored for resource-constrained mobile devices. The paper presents our development experiences with J2ME and highlights its main advantages and shortcomings in relation to the implementation of such kind of applications. Finally, an empirical evaluation of user experience with the mobile application prototype is presented.

187 citations


Book ChapterDOI
23 Aug 2009
TL;DR: GPUSs is presented, an extension of the Star Superscalar programming model that targets the parallelization of applications on platforms consisting of a general-purpose processor connected with multiple graphics processors, while preserving simplicity and portability.
Abstract: While general-purpose homogeneous multi-core architectures are becoming ubiquitous, there are clear indications that, for a number of important applications, a better performance/power ratio can be attained using specialized hardware accelerators. These accelerators require specific SDKs or programming languages which are not always easy to program. Thus, the impact of the new programming paradigms on programmer productivity will determine their success in the high-performance computing arena. In this paper we present GPU Superscalar (GPUSs), an extension of the Star Superscalar programming model that targets the parallelization of applications on platforms consisting of a general-purpose processor connected with multiple graphics processors. GPUSs deals with architecture heterogeneity and separate memory address spaces, while preserving simplicity and portability. Preliminary experimental results for a well-known operation in numerical linear algebra illustrate the correct adaptation of the runtime to a multi-GPU system, attaining notable performance results.

181 citations


Book ChapterDOI
16 Apr 2009
TL;DR: The notion of "protocol portability," a property that identifies input and verifier state distributions under which a protocol becomes a ZKP when called as a subroutine in a sequential execution of a larger application, is introduced.
Abstract: The notion of Zero Knowledge Proofs (of knowledge) [ZKP] is central to cryptography; it provides a set of security properties that have proved indispensable in concrete protocol design. These properties are defined for any given input and also for any auxiliary verifier private state, as they are aimed at any use of the protocol as a subroutine in a bigger application. Many times, however, moving the theoretical notion to practical designs has been quite problematic. This is due to the fact that the most efficient protocols fail to provide the above ZKP properties for all possible inputs and verifier states. This situation has created various problems for protocol designers, who have often either introduced imperfect protocols with mistakes or lacking security arguments, or been forced to use much less efficient protocols in order to achieve the required properties. In this work we address this issue by introducing the notion of "protocol portability," a property that identifies input and verifier state distributions under which a protocol becomes a ZKP when called as a subroutine in a sequential execution of a larger application. We then concentrate on the very efficient and heavily employed "Generalized Schnorr Proofs" (GSP) and identify the portability of such protocols. We also point to previous protocol weaknesses and errors that have been made in numerous applications throughout the years due to the employment of GSP instances while lacking the notion of portability (primarily in the case of unknown-order groups). This demonstrates that cryptographic application designers who care about efficiency need to consider our notion carefully. We provide a compact specification language for GSP protocols that protocol designers can employ. Our specification language is consistent with the ad-hoc notation that is currently widely used, and it offers automatic derivation of the proof protocol while dictating its portability (i.e., the proper initial state and inputs) and its security guarantees. Finally, as a second alternative for designers wishing to use GSPs, we present a modification of GSP protocols that is unconditionally portable (i.e., ZKP) and is still quite efficient. Our constructions are the first such protocols proven secure in the standard model (as opposed to the random oracle model).
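
For reference, the basic Schnorr proof of knowledge, of which GSP protocols are generalizations: the prover convinces the verifier that it knows x with y = g^x in a group of prime order q (the standard textbook protocol, not the paper's extended notation):

```latex
% Schnorr proof of knowledge of x = \log_g y in a group of prime order q.
% Completeness: g^s = g^{r + c x} = g^r \cdot (g^x)^c = t \cdot y^c.
\begin{align*}
  \text{Prover} \to \text{Verifier}:\quad & t = g^{r},\ r \xleftarrow{\$} \mathbb{Z}_q && \text{(commitment)}\\
  \text{Verifier} \to \text{Prover}:\quad & c \xleftarrow{\$} \mathbb{Z}_q             && \text{(challenge)}\\
  \text{Prover} \to \text{Verifier}:\quad & s = r + c\,x \bmod q                       && \text{(response)}\\
  \text{Verifier checks}:\quad            & g^{s} \stackrel{?}{=} t \cdot y^{c}.       &&
\end{align*}
```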

134 citations


Journal ArticleDOI
Mohamad Minhat, Valeriy Vyatkin, Xun Xu, S. Wong, Z. Al-Bayaa
TL;DR: The layered CNC-FB architecture is proposed, which simplifies the design of a CNC machine controller with the architecture layers responsible for data processing, data storage and execution, and supports the object-oriented Model-View-Control design pattern.
Abstract: Modern manufacturing industries demand computer numeric controllers with higher-level input languages than the outdated G-code and fewer proprietary vendor dependencies. IEC 61499 is a new standard for distributed measurement and control systems that enables portability and interoperability of embedded controllers, along with the ease of mapping them to arbitrary distributed networking hardware configurations. This paper demonstrates that the IEC 61499 reference architecture can be successfully used to create a computer numeric controller offering interoperability, portability, configurability, and distribution characteristics. The layered CNC-FB architecture is proposed, which simplifies the design of a CNC machine controller, with the architecture layers responsible for data processing, data storage and execution. In combination with the object-oriented Model-View-Control design pattern, the CNC-FB architecture supports a design framework in which simulation of the machining becomes a natural and inherent part of the design process, with seamless transition from simulation to actual machining. The implemented controller was tested both on the model and on an actual milling machine.

88 citations


Journal ArticleDOI
TL;DR: In this paper, a variety of optical spatial data collection techniques are compared in terms of accuracy, automation of spatial data retrieval, instrument cost, and portability, and the relationships between techniques and the requirements of civil infrastructure applications are established and compiled in tables.
Abstract: Infrastructure modeling refers to the process of collecting infrastructure spatial data and transforming them into structured representations. It is useful during all stages of the infrastructure life cycle, and plays an important role in infrastructure's development and rehabilitation applications. In order to facilitate infrastructure modeling, a variety of optical spatial data collection techniques are available. None of them is ideal for all infrastructure applications. Each has its own benefits and limitations. The main purpose of this paper is to select an appropriate technique based on the given infrastructure application requirements. To achieve this goal, the principles of these techniques are first investigated. Their benefits and limitations are identified by comparing them in aspects such as accuracy, automation of spatial data retrieval, instrument cost, and portability. This way, the relationships between techniques and the requirements of civil infrastructure applications are established and compiled in tables. Practitioners can easily select an appropriate technique for their own applications by consulting these tables.

Book ChapterDOI
22 May 2009
TL;DR: This paper investigates if OpenMP could still survive in this new scenario and proposes a possible way to extend the current specification to reasonably integrate heterogeneity while preserving simplicity and portability.
Abstract: OpenMP has evolved recently towards expressing unstructured parallelism, targeting the parallelization of a broader range of applications in the current multicore era. Homogeneous multicore architectures from major vendors have become mainstream, but there are clear indications that a better performance/power ratio can be achieved using more specialized hardware (accelerators), such as SSE-based units or GPUs, clearly deviating from the easy-to-understand shared-memory homogeneous architectures. This paper investigates whether OpenMP can still survive in this new scenario and proposes a possible way to extend the current specification to reasonably integrate heterogeneity while preserving simplicity and portability. The paper builds on a previous proposal that extended tasking with dependencies. The runtime is in charge of data movement, task scheduling based on these data dependencies, and the appropriate selection of the target accelerator depending on system configuration and resource availability.
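
The heart of the proposal is a runtime that releases a task only once the data it reads has been produced, then dispatches it to an appropriate device. A minimal hypothetical Python sketch of that bookkeeping (not the actual runtime; task and device names are invented):

```python
# Hypothetical sketch of dependency-driven task dispatch: a task becomes
# ready when all data it reads has been produced, then goes to a device.
# Assumes the task graph is satisfiable (every read is eventually written).
from collections import deque

def schedule(tasks):
    """tasks: list of dicts with 'name', 'reads', 'writes', 'device'."""
    produced, pending, order = set(), deque(tasks), []
    while pending:
        task = pending.popleft()
        if all(d in produced for d in task["reads"]):
            order.append((task["device"], task["name"]))
            produced.update(task["writes"])     # outputs now available
        else:
            pending.append(task)                # dependencies not met yet
    return order

print(schedule([
    {"name": "gemm", "reads": {"A", "B"}, "writes": {"C"}, "device": "gpu"},
    {"name": "init_A", "reads": set(), "writes": {"A"}, "device": "cpu"},
    {"name": "init_B", "reads": set(), "writes": {"B"}, "device": "cpu"},
]))
```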

Proceedings ArticleDOI
28 Dec 2009
TL;DR: This paper combines the advantages of Mobile Agents and cloud computing to provide a realization of the Open Cloud Computing Federation: MABOCCF can span multiple heterogeneous cloud computing platforms and realizes portability and interoperability; it can be a beginning of the open cloud computing federation and a future part of cloud computing.

Abstract: Cloud computing is generally recognized as a technology that will have a significant impact on IT in the future. However, cloud computing is still in its infancy: no standard is currently available for it, and portability and interoperability between different cloud computing service providers are impossible. This handicaps the wide deployment and rapid development of cloud computing, and leaves a long way to go before the vision that cloud computing theoretically depicts is reached. We analyze the problems in the current state of the art and argue that an Open Cloud Computing Federation is an inevitable approach for the wide use of cloud computing and for realizing its greatest value. Accordingly, we propose the MABOCCF (Mobile Agent Based Open Cloud Computing Federation) mechanism in this paper. It combines the advantages of Mobile Agents and cloud computing to provide a realization of the Open Cloud Computing Federation: MABOCCF can span multiple heterogeneous cloud computing platforms and realizes portability and interoperability; it can be a beginning of the open cloud computing federation and a future part of cloud computing. We also present the rationale and motivations for the combination of Mobile Agents and cloud computing, and finally give a prototype with a performance analysis.

Patent
23 Sep 2009
TL;DR: In this article, a topology manager is configured to maintain a security topology of a plurality of hosts in a cloud computing deployment, and a portability manager is also configured to receive a request to deploy an access control agent on the one or more candidate hosts.
Abstract: According to one embodiment, a system comprises one or more processors coupled to a memory. The one or more processors when executing logic encoded in the memory provide a topology manager. The topology manager is configured to maintain a security topology of a plurality of hosts. The security topology associates one or more virtual hosts policies with a plurality of virtual hosts in a cloud computing deployment. The topology manager is also configured to request a query for one or more hosts that are candidates to be enforced. A portability manager is configured to receive a request to deploy an access control agent on the one or more candidate hosts, determine an optimal agent to be deployed from a list of available agents, and deploy the optimal agent on the one or more candidate hosts.

Book ChapterDOI
24 Aug 2009
TL;DR: pLinguaCore, a Java software framework for cell-like P systems, can be extended to accept new input or output formats as well as new models or simulators; this paper includes an application using pLinguaCore for describing and simulating ecosystems by means of P systems.
Abstract: P-Lingua is a programming language for membrane computing which aims to be a standard to define P systems. In order to implement this idea, a Java library called pLinguaCore has been developed as a software framework for cell-like P systems. It is able to handle input files (either in XML or in P-Lingua format) defining P systems from a number of different cell-like P system models. Moreover, the library includes several built-in simulators for each supported model. For the sake of software portability, pLinguaCore can export a P system definition to any convenient output format (currently XML and binary formats are available). This software is not a closed product, but it can be extended to accept new input or output formats and also new models or simulators. The term P-Lingua 2.0 refers to the software package consisting of the above mentioned library together with a user interface called pLinguaPlugin (more details can be found at http://www.p-lingua.org). Finally, in order to illustrate the software, this paper includes an application using pLinguaCore for describing and simulating ecosystems by means of P systems.

Book ChapterDOI
22 May 2009
TL;DR: The runtime, which is based on a multi-level thread scheduler combined with a NUMA-aware memory manager, converts this information into "scheduling hints" to solve thread/memory affinity issues and enables dynamic load distribution guided by application structure and hardware topology, thus helping to achieve performance portability.
Abstract: Exploiting the full computational power of current hierarchical multiprocessor machines requires a very careful distribution of threads and data among the underlying non-uniform architecture so as to avoid memory access penalties. Directive-based programming languages such as OpenMP provide programmers with an easy way to structure the parallelism of their application and to transmit this information to the runtime system. Our runtime, which is based on a multi-level thread scheduler combined with a NUMA-aware memory manager, converts this information into "scheduling hints" to solve thread/memory affinity issues. It enables dynamic load distribution guided by application structure and hardware topology, thus helping to achieve performance portability. First experiments show that mixed solutions (migrating threads and data) outperform next-touch-based data distribution policies and open possibilities for new optimizations.

Book ChapterDOI
08 Oct 2009
TL;DR: The ROSE source-to-source outliner is presented, which addresses the problem of extracting tunable kernels out of whole programs, thereby helping to convert the challenging whole-program tuning problem into a set of more manageable kernel tuning tasks.
Abstract: Although automated empirical performance optimization and tuning is well-studied for kernels and domain-specific libraries, a current research grand challenge is how to extend these methodologies and tools to significantly larger sequential and parallel applications. In this context, we present the ROSE source-to-source outliner, which addresses the problem of extracting tunable kernels out of whole programs, thereby helping to convert the challenging whole-program tuning problem into a set of more manageable kernel tuning tasks. Our outliner aims to handle large-scale C/C++, Fortran and OpenMP applications. A set of program analysis and transformation techniques is utilized to enhance the portability, scalability, and interoperability of source-to-source outlining. More importantly, the generated kernels preserve the performance characteristics of the tuning targets and can be easily handled by other tools. Preliminary evaluations have shown that the ROSE outliner serves as a key component within an end-to-end empirical optimization system and enables a wide range of sequential and parallel optimization opportunities.
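
Outlining replaces a code region with a call to a new function whose parameters are the variables the region captured. A hand-written, language-neutral illustration of the before/after effect in Python (ROSE itself performs this transformation on C/C++ and Fortran sources):

```python
# Before outlining: the hot loop is buried inside a larger function.
def simulate_before(data, scale):
    total = 0.0
    for x in data:                 # <- the tunable hot loop
        total += scale * x * x
    return total

# After outlining: the hot loop lives in its own kernel, with the
# variables it captured passed in explicitly, so an autotuner can
# time, transform, and rebuild it in isolation.
def kernel(data, scale):
    total = 0.0
    for x in data:
        total += scale * x * x
    return total

def simulate_after(data, scale):
    return kernel(data, scale)

assert simulate_before([1.0, 2.0], 0.5) == simulate_after([1.0, 2.0], 0.5)
```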

Proceedings ArticleDOI
29 Sep 2009
TL;DR: This generator is presented around the simple example of a collision detector, which it significantly improves in accuracy, DSP count, logic usage, frequency and latency with respect to an implementation using standard floating-point operators.
Abstract: Custom operators, working at custom precisions, are a key ingredient to fully exploit the FPGA flexibility advantage for high-performance computing. Unfortunately, such operators are costly to design, and application designers tend to rely on less efficient off-the-shelf operators. To address this issue, an open-source architecture generator framework is introduced. Its salient features are an easy learning curve from VHDL, the ability to embed arbitrary synthesizable VHDL code, portability to mainstream FPGA targets from Xilinx and Altera, automatic management of complex pipelines with support for frequency-directed pipelining, and automatic test-bench generation. The generator is presented around the simple example of a collision detector, which it significantly improves in accuracy, DSP count, logic usage, frequency and latency with respect to an implementation using standard floating-point operators.

Journal ArticleDOI
TL;DR: This paper describes how defect-related knowledge on an electronic assembly line can be integrated in the decision making process at an operational and organizational level and focuses in particular on the efficient acquisition of shallow knowledge concerning everyday human interventions on the production lines.
Abstract: Increasing global competition has made many manufacturing companies recognize that competitive manufacturing in terms of low cost and high quality is crucial for success. Real-time process control and production optimization are, however, extremely challenging areas because manufacturing processes are getting ever more complex and involve many different parameters. This is a major problem when building decision support systems especially in electronics manufacturing. Although problem-solving is a knowledge intensive activity undertaken by people on the production floor, it is quite common to have large databases and run blindly feature extraction and data mining methods. Performance of these methods could, however, be drastically increased when combined with knowledge or expertise of the process. This paper describes how defect-related knowledge on an electronic assembly line can be integrated in the decision making process at an operational and organizational level. It focuses in particular on the efficient acquisition of shallow knowledge concerning everyday human interventions on the production lines, as well as on the factory-wide sharing of the resulting information for an improved defect management. Software with dedicated interfaces has been developed using a knowledge representation that supports portability and flexibility of the system. Semi-automatic knowledge acquisition from the production floor and generation of comprehensive reports for the quality department resulted in an improvement of the usability, usage, and usefulness of the decision support system.

Journal ArticleDOI
TL;DR: This paper presents QACID an ontology-based Question Answering system applied to the CInema Domain that allows users to retrieve information from formal ontologies by using as input queries formulated in natural language.
Abstract: This paper presents QACID, an ontology-based Question Answering system applied to the CInema Domain. The system allows users to retrieve information from formal ontologies using queries formulated in natural language as input. The distinctive characteristic of QACID is the strategy used to fill the gap between users' expressiveness and formal knowledge representation. This approach is based on collections of user queries and adapts easily to multilingual capabilities, inter-domain portability and changes in user information requirements. All these capabilities make it possible to develop Question Answering applications for real users. The system has been developed and tested for the Spanish language, using an ontology modelling the cinema domain. The performance level achieved enables the use of the system in real environments.

Journal ArticleDOI
TL;DR: In this article, the authors present an initial table listing available data-collection tools and reflect their experience with these tools and their performance, and an international group of experts iteratively reviewed the table and reflected on the performance of the tools until no new insights and consensus resulted.

Journal ArticleDOI
TL;DR: This work introduces an accelerated neuromorphic hardware device and describes the implementation of the proposed concept for this system, based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification.
Abstract: Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated.
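
The proposed concept keeps the experiment description ignorant of the platform that executes it. A self-contained, hypothetical Python sketch of that pattern (the class and method names are invented; the actual system integrates the hardware interface into an existing simulator-independent language):

```python
# Hypothetical sketch of a simulator-independent experiment description:
# the same script drives either backend, chosen by a single argument.
class SoftwareSimulator:
    def run(self, n_neurons, duration_ms):
        return f"simulated {n_neurons} neurons for {duration_ms} ms"

class NeuromorphicHardware:
    SPEEDUP = 10_000   # accelerated hardware runs faster than real time
    def run(self, n_neurons, duration_ms):
        return (f"emulated {n_neurons} neurons, wall-clock "
                f"{duration_ms / self.SPEEDUP} ms")

def experiment(backend):
    # The description below never mentions which platform executes it,
    # so hardware and simulator results can be compared directly.
    return backend.run(n_neurons=100, duration_ms=1000)

for backend in (SoftwareSimulator(), NeuromorphicHardware()):
    print(experiment(backend))
```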

Book ChapterDOI
25 Aug 2009
TL;DR: In this paper, the authors present a GPU virtualization middleware, which makes remote CUDA-compatible GPUs available to all the cluster nodes, which is implemented on top of the sockets application programming interface.
Abstract: Current high performance clusters are equipped with high-bandwidth/low-latency networks, lots of processors and nodes, very fast storage systems, etc. However, due to economic and/or power-related constraints, it is generally not feasible to provide an accelerating coprocessor, such as a graphics processor (GPU), per node. To overcome this, in this paper we present a GPU virtualization middleware, which makes remote CUDA-compatible GPUs available to all the cluster nodes. The software is implemented on top of the sockets application programming interface, ensuring portability over commodity networks, but it can also be easily adapted to high performance networks.
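
The middleware's essential pattern is a remote procedure call: marshal an accelerator request on the GPU-less node, ship it over a socket, and read the result back. A toy self-contained Python sketch of that pattern (invented protocol; a plain function stands in for a CUDA kernel, and the toy framing assumes one recv per message):

```python
import json, socket, threading

srv = socket.create_server(("127.0.0.1", 0))    # "GPU node" listener
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()
    req = json.loads(conn.recv(4096))           # unmarshal the request
    result = sum(x * x for x in req["data"])    # stand-in for a GPU kernel
    conn.sendall(json.dumps({"result": result}).encode())
    conn.close()

threading.Thread(target=serve, daemon=True).start()

# "GPU-less node": marshal the call, ship it, read the result back.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(json.dumps({"op": "square_sum", "data": [1, 2, 3]}).encode())
print(json.loads(cli.recv(4096)))               # {'result': 14}
```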

Journal ArticleDOI
TL;DR: The major requirements for developing privacy-preserving social network applications are reported and a privacy threat model is proposed that can be used to enhance the information privacy in data or social network portability initiatives by determining the issues at stake related to the processing of personally identifiable information.

Journal ArticleDOI
TL;DR: The Spike Train Analysis Toolkit is a software package which implements, documents, and guides application of several information-theoretic spike train analysis techniques, thus minimizing the effort needed to adopt and use them.
Abstract: Conventional methods widely available for the analysis of spike trains and related neural data include various time- and frequency-domain analyses, such as peri-event and interspike interval histograms, spectral measures, and probability distributions. Information theoretic methods are increasingly recognized as significant tools for the analysis of spike train data. However, developing robust implementations of these methods can be time-consuming, and determining applicability to neural recordings can require expertise. In order to facilitate more widespread adoption of these informative methods by the neuroscience community, we have developed the Spike Train Analysis Toolkit. STAToolkit is a software package which implements, documents, and guides application of several information-theoretic spike train analysis techniques, thus minimizing the effort needed to adopt and use them. This implementation behaves like a typical Matlab toolbox, but the underlying computations are coded in C for portability, optimized for efficiency, and interfaced with Matlab via the MEX framework. STAToolkit runs on any of three major platforms: Windows, Mac OS, and Linux. The toolkit reads input from files with an easy-to-generate text-based, platform-independent format. STAToolkit, including full documentation and test cases, is freely available open source via http://neuroanalysis.org, maintained as a resource for the computational neuroscience and neuroinformatics communities. Use cases drawn from somatosensory and gustatory neurophysiology, and community use of STAToolkit, demonstrate its utility and scope.

Journal ArticleDOI
01 Oct 2009
TL;DR: The components of the backseat driver architecture as implemented on the Iver2 underwater vehicle are described, several examples of its use are provided, and the future direction of the architecture is discussed.
Abstract: In this paper, an innovative hybrid control architecture for real-time control of autonomous robotic vehicles is described as well as its implementation on a commercially available autonomous underwater vehicle (AUV). This architecture has two major components, a behavior-based intelligent autonomous controller and an interface to a classical dynamic controller that is responsible for real-time dynamic control of the vehicle given the decisions of the intelligent controller over the decision state space (e.g. vehicle course, speed, and depth). The driving force behind the development of this architecture was a desire to make autonomy software development for underwater vehicles independent from the dynamic control specifics of any given vehicle. The resulting software portability allows significant code reuse and frees autonomy software developers from being tied to a particular vehicle manufacturer's autonomy software and support as long as the vehicle supports the required interface between the intelligent controller and the dynamic controller. This paper will describe in detail the components of the backseat driver architecture as implemented on the Iver2 underwater vehicle, provide several examples of its use, and discuss the future direction of the architecture.

Journal IssueDOI
TL;DR: This article presents a novel profiling approach, which is entirely based on program transformation techniques, in order to build a profiling data structure that provides calling-context-sensitive program execution statistics and to generate reproducible profiles.
Abstract: Virtual execution environments, such as the Java virtual machine, promote platform-independent software development. However, when it comes to analyzing algorithm complexity and performance bottlenecks, available tools focus on platform-specific metrics, such as the CPU time consumption on a particular system. Other drawbacks of many prevailing profiling tools are high overhead, significant measurement perturbation, and reduced portability, as such tools are often implemented in platform-dependent native code. This article presents a novel profiling approach, which is entirely based on program transformation techniques, in order to build a profiling data structure that provides calling-context-sensitive program execution statistics. We explore the use of platform-independent profiling metrics in order to make the instrumentation entirely portable and to generate reproducible profiles. We implemented these ideas within a Java-based profiling tool called JP. A significant novelty is that this tool achieves complete bytecode coverage by statically instrumenting the core runtime libraries and dynamically instrumenting the rest of the code. JP provides a small and flexible API to write customized profiling agents in pure Java, which are periodically activated to process the collected profiling information. Performance measurements point out that, despite the presence of dynamic instrumentation, JP causes significantly less overhead than a prevailing tool for the profiling of Java code.
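
The data structure at the center of this approach is the calling-context tree (CCT), which keeps one counter per distinct call path rather than per method. As a language-neutral illustration, the sketch below builds a tiny CCT from Python's profiling hook, which stands in for JP's bytecode instrumentation:

```python
# Illustration of a calling-context tree (CCT), the structure JP builds.
# JP uses bytecode instrumentation; here Python's profiling hook stands in.
import sys

root = {"name": "<root>", "calls": 0, "children": {}}
stack = [root]

def tracer(frame, event, arg):
    if event == "call":
        name = frame.f_code.co_name
        node = stack[-1]["children"].setdefault(
            name, {"name": name, "calls": 0, "children": {}})
        node["calls"] += 1           # counter is per calling context
        stack.append(node)
    elif event == "return" and len(stack) > 1:
        stack.pop()

def leaf(): pass
def work():
    for _ in range(3):
        leaf()

sys.setprofile(tracer)
work()
sys.setprofile(None)
print(root["children"]["work"]["children"]["leaf"]["calls"])  # 3
```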

Proceedings ArticleDOI
02 Mar 2009
TL;DR: A novel approach to calling context reification that reconciles flexibility, efficiency, accuracy, and portability is introduced, which relies on a generic bytecode instrumentation framework ensuring complete bytecode coverage, including the standard Java class library.
Abstract: Aspect-oriented programming (AOP) eases the development of profilers, debuggers, and reverse engineering tools. Such tools frequently rely on calling context information. However, current AOP technology, such as AspectJ, does not offer dedicated support for accessing the complete calling context within aspects. In this paper, we introduce a novel approach to calling context reification that reconciles flexibility, efficiency, accuracy, and portability. It relies on a generic bytecode instrumentation framework ensuring complete bytecode coverage, including the standard Java class library. We compose our program transformations for calling context reification with the AspectJ weaver, providing the aspect developer an efficient mechanism to manipulate a customizable representation of the complete calling context. To highlight the benefits of our approach, we present ReCrash as an aspect using a stack-based calling context representation; ReCrash is an existing tool that generates unit tests to reproduce program failures. In comparison with the original ReCrash tool, our aspect resolves several limitations, is extensible, also covers the standard Java class library, and causes less overhead.

Proceedings ArticleDOI
08 Dec 2009
TL;DR: This work presents the design and implementation considerations of such a smartphone-centered platform for low-cost continuous health monitoring based on commercial-off-the-shelf wireless wearable biosensors, which are anticipated to proliferate in the market in the near future.
Abstract: The body area sensor network is widely regarded as a key technology that holds promise of enabling low-cost healthcare to be accessible to the fast-growing aging sector of the population. The popularity of smartphones with their open operating systems provides a powerful platform for developing very low-cost personalized healthcare applications. In addition to the requirement of low cost, the limited battery life and the implementation of bio-signal processing software on resource-constrained smartphone platforms are two practical challenges. In this work, we present the design and implementation considerations of such a smartphone-centered platform for low-cost continuous health monitoring based on commercial off-the-shelf wireless wearable biosensors, which are anticipated to proliferate in the market in the near future. As a case study, this platform approach has been implemented utilizing photoplethysmographic biosensors and different smartphones to measure heart rate, breathing rate, and oxygen saturation, and to estimate obstructive sleep apnea. The two aforementioned practical challenges are addressed in detail from a system-level design perspective. The case study results confirm the many advantages of the suggested system, including closed-loop control capability, portability and upgradability.
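
As a toy illustration of the kind of on-phone bio-signal processing involved, the sketch below estimates heart rate from a synthetic photoplethysmogram by peak counting (all parameters are invented; real pipelines must handle noise, motion artifacts and adaptive thresholds):

```python
# Toy heart-rate estimate from a synthetic PPG-like waveform by peak
# counting. Real smartphone pipelines must cope with noise and motion.
import math

FS = 50                      # sample rate in Hz (assumed, typical for PPG)
BPM_TRUE = 72
signal = [math.sin(2 * math.pi * (BPM_TRUE / 60) * n / FS)
          for n in range(FS * 10)]           # 10 s of clean "PPG"

# A peak is a local maximum above a fixed threshold.
peaks = [n for n in range(1, len(signal) - 1)
         if signal[n - 1] < signal[n] >= signal[n + 1] and signal[n] > 0.5]

duration_s = len(signal) / FS
print(f"estimated heart rate: {60 * len(peaks) / duration_s:.0f} bpm")
```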

Proceedings ArticleDOI
05 May 2009
TL;DR: This paper presents the design of a controller intended for teleoperation, capable of controlling an anthropomorphic robotic arm through a LAN or via the Internet, and makes use of the already widespread Wi-Fi technology as its wireless communications medium.
Abstract: This paper presents the design of a controller intended for teleoperation. It is capable of controlling an anthropomorphic robotic arm through a LAN or via the Internet. The system uses several interdependent processing modules to provide numerous functionalities, and makes use of the already widespread Wi-Fi technology as its wireless communications medium. The user can control the robotic arm remotely and access its sensory feedback signals as well. The camera mounted on the robot arm takes images and transmits to the control station. The system has been designed with project portability in mind, and consequently will require minimal modification for other applications. The robot arm is controlled using a master-slave control methodology.