
Showing papers on "Software portability published in 2004"


Journal ArticleDOI
TL;DR: A review of current technology compares how, when, and where recomposition occurs.
Abstract: Interest in adaptive computing systems has increased dramatically in the past few years, and a variety of techniques now allow software to adapt dynamically to its environment. Compositional adaptation enables software to modify its structure and behavior dynamically in response to change in its execution environment. A review of current technology compares how, when, and where recomposition occurs.
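
The survey is narrative, but the mechanism it covers can be made concrete. Below is a minimal, hypothetical Python sketch of compositional adaptation: the application swaps a collaborating component at runtime in response to an environment change. All class names and the adaptation trigger are invented for illustration, not taken from the survey.

```python
# Minimal sketch of compositional adaptation: the application delegates to a
# component slot whose implementation is swapped at runtime when the
# execution environment changes. All names are illustrative.

class WifiSender:
    def send(self, data):
        print(f"wifi: sending {len(data)} bytes uncompressed")

class LossyLinkSender:
    def send(self, data):
        compressed = data[: len(data) // 2]   # stand-in for real FEC/compression
        print(f"lossy link: sending {len(compressed)} bytes with redundancy")

class AdaptiveApp:
    def __init__(self):
        self._sender = WifiSender()            # initial composition

    def on_environment_change(self, link_quality):
        # Recomposition point: replace the collaborating component, not the app.
        self._sender = LossyLinkSender() if link_quality < 0.5 else WifiSender()

    def transmit(self, data):
        self._sender.send(data)

app = AdaptiveApp()
app.transmit(b"x" * 100)
app.on_environment_change(link_quality=0.3)    # environment degrades
app.transmit(b"x" * 100)
```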

651 citations


Patent
03 Jul 2004
TL;DR: In this paper, a lightweight, battery operated, portable, personal electronic device capable of faxing, scanning, printing and copying media as a standalone device or in cooperation with other electronic devices including PCs, mobile telephones, PDAs, etc. is provided.
Abstract: A lightweight, battery-operated, portable, personal electronic device capable of faxing, scanning, printing and copying media as a standalone device or in cooperation with other electronic devices including PCs, mobile telephones, PDAs, etc. is provided. The device automatically detects the presence of fax-capable devices and reconfigures its software for compatibility with the fax-capable device, eliminating the need for user programming. The device's ergonomic design, intrinsic physical stability, and same-side paper feeds and user interface enable use in work areas having limited space. The device includes unidirectional, independent pathways for original and recording media such that paper jams are minimized. Portability is maximized through innovative power management software and hardware.

428 citations


Proceedings ArticleDOI
25 Apr 2004
TL;DR: Papier-Mâché introduces a high-level event model for working with computer vision, electronic tags, and barcodes that facilitates technology portability; the evaluation found the input abstractions, technology portability, and monitoring window to be highly effective.
Abstract: Tangible user interfaces (TUIs) augment the physical world by integrating digital information with everyday physical objects. Currently, building these UIs requires "getting down and dirty" with input technologies such as computer vision. Consequently, only a small cadre of technology experts can currently build these UIs. Based on a literature review and structured interviews with nine TUI researchers, we created Papier-Mâché, a toolkit for building tangible interfaces using computer vision, electronic tags, and barcodes. Papier-Mâché introduces a high-level event model for working with these technologies that facilitates technology portability. For example, an application can be prototyped with computer vision and deployed with RFID. We present an evaluation of our toolkit with six class projects and a user study with seven programmers, finding the input abstractions, technology portability, and monitoring window to be highly effective.
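
The abstract does not show the toolkit's real (Java-based) API, so the following Python sketch only illustrates the idea of a technology-independent event model: the application subscribes to "object appeared" events and never touches the sensing technology, so a vision prototype can later be deployed with RFID. All class and method names here are assumptions.

```python
# Hypothetical sketch of a technology-independent input event model in the
# spirit of Papier-Mache; not the toolkit's actual API.

class ObjectEvent:
    def __init__(self, object_id, source):
        self.object_id, self.source = object_id, source

class InputSource:
    def __init__(self):
        self.listeners = []
    def fire_added(self, object_id):
        for listener in self.listeners:
            listener(ObjectEvent(object_id, type(self).__name__))

class VisionSource(InputSource):
    def frame_processed(self, detected_ids):
        for oid in detected_ids:
            self.fire_added(oid)

class RfidSource(InputSource):
    def tag_read(self, tag_id):
        self.fire_added(tag_id)

def on_object_added(event):
    print(f"object {event.object_id} appeared via {event.source}")

# The application code is identical whichever source is plugged in.
source = VisionSource()          # swap for RfidSource() at deployment time
source.listeners.append(on_object_added)
source.frame_processed(["mug-42"])
```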

242 citations


Patent
13 Apr 2004
TL;DR: In this article, the authors present a system for generating message transformation and validation software using interface definition documents as inputs. But this system is limited to the use of W3C XML schemas, which can be reused and composed with other schemas.
Abstract: This system for generating message transformation and validation software uses interface definition documents as inputs. An interface definition consists of an internally consistent set of message definitions, data dictionary entries, transformation rules, and validation rules. A user-friendly graphical user interface provides the requirements engineer or other user with the ability to specify these documents. This graphical user interface is a structured table and rules editor that allows the requirements engineer to enter and validate interface definitions to ensure that the definitions meet certain predetermined requirements. The generation system takes the interface definition documents as input and generates various software artifacts to transform and validate messages. W3C XML schemas are generated from an interface definition for assistance with code development, for use as standards-compliant interface definitions that can be reused and composed with other schemas, and for validating messages. Extensible Stylesheet Language Transform files are generated from an interface definition to transform and validate messages. These generated software artifacts for message transformation and validation may then be used to implement message-processing systems. One example where this software was deployed is a wireless or local number portability service bureau that permits portability requests to pass from one telecommunications entity to another. The graphical user interface also enables the user to compare interface definitions, generate schema artifacts, generate transformation and validation artifacts, generate test cases, generate message indices, and generate documentation for distribution and review (formats include Microsoft Word, rich-text format, and HTML). Preexisting requirements documents may be converted for use in the present system by parsing and translating the preexisting documents into the interface definition documents. After this conversion process, information that could not be parsed and translated is referred to the requirements engineer or other user, who reenters the information using the structured table and rules editor.
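
As a hedged illustration of how generated artifacts of this kind are typically consumed, the Python sketch below validates a message against an XML Schema and transforms it with an XSLT stylesheet using the third-party lxml library. The toy schema and stylesheet stand in for the patent's generated ones and are not taken from it.

```python
# Consuming generated validation/transformation artifacts: validate a message
# against a W3C XML Schema, then transform it with XSLT. Requires lxml.
from lxml import etree

xsd = etree.XMLSchema(etree.XML(b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="portRequest">
    <xs:complexType><xs:sequence>
      <xs:element name="number" type="xs:string"/>
    </xs:sequence></xs:complexType>
  </xs:element>
</xs:schema>"""))

transform = etree.XSLT(etree.XML(b"""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/portRequest">
    <internalMsg><msisdn><xsl:value-of select="number"/></msisdn></internalMsg>
  </xsl:template>
</xsl:stylesheet>"""))

msg = etree.XML(b"<portRequest><number>+1555123</number></portRequest>")
assert xsd.validate(msg)               # validation against the generated schema
print(etree.tostring(transform(msg)))  # transformation into the internal format
```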

123 citations


Book ChapterDOI
23 Jun 2004
TL;DR: The main contribution is in showing how NLG tools that take Semantic Web ontologies as their input can be designed to minimise the portability effort, while offering better output than template-based ontology verbalisers.
Abstract: This paper presents an approach for the automatic generation of reports from domain ontologies encoded in Semantic Web standards like OWL. The paper identifies the challenges that need to be addressed when generating text from RDF and OWL and demonstrates how the ontology is used during the different stages of the generation process. The main contribution is in showing how NLG tools that take Semantic Web ontologies as their input can be designed to minimise the portability effort, while offering better output than template-based ontology verbalisers.

104 citations


ReportDOI
06 Jul 2004
TL;DR: A standard set of network characteristics that are useful for Grid applications and services is described, together with a classification hierarchy for these characteristics, which will facilitate the creation of common schemata for describing network monitoring data in Grid Monitoring and Discovery Services.
Abstract: This document describes a standard set of network characteristics that are useful for Grid applications and services as well as a classification hierarchy for these characteristics. The goal of this work is to identify the various types of network measurements according to the network characteristic they measure and the network entity on which they are taken. This document defines standard terminology to describe those measurements, but it does not attempt to define new standard measurement methodologies or attempt to define the best measurement methodologies to use for grid applications. However, it does attempt to point out the advantages and disadvantages of different measurement methodologies. This document was motivated by the need for the interchange of measurements taken by various systems in the Grid and to develop a common dictionary to facilitate discussions about and specifications for measurement systems. The application of this naming system will facilitate the creation of common schemata for describing network monitoring data in Grid Monitoring and Discovery Services, and thus help to address portability issues between the wide variety of network measurements used between sites of a Grid.
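
As an informal illustration of the naming idea (the exact names below are examples, not the normative vocabulary of the document), the Python sketch labels a measurement with a characteristic drawn from a small classification tree and with the network entity on which it was taken.

```python
# Illustrative sketch of a measurement-naming hierarchy: each measurement is
# tagged with the characteristic it measures (a dotted path in a
# classification tree) and the network entity it was taken on.

CHARACTERISTICS = {
    "delay": ["roundtrip", "oneway", "jitter"],
    "bandwidth": ["capacity", "utilization", "available", "achievable"],
    "loss": ["oneway", "roundtrip"],
}

def characteristic_name(base, variant):
    if variant not in CHARACTERISTICS.get(base, []):
        raise ValueError(f"unknown characteristic {base}.{variant}")
    return f"{base}.{variant}"

measurement = {
    "characteristic": characteristic_name("bandwidth", "available"),
    "entity": {"type": "path", "source": "siteA-gw", "dest": "siteB-gw"},
    "methodology": "active probe",     # the document compares methodologies
    "value": 94.2, "unit": "Mbit/s",
}
print(measurement["characteristic"], "=", measurement["value"], measurement["unit"])
```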

93 citations


DOI
31 Aug 2004
TL;DR: This work presents a new approach to achieve this goal, by applying intrusion detection techniques to virtual machine based systems, thus keeping the intrusion detection system out of reach from intruders.
Abstract: A virtual machine is a software replica of an underlying real machine. Multiple virtual machines can operate on the same host machine concurrently, without interfering with each other. This concept is becoming valuable in production computing systems, due to its benefits in terms of cost and portability. As they provide strong isolation between the virtual environment and the underlying real system, virtual machines can also be used to improve the security of a computer system in the face of attacks against its network services. This work presents a new approach to achieving this goal, by applying intrusion detection techniques to virtual machine based systems, thus keeping the intrusion detection system out of the reach of intruders. The results obtained from a prototype implementation confirm the usefulness of this approach.

85 citations


Proceedings ArticleDOI
07 Jun 2004
TL;DR: This paper describes a new hardware/software co-verification method for System-On-a-Chip, based on the integration of a C/C++ simulator and an inexpensive FPGA emulator, which enables easy debugging, rich portability, and high verification speed, at a low cost.
Abstract: This paper describes a new hardware/software co-verification method for System-On-a-Chip, based on the integration of a C/C++ simulator and an inexpensive FPGA emulator. Communication between the simulator and emulator occurs via a flexible interface based on shared communication registers. This method enables easy debugging, rich portability, and high verification speed, at a low cost. We describe the application of this environment to the verification of three different complex commercial SoCs, supporting concurrent hardware and embedded software development. In these projects, our verification methodology was used to perform complete system verification at 0.2-1.1 MHz, while supporting full graphical interface functions such as "waveform" or "signal dump" viewers, and debugging functions such as "step" or "break".
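
The paper's interface is a hardware link; the Python sketch below only models the handshake over shared communication registers so the protocol itself is visible. Register names, encodings, and the polling scheme are assumptions for illustration, not the paper's actual design.

```python
# Conceptual model of simulator/emulator communication through shared
# registers: the software side posts a command and polls for completion,
# the hardware side services requests. A dict stands in for the FPGA-side
# register file reached over a physical link.

REGS = {"CMD": 0, "STATUS": 0, "DATA": 0}   # shared communication registers
IDLE, REQ, DONE = 0, 1, 2

def emulator_step():
    # Hardware side: when a request is posted, compute and signal completion.
    if REGS["CMD"] == REQ:
        REGS["DATA"] = REGS["DATA"] * 2     # stand-in for the DUT's function
        REGS["STATUS"], REGS["CMD"] = DONE, IDLE

def simulator_call(value):
    # Software side: post operands, raise the command flag, poll for DONE.
    REGS["DATA"], REGS["CMD"], REGS["STATUS"] = value, REQ, IDLE
    while REGS["STATUS"] != DONE:
        emulator_step()                     # in reality: wait on the link
    return REGS["DATA"]

print(simulator_call(21))                   # -> 42
```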

76 citations


Book ChapterDOI
01 Jan 2004
TL;DR: Non-functional aspects of software development should be treated as requirements to be dealt with from the earliest stages of the software development process, and then throughout the entire life cycle.
Abstract: Software developers are constantly under pressure to deliver code on time and on budget. As a result, many projects focus on delivering functionality at the expense of meeting non-functional requirements such as reliability, security, maintainability, portability, and accuracy, among others. As software complexity grows and clients demand higher and higher quality software, non-functional properties can no longer be considered to be of secondary importance. Many systems fail or fall into disuse precisely because of inadequacies in these properties. These non-functional aspects have been treated as properties or attributes after the fact. While these properties have always been a concern among software engineering researchers, early work tended to view them as properties or attributes of the finished software product, to be evaluated and measured. Recent work offers the complementary view that they should be treated as requirements to be dealt with from the earliest stages of the software development process [6][7], and then throughout the entire life cycle.

74 citations


Proceedings ArticleDOI
08 Nov 2004
TL;DR: This model is based on a novel mechanism, called detection strategy, that raises the abstraction level in dealing with metrics by allowing good-design rules and heuristics to be formulated in a quantifiable manner, and deviations from these rules to be detected automatically.
Abstract: The quality of a design has a decisive impact on the quality of a software product; but due to the diversity and complexity of design properties (e.g., coupling, encapsulation), their assessment and correlation with external quality attributes (e.g., maintenance, portability) is hard. In contrast to traditional quality models that express the "goodness" of design in terms of a set of metrics, the novel Factor-Strategy model proposed in this work relates the quality of a design explicitly to its conformance with a set of essential principles, rules and heuristics. This model is based on a novel mechanism, called detection strategy, that raises the abstraction level in dealing with metrics, by allowing good-design rules and heuristics to be formulated in a quantifiable manner and deviations from these rules to be detected automatically. This quality model provides a twofold advantage: (i) an easier construction and understanding of the model, as quality is put in connection with design principles rather than "raw numbers"; and (ii) a direct identification of the real causes of quality flaws. We have validated the approach through a comparative analysis involving two versions of an industrial software system.
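
A detection strategy can be made concrete with a small sketch. The Python example below encodes the widely published "god class" strategy as a quantifiable composition of metrics and thresholds; the specific metrics and threshold values are taken from the commonly cited formulation of that strategy and may differ from those in this paper.

```python
# Sketch of a detection strategy: a design rule ("avoid god classes")
# expressed as a composition of metrics with thresholds, so deviations
# can be flagged automatically.

def is_god_class(metrics):
    return (metrics["ATFD"] > 5          # accesses many foreign attributes
            and metrics["WMC"] >= 47     # very high class complexity
            and metrics["TCC"] < 0.33)   # low internal cohesion

classes = {
    "ReportManager": {"ATFD": 12, "WMC": 61, "TCC": 0.10},
    "Invoice":       {"ATFD": 1,  "WMC": 9,  "TCC": 0.70},
}
for name, m in classes.items():
    if is_god_class(m):
        print(f"{name}: violates the 'avoid god classes' design rule")
```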

71 citations


Journal Article
TL;DR: A formal model for analyzing and constructing forensic procedures, showing the advantages of formalization, is proposed and applied in a real-world scenario with focus on Linux and OS X.
Abstract: Forensic investigative procedures are used in the case of an intrusion into a networked computer system to detect the scope or nature of the attack. In many cases, the forensic procedures employed are constructed in an informal manner that can impede the effectiveness or integrity of the investigation. We propose a formal model for analyzing and constructing forensic procedures, showing the advantages of formalization. A mathematical description of the model will be presented demonstrating the construction of the elements and their relationships. The model highlights definitions and updating of forensic procedures, identification of attack coverage, and portability across different platforms. The forensic model is applied in a real-world scenario with focus on Linux and OS X.

Proceedings ArticleDOI
06 Jul 2004
TL;DR: It is shown that augmenting current multi-agent systems to allow backchannels of communication between agents, with flexible protocols introduced in a carefully principled way for the efficient transfer of low-level information, can yield significant performance increases in communication efficiency.
Abstract: Despite the growing number of multi-agent software systems, relatively few physical systems have adopted multi-agent systems technology. Agents that interact with a dynamic physical environment have requirements not shared by virtual agents, including the need to transfer information about the world and their interaction with it. The agent communication languages proven successful in software-based multi-agent systems incur overheads that make them impractical or infeasible for the transfer of low-level data. Instead, real-world systems typically employ application-specific protocols to transfer video, audio, sensory, or telemetry data. These protocols lack the transparency and portability of formal agent communication languages and consequently are limited in their scalability. We propose augmenting the capabilities of current multi-agent systems to provide for the efficient transfer of low-level information, by allowing backchannels of communication between agents with flexible protocols in a carefully principled way. We show that this extension can yield significant performance increases in communication efficiency and discuss the benefits of incorporating backchannels into a search and rescue robot system.
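
A hedged Python sketch of the two-channel structure: agents negotiate over the regular, expensive ACL-level channel, then move bulk sensor data onto a cheap raw stream. The offer format and field names are invented; only the backchannel idea reflects the proposal.

```python
# Negotiate a backchannel at the ACL level, then stream low-level data
# directly over a raw socket, outside the agent communication language.
import socket

# Step 1: agent A advertises a backchannel endpoint over the ACL channel.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
acl_offer = {"performative": "propose",
             "content": f"backchannel video tcp {srv.getsockname()[1]}"}

# Step 2: agent B accepts the proposal and streams raw data directly.
port = int(acl_offer["content"].split()[-1])
out = socket.create_connection(("127.0.0.1", port))
out.sendall(b"\x00" * 4096)        # stand-in for video/telemetry frames
out.close()

conn, _ = srv.accept()
received = b""
while True:
    part = conn.recv(4096)
    if not part:
        break
    received += part
print(f"backchannel carried {len(received)} bytes without ACL overhead")
```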

DOI
31 Aug 2004
TL;DR: The different subsystems of DesCOTS, a software system that embraces several tools that interact to support the different activities of the COTS components selection, are presented.
Abstract: Selection of commercial off-the-shelf software components (COTS components) has growing importance in software engineering. Unfortunately, selection projects run a high risk of ending in abandonment or yielding an incorrect selection. The use of software engineering practices such as the definition of quality models can reduce this risk. We defined a process for COTS components selection based on the use of quality models and began applying it in academic and industrial cases. The need for a tool to support this process arose and, although some tools already exist that partially support the involved activities, none of them was suitable enough. Because of this we developed DesCOTS, a software system that embraces several tools that interact to support the different activities of our process. The system has been designed taking into account not only functional concerns but also non-functional aspects such as reusability, interoperability and portability. In this paper we present the different subsystems of DesCOTS and discuss their applicability.

Journal ArticleDOI
TL;DR: This article presents an approach for developing distributed manufacturing applications that are compatible and synchronized and thus, able to support IPPD, which involves the use of a common manufacturing application ‘middleware’, which is distributed between a central geometric modelling server and application clients.
Abstract: A heterogeneous computing environment characterizes today's manufacturing situation. This is a stumbling block for the efficient implementation of manufacturing concepts such as integrated product and process design (IPPD). A computing environment for IPPD would require the seamless integration of the various product and process design software systems. The exchange of information between these systems should be efficient, compatible and synchronous. This article presents an approach for developing distributed manufacturing applications that are compatible and synchronized, and thus able to support IPPD. The approach involves the use of a common manufacturing application ‘middleware’, which is distributed between a central geometric modelling server and application clients. The portability of the middleware is ensured through the use of Java for code portability and XML for data portability. The compatible product model problem is solved through the use of common data structures developed using reusable application client classes. Efficient transfer of product data is proposed using compressed model information embedded in a product data XML schema. Synchronization of design changes among all applications is achieved through the creation of relationships on an Application Relationship Manager.
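
A small sketch of the "compressed model information embedded in a product data XML schema" idea, written in Python rather than the article's Java; element names and the concrete compression scheme (zlib plus base64) are illustrative assumptions.

```python
# Carry compressed geometry inside an XML product document so any
# XML-capable client can consume it.
import base64, zlib
import xml.etree.ElementTree as ET

geometry = b"v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n" * 100   # stand-in model data

product = ET.Element("product", {"id": "bracket-7"})
model = ET.SubElement(product, "modelData", {"encoding": "base64+zlib"})
model.text = base64.b64encode(zlib.compress(geometry)).decode("ascii")

# A receiving client recovers the model from the XML payload.
doc = ET.fromstring(ET.tostring(product))
recovered = zlib.decompress(base64.b64decode(doc.find("modelData").text))
assert recovered == geometry
print(f"{len(geometry)} bytes of geometry carried as "
      f"{len(doc.find('modelData').text)} XML characters")
```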

Journal ArticleDOI
TL;DR: The design flow and the internal architecture of a newly proposed framework called verifiable embedded real-time application framework (VERTAF), which integrates software component-based reuse, formal synthesis, and formal verification, are revealed.
Abstract: The growing complexity of embedded real-time software requirements calls for the design of reusable software components, the synthesis and generation of software code, and the automatic guarantee of nonfunctional properties such as performance, time constraints, reliability, and security. Available application frameworks targeted at the automatic design of embedded real-time software are poor in integrating functional and nonfunctional requirements. To bridge this gap, we reveal the design flow and the internal architecture of a newly proposed framework called verifiable embedded real-time application framework (VERTAF), which integrates software component-based reuse, formal synthesis, and formal verification. A formal UML-based embedded real-time object model is proposed for component reuse. Formal synthesis employs quasistatic and quasidynamic scheduling with automatic generation of multilayer portable efficient code. Formal verification integrates a model checker kernel from SGM, by adapting it for embedded software. The proposed architecture for VERTAF is component-based and allows plug-and-play for the scheduler and the verifier. Using VERTAF to develop application examples significantly reduced design effort and illustrated how high-level reuse of software components combined with automatic synthesis and verification can increase design productivity.

Proceedings ArticleDOI
25 May 2004
TL;DR: It is demonstrated that reduced middleware footprint can be achieved while maintaining real-time properties of applications running on networked embedded systems, and evidence that empirical measurement using a representative application is crucial to guide selection of feature subsets from general purpose middleware is given.
Abstract: General purpose middleware has been shown to be effective off-the-shelf, in meeting diverse functional requirements for a wide range of distributed systems. However, middleware customization is necessary for many networked embedded systems because of the resource constraints in the networked nodes. We demonstrate that reduced middleware footprint can be achieved while maintaining real-time properties of applications running on such systems. We also give evidence that empirical measurement using a representative application is crucial to guide (1) selection of feature subsets from general purpose middleware and (2) trade-offs among different dimensions of design metrics including real-time, footprint, and portability.

Journal ArticleDOI
TL;DR: In this article, the authors compare and classify quality-evaluation models, particularly those evaluating the correctness aspect of quality, and examine their data requirements to provide practical guidance for selecting appropriate models and measurements.
Abstract: Quality can determine a software product's success or failure in today's competitive market. Among the many characteristics of quality, some aspects deal directly with the functional correctness or the conformance to specifications, while others deal with usability, portability, and so on. Correctness - that is, how well software conforms to requirements and specifications - is typically the most important aspect of quality, particularly when crucial operations depend on the software. Even for market segments in which new features and usability take priority, such as software for personal use in the mass market, correctness is still a fundamental part of the users' expectations. We compare and classify quality-evaluation models, particularly those evaluating the correctness aspect of quality, and examine their data requirements to provide practical guidance for selecting appropriate models and measurements.

Patent
23 Dec 2004
TL;DR: In this article, a method and apparatus for generating a standard software communication architecture (SCA) compliant waveform application for a software defined radio is described, where an application shell generator is used to separate implementation of software radio software resources from implementation of radio waveform functionality.
Abstract: A method and apparatus is described for generating a standard software communication architecture (SCA) compliant waveform application for a software defined radio. An application shell generator is used to separate implementation of software radio software resources from implementation of software radio waveform functionality. In this manner, an additional layer of abstraction (402) is defined and enforced between software resource objects (408) that control access to a set of physical abstraction layer SCA core framework API's (422) and waveform functionality. This additional abstraction layer assures that the physical abstraction layer API's only interact with architecture compliant source code (424). The source code, derived from software resource templates (406), also assures portability of the generated software radio waveform application to other SCA compliant platforms.

Book Chapter
01 Sep 2004
TL;DR: The Transterpreter, a virtual machine for executing the Transputer instruction set, is reported on; it is a small, portable, efficient and extensible run-time interpreter intended to be easily ported to handheld computers, mobile phones, and other embedded contexts.
Abstract: This paper reports on the Transterpreter: a virtual machine for executing the Transputer instruction set. This interpreter is a small, portable, efficient and extensible run-time. It is intended to be easily ported to handheld computers, mobile phones, and other embedded contexts. In striving for this level of portability, occam programs compiled to Transputer byte-code can currently be run on desktop computers, handhelds, and even the LEGO Mindstorms robotics kit.
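
To see why such a runtime ports easily, consider a toy bytecode interpreter: the entire runtime is one dispatch loop over a compact instruction set, with no dependence on the host architecture. The opcodes below are invented for illustration; the real Transterpreter executes the Transputer instruction set, including its process-scheduling instructions.

```python
# Toy stack-machine interpreter: the whole "virtual machine" is a single
# portable dispatch loop.

PUSH, ADD, PRINT, HALT = range(4)

def run(code):
    stack, pc = [], 0
    while True:
        op = code[pc]; pc += 1
        if op == PUSH:                 # push the next literal operand
            stack.append(code[pc]); pc += 1
        elif op == ADD:                # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == PRINT:
            print(stack.pop())
        elif op == HALT:
            return

run([PUSH, 40, PUSH, 2, ADD, PRINT, HALT])   # -> 42
```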

Patent
20 Jul 2004
TL;DR: In this article, an instant messaging software architecture and method for implementing scalable and portable community-motivated communications is described, which can be used to enhance a user's instant messaging experience through the ability to involve a large number of users in a variety of different interactive environments while maintaining inter-user responsiveness.
Abstract: An instant messaging software architecture and method for implementing scalable and portable community-motivated communications is disclosed herein. Aspects of the invention can be used to enhance a user's instant messaging experience through the ability to involve a large number of users in a variety of different interactive environments while maintaining inter-user responsiveness. The scalability aspect of the invention utilizes scalable messaging interfaces and object oriented programming to extend user limits beyond current boundaries. The portability of this implementation and programming language also enables users of different devices such as PDAs, personal computers and mobile phones to use the same software and architecture to communicate with other users. Aspects of the invention further enable content providers to advertise, poll and otherwise interact with a large audience in a real-time instant messaging environment.

Proceedings ArticleDOI
07 Jun 2004
TL;DR: A virtualisation layer is introduced that allows reconfigurable application-specific coprocessors to access user-space virtual memory and share the memory address space with user applications, while also performing runtime optimizations.
Abstract: Reconfigurable Systems-on-Chip (SoCs) on the market consist of full-fledged processors and large Field-Programmable Gate Arrays (FPGAs). The latter can be used to implement the system glue logic, various peripherals, and application-specific coprocessors. Using FPGAs for application-specific coprocessors has clear speedup potential, but the practice is less common because of the complexity of interfacing the software application with the coprocessor. Another obstacle is the lack of portability across different systems. In this work, we present a virtualisation layer consisting of an operating-system extension and a hardware component. It lowers the complexity of interfacing and increases portability, while also allowing the coprocessor to access user virtual memory through a virtual memory window. The burden of moving data between processor and coprocessor is shifted from the programmer to the operating system. Since the virtualisation layer components hide the physical details of the system, user-designed hardware and software become perfectly portable. A reconfigurable SoC running Linux is used to prove the viability of the concept. Two applications are ported to the system to test the approach, with their critical functions mapped to the specific coprocessors. We show a significant speedup compared to the software versions, while only a limited penalty is paid for virtualisation.
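
A conceptual Python model of the virtual memory window, with the OS-extension behaviour collapsed into a few lines: the coprocessor addresses a small window, and window "misses" cause the layer to map the corresponding user page in. Page size, window geometry, and the direct-mapped policy are assumptions for illustration only.

```python
# The coprocessor sees only a small window; the virtualisation layer maps
# user pages into window slots on demand, hiding physical addresses.

PAGE = 256

class VirtualMemoryWindow:
    def __init__(self, user_memory, window_pages=4):
        self.mem = user_memory          # the application's address space
        self.slots = window_pages
        self.window = {}                # slot -> resident page number
    def read(self, addr):
        page, offset = divmod(addr, PAGE)
        slot = page % self.slots        # direct-mapped window
        if self.window.get(slot) != page:
            self.window[slot] = page    # "miss": the OS maps the user page in
        return self.mem[page * PAGE + offset]

user_buf = bytearray(bytes(range(256)) * 8)
win = VirtualMemoryWindow(user_buf)
print(win.read(1000))                   # coprocessor-side access -> 232
```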

Book ChapterDOI
TL;DR: A meta-language in XML is introduced for defining test cases for services, together with a tool that can test and monitor whether workflows between multiple service endpoints really behave as described in that meta-language.
Abstract: Service-Oriented Architectures (SOAs) have recently emerged as a promising new paradigm for supporting distributed computing. Testing SOAs is very challenging, and automated test tools can help reduce development costs enormously. In this paper we propose an approach to automated testing for SOAs. We introduce a meta-language in XML for defining test cases for services. This paper focuses on a real-life prototype implementation called SITT (Service Integration Test Tool), which can test and monitor whether workflows between multiple service endpoints really behave as described in the XML meta-language. This paper shows how SITT is designed, and we present its features by introducing a real-world application scenario from the domain of telecommunications providers, namely “Mobile Number Portability”.
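
The paper does not publish the meta-language's schema, so the sketch below invents a minimal test-case document and runner just to show the shape of the approach: steps name service endpoints, operations, and expected outcomes. Element names and the stub invocation are assumptions.

```python
# Hypothetical XML test-case definition for a multi-endpoint workflow,
# plus a tiny runner that checks each step's outcome.
import xml.etree.ElementTree as ET

TEST_CASE = """
<testcase name="mobile-number-portability">
  <step service="donor"     operation="releaseNumber"  expect="ok"/>
  <step service="recipient" operation="activateNumber" expect="ok"/>
</testcase>
"""

def fake_invoke(service, operation):
    return "ok"        # stand-in for a real SOAP/HTTP call to the endpoint

def run_test(xml_text):
    case = ET.fromstring(xml_text)
    for step in case.findall("step"):
        result = fake_invoke(step.get("service"), step.get("operation"))
        status = "PASS" if result == step.get("expect") else "FAIL"
        print(f"{case.get('name')}: {step.get('operation')} -> {status}")

run_test(TEST_CASE)
```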

Patent
21 Apr 2004
TL;DR: In this paper, a virtualizer is proposed that provides a powerful operation and development platform for large-scale embedded system software, decoupling the upper application layer from the lower layers to increase the portability of upper-layer application code, and allowing the functions of common upper-layer modules to be migrated into generic intermediate components within the virtualizer.
Abstract: The unit comprises a database module, an interprocess communication module, a high-reliability module, a debugging module, a task-dispatching module, an extension protocol module, a clock module, an internal memory module, and a multi-task communication module. The unit not only provides a powerful operation and development platform for large-scale embedded system software, making the upper and lower layers independent of each other and increasing the portability of upper-layer application code, but also allows the functions of some general-purpose modules in the upper-layer application software to be moved collectively into the virtualizer, where they are realized by generic intermediate components.

Journal ArticleDOI
TL;DR: The On-Chip Communication Network project provides an efficient framework for the specification, modeling, simulation, and design exploration of networks-on-chip, based on an object-oriented C++ library built on top of SystemC.

Proceedings ArticleDOI
13 Jan 2004
TL;DR: A reverse engineering tool is presented that allows flexible recovery of the presentation model from Web sites, adapting the reverse engineering to the target platforms, together with a forward engineering tool that converts this model into any final executable UI, in particular expressed in VRML, WML, ...
Abstract: Re-engineering transforms a final user interface into a logical representation that is manipulable enough to allow forward engineering to port a UI from one computing platform to another with maximum flexibility and minimal effort. Re-engineering is used to adapt a UI to another context. This adaptation is governed by two main tasks: the adaptation of the code itself to the new computing platform, and the redesign of the UI to better suit the new constraints of the target platform (interaction capabilities, screen size, ...). To support this process, we have developed a reverse engineering tool that allows a flexible recovery of the presentation model from Web sites, adapting the reverse engineering to the target platforms, and a forward engineering tool that converts this model into any final executable UI, in particular expressed in VRML, WML, ...

Journal ArticleDOI
TL;DR: It is shown that an information extraction system which is used for real world applications and different domains can be built using some autonomous, corporate components (agents) and that carefully selecting the right machine learning technique for the right task and selective sampling can be used to reduce the human effort required to annotate examples for building such systems.
Abstract: Information Extraction (IE) systems that can exploit the vast source of textual information that is the internet would provide a revolutionary step forward in terms of delivering large volumes of content cheaply and precisely, thus enabling a wide range of new knowledge-driven applications and services. However, despite this enormous potential, few IE systems have successfully made the transition from laboratory to commercial application. The reason may be a purely practical one: to build usable, scalable IE systems requires bringing together a range of different technologies as well as providing clear and reproducible guidelines as to how to collectively configure and deploy those technologies. This paper is an attempt to address these issues. The paper focuses on two primary goals. Firstly, we show that an information extraction system which is used for real-world applications and different domains can be built using some autonomous, corporate components (agents). Such a system has some advanced properties: clear separation of different extraction tasks and steps, portability to multiple application domains, trainability, extensibility, etc. Secondly, we show that machine learning and, in particular, learning in different ways and at different levels, can be used to build practical IE systems. We show that carefully selecting the right machine learning technique for the right task, together with selective sampling, can be used to reduce the human effort required to annotate examples for building such systems.
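
The selective-sampling step can be sketched generically: in each round the learner asks for labels only on the unlabelled documents it is least confident about. The confidence scores below are random stand-ins for the real IE learner's output, and the loop structure is an assumption about how such sampling is typically organized.

```python
# Uncertainty-based selective sampling: annotate the least-confident
# documents first, so fewer labelled examples are needed overall.
import random

random.seed(0)
unlabelled = {f"doc-{i}": random.random() for i in range(100)}  # doc -> confidence

def query_batch(pool, k=5):
    # Select the k least-confident documents for human annotation.
    return sorted(pool, key=pool.get)[:k]

for round_no in range(3):
    batch = query_batch(unlabelled)
    print(f"round {round_no}: annotate {batch}")
    for doc in batch:
        del unlabelled[doc]       # annotated docs join the training set
    # ...retrain the extractor and re-score the remaining pool here...
```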

Proceedings ArticleDOI
28 Jun 2004
TL;DR: This paper proposes a new methodology for the definition of faultloads based on software faults for dependability benchmarking, and concludes that software fault-based faultloads generated using this methodology are appropriate and useful for dependability benchmarking.
Abstract: The most critical component of a dependability benchmark is the faultload, as it should represent a repeatable, portable, representative, and generally accepted set of faults. These properties are essential to achieve the desired standardization level required by a dependability benchmark but, unfortunately, are very hard to achieve. This is particularly true for software faults, which surely accounts for the fact that this important class of faults has never been used in known dependability benchmark proposals. This paper proposes a new methodology for the definition of faultloads based on software faults for dependability benchmarking. Faultload properties such as repeatability, portability and scalability are also analyzed and validated through experimentation using a case study of dependability benchmarking of Web-servers. We concluded that software fault-based faultloads generated using our methodology are appropriate and useful for dependability benchmarking. As our methodology is not tied to any specific software vendor or platform, it can be used to generate faultloads for the evaluation of any software product such as OLTP systems.
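
A faultload built from software faults is essentially a set of code mutations, each an instance of a representative fault type. The Python sketch below injects two such faults at the source level; the fault-type names echo commonly published operators (missing if construct, wrong value assigned), but the string-level implementation is a simplified illustration, not the paper's tooling.

```python
# Build a tiny software-fault faultload by mutating target source code.
import re

def inject_missing_if(source):
    # Fault type "missing if construct": neutralize a guard so the
    # guarded body always runs.
    return re.sub(r"if .+?:\n(\s+)", r"if True:\n\1", source, count=1)

def inject_wrong_constant(source):
    # Fault type "wrong value assigned": replace a constant with a wrong one.
    return re.sub(r"= 0\b", "= 1", source, count=1)

target = "def reset(counter):\n    if counter.dirty:\n        counter.value = 0\n"
faultload = [inject_missing_if(target), inject_wrong_constant(target)]
for i, faulty in enumerate(faultload):
    print(f"--- fault {i} ---\n{faulty}")
```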

Journal ArticleDOI
TL;DR: LearningPinocchio, a system for adaptive Information Extraction from texts that is enjoying good commercial and scientific success, is described, and the general suitability of this IE technology for real-world applications is discussed.
Abstract: The new frontier of research on Information Extraction from texts is portability without any knowledge of Natural Language Processing. The market potential is in principle very large, provided that a suitable, easy-to-use and effective methodology is available. In this paper we describe LearningPinocchio, a system for adaptive Information Extraction from texts that is enjoying good commercial and scientific success. Real-world applications have been built and evaluation licenses have been released to external companies for application development. We outline the basic algorithm behind the scenes and present a number of applications developed with LearningPinocchio. We then report on an evaluation performed by an independent company. Finally, we discuss the general suitability of this IE technology for real-world applications and draw some conclusions.

Patent
27 Feb 2004
TL;DR: In this article, a number portability reconciliation (NPR) monitoring system (200) receives signaling messages relating to different calls or transactions, and in response, the NPR monitoring system queries a number-portability database (206) and updates the call detail record based on the response.
Abstract: Methods and systems for generating accurate call detail records in networks that utilize number portability are disclosed. A number portability reconciliation (NPR) monitoring system (200) receives signaling messages relating to different calls or transactions. The signaling messages may be copied from a network monitoring location upstream from where a number portability database (106) lookup occurs for a call. The monitoring system (200) automatically correlates messages relating to the same call or transaction into a call detail record usable by a plurality of different network monitoring applications. The NPR monitoring system (200) determines whether number portability processing is required, and, in response, the NPR monitoring system (200) queries a number portability database (206). The NPR monitoring system (200) receives a response from the number portability database (106) and updates the call detail record based on the response.

Proceedings ArticleDOI
07 Jun 2004
TL;DR: FIT, a Flexible open-source binary code Instrumentation Toolkit is presented, with existing backends for the Alpha, x86 and ARM architectures and the Tru64Unix, Linux and ARM Firmware execution environments.
Abstract: This paper presents FIT, a Flexible open-source binary code Instrumentation Toolkit. Unlike existing tools, FIT is truly portable, with existing backends for the Alpha, x86 and ARM architectures and the Tru64Unix, Linux and ARM Firmware execution environments. This paper focuses on some of the problems that needed to be addressed for providing this degree of portability. It also discusses the trade-off between instrumentation precision and low overhead.