
Showing papers on "Application software published in 2000"


Journal ArticleDOI
TL;DR: The authors believe that the answer lies in the use and reuse of software components that work within an explicit software architecture, and the Koala model, a component-oriented approach, is their way of handling the diversity of software in consumer electronics.
Abstract: Most consumer electronics today contain embedded software. In the early days, developing CE software presented relatively minor challenges, but in the past several years three significant problems have arisen: size and complexity of the software in individual products; the increasing diversity of products and their software; and the need for decreased development time. The question of handling diversity and complexity in embedded software at an increasing production speed becomes an urgent one. The authors present their belief that the answer lies not in hiring more software engineers. They are not readily available, and even if they were, experience shows that larger projects induce larger lead times and often result in greater complexity. Instead, they believe that the answer lies in the use and reuse of software components that work within an explicit software architecture. The Koala model, a component-oriented approach detailed in this article, is their way of handling the diversity of software in consumer electronics. Used for embedded software in TV sets, it allows late binding of reusable components with no additional overhead.
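To make the component idea concrete, here is a minimal Python sketch (not Koala's actual component description language, which targets C) of components that declare explicit provides/requires interfaces and are bound together by a product configuration rather than by each other; all class and interface names are illustrative assumptions.

```python
# Illustrative sketch: components declare explicit "provides" and "requires"
# interfaces, and a product configuration binds them, so the same components
# can be reused across diverse products.

class ITuner:
    def set_frequency(self, mhz: float) -> None: ...

class FrontEnd(ITuner):
    """Provides ITuner; requires nothing."""
    def set_frequency(self, mhz: float) -> None:
        print(f"front end tuned to {mhz} MHz")

class ChannelManager:
    """Requires an ITuner; the concrete binding is supplied by the configuration."""
    def __init__(self, tuner: ITuner):
        self._tuner = tuner
    def select_channel(self, channel: int) -> None:
        self._tuner.set_frequency(470.0 + 8.0 * channel)

# The product configuration (loosely analogous to a Koala compound component)
# wires concrete components together; the components never name each other.
def build_basic_tv() -> ChannelManager:
    return ChannelManager(tuner=FrontEnd())

if __name__ == "__main__":
    build_basic_tv().select_channel(21)
```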

795 citations


Proceedings ArticleDOI
01 Jun 2000
TL;DR: This analysis of the development process of the Apache web server reveals a unique process, which performs well on important measures, and concludes that hybrid forms of development that borrow the most effective techniques from both the OSS and commercial worlds may lead to high performance software processes.
Abstract: According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine the development process of a major open source application, the Apache web server. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution interval for this OSS project. This analysis reveals a unique process, which performs well on important measures. We conclude that hybrid forms of development that borrow the most effective techniques from both the OSS and commercial worlds may lead to high performance software processes.
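The measures named above are straightforward to compute once the archives are parsed. The hypothetical Python sketch below shows the flavor; the record fields, values, and code-size figure are invented for illustration, not data from the Apache study.

```python
# Hypothetical sketch of computing participation, productivity, defect density,
# and resolution intervals from parsed change and problem-report records.
from datetime import datetime
from statistics import median

changes = [   # e.g. parsed from the source-code change history
    {"developer": "alice", "lines_added": 120},
    {"developer": "bob", "lines_added": 40},
    {"developer": "alice", "lines_added": 300},
]
problems = [  # e.g. parsed from the problem-report archive
    {"opened": datetime(2000, 1, 3), "closed": datetime(2000, 1, 5), "is_defect": True},
    {"opened": datetime(2000, 2, 1), "closed": datetime(2000, 2, 15), "is_defect": True},
    {"opened": datetime(2000, 3, 7), "closed": datetime(2000, 3, 8), "is_defect": False},
]
kloc_delivered = 80.0  # assumed delivered code size, in KLOC

lines_by_dev = {}
for c in changes:      # developer participation and productivity
    lines_by_dev[c["developer"]] = lines_by_dev.get(c["developer"], 0) + c["lines_added"]

defects = [p for p in problems if p["is_defect"]]
resolution_days = [(p["closed"] - p["opened"]).days for p in problems]

print(f"developers participating: {len(lines_by_dev)}")
print(f"lines added per developer: {lines_by_dev}")
print(f"defect density: {len(defects) / kloc_delivered:.3f} defects/KLOC")
print(f"median problem resolution interval: {median(resolution_days)} days")
```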

611 citations


Proceedings Article
Faraboschi, Brown, Fisher, Desoli, Homewood
01 Jan 2000

316 citations


Journal ArticleDOI
TL;DR: The Real-Time CORBA specification includes features to manage CPU, network and memory resources and helps decrease the cycle time and effort required to develop high-quality systems by composing applications using reusable software component services rather than building them entirely from scratch.
Abstract: A growing class of real-time systems require end-to-end support for various quality-of-service (QoS) aspects, including bandwidth, latency, jitter and dependability. Applications include command and control, manufacturing process control, videoconferencing, large-scale distributed interactive simulation, and testbeam data acquisition. These systems require support for stringent QoS requirements. To meet this challenge, developers are turning to distributed object computing middleware, such as the Common Object Request Broker Architecture, an Object Management Group (OMG) industry standard. In complex real-time systems, DOC middleware resides between applications and the underlying operating systems, protocol stacks and hardware. CORBA helps decrease the cycle time and effort required to develop high-quality systems by composing applications using reusable software component services rather than building them entirely from scratch. The Real-Time CORBA specification includes features to manage CPU, network and memory resources. The authors describe the key Real-Time CORBA features that they feel are the most relevant to researchers and developers of distributed real-time and embedded systems.

301 citations


Proceedings ArticleDOI
01 May 2000
TL;DR: The experiments described in the paper show that specialization for an application domain is effective, yielding large gains in price/performance ratio and how scaling machine resources scales performance, although not uniformly across all applications.
Abstract: Lx is a scalable and customizable VLIW processor technology platform designed by Hewlett-Packard and STMicroelectronics that allows variations in instruction issue width, the number and capabilities of structures and the processor instruction set. For Lx we developed the architecture and software from the beginning to support both scalability (variable numbers of identical processing resources) and customizability (special purpose resources). In this paper we consider the following issues. When is customization or scaling beneficial? How can one determine the right degree of customization or scaling for a particular application domain? What architectural compromises were made in the Lx project to contain the complexity inherent in a customizable and scalable processor family? The experiments described in the paper show that specialization for an application domain is effective, yielding large gains in price/performance ratio. We also show how scaling machine resources scales performance, although not uniformly across all applications. Finally we show that customization on an application-by-application basis is today still very dangerous and much remains to be done for it to become a viable solution.

293 citations


Proceedings ArticleDOI
01 Jun 2000
TL;DR: A hardware/software partitioning algorithm that performs fine-grained partitioning of an application to execute on the combined CPU and datapath, and optimizes the global application execution time, including the software and hardware execution times, communication time and datapath reconfiguration time.
Abstract: In this paper we describe a new hardware/software partitioning approach for embedded reconfigurable architectures consisting of a general-purpose processor (CPU), a dynamically reconfigurable datapath (e.g. an FPGA), and a memory hierarchy. We have developed a framework called Nimble that automatically compiles system-level applications specified in C to executables on the target platform. A key component of this framework is a hardware/software partitioning algorithm that performs fine-grained partitioning (at loop and basic-block levels) of an application to execute on the combined CPU and datapath. The partitioning algorithm optimizes the global application execution time, including the software and hardware execution times, communication time and datapath reconfiguration time. Experimental results on real applications show that our algorithm is effective in rapidly finding close to optimal solutions.
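As a rough illustration of the cost trade-off such a partitioner weighs (this is a greedy toy, not the Nimble algorithm), a loop is moved to the datapath only when its hardware time plus communication and reconfiguration overhead beats its software time; all timing numbers below are made up.

```python
# Toy fine-grained hardware/software partitioning over candidate loops.
from dataclasses import dataclass

@dataclass
class Loop:
    name: str
    sw_time: float        # execution time on the CPU (ms)
    hw_time: float        # execution time on the reconfigurable datapath (ms)
    comm_time: float      # CPU <-> datapath data transfer time (ms)
    reconfig_time: float  # datapath reconfiguration time (ms)

def partition(loops):
    """Greedy partition: returns (hardware_loops, software_loops, total_time)."""
    hw, sw, total = [], [], 0.0
    for loop in loops:
        hw_cost = loop.hw_time + loop.comm_time + loop.reconfig_time
        if hw_cost < loop.sw_time:
            hw.append(loop.name)
            total += hw_cost
        else:
            sw.append(loop.name)
            total += loop.sw_time
    return hw, sw, total

loops = [
    Loop("dct",      sw_time=12.0, hw_time=1.5, comm_time=0.8, reconfig_time=2.0),
    Loop("quantize", sw_time=3.0,  hw_time=0.9, comm_time=0.7, reconfig_time=2.0),
    Loop("huffman",  sw_time=2.0,  hw_time=1.8, comm_time=0.5, reconfig_time=2.0),
]
print(partition(loops))
```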

280 citations


Journal ArticleDOI
TL;DR: A case study describing the experience of using this approach for testing the performance of a system used as a gateway in a large industrial client/server transaction processing application is presented.
Abstract: An approach to software performance testing is discussed. A case study describing the experience of using this approach for testing the performance of a system used as a gateway in a large industrial client/server transaction processing application is presented.
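A minimal sketch of this style of performance test, assuming a stand-in request handler rather than the gateway from the case study: drive a fixed number of requests and report throughput and response-time percentiles.

```python
# Minimal load driver reporting throughput and latency percentiles.
import random
import time

def gateway_request() -> None:
    time.sleep(random.uniform(0.001, 0.005))  # stand-in for a real transaction

def run_load_test(num_requests: int = 200) -> None:
    latencies = []
    start = time.perf_counter()
    for _ in range(num_requests):
        t0 = time.perf_counter()
        gateway_request()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    latencies.sort()
    p95 = latencies[int(0.95 * len(latencies)) - 1]
    print(f"throughput: {num_requests / elapsed:.1f} req/s")
    print(f"median latency: {latencies[len(latencies) // 2] * 1000:.2f} ms")
    print(f"95th percentile latency: {p95 * 1000:.2f} ms")

if __name__ == "__main__":
    run_load_test()
```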

270 citations


Proceedings ArticleDOI
01 Aug 2000
TL;DR: MW (Master-Worker) is described - a software framework that allows users to quickly and easily parallelize scientific computations using the master-worker paradigm on the Computational Grid.
Abstract: Describes MW (Master-Worker) - a software framework that allows users to quickly and easily parallelize scientific computations using the master-worker paradigm on the Computational Grid. MW provides both a "top-level" interface to application software and a "bottom-level" interface to existing Grid computing toolkits. Both interfaces are briefly described. We conclude with a case study, where the necessary Grid services are provided by the Condor high-throughput computing system, and the MW-enabled application code is used to solve a combinatorial optimization problem of unprecedented complexity.
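MW itself is a framework layered on Grid toolkits such as Condor; the local, single-machine Python sketch below only illustrates the master-worker paradigm it implements: a master dispatches independent tasks to workers and gathers their results.

```python
# Local master-worker sketch using multiprocessing queues.
import multiprocessing as mp

def worker(task_queue: mp.Queue, result_queue: mp.Queue) -> None:
    while True:
        task = task_queue.get()
        if task is None:                        # sentinel: no more work
            break
        result_queue.put((task, task * task))   # stand-in computation

def master(tasks, num_workers: int = 4):
    task_q, result_q = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(task_q, result_q)) for _ in range(num_workers)]
    for p in procs:
        p.start()
    for t in tasks:
        task_q.put(t)
    for _ in procs:                             # one sentinel per worker
        task_q.put(None)
    results = [result_q.get() for _ in tasks]
    for p in procs:
        p.join()
    return dict(results)

if __name__ == "__main__":
    print(master(range(10)))
```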

267 citations


Patent
Linda Luu
27 Dec 2000
TL;DR: In this article, a technique for remote installation of application software from a source computer system to one or more target computer systems coupled to a Local Area Network (LAN) is presented.
Abstract: A technique for the remote installation of application software from a source computer system to one or more target computer systems (workstations) coupled to a Local Area Network (LAN). The present invention allows a LAN Administrator to install application software on a user's workstation automatically, at any time, without the user's intervention. The state of (i.e. a snapshot of) the LAN Administrator's system before and after the installation of the application software is captured and an installation package is built. Installation on the user workstations is then scheduled. For installation, the installation package is transmitted to the user workstation, where an install program carries out commands in the installation package for installing the application software.
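A simplified sketch of the snapshot-and-diff step described above: capture the administrator system's file state before and after the installation and package the difference for replay on the target workstations. The paths and the hashing choice are assumptions for illustration.

```python
# Snapshot the file system before and after an install, then diff the snapshots.
import hashlib
from pathlib import Path

def snapshot(root: Path) -> dict:
    """Map each file under root to a content hash."""
    return {
        str(p.relative_to(root)): hashlib.md5(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def build_install_package(before: dict, after: dict) -> dict:
    """Files that were added or changed by the installation."""
    return {
        path: digest
        for path, digest in after.items()
        if before.get(path) != digest
    }

# Usage sketch:
# before = snapshot(Path("C:/"))   # state of the admin system before installing
# ... run the application installer ...
# after = snapshot(Path("C:/"))
# package = build_install_package(before, after)  # ship to target workstations
```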

251 citations


Proceedings ArticleDOI
01 Nov 2000
TL;DR: The authors' MicroGrid simulation tools enable Globus applications to be run in arbitrary virtual grid resource environments, enabling broad experimentation and validation experiments show that the MicroGrid can match actual experiments within a few percent.
Abstract: The complexity and dynamic nature of the Internet (and the emerging Computational Grid) demand that middleware and applications adapt to the changes in configuration and availability of resources. However, to the best of our knowledge there are no simulation tools which support systematic exploration of dynamic Grid software (or Grid resource) behavior. We describe our vision and initial efforts to build tools to meet these needs. Our MicroGrid simulation tools enable Globus applications to be run in arbitrary virtual grid resource environments, enabling broad experimentation. We describe the design of these tools, and their validation on micro-benchmarks, the NAS parallel benchmarks, and an entire Grid application. These validation experiments show that the MicroGrid can match actual experiments within a few percent (2 percent to 4 percent).

229 citations


Proceedings ArticleDOI
12 Oct 2000
TL;DR: By developing a system for remotely monitoring human behavior during daily life at home, this project aims to extend the telecare solution to a larger population of elderly people who are presently forced to live in hospices.

Abstract: The authors developed a system for remotely monitoring human behavior during daily life at home, to improve security and quality of life. Activity was monitored through infrared position sensors and magnetic switches. For fall detection, the authors had to develop a smart sensor. Local communications were performed using RF wireless links to reduce cabling and to allow mobility of the person. Application software processes the data locally and also transmits it remotely over the network. This project aims to extend the telecare solution to a larger population of elderly people who are presently forced to live in hospices.

Patent
26 Jun 2000
TL;DR: In this article, an apparatus for generating computer models of individuals is provided, comprising a booth (1) that is connected to a server (2) via the Internet (3); image data of an individual is captured using the booth (1) and a computer model corresponding to the individual is then generated by comparing the captured image data to a stored generic model.

Abstract: Apparatus for generating computer models of individuals is provided, comprising a booth (1) that is connected to a server (2) via the Internet (3). Image data of an individual is captured using the booth (1), and a computer model corresponding to the individual is then generated by comparing the captured image data to a stored generic model. Data representative of a generated model is then transmitted to the server (2), where it is stored. Stored data can then be retrieved via the Internet using a personal computer (4) having application software stored therein. The application software on the personal computer (4) can then utilise the data to create graphic representations of the individual in any one of a number of poses.

Proceedings ArticleDOI
01 Nov 2000
TL;DR: Umpire is described, a new tool for detecting programming errors at runtime in message passing applications using the MPI profiling layer, and an evaluation on a variety of applications demonstrates the effectiveness of this approach.
Abstract: As evidenced by the popularity of MPI (Message Passing Interface), message passing is an effective programming technique for managing coarse-grained concurrency on distributed computers. Unfortunately, debugging message-passing applications can be difficult. Software complexity, data races, and scheduling dependencies can make programming errors challenging to locate with manual, interactive debugging techniques. This article describes Umpire, a new tool for detecting programming errors at runtime in message passing applications. Umpire monitors the MPI operations of an application by interposing itself between the application and the MPI runtime system using the MPI profiling layer. Umpire then checks the application's MPI behavior for specific errors. Our initial collection of programming errors includes deadlock detection, mismatched collective operations, and resource exhaustion. We present an evaluation on a variety of applications that demonstrates the effectiveness of this approach.
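Umpire interposes on MPI through the C profiling (PMPI) layer; the toy Python model below illustrates just one of its checks, mismatched collective operations, by comparing the sequence of collectives each rank recorded. The class and the tiny trace are invented for illustration.

```python
# Toy collective-matching check: every rank must issue the same sequence of
# collective operations on a communicator.
from collections import defaultdict

class CollectiveChecker:
    def __init__(self, num_ranks: int):
        self.num_ranks = num_ranks
        self.trace = defaultdict(list)   # rank -> list of collective names

    def record(self, rank: int, op: str) -> None:
        self.trace[rank].append(op)

    def check(self) -> list:
        """Return human-readable mismatch reports."""
        errors = []
        reference = self.trace[0]
        for rank in range(1, self.num_ranks):
            for i, (a, b) in enumerate(zip(reference, self.trace[rank])):
                if a != b:
                    errors.append(f"call {i}: rank 0 did {a}, rank {rank} did {b}")
        return errors

checker = CollectiveChecker(num_ranks=2)
checker.record(0, "MPI_Bcast")
checker.record(0, "MPI_Barrier")
checker.record(1, "MPI_Bcast")
checker.record(1, "MPI_Reduce")   # mismatched collective
print(checker.check())
```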

Journal Article
TL;DR: The use of a recommender system to enable continuous knowledge acquisition and individualized tutoring of application software across an organization is described, and the results of a year-long naturalistic inquiry into application usage patterns, based on logging users' actions, are presented.
Abstract: We describe the use of a recommender system to enable continuous knowledge acquisition and individualized tutoring of application software across an organization. Installing such systems will result in the capture of evolving expertise and in organization-wide learning (OWL). We present the results of a year-long naturalistic inquiry into application usage patterns, based on logging users' actions. We analyze the data to develop user models, individualized expert models, confidence intervals, and instructional indicators. We show how this information could be used to tutor users.

Introduction

Recommender systems typically help people select products, services, and information. A novel application of recommender systems is to help individuals select 'what to learn next' by recommending knowledge that their peers have found useful. For example, people typically utilize only a small portion of a software application's functionality (one study shows users applying less than 10% of Microsoft Word's commands). A recommender system can unobtrusively note which portions of an application's functionality the members of an organization find useful, group the organization's members into sets of similar users, or peers (based on similar demographic factors such as job title, or similarities in command usage patterns), and produce recommendations for learning that are specific to the individual in the context of his/her organization, peers, and current activities.

This paper reports research on a recommender system (Resnick & Varian, 1997) intended to promote gradual but perpetual performance improvement in the use of application software. We present our rationale, an analysis of a year's collected data, and a vision of how users might learn from the system. We have worked with one commercial application, and believe our approach is generally applicable. The research explores the potential of a new sort of user modeling based on summaries of logged user data. This method of user modeling enables the observation of a large number of users over a long period of time, enables concurrent development of student models and individualized expert models, and applies recommender system techniques to on-the-job instruction. Earlier work is reported in Linton (1990) and Linton (1996); Kay and Thomas (1995) and Thomas (1996) report on related work with a text editor in an academic environment.

A recommender system to enhance the organization-wide learning of application software is a means of promoting organizational learning (Senge, 1990). By pooling and sharing expertise, recommender systems augment and assist the natural social process of people learning from each other. This approach is quite distinct from systems, such as Microsoft's Office Assistant, which recommend new commands based on their logical equivalence to the less-efficient way a user may be performing a task. The system presented here will (1) capture evolving expertise from a community of practice (Lave & Wenger, 1991), (2) support less-skilled members of the community in acquiring expertise, and (3) serve as an organizational memory for the expertise it captures. "In many workplaces ... mastery is in short supply and what is required is a kind of collaborative bootstrapping of expertise." (Eales & Welch, 1995, p. 100)

The main goal of the approach taken in this work is to continuously improve the performance of application users by providing individualized modeling and coaching based on the automated comparison of user models to expert models. The system described here would be applicable in any situation where a number of application users perform similar tasks on networked computers. In the remainder of this section we describe the logging process and make some initial remarks about modeling and coaching software users. We then present an analysis of the data we have logged and our process of creating individual models of expertise. In the final section we describe further work and close with a summary.

Each time a user issues a Word command such as Cut or Paste, the command is written to the log, together with a time stamp, and then executed. The logger, called OWL for Organization-Wide Learning, comes up when the user opens Word; it creates a separate log for each file the user edits, and when the user quits Word, it sends the logs to a server where they are periodically loaded into a database for analysis. A toolbar button labeled 'OWL is ON' (or OFF) informs users of OWL's state and gives them control.

Individual models of expertise

We have selected the Edit commands for further analysis. A similar analysis could be performed for each type of command. The first of the three tables in Figure 1 presents data on the Edit commands for each of our 16 users. In the table, each column contains data for one user and each row contains data for one command (Edit commands that were not used have been omitted). A cell, then, contains the count of the number of times the individual has used the command. The columns have been sorted so that the person using the most commands is on the left and the person using the fewest is on the right. Similarly, the rows have been sorted so that the most frequently used command is in the top row and the least frequently used command is in the bottom row. Consequently the cells with the largest values are in the upper left corner and those with the smallest values are in the lower right corner. The table has been shaded to make the contours of the numbers visible: the largest numbers have the darkest shading and the smallest numbers have no shading; each shade indicates an order of magnitude.

Inspection of the first table reveals that users tend to acquire the Edit commands in a specific sequence, i.e., those that know fewer commands know a subset of the commands used by their more-knowledgeable peers. If, instead, users acquired commands in an idiosyncratic order, the data would not sort as it does. And if they acquired commands in a manner that strongly reflected their job tasks or their writing tasks, there would be subgroups of users who shared common commands. Also, the more-knowledgeable users do not replace commands learned early on with more powerful commands, but instead keep adding new commands to their repertoire. Finally, the sequence of command acquisition corresponds to the commands' frequency of use. While this last point is not necessarily a surprise, neither is it a given. There are some peaks and valleys in the data as sorted, and a fairly rough edge where commands transition from being used rarely to being used not at all.
These peaks, valleys, and rough edges may represent periods of repetitive tasks or lack of data, respectively, or they may represent overdependence on some command that has a more powerful substitute or ignorance of a command or of a task (a sequence of commands) that uses the command. In other words, some of the peaks, valleys, and rough edges may represent opportunities to learn more effective use of the software. In the second table in Figure 1 the data have been smoothed. The observed value in each cell has been replaced by an expected value, the most likely value for the cell, using a method taken from statistics, based on the row, column and grand totals for the table (Howell, 1982). In the case of software use, the row effect is the overall relative utility of the command (for all users) and the column effect is the usage of related commands by the individual user. The expected value is the usage the command would have if the individual used it in a manner consistent with his/her usage of related commands and consistent with his/her peers’ usage of the command. These expected values are a new kind of expert model, one that is unique to each individual and each moment in time; the expected value in each cell reflects the individual’s use of related commands, and one’s peers’ use of the same command. The reason for differences between observed and expected values, between one’s actual and expert model, might have several explanations such as the individual’s tasks, preferences, experiences, or hardware, but we are most interested when the difference indicates the lack of knowledge or skill.
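The smoothing step described above is the standard contingency-table estimate: each observed count is replaced by row_total * column_total / grand_total. A small Python sketch with invented counts:

```python
# Replace each observed count in the user-by-command table with the expected
# value implied by the row, column, and grand totals.
usage = {               # command -> per-user observed counts (invented)
    "EditPaste": [90, 40, 12],
    "EditCopy":  [70, 30, 10],
    "EditFind":  [10,  4,  0],
}

users = range(len(next(iter(usage.values()))))
row_totals = {cmd: sum(counts) for cmd, counts in usage.items()}
col_totals = [sum(usage[cmd][u] for cmd in usage) for u in users]
grand_total = sum(row_totals.values())

expected = {
    cmd: [row_totals[cmd] * col_totals[u] / grand_total for u in users]
    for cmd in usage
}

for cmd, values in expected.items():
    # A large gap between observed and expected usage flags a possible
    # learning opportunity for that user.
    print(cmd, [round(v, 1) for v in values])
```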

Proceedings Article
08 Dec 2000
TL;DR: SubDomain is presented: an OS extension designed to provide sufficient security to prevent vulnerability rot in Internet server platforms, and yet simple enough to minimize the performance, administrative, and implementation costs.
Abstract: Internet security incidents have shown that while network cryptography tools like SSL are valuable to Internet service, the hard problem is to protect the server itself from attack. The host security problem is important because attackers know to attack the weakest link, which is vulnerable servers. The problem is hard because securing a server requires securing every piece of software on the server that the attacker can access, which can be a very large set of software for a sophisticated server. Sophisticated security architectures that protect against this class of problem exist, but because they are either complex, expensive, or incompatible with existing application software, most Internet server operators have not chosen to use them. This paper presents SubDomain: an OS extension designed to provide sufficient security to prevent vulnerability rot in Internet server platforms, and yet simple enough to minimize the performance, administrative, and implementation costs. SubDomain does this by providing a least privilege mechanism for programs rather than for users. By orienting itself to programs rather than users, SubDomain simplifies the security administrator's task of securing the server. This paper describes the problem space of securing Internet servers, and presents the SubDomain solution to this problem. We describe the design, implementation, and operation of SubDomain, and provide working examples and performance metrics for services such as HTTP, SMTP, POP, and DNS protected with SubDomain.
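SubDomain is a kernel extension; the Python toy below only illustrates the underlying idea of least privilege per program rather than per user: each program carries a profile of the files it may touch, and everything else is denied. The profiles and paths are invented for illustration.

```python
# Toy per-program least-privilege check: a program may access a path only if
# its profile grants that access mode for a matching path prefix.
PROFILES = {
    "/usr/sbin/httpd": {
        "/var/www": {"read"},
        "/var/log/httpd": {"read", "write"},
    },
    "/usr/sbin/named": {
        "/etc/named.conf": {"read"},
        "/var/named": {"read", "write"},
    },
}

def allowed(program: str, path: str, access: str) -> bool:
    profile = PROFILES.get(program, {})
    return any(
        path == prefix or path.startswith(prefix.rstrip("/") + "/")
        for prefix, modes in profile.items()
        if access in modes
    )

print(allowed("/usr/sbin/httpd", "/var/www/index.html", "read"))   # True
print(allowed("/usr/sbin/httpd", "/etc/shadow", "read"))           # False: outside the profile
```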

Patent
28 Nov 2000
TL;DR: In this article, a power management architecture for an electrical power distribution system, or a portion thereof, is disclosed, which includes multiple intelligent electronic devices (IEDs) distributed throughout the power distribution systems to manage the flow and consumption of power from the system.
Abstract: A power management architecture for an electrical power distribution system, or portion thereof, is disclosed. The architecture includes multiple intelligent electronic devices (“IED's”) distributed throughout the power distribution system to manage the flow and consumption of power from the system. The IED's are linked via a network to back-end servers. Power management application software and/or hardware components operate on the IED's and the back-end servers and inter-operate via the network to implement a power management application. The architecture provides a scalable and cost effective framework of hardware and software upon which such power management applications can operate to manage the distribution and consumption of electrical power by one or more utilities/suppliers and/or customers which provide and utilize the power distribution system.

Proceedings ArticleDOI
26 Mar 2000
TL;DR: The architecture for a hierarchical collector of network information is presented, a prototype implementation is described, and preliminary measurements characterizing its operation are presented.
Abstract: Network-aware applications, i.e., applications that adapt to network conditions in an application-specific way, need both static and dynamic information about the network to be able to adapt intelligently to network conditions. The CMU Remos interface gives applications access to a wide range of information in a network-independent fashion. Remos uses a logical topology to capture the network information that is relevant to applications in a concise way. However, collecting this information efficiently is challenging for several reasons: networks use diverse technologies and can be very large (Internet); applications need diverse information; and network managers might have concerns about leaking confidential information. In this paper we present an architecture for a hierarchical collector of network information. The decentralized architecture relies on data collectors that collect information on individual subnets; a data collector can gather information in a manner that is appropriate for its subnet and can control the distribution of the information. For application queries that involve multiple subnets, we use a set of master collectors to partition requests, distribute subrequests to individual data collectors, and combine the results. Collectors cache recent network information to improve efficiency and responsiveness. This paper presents and justifies the collector architecture, describes a prototype implementation, and presents preliminary measurements characterizing its operation.
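A schematic sketch of the hierarchical arrangement described above (not the Remos API): a master collector partitions a query across per-subnet data collectors, each of which measures its own subnet and caches recent answers. The hosts, metrics, and subnet keying are placeholders.

```python
# Hierarchical collection sketch: master collector + per-subnet collectors with caching.
import time

class DataCollector:
    """Collects information for a single subnet, with a small TTL cache."""
    def __init__(self, subnet: str, ttl: float = 5.0):
        self.subnet, self.ttl, self._cache = subnet, ttl, {}

    def query(self, host: str) -> dict:
        entry = self._cache.get(host)
        if entry and time.time() - entry["ts"] < self.ttl:
            return entry["data"]                 # fresh cached measurement
        data = {"host": host, "bandwidth_mbps": 100.0, "latency_ms": 1.2}  # stand-in measurement
        self._cache[host] = {"ts": time.time(), "data": data}
        return data

class MasterCollector:
    """Partitions multi-subnet queries and combines the results."""
    def __init__(self, collectors: dict):
        self.collectors = collectors             # subnet prefix -> DataCollector

    def query(self, hosts: list) -> list:
        results = []
        for host in hosts:
            subnet = host.rsplit(".", 1)[0]      # crude subnet key, for the sketch only
            results.append(self.collectors[subnet].query(host))
        return results

master = MasterCollector({
    "10.0.1": DataCollector("10.0.1"),
    "10.0.2": DataCollector("10.0.2"),
})
print(master.query(["10.0.1.5", "10.0.2.7"]))
```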

Proceedings ArticleDOI
10 Apr 2000
TL;DR: This paper presents a new framework for constructing mobile agents that introduces the notion of agent hierarchy and inter-agent migration and thus allows a group of mobile agents to be dynamically assembled into a single mobile agent.
Abstract: This paper presents a new framework for constructing mobile agents. The framework introduces the notion of agent hierarchy and inter-agent migration and thus allows a group of mobile agents to be dynamically assembled into a single mobile agent. It provides a powerful method to construct a distributed application, in particular a large-scale mobile application. To demonstrate how to exploit our framework, we construct an extensible and portable mobile agent system based on the framework. The system is implemented as a collection of mobile agents and thus can dynamically change and evolve its functions by migrating agents that offer the functions. Also, mobile agent-based applications running on the system can naturally inherit the extensibility and adaptability of the system.

Journal ArticleDOI
C. Zheng, C.L. Thompson
TL;DR: To help PA-RISC (precision architecture-reduced instruction set computing) users migrate to its upcoming IA-64 systems, Hewlett-Packard has developed the Aries software emulator, combining fast interpretation and dynamic translation.
Abstract: Making the transition to a new architecture is never easy. Users want to keep running their favorite applications as they normally would, without stopping to adapt them to a different platform. For some legacy applications the problem is more severe: without all the source code, it is well-nigh impossible to recompile the application for a new platform. Binary translation helps this transition process because it automatically converts the binary code from one instruction set to another without the need for high-level source code. However, different choices force different trade-offs between some form of interpretation (or emulation) and static translation. Interpretation requires no user intervention, but its performance is slow. Static translation, on the other hand, requires user intervention but provides much better performance. To help PA-RISC (precision architecture-reduced instruction set computing) users migrate to its upcoming IA-64 systems, Hewlett-Packard has developed the Aries software emulator, combining fast interpretation with dynamic translation. The article describes how the system works and outlines its performance characteristics and quality.
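The interpret-then-translate strategy can be illustrated with a toy emulator (this is not Aries): every block starts out interpreted, and once its execution count crosses a threshold a cached "translated" version is used instead. The tiny instruction set and the threshold are invented.

```python
# Toy model of hot-block promotion: interpret first, cache a faster version
# once a block proves hot.
HOT_THRESHOLD = 3

class Emulator:
    def __init__(self):
        self.exec_counts = {}
        self.translated = {}          # block id -> cached fast version

    def run_block(self, block_id: str, source_ops: list):
        if block_id in self.translated:
            return self.translated[block_id]()            # fast "translated" path
        self.exec_counts[block_id] = self.exec_counts.get(block_id, 0) + 1
        result = self._interpret(source_ops)              # slow interpreted path
        if self.exec_counts[block_id] >= HOT_THRESHOLD:
            self.translated[block_id] = lambda: self._interpret(source_ops)
            # (a real translator would emit native code here, not re-interpret)
        return result

    @staticmethod
    def _interpret(ops):
        acc = 0
        for op, val in ops:           # tiny stand-in "instruction set"
            acc = acc + val if op == "add" else acc * val
        return acc

emu = Emulator()
for _ in range(5):
    print(emu.run_block("loop_body", [("add", 2), ("mul", 3)]))
```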

Journal ArticleDOI
TL;DR: The paper provides an overview of the POEMS methodology and illustrates several of its key components, including a library of predefined analytical and simulation component models of the different domains and a knowledge base that describes performance properties of widely used algorithms.
Abstract: The POEMS project is creating an environment for end-to-end performance modeling of complex parallel and distributed systems, spanning the domains of application software, runtime and operating system software, and hardware architecture. Toward this end, the POEMS framework supports composition of component models from these different domains into an end-to-end system model. This composition can be specified using a generalized graph model of a parallel system, together with interface specifications that carry information about component behaviors and evaluation methods. The POEMS Specification Language compiler will generate an end-to-end system model automatically from such a specification. The components of the target system may be modeled using different modeling paradigms and at various levels of detail. Therefore, evaluation of a POEMS end-to-end system model may require a variety of evaluation tools including specialized equation solvers, queuing network solvers, and discrete event simulators. A single application representation based on static and dynamic task graphs serves as a common workload representation for all these modeling approaches. Sophisticated parallelizing compiler techniques allow this representation to be generated automatically for a given parallel program. POEMS includes a library of predefined analytical and simulation component models of the different domains and a knowledge base that describes performance properties of widely used algorithms. The paper provides an overview of the POEMS methodology and illustrates several of its key components. The modeling capabilities are demonstrated by predicting the performance of alternative configurations of Sweep3D, a benchmark for evaluating wavefront application technologies and high-performance, parallel architectures.

Proceedings ArticleDOI
30 Oct 2000
TL;DR: This paper presents a methodology that uses an object-oriented Web Test Model (WTM) to support Web application testing; the model captures both structural and behavioral test artifacts of Web applications and represents the artifacts from the object, behavior, and structure perspectives.
Abstract: In recent years, Web applications have grown rapidly. As Web applications become complex, there is a growing concern about their quality and reliability. In this paper we present a methodology that uses an object-oriented Web Test Model (WTM) to support Web application testing. The test model captures both structural and behavioral test artifacts of Web applications and represents the artifacts from the object, behavior, and structure perspectives. Based on the test model, both structural and behavioral test cases can be derived automatically to ensure the quality of Web applications. Moreover, the model can also be used as a road map to identify change ripple effects and to find cost-effective testing strategies for reducing the test effort required in regression testing.

Proceedings ArticleDOI
01 Jun 2000
TL;DR: The Esprit/OMI-COSY project defines transaction levels to set up the exchange of IPs by separating function from architecture and body-behavior from proprietary interfaces; these transaction levels are supported by the COSY communication IPs presented in this paper.
Abstract: The Esprit/OMI-COSY project defines transaction levels to set up the exchange of IPs by separating function from architecture and body-behavior from proprietary interfaces. These transaction levels are supported by the "COSY communication IPs" presented in this paper. They implement on systems-on-chip the extended Kahn Process Network that is defined in COSY for modeling signal-processing applications. We present a generic implementation and performance model of these system-level communications and we illustrate specific implementations. They set system communications across software and hardware boundaries, and achieve bus independence through the Virtual Component Interface of the VSI Alliance. Finally, we describe the COSY-VCC flow that supports communication refinement from specification, to performance estimation, to implementation.

Proceedings ArticleDOI
01 Nov 2000
TL;DR: The toolkit, the Program Database Toolkit (PDT), is described, focussing on its most important contribution -- its handling of templates -- as well as its use in existing applications.
Abstract: The developers of high-performance scientific applications often work in complex computing environments that place heavy demands on program analysis tools. The developers need tools that interoperate, are portable across machine architectures, and provide source-level feedback. In this paper, we describe a tool framework, the Program Database Toolkit (PDT), that supports the development of program analysis tools meeting these requirements. PDT uses compile-time information to create a complete database of high-level program information that is structured for well-defined and uniform access by tools and applications. PDT’s current applications make heavy use of advanced features of C++, in particular, templates. We describe the toolkit, focussing on its most important contribution -- its handling of templates -- as well as its use in existing applications.

Proceedings ArticleDOI
01 Aug 2000
TL;DR: An approach to building a distributed software component system for scientific and engineering applications that is based on representing Computational Grid services as application-level software components, which provides tools such as registry and directory services, event services and remote component creation.
Abstract: Describes an approach to building a distributed software component system for scientific and engineering applications that is based on representing Computational Grid services as application-level software components. These Grid services provide tools such as registry and directory services, event services and remote component creation. While a service-based architecture for grids and other distributed systems is not new, this framework provides several unique features. First, the public interfaces to each software component are described as XML documents. This allows many adaptors and user interfaces to be generated from the specification dynamically. Second, this system is designed to exploit the resources of existing Grid infrastructures like Globus and Legion, and commercial Internet frameworks like e-speak. Third, and most important, the component-based design extends throughout the system. Hence, tools such as application builders, which allow users to select components, start them on remote resources, and connect and execute them, are also interchangeable software components. Consequently, it is possible to build distributed applications using a graphical "drag-and-drop" interface, a Web-based interface, a scripting language like Python, or an existing tool such as Matlab.

Proceedings ArticleDOI
01 Jun 2000
TL;DR: This work presents the power profiles for a commercial RTOS, μC/OS, running several applications on an embedded system based on the Fujitsu SPARClite processor and illustrates the ways in which application software can be designed to use the RTOS in a power-efficient manner.
Abstract: The increasing complexity and software content of embedded systems has led to the frequent use of system software that helps applications access underlying hardware resources easily and efficiently. In this paper, we analyze the power consumption of real-time operating systems (RTOSs), which form an important component of the system software layer. Despite the widespread use of, and significant role played by, RTOSs in mobile and low-power embedded systems, little is known about their power consumption characteristics. This work presents the power profiles for a commercial RTOS, μC/OS, running several applications on an embedded system based on the Fujitsu SPARClite processor. Our work demonstrates that the RTOS can consume a significant fraction of the system power and, in addition, impact the power consumed by other software components. We illustrate the ways in which application software can be designed to use the RTOS in a power-efficient manner. We believe that this work is a first step towards establishing a systematic approach to RTOS power modeling and optimization.
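The kind of accounting behind such a power profile can be sketched as energy = average power x time per service, summed over calls; the service names and numbers below are invented, not measurements from the paper.

```python
# Attribute energy to each RTOS service from its average power draw and the
# time spent in it, weighted by call counts (all values invented).
service_profile = {
    # service          (avg power in mW, time per call in ms)
    "task_create":     (260.0, 1.8),
    "context_switch":  (240.0, 0.9),
    "semaphore_pend":  (220.0, 0.4),
    "timer_tick":      (230.0, 0.2),
}
call_counts = {"task_create": 4, "context_switch": 1200, "semaphore_pend": 800, "timer_tick": 5000}

energy_uj = {
    svc: power_mw * time_ms * call_counts[svc]   # mW * ms = microjoules
    for svc, (power_mw, time_ms) in service_profile.items()
}
total = sum(energy_uj.values())
for svc, e in sorted(energy_uj.items(), key=lambda kv: -kv[1]):
    print(f"{svc:>15}: {e / 1000:.1f} mJ ({100 * e / total:.0f}% of RTOS energy)")
```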

Patent
15 May 2000
TL;DR: In this paper, the authors propose a method of integrating host application software with data collection devices (e.g., bar code scanners) located on remote, wireless terminals, using a predetermined interface between the host application and the data collection object.
Abstract: A method of integrating host application software with data collection devices (e.g., bar code scanners) located on remote, wireless terminals. A data collection object executes on the host computer, using a predetermined interface between the host application software and the data collection object. That interface, and the communications between the host application software and the data collection object, are configured so that to the host application software the data collection device appears to be local hardware on the host computer. The data collection object creates and executes threads of execution for controlling operation of the data collection device, with the threads communicating with the remote terminals via a host computer transport layer, the wireless link, and a remote computer transport layer at the remote terminals. A data collection device driver on the remote terminal receives communications from the data collection object, and returns information to the data collection object, over the remote computer transport layer, wireless link, and host computer transport layer.

Patent
08 Mar 2000
TL;DR: In this article, the authors propose a method of integrating host application software with data collection devices (e.g., bar code scanners) located on remote, wireless terminals, using a predetermined interface between the host application and the data collection object.
Abstract: A method of integrating host application software with data collection devices (e.g., bar code scanners) located on remote, wireless terminals. A data collection object executes on the host computer, using a predetermined interface between the host application software and the data collection object. That interface, and the communications between the host application software and the data collection object, are configured so that to the host application software the data collection device appears to be local hardware on the host computer. The data collection object creates and executes threads of execution for controlling operation of the data collection device, with the threads communicating with the remote terminals via a host computer transport layer, the wireless link, and a remote computer transport layer at the remote terminals. A data collection device driver on the remote terminal receives communications from the data collection object, and returns information to the data collection object, over the remote computer transport layer, wireless link, and host computer transport layer.

Patent
09 Aug 2000
TL;DR: In this paper, a control system for programming an application program controlling a factory automation device on a communication network having a programming device operably connected to the communication network is presented, where a program package is embedded in the programming device and is used for creating and editing the application program.
Abstract: A control system for programming an application program controlling a factory automation device on a communication network having a programming device operably connected to the communication network. A program package is embedded in the programming device and is used for creating and editing the application program. At least one web page is resident on the programming device and operably connected to the program package. The web page is accessible to a user using a web browser to edit the application program controlling the factory automation device.

Proceedings ArticleDOI
15 Mar 2000
TL;DR: A QuO configuration language is described, as well as the specific configuration needs of particular QoS properties-real-time, security, and dependability-and the support the authors provide for them.
Abstract: Recent work in opening up distributed object systems to make them suitable for applications needing quality of service control has had the side effect of increasing the complexity in setting up, configuring, and initializing such applications. Configuration of distributed applications is more complicated than that of non-distributed applications, simply because of the heterogeneous and distributed nature of the application's components. CORBA and other distributed object middleware simplifies the configuration of distributed object applications, but hides much of the information and control necessary to achieve quality of service (QoS). We describe the techniques and tools that we have developed within our Quality Objects (QuO) framework for simplifying the configuration of distributed applications with QoS attributes. We describe a QuO configuration language, as well as the specific configuration needs of particular QoS properties-real-time, security, and dependability-and the support we provide for them.

Proceedings ArticleDOI
01 Jun 2000
TL;DR: A case study for the design, programming and usage of a reconfigurable system-on-chip, MorphoSys, which is targeted at computation-intensive applications.
Abstract: In this paper, we present a case study for the design, programming and usage of a reconfigurable system-on-chip, MorphoSys, which is targeted at computation-intensive applications. This 2-million transistor design combines a reconfigurable array of cells with a RISC processor core and a high bandwidth memory interface. The system architecture, software tools including a scheduler for reconfigurable systems, and performance analysis (with impressive speedups) for target applications are described.