
Showing papers on "Application software published in 1999"


Patent
30 Nov 1999
TL;DR: A software development method and system having a suite of graphical customization tools that enables developers to rapidly configure all aspects of the underlying application software, including the look-and-feel, behavior, and workflow as mentioned in this paper.
Abstract: A software development method and system having a suite of graphical customization tools that enables developers to rapidly configure all aspects of the underlying application software, including the look-and-feel, behavior, and workflow. This is accomplished without modifying application source code, base objects, or SQL. The sophisticated repository management capabilities of the method and system of our invention allow teams of developers to work efficiently on configuring applications. The application upgrader provides an automated process to upgrade the customizations to future product releases, thus protecting the investment in customization. The ease, comprehensiveness, scalability, and upgradeability of the customization process help reduce the total lifecycle cost of customizing enterprise applications.

1,288 citations


Proceedings ArticleDOI
30 Aug 1999
TL;DR: A clustering tool called Bunch, which creates a system decomposition automatically by treating clustering as an optimization problem, is developed, and a feature that enables the integration of designer knowledge about the system structure into an otherwise fully automatic clustering process is described.
Abstract: Software systems are typically modified in order to extend or change their functionality, improve their performance, port them to different platforms, and so on. For developers, it is crucial to understand the structure of a system before attempting to modify it. The structure of a system, however, may not be apparent to new developers, because the design documentation is non-existent or, worse, inconsistent with the implementation. This problem could be alleviated if developers were somehow able to produce high-level system decomposition descriptions from the low-level structures present in the source code. We have developed a clustering tool called Bunch that creates a system decomposition automatically by treating clustering as an optimization problem. The paper describes the extensions made to Bunch in response to feedback we received from users. The most important extension, in terms of the quality of results and execution efficiency, is a feature that enables the integration of designer knowledge about the system structure into an otherwise fully automatic clustering process. We use a case study to show how our new features simplified the task of extracting the subsystem structure of a medium size program, while exposing an interesting design flaw in the process.

507 citations
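
The clustering-as-optimization idea is easy to see in miniature. The sketch below hill-climbs over partitions of a toy module-dependency graph, rewarding intra-cluster edges and penalizing inter-cluster ones; the scoring function, the search move, and all names are our illustration, not Bunch's actual modularization-quality measure or search algorithms.

```python
# Hill-climbing decomposition of a module-dependency graph (illustrative).
import random

def score(partition, edges):
    # Reward intra-cluster edges (cohesion), penalize inter-cluster ones.
    intra = sum(1 for a, b in edges if partition[a] == partition[b])
    return 2 * intra - len(edges)

def hill_climb(modules, edges, clusters=3, steps=5_000, seed=0):
    rng = random.Random(seed)
    part = {m: rng.randrange(clusters) for m in modules}
    best = score(part, edges)
    for _ in range(steps):
        m = rng.choice(modules)
        old = part[m]
        part[m] = rng.randrange(clusters)
        s = score(part, edges)
        if s >= best:
            best = s          # keep a move that is no worse
        else:
            part[m] = old     # revert a worsening move
    return part, best

# Toy dependency graph with two natural subsystems and one cross link.
mods = ["parser", "lexer", "ast", "db", "query", "index"]
deps = [("parser", "lexer"), ("parser", "ast"), ("lexer", "ast"),
        ("db", "query"), ("db", "index"), ("query", "index"),
        ("ast", "query")]
print(hill_climb(mods, deps))
```

Designer knowledge of the kind the paper adds would enter here as constraints, for example pinning certain modules to a cluster before the search starts.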


Patent
16 Jul 1999
TL;DR: In this article, the authors proposed a method and apparatus for providing an automatically upgradeable software application that includes targeted advertising based upon demographics and user interaction with the computer, including a display region used for banner advertising.
Abstract: A method and apparatus for providing an automatically upgradeable software application includes targeted advertising based upon demographics and user interaction with the computer. The software application includes a display region used for banner advertising that is downloaded over a network such as the Internet. The software application is accessible from a server via the network and demographic information on the user is acquired by the server and used for determining what advertising will be sent to the user. The software application further targets the advertisements in response to normal user interaction with the computer. Data associated with each advertisement is used by the software application in determining when a particular advertisement is to be displayed. This includes the specification of certain programs that the user may have so that, when the user runs the program (e.g., a spreadsheet program), a relevant advertisement will be displayed (e.g., an advertisement for a stock brokerage). This provides two-tiered, real-time targeting of advertising—both demographically and reactively. The software application includes programming that accesses the server to determine if one or more components of the application need upgrading. If so, the components can be downloaded and installed without further action by the user. A distribution tool is provided for software distribution and upgrading over the network. Also provided is a user profile that is accessible to any computer on the network. Furthermore, multiple users of the same computer can possess Internet web resources and files that are personalized, maintained and organized.

504 citations
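
The "two-tiered, real-time targeting" reduces to a demographic filter followed by a reactive match against the program the user is running. A minimal sketch follows; the ad inventory and matching rules are invented for illustration.

```python
# Hypothetical sketch of two-tier ad targeting: demographic filter first,
# then a reactive match on the running program. All data is invented.
ads = [
    {"id": "brokerage", "min_age": 25, "trigger_app": "spreadsheet"},
    {"id": "game",      "min_age": 13, "trigger_app": None},
]

def pick_ad(user, running_app):
    # Demographic tier: keep only ads whose constraints the user satisfies.
    eligible = [a for a in ads if user["age"] >= a["min_age"]]
    # Reactive tier: prefer an ad keyed to the program currently in use.
    for a in eligible:
        if a["trigger_app"] == running_app:
            return a["id"]
    return eligible[0]["id"] if eligible else None

print(pick_ad({"age": 30}, "spreadsheet"))   # -> brokerage
```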


Journal ArticleDOI
Joseph Mitola
TL;DR: Analysis of the topological properties of the software radio architecture yields a layered distributed virtual machine reference model and a set of architecture design principles that may be useful in defining interfaces among hardware, middleware, and higher level software components that are needed for cost-effective software reuse.
Abstract: As the software radio makes its transition from research to practice, it becomes increasingly important to establish provable properties of the software radio architecture on which product developers and service providers can base technology insertion decisions. Establishing provable properties requires a mathematical perspective on the software radio architecture. This paper contributes to that perspective by critically reviewing the fundamental concept of the software radio, using mathematical models to characterize this rapidly emerging technology in the context of similar technologies like programmable digital radios. The software radio delivers dynamically defined services through programmable processing capacity that has the mathematical structure of the Turing machine. The bounded recursive functions, a subset of the total recursive functions, are shown to be the largest class of Turing-computable functions for which software radios exhibit provable stability in plug-and-play scenarios. Understanding the topological properties of the software radio architecture promotes plug-and-play applications and cost-effective reuse. Analysis of these topological properties yields a layered distributed virtual machine reference model and a set of architecture design principles for the software radio. These criteria may be useful in defining interfaces among hardware, middleware, and higher level software components that are needed for cost-effective software reuse.

386 citations


Patent
Thomas Joshua Shafron
28 Oct 1999
TL;DR: In this article, a method of dynamically controlling and displaying an Internet browser interface, and a dynamically controllable Internet browser interface, are presented, where a browser interface may be customized using a controlling software program that may be provided by an Internet content provider or an ISP, or that may reside on an Internet user's computer.
Abstract: The present invention is directed to a method of dynamically controlling and displaying an Internet browser interface, and to a dynamically controllable Internet browser interface. In accordance with the present invention, a browser interface may be customized using a controlling software program that may be provided by an Internet content provider, an ISP, or that may reside on an Internet user's computer. The controlling software program enables the Internet user, the content provider, or the ISP to customize and control the information and/or functionality of a user's browser and browser interface.

271 citations


Journal ArticleDOI
20 Oct 1999
TL;DR: The methodology provides a means to quickly build models of architectures at an abstract level, to easily map applications, modeled as Kahn Process Networks, onto these architecture models, and to analyze the performance of the resulting system by simulation.
Abstract: We present a methodology for the exploration of signal processing architectures at the system level. The methodology, named SPADE, provides a means to quickly build models of architectures at an abstract level, to easily map applications, modeled as Kahn Process Networks, onto these architecture models, and to analyze the performance of the resulting system by simulation. The methodology distinguishes between applications and architectures, and uses a trace-driven simulation technique for co-simulation of application models and architecture models. As a consequence, architecture models need not be functionally complete to be used for performance analysis while data dependent behavior is still handled correctly. We have used the methodology for the exploration of architectures and mappings of an MPEG-2 decoder application.

229 citations
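
The application side of the methodology, Kahn Process Networks, reduces to processes that communicate only over FIFO channels with blocking reads; this determinism is what makes trace-driven co-simulation sound. A minimal illustration follows (our sketch, not SPADE's actual API).

```python
# Minimal Kahn-Process-Network-style pipeline: each process reads its
# input FIFO (blocking) and writes its output FIFO. Illustrative only.
import threading, queue

def producer(out_ch, n=5):
    for i in range(n):
        out_ch.put(i)            # emit a token
    out_ch.put(None)             # end-of-stream marker

def doubler(in_ch, out_ch):
    while (tok := in_ch.get()) is not None:    # blocking read
        out_ch.put(tok * 2)
    out_ch.put(None)

def consumer(in_ch, trace):
    while (tok := in_ch.get()) is not None:
        trace.append(tok)        # a trace like this could later drive an
                                 # architecture-level performance model

a, b, trace = queue.Queue(), queue.Queue(), []
threads = [threading.Thread(target=producer, args=(a,)),
           threading.Thread(target=doubler, args=(a, b)),
           threading.Thread(target=consumer, args=(b, trace))]
for t in threads: t.start()
for t in threads: t.join()
print(trace)                     # [0, 2, 4, 6, 8]
```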


Patent
29 Mar 1999
TL;DR: In this paper, the authors present a web-based application that enables users to proactively manage and accurately predict strategic software development and deliverables using distributed data collectors, an application server and a browser interface.
Abstract: A computer software application in the form of a software management and task completion and prediction apparatus by which project completion can be ascertained and management of a project can be maintained with high efficiency and accuracy. The software application is a web-based application which enables users to proactively manage and accurately predict strategic software development and deliverables. This application and delivery management system comprises distributed data collectors, an application server and a browser interface. The data collectors automatically gather data already being generated by various tools within the organization, such as scheduling, defect tracking, requirements management and software quality tools. This data is constantly being collected and fed into the application server, thereby providing objective and updated information. New tools can be easily added without disrupting operations. The data collected is fed into the application server, which is the brain of the apparatus. The application server analyzes the data collected by the data collectors to generate a statistically significant probability curve. This curve is then compared to the original planned schedule of product delivery to determine if the project is meeting its targets. Based upon this comparison, the software application predicts a probable delivery date based upon the various inputs and variables from the development process. In addition, the application server also generates early warning alerts as needed and indicated. At such time as the software apparatus identifies a potential problem, the alerts automatically inform the user, so as to mitigate a crisis and assist with resolution of the problem. The alerts are communicated to the designated user by e-mail.

218 citations
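
The prediction step can be approximated in miniature: combine per-task estimates into a completion-time distribution and compare it against the planned date. The triangular-estimate model and all numbers below are illustrative assumptions, not the patent's actual statistics.

```python
# Hedged sketch: Monte-Carlo schedule prediction with an on-time check.
import random, statistics

def simulate(tasks, trials=10_000, seed=1):
    rng = random.Random(seed)
    return [sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)
            for _ in range(trials)]

tasks = [(5, 8, 14), (3, 4, 9), (10, 12, 20)]  # (best, likely, worst) days
totals = simulate(tasks)
planned = 26                                   # planned delivery, in days
p_on_time = sum(t <= planned for t in totals) / len(totals)
print(f"median finish: {statistics.median(totals):.1f} days, "
      f"P(on time) = {p_on_time:.0%}")
# An early-warning alert (e.g. an e-mail) could fire when p_on_time
# drops below a threshold.
```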


Proceedings ArticleDOI
01 Nov 1999
TL;DR: A probabilistic model and a reliability analysis technique applicable to high-level designs are introduced and used to identify critical components and critical component interfaces, and to investigate the sensitivity of the application reliability to changes in the reliabilities of components and their interfaces.
Abstract: Software designers are motivated to utilize off-the-shelf software components for rapid application development. Such applications are expected to have high reliability as a result of deploying trusted components. The claims of high reliability need further investigation based on reliability analysis techniques that are applicable to component-based applications. This paper introduces a probabilistic model and a reliability analysis technique that is applicable to high-level designs. The technique is named scenario-based reliability analysis (SBRA). SBRA is specific to component-based software whose analysis is strictly based on execution scenarios. Using scenarios, we construct a probabilistic model named a "component-dependency graph" (CDG). CDGs are directed graphs that represent components, component reliabilities, link and interface reliabilities, transitions and transition probabilities. In CDGs, component interfaces and link reliabilities are treated as first-class elements of the model. Based on CDGs, an algorithm is presented to analyze the reliability of the application as the function of reliabilities of its components and interfaces. A case study illustrates the applicability of the algorithm. The SBRA is used to identify critical components and critical component interfaces, and to investigate the sensitivity of the application reliability to changes in the reliabilities of components and their interfaces.

200 citations
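
The heart of the CDG computation can be sketched as a weighted traversal: the expected reliability from a node is the sum, over its outgoing transitions, of transition probability times component, interface, and downstream reliabilities. The graph and numbers below are invented, and the paper's algorithm handles richer scenario structure.

```python
# Simplified scenario-based reliability over a component-dependency graph.
def reliability(cdg, comp_rel, node, terminal="exit"):
    if node == terminal:
        return 1.0
    return sum(prob * comp_rel[node] * iface_rel *
               reliability(cdg, comp_rel, nxt)
               for prob, iface_rel, nxt in cdg[node])

comp_rel = {"ui": 0.999, "logic": 0.995, "db": 0.99}
cdg = {  # node -> [(transition prob, interface reliability, next node)]
    "ui":    [(1.0, 0.999, "logic")],
    "logic": [(0.7, 0.998, "db"), (0.3, 1.0, "exit")],
    "db":    [(1.0, 0.997, "exit")],
}
print(f"application reliability = {reliability(cdg, comp_rel, 'ui'):.4f}")
# Re-running with perturbed comp_rel values gives the kind of sensitivity
# analysis the abstract describes.
```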


Proceedings ArticleDOI
16 May 1999
TL;DR: In the last decade, the paradigm of ubiquitous computing has emerged, and three features are found to be common across many ubiquitous computing applications: transparent interfaces that provide appropriate alternatives to the desktop-bound traditional graphical user interface, the ability to modify the behavior of an application based on knowledge of its context of use, and the ability to capture live experiences for later recall.
Abstract: In the last decade, we have experienced the advent of the paradigm of ubiquitous computing, with the goal of making computational services so pervasive throughout an environment that they become transparent to the human user. Research in ubiquitous computing raises many challenging issues for computer science in general, but successful research in ubiquitous computing requires the deployment of applications that can survive everyday use, and this in itself presents a great software engineering challenge. In our experience, we have found three features common across many ubiquitous computing applications: transparent interfaces that provide appropriate alternatives to the desktop-bound traditional graphical user interface, the ability to modify the behavior of an application based on knowledge of its context of use, and the ability to capture live experiences for later recall. Building ubiquitous computing applications with these features raises software engineering problems in toolkit design, software structuring for separation of concerns, and component integration. We clarify these problems and discuss our approaches towards their solution.

185 citations


Journal ArticleDOI
TL;DR: A standard model for describing the structure of research article introductions, the CARS (Create A Research Space) model, is evaluated in terms of how well it can be applied to 12 articles which have received "best paper" awards in the field of software engineering.
Abstract: A standard model for describing the structure of research article introductions, the CARS (Create A Research Space) model, is evaluated in terms of how well it can be applied to 12 articles which have received "best paper" awards in the field of software engineering. The results indicate that, although the model adequately describes the main framework of the introductions, a number of important features are not accounted for, in particular: an extensive review of background literature, the inclusion of many definitions and examples, and an evaluation of the research in terms of application or novelty of the results.

185 citations


Patent
20 Oct 1999
TL;DR: In this paper, a software installation and recovery system provides an initial bootstrap sequence of instructions that initializes the low-level parameters of the client device, initializes the persistent storage system, loads a bootstrap loader from the persistent store into program memory, and passes execution to the bootstrap loader.
Abstract: A software installation and recovery system provides an initial bootstrap sequence of instructions that initializes the low-level parameters of the client device, initializes the persistent storage system, loads a bootstrap loader from the persistent store into program memory, and passes execution to the bootstrap loader. A second stage boot loader locates the operating system in the persistent store, loads the operating system into program memory, and passes execution to the operating system which then performs necessary hardware and software initialization, loads the viewing object database code and other application software from the persistent store, and begins execution of the applications. The persistent store contains at least two partitions for each of the following: the second stage boot loader; the operating system kernel; and the application software. A partition table resides in the boot sector that records an indication for duplicated partitions in which one of the partitions is marked primary and another is marked backup. The invention verifies that each level of software was loaded off of the primary partition. If a load was from the primary partition and the installation at that level was successful, then a successful indication is recorded for that level, otherwise, the backup partition for that level is copied over the primary partition and a failure indication is recorded for that level. Finalizing the installation for the top application level of software may be delayed until all parts of the application environment have been successfully loaded and started.
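
The per-level logic reduces to: load from the primary partition and record success, or else copy the backup over the primary and record failure. Below is a hedged sketch with stubbed partition I/O; all names are illustrative, not the patent's.

```python
# Per-level primary/backup check, with partition I/O stubbed out.
def load_level(level, table, read_and_start, copy_partition):
    primary, backup = table[level]["primary"], table[level]["backup"]
    if read_and_start(primary):              # loaded off the primary OK?
        table[level]["status"] = "success"
        return True
    copy_partition(src=backup, dst=primary)  # backup copied over primary
    table[level]["status"] = "failure"
    return False

table = {"boot2":  {"primary": "p0", "backup": "p1"},
         "kernel": {"primary": "p2", "backup": "p3"}}
read_ok = lambda part: part != "p2"          # simulate a bad kernel load
copy = lambda src, dst: print(f"restore: copy {src} over {dst}")
for level in table:
    load_level(level, table, read_ok, copy)
print({lvl: t["status"] for lvl, t in table.items()})
```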

Journal ArticleDOI
TL;DR: The exploration of the 2-D convolver's design space will provide guidelines for the development of a library of DSP-oriented hardware configurations intended to significantly speed up the performance of general DSP processors.
Abstract: In order to make software applications simpler to write and easier to maintain, a software digital signal-processing library that performs essential signal- and image-processing functions is an important part of every digital signal processor (DSP) developer's toolset. In general, such a library provides high-level interfaces and mechanisms; therefore, developers only need to know how to use algorithms, not the details of how they work. Complex signal transformations then become function calls, e.g., C-callable functions. Considering the two-dimensional (2-D) convolver function as an example of great significance for DSPs, this paper proposes to replace this software function by an emulation on a field-programmable gate array (FPGA) initially configured by software programming. Therefore, the exploration of the 2-D convolver's design space will provide guidelines for the development of a library of DSP-oriented hardware configurations intended to significantly speed up the performance of general DSP processors. Based on the specific convolver, and considering operators supported in the library as hardware accelerators, a series of tradeoffs for efficiently exploiting the bandwidth between the general-purpose DSP and accelerators are proposed. In terms of implementation, this paper explores the performance and architectural tradeoffs involved in the design of an FPGA-based 2-D convolution coprocessor for the TMS320C40 DSP microprocessor available from Texas Instruments Incorporated. However, the proposed concept is not limited to a particular processor.
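
For reference, the 2-D convolution being mapped to hardware is the plain nested-loop kernel below, written here in correlation form (kernel unflipped) with "valid" output sizing; a DSP library would expose it as a single C-callable function.

```python
# Reference nested-loop 2-D convolution (correlation form, no flipping).
def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0] * (iw - kw + 1) for _ in range(ih - kh + 1)]
    for y in range(ih - kh + 1):
        for x in range(iw - kw + 1):
            acc = 0
            for j in range(kh):
                for i in range(kw):
                    acc += image[y + j][x + i] * kernel[j][i]
            out[y][x] = acc              # one multiply-accumulate window
    return out

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
k = [[1, 0], [0, -1]]                    # toy 2x2 kernel
print(conv2d(img, k))                    # [[-4, -4], [-4, -4]]
```

The four-deep loop nest and its regular multiply-accumulate structure are exactly what makes the function a good candidate for an FPGA coprocessor.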

Journal ArticleDOI
TL;DR: It’s interesting to look at the decisions that went into the design of Linux, and how the Linux development effort evolved, to see how Linux managed to become something that was not at all part of the original vision.
Abstract: Linux has succeeded not because the original goal was to make it widely portable and widely available, but because it was based on good design principles and a good development model. This strong foundation made portability and availability easier to achieve. Originally Linux was targeted at only one architecture: the Intel 80386 CPU. Today Linux runs on everything from PalmPilots to Alpha workstations; it is the most widely ported operating system available for PCs. If you write a program to run on Linux, then, for a wide range of machines, that program can be “write once, run anywhere.” It’s interesting to look at the decisions that went into the design of Linux, and how the Linux development effort evolved, to see how Linux managed to become something that was not at all part of the original vision. Linux today has achieved many of the design goals that people originally assumed only a microkernel architecture could achieve. When I began to write the Linux kernel, the conventional wisdom was that you had to use a microkernel-style architecture. However, I am a pragmatic person, and at the time I felt that microkernels (a) were experimental, (b) were obviously more complex, and (c) executed notably slower. Speed matters a lot in a real-world operating system, and I found that many of the tricks researchers were developing to speed microkernel processing could just as easily be applied to traditional kernels to accelerate their execution. By constructing a general kernel model drawn from elements common to all typical architectures, the Linux kernel gets many of the portability benefits that otherwise require an abstraction layer, without paying the performance penalty paid by microkernels. By allowing for kernel modules, hardware-specific code can often be confined to a module, keeping the core kernel highly portable. Device drivers are a good example of effective use of kernel modules to keep hardware specifics in the modules.

Patent
15 Jan 1999
TL;DR: In this article, a system and method for controlling an engine or a machine at a remote unit from a central office, which may be a fixed location, contemplates a transmit/receive interface connected to the computer or microprocessor of the engine or machine controller, integrated with a communications module that is configured to establish communications with a World Wide Web server on the Internet.
Abstract: A system and method for controlling an engine or machine at a remote unit from a central office, which may be a fixed location, contemplates a transmit/receive interface connected to the computer or microprocessor of the engine or machine controller. This interface is integrated with a communications module that is configured to establish communications with a World Wide Web server on the Internet. A similar system is established at the location of the central office connected to a computer controlled by a fleet owner/operator, for example. The fleet owner can upload machine control data or machine control application software to an intermediate digital file storage maintained by the Web server via the Internet. This data or application software can be accessed and downloaded at any time by the remote operator without any direct interface or communication with the central office. The remote machine controller includes a data entry device that allows the remote operator to issue commands to the Web server to upload or download information, input password or security information for access to the data in intermediate file storage, or leave messages for the central office. The transmit/receive interface at the remote unit includes means for receiving the downloaded information, determining whether the information is application data or application software, and updating the machine controller accordingly. In this manner, either specific data can be downloaded to modify the performance of the controlled machine, or entirely new application software or modified application software can be downloaded directly to the machine controller.

Proceedings ArticleDOI
16 May 1999
TL;DR: This paper describes a verification method that requires little or no specialized knowledge in model construction and allows us to extract models mechanically from the source of software applications, securing accuracy.
Abstract: Formal verification methods are used only sparingly in software development. The most successful methods to date are based on the use of model checking tools. To use such tools, the user must first define a faithful abstraction of the application (the model), specify how the application interacts with its environment, and then formulate the properties that it should satisfy. Each step in this process can become an obstacle. To complete the verification process successfully often requires specialized knowledge of verification techniques and a considerable investment of time. In this paper we describe a verification method that requires little or no specialized knowledge in model construction. It allows us to extract models mechanically from the source of software applications, securing accuracy. Interface definitions and property specifications have meaningful defaults that can be adjusted when the checking process becomes more refined. All checks can be executed mechanically, even when the application itself continues to evolve. Compared to conventional software testing, the thoroughness of a check of this type is unprecedented.

Proceedings ArticleDOI
28 Mar 1999
TL;DR: This paper argues for an application-directed approach to benchmarking, using performance metrics that reflect the expected behavior of a particular application across a range of hardware or software platforms.
Abstract: Most performance analysis today uses either microbenchmarks or standard macrobenchmarks (e.g. SPEC, LADDIS, the Andrew benchmark). However, the results of such benchmarks provide little information to indicate how well a particular system will handle a particular application. Such results are, at best, useless and, at worst, misleading. In this paper we argue for an application-directed approach to benchmarking, using performance metrics that reflect the expected behavior of a particular application across a range of hardware or software platforms. We present three different approaches to application-specific measurement, one using vectors that characterize both the underlying system and an application, one using trace-driven techniques, and a hybrid approach. We argue that such techniques should become the new standard.
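
The vector-based approach can be illustrated simply: characterize the system by per-primitive costs and the application by how often it exercises each primitive, then estimate runtime as their dot product. The primitives and numbers below are invented, not the paper's measurements.

```python
# Application-specific estimate as a system-vector / app-vector product.
primitives = ["syscall", "page_fault", "disk_read", "net_rtt"]

system_cost_us = {"syscall": 2.0, "page_fault": 30.0,
                  "disk_read": 8000.0, "net_rtt": 400.0}  # cost per op
app_counts = {"syscall": 50_000, "page_fault": 1_200,
              "disk_read": 300, "net_rtt": 2_000}         # ops per run

estimate_us = sum(system_cost_us[p] * app_counts[p] for p in primitives)
print(f"predicted runtime = {estimate_us / 1e6:.2f} s")
```

Comparing such estimates across machines ranks them for this particular application, which a single macrobenchmark score cannot do.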

Journal ArticleDOI
TL;DR: A case study of key patterns used to develop ORBs that can be dynamically configured and evolved for specific application requirements and network/end-system characteristics is presented.
Abstract: Distributed object computing forms the basis for next-generation application middleware. At the heart of distributed object computing are object request brokers (ORBs), which automate many tedious and error-prone distributed programming tasks. This article presents a case study of key patterns used to develop ORBs that can be dynamically configured and evolved for specific application requirements and network/end-system characteristics.

Journal ArticleDOI
TL;DR: In this paper, the authors present a generic fault-tolerant computer architecture based on commercial off-the-shelf (COTS) components (both processor hardware boards and real-time operating systems).
Abstract: The development and validation of fault-tolerant computers for critical real-time applications are currently both costly and time consuming. Often, the underlying technology is out-of-date by the time the computers are ready for deployment. Obsolescence can become a chronic problem when the systems in which they are embedded have lifetimes of several decades. This paper gives an overview of the work carried out in a project that is tackling the issues of cost and rapid obsolescence by defining a generic fault-tolerant computer architecture based essentially on commercial off-the-shelf (COTS) components (both processor hardware boards and real-time operating systems). The architecture uses a limited number of specific, but generic, hardware and software components to implement an architecture that can be configured along three dimensions: redundant channels, redundant lanes, and integrity levels. The two dimensions of physical redundancy allow the definition of a wide variety of instances with different fault tolerance strategies. The integrity level dimension allows application components of different levels of criticality to coexist in the same instance. The paper describes the main concepts of the architecture, the supporting environments for development and validation, and the prototypes currently being implemented.
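
One classic ingredient of such redundant-channel designs is majority voting across replicated computations. The fragment below is our own minimal illustration of that idea, far simpler than the configurable architecture the paper describes.

```python
# Majority vote over redundant channel outputs; no majority means the
# fault was not masked and must be handled at a higher level.
from collections import Counter

def vote(channel_outputs):
    value, count = Counter(channel_outputs).most_common(1)[0]
    if count * 2 <= len(channel_outputs):
        raise RuntimeError("no majority: unmasked fault")
    return value

print(vote([42, 42, 41]))   # one faulty channel is outvoted -> 42
```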

Patent
18 Nov 1999
TL;DR: In this article, the authors describe a method for deploying a generic application engine in a browser program executing on a client platform, where an application engine kernel is formed in the browser program and a minimum required subset of application engine components are then loaded by the kernel in order to process any initial user requests.
Abstract: Methods and apparatus for deploying a generic application engine in a browser program executing on a client platform are described. As a method, an application engine kernel that is independent of the client platform and the browser program is formed in the browser program, concurrently with the loading of user interface (UI) components and corresponding data components associated with the application engine. A minimum required subset of application engine components is then loaded by the kernel in order to process any initial user requests.

Proceedings ArticleDOI
02 May 1999
TL;DR: An efficient middleware architecture named TMO Support Middleware (TMOSM) is presented which can be easily adapted to many commercial-off-the-shelf (COTS) platforms and the performance of a prototype implementation of TMOSM running on Windows NT platforms is discussed.
Abstract: The time-triggered message-triggered object (TMO) structuring scheme has been established to remove the limitation of conventional object structuring techniques in developing applications containing real time (RT) distributed computing components. It is a natural and syntactically small but semantically powerful extension of the object oriented design and implementation techniques which allows the system designer to abstractly and yet accurately specify timing characteristics of data and function components of high level distributed computing objects. It is a unified approach for design and implementation of both RT and non-RT distributed applications. A cost-effective way to support TMO-structured distributed RT programming is to build a TMO execution engine as a middleware running on well established commercial software/hardware platforms. We present an efficient middleware architecture named TMO Support Middleware (TMOSM) which can be easily adapted to many commercial-off-the-shelf (COTS) platforms. The performance of a prototype implementation of TMOSM running on Windows NT platforms is also discussed.

Proceedings ArticleDOI
16 May 1999
TL;DR: An extensible set of reference types that drive and constrain the mapping of components to hosts are described, and it is shown how this model elevates application performance and reliability yet requires minimal changes in programming the application's logic.
Abstract: The design of efficient and reliable distributed applications that operate in large networks, over links with varying capacities and loads, demands new programming abstractions and mechanisms. The conventional static design-time determination of local-remote relationships between components implies that (dynamic) environmental changes are hard if not impossible to address without reengineering. The paper presents a novel programming model that is centered around the concept of "dynamic application layout", which permits the manipulation of component location at runtime. This leads to a clean separation between the programming of the application's logic and the programming of the layout, which can also be performed externally at runtime. The main abstraction vehicle for layout programming is a reflective inter-component reference, which embodies co- and re-location semantics. We describe an extensible set of reference types that drive and constrain the mapping of components to hosts, and show how this model elevates application performance and reliability yet requires minimal changes in programming the application's logic. The model was realized in the FarGo system, whose design and implementation in Java are presented, along with an event-based scripting language and corresponding event monitoring service for managing the layout of FarGo applications.

Patent
14 Oct 1999
TL;DR: In this paper, a rule-based instruction file has been configured by the provider of the application software package to cause the rule-based installation engine to execute commands according to the simplified script language file.
Abstract: A method and system for custom computer software installation using a standard rule-based installation engine is disclosed. Custom installation parameters are translated into a simplified script language file by a system administrator. An application software package is installed onto a computer using the standard rule-based installation engine, which is executed normally according to commands stored in a rule-based instruction file. The rule-based instruction file has been configured by the provider of the application software package to cause the rule-based installation engine to execute commands according to the simplified script language file. In this manner, the system administrator may achieve flexibility and control over each phase of the software installation process without being required to have a knowledge of the specific language of the rule-based instruction file.

Proceedings ArticleDOI
01 Jan 1999
TL;DR: This paper introduces a performance prediction method, AdRM (Adaptive Regression Modeling), to determine file transfer times for network-bound distributed data-intensive applications, and demonstrates the effectiveness of the method on two distributed data applications.
Abstract: The computational grid is becoming the platform of choice for large-scale distributed data-intensive applications. Accurately predicting the transfer times of remote data files, a fundamental component of such applications, is critical to achieving application performance. In this paper, we introduce a performance prediction method, AdRM (Adaptive Regression Modeling), to determine file transfer times for network-bound distributed data-intensive applications. We demonstrate the effectiveness of the AdRM method on two distributed data applications, SARA (Synthetic Aperture Radar Atlas) and SRB (Storage Resource Broker), and discuss how it can be used for application scheduling. Our experiments use the Network Weather Service [36, 37], a resource performance measurement and forecasting facility, as a basis for the performance prediction model. Our initial findings indicate that the AdRM method can be effective in accurately predicting data transfer times in wide-area multi-user grid environments.
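
A regression-based predictor in the spirit of AdRM can be sketched as a least-squares fit of observed transfer times against size divided by forecast bandwidth, with the forecast supplied by a facility such as the Network Weather Service. The model form and the data below are our illustrative assumptions, not the paper's.

```python
# Fit time = a * (size / forecast_bandwidth) + b from past transfers.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
         sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx                   # slope, intercept

# History: x = file size (MB) / forecast bandwidth (MB/s), y = seconds.
xs = [10 / 2.0, 40 / 2.5, 25 / 1.5, 60 / 3.0]
ys = [6.1, 17.8, 18.0, 21.5]
a, b = fit_line(xs, ys)
pred = a * (100 / 2.2) + b                  # 100 MB at a 2.2 MB/s forecast
print(f"predicted transfer time: {pred:.1f} s")
```

A scheduler could compare such predictions across replicas of a data set and fetch from the site with the smallest predicted time.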

Patent
30 Apr 1999
TL;DR: In this paper, a tree view control-flow structure for a machine vision system is presented, where a first set of control programs representing possible machine vision tasks are provided and a second set of standard controls are provided which correspond to possible hardware.
Abstract: A method, a system and a computer-readable storage medium having stored therein a program for interactively developing a graphical control-flow structure and associated application software for use in a machine vision system are provided. The structure is a tree view structure including a control sequence having at least one node. The method includes providing a first set of control programs representing possible machine vision tasks. The first set of control programs defines a first set of standard controls. Hardware operating parameters are provided which correspond to possible hardware. The hardware operating parameters define a second set of standard controls. Graphical representations of possible hardware and possible machine vision tasks are displayed. Commands are received from a user to select desired hardware operating parameters corresponding to desired hardware and a machine vision graphical representation and its associated first control program corresponding to a desired machine vision task. The tree structure is displayed wherein the selected machine vision graphical representation is a node of the structure and the first control program is linked into the structure. A plurality of separate application processing engines interlinked together are provided for seamlessly communicating results obtained by execution of the selected first control program. The selected first control program is linked with the desired hardware operating parameters to form the application software in response to the commands without the user writing any of the application software.

PatentDOI
TL;DR: A unified web-based voice messaging system provides voice application control between a web browser and an application server via an hypertext transport protocol (HTTP) connection on an Internet Protocol (IP) network.
Abstract: A unified web-based voice messaging system provides voice application control between a web browser and an application server via a hypertext transport protocol (HTTP) connection on an Internet Protocol (IP) network. The application server generates and maintains a server-side data record, also referred to as a "brownie", that includes application state information and user attribute information for an identified user session with the web browser. The application server, in response to receiving a new web page request from the browser, initiates a web application instance to begin a transient application session with the browser. The brownie also includes a session identifier that uniquely identifies the session with the user of the application session. The application server stores the brownie in a memory resident within the server side of the network, and sends to the browser the session identifier and the corresponding web page requested by the web browser.
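
The "brownie" is essentially a server-side session record keyed by a session identifier. A sketch of such a record follows; the field names are our guesses at its contents, since the abstract does not publish a schema.

```python
# Hypothetical server-side session record ("brownie") and lookup store.
from dataclasses import dataclass, field

@dataclass
class Brownie:
    session_id: str                                  # identifies session
    app_state: dict = field(default_factory=dict)    # application state
    user_attrs: dict = field(default_factory=dict)   # user attributes

store = {}   # memory resident on the server side of the network

def on_page_request(session_id, user):
    """Begin or resume a transient application session for a browser."""
    b = store.setdefault(session_id,
                         Brownie(session_id, user_attrs={"user": user}))
    b.app_state["last_page"] = "/mailbox"   # e.g. voice-mail UI position
    return b.session_id                     # sent back with the web page

print(on_page_request("s-42", "alice"))     # -> s-42
```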

Journal ArticleDOI
TL;DR: A software generation methodology is proposed that takes advantage of a restricted class of specifications and allows for tight control over the implementation cost, and exploits several techniques from the domain of Boolean function optimization.
Abstract: Software components for embedded reactive real-time applications must satisfy tight code size and run-time constraints. Cooperating finite state machines provide a convenient intermediate format for embedded system co-synthesis, between high-level specification languages and software or hardware implementations. We propose a software generation methodology that takes advantage of a restricted class of specifications and allows for tight control over the implementation cost. The methodology exploits several techniques from the domain of Boolean function optimization. We also describe how the simplified control/data-flow graph used as an intermediate representation can be used to accurately estimate the size and timing cost of the final executable code.

Patent
20 Oct 1999
TL;DR: In this article, a database server and a Citrix-type direct access server are interconnected between a database and a plurality of subscribers, each of which gains secure access into a server via a modem and an Internet service provider (ISP).
Abstract: A database server and a Citrix®-type direct access server are electronically interconnected between a database and a plurality of subscribers, each of which gains secure access into the server via a modem and an Internet service provider (ISP). Thin client access provides for electronic transfer of billing and data entry to each direct access subscriber upon request. Browser-based subscribers use forms processing to transfer data into the database server, which utilizes appropriate application software therein to produce billing invoices and statements to clients and customers of each corresponding subscriber. Thin client access also provides real-time electronic viewing and query access regarding data and billings stored in the database server by each corresponding direct access subscriber. A home page of a website of the system provides access via an ISP to the database server by a plurality of browser-based subscribers. The home page provides secure access by each browser-based subscriber to each of a plurality of subscriber areas within the system. The database server includes open database compliant (ODBC) software for seamless integration with other software applications. Data entered on the forms is then sent electronically to be entered into the database server to produce billing invoices and statements from the applications software to clients and customers of each corresponding browser-based subscriber.

Patent
08 Apr 1999
TL;DR: In this paper, the authors present a method and system for dynamically injecting execution logic into shared memory spaces of a windowed operating system using a modified kernel dynamic link library, which can be used for debugging aids, hooking other processes, tracing the execution of a process, and for other purposes.
Abstract: Methods and system for dynamically injecting execution logic into shared memory spaces of a windowed operating system. An injection dynamic link library is loaded from an injection application into a pre-determined memory location within an area of shared memory within the windowed operating system. A main dynamic link library function within an original kernel dynamic link library including kernel functions for the windowed operating system is located from the injection dynamic link library. A jump command is inserted from injection dynamic link library within the main dynamic link library function in the kernel dynamic link library to create a modified kernel dynamic link library. The jump command jumps to an injection hook function within the injection dynamic link library whenever a new windowed operating system process is created. The injection hook function within the injection dynamic link library includes multiple injection functions that are executed by the windowed operating system prior to executing any other software applications whenever a new process is created in a windowed operating system. The methods and system of the present invention allow execution logic to be injected into new processes created by windowed operating systems using shared memory spaces such as Microsoft Windows 95/98. The execution logic is executed prior to any application software associated with the new processes. The methods and system of the present invention may be used for debugging aids, hooking other processes, tracing the execution of a process, and for other purposes.

Journal ArticleDOI
Jack Jean, Karen A. Tomko, V. Yavagal, J. Shah, R. Cook
TL;DR: The development of a dynamically reconfigurable system that can support multiple applications running concurrently and the impact of supporting concurrency and preloading in reducing application execution time is demonstrated.
Abstract: This paper describes the development of a dynamically reconfigurable system that can support multiple applications running concurrently. A dynamically reconfigurable system allows hardware reconfiguration while part of the reconfigurable hardware is busy computing. An FPGA resource manager (RM) is developed to allocate and de-allocate FPGA resources and to preload FPGA configuration files. For each individual application, different tasks that require FPGA resources are represented as a flow graph which is made available to the RM so as to enable efficient resource management and preloading. The performance of using the RM to support several applications is summarized. The impact of supporting concurrency and preloading in reducing application execution time is demonstrated.
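
The resource manager's bookkeeping can be sketched as allocate/release over a fixed pool of FPGA resources, plus preloading of configuration files for likely successor tasks in an application's flow graph. Class and method names below are illustrative, not the paper's interface.

```python
# Illustrative FPGA resource-manager bookkeeping with preloading.
class ResourceManager:
    def __init__(self, total_blocks):
        self.free = total_blocks
        self.held = {}              # task name -> blocks allocated
        self.preloaded = set()      # configuration files already on chip

    def allocate(self, task, blocks):
        if blocks > self.free:
            return False            # caller must wait or be rescheduled
        self.free -= blocks
        self.held[task] = blocks
        return True

    def release(self, task):
        self.free += self.held.pop(task)

    def preload(self, successor_configs):
        # Preload configs for likely successors in the task flow graph,
        # hiding reconfiguration latency behind the current computation.
        self.preloaded.update(successor_configs)

rm = ResourceManager(total_blocks=8)
rm.allocate("fft", 3)
rm.preload({"filter.bit"})
print(rm.free, rm.preloaded)        # 5 {'filter.bit'}
```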

Proceedings ArticleDOI
16 May 1999
TL;DR: The four testing techniques which are coordinated in Lutess uniform framework are shown to be well-suited to efficient software testing and the lessons learnt in the context of industrial partnerships are discussed.
Abstract: Several studies have shown that automated testing is a promising approach to save significant amounts of time and money in the industry of reactive software. But automated testing requires a formal framework and adequate means to generate test data. In the context of synchronous reactive software, we have built such a framework and its associated tool-Lutess-to integrate various well-founded testing techniques. This tool automatically constructs test harnesses for fully automated test data generation and verdict return. The generation conforms to different formal descriptions: software environment constraints, functional and safety-oriented properties to be satisfied by the software, software operational profiles and software behavior patterns. These descriptions are expressed in an extended executable temporal logic. They correspond to more and more complex test objectives raised by the first pre-industrial applications of Lutess. This paper concentrates on the latest development of the tool and its use in the validation of standard feature specifications in telephone systems. The four testing techniques which are coordinated in Lutess uniform framework are shown to be well-suited to efficient software testing. The lessons learnt from the use of Lutess in the context of industrial partnerships are discussed.