
Showing papers on "Data management published in 1999"


Patent
14 May 1999
TL;DR: In this article, a first set of computer data is acquired that represents a model of an organization of people having fundamental components, such as processes or capabilities, that are represented in the computer data by data items.
Abstract: Management information is processed. A first set of computer data is acquired that represents a model of an organization of people having fundamental components, such as processes or capabilities, that are represented in the first set of computer data by data items. The first set of computer data is associated with a second set of computer data that represents a portfolio of management concepts, such as management goals. A report of management concepts is issued based on the second set of computer data and is sorted by fundamental component.

894 citations
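
The mechanism claimed amounts to associating two data sets and sorting the resulting report by fundamental component. A minimal sketch in Python, with all names hypothetical (the patent prescribes no implementation):

```python
from dataclasses import dataclass

@dataclass
class Component:          # fundamental component, e.g. a process or capability
    id: int
    name: str

@dataclass
class Concept:            # management concept, e.g. a goal
    component_id: int     # association with a fundamental component
    description: str

def build_report(components, concepts):
    """Report of management concepts, sorted by fundamental component."""
    by_id = {c.id: c for c in components}
    rows = [(by_id[k.component_id].name, k.description) for k in concepts]
    return sorted(rows)   # tuples sort by component name, then description

components = [Component(1, "Order fulfilment"), Component(2, "Hiring")]
concepts = [Concept(2, "Reduce time-to-hire"), Concept(1, "Cut cycle time")]
for row in build_report(components, concepts):
    print(row)
```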


Proceedings ArticleDOI
12 Mar 1999
TL;DR: The Data Fusion Model maintained by the Joint Directors of Laboratories (JDL) Data Fusion Group is the most widely used method for categorizing data fusion-related functions, as discussed by the authors, and the paper reports on the current effort to revise and expand this model to facilitate the cost-effective development, acquisition, integration and operation of multi-sensor/multi-source systems.
Abstract: The Data Fusion Model maintained by the Joint Directors of Laboratories (JDL) Data Fusion Group is the most widely-used method for categorizing data fusion-related functions. This paper discusses the current effort to revise and expand this model to facilitate the cost-effective development, acquisition, integration and operation of multi-sensor/multi-source systems. Data fusion involves combining information, in the broadest sense, to estimate or predict the state of some aspect of the universe. These states may be represented in terms of attributive and relational states. If the job is to estimate the state of people, it can be useful to include consideration of informational and perceptual states in addition to the physical state. Developing cost-effective multi-source information systems requires a method for specifying data fusion processing and control functions, interfaces, and associated databases. The lack of common engineering standards for data fusion systems has been a major impediment to integration and re-use of available technology: current developments do not lend themselves to objective evaluation, comparison or re-use. This paper reports on proposed revisions and expansions of the JDL Data Fusion model to remedy some of these deficiencies. This involves broadening the functional model and related taxonomy beyond the original military focus, and integrating the Data Fusion Tree Architecture model for system description, design and development.

819 citations
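
For orientation, the functional levels of the revised JDL model are conventionally summarized as below; the enum is only a mnemonic sketch, since the paper defines the levels in prose rather than code:

```python
from enum import Enum

class JDLLevel(Enum):
    """Functional levels of the revised (1999) JDL Data Fusion model."""
    L0_SUB_OBJECT_ASSESSMENT = 0   # signal/feature-level estimation
    L1_OBJECT_ASSESSMENT = 1       # entity state estimation (attributive states)
    L2_SITUATION_ASSESSMENT = 2    # relations among entities (relational states)
    L3_IMPACT_ASSESSMENT = 3       # projection of effects on plans and objectives
    L4_PROCESS_REFINEMENT = 4      # adaptive control of sensing and processing
```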


Book
22 Dec 1999
TL;DR: This one-stop guide to choosing the right tools and technologies for a state-of-the-art data management strategy built on a Customer Relationship Management (CRM) framework helps you understand the principles of data warehousing and data mining systems and carefully spell out techniques for applying them so that your business gets the biggest pay-off possible.
Abstract: From the Publisher: How data mining delivers a powerful competitive advantage! Are you fully harnessing the power of information to support management and marketing decisions? You will, with this one-stop guide to choosing the right tools and technologies for a state-of-the-art data management strategy built on a Customer Relationship Management (CRM) framework. Authors Alex Berson, Stephen Smith, and Kurt Thearling help you understand the principles of data warehousing and data mining systems, and carefully spell out techniques for applying them so that your business gets the biggest pay-off possible. Find out about Online Analytical Processing (OLAP) tools that quickly navigate within your collected data. Explore privacy and legal issues... evaluate current data mining application packages... and let real-world examples show you how data mining can impact, and improve, all of your key business processes. Start uncovering your best prospects and offering them the products they really want (not what you think they want)!

637 citations


Patent
21 Jul 1999
TL;DR: A system and method for vehicle diagnostic and health monitoring includes a client computer device within the vehicle, coupled to the vehicle's monitoring systems, for data management, remote session management and user interaction as discussed by the authors.
Abstract: A system and method for vehicle diagnostic and health monitoring includes a client computer device within the vehicle, coupled to the vehicle's monitoring systems, for data management, remote session management and user interaction, a communication system, coupled to the client computer device, for providing remote communication of data including data derived from internal monitoring systems of the vehicle, and a remote service center including a vehicle data store, a server computer, a diagnostic engine, and a communicator for communicating the results of analysis of vehicle information to the client computer device via the communication system.

629 citations


Journal ArticleDOI
Ilkka Tuomi1
05 Jan 1999
TL;DR: The reversed hierarchy of knowledge is shown to lead to a different approach in developing information systems that support knowledge management and organizational memory, and this difference may have major implications for organizational flexibility and renewal.
Abstract: In knowledge management literature it is often pointed out that it is important to distinguish between data, information and knowledge. The generally accepted view sees data as simple facts that become information as data is combined into meaningful structures, which subsequently become knowledge as meaningful information is put into a context and when it can be used to make predictions. This view sees data as a prerequisite for information, and information as a prerequisite for knowledge. I explore the conceptual hierarchy of data, information and knowledge, showing that data emerges only after we have information, and that information emerges only after we already have knowledge. The reversed hierarchy of knowledge is shown to lead to a different approach in developing information systems that support knowledge management and organizational memory. It is also argued that this difference may have major implications for organizational flexibility and renewal.

583 citations


Journal ArticleDOI
TL;DR: This paper is an effort to survey these techniques and to classify this research on data management in mobile computing into a few broad areas.
Abstract: The emergence of powerful portable computers, along with advances in wireless communication technologies, has made mobile computing a reality. Among the applications that are finding their way to the market of mobile computing, those that involve data management hold a prominent position. In the past few years, there has been a tremendous surge of research in the area of data management in mobile computing. This research has produced interesting results in areas such as data dissemination over limited bandwidth channels, location-dependent querying of data, and advanced interfaces for mobile computers. This paper is an effort to survey these techniques and to classify this research in a few broad areas.

416 citations



Journal ArticleDOI
01 Mar 1999
TL;DR: This paper focuses on capturing and reasoning about semantic aspects of schema descriptions of heterogeneous information sources for supporting integration and query optimization and introduces new constructors to support the semantic integration process.
Abstract: Providing an integrated access to multiple heterogeneous sources is a challenging issue in global information systems for cooperation and interoperability. In this context, two fundamental problems arise. First, how to determine if the sources contain semantically related information, that is, information related to the same or similar real-world concept(s). Second, how to handle semantic heterogeneity to support integration and uniform query interfaces. Complicating factors with respect to conventional view integration techniques are related to the fact that the sources to be integrated already exist and that semantic heterogeneity occurs on a large scale, involving terminology, structure, and context of the involved sources, with respect to geographical, organizational, and functional aspects related to information use. Moreover, to meet the requirements of global, Internet-based information systems, it is important that tools developed for supporting these activities are semi-automatic and scalable as much as possible.The goal of this paper is to describe the MOMIS [4, 5] (Mediator envirOnment for Multiple Information Sources) approach to the integration and query of multiple, heterogeneous information sources, containing structured and semistructured data. MOMIS has been conceived as a joint collaboration between the Universities of Milano and Modena in the framework of the INTERDATA national research project, aiming at providing methods and tools for data management in Internet-based information systems. Like other integration projects [1, 10, 14], MOMIS follows a “semantic approach” to information integration based on the conceptual schema, or metadata, of the information sources, and on the following architectural elements: i) a common object-oriented data model, defined according to the ODLI3 language, to describe source schemas for integration purposes. The data model and ODLI3 have been defined in MOMIS as a subset of the ODMG-93 ones, following the proposal for a standard mediator language developed by the I3/POB working group [7]. In addition, ODLI3 introduces new constructors to support the semantic integration process [4, 5]; ii) one or more wrappers, to translate schema descriptions into the common ODLI3 representation; iii) a mediator and a query-processing component, based on two pre-existing tools, namely ARTEMIS [8] and ODB-Tools [3] (available on Internet at http://sparc20.dsi.unimo.it/), to provide an I3 architecture for integration and query optimization. In this paper, we focus on capturing and reasoning about semantic aspects of schema descriptions of heterogeneous information sources for supporting integration and query optimization. Both semistructured and structured data sources are taken into account [5]. A Common Thesaurus is constructed, which has the role of a shared ontology for the information sources. The Common Thesaurus is built by analyzing ODLI3 descriptions of the sources, by exploiting the Description Logics OLCD (Object Language with Complements allowing Descriptive cycles) [2, 6], derived from KL-ONE family [17]. The knowledge in the Common Thesaurus is then exploited for the identification of semantically related information in ODLI3 descriptions of different sources and for their integration at the global level. Mapping rules and integrity constraints are defined at the global level to express the relationships holding between the integrated description and the sources descriptions.
ODB-Tools, supporting OLCD and description logic inference techniques, allows the analysis of sources descriptions for generating a consistent Common Thesaurus and provides support for semantic optimization of queries at the global level, based on defined mapping rules and integrity constraints.

374 citations
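
A toy illustration of the integration step: wrappers expose each source schema in a common representation, and a Common Thesaurus of terminological relationships identifies semantically related attributes across sources. All data below are hypothetical, and MOMIS itself operates on ODLI3 descriptions with OLCD description-logic inference, not this simplification:

```python
# Each wrapper translates a source schema into a common representation:
# here, simply {class_name: [attribute, ...]}.
source_a = {"Person": ["name", "phone"]}
source_b = {"Employee": ["full_name", "telephone", "dept"]}

# Shared ontology / Common Thesaurus: synonym relationships between terms.
thesaurus = {
    ("name", "full_name"),
    ("phone", "telephone"),
    ("Person", "Employee"),
}

def related(t1, t2):
    """Two terms are related if identical or linked in the thesaurus."""
    return t1 == t2 or (t1, t2) in thesaurus or (t2, t1) in thesaurus

# Identify semantically related attribute pairs across the two sources.
pairs = [(a, b)
         for attrs_a in source_a.values() for a in attrs_a
         for attrs_b in source_b.values() for b in attrs_b
         if related(a, b)]
print(pairs)   # [('name', 'full_name'), ('phone', 'telephone')]
```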


Patent
15 Jan 1999
TL;DR: In this article, a system includes a communication module for downloading workspace data (135) from a remote site, an application program interface coupled to the communications module for communicating with a workspace data manager (160) to enable manipulation of the downloaded workspace data and thereby create manipulated data, and a general synchronization module for synchronizing the manipulated data with the workspace data stored at the remote site.
Abstract: A system includes a communication module for downloading workspace data (135) from a remote site, an application program interface coupled to the communications module for communicating with a workspace data manager (160) to enable manipulation of the downloaded workspace data and thereby create manipulated data, and a general synchronization module (130) coupled to the communications module for synchronizing the manipulated data with the workspace data (135) stored at the remote site. An instantiator requests the workspace data manager to provide an interface for enabling manipulation of the downloaded workspace data. The workspace data manager may create another instance of the interface or may provide access to its only interface to enable manipulation of the data. A data reader may translate the downloaded workspace data from the format used by the remote site to the format used by the workspace data manager. Upon logout, a de-instantiator synchronizes the data with the global server and deletes workspace data. The system handles both the situation where the data stored at the remote site has not changed and therefore includes the downloaded data, and the situation where the data stored at the remote site has been modified and is therefore different from the downloaded data.

371 citations
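
The synchronization behavior described in the last sentence reduces to two cases. A hedged sketch with hypothetical names and a placeholder merge policy (the abstract does not specify one):

```python
def synchronize(remote, downloaded, manipulated):
    """Reconcile locally manipulated data with the remote workspace copy.

    remote       -- current workspace data at the remote site
    downloaded   -- the copy originally downloaded from the remote site
    manipulated  -- the locally modified version of `downloaded`
    """
    if remote == downloaded:
        # Remote unchanged since download: local edits win outright.
        return manipulated
    # Remote modified since download: a merge (or conflict policy) is needed.
    return merge(remote, manipulated)

def merge(remote, local):
    # Placeholder policy: prefer local field values, keep remote-only fields.
    return {**remote, **local}

remote = {"a": 1}
downloaded = {"a": 1}
manipulated = {"a": 2}
print(synchronize(remote, downloaded, manipulated))   # {'a': 2}: local edits win
```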


Journal ArticleDOI
TL;DR: An integrated view of knowledge management and networking as a very powerful combination for the future of knowledge management is described, and a framework for knowledge networking is developed which can be used as a basis to structure and reveal interdependences.
Abstract: In this article we describe an integrated view of knowledge management and networking as a very powerful combination for the future of knowledge management. We start by giving an overview of the increasing importance of networks in the modern economy. Subsequently, we conceptualize a network perspective on knowledge management: we first give a theoretical foundation on networks, and second explain the interdependences between networks and knowledge management. These reflections lead to the development of a framework for knowledge networking, where we distinguish between a micro-perspective and a macro-perspective. Finally, we develop a framework for knowledge networking which can be used as a basis to structure and reveal interdependences. We conclude by giving some implications for management and future research.

361 citations


01 Jan 1999
TL;DR: In this article, the most significant algorithms and impossibility results in the area of distributed algorithms are presented in a simple automata-theoretic setting, and their complexity is analyzed according to precisely defined complexity measures.
Abstract: Distributed Algorithms contains the most significant algorithms and impossibility results in the area, all in a simple automata-theoretic setting. The algorithms are proved correct, and their complexity is analyzed according to precisely defined complexity measures. The problems covered include resource allocation, communication, consensus among distributed processes, data consistency, deadlock detection, leader election, global snapshots, and many others.
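
Leader election in a ring is one of the problems treated; as a flavor of the material, here is a compact synchronous-round simulation of the LCR (LeLann-Chang-Roberts) algorithm. This is a sketch, not the book's formal I/O-automaton presentation:

```python
def lcr_leader_election(uids):
    """Simulate LCR leader election on a unidirectional ring of distinct UIDs.

    Each process initially sends its own UID clockwise; it relays larger UIDs,
    swallows smaller ones, and declares itself leader when its own UID returns.
    O(n) rounds, O(n^2) messages in the worst case.
    """
    n = len(uids)
    outgoing = list(uids)                 # round 1: everyone sends its own UID
    leader = None
    for _ in range(n):                    # the maximum UID travels n hops
        incoming = [outgoing[(i - 1) % n] for i in range(n)]
        nxt = [None] * n
        for i, u in enumerate(incoming):
            if u is None:
                continue                  # no message on this link this round
            if u > uids[i]:
                nxt[i] = u                # relay larger UIDs
            elif u == uids[i]:
                leader = u                # own UID came back: elected leader
            # smaller UIDs are swallowed
        outgoing = nxt
    return leader

assert lcr_leader_election([3, 7, 2, 5]) == 7
```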

Journal ArticleDOI
TL;DR: The reader is introduced to temporal data management, state-of-the-art solutions to challenging aspects of temporal data management are surveyed, and research directions are pointed out.
Abstract: A wide range of database applications manage time-varying information. Existing database technology currently provides little support for managing such data. The research area of temporal databases has made important contributions in characterizing the semantics of such information and in providing expressive and efficient means to model, store, and query temporal data. This paper introduces the reader to temporal data management, surveys state-of-the-art solutions to challenging aspects of temporal data management, and points to research directions.
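
As a concrete example of the kind of data involved, here is a minimal valid-time relation and timeslice query in Python; the schema and names are illustrative, not taken from the paper:

```python
from datetime import date

# Valid-time relation: each fact carries the period during which it was true.
salaries = [
    ("Alice", 50_000, date(1997, 1, 1), date(1998, 6, 30)),
    ("Alice", 56_000, date(1998, 7, 1), date(9999, 12, 31)),  # "until changed"
]

def timeslice(rows, at):
    """Return the facts valid at a given instant (a valid-time timeslice)."""
    return [(name, value) for name, value, start, end in rows if start <= at <= end]

print(timeslice(salaries, date(1998, 3, 1)))   # [('Alice', 50000)]
```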

Patent
29 Jan 1999
TL;DR: A common access method is disclosed to enable disparate pervasive computing devices to interact with centralized data management systems; the data management system has a plurality of data managers in one or more layers of a layered architecture.
Abstract: A common access method is disclosed to enable disparate pervasive computing devices to interact with centralized data management systems. A modular, scalable data management system is envisioned to further expand the role of the pervasive devices as direct participants in the data management system. This data management system has a plurality of data managers arranged in one or more layers of a layered architecture. The system performs, with a data manager and with input from a user or pervasive computing device via an API, a plurality of processes on data residing in heterogeneous data repositories of a computer system, including promotion, check-in, check-out, locking, library searching, setting and viewing process results, tracking aggregations, and managing parts, releases and problem-fix data under management control of a virtual control repository having one or more physical heterogeneous repositories. The system provides for storing, accessing, and tracking data residing in said one or more data repositories managed by the virtual control repository. DMS applications executing directly within, or on behalf of, the pervasive computing device organize data using the PFVL paradigm. Configurable managers query the control repository for the existence of peer managers and provide logic switches to dynamically interact with peers. A control repository layer provides a common process interface across all managers. A command translator performs the appropriate mapping of generic control repository layer calls to the required function for the underlying storage engine.

Patent
07 Oct 1999
TL;DR: An aircraft data management system provides a passenger seated on the aircraft with a number of entertainment and productivity enhancing options, as discussed in this paper; while particularly drawn to aircraft, the system is also applicable to other venues having identifiable seating locations, such as buses, passenger ships, hotels and auditoriums.
Abstract: An aircraft data management system provides a passenger seated on the aircraft with a number of entertainment and productivity enhancing options. Such options include, without limitation, video (194), audio (196), internet (190), airplane systems data (198) and power (162). Located proximate to each seat group is an integrated seat box (18) that includes a network interface card that identifies a requesting passenger for proper directing of the required data and/or power from devices that interface with a network controller (186) back to the requesting passenger. Both on-aircraft and off-aircraft devices may be accessed by the system. While particularly drawn to aircraft, the data management system is also applicable to other venues having identifiable seating locations, such as buses, passenger ships, hotels and auditoriums.

Journal ArticleDOI
TL;DR: Conclusions are drawn regarding the strategic direction of this new discipline and its effect on competition, productivity and quality for the business of tomorrow.
Abstract: This paper defines the newly emerging concept of knowledge management. The topics presented include: principles and practices of knowledge management, organization, distribution, dissemination, collaboration and refinement of information, and the effect on productivity and quality in business today. The technical applications and tools currently utilized within this discipline are also discussed. Case studies are included on the following firms: Teltech, Ernst & Young, Microsoft, and Hewlett Packard. These are analyzed to determine the effect knowledge management practices have on quality improvement and increased productivity. The authors have included a recommended strategy for implementation of knowledge management “best practices”. Finally, conclusions are drawn regarding the strategic direction of this new discipline and its effect on competition, productivity and quality for the business of tomorrow.

Journal ArticleDOI
TL;DR: This paper identifies recent accomplishments and associated research needs of the near term in spatial databases, addressing the growing data management and analysis needs of spatial applications such as geographic information systems.
Abstract: Spatial databases, addressing the growing data management and analysis needs of spatial applications such as geographic information systems, have been an active area of research for more than two decades. This research has produced a taxonomy of models for space, spatial data types and operators, spatial query languages and processing strategies, as well as spatial indexes and clustering techniques. However, more research is needed to improve support for network and field data, as well as query processing (e.g., cost models, bulk load). Another important need is to apply spatial data management accomplishments to newer applications, such as data warehouses and multimedia information systems. The objective of this paper is to identify recent accomplishments and associated research needs of the near term.
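
As a concrete instance of the spatial indexing and query processing the survey covers, here is a sketch of one of the simplest index structures, a fixed grid; production systems typically use R-trees or quadtrees, and all names here are illustrative:

```python
from collections import defaultdict

class GridIndex:
    """Fixed-grid spatial index: hash points into square cells of size `cell`."""
    def __init__(self, cell=1.0):
        self.cell = cell
        self.cells = defaultdict(list)

    def insert(self, x, y, obj):
        self.cells[(int(x // self.cell), int(y // self.cell))].append((x, y, obj))

    def range_query(self, xmin, ymin, xmax, ymax):
        """Return objects inside the rectangle, visiting only overlapping cells."""
        out = []
        for cx in range(int(xmin // self.cell), int(xmax // self.cell) + 1):
            for cy in range(int(ymin // self.cell), int(ymax // self.cell) + 1):
                for x, y, obj in self.cells[(cx, cy)]:
                    if xmin <= x <= xmax and ymin <= y <= ymax:
                        out.append(obj)
        return out

idx = GridIndex(cell=10.0)
idx.insert(3.0, 4.0, "school")
idx.insert(55.0, 20.0, "hospital")
print(idx.range_query(0, 0, 10, 10))   # ['school']
```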

Proceedings ArticleDOI
03 Aug 1999
TL;DR: The overall goal is to provide the NASA scientific and engineering communities a substantial increase in their ability to solve problems that depend on use of large-scale and/or dispersed resources: aggregated computing, diverse data archives, laboratory instruments and engineering test facilities, and human collaborators.
Abstract: Information Power Grid (IPG) is the name of NASA's project to build a fully distributed computing and data management environment, a Grid. The IPG project has near-, medium-, and long-term goals that represent a continuum of engineering, development, and research topics. The overall goal is to provide the NASA scientific and engineering communities a substantial increase in their ability to solve problems that depend on use of large-scale and/or dispersed resources: aggregated computing, diverse data archives, laboratory instruments and engineering test facilities, and human collaborators. The approach involves infrastructure and services that can locate, aggregate, integrate, and manage resources from across the NASA enterprise. An important aspect of IPG is to produce a common view of these resources, and at the same time provide for distributed management and local control. In addition to addressing the overall goal of enhanced science and engineering, there is a potentially important side effect. With a large collection of resources that have common use interfaces and a common management approach, the potential exists for a considerable pool of computing capability that could relatively easily, for example, be called on in extraordinary situations such as crisis response.

Journal ArticleDOI
TL;DR: The authors conclude that the need for definitions is of some urgency because a number of differences have been identified in the interpretation of what integration means and how it should be accomplished.
Abstract: This paper relates the main findings of a literature review of integrated management systems (IMS). In general, integration has been discussed in the literature dealing with quality, environmental, and health and safety management. The need for an IMS has arisen as a result of the decisions of organisations to implement an environmental management system and/or an occupational health and safety management system in addition to a quality management system. A number of differences have been identified in the interpretation of what integration means and how it should be accomplished. This leads the authors to conclude that the need for definitions is of some urgency. It is also pointed out that the current emphasis is on achieving compatibility between the standards to facilitate alignment.

Journal ArticleDOI
TL;DR: The results suggest that higher levels of institutionalization of all quality management practices are associated with higher levels of quality performance, and that key factors differentiating high- and low-quality performing IS units include senior management leadership, mechanisms to promote learning, and the management infrastructure of the IS unit.
Abstract: The availability of high-quality software is critical for the effective use of information technology in organizations. Research in software quality has focused largely on the technical aspects of quality improvement, while limited attention has been paid to the organizational and sociobehavioral aspects of quality management. This study represents one effort at addressing this void in the information systems literature. The quality and systems development literatures are synthesized to develop eleven quality management constructs and two quality performance constructs. Scales for these constructs are empirically validated using data collected from a national survey of IS organizations. A LISREL framework is used to test the reliability and validity of the thirteen constructs. The results provide support for the reliability and validity of the constructs. A cluster analysis of the data was conducted to examine patterns of association between quality management practices and quality performance. The results suggest that higher levels of institutionalization of all quality management practices are associated with higher levels of quality performance. Our results also suggest that key factors that differentiated high- and low-quality performing IS units include senior management leadership, mechanisms to promote learning and the management infrastructure of the IS unit. Future research efforts directed at causally interrelating the quality management practices should lead to the development of a theory of quality management in systems development.

Posted Content
TL;DR: The next-generation astronomy digital archives will cover most of the sky at fine resolution in many wavelengths, from X-rays, through ultraviolet, optical, and infrared, and the archives will be stored at diverse geographical locations.
Abstract: The next-generation astronomy digital archives will cover most of the universe at fine resolution in many wave-lengths, from X-rays to ultraviolet, optical, and infrared. The archives will be stored at diverse geographical locations. One of the first of these projects, the Sloan Digital Sky Survey (SDSS) will create a 5-wavelength catalog over 10,000 square degrees of the sky (see this http URL). The 200 million objects in the multi-terabyte database will have mostly numerical attributes, defining a space of 100+ dimensions. Points in this space have highly correlated distributions. The archive will enable astronomers to explore the data interactively. Data access will be aided by a multidimensional spatial index and other indices. The data will be partitioned in many ways. Small tag objects consisting of the most popular attributes speed up frequent searches. Splitting the data among multiple servers enables parallel, scalable I/O and applies parallel processing to the data. Hashing techniques allow efficient clustering and pair-wise comparison algorithms that parallelize nicely. Randomly sampled subsets allow debugging otherwise large queries at the desktop. Central servers will operate a data pump that supports sweeping searches that touch most of the data. The anticipated queries require special operators related to angular distances and complex similarity tests of object properties, like shapes, colors, velocity vectors, or temporal behaviors. These issues pose interesting data management challenges.
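
The "tag object" idea, small records holding only the most popular attributes so that frequent searches never touch the full multi-terabyte records, can be illustrated as a two-stage lookup; the attribute names below are hypothetical:

```python
# Full objects: 100+ attributes each (abbreviated here).
full_catalog = {
    1: {"ra": 10.1, "dec": -5.2, "mag_r": 17.2, "shape": "...", "spectrum": "..."},
    2: {"ra": 11.7, "dec": -4.9, "mag_r": 21.8, "shape": "...", "spectrum": "..."},
}

# Tag objects: only the most frequently queried attributes, small and cacheable.
tags = [(1, 10.1, -5.2, 17.2), (2, 11.7, -4.9, 21.8)]   # (id, ra, dec, mag_r)

def bright_objects(limit_mag):
    """Stage 1: scan the compact tags; stage 2: fetch full records for hits."""
    hits = [oid for oid, ra, dec, mag in tags if mag < limit_mag]
    return [full_catalog[oid] for oid in hits]

print(len(bright_objects(20.0)))   # 1
```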

Journal ArticleDOI
TL;DR: The role of MPEG-7 is presented, and ideas for using MPEG-7 technology are outlined based on examples of improved versions of existing applications as well as completely new ones.
Abstract: Audio-visual information must allow some degree of interpretation, which can be passed onto, or accessed by a device or a computer code. MPEG-7 aims to create a standard for describing these operational requirements. We provide an overview on the development, functionality, and applicability of MPEG-7. We present the role of MPEG-7 and outline ideas for using MPEG-7 technology based on examples of improved versions of existing applications as well as completely new ones. The MPEG standards preceding MPEG-7 have mainly addressed coded representation of audio-visual information. MPEG-7, on the other hand, focuses on the standardization of a common interface for describing multimedia materials (representing information about the content, but not the content itself ("the bits about the bits")). In this context, MPEG-7 addresses aspects such as facilitating interoperability and globalization of data resources and flexibility of data management.

Book
30 Aug 1999
TL;DR: The management of Networked Systems: Driving Force or Impediment?
Abstract: Foreword Part I: Introduction and Fundamentals 1. The Management of Networked Systems - Task Definition 2. Fundamental Structures of Networked Systems 3. Requirements of the Management of Networked Systems Part II: Management Architectures 4. Management Architectures and Their Submodels 5. OSI Management 6. Internet Management 7. CORBA as a Management Architecture 8. DMTF Desktop Management Interface 9. Web-based Management Architectures 10. Gateways Between Management Architectures Part III: Management Tools 11. Classification of Management Tools 12. Standalone Test and Monitoring Tools 13. Management Platforms 14. Enterprise Tools 15. Development Tools 16. Selected Solutions and Tools for Network and Systems Management Part IV: Operational Use 17. Introduction to the Operation of a Networked System 18. Use of Management Tools in Operation 19. Quality Monitoring and Assurance of Operations 20. Two Examples of Provider-Oriented Management Products Part V: Outlook 21. Future Requirements and Solutions for IT Management 22. Management Architectures and Information Models 23. Management: Driving Force or Impediment? Bibliography Abbreviations Index

Journal ArticleDOI
TL;DR: It is argued that a fragmented mosaic of programs and problematics currently exists, at various levels of incompatibility, and a research program is described that develops a model on four dimensions that appears to order the various programs, practices and processes in this divergent field.
Abstract: This article reviews developments in the field of applied knowledge management dating from 1990 and argues that a fragmented mosaic of programs and problematics currently exists, at various levels of incompatibility. Using a software product, we map the information space around applied knowledge management as an illustration of this basic fact. We then describe a research program that extends this logic and develop a model on four dimensions that appears to order the various programs, practices and processes in this divergent field. Implications for managers of knowledge management initiatives are discussed, and avenues for future research suggested.

Book
01 Sep 1999
TL;DR: In this book, the authors present an overview of operations management, spanning business policy, planning and control, facilities and people ("the most valuable resource?"), capacity, scheduling and time management, materials, supply chain and project management, quality and performance measurement, and change management.
Abstract: PART I. 1. Making it Happen. 2. Operations Directives. PART II. 3. Business Policy. 4. Operations Management and Inter-relationships. 5. Planning, Implementing and Controlling. PART III. 6. Facilities and Work. 7. People Power: The Most Valuable Resource? PART IV. 8. Capacity Management. 9. Scheduling and Time Management. 10. Materials Management. 11. Supply Chain Management. 12. Project Management. PART V. 13. Quality Performance. 14. Measurement of Performance. PART VI. 15. Change and Change Management. References. Index.

Journal Article
TL;DR: This paper is a July 1999 snapshot of a "whitepaper" that I've been working on, to formulate and put into prose my thoughts on the research opportunities XML brings to the general area of data management.
Abstract: This paper is a July 1999 snapshot of a "whitepaper" that I've been working on. The purpose of the whitepaper, which I initially drafted in April 1999, was to formulate and put into prose my thoughts on the research opportunities XML brings to the general area of data management. It is important to know that this paper is not a survey. It offers my personal opinions and thoughts on Data Management for XML, fully incorporating my biases and ignorances. Related work is not discussed, and references are not provided with the exception of a handful of URLs. Furthermore, I expect the whitepaper to evolve over time; please see http://www-db.stanford.edu/~widom/xml-whitepaper.html for the latest version.

Patent
03 Jun 1999
TL;DR: In this article, a system and method for managing clinical trial data includes dynamically generating, at a server, a data entry form to be displayed at a client, which is generated dynamically in an SGML-derived language.
Abstract: A system and method for managing clinical trial data includes dynamically generating, at a server, a data entry form to be displayed at a client. The data entry form is generated dynamically in an SGML-derived language. Control elements within the form comprise images which are used to construct the control elements and larger controls. The form is generated from a protocol database and a context received from the client, is populated from the data database, and is published to the client. Templates based on the protocol database comprise several frames, including intermediate frames for displaying frame borders which are non-horizontal and non-vertical. If the trial protocol changes during a trial, the generated form is based on the protocol version active at the time data was entered into the form. Inadvertent use of the application is discouraged by requiring an authentication procedure and displaying a picture of the authenticated user. Furthermore, help is provided by creating a link between the text of each question and information about the question. The source of help may be any or all of a protocol document, an investigative brochure, and a study guide. In addition, a user, upon logging in, is presented with a dashboard screen which provides information or links to information such as trial-related news, alerts, statistical information, progress reports and a list of work to be completed.
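
A toy sketch of the dynamic form generation described, in Python rather than the patent's server environment; the protocol structures and field names are hypothetical:

```python
protocol_versions = {
    1: [("age", "Patient age"), ("bp", "Blood pressure")],
    2: [("age", "Patient age"), ("bp", "Blood pressure"), ("hr", "Heart rate")],
}

def render_form(version, data=None):
    """Generate an SGML-derived (here HTML) data entry form from the protocol DB."""
    data = data or {}
    rows = [
        f'<label>{label}: <input name="{name}" value="{data.get(name, "")}"></label>'
        for name, label in protocol_versions[version]
    ]
    return "<form>\n  " + "\n  ".join(rows) + "\n</form>"

# A record entered under protocol version 1 is redisplayed with version 1's
# form, even after the protocol has moved on to version 2.
print(render_form(1, {"age": "54", "bp": "120/80"}))
```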

Proceedings Article
22 Aug 1999
TL;DR: Using information retrieval, information extraction, and collaborative filtering techniques, these systems are able to enhance corporate knowledge management by overcoming traditional problems of knowledge acquisition and maintenance and associated (human and financial) costs.
Abstract: In this paper we describe two systems designed to connect users to distributed, continuously changing experts and their knowledge. Using information retrieval, information extraction, and collaborative filtering techniques, these systems are able to enhance corporate knowledge management by overcoming traditional problems of knowledge acquisition and maintenance and associated (human and financial) costs. We describe the purpose of these two systems, how they work, and current deployment in a global corporate environment to enable end users to directly discover experts and their knowledge.
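
A minimal illustration of the matching step such systems perform: expert profiles extracted from documents are ranked against a query. The paper's systems combine information retrieval, extraction and collaborative filtering; the sketch below shows only a bare cosine-similarity ranking over term counts, with hypothetical data:

```python
import math
from collections import Counter

experts = {
    "kim": "data warehousing olap schema design",
    "raj": "wireless broadcast protocols energy mobile",
}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def find_experts(query):
    """Rank expert profiles by similarity to the query."""
    q = Counter(query.split())
    scored = [(cosine(q, Counter(text.split())), name) for name, text in experts.items()]
    return sorted(scored, reverse=True)

print(find_experts("mobile broadcast energy"))   # 'raj' ranks first
```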

Patent
27 May 1999
TL;DR: In this article, a data management system has a plurality of data managers and is of a layered architecture; it performs, with a data manager and with user input via an API, a plurality of processes on data residing in heterogeneous data repositories of a computer system, including promotion, check-in, check-out, locking, library searching, setting and viewing process results, tracking aggregations, and managing parts, releases and problem-fix data under management control of a virtual control repository having one or more physical heterogeneous repositories.
Abstract: A Data Management System has a plurality of data managers and is of a layered architecture. The system performs, with a data manager and with user input via an API, a plurality of processes on data residing in heterogeneous data repositories of said computer system, including promotion, check-in, check-out, locking, library searching, setting and viewing process results, tracking aggregations, and managing parts, releases and problem-fix data under management control of a virtual control repository having one or more physical heterogeneous repositories. The system provides for storing, accessing, and tracking data residing in said one or more data repositories managed by the virtual control repository. User interfaces provide a combination of command line, scripts, GUI, menu, Web browser, and other interactive means, which maps the user's view to the PFVL paradigm. Configurable managers query the control repository for the existence of peer managers and provide logic switches to dynamically interact with peers. A control repository access layer provides a common process interface across all managers and utilizes a virtual table paradigm to standardize communication with the control repository. Command translators map the generic control repository accesses into the appropriate format for interfacing with the underlying physical embodiment of the control repository.
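
The check-out/check-in workflow that this and the related patent above describe can be sketched as a small API. Everything below (class and method names, the simplified promotion policy) is hypothetical, since the patents describe behavior, not code:

```python
class VirtualControlRepository:
    """Facade over one or more physical repositories, with simple locking."""
    def __init__(self, *physical_stores):
        self.stores = physical_stores
        self.locks = {}                      # part -> user holding the lock

    def check_out(self, part, user):
        """Lock a part for the user and return its data."""
        if self.locks.get(part, user) != user:
            raise RuntimeError(f"{part} is locked by {self.locks[part]}")
        self.locks[part] = user
        for store in self.stores:            # first store holding the part wins
            if part in store:
                return store[part]
        raise KeyError(part)

    def check_in(self, part, user, data):
        """Store the updated part and release the lock."""
        if self.locks.get(part) != user:
            raise RuntimeError(f"{user} does not hold the lock on {part}")
        self.stores[0][part] = data          # promotion policy simplified here
        del self.locks[part]

repo = VirtualControlRepository({"chip_v1": "netlist"}, {})
print(repo.check_out("chip_v1", "alice"))
repo.check_in("chip_v1", "alice", "netlist-rev2")
```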

Patent
10 Nov 1999
TL;DR: In this article, a system transacting e-commerce and related data management, analysis and reporting includes a database, first and second clients, and first through fourth servers communicatively connected to an internet network.
Abstract: A system transacting e-commerce and related data management, analysis and reporting includes a database, first and second clients, and first through fourth servers communicatively connected to an internet network. The database stores a variety of information relating to customers and has a cost reduction database portion storing cost reduction information. The first and second client have access to the on-line system. The first server maintains a commercial website. The second server supports a consultant interface on the internet network. The third server imports financial information into the database. The fourth server receives purchasing and financial information. The filter generates purchasing options and calculates a purchase price. A first program dynamically displays a cost reduction list. A second program analyzes a customer's purchasing history for cost effectiveness and forecasts future plans for improved purchasing implementation. A default pricing mechanism selectively uses default pricing values to generate a bid.

Journal ArticleDOI
TL;DR: The goal is to design cooperative strategies between server and client to provide access to information in such a way as to minimize energy expenditure by clients.
Abstract: Mobile computing has the potential for managing information globally. Data management issues in mobile computing have received some attention in recent times, and the design of adaptive broadcast protocols has been posed as an important problem. Such protocols are employed by database servers to decide on the content of broadcasts dynamically, in response to client mobility and demand patterns. In this paper we design such protocols and also propose efficient retrieval strategies that may be employed by clients to download information from broadcasts. The goal is to design cooperative strategies between server and client to provide access to information in such a way as to minimize energy expenditure by clients. We evaluate the performance of our protocols both analytically and through simulation.
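
A standard energy-saving device in this literature, which client retrieval strategies like those in the paper build on, is indexed broadcasting: the client reads a small index, dozes through irrelevant slots with its receiver off, and wakes only for the slot carrying its item. A sketch with hypothetical structures:

```python
# One broadcast cycle: an index bucket followed by data buckets.
index = {"weather": 1, "stocks": 2, "traffic": 3}     # item -> slot number
cycle = ["INDEX", "weather-data", "stocks-data", "traffic-data"]

def retrieve(item):
    """Tune in, read the index, doze until the target slot, then download."""
    awake_slots = 1                      # slot 0: read the index (receiver on)
    target = index[item]
    # Slots 1 .. target-1: receiver off (doze mode), so energy is saved.
    awake_slots += 1                     # wake exactly at the target slot
    return cycle[target], awake_slots

data, awake = retrieve("stocks")
print(data, f"- receiver active for {awake} of {len(cycle)} slots")
```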