
Showing papers on "Information integration published in 1996"


Journal ArticleDOI
01 Aug 1996
TL;DR: This paper describes the query reformulation process in SIMS and the operators used in it, and provides precise definitions of the reformulation operators and the rationale behind choosing the specific ones SIMS uses.
Abstract: The standard approach to integrating heterogeneous information sources is to build a global schema that relates all of the information in the different sources, and to pose queries directly against it. The problem is that schema integration is usually difficult, and as soon as any of the information sources change or a new source is added, the process may have to be repeated.
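To make the contrast concrete, here is a minimal Python sketch (entirely hypothetical source names and catalog structure; the paper itself defines reformulation operators over a knowledge-based domain model) of rewriting a domain-level query against per-source descriptions instead of a hand-built global schema: adding a source means adding a catalog entry, not redoing the integration.

```python
# Hypothetical sketch of query reformulation against source descriptions,
# in the spirit of SIMS. Names and structures are illustrative only.

SOURCE_CATALOG = {
    "geo_db":     {"relations": {"airports", "seaports"}},
    "weather_db": {"relations": {"forecasts"}},
}

def reformulate(query_relations):
    """Map each domain-level relation in a query to a source that covers it.
    New sources are handled by extending the catalog, not by rebuilding a
    global schema."""
    plan = {}
    for rel in sorted(query_relations):
        covering = [s for s, meta in SOURCE_CATALOG.items()
                    if rel in meta["relations"]]
        if not covering:
            raise LookupError(f"no source covers relation {rel!r}")
        plan[rel] = covering[0]   # naive pick; SIMS applies cost-based operators
    return plan

print(reformulate({"airports", "forecasts"}))
# {'airports': 'geo_db', 'forecasts': 'weather_db'}
```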

415 citations


Book
01 Jan 1996
TL;DR: The computational models needed to support the mediating functions in this three-layer, mediated architecture are focused on, and initial applications are introduced.
Abstract: This paper describes and classifies methods to transform data to information in a three-layer, mediated architecture. The layers can be characterized from the top down as information-consuming applications, mediators which perform intelligent integration of information (I3), and data, knowledge and simulation resources. The objective of modules in the I3 architecture is to provide end users' applications with information obtained through selection, abstraction, fusion, caching, extrapolation, and pruning of data. The data is obtained from many diverse and heterogeneous sources. The I3 objective requires the establishment of a consensual information system architecture, so that many participants and technologies can contribute. An attempt to provide such a range of services within a single, tightly integrated system is unlikely to survive technological or environmental change. This paper focuses on the computational models needed to support the mediating functions in this architecture and introduces initial applications. The architecture has been motivated in [Wied:92C].
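As a toy rendering of the three layers (invented data and function names; the paper's mediators are far richer), the sketch below shows a mediator module applying selection, fusion, and abstraction between raw resources and a consuming application:

```python
# Toy three-layer sketch: a mediator sits between raw sources and an
# application, applying a few of the paper's operations. All data invented.

sources = {
    "sensor_1": [3.9, 4.1, 4.0, 12.7],   # raw readings, one implausible outlier
    "sensor_2": [4.2, 4.0, 3.8],
}

def mediator(raw):
    """Selection/pruning: drop implausible values. Fusion: merge sources.
    Abstraction: reduce the data to the one number the application wants."""
    selected = [v for vals in raw.values() for v in vals if v < 10]
    return sum(selected) / len(selected)

print(f"application sees: {mediator(sources):.2f}")   # a single fused value
```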

262 citations


Proceedings Article
04 Aug 1996
TL;DR: The architecture provides an expressive language for describing information sources, which makes it easy to add new sources and to model the fine-grained distinctions between their contents, and the query-answering algorithm guarantees that the descriptions of the sources are exploited to access only sources that are relevant to a given query.
Abstract: We describe the architecture and query-answering algorithms used in the Information Manifold, an implemented information gathering system that provides uniform access to structured information sources on the World-Wide Web. Our architecture provides an expressive language for describing information sources, which makes it easy to add new sources and to model the fine-grained distinctions between their contents. The query-answering algorithm guarantees that the descriptions of the sources are exploited to access only sources that are relevant to a given query. Accessing only relevant sources is crucial to scale up such a system to large numbers of sources. In addition, our algorithm can exploit run-time information to further prune information sources and to reduce the cost of query planning.
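The scaling argument can be illustrated with a minimal sketch (Python; the source-description records and fields are invented, standing in for the paper's expressive description language): only sources whose declared contents can intersect the query are ever contacted.

```python
# Illustrative sketch (not the Information Manifold's actual language) of
# pruning sources by their content descriptions before querying.

sources = [
    {"name": "db_movies_us", "relation": "movie",  "country": {"US"}},
    {"name": "db_movies_fr", "relation": "movie",  "country": {"FR"}},
    {"name": "db_reviews",   "relation": "review", "country": {"US", "FR"}},
]

def relevant_sources(relation, country):
    """Keep only sources whose description can intersect the query;
    irrelevant sources are never contacted, which is what lets the
    system scale to large numbers of sources."""
    return [s["name"] for s in sources
            if s["relation"] == relation and country in s["country"]]

print(relevant_sources("movie", "FR"))   # ['db_movies_fr']
```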

235 citations


Book
01 Jan 1996
TL;DR: An edited volume on mechanisms of information integration in the brain, with parts on attention and visual memory, integration in the perception of visual structure, integration across eye fixations, multimodal representation of space, motor control, and language processing.
Abstract: Part 1 Introduction: mechanisms of information integration in the brain, Toshio Inui. Part 2 Association lecture: object tokens, attention, and visual memory, Anne Treisman and Brett DeSchepper. Part 3 Integration in perception of visual structure: a Bayesian framework for the integration of visual modules, Heinrich H. Bulthoff and Alan L. Yuille; stereo and texture cue integration in the perception of planar and curved large real surfaces, John P. Frisby, David Buckley, and Jonathan Freeman; an architecture for rapid, hierarchical structural description, John E. Hummel and Brian J. Stankiewicz. Part 4 Integration over fixations in vision: integration and accumulation of information across saccadic eye movements, David E. Irwin and Rachel V. Andrews; a neuro-physiological distinction between attention and intention, Carol L. Colby. Part 5 Multimodal integration for representation of space: multiple pathways for processing visual space, Michael S.A. Graziano and Charles G. Gross; multimodal spatial constraints on tactile selective attention, Jon Driver and Peter G. Grossenbacher; multimodal spatial attention visualised by motion illusion, Okihide Hikosaka, Satoru Miyauchi, Hiroshige Takeichi, and Shinsuke Shimojo; haptic and visual representations of space, Lawrence E. Marks and Laura Armstrong. Part 6 Integration for motor control: are proprioceptive sensory inputs combined into a "gestalt?", Jean P. Roll, Jean C. Gilhodes, Regine Roll, and Francoise Harlay; integration of extrinsic and motor space, David A. Rosenbaum, Loukia D. Loukopoulos, Sascha E. Englebrecht, Ruud G.J. Meulenbroek, and Jonathan Vaughan; bidirectional theory approach to integration, Mitsuo Kawato; one visual experience, many visual systems, Melvyn A. Goodale. Part 7 Integration in language: integration of multiple sources of information in language processing, Dominic W. Massaro; representation and activation in syntactic processing, Maryellen C. MacDonald; using eye movements to study spoken language comprehension - evidence for visually mediated incremental interpretation, Michael K. Tanenhaus, Michael J. Spivey-Knowlton, Kathleen M. Eberhard, and Julie C. Sedivy; accounting for parsing principles - from parsing preferences to language acquisition, Gerry T.M. Altmann.

203 citations


Journal ArticleDOI
TL;DR: This paper discusses general modeling mechanisms and looks at several issues relating specifically to process modeling for AEC, comparing the approaches of the various core models described, and providing some recommendations.
Abstract: Computer-integrated construction (CIC) and concurrent engineering for architecture, engineering, and construction (AEC) require data standards or common information models through which computer systems can exchange project information. High-level conceptual core models are required as unifying references for the more detailed, application-specific models used for the actual information exchange. A variety of core models have been developed in the area of AEC process information. This paper introduces several such models from a variety of projects. It discusses general modeling mechanisms and looks at several issues relating specifically to process modeling for AEC, comparing the approaches of the various core models described, and providing some recommendations. The overall objective is the eventual emergence of generally accepted standards in this area.

96 citations


Journal ArticleDOI
01 Aug 1996
TL;DR: This work introduces matchmaking, and argues that it permits large numbers of dynamic consumers and providers, operating on rapidly-changing data, to share information more effectively than via traditional methods.
Abstract: Trends such as the massive increase in information available via electronic networks, the use of on-line product data by distributed concurrent engineering teams, and dynamic supply chain integration for electronic commerce are placing severe burdens on traditional methods of information sharing and retrieval. Sources of information are far too numerous and dynamic to be found via traditional information retrieval methods, and potential consumers are seeing increased need for automatic notification services. Matchmaking is an approach based on emerging information integration technologies whereby potential producers and consumers of information send messages describing their information capabilities and needs. These descriptions, represented in rich, machine-interpretable description languages, are unified by the matchmaker to identify potential matches. Based on the matches, a variety of information brokering services are performed. We introduce matchmaking, and argue that it permits large numbers of dynamic consumers and providers, operating on rapidly-changing data, to share information more effectively than via traditional methods. Two matchmakers are described: the SHADE matchmaker, which operates over logic-based and structured text languages, and the COINS matchmaker, which operates over free text. These matchmakers have been used for a variety of applications, most significantly in the domains of engineering and electronic commerce. We describe our experiences with the SHADE and COINS matchmakers, and we outline the major observed benefits and problems of matchmaking.
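A toy matchmaker conveys the protocol (the term-overlap matching below is only a stand-in; SHADE unifies logic-based and structured-text descriptions and COINS matches free text):

```python
# Toy matchmaker sketch: providers advertise capabilities, consumers post
# needs, and the matchmaker pairs them up. Advertisements and the overlap
# threshold are invented for illustration.

advertisements = {
    "cad_service": {"supplies", "gear", "cad", "models"},
    "price_feed":  {"steel", "prices", "daily"},
}

def match(request_terms, threshold=2):
    """Return providers whose advertised description shares at least
    `threshold` terms with the request."""
    return [name for name, terms in advertisements.items()
            if len(terms & request_terms) >= threshold]

print(match({"cad", "models", "bearings"}))   # ['cad_service']
```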

82 citations


Patent
18 Oct 1996
TL;DR: In this article, an information management device is provided which is capable of performing a flexible search of input information, without the need to attach user-specified keywords or search information and without pre-processing of the input information such as character matching processing, natural language processing, statistical processing and recognition processing.
Abstract: An information management device is provided which is capable of performing a flexible search of input information, without the need to attach user-specified keywords or search information and without the need for pre-processing of the input information such as character matching processing, natural language processing, statistical processing and recognition processing. The information management device is used in a network of multiple information processing devices, at least one of which is a mobile information processing device, and is equipped with: an information input unit that inputs information via the mobile information processing device; an attribute value input unit that measures and inputs at least one of the attribute values of the mobile information processing device and the attribute values of the information resulting from its input; an information database that stores the information along with the corresponding attribute information; an information registration unit that registers the information and the attribute values to the information database; a search key input unit that inputs search keys; an attribute database unit that outputs attribute information in response to the search keys; an information search unit that outputs to the information database a search directive that includes at least one piece of attribute information output from the attribute database unit; and an information output unit that outputs information from the information database in response to the search directive.
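Reading past the patent prose, the mechanism amounts to indexing by automatically measured attributes instead of content analysis; a minimal sketch under that reading (all names and attribute values hypothetical):

```python
# Sketch of the patent's idea under simplifying assumptions: items are stored
# with attribute values measured at input time (no user keywords), and search
# goes through attributes rather than content analysis.

records = []

def register(content, **attrs):
    """Store content with attributes measured when it was input
    (e.g. by the mobile device), instead of user-supplied keywords."""
    records.append({"content": content, "attrs": attrs})

def search(**wanted):
    """Return content whose stored attributes match all requested values."""
    return [r["content"] for r in records
            if all(r["attrs"].get(k) == v for k, v in wanted.items())]

register("meeting notes", place="Osaka", month="1996-04")
register("sketch photo",  place="Kyoto", month="1996-04")
print(search(place="Osaka"))   # ['meeting notes']
```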

67 citations


Journal ArticleDOI
TL;DR: An object-oriented model that integrates product and process information to support collaboration among design and construction agents, and two prototype construction agents for construction planning and monitoring project progress are presented.
Abstract: Product and process models provide the necessary information framework for implementing computer systems for the architect/engineering/construction (A/E/C) industry. Although the focus of these models is slightly different, both are needed to provide a foundation for managing project information during the design and construction phases. Design information— “product” information based on building components—needs to be integrated with construction management tasks, the “process” information necessary to build the components. It is therefore important to provide an integrated information model to bridge the gap between product and process information for a construction project. An integrated information model not only encourages those involved in construction to use and add to design information, but also provides richer information representation, better efficiency and data consistency, and the flexibility to support life-cycle information management. The research presented in this paper was performed under the auspices of the collaborative engineering research program at the U.S. Army Corps of Engineers Construction Engineering Research Laboratories (USACERL), which is attempting to redefine existing design processes to make them more collaborative and to develop enabling technologies to support the new process. An important part of this research is the development of an integrated information model that allows agents to communicate/collaborate over the life cycle of the project. This paper presents an object-oriented model that integrates product and process information to support collaboration among design and construction agents, and two prototype construction agents for construction planning and monitoring project progress. The development of these two agents demonstrates the value of using integrated product and process models for managing facility information in the A/E/C industry.

60 citations


Journal ArticleDOI
TL;DR: The authors are creating a prototype set of information services called the California Environmental Digital Information System, which includes a diverse collection of environmental data and follows a client-server architecture.
Abstract: Work-centered digital information services are a set of library-like services meant to address work group needs. Workplace users especially need to access legacy documents and external collections. They also frequently want to retrieve information (rather than documents per se), and they require that digital information systems be integrated into established work practices. Realizing work-centered digital information systems requires a broad technical agenda. Three types of analysis (document image, natural language, and computer vision) are necessary to facilitate information extraction. Users also need new user interface paradigms and authoring tools to better access multimedia information, as well as improved protocols for client-program interaction with repositories (collections). Moreover, entirely new types of documents must be developed to exploit these capabilities. The system developed by the authors follows a client-server architecture, in which the servers are repositories implemented as databases supporting user-defined functions and user-defined access methods. The repositories also serve as indexing servers. The authors are creating a prototype set of information services called the California Environmental Digital Information System, which includes a diverse collection of environmental data.

53 citations


Book
01 Jan 1996
TL;DR: An edited volume spanning classical information theory, physical and biological aspects of information, systems theory, philosophy of science, conceptual design, and linguistics.
Abstract: Part 1 Classical information theory: intensional and extensional meaning of information; long-range correlation and extended memory of symbol sequences. Part 2 Physical aspects: information processing of cosmic signals; relations between correlation and information entropy of quantum-mechanical many-body systems. Part 3 Biology: contextual dependency of information in biology. Part 4 System theoretical aspects: pragmatic information as a unifying concept. Part 5 Philosophy of science: can information be naturalized?; simple and complex systems in science. Part 6 Philosophical issues: complexity, meaning and the Cartesian cut. Part 7 Conceptual design: the genesis of information; remarks about a concept of information. Part 8 Linguistics: situational semantics and computer linguistics. Future aspects: knowledge cities - metropols of an information society.

48 citations


Journal ArticleDOI
01 Jun 1996
TL;DR: It is concluded that Dretske's analysis of knowledge and information provides the most suitable basis for further development.
Abstract: It is argued here that the discipline of information systems does not have a clear and substantive conceptualization of its most fundamental category, namely, information itself. As a first stage in addressing the problem, this paper evaluates a wide range of theories or concepts of information in order to assess their suitability as a basis for information systems. Particular importance is placed on the extent to which they deal with the semantic and pragmatic dimensions of information and its relation to meaning. It is concluded that Dretske's analysis of knowledge and information provides the most suitable basis for further development.

Book
01 Aug 1996
TL;DR: This text details the use of information in organizations and integrates material from library and information science, management, and related disciplines, including the assessment of the value of information.
Abstract: This text details the use of information in organizations and integrates material from library information science, management and related disciplines. Sections cover: information models of integration; information behaviour of managers; and assessing the value of information.

Book
06 Jun 1996
TL;DR: A three-level notion of space serves as a basis of a model for the integration of spatial information that likewise takes into account the geometry, metrics and the topology of geo-objects in 3D-GISs.
Abstract: This work presents a model for the integration of spatial information for 3D Geo-Information Systems (3D-GISs). Such systems execute the integration of spatial information by conversion of vector and raster representations. This, however, leads to conceptual difficulties because of the totally different paradigms. After an introduction to the history and architecture of Geo-Information Systems, this work examines spatial representations in 2D and 3D space regarding their suitability in 3D-GISs. A three-level notion of space serves as a basis of a model for the integration of spatial information. It likewise takes into account the geometry, metrics and the topology of geo-objects.

Journal ArticleDOI
TL;DR: The Cheshire II online catalog system was designed to provide a bridge between the realms of purely bibliographical information and the rapidly expanding full-text and multimedia collections available online.
Abstract: The Cheshire II online catalog system was designed to provide a bridge between the realms of purely bibliographical information and the rapidly expanding full-text and multimedia collections available online. It is based on a number of national and international standards for data description, communication, and interface technology. The system uses a client-server architecture, with X window clients communicating with an SGML-based probabilistic search engine via the Z39.50 information retrieval protocol.

Dissertation
01 Jan 1996
TL;DR: This thesis shows that Situation Theory best represents the qualitative features of information in an information retrieval system; combined with the Dempster-Shafer Theory of Evidence, the resulting model is the first of its kind to capture these features within a uniform framework.
Abstract: Current information retrieval models only offer simplistic and specific representations of information. Therefore, there is a need for the development of a new formalism able to model information retrieval systems in a more generic manner. In 1986, Van Rijsbergen suggested that such formalisms can be both appropriately and powerfully defined within a logic. The resulting formalism should capture information as it appears in an information retrieval system, and also in any of its inherent forms. The aim of this thesis is to understand the nature of information in information retrieval, and to propose a logic-based model of an information retrieval system that reflects this nature. The first objective of this thesis is to identify essential features of information in an information retrieval system. These are: flow, intensionality, partiality, structure, significance, and uncertainty. It is shown that the first four features are qualitative, whereas the last two are quantitative, and that their modelling requires different frameworks: a theory of information, and a theory of uncertainty, respectively. The second objective of this thesis is to determine the appropriate framework for each type of feature, and to develop a method to combine them in a consistent fashion. The combination is based on the Transformation Principle. Many specific attempts have been made to derive an adequate definition of information. The one adopted in this thesis is based on that of Dretske, Barwise, and Devlin, who claimed that there is a primitive notion of information in terms of which a logic can be defined, and subsequently developed a theory of information, namely Situation Theory. Their approach was in accordance with Van Rijsbergen's suggestion of a logic-based formalism for modelling an information retrieval system. This thesis shows that Situation Theory is best at representing all the qualitative features. Regarding the modelling of the quantitative features of information, this thesis shows that the framework that models them best is the Dempster-Shafer Theory of Evidence, together with the notion of refinement, later introduced by Shafer. The third objective of this thesis is to develop a model of an information retrieval system based on Situation Theory and the Dempster-Shafer Theory of Evidence. This is done in two steps. First, the unstructured model is defined, in which the structure and the significance of information are not accounted for. Second, the unstructured model is extended into the structured model, which incorporates the structure and the significance of information. This strategy is adopted because it enables the careful representation of the flow of information to be performed first. The final objective of the thesis is to implement the model and to perform empirical evaluation to assess its validity. The unstructured and the structured models are implemented based on an existing on-line thesaurus, known as WordNet. The experiments performed to evaluate the two models use the National Physical Laboratory standard test collection. The experimental performance obtained was poor, because it was difficult to extract the flow of information from the document set. This was mainly due to the data used in the experimentation, which was inappropriate for the test collection. However, this thesis shows that if more appropriate data, for example, indexing tools and thesauri, were available, better performance would be obtained.
The conclusion of this work was that Situation Theory, combined with the Dempster-Shafer Theory of Evidence, allows the appropriate and powerful representation of several essential features of information in an information retrieval system. Although its implementation presents some difficulties, the model is the first of its kind to capture, in a general manner, these features within a uniform framework. As a result, it can be easily generalized to many types of information retrieval systems (e.g., interactive, multimedia systems), or many aspects of the retrieval process (e.g., user modelling).
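The quantitative half of the thesis rests on Dempster's rule of combination; the sketch below implements the standard textbook rule (not the thesis's full model with refinements), with focal elements as frozensets over a frame of discernment and invented mass values:

```python
# Minimal implementation of Dempster's rule of combination, the core of
# Dempster-Shafer evidence theory.

def combine(m1, m2):
    """Combine two mass functions: mass landing on conflicting (empty)
    intersections is removed and the remainder renormalized."""
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

rel, irr = frozenset({"relevant"}), frozenset({"irrelevant"})
theta = rel | irr                        # the whole frame (ignorance)
m_index  = {rel: 0.6, theta: 0.4}        # e.g. evidence from an index term
m_struct = {rel: 0.3, irr: 0.2, theta: 0.5}  # e.g. evidence from structure
print(combine(m_index, m_struct))
# {rel: ~0.68, irr: ~0.09, theta: ~0.23}
```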

Patent
17 Jan 1996
TL;DR: This paper proposes an information navigation system based on an information resource topology in which each information resource is associated with at least one term combination and a set of links.
Abstract: An information navigation system based on an information resource topology among information resources, in which each information resource is associated with at least one term combination and a set of links, where each term combination specifies a set of terms describing each information resource and each link links information resources with matching term combinations. For every existing term combination, the set of information resources that contain that term combination forms a cluster, where a cluster is defined as a set of information resources for which there exists at least one path between every pair of information resources in the set such that the path contains only information resources from the set, and a path is defined as a series of information resources connected through links. The information navigation functions, including gathering, searching, and topology management, can be realized on this information resource topology.
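Under a simplifying reading of the claim (resources as nodes, links as undirected edges, all names invented), a cluster is a connected component restricted to the resources sharing a term combination:

```python
# Sketch of the patent's "cluster" notion: information resources sharing a
# term combination cluster together when mutually reachable through links
# that stay inside the set.

from collections import defaultdict

links = {("a", "b"), ("b", "c"), ("d", "e")}   # hypothetical resource links
members = {"a", "b", "c", "d"}                  # resources holding one term combo

def clusters(members, links):
    """Connected components of `members`, using only links between members."""
    adj = defaultdict(set)
    for u, v in links:
        if u in members and v in members:
            adj[u].add(v)
            adj[v].add(u)
    seen, out = set(), []
    for start in members:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        out.append(comp)
    return out

print(clusters(members, links))   # e.g. [{'a', 'b', 'c'}, {'d'}]
```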

Book ChapterDOI
01 Jun 1996
TL;DR: A hybrid approach combining the advantages of both the federated and multidatabase techniques, which is believed to provide the most feasible avenue for integration of large scale information sources.
Abstract: Current methodologies for information integration are inadequate for solving the problem of integration of large scale, distributed information sources (e.g. databases, free-form text, simulation etc.). The existing approaches are either too restrictive and complicated, as in the “federated” (global model) approach, or do not provide the necessary functionality, as in the “multidatabase” approach. We propose a hybrid approach combining the advantages of both the federated and multidatabase techniques, which we believe provides the most feasible avenue for large scale integration. Under our architecture, the individual data site administrators provide an augmented export schema specifying knowledge about the sources of data (where data exists), their structure (underlying data model or file structure), their content (what data exists), and their relationships (how the data relates to other information in its domain). The augmented export schema from each information source provides an intelligent agent, called the “mediator”, with knowledge which can be used to infer information on some of the existing inter-system relationships. This knowledge can then be used to generate a partially integrated, global view of the data.
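A hypothetical shape for such an augmented export schema, and a mediator step that collects the declared contents and inter-source relationships into a partial global view (all field names are illustrative, not from the paper):

```python
# Invented shape of "augmented export schemas" and a mediator that derives a
# partially integrated global view from them.

export_schemas = [
    {"site": "hr_db",   "model": "relational", "content": {"employee"},
     "relates_to": {"employee.dept": "org_db.department.id"}},
    {"site": "org_db",  "model": "relational", "content": {"department"}},
    {"site": "docs_fs", "model": "free_text",  "content": {"policy_memo"}},
]

def global_view(schemas):
    """Collect what exists where, plus the declared cross-site relationships
    the mediator can use to build a partially integrated view."""
    where = {c: s["site"] for s in schemas for c in s["content"]}
    joins = {k: v for s in schemas
             for k, v in s.get("relates_to", {}).items()}
    return where, joins

print(global_view(export_schemas))
# ({'employee': 'hr_db', 'department': 'org_db', 'policy_memo': 'docs_fs'},
#  {'employee.dept': 'org_db.department.id'})
```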

Journal ArticleDOI
TL;DR: In this article, the authors describe the result of a survey carried out in manufacturing environments to find out future trends and current implementation difficulties of quality management schemes, and find that the future of manufacturing systems lies within total information integration.
Abstract: This paper describes the result of a survey carried out in manufacturing environments to find out future trends and current implementation difficulties of quality management schemes. There is some evidence from the survey that the future of manufacturing systems lies within total information integration. Some of the companies have already achieved partial integration and many are considering establishing a totally integrated understanding of management. A total quality management (TQM) philosophy can play a major role in leading towards such total integration, which would probably result in new forms of management. Currently, it is also interesting to find that the required quality data are gathered on the shopfloor and then processed by middle management, but do not influence top management quality policies as much as might be expected. However, TQM based on continuous improvement is seen as a competitive advantage, although many companies and industries interpret it differently. It seems that the new hi...

Proceedings ArticleDOI
15 Sep 1996
TL;DR: An integrated solution for computerized distribution planning in a geographic information system (GIS) context, a synergy that magnifies the data accessibility between load forecasting and feeder planning tools, scaling the traditional gap between long-term and short-term distribution system planning.
Abstract: Electric distribution planning involves a great deal of information, residing in different systems. Information sharing among these systems is essential in improving the efficiency and quality of distribution system planning. This paper presents an integrated solution for computerized distribution planning in a geographic information system (GIS) context, a synergy that magnifies the data accessibility between load forecasting and feeder planning tools, scaling the traditional gap between long-term and short-term distribution system planning. A stochastic cell-based load forecasting algorithm is first developed, followed by an optimal load allocation module, NODESIM, which spatially relates the load growth to the vector-based circuit topology from feeder planning tools. NODESIM enables the multi-year distribution system studies in a GIS context, to best assist utility planners in deciding where and when the customers will grow and how to expand the system facilities to meet the demand growth.

Proceedings ArticleDOI
08 Sep 1996
TL;DR: It is discussed how fuzzy set techniques can contribute to most of the information engineering tasks due to the fuzzy set representation capabilities and their computational facilities.
Abstract: Information engineering constitutes a variety of tasks related to: 1) information processing (data clarification, enhancing, classification, fusion, summarization, and modelling); 2) information retrieving (through querying and reasoning); and 3) information exploitation (for making decision, designing and optimizing). These tasks are becoming increasingly important with the confluence of computer and communication technologies, e.g. on the Internet. Fuzzy set methods offer useful tools for handling these tasks due to their ability to provide a qualitative interface with data and to model graded notions such as uncertainty, preference and similarity, which play a key role in reasoning and decision. We discuss how fuzzy set techniques can contribute to most of the information engineering tasks due to the fuzzy set representation capabilities and their computational facilities. The paper emphasizes the centrality of information and points out the role of fuzzy sets in different information engineering tasks and application areas.
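A small example of the "graded notions" point (membership functions and numbers invented): fuzzy predicates let a query such as "cheap AND recent" rank items by degree of satisfaction instead of partitioning them in or out.

```python
# Toy fuzzy-set ranking: graded membership instead of crisp filtering.

def cheap(price):      # membership in "cheap", linear between 500 and 100
    return max(0.0, min(1.0, (500 - price) / 400))

def recent(age_days):  # membership in "recent", linear over one year
    return max(0.0, min(1.0, (365 - age_days) / 365))

items = [("A", 120, 30), ("B", 450, 10), ("C", 300, 400)]

# Fuzzy AND as min(), a standard t-norm.
ranked = sorted(items, key=lambda it: -min(cheap(it[1]), recent(it[2])))
for name, price, age in ranked:
    print(name, round(min(cheap(price), recent(age)), 2))
# A 0.92 / B 0.12 / C 0.0
```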

01 Jan 1996
TL;DR: A cognitive framework is sketched that permits the analysis of central concepts of the information retrieval scenario, defining information as an ordered pair representing the difference between two knowledge states, and some critical features of this concept are discussed.
Abstract: Information overloading is one of the major problems of the Information Society, and it is experienced by many people. Information retrieval is aimed at solving this problem, and hence it is a crucial discipline of this new era. Despite its centrality, information retrieval has its own shortcomings: for instance, most Internet users have discovered with excitement the information retrieval systems available on the Internet (the so-called 'search engines'), but they have also experienced how often the performance of such services is too low, very far from an ideal 100%. The lack of a formal account is probably one of the most evident of these shortcomings: concepts like information, information need and relevance are neither well understood nor formally defined. This paper sketches a cognitive framework that permits the analysis of these central concepts of the information retrieval scenario. The cognitive framework consists of concepts such as cognitive agents acting in the world, knowledge states possessed by the cognitive agents, transitions among knowledge states, and inferences. On the basis of such a framework, information is formally defined as an ordered pair representing the difference between two knowledge states; this definition makes it possible to clarify the distinction among data, knowledge and information and to discuss the issue of the subjectiveness of information. On this ground, the concept of information need is examined: it is defined, it is studied in the context of the interaction between an information retrieval system and a user, and the well known classification into verificative, conscious topical and muddled needs is analyzed. On the basis of the above definitions of information and information need, relevance is formally defined, and some critical features of this concept are discussed.
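The paper's definition can be read off almost literally in code; the sketch below deliberately simplifies knowledge states to sets of propositions (the paper's states are richer cognitive constructs):

```python
# Information as the difference between two knowledge states, rendered as an
# ordered pair over set-based states. A deliberate simplification.

def information(k_before, k_after):
    """Return the ordered pair (gained, retracted) between two states."""
    return (k_after - k_before, k_before - k_after)

k1 = {"sky is blue", "IR is easy"}
k2 = {"sky is blue", "IR is hard", "relevance is subjective"}
gained, retracted = information(k1, k2)
print(gained)      # {'IR is hard', 'relevance is subjective'}
print(retracted)   # {'IR is easy'}
```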

Patent
21 Mar 1996
TL;DR: In this article, a sort information integration part 13 extracts the category names (sort items), link names and link destination document IDs (URLs) out of the acquired index pages, integrates them, statistically analyzes the integrated information by category name and document ID, and makes clear the similarity relations among categories and link destinations.
Abstract: PROBLEM TO BE SOLVED: To improve the retrieval efficiency of hypertext information that is sorted and systematized from different points of view, and also to improve the operability in a retrieval mode. SOLUTION: Plural index pages (sort information) acquired from hypertext document storage parts 11a, 11b...11n are inputted to a sort information integration part 13. The part 13 extracts the category names (sort items), link names and link destination document IDs (URLs) out of those index pages and integrates them. At the same time, the part 13 statistically analyzes the integrated information by category name and document ID and makes clear the similarity relations among categories and link destination documents. Then, the part 13 shows the similar categories and link names on the screen of a hypertext display part 12 in an integrated way, based on the degrees of similarity.

Proceedings ArticleDOI
25 Aug 1996
TL;DR: The model consists of multiple processing modules and an MRF (Markov random field)-based hypothesis network for integration of multiple sources of information; evaluation experiments show that the present system recognizes notes better than a system based on a singly connected Bayesian hypothesis network.
Abstract: This paper describes the process model for a system that recognizes the rhythm, chords, and source-separated musical notes in monaural music signals. The model consists of multiple processing modules and an MRF (Markov random field)-based hypothesis network for integration of multiple sources of information. Because the MRF enables information to be integrated on a multiply connected hypothesis network, the results of evaluation experiments show that the present system recognizes notes better than does a system based on a singly connected Bayesian hypothesis network.
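A rough sketch of the integration step (invented numbers and update rule; the paper's MRF formulation is more principled): hypotheses supported by compatible neighbors on the network are reinforced over iterations.

```python
# Toy relaxation on a hypothesis network, in the spirit of MRF-based
# integration. Supports and compatibilities are made up for illustration.

support = {"C4": 0.6, "E4": 0.4}       # prior support from processing modules
compat = {("C4", "E4"): 1.2}           # >1 mutually reinforcing, <1 inhibiting

def relax(support, compat, rounds=5):
    s = dict(support)
    for _ in range(rounds):
        for (a, b), w in compat.items():
            # each hypothesis is boosted (or damped) by compatible neighbors
            s[a] *= 1 + 0.1 * (w - 1) * s[b]
            s[b] *= 1 + 0.1 * (w - 1) * s[a]
        z = sum(s.values())            # renormalize to keep a distribution
        s = {k: v / z for k, v in s.items()}
    return s

print(relax(support, compat))
```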

Proceedings ArticleDOI
21 Nov 1996
TL;DR: An updated version of this rule-based expert system which performs high level data fusion for human decision support is described, as well as some of the measures of information which the authors have devised for assessing the performance of the system.
Abstract: There are various architectures for data fusion. Given that the purpose of a data fusion system is to combine related data from multiple sources to provide enhanced information, one way of assessing the performance is to measure the enhancement or degradation in the information provided by the system and this is the point of view which we take in this paper. In order to achieve this goal, methods for measuring the information provided by the output of the system are required. The most commonly used method employs the relationship which exists between measures of information and measures of uncertainty. By convention, measures of uncertainty for a system can also be treated as measures of the potential information in the system. Therefore, the premise is that a decrease in uncertainty amounts to a decrease in the potential information, so that the information yielded from the system increases. The authors previously (1996) introduced concepts for a rule-based expert system which performs high level data fusion for human decision support. In this paper, we shall describe an updated version of this system, as well as some of the measures of information which we have devised for assessing the performance of the system. Finally, we shall discuss ways of combining these measures of information to gauge the enhancement or degradation in the information provided by the system.

Proceedings ArticleDOI
TL;DR: An overview of the existing measures of uncertainty and information is given, and some new measures for the various levels of the data fusion process are proposed.
Abstract: In many commercial and military activities such as manufacturing, robotics, surveillance, target tracking and military command and control, information may be gathered by a variety of sources. The types of sources which may be used cover a broad spectrum and the data collected may be either numerical or linguistic in nature. Data fusion is the process in which data from multiple sources are combined to provide enhanced information quality and availability over that which is available from any individual source. The question is how to assess these enhancements. Using the U.S. JDL Model, the process of data fusion can be divided into several distinct levels. The first three levels are object refinement, situation refinement and threat refinement. Finally, at the fourth level (process refinement) the performance of the system is monitored to enable product improvement and sensor suite management. This monitoring includes the use of measures of information from the realm of generalized information theory to assess the improvements or degradation due to the fusion processing. The premise is that decreased uncertainty equates to increased information. At each level, the uncertainty may be represented in different ways. In this paper we give an overview of the existing measures of uncertainty and information, and propose some new measures for the various levels of the data fusion process.
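The premise that decreased uncertainty equates to increased information is easiest to see with Shannon entropy, one of the classical measures such surveys cover (the probabilities below are invented):

```python
# Shannon entropy as a measure of uncertainty; the drop in entropy after
# fusing sensor reports is the information gained.

from math import log2

def entropy(p):
    return -sum(x * log2(x) for x in p if x > 0)

prior     = [0.25, 0.25, 0.25, 0.25]   # four equally likely target types
posterior = [0.70, 0.20, 0.05, 0.05]   # after fusing two sensor reports

print(f"uncertainty before: {entropy(prior):.2f} bits")       # 2.00
print(f"uncertainty after:  {entropy(posterior):.2f} bits")   # ~1.26
print(f"information gained: {entropy(prior) - entropy(posterior):.2f} bits")
```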

Journal ArticleDOI
TL;DR: An information strategy has been developed and consists of the following stages: definition of the problem, its structure and sub-problems, acquisition of data by targeted processing of computer-supported bibliographic, numeric, textual and graphic databases, analysis of data and building of specialized in-house information systems.
Abstract: The success of biotechnological research, development and marketing depends to a large extent on the international transfer of information and on the ability to organise biotechnology information into knowledge. To increase the efficiency of information-based approaches, an information strategy has been developed and consists of the following stages: definition of the problem, its structure and sub-problems; acquisition of data by targeted processing of computer-supported bibliographic, numeric, textual and graphic databases; analysis of data and building of specialized in-house information systems; information processing for structuring data into systems, recognition of trends and patterns of knowledge, particularly by information synthesis using the concept of information density; design of research hypotheses; testing hypotheses in the laboratory and/or pilot plant; repeated evaluation and optimization of hypotheses by information methods and testing them by further laboratory work. The information approaches are illustrated by examples from the university-industry joint projects in biotechnology, biochemistry and agriculture.

01 Jan 1996
TL;DR: This work considers how knowledge from the area of information seeking behaviour can be used when information systems are developed.
Abstract: This work considers how knowledge from the area of information seeking behaviour can be used when information systems are to be developed. It emphasizes a holistic view of how information ...

Proceedings ArticleDOI
26 Feb 1996
TL;DR: The framework for vertical information management developed in response to "queries from outer space" is presented; it supports the specification of the request for high-level information, the extraction of relevant data, and the derivation of the high-level information.
Abstract: Decision makers need high-level information on a wide variety of topics. In particular they are not constrained by the current contents of available information sources. They often ask for data that is not present. On the other hand there is often a large body of relevant, detailed data that could be usefully summarized or abstracted for the decision maker. We articulate major issues that arise with these "queries from outer space" and present the framework for vertical information management developed in response to these issues. The term vertical refers to the delivery of information upwards to decision makers at higher and higher levels of the management hierarchy. The framework supports the specification of: the request for high-level information; the extraction of relevant data; and the derivation of the high-level information.
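In miniature (invented records and field names), the extract-and-derive step looks like an aggregation the source never stored directly:

```python
# Toy "vertical" summarization: detailed records are abstracted upward into
# the figure a decision maker asked for. Data is invented.

from collections import defaultdict

detail = [  # low-level records in an available source
    {"region": "west", "quarter": "Q1", "sales": 120},
    {"region": "west", "quarter": "Q2", "sales": 90},
    {"region": "east", "quarter": "Q1", "sales": 200},
]

def derive(records, group_by, measure):
    """Extract the relevant fields and derive the high-level summary."""
    totals = defaultdict(int)
    for r in records:
        totals[r[group_by]] += r[measure]
    return dict(totals)

# "Total sales by region?" — a figure the source never stored directly.
print(derive(detail, "region", "sales"))   # {'west': 210, 'east': 200}
```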

01 Jan 1996
TL;DR: A class of intelligent agents called Information Integration Agents, which are particularly well suited to application on the Internet, and can be used to satisfy a wide range of needs, are presented.
Abstract: This paper presents a class of intelligent agents called Information Integration Agents. These agents are particularly well suited to application on the Internet, and can be used to satisfy a wide range of needs. We discuss two prototype Information Integration Agents that have been deployed on the Internet. One, the BargainFinder agent, has been active for over 9 months amidst considerable interest from Internet users and the mass media. BargainFinder performs comparison price shopping among a number of on-line CD stores. The second, the NewsFinder agent, is currently being tested internally. NewsFinder retrieves on-line news articles, matches them against user profiles, and transmits them via portable ubiquitous displays (currently alphanumeric pagers). We discuss these agents and the class of Information Integration Agents in general, and conjecture that agents of this sort will be extremely valuable to a broad spectrum of Internet users.
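The comparison-shopping core of a BargainFinder-style agent fits in a few lines once the store interfaces are faked as in-memory tables (real agents query live on-line stores; all store names and prices below are invented):

```python
# Toy comparison-shopping agent: query several fake stores for the same CD
# and return offers sorted cheapest-first.

STORES = {
    "store_a": {"OK Computer": 13.99, "Kid A": 11.49},
    "store_b": {"OK Computer": 12.49},
    "store_c": {"Kid A": 10.99, "OK Computer": 14.25},
}

def comparison_shop(title):
    """Ask every store for a price and sort the offers cheapest-first."""
    offers = [(price, store) for store, stock in STORES.items()
              if (price := stock.get(title)) is not None]
    return sorted(offers)

for price, store in comparison_shop("OK Computer"):
    print(f"{store}: ${price:.2f}")
# store_b: $12.49 / store_a: $13.99 / store_c: $14.25
```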

Book ChapterDOI
20 May 1996
TL;DR: An integration process that discovers similarities between the schemas under study; it works on object-oriented schemas and uses a thesaurus model drawn from linguistics, the domain dealing with the meaning of words.
Abstract: The complexity of databases is increasing continually, and the work of several designers becomes necessary. It is therefore worthwhile to improve the design process with a new phase devoted to information integration, in order to take the designers' viewpoints into account. In this paper, we present an integration process which allows similarities to be discovered between the schemas under study. It works on object-oriented schemas. Whenever possible, we propose several results for the integration of two given schemas. This makes it possible to choose, amongst the result schemas, the one best adapted to the working context. When design schemas are being integrated, not only the structural part but also, above all, the semantic part of the schemas is studied. To represent the semantics of the words used in a schema, we have defined a thesaurus model drawn from the domain dealing with the meaning of words: linguistics.