
Showing papers on "Information integration published in 2006"


Journal ArticleDOI
TL;DR: In this article, a review of data mining applications in manufacturing engineering is presented, in particular production processes, operations, fault detection, maintenance, decision support, and product quality improvement.
Abstract: The paper reviews applications of data mining in manufacturing engineering, in particular production processes, operations, fault detection, maintenance, decision support, and product quality improvement. Customer relationship management, information integration aspects, and standardization are also briefly discussed. This review is focused on demonstrating the relevance of data mining to the manufacturing industry, rather than discussing the data mining domain in general. The volume of general data mining literature makes it difficult to gain a precise view of a target area such as manufacturing engineering, which has its own particular needs and requirements for mining applications. This review reveals progressive applications in addition to existing gaps and less considered areas such as manufacturing planning and shop floor control.

499 citations


Posted Content
TL;DR: It is argued that traditional approaches to measuring the amount of information in a choice set fail to account for important structural dimensions of information and may therefore incorrectly predict information overload.
Abstract: Today's consumers are often overloaded with information. This article argues that traditional approaches to measuring the amount of information in a choice set fail to account for important structural dimensions of information and may therefore incorrectly predict information overload. Two experiments show that a structural approach to measuring information, such as information theory, is better able to predict information overload and that information structure also has important implications for information acquisition. A Monte Carlo simulation, in which decision rules are applied to multiple information environments, shows that the amount of information processing mediates the relationship between information structure and information overload.
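
A structural, information-theoretic measure of this kind can be sketched briefly. The snippet below is a minimal illustration in the spirit of the approach, not the authors' actual measure, and all names and data are hypothetical: it sums the Shannon entropy of each attribute across the alternatives in a choice set, so evenly spread attribute levels carry more information than redundant ones.

from collections import Counter
from math import log2

def attribute_entropy(values):
    # Shannon entropy (in bits) of one attribute's levels across alternatives.
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def choice_set_information(alternatives):
    # Sum of per-attribute entropies for a list of attribute dictionaries.
    attrs = alternatives[0].keys()
    return sum(attribute_entropy([a[k] for a in alternatives]) for k in attrs)

# Two four-option sets of the same size but with different structure:
uniform = [{"price": p, "brand": b}
           for p, b in [(9, "A"), (19, "B"), (29, "C"), (39, "D")]]
redundant = [{"price": 9, "brand": "A"}] * 3 + [{"price": 19, "brand": "B"}]
print(choice_set_information(uniform))    # 4.0 bits
print(choice_set_information(redundant))  # ~1.62 bits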

292 citations


Journal ArticleDOI
TL;DR: This survey describes and compares the main approaches to IE and the different ML techniques used to achieve Adaptive IE technology.
Abstract: The growing availability of online textual sources and the potential number of applications of knowledge acquisition from textual data have led to an increase in Information Extraction (IE) research. Some examples of these applications are the generation of databases from documents, as well as the acquisition of knowledge useful for emerging technologies like question answering, information integration, and others related to text mining. However, one of the main drawbacks of applying IE is its intrinsic domain dependence. To reduce the high cost of manually adapting IE applications to new domains, the research community has experimented with different Machine Learning (ML) techniques. This survey describes and compares the main approaches to IE and the different ML techniques used to achieve Adaptive IE technology.

217 citations


Journal IssueDOI
TL;DR: The authors examine the three current interdisciplinary approaches to conceptualizing how humans have sought information, including the everyday life information seeking–sense-making approach, the information foraging approach, and the problem–solution perspective on information seeking, and propose an initial model that integrates these approaches with information use.
Abstract: For millennia humans have sought, organized, and used information as they learned and evolved patterns of human information behaviors to resolve their human problems and survive. However, despite the current focus on living in an “information age,” we have a limited evolutionary understanding of human information behavior. In this article the authors examine the three current interdisciplinary approaches to conceptualizing how humans have sought information: (a) the everyday life information seeking–sense-making approach, (b) the information foraging approach, and (c) the problem–solution perspective on information seeking. In addition, because the role of information use in information behavior has lacked clarity, a fourth approach is provided based on a theory of information use. The proposed use theory starts from the evolutionary psychology notion that humans are able to adapt to their environment and survive because of their modular cognitive architecture. Finally, the authors begin the process of conceptualizing these diverse approaches, and their various aspects and elements, within an integrated model that incorporates information use. An initial integrated model of these different approaches with information use is proposed. © 2006 Wiley Periodicals, Inc.

203 citations


Journal Article
TL;DR: In this paper, the authors propose a process which semi-automatically lifts metamodels into ontologies by making implicit concepts in the metamodel explicit in the ontology.
Abstract: The use of different modeling languages in software development makes their integration a must. Most existing integration approaches are metamodel-based, with these metamodels representing both an abstract syntax of the corresponding modeling language and a data structure for storing models. This implementation-specific focus, however, does not make certain language concepts explicit, which can complicate integration tasks. Hence, we propose a process which semi-automatically lifts metamodels into ontologies by making implicit concepts in the metamodel explicit in the ontology. Thus, a shift of focus is made from the implementation of a certain modeling language towards the explicit reification of the concepts covered by this language. This allows matching on a solely conceptual level, which helps to achieve better results in terms of mappings that can in turn be a basis for deriving implementation-specific transformation code.

155 citations


Proceedings ArticleDOI
01 Sep 2006
TL;DR: This work presents a new formalism for schema mapping that extends existing formalisms in two significant ways: nested mappings that allow nesting and correlation of mappings, and the ability to express, in a declarative way, grouping and data-merging semantics.
Abstract: Many problems in information integration rely on specifications, called schema mappings, that model the relationships between schemas. Schema mappings for both relational and nested data are well-known. In this work, we present a new formalism for schema mapping that extends these existing formalisms in two significant ways. First, our nested mappings allow for nesting and correlation of mappings. This results in a natural programming paradigm that often yields more accurate specifications. In particular, we show that nested mappings can naturally preserve correlations among data that existing mapping formalisms cannot. We also show that using nested mappings for purposes of exchanging data from a source to a target will result in less redundancy in the target data. The second extension to the mapping formalism is the ability to express, in a declarative way, grouping and data merging semantics. This semantics can be easily changed and customized to the integration task at hand. We present a new algorithm for the automatic generation of nested mappings from schema matchings (that is, simple element-to-element correspondences between schemas). We have implemented this algorithm, along with algorithms for the generation of transformation queries (e.g., XQuery) based on the nested mapping specification. We show that the generation algorithms scale well to large, highly nested schemas. We also show that using nested mappings in data exchange can drastically reduce the execution cost of producing a target instance, particularly over large data sources, and can also dramatically improve the quality of the generated data.
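
The grouping and data-merging semantics can be illustrated with a toy example. The sketch below is only a procedural stand-in for the paper's declarative formalism, with invented names: a flat, uncorrelated mapping would emit one (dept, employee) pair per source row, duplicating each department in the target, whereas grouping by the parent key produces each nested target record exactly once.

from itertools import groupby
from operator import itemgetter

source_rows = [
    {"dept": "R&D",   "emp": "alice"},
    {"dept": "R&D",   "emp": "bob"},
    {"dept": "Sales", "emp": "carol"},
]

def nest(rows, parent_key, child_key):
    # Group flat source rows under their parent key to build nested targets,
    # avoiding the duplicated parent data a flat mapping would produce.
    rows = sorted(rows, key=itemgetter(parent_key))
    return [
        {parent_key: k, child_key + "s": [r[child_key] for r in grp]}
        for k, grp in groupby(rows, key=itemgetter(parent_key))
    ]

print(nest(source_rows, "dept", "emp"))
# [{'dept': 'R&D', 'emps': ['alice', 'bob']}, {'dept': 'Sales', 'emps': ['carol']}]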

146 citations


Journal ArticleDOI
01 Nov 2006
TL;DR: The main components of audio-visual biometric systems are described, existing systems and their performance are reviewed, and future research and development directions in this area are discussed.
Abstract: Biometric characteristics can be utilized in order to enable reliable and robust-to-impostor-attacks person recognition. Speaker recognition technology is commonly utilized in various systems enabling natural human computer interaction. The majority of speaker recognition systems rely only on acoustic information, ignoring the visual modality. However, visual information conveys correlated and complementary information to the audio information, and its integration into a recognition system can potentially increase the system's performance, especially in the presence of adverse acoustic conditions. Acoustic and visual biometric signals, such as the person's voice and face, can be obtained using unobtrusive and user-friendly procedures and low-cost sensors. Developing unobtrusive biometric systems makes biometric technology more socially acceptable and accelerates its integration into everyday life. In this paper, we describe the main components of audio-visual biometric systems, review existing systems and their performance, and discuss future research and development directions in this area.

142 citations


Journal ArticleDOI
TL;DR: Of particular interest are the challenges associated with the design of multidisciplinary and multiscale systems; these challenges and opportunities are examined in the context of materials design.
Abstract: The intent in robust design is to improve the quality of products and processes by reducing their sensitivity to variations, thereby reducing the effects of variability without removing its sources. Robust design is especially useful for integrating information from designers working at multiple length and time scales. Inevitably this involves the integration of uncertain information. This uncertainty derives from many sources, and robust design may be classified accordingly: uncertainty in noise or environmental and other noise factors (type I); uncertainty in design variables or control factors (type II); and uncertainty introduced by modeling methods (type III). Each of these types of uncertainty can be mitigated by robust design. Of particular interest are the challenges associated with the design of multidisciplinary and multiscale systems; these challenges and opportunities are examined in the context of materials design.

139 citations


Journal ArticleDOI
TL;DR: This article develops the DCM framework, which consists of data preprocessing, dual mining of positive and negative correlations, and matching construction, and develops a novel “ensemble” approach, which creates an ensemble of DCM matchers by randomizing the schema data into many trials and aggregating their ranked results by majority voting.
Abstract: To enable information integration, schema matching is a critical step for discovering semantic correspondences of attributes across heterogeneous sources. While complex matchings are common, most existing techniques focus on simple 1:1 matchings because of the far more complex search space of complex matchings. To tackle this challenge, this article takes a conceptually novel approach by viewing schema matching as correlation mining, for our task of matching Web query interfaces to integrate the myriad databases on the Internet. On this “deep Web,” query interfaces generally form complex matchings between attribute groups (e.g., {author} corresponds to {first name, last name} in the Books domain). We observe that the co-occurrence patterns across query interfaces often reveal such complex semantic relationships: grouping attributes (e.g., {first name, last name}) tend to be co-present in query interfaces and are thus positively correlated. In contrast, synonym attributes are negatively correlated because they rarely co-occur. This insight enables us to discover complex matchings by a correlation mining approach. In particular, we develop the DCM framework, which consists of data preprocessing, dual mining of positive and negative correlations, and finally matching construction. We evaluate the DCM framework on manually extracted interfaces and the results show good accuracy for discovering complex matchings. Further, to automate the entire matching process, we incorporate automatic techniques for interface extraction. Executing the DCM framework on automatically extracted interfaces, we find that the inevitable errors in automatic interface extraction may significantly affect the matching result. To make the DCM framework robust against such “noisy” schemas, we integrate it with a novel “ensemble” approach, which creates an ensemble of DCM matchers by randomizing the schema data into many trials and aggregating their ranked results by majority voting. As a principled basis, we provide analytic justification of the robustness of the ensemble approach. Empirically, our experiments show that this “ensemblization” indeed significantly boosts the matching accuracy over automatically extracted, and thus noisy, schema data. By employing the DCM framework with the ensemble approach, we thus complete an automatic process of matching Web query interfaces.
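
The two correlation signals at the heart of this approach are easy to illustrate. The sketch below is a simplified stand-in for DCM, with invented schemas; the paper's actual correlation measure and matching-construction step are more involved. Attributes that frequently co-occur across query interfaces are grouping candidates, while attributes that never co-occur are synonym candidates.

from itertools import combinations

# Each set is the attribute schema of one Web query interface (invented data).
interfaces = [
    {"author", "title"},
    {"first name", "last name", "title"},
    {"author", "title", "isbn"},
    {"first name", "last name", "isbn"},
]

def cooccurrence(a, b, schemas):
    # Jaccard-style co-occurrence score in [0, 1].
    both = sum(1 for s in schemas if a in s and b in s)
    either = sum(1 for s in schemas if a in s or b in s)
    return both / either

attrs = sorted(set().union(*interfaces))
for a, b in combinations(attrs, 2):
    score = cooccurrence(a, b, interfaces)
    if score == 0.0:
        print(f"negative (synonym candidates): {a!r} vs {b!r}")
    elif score >= 0.5:
        print(f"positive (grouping candidates): {a!r} + {b!r}")

The “ensemble” step described above would rerun such a matcher over many randomized trials of the schema data and keep only the correspondences that win a majority vote across trials.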

123 citations


Book ChapterDOI
01 Oct 2006
TL;DR: A shift of focus is made from the implementation of a certain modeling language towards the explicit reification of the concepts covered by this language, which helps to achieve better results in terms of mappings that can in turn be a basis for deriving implementation-specific transformation code.
Abstract: The use of different modeling languages in software development makes their integration a must. Most existing integration approaches are metamodel-based, with these metamodels representing both an abstract syntax of the corresponding modeling language and a data structure for storing models. This implementation-specific focus, however, does not make certain language concepts explicit, which can complicate integration tasks. Hence, we propose a process which semi-automatically lifts metamodels into ontologies by making implicit concepts in the metamodel explicit in the ontology. Thus, a shift of focus is made from the implementation of a certain modeling language towards the explicit reification of the concepts covered by this language. This allows matching on a solely conceptual level, which helps to achieve better results in terms of mappings that can in turn be a basis for deriving implementation-specific transformation code.

123 citations


Journal ArticleDOI
TL;DR: The definition and aims of the “3D to nD Modelling” project, a platform grant-funded project of the UK's Engineering and Physical Sciences Research Council (EPSRC), are outlined, and a scenario of widening BIM implementation into the overall aspects involved in the whole life cycle of a building project is presented.

Journal ArticleDOI
TL;DR: A family of information collection policies that vary in the granularity at which system state information is represented and maintained are proposed that are incorporated into an integrated middleware framework AutoSeC (Automatic Service Composition) to provide support for dynamic service brokering that ensures effective utilization of system resources over wireless networks.
Abstract: Efficient resource provisioning that allows for cost-effective enforcement of application QoS relies on accurate system state information. However, maintaining accurate information about available system resources is complex and expensive, especially in mobile environments where system conditions are highly dynamic. Resource provisioning mechanisms for such dynamic environments must therefore be able to tolerate imprecision in system state while ensuring adequate QoS to the end-user. In this paper, we address the information collection problem for QoS-based services in mobile environments. Specifically, we propose a family of information collection policies that vary in the granularity at which system state information is represented and maintained. We empirically evaluate the impact of these policies on the performance of diverse resource provisioning strategies. We generally observe that resource provisioning benefits significantly from the customized information collection mechanisms that take advantage of user mobility information. Furthermore, our performance results indicate that effective utilization of coarse-grained user mobility information renders better system performance than using fine-grained user mobility information. Using results from our empirical studies, we derive a set of rules that supports seamless integration of information collection and resource provisioning mechanisms for mobile environments. These results have been incorporated into an integrated middleware framework AutoSeC (Automatic Service Composition) to provide support for dynamic service brokering that ensures effective utilization of system resources over wireless networks.

Journal ArticleDOI
TL;DR: The methods described in this paper were implemented in a prototype system that provides complete Web-based integration services for remote clients and supports other critical integration requirements, such as information source heterogeneity, dynamic evolution of the information environment, quick ad-hoc integration, and intermittent source availability.

Journal ArticleDOI
TL;DR: This paper presents an ontology-driven integration approach, called the a priori approach, that ensures the automation of the integration process when all sources reference a shared ontology and possibly extend it by adding their own concept specializations.

Patent
21 Feb 2006
TL;DR: In this paper, the authors present a method and system for managing remote applications running on devices that acquire, process and store data locally in order to integrate said data with heterogeneous enterprise information systems and business processes.
Abstract: The present invention provides a method and system for managing remote applications running on devices that acquire, process and store data locally in order to integrate said data with heterogeneous enterprise information systems and business processes. The system allows for remotely deploying, running, monitoring and updating of applications embedded within devices. The applications acquire, store and process data about assets that is eventually sent to a centralized data processing infrastructure. The system comprises an information integration framework that integrates the processed data with data that is extracted from heterogeneous data sources, in real-time, in order to create synthesized information.

01 Jan 2006
TL;DR: The maturity of techniques for ontology learning from textual resources is examined, addressing the question whether the state of the art is mature enough to produce ontologies 'on demand'.
Abstract: Ontologies are nowadays used for many applications requiring data, services and resources in general to be interoperable and machine understandable. Such applications include, for example, web service discovery and composition, information integration across databases, and intelligent search. The general idea is that data and services are semantically described with respect to ontologies, which are formal specifications of a domain of interest, and can thus be shared and reused in a way such that the shared meaning specified by the ontology remains formally the same across different parties and applications. As the cost of creating ontologies is relatively high, different proposals have emerged for learning ontologies from structured and unstructured resources. In this article we examine the maturity of techniques for ontology learning from textual resources, addressing the question whether the state of the art is mature enough to produce ontologies 'on demand'.

Journal ArticleDOI
TL;DR: A novel methodology and architecture are proposed for accomplishing the two configuration tasks and bridging the gap between them and a dependency analysis approach is proposed and implemented to link customer groups with clusters of product specifications.
Abstract: Product configuration design is of critical importance in design for mass customization. This paper investigates two important issues in configuration design. The first issue is requirement configuration, for which a dependency analysis approach is proposed and implemented to link customer groups with clusters of product specifications. The second issue concerns engineering configuration, which is modelled as an association relation between clusters of product specifications and configuration alternatives. A novel methodology and architecture are proposed for accomplishing the two configuration tasks and bridging the gap between them. This methodology is based on the integration of popular data mining approaches (such as fuzzy clustering and association rule mining) and variable precision rough set theory. It focuses on the discovery of configuration rules from purchased products according to customer groups. The proposed methodology is illustrated with a case study of an electrical bicycle.
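
The rule-discovery step can be illustrated with standard association-rule statistics. The sketch below is only a schematic stand-in for the methodology (which combines fuzzy clustering, association rule mining, and variable precision rough sets); the customer groups and specifications are invented, echoing the electrical bicycle case study.

# Purchase records linking customer groups to product specifications (invented).
purchases = [
    {"group": "commuter", "spec": "long-range-battery"},
    {"group": "commuter", "spec": "long-range-battery"},
    {"group": "commuter", "spec": "lightweight-frame"},
    {"group": "courier",  "spec": "cargo-rack"},
]

def rule_stats(records, antecedent_group, consequent_spec):
    # Support and confidence of the rule: group => spec.
    n = len(records)
    with_group = [r for r in records if r["group"] == antecedent_group]
    both = [r for r in with_group if r["spec"] == consequent_spec]
    support = len(both) / n
    confidence = len(both) / len(with_group) if with_group else 0.0
    return support, confidence

print(rule_stats(purchases, "commuter", "long-range-battery"))  # (0.5, ~0.67)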

Book ChapterDOI
29 Oct 2006
TL;DR: This paper presents ongoing work to develop a context-aware similarity theory for concepts specified in expressive description logics such as $\mathcal{ALCNR}$.
Abstract: Similarity measurement theories play an increasing role in GIScience, and especially in information retrieval and integration. Existing feature and geometric models have proven useful in detecting close but not identical concepts and entities. However, until now none of these theories has been able to handle the expressivity of description logics, for various reasons, and they are therefore not applicable to the kind of ontologies usually developed for geographic information systems or the upcoming geospatial semantic web. To close the resulting gap between available similarity theories on the one side and existing ontologies on the other, this paper presents ongoing work to develop a context-aware similarity theory for concepts specified in expressive description logics such as $\mathcal{ALCNR}$.
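
For orientation, a minimal measure in the feature-based tradition this work extends is Tversky's ratio model (given here as background, not as the paper's context-aware description-logic measure):

$\mathrm{sim}(C, D) = \frac{|\hat{C} \cap \hat{D}|}{|\hat{C} \cap \hat{D}| + \alpha\,|\hat{C} \setminus \hat{D}| + \beta\,|\hat{D} \setminus \hat{C}|}$

where $\hat{C}$ and $\hat{D}$ are the feature sets describing concepts $C$ and $D$, and the weights $\alpha, \beta$ can be set from context ($\alpha = \beta = 1$ yields the Jaccard coefficient). The contribution above is to define such comparisons for concepts specified in expressive description logics rather than plain feature sets.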

18 Sep 2006
TL;DR: The research presented in the thesis provides solutions for the computer-aided integration of distributed heterogeneous geo-information and geo-services, based on their semantics, by formally describing the geo-services with ontological concepts and reasoning with them in Description Logics.
Abstract: There is an increasing need for organisations to perform on-demand geoprocessing tasks by integrating and reusing distributed geo-information and geo-services (typically provided as services on the web, such as interactive maps, route planners and geometric transformations). To enable sensible integration, computers need to operate on formal semantics of the services involved, making the meaning of the service content explicit. The research presented in the thesis provides solutions for the computer-aided integration of distributed heterogeneous geo-information and geo-services, based on their semantics. This is achieved by formally describing the geo-services with ontological concepts and reasoning with them in Description Logics. The target groups of this research are, firstly, geo-information engineers confronted with information integration and service interoperability issues and, secondly, information engineers in general confronted with distributed information and with end-users who need to access distributed services as one virtual application.

Proceedings ArticleDOI
10 Nov 2006
TL;DR: This paper presents an upper-level ontology combining concepts and relationships from both the thematic and spatial dimensions and shows how to incorporate temporal semantics into this ontology.
Abstract: The W3C's Semantic Web Activity is illustrating the use of semantics for information integration, search, and analysis. However, the majority of the work in this community has focused more on the thematic aspects of information and has paid less attention to its spatial and temporal dimensions. In this paper, we present an integrative ontology-based framework incorporating the thematic, spatial, and temporal dimensions of information. This framework is built around the RDF metadata model. Our ultimate goal is to provide an information system which allows searching and analysis of relationships in any or all of the three dimensions of space, time, and theme. Toward this end, we present an upper-level ontology combining concepts and relationships from both the thematic and spatial dimensions and show how to incorporate temporal semantics into this ontology. We also introduce the notion of a thematic context linking entities of differing dimensions and define a set of query operators built upon these contexts.
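
The three-dimensional query idea can be made concrete with a small RDF example. The sketch below uses rdflib and an invented example.org vocabulary; the paper's upper-level ontology and query operators are considerably richer than this.

from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
# One event described along all three dimensions: theme, space, and time.
g.add((EX.event1, EX.theme, Literal("flood")))
g.add((EX.event1, EX.locatedIn, EX.Valencia))
g.add((EX.event1, EX.occurredOn, Literal("2006-03-01")))

# A query constraining theme, space, and time together.
# ISO dates compare correctly as strings, so a lexical filter suffices here.
q = """
SELECT ?e WHERE {
  ?e ex:theme "flood" ;
     ex:locatedIn ex:Valencia ;
     ex:occurredOn ?t .
  FILTER (?t >= "2006-01-01")
}
"""
for row in g.query(q, initNs={"ex": EX}):
    print(row.e)  # http://example.org/event1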

Journal ArticleDOI
TL;DR: The article concludes by stressing the need for evaluative studies, especially in the promising field of ICT-based collaborative learning, and by emphasizing the importance of the position and qualifications of the teaching staff.
Abstract: In contrast to traditional meta-analyses of research, an alternative overview and analysis of the research literature on the impact of information and communication technologies (ICT) in medical education is presented in this article. A distinction is made between studies that have been set up at the micro-level of the teaching and learning situation and studies on meso-level issues. At the micro-level, ICT is hypothesized to foster three basic information processing activities: presentation, organization, and integration of information. Next to this, ICT is expected to foster collaborative learning in the medical knowledge domain. Empirical evidence supports the potential of ICT to introduce students to advanced graphical representations, but the studies also stress the importance of prior knowledge and the need for real-life tactile and practical experiences. The number of empirical studies focusing on the impact of ICT on information organization is restricted, but the results suggest a positive impact on student attitudes and relevant learning gains. However, again, students need a relevant level of prior knowledge. Empirical studies focusing on the impact of ICT on information integration highlight the positive impact of ICT-based assessment and computer simulations; for the latter this is especially the case when novices are involved, and when they master the prerequisite ICT skills. Little empirical evidence is available regarding the impact of computer games. Research results support the positive impact of ICT-based collaboration, but care has to be taken when skills development is pursued. At the meso-level, the available empirical evidence highlights the positive impact of ICT in promoting the efficiency of learning arrangements. Research grounds the key position of ICT in a state-of-the-art medical curriculum. Recent developments focusing on repositories of learning materials for medical education have not yet been evaluated. The article concludes by stressing the need for evaluative studies, especially in the promising field of ICT-based collaborative learning. Furthermore, the importance of the position and qualifications of the teaching staff is emphasized.

Book ChapterDOI
01 Jan 2006
TL;DR: This paper will in part present progress made in the overall Cyc Project during its twenty-year lifespan – its vision, its achievements thus far, and the work that remains to be done.
Abstract: Semi-formally represented knowledge, such as the use of standardized keywords, is a traditional and valuable mechanism for helping people to access information. Extending that mechanism to include formally represented knowledge (based on a shared ontology) presents a more effective way of sharing large bodies of knowledge between groups; reasoning systems that draw on that knowledge are the logical counterparts to tools that perform well on a single, rigidly defined task. The underlying philosophy of the Cyc Project is that software will never reach its full potential until it can react flexibly to a variety of challenges. Furthermore, systems should not only handle tasks automatically, but also actively anticipate the need to perform them. A system that rests on a large, general-purpose knowledge base can potentially manage tasks that require world knowledge, or “common sense” – the knowledge that every person assumes his neighbors also possess. Until that knowledge is fully represented and integrated, tools will continue to be, at best, idiots savants. Accordingly, this paper will in part present progress made in the overall Cyc Project during its twenty-year lifespan – its vision, its achievements thus far, and the work that remains to be done. We will also describe how these capabilities can be brought together into a useful ambient assistant application. Ultimately, intelligent software assistants should dramatically reduce the time and cognitive effort spent on infrastructure tasks. Software assistants should be ambient systems – a user works within an environment in which agents are actively trying to classify the user's activities, predict useful subtasks and expected future tasks, and, proactively, perform those tasks or at least the subtasks that can be performed automatically. This in turn requires a variety of necessary technologies (including script and plan recognition, abductive reasoning, integration of external knowledge sources, facilitating appropriate knowledge entry and hypothesis formation), which must be integrated into the Cyc reasoning system and Knowledge Base to be fully effective.

Book ChapterDOI
05 Nov 2006
TL;DR: Dartgrid is an application development framework, together with a set of semantic tools, that facilitates the integration of heterogeneous relational databases using semantic web technologies.
Abstract: Integrating relational databases has recently been acknowledged as an important vision of Semantic Web research, but there are few well-implemented tools and few applications in large-scale real use. This paper introduces Dartgrid, an application development framework together with a set of semantic tools to facilitate the integration of heterogeneous relational databases using semantic web technologies. For example, DartMapping is a visualized mapping tool to help DBAs define semantic mappings from heterogeneous relational schemas to ontologies. DartQuery is an ontology-based query interface helping users construct semantic queries, capable of rewriting SPARQL semantic queries into sets of SQL queries. DartSearch is an ontology-based search engine enabling users to run full-text search over all databases and to navigate across the search results semantically. It is also enriched with a concept ranking mechanism to help users find more accurate and reliable results. This toolkit has been used to develop an application, currently in use, for the China Academy of Traditional Chinese Medicine (CATCM). In this application, over 70 legacy relational databases are semantically interconnected by an ontology with over 70 classes and 800 properties, providing integrated semantic-enriched query, search and navigation services to TCM communities.
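
The query-rewriting idea can be sketched in a few lines. This toy stand-in is entirely invented (a real SPARQL-to-SQL rewriter handles joins, filters, and full graph patterns): a class-to-table and property-to-column mapping turns one triple pattern into one SQL query.

# Hypothetical mapping from ontology terms to relational schema elements,
# loosely echoing the TCM application described above.
mapping = {
    "tcm:Herb": {"table": "herb", "props": {"tcm:name": "name", "tcm:effect": "effect"}},
}

def rewrite(cls, prop):
    # Rewrite the pattern  ?x a <cls> ; <prop> ?v  into a single SQL query.
    m = mapping[cls]
    return f"SELECT {m['props'][prop]} FROM {m['table']}"

# SPARQL-ish pattern:  ?h a tcm:Herb ; tcm:effect ?e
print(rewrite("tcm:Herb", "tcm:effect"))  # SELECT effect FROM herb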

Proceedings ArticleDOI
05 May 2006
TL;DR: Issues associated with Level 2 (Situation Assessment) are presented, including user perception and perceptual reasoning representation, knowledge discovery process models, procedural versus logical reasoning about relationships, user-fusion interaction through performance metrics, and syntactic and semantic representations.
Abstract: Situation assessment (SA) involves deriving relations among entities, e.g., the aggregation of object states (i.e., classification and location). While SA has been recognized in the information fusion and human factors literature, open questions remain regarding knowledge representation and reasoning methods to afford SA. For instance, while a large volume of data is collected over a region of interest, how does this information get presented to an attention-constrained user? Information overload can deteriorate cognitive reasoning, so a pragmatic solution to knowledge representation is needed for effective and efficient situation understanding. In this paper, we present issues associated with Level 2 (Situation Assessment) including: (1) user perception and perceptual reasoning representation, (2) knowledge discovery process models, (3) procedural versus logical reasoning about relationships, (4) user-fusion interaction through performance metrics, and (5) syntactic and semantic representations. While a definitive conclusion is not the aim of the paper, many critical issues are proposed in order to characterize successful future strategies for knowledge representation and reasoning for situation assessment.

Journal Article
TL;DR: This paper proposes meta-concepts with which ontology developers describe the domain concepts of parts libraries; the meta-concepts have explicit ontological semantics, so they help to identify domain concepts consistently and structure them systematically.
Abstract: Seamless integration of digital parts libraries or electronic parts catalogs for e-procurement is impeded by semantic heterogeneity. The utilization of ontologies as metadata descriptions of the information sources is a possible approach to providing an integrated view of multiple parts libraries. However, in order to integrate ontologies, the mismatches between them should be resolved. In this paper, we propose meta-concepts with which the ontology developers describe the domain concepts of parts libraries. The meta-concepts have explicit ontological semantics, so that they help to identify domain concepts consistently and structure them systematically. Consequently, our method ensures that the mismatches between parts library ontologies are confined to manageable mismatches which a software program can resolve automatically. Modeling ontologies of real mold and die parts libraries is taken as an example task to show how to use the meta-concepts. We also demonstrate how easily a computer system can merge the resultant well-established ontologies. © 2006 Elsevier Ltd. All rights reserved.

Journal ArticleDOI
TL;DR: Results from several test studies demonstrate the effectiveness of the approach in retrieving biologically interesting relations between genes and proteins and the networks connecting them, and the utility of PathSys as a scalable graph-based warehouse for interaction-network integration and as a hypothesis generator.
Abstract: Background: The goal of information integration in systems biology is to combine information from a number of databases and data sets, which are obtained from both high and low throughput experiments, under one data management scheme such that the cumulative information provides greater biological insight than is possible with individual information sources considered separately.
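
The core warehouse operation, merging interaction networks from multiple sources into one graph, can be sketched with networkx. PathSys itself is a database-backed system, so this only illustrates the idea, and the gene names are invented.

import networkx as nx

# Two interaction networks from different (hypothetical) sources.
yeast_two_hybrid = nx.Graph([("YFG1", "YFG2"), ("YFG2", "YFG3")])
literature_curated = nx.Graph([("YFG2", "YFG3"), ("YFG3", "YFG4")])

# Compose the sources into one integrated network; shared edges merge.
merged = nx.compose(yeast_two_hybrid, literature_curated)
print(sorted(merged.edges()))
# [('YFG1', 'YFG2'), ('YFG2', 'YFG3'), ('YFG3', 'YFG4')]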

Journal IssueDOI
TL;DR: Fundamental forms of information, as well as the term information itself, are defined and developed for the purposes of information sciences studies, and concepts of natural and represented information, encoded and embodied information are elaborated.
Abstract: Fundamental forms of information, as well as the term information itself, are defined and developed for the purposes of information sciences studies. Concepts of natural and represented information (taking an unconventional sense of representation), encoded and embodied information, as well as experienced, enacted, expressed, embedded, recorded, and trace information are elaborated. The utility of these terms for the discipline is illustrated with examples from the study of information-seeking behavior and of information genres. Distinctions between the information and curatorial sciences with respect to their social (and informational) objects of study are briefly outlined. © 2006 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: The proposed system can securely gather, integrate, and display distributed medical information using mobile-agent technology and agent-driven security.
Abstract: Healthcare is information driven and knowledge driven. Good healthcare depends on making decisions at the right time and place, using the right patient data and applicable knowledge. Communication is of utmost relevance in today's healthcare settings, in that delivery of care, research, and management all depend on sharing information. The proposed system can securely gather, integrate, and display distributed medical information using mobile-agent technology and agent-driven security.

Journal ArticleDOI
01 Jul 2006
TL;DR: This paper proposes a mobile clinical information system (MobileMed), which integrates the distributed and fragmented patient data across heterogeneous sources, makes them accessible through mobile devices, and provides a means for effortless implementation and deployment of such systems.
Abstract: Patient clinical data are distributed and often fragmented across heterogeneous systems, so information integration is key to reliable patient care. Once the patient data are properly integrated and readily available, the problems of accessing distributed patient clinical data, a well-known obstacle to adopting a mobile health information system, are resolved. This paper proposes a mobile clinical information system (MobileMed), which integrates the distributed and fragmented patient data across heterogeneous sources and makes them accessible through mobile devices. The system consists of four main components: a smart interface, an HL7 message server (HMS), a central clinical database (CCDB), and a web server. The smart interface and the HMS work in concert to generate HL7 messages from the existing legacy systems, which send the patient data in HL7 messages to the CCDB to be stored and maintained. The CCDB and the web server enable physicians to access the integrated, up-to-date patient data. By proposing the smart interface approach, we provide a means for effortless implementation and deployment of such systems. Through a performance study, we show that the HMS is reliable yet fast enough to support efficient clinical data communication.
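
The kind of message handling an HL7 message server performs can be sketched with a minimal HL7 v2 parser. This is illustrative only: production HL7 handling uses a dedicated library and covers escaping, repetition, and acknowledgments, and the sample message below is invented.

# A pipe-delimited HL7 v2 PID (patient identification) segment.
raw = "PID|1||12345^^^HOSP||DOE^JOHN||19700101|M"

def parse_segment(segment):
    # Fields are |-separated; the first field names the segment type.
    fields = segment.split("|")
    return {"segment": fields[0], "fields": fields[1:]}

pid = parse_segment(raw)
patient_name = pid["fields"][4].split("^")  # components are ^-separated (PID-5)
print(patient_name[1], patient_name[0])     # JOHN DOE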

Journal ArticleDOI
TL;DR: In this paper, the authors present the development process of an effective decision-support framework for adopting integrated information systems within SMEs, which comprises 11 steps, such as identifying information systems-related business problems, forming a project team, and assessing legacy systems and software vendors.
Abstract: Small and medium enterprises (SMEs) are the backbone of the economy in most countries. With the opening up of the economy, it is crucial that SMEs continuously improve their competitiveness to assert themselves in the global market. There is also a greater need for information integration in SMEs, which lack the financial resources and business resilience of large enterprises. This research paper presents the development process of an effective decision-support framework for adopting integrated information systems within SMEs. The methodology comprises 11 steps, such as identifying information systems-related business problems, forming a project team, and assessing legacy systems and software vendors. The development process of the decision-support methodology has passed through four major stages: identifying the required specification of the methodology, selecting and justifying the most suitable delivery medium, creating and evaluating a pilot version of the methodology, and developing the final decision-support methodology.