
Showing papers on "Information integration published in 2020"


Journal ArticleDOI
20 Feb 2020
TL;DR: Industrial information integration engineering is a set of foundational concepts and techniques that facilitate the industrial information integration process and in recent years, many applicat...
Abstract: Industrial information integration engineering (IIIE) is a set of foundational concepts and techniques that facilitate the industrial information integration process. In recent years, many applicat...

109 citations


Journal ArticleDOI
TL;DR: A semi-automatic approach based on a neuro-fuzzy system is developed to establish a systematic, accurate and convenient digital twinning system from images and CAD drawings, and the methodological framework is elaborated.

74 citations



Journal ArticleDOI
TL;DR: It is argued here that nearly all examples of so-called ‘integrated information’, including neuronal information processing and conventional computing, are only temporally integrated in the sense that outputs are correlated with multiple inputs: the information integration is implemented in time, rather than space, and thereby cannot correspond to physically integrated information.
Abstract: A key aspect of consciousness is that it represents bound or integrated information, prompting an increasing conviction that the physical substrate of consciousness must be capable of encoding integrated information in the brain. However, as Ralph Landauer insisted, 'information is physical' so integrated information must be physically integrated. I argue here that nearly all examples of so-called 'integrated information', including neuronal information processing and conventional computing, are only temporally integrated in the sense that outputs are correlated with multiple inputs: the information integration is implemented in time, rather than space, and thereby cannot correspond to physically integrated information. I point out that only energy fields are capable of integrating information in space. I describe the conscious electromagnetic information (cemi) field theory which has proposed that consciousness is physically integrated, and causally active, information encoded in the brain's global electromagnetic (EM) field. I here extend the theory to argue that consciousness implements algorithms in space, rather than time, within the brain's EM field. I describe how the cemi field theory accounts for most observed features of consciousness and describe recent experimental support for the theory. I also describe several untested predictions of the theory and discuss its implications for the design of artificial consciousness. The cemi field theory proposes a scientific dualism that is rooted in the difference between matter and energy, rather than matter and spirit.

47 citations


Journal ArticleDOI
Li Da Xu1
TL;DR: The potential contribution of systems science to Industry 4.0 is discussed in this article, where the authors point out that systems science is a necessity to deal with the overwhelming systems complexity in Industry 4.0 and the surrounding industrial ecosystem.
Abstract: Industrie 4.0, also called Industry 4.0, represents the current trend of manufacturing industry characterized by a high level of automation, industrial integration, and industrial information integration. It mainly includes enabling technologies such as the Internet of Things (IoT), cyber‐physical systems (CPS), cloud computing, industrial integration, industrial information integration, and other technologies. Systems science has emerged and been well developed since the second half of the twentieth century. From the 1940s until now, systems science has been called systems science, systems engineering, systems theory, cybernetics, systems analysis, systems methodology, systems approach, and systems thinking. Since its emergence, a wealth of research has produced an astonishing array of theoretical results and empirical insights, and a large suite of methods and techniques. Systems science has been continuously, widely, and successfully applied to many subjects in natural science and the social sciences. This paper aims at discussing the potential contribution of systems science to Industry 4.0. As we approach the era of the Fourth Industrial Revolution, this paper points out that systems science is a necessity to deal with the overwhelming systems complexity in Industry 4.0 and the surrounding industrial ecosystem.

45 citations


Posted ContentDOI
26 Nov 2020-bioRxiv
TL;DR: Refining the understanding of information-processing in the human brain through Integrated Information Decomposition can provide powerful insights into the human neurocognitive architecture, and its role in supporting consciousness.
Abstract: A central goal of neuroscience is to understand how the brain synthesises information from multiple inputs to give rise to a unified conscious experience. This process is widely believed to require integration of information. Here, we combine information theory and network science to address two fundamental questions: how is the human information-processing architecture functionally organised? And how does this organisation support human consciousness? To address these questions, we leverage the mathematical framework of Integrated Information Decomposition to delineate a cognitive architecture wherein specialised modules interact with a “synergistic global workspace,” comprising functionally distinct gateways and broadcasters. Gateway regions gather information from the specialised modules for processing in the synergistic workspace, whose contents are then further integrated to later be made widely available by broadcasters. Through data-driven analysis of resting-state functional MRI, we reveal that gateway regions correspond to the brain’s well-known default mode network, whereas broadcasters of information coincide with the executive control network. Demonstrating that this synergistic workspace supports human consciousness, we further apply Integrated Information Decomposition to BOLD signals to compute integrated information across the brain. By comparing changes due to propofol anaesthesia and severe brain injury, we demonstrate that most changes in integrated information happen within the synergistic workspace. Furthermore, it was found that loss of consciousness corresponds to reduced integrated information between gateway, but not broadcaster, regions of the synergistic workspace. Thus, loss of consciousness may coincide with breakdown of information integration by this synergistic workspace of the human brain. Together, these findings demonstrate that refining our understanding of information-processing in the human brain through Integrated Information Decomposition can provide powerful insights into the human neurocognitive architecture, and its role in supporting consciousness.
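
For orientation: Integrated Information Decomposition (ΦID) builds on partial information decomposition (PID), which splits the information that two sources carry about a target into redundant, unique, and synergistic parts. A minimal sketch of that underlying identity, in standard PID notation (the paper's ΦID applies a generalization of this to the past and future of multivariate BOLD dynamics):

% Williams-Beer partial information decomposition for two sources S_1, S_2
% about a target T; "Syn" is the synergy term over which the abstract's
% "synergistic workspace" is defined.
I(S_1, S_2; T) = \mathrm{Red}(S_1, S_2; T)
               + \mathrm{Unq}(S_1; T \mid S_2)
               + \mathrm{Unq}(S_2; T \mid S_1)
               + \mathrm{Syn}(S_1, S_2; T)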

41 citations


Posted Content
TL;DR: An Adaptive Graph-Interactive Framework for joint multiple intent detection and slot filling is proposed, where an intent-slot graph interaction layer is applied to each token adaptively to model the strong correlation between the slot and intents.
Abstract: In real-world scenarios, users usually have multiple intents in the same utterance. Unfortunately, most spoken language understanding (SLU) models have either mainly focused on the single-intent scenario or simply incorporated an overall intent context vector for all tokens, ignoring fine-grained multiple-intent information integration for token-level slot prediction. In this paper, we propose an Adaptive Graph-Interactive Framework (AGIF) for joint multiple intent detection and slot filling, where we introduce an intent-slot graph interaction layer to model the strong correlation between slots and intents. Such an interaction layer is applied to each token adaptively, which has the advantage of automatically extracting the relevant intent information, enabling fine-grained intent information integration for token-level slot prediction. Experimental results on three multi-intent datasets show that our framework obtains substantial improvement and achieves state-of-the-art performance. In addition, our framework achieves new state-of-the-art performance on two single-intent datasets.
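
As a rough illustration of the adaptive interaction idea, here is a minimal sketch (shapes, names, and the single-layer form are assumptions, not the authors' released code): each token's slot hidden state attends over the predicted intent embeddings, so different tokens integrate different intent information.

import torch
import torch.nn.functional as F

def intent_slot_interaction(slot_hidden, intent_emb, proj):
    # slot_hidden: [seq_len, d]; intent_emb: [num_intents, d]; proj: Linear(2d -> d)
    scores = slot_hidden @ intent_emb.T          # [seq_len, num_intents]
    attn = F.softmax(scores, dim=-1)             # per-token intent weights
    intent_ctx = attn @ intent_emb               # [seq_len, d] aggregated intent context
    return torch.relu(proj(torch.cat([slot_hidden, intent_ctx], dim=-1)))

d = 64
proj = torch.nn.Linear(2 * d, d)
updated = intent_slot_interaction(torch.randn(10, d), torch.randn(3, d), proj)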

35 citations


Journal ArticleDOI
TL;DR: It is demonstrated that perceptual integration and differentiation can be mapped to theoretically motivated neural information signatures, suggesting a direct relationship between phenomenology and neurophysiology.
Abstract: At any given moment, we experience a perceptual scene as a single whole and yet we may distinguish a variety of objects within it. This phenomenon instantiates two properties of conscious perception: integration and differentiation. Integration is the property of experiencing a collection of objects as a unitary percept and differentiation is the property of experiencing these objects as distinct from each other. Here, we evaluated the neural information dynamics underlying integration and differentiation of perceptual contents during bistable perception. Participants listened to a sequence of tones (auditory bistable stimuli) experienced either as a single stream (perceptual integration) or as two parallel streams (perceptual differentiation) of sounds. We computed neurophysiological indices of information integration and information differentiation with electroencephalographic and intracranial recordings. When perceptual alternations were endogenously driven, the integrated percept was associated with an increase in neural information integration and a decrease in neural differentiation across frontoparietal regions, whereas the opposite pattern was observed for the differentiated percept. However, when perception was exogenously driven by a change in the sound stream (no bistability), neural oscillatory power distinguished between percepts but information measures did not. We demonstrate that perceptual integration and differentiation can be mapped to theoretically motivated neural information signatures, suggesting a direct relationship between phenomenology and neurophysiology.
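
As a toy illustration of what a neural "information integration" index can look like (a generic construction for intuition only, not the specific measures computed in the paper): discretize each channel and average the pairwise mutual information across channels.

import numpy as np
from sklearn.metrics import mutual_info_score

def mean_pairwise_mi(eeg, n_bins=8):
    # eeg: [n_channels, n_samples]; discretize each channel into quantile bins
    edges = [np.quantile(ch, np.linspace(0, 1, n_bins + 1)[1:-1]) for ch in eeg]
    symbols = [np.digitize(ch, e) for ch, e in zip(eeg, edges)]
    n = len(symbols)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return float(np.mean([mutual_info_score(symbols[i], symbols[j]) for i, j in pairs]))

print(mean_pairwise_mi(np.random.randn(4, 5000)))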

32 citations


Proceedings ArticleDOI
23 Aug 2020
TL;DR: This work integrates comprehensive sources of information, including the content of consumer-generated videos, the narrative comment sentences supplied by consumers, and the product attributes in an end-to-end modeling framework and collects a large-scale dataset accordingly from real-world data in Taobao.
Abstract: In e-commerce, consumer-generated videos, which in general deliver consumers' individual preferences for the different aspects of certain products, are massive in volume. To recommend these videos to potential consumers more effectively, diverse and catchy video titles are critical. However, consumer-generated videos are seldom accompanied by appropriate titles. To bridge this gap, we integrate comprehensive sources of information, including the content of consumer-generated videos, the narrative comment sentences supplied by consumers, and the product attributes, in an end-to-end modeling framework. Although automatic video titling is very useful and demanding, it is much less addressed than video captioning. The latter focuses on generating sentences that describe videos as a whole, while our task requires product-aware multi-grained video analysis. To tackle this issue, the proposed method consists of two processes, i.e., granular-level interaction modeling and abstraction-level story-line summarization. Specifically, the granular-level interaction modeling first utilizes temporal-spatial landmark cues, descriptive words, and abstractive attributes to build three individual graphs and recognizes the intra-actions in each graph through Graph Neural Networks (GNN). Then the global-local aggregation module is proposed to model inter-actions across graphs and aggregate the heterogeneous graphs into a holistic graph representation. The abstraction-level story-line summarization further considers both frame-level video features and the holistic graph to utilize the interactions between products and backgrounds, and generates the story-line topic of the video. We collect a large-scale dataset accordingly from real-world data in Taobao, a world-leading e-commerce platform, and will make the desensitized version publicly available to nourish further development of the research community. Extensive experiments on various datasets demonstrate the efficacy of the proposed method.
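
A minimal sketch of the "global-local aggregation" step under assumed shapes (illustrative only, not the paper's module): each of the three modality graphs is summarized as one vector, scored against a global video feature, and attention-pooled into the holistic representation.

import torch
import torch.nn.functional as F

def global_local_aggregate(graph_vecs, video_feat):
    # graph_vecs: [3, d] summaries of the landmark/word/attribute graphs
    # video_feat: [d] global frame-level feature acting as the query
    attn = F.softmax(graph_vecs @ video_feat, dim=0)   # [3] graph weights
    return attn @ graph_vecs                           # [d] holistic graph vector

holistic = global_local_aggregate(torch.randn(3, 128), torch.randn(128))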

31 citations


Journal ArticleDOI
Shen Weidong1, Tianliang Hu1, Chengrui Zhang1, Yingxin Ye1, Li Zhengyu1 
TL;DR: A welding task data model, which includes information on accurate geometry, dimension and welding requirement, is presented to solve problems of intelligent process planning and information integration of robotic welding.
Abstract: Nowadays, as an efficient and automatic welding machine that accepts and executes human instructions, welding robots are widely used in industry. However, the lack of intelligence in process planning makes welding preparation complex and time-consuming. In order to realize intelligent process planning of robotic welding, one of the key factors is designing a welding task data model that can support process planning. However, current welding task models have some drawbacks, such as inaccurate geometry information, missing welding-requirement information, and little consideration of machine-readability and compatibility. They cannot provide sufficient information for intelligent process planning. In this paper, a welding task data model, which includes accurate geometry, dimension and welding-requirement information, is presented to solve these problems. Firstly, through requirement analysis, the necessary information items of a welding task data model are analyzed and summarized. Then the welding task data model is designed in detail using EXPRESS. The feasibility of the proposed welding task data model is demonstrated by creating the welding task file of an automobile front door subassembly. Moreover, an application framework of the welding task file is presented. Results show that the proposed welding task data model is feasible for supporting intelligent process planning and information integration of robotic welding.
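
The paper specifies its model in EXPRESS; purely as an illustration of the information items the abstract names (accurate geometry, dimensions, welding requirements), here is a Python dataclass analogue in which all field names are assumptions:

from dataclasses import dataclass, field

@dataclass
class WeldRequirement:
    process: str              # e.g. "GMAW"; assumed enumeration
    leg_size_mm: float        # dimensional requirement
    quality_grade: str

@dataclass
class WeldSeam:
    seam_id: str
    start_xyz: tuple          # accurate geometry: seam end points
    end_xyz: tuple
    requirement: WeldRequirement

@dataclass
class WeldingTask:
    part_id: str
    seams: list = field(default_factory=list)   # machine-readable task content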

31 citations


Proceedings ArticleDOI
01 Nov 2020
TL;DR: This article proposed an adaptive graph-interactive framework (AGIF) for joint multiple intent detection and slot filling, where an intent-slot graph interaction layer was introduced to model the strong correlation between the slot and intents, making a fine-grained intent information integration for the token-level slot prediction.
Abstract: In real-world scenarios, users usually have multiple intents in the same utterance. Unfortunately, most spoken language understanding (SLU) models have either mainly focused on the single-intent scenario or simply incorporated an overall intent context vector for all tokens, ignoring fine-grained multiple-intent information integration for token-level slot prediction. In this paper, we propose an Adaptive Graph-Interactive Framework (AGIF) for joint multiple intent detection and slot filling, where we introduce an intent-slot graph interaction layer to model the strong correlation between slots and intents. Such an interaction layer is applied to each token adaptively, which has the advantage of automatically extracting the relevant intent information, enabling fine-grained intent information integration for token-level slot prediction. Experimental results on three multi-intent datasets show that our framework obtains substantial improvement and achieves state-of-the-art performance. In addition, our framework achieves new state-of-the-art performance on two single-intent datasets.

Proceedings ArticleDOI
TL;DR: Wang et al. as mentioned in this paper integrated comprehensive sources of information, including the content of consumer-generated videos, the narrative comment sentences supplied by consumers, and the product attributes, in an end-to-end modeling framework.
Abstract: In e-commerce, consumer-generated videos, which in general deliver consumers' individual preferences for the different aspects of certain products, are massive in volume. To recommend these videos to potential consumers more effectively, diverse and catchy video titles are critical. However, consumer-generated videos are seldom accompanied by appropriate titles. To bridge this gap, we integrate comprehensive sources of information, including the content of consumer-generated videos, the narrative comment sentences supplied by consumers, and the product attributes, in an end-to-end modeling framework. Although automatic video titling is very useful and demanding, it is much less addressed than video captioning. The latter focuses on generating sentences that describe videos as a whole, while our task requires product-aware multi-grained video analysis. To tackle this issue, the proposed method consists of two processes, i.e., granular-level interaction modeling and abstraction-level story-line summarization. Specifically, the granular-level interaction modeling first utilizes temporal-spatial landmark cues, descriptive words, and abstractive attributes to build three individual graphs and recognizes the intra-actions in each graph through Graph Neural Networks (GNN). Then the global-local aggregation module is proposed to model inter-actions across graphs and aggregate the heterogeneous graphs into a holistic graph representation. The abstraction-level story-line summarization further considers both frame-level video features and the holistic graph to utilize the interactions between products and backgrounds, and generates the story-line topic of the video. We collect a large-scale dataset accordingly from real-world data in Taobao, a world-leading e-commerce platform, and will make the desensitized version publicly available to nourish further development of the research community.

Journal ArticleDOI
TL;DR: The results of this study indicate that information security has the largest impact on SCII, followed by IT and information quality, and that IIL, as a negative aspect of information integration, may weaken the relationship between SCII and operational performance.

Journal ArticleDOI
25 Mar 2020
TL;DR: This research is the basis for identifying project risks within the framework of building information modeling and can contribute effectively to increasing the efficiency of project risk identification.
Abstract: Merging and updating project information and recording changes enables dynamic risk identification at all stages of a project. The main purpose of this research is to create an integration of construction information. In this research, a 5D model was prepared, all model information was entered into a database designed in SQL Server, the project report tables were coded, and finally a database with four groups of information was ready for risk identification. Creating an integrated risk identification platform reduced rework and supported time and cost control and change management, which were positive effects of identifying risks at the right time. Identifying risks requires creating multilateral databases whose information integration enables timely completion of the project and compliance with the plan. This research is the basis for identifying project risks within the framework of building information modeling and can contribute effectively to increasing the efficiency of project risk identification. Indeed, time and cost savings are strong motivating forces for the building industry, and they materialize when risks are identified in good time.
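
A toy sketch of the four-information-group database idea (the study used SQL Server and its own schema; sqlite3 and all table and column names below are stand-in assumptions):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE model_elements (element_id TEXT PRIMARY KEY, category TEXT);
CREATE TABLE schedule       (element_id TEXT, start_date TEXT, end_date TEXT);
CREATE TABLE cost           (element_id TEXT, budget REAL, actual REAL);
CREATE TABLE change_log     (element_id TEXT, changed_at TEXT, description TEXT);
""")
# One kind of risk query such integration enables: elements over budget
# that also have recorded changes are flagged for risk review.
flagged = conn.execute("""
    SELECT c.element_id FROM cost c
    JOIN change_log g ON g.element_id = c.element_id
    WHERE c.actual > c.budget
""").fetchall()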

Journal ArticleDOI
TL;DR: The results show that the ICSPB-IoT provides cross-phase, cross-organizational information interaction for government supervision departments, contractors and owners to improve the efficiency of communication and collaboration between them.
Abstract: At present, the low degree of informatization in the construction industry chain is one of the prominent problems impeding the application efficiency of prefabrication. To break through the bottleneck of information integration and interaction in the construction process of prefabricated buildings, this paper proposes a conceptual framework of an Intelligent Construction System for Prefabricated Buildings based on the Internet of Things (ICSPB-IoT) according to the needs of government supervision departments, contractors and owners. Then this paper describes the formation of the “information network” from “information flow”, based on which the operating mechanism of the ICSPB-IoT is discussed using the method of system engineering analysis. Finally, the proposed ICSPB-IoT is graded into different implementation levels according to the degree of intelligence of the Internet of Things information, and the implementation path is proposed for construction enterprises with different levels of digitalization. The results show that the ICSPB-IoT provides cross-phase, cross-organizational information interaction for government supervision departments, contractors and owners to improve the efficiency of communication and collaboration between them. This paper also provides decision-making support for the design, layout and implementation of the Internet of Things for prefabrication construction enterprises.

Journal ArticleDOI
17 May 2020-Sensors
TL;DR: The analysis of Smart Agriculture and Ind-4.0 concludes with the proposal of new operational indications for the verification and certification of the reliability of the information along the entire decision-making chain.
Abstract: Smart Agriculture (SA) is an evolution of Precision Farming (PF). Its technological basis is very close to the paradigms of Industry 4.0 (Ind-4.0), so it is also often referred to as Agriculture 4.0. After a brief historical examination that provides a conceptual frame for the above terms, the common aspects of SA and Ind-4.0 are analyzed. These are primarily to be found in the cognitive approaches of Knowledge Management 4.0 (KM4.0, the actual theoretical basis of Ind-4.0), which underlines the need to use Integrated Information Systems (IIS) to manage all the activity areas of any production system. Based upon an infological approach, “raw data” becomes “information” only when useful to (or actually used in) a decision-making process. Thus, an IIS must always be designed according to such a view, and KM4.0 conditions the way of collecting and processing data on farms, together with the “information precision” by which the production system is managed. Such precision needs, in turn, depend on the hierarchical level and the “Macrodomain of Prevailing Interest” (MPI) related to each decision, where the latter identifies a predominant viewpoint through which a system can be analyzed according to a prevailing purpose. Four main MPIs are here proposed: (1) physical and chemical, (2) biological and ecological, (3) productive and hierarchical, and (4) economic and social. In each MPI, the quality of the knowledge depends on the cognitive level and the maturity of the methodological approaches there achieved. The reliability of information tends to decrease from the first to the fourth MPI; the lower the reliability, the larger the tolerance margins that a measurement system must ensure. Some practical examples are then discussed, taking into account some IIS-monitoring solutions of increasing complexity in relation to information integration needs and related data fusion approaches. The analysis concludes with the proposal of new operational indications for the verification and certification of the reliability of the information along the entire decision-making chain.

Book ChapterDOI
23 Mar 2020
TL;DR: This paper surveys the latest developments of ontology learning and highlights how they could be adopted and play a vital role in the integration of information systems.
Abstract: Ontology-based information integration is a useful method to integrate heterogeneous data at the semantic level. However, there are some bottlenecks in the traditional method of constructing ontologies: it is time-consuming, error-prone, and subject to semantic loss. Ontology learning is an ontology construction approach based on machine learning; it provides a new opportunity to tackle the above bottlenecks. In particular, it can be employed to construct ontologies and integrate large-scale and heterogeneous data from various information systems. This paper surveys the latest developments in ontology learning and highlights how they could be adopted and play a vital role in the integration of information systems. The recent techniques and tools for ontology learning from text and relational databases are reviewed, and the possibility of using ontology learning in information integration is discussed by mapping the aforementioned bottlenecks to the features of ontology learning. Potential directions for using ontology learning in information systems integration are given.
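
A minimal sketch of one ontology-learning step the chapter covers, deriving a skeleton ontology from a relational schema (table to class, column to property); the namespace and toy schema are illustrative, and real approaches also learn hierarchies, axioms, and mappings:

from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/onto#")
g = Graph()
schema = {"Patient": ["name", "birth_date"], "Visit": ["date", "diagnosis"]}

for table, columns in schema.items():
    cls = EX[table]
    g.add((cls, RDF.type, RDFS.Class))              # table -> class
    for col in columns:
        prop = EX[f"{table}_{col}"]
        g.add((prop, RDF.type, RDF.Property))       # column -> property
        g.add((prop, RDFS.domain, cls))

print(g.serialize(format="turtle"))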

Journal ArticleDOI
TL;DR: In this article, the authors examine the use of drones in replacing or supplementing industrial robots, transportation systems and humans in factories, especially regarding agile manufacturing under the requirements of flexibility, reconfigurability and collaboration.

Journal ArticleDOI
TL;DR: It is argued that the integrated inference framework not only provides an intuitive and coherent perspective on a variety of clinical trial inference problems but also has broad application areas in clinical trial settings and beyond, as a quantitative data integration tool that lets policy and decision makers make robust inference precisely targeted to a population of interest.
Abstract: A major challenge for rare disease clinical trials is the limited amount of available information for making robust statistical inference. While external data present information integration opport...

Proceedings ArticleDOI
Yaqing Wang1, Yifan Ethan Xu2, Xian Li2, Xin Luna Dong2, Jing Gao1 
23 Aug 2020
TL;DR: A novel meta-learning latent variable approach, called MetaBridge, is proposed; it learns transferable knowledge from a subset of categories with limited labeled data, captures the uncertainty of never-seen categories with unlabeled data, and integrates meta learning and latent variables in a unified model to effectively capture the uncertainty of various categories.
Abstract: Product catalogs are valuable resources for eCommerce websites. In the catalog, a product is associated with multiple attributes whose values are short texts, such as product name, brand, functionality and flavor. Usually individual retailers self-report these key values, and thus the catalog information unavoidably contains noisy facts. It is very important to validate the correctness of these values in order to improve shopper experiences and enable more effective product recommendation. Due to the huge volume of products, an effective automatic validation approach is needed. In this paper, we propose to develop an automatic validation approach that verifies the correctness of textual attribute values for products. This can be formulated as the task of cross-checking a textual attribute value against the product profile, which is a short textual description of the product on the eCommerce website. Although existing deep neural network models have shown success in conducting cross-checking between two pieces of texts, their success depends on a large set of quality labeled data, which is hard to obtain in this validation task: products span a variety of categories, so annotation would have to be done for every category, which is impossible in real practice. To address the aforementioned challenges, we propose a novel meta-learning latent variable approach, called MetaBridge, which can learn transferable knowledge from a subset of categories with limited labeled data and capture the uncertainty of never-seen categories with unlabeled data. More specifically, we make the following contributions. (1) We formalize the problem of validating the textual attribute values of products from a variety of categories as a natural language inference task in the few-shot learning setting, and propose a meta-learning latent variable model to jointly process the signals obtained from product profiles and textual attribute values. (2) We propose to integrate meta learning and latent variables in a unified model to effectively capture the uncertainty of various categories. With this model, annotation costs can be significantly reduced as we make the best use of labeled data from limited categories. (3) We propose a novel objective function based on the latent variable model in the few-shot learning setting, which ensures distribution consistency between unlabeled and labeled data and prevents overfitting by sampling different records from the learned distribution. Extensive experiments on real eCommerce datasets from hundreds of categories demonstrate the effectiveness of MetaBridge on textual attribute validation and its outstanding performance compared with state-of-the-art approaches.
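
A sketch of the problem formulation only (MetaBridge's meta-learning and latent-variable machinery are not shown; the example strings are invented): each (profile, attribute, value) triple becomes a premise-hypothesis pair for natural language inference.

def make_nli_pair(profile: str, attribute: str, value: str):
    # Premise: the product profile text; hypothesis: the claimed attribute value.
    return profile, f"The {attribute} of this product is {value}."

premise, hypothesis = make_nli_pair(
    "Dark roast ground coffee, 12 oz bag, rich chocolate notes.",
    "flavor", "chocolate")
# An NLI model then labels the pair: entailment means the catalog value is
# supported by the profile; contradiction/neutral flags it as suspect.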

Journal ArticleDOI
TL;DR: This study thoroughly reviews 119 papers on operation-level vision-based construction monitoring, published in mainstream construction informatics journals, categorizes them into three sequential technologies, and discusses state-of-the-art algorithms, open challenges, and future directions for each.
Abstract: Operation-level vision-based monitoring and documentation has drawn significant attention from construction practitioners and researchers. To automate the operation-level monitoring of construction and built environments, there has been much effort to develop computer vision technologies. Despite encouraging findings, it remains a major challenge to deploy these technologies in real construction projects, implying that there are knowledge gaps in practice and theory. To fill such knowledge gaps, this study thoroughly reviews 119 papers on operation-level vision-based construction monitoring, published in mainstream construction informatics journals. Existing research papers can be categorized into three sequential technologies: (1) camera placement for operation-level construction monitoring, (2) single-camera-based construction monitoring and documentation, and (3) multi-camera-based onsite information integration and construction monitoring. For each technology, state-of-the-art algorithms, open challenges, and future directions are discussed.

Journal ArticleDOI
TL;DR: The model analysis suggests that the identified capability factors (organizational, group and individual) should be improved with regard to information integration, which is used to evaluate performance in the healthcare industry, and this risk assessment may be useful in other relevant industries.
Abstract: This study contributes to the promotion of healthcare information integration and readiness assessment of the factors impacted by quality improvement in hospital performance, which is beneficial fo...

Journal ArticleDOI
TL;DR: The results show that information integration of the two platforms significantly increases service efficiency and that the efficiency of on-demand services depends much more on information integration and the operational efficiency of the service provider than on road conditions.
Abstract: On-demand services through Internet platforms, e.g. ride-sharing and food take-out services, have emerged as a new business model. In these businesses, customers place orders on Internet platforms and get services fulfilled offline in a timely manner. In this paper, we examine the factors that affect the efficiency of on-demand food take-out services. Besides operational and road factors, we highlight the role of information integration between the ordering platform and the logistics platform. Our results show that information integration of the two platforms significantly increases service efficiency. Through integration, the logistics platform can optimize delivery dispatch based on more comprehensive and accurate historical and real-time demand and delivery information, avoiding suboptimal and short-sighted decisions. We also find that the efficiency of on-demand services depends much more on the information integration and the operational efficiency of the service provider than on road conditions. We discuss the theoretical and practical implications for the business model of on-demand services.

Journal ArticleDOI
25 Apr 2020
TL;DR: Wang et al. as discussed by the authors applied organizational information processing theory (OIPT) to construct a maritime supply chain collaborative decision-making model and examine its impact on logistics service performance.
Abstract: The increasing demand for high-quality logistics services has forced container shipping firms to decrease logistics service failures to retain customers. This study thus aims to apply organizational information processing theory (OIPT) to construct a maritime supply chain collaborative decision-making model and examine its impact on logistics service performance. In total, 142 usable questionnaires were collected via a questionnaire survey. A two-step structural equation modeling approach, including confirmatory factor analysis, was subsequently performed to test the hypotheses. The results show that internal information integration positively impacts external information integration, that external information integration positively impacts collaborative decision-making, and that collaborative decision-making positively impacts logistics service performance for container shipping firms. However, a relationship between internal information integration and collaborative decision-making was not found in this study. This study primarily examines collaborative decision-making from the view of container shipping firms; future research including other supply chain members is needed to generalize the results and could also incorporate other factors, such as relationship quality and culture, into the model. To decrease the occurrence of logistics failures and improve service quality in the maritime logistics process, it is suggested that container shipping firms apply information technology for acquiring and assimilating logistics information internally and externally across the supply chain to facilitate decision-making. This study contributes to the knowledge about the antecedents and impacts of collaborative decision-making for container shipping firms in Taiwan. In particular, in line with OIPT, the findings indicate that container shipping firms can facilitate logistics decision-making and strategy formulation through information integration, which in turn enhances logistics service performance.

Journal ArticleDOI
TL;DR: This paper highlights the need to differentiate between the heterogeneous information integration efforts covered by the umbrella term “BIM-GIS integration”.
Abstract: The heterogeneous character of information models results in communication barriers between subsystems in railway organizations dealing with Building Information Modeling (BIM) and Geographic Information Systems (GIS). The integration of information is a promising way to bridge the heterogeneity of information models and satisfy the need for more efficient communication. Integration efforts described in the expert literature are often referenced using umbrella terms like “BIM-GIS Integration” or “GeoBIM”, although they deal with different challenges and address different purposes. This paper highlights the need for a differentiation between integration efforts covered by the umbrella term “BIM-GIS integration”. For this, a new approach for the categorization of information integration efforts was developed based on a literature review. Afterwards, challenges concerning information integration efforts in the field of “BIM-GIS Integration” were identified and assigned to the respective categories to illustrate the importance of differentiating between heterogeneous information integration efforts.

Journal ArticleDOI
TL;DR: In this article, the authors examined the effect of supply chain information integration consisting of internal and external information integration on organizational performance, which consists of customer satisfaction, supply chain performance, and financial performance.
Abstract: This study aims to examine the effect of supply chain information integration, consisting of internal and external information integration, on organizational performance, which consists of customer satisfaction, supply chain performance, and financial performance. Data were collected through a questionnaire distributed to 185 small food industry owners and analyzed using SEM-PLS. The results show that internal information integration affected customer satisfaction, supply chain performance, and financial performance. External information integration is proven to have a non-linear effect on customer satisfaction and on the performance of the supply chain. Customer satisfaction and supply chain performance affect financial performance. Internal information integration is more dominant and plays an important role in increasing the performance of the organization.

Journal ArticleDOI
TL;DR: The test results show that the information integration and processing ability of the marine tourism geographic information visualization is strong, the visualization information fusion and mining performance of marine tourism is good, and the result of visual analysis of marine tourism geographic information is stable and reliable.
Abstract: Zhu, W.; Hou, Y.; Wang, E., and Wang, Y., 2020. Design of geographic information visualization system for marine tourism based on data mining. In: Yang, Y.; Mi, C.; Zhao, L., and Lam, S. (eds.), Global Topics and New Trends in Coastal Research: Port, Coastal and Ocean Engineering. Journal of Coastal Research, Special Issue No. 103, pp. 1034–1037. Coconut Creek (Florida), ISSN 0749-0208. This paper presents a design scheme for a marine tourism geographic information visualization system based on big data fusion and B/S architecture. The software development design of the system is carried out under an embedded Linux kernel and B/S architecture. The algorithm design of marine tourism geographic information visualization is carried out using big data mining and a fuzzy information fusion clustering method. The software design consists of a marine tourism geographic information collection module, a marine tourism geographic information BIM reconstruction module, a database module, a marine tourism geographic information interaction module, and an integrated information processing module. The basic database structure of the marine tourism geographic information visualization system is constructed under the B/S structure system, and program loading and automatic compilation design are adopted to realize the software development of the system under bus control and an embedded environment. The test results show that the information integration and processing ability of the marine tourism geographic information visualization is strong, the visualization information fusion and mining performance of marine tourism is good, and the result of visual analysis of marine tourism geographic information is stable and reliable.

Proceedings ArticleDOI
20 Apr 2020
TL;DR: This work formally defines the notion of natural keys and proposes a supervised learning approach that automatically detects natural keys in Wikipedia tables using carefully engineered features, achieving 80% F-measure, at least 20% more than all related approaches.
Abstract: Wikipedia is the largest encyclopedia to date. Scattered among its articles, there is an enormous number of tables that contain structured, relational information. In contrast to database tables, these webtables lack metadata, making it difficult to automatically interpret the knowledge they harbor. The natural key is a particularly important piece of metadata, which acts as a primary key and consists of attributes inherent to an entity. Determining natural keys is crucial for many tasks, such as information integration, table augmentation, or tracking changes to entities over time. To address this challenge, we formally define the notion of natural keys and propose a supervised learning approach to automatically detect natural keys in Wikipedia tables using carefully engineered features. Our solution includes novel features that extract information from time (a table’s version history) and space (other similar tables). On a curated dataset of 1,000 Wikipedia table histories, our model achieves 80% F-measure, which is at least 20% more than all related approaches. We use our model to discover natural keys in the entire corpus of Wikipedia tables and provide the dataset to the community to facilitate future research.
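
A minimal sketch of the supervised setup with made-up feature values (the paper's engineered features, drawn from a table's version history and from similar tables, are far richer):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Assumed per-column features: [uniqueness_ratio, null_ratio,
#                               relative_position, stability_across_versions]
X = np.array([[1.00, 0.00, 0.0, 0.95],
              [0.40, 0.05, 0.5, 0.60],
              [0.98, 0.01, 0.1, 0.90],
              [0.20, 0.30, 0.8, 0.40]])
y = np.array([1, 0, 1, 0])   # 1 = column participates in the natural key

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict_proba([[0.99, 0.0, 0.0, 0.92]]))   # score a new column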

Journal ArticleDOI
TL;DR: This paper will present the influence of the key success factors (KSF) on the satisfaction of the management controller through quantitative research with a survey and determine their potential impact on the management control systems.
Abstract: Information systems, and in particular Enterprise Resource Planning (ERP), are essential tools for management controllers. Nowadays, in large and medium-sized enterprises, information integration is in most cases carried out in an ERP environment. The management controller profession itself emerged from the changes that information systems brought to organizations. The main aim of this article is to identify the expectations of the firm, to characterize the ERP, and to determine their potential impact on management control systems. This paper presents the influence of the key success factors (KSF) on the satisfaction of the management controller through quantitative research based on a survey; to collect the data necessary for this research, a quantitative confirmatory study will be conducted.

Book ChapterDOI
01 Jul 2020
TL;DR: This work analyzes the problems solved by traditional monitoring systems, identifies the classes of problems such systems cannot solve, and proposes an analytical monitoring system architecture based on knowledge graphs to address them.
Abstract: Dynamic network monitoring systems are typically designed to solve a predefined number of tasks; new requirements lead to expensive development efforts and sometimes even require changes in the system architecture. Knowledge graphs are powerful and flexible tools for information integration and are supported by a set of standardized vocabularies and languages (the “Semantic Web” toolset). In this work, we discuss the application of knowledge graphs to develop and analyze an analytical computing infrastructure for a dynamic network monitoring system. As a typical dynamic network, a multiservice telecommunication network is considered. The presented system combines static models of a telecommunication network with dynamic monitoring data and makes it possible to obtain complex analytical reports using SPARQL queries over the knowledge graph. Those reports are of crucial importance to network stakeholders for improving the network's services and performance. First, we analyze problems solved by traditional monitoring systems and identify the classes of problems such systems cannot solve. Then we propose an analytical monitoring system architecture based on knowledge graphs to address these classes of problems. We present the system structure and detailed descriptions of the ontological and mathematical models of the resulting knowledge graph. In order to test the architecture discussed, we create an example task for the analytical monitoring system and analyze system performance depending on the size of the knowledge graph. The results of the analysis are presented using a number of SPARQL queries.
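
A toy illustration of the kind of analytical SPARQL query described, over a miniature monitoring knowledge graph (the vocabulary is an assumption, not the authors' ontology):

from rdflib import Graph, Namespace, Literal, RDF

NET = Namespace("http://example.org/net#")
g = Graph()
g.add((NET.linkA, RDF.type, NET.Link))
g.add((NET.linkA, NET.carriesService, NET.voip))
g.add((NET.linkA, NET.packetLossPct, Literal(2.5)))

# Which services ride on links whose measured packet loss exceeds 1%?
q = """
PREFIX net: <http://example.org/net#>
SELECT ?service ?loss WHERE {
  ?link net:carriesService ?service ;
        net:packetLossPct ?loss .
  FILTER (?loss > 1.0)
}"""
for service, loss in g.query(q):
    print(service, loss)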