
Showing papers on "XML" published in 2016


Journal ArticleDOI
TL;DR: The PRIDE Inspector Toolsuite supports the handling and visualization of different experimental output files, ranging from spectra and peptide and protein identification results to quantification data, using a modular and extensible set of open-source, cross-platform libraries.

130 citations


Journal ArticleDOI
TL;DR: A family of languages that enables the combination of data and topology querying for graph databases is presented, and it is shown to include efficient and highly expressive formalisms for querying both the structure of the data and the data itself.
Abstract: Graph databases have received much attention as of late due to numerous applications in which data is naturally viewed as a graph; these include social networks, RDF and the Semantic Web, biological databases, and many others. There are many proposals for query languages for graph databases that mainly fall into two categories. One views graphs as a particular kind of relational data and uses traditional relational mechanisms for querying. The other concentrates on querying the topology of the graph. These approaches, however, lack the ability to combine data and topology, which would allow queries asking how data changes along paths and patterns enveloping it. In this article, we present a comprehensive study of languages that enable such combination of data and topology querying. These languages come in two flavors. The first follows the standard approach of path queries, which specify how labels of edges change along a path, but now we extend them with ways of specifying how both labels and data change. From the complexity point of view, the right type of formalisms are subclasses of register automata. These, however, are not well suited for querying. To overcome this, we develop several types of extended regular expressions to specify paths with data and study their querying power and complexity. The second approach adopts the popular XML language XPath and extends it from XML documents to graphs. Depending on the exact set of allowed features, we have a family of languages, and our study shows that it includes efficient and highly expressive formalisms for querying both the structure of the data and the data itself.
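
The kind of query the authors target can be conveyed with a small sketch. The Python snippet below (illustrative only; the graph, its attributes and the query are hypothetical and not taken from the paper) evaluates a path query that mixes topology (a path of `knows` edges) with a data condition (every node on the path carries the same `country` value as the start node), in the spirit of the data-path formalisms discussed above.

```python
from collections import deque

# A tiny property graph: node -> attributes, and labelled edges.
nodes = {
    "ana":   {"country": "CL"},
    "beto":  {"country": "CL"},
    "carla": {"country": "AR"},
    "dani":  {"country": "CL"},
}
edges = {  # (source, label) -> set of targets
    ("ana", "knows"):   {"beto", "carla"},
    ("beto", "knows"):  {"dani"},
    ("carla", "knows"): {"dani"},
}

def same_country_reachable(start):
    """Nodes reachable from `start` via 'knows' edges such that every node
    on the path has the same 'country' value as `start` -- a query that
    combines topology and data."""
    wanted = nodes[start]["country"]
    seen, queue, result = {start}, deque([start]), set()
    while queue:
        current = queue.popleft()
        for target in edges.get((current, "knows"), ()):
            if target not in seen and nodes[target]["country"] == wanted:
                seen.add(target)
                result.add(target)
                queue.append(target)
    return result

print(same_country_reachable("ana"))  # {'beto', 'dani'}
```

A purely topological language can express "reachable via knows edges", and a purely relational one can compare country values, but it is the combination of the two in one query that the surveyed languages are designed for.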

101 citations


Journal ArticleDOI
TL;DR: The challenges of legal research in an increasingly complex, multi-level and multi-lingual world are described, along with how the Eunomos software helps users cut through the information overload to get the legal information they need in an organized and structured way and keep track of the state of the relevant law on any given topic.
Abstract: This paper describes the Eunomos software, an advanced legal document and knowledge management system, based on legislative XML and ontologies. We describe the challenges of legal research in an increasingly complex, multi-level and multi-lingual world and how the Eunomos software helps users cut through the information overload to get the legal information they need in an organized and structured way and keep track of the state of the relevant law on any given topic. Using NLP tools to semi-automate the lower-skill tasks makes this ambitious project a realistic commercial prospect as it helps keep costs down while at the same time allowing greater coverage. We describe the core system from workflow and technical perspectives, and discuss applications of the system for various user groups.

82 citations


Proceedings ArticleDOI
01 Aug 2016
TL;DR: A new release of OpenDial, an open-source toolkit for building and evaluating spoken dialogue systems that relies on an information-state architecture where the dialogue state is represented as a Bayesian network and acts as a shared memory for all system modules.
Abstract: We present a new release of OpenDial, an open-source toolkit for building and evaluating spoken dialogue systems. The toolkit relies on an information-state architecture where the dialogue state is represented as a Bayesian network and acts as a shared memory for all system modules. The domain models are specified via probabilistic rules encoded in XML. OpenDial has been deployed in several application domains such as human-robot interaction, intelligent tutoring systems and multi-modal in-car driver assistants.

74 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel middleware service solution, PrecAche Technology of Android System (PATAS), that overcomes the drawbacks of using the pre-cache approach and introduces a new web-page middleware, Version Flags, to indicate whether PF and PD are expired.

62 citations


Journal ArticleDOI
TL;DR: It was proved that this model improved the computing capacity of the system with a high performance-cost ratio, and it is expected to support decision-making by enterprise managers.
Abstract: A cluster, consisting of a group of computers, acts as a whole system to provide users with computing resources; each computer is a node of the cluster. Cluster computing refers to a system consisting of a complete set of computers connected to each other. With the rapid development of computer technology, cluster computing, with its high performance-cost ratio, has been widely applied in distributed parallel computing. For the large-scale close data of group enterprises, a heterogeneous data integration model was built under a cluster environment based on cluster computing, XML technology and ontology theory. Such a model provides users with unified and transparent access interfaces. Based on cluster computing, the work solves the heterogeneous data integration problem by means of ontology and XML technology, and a good application effect has been achieved compared with the traditional data integration model. Furthermore, it was proved that this model improved the computing capacity of the system with a high performance-cost ratio. Thus, it is hoped to provide support for decision-making by enterprise managers.

51 citations


Journal ArticleDOI
TL;DR: A highly flexible laser scanning simulation framework named Heidelberg LiDAR Operations Simulator (HELIOS) is presented; implemented as a Java library and split into a core component and multiple extension modules, it fulfills its design goals.
Abstract: In many technical domains of modern society, there is a growing demand for fast, precise and automatic acquisition of digital 3D models of a wide variety of physical objects and environments. Laser scanning is a popular and widely used technology to cover this demand, but it is also expensive and complex to use to its full potential. However, there might exist scenarios where the operation of a real laser scanner could be replaced by a computer simulation, in order to save time and costs. This includes scenarios like teaching and training of laser scanning, development of new scanner hardware and scanning methods, or generation of artificial scan data sets to support the development of point cloud processing and analysis algorithms. To test the feasibility of this idea, we have developed a highly flexible laser scanning simulation framework named Heidelberg LiDAR Operations Simulator (HELIOS). HELIOS is implemented as a Java library and split up into a core component and multiple extension modules. Extensible Markup Language (XML) is used to define scanner, platform and scene models and to configure the behaviour of modules. Modules were developed and implemented for (1) loading of simulation assets and configuration (i.e. 3D scene models, scanner definitions, survey descriptions etc.), (2) playback of XML survey descriptions, (3) TLS survey planning (i.e. automatic computation of recommended scanning positions) and (4) interactive real-time 3D visualization of simulated surveys. As a proof of concept, we show the results of two experiments: First, a survey planning test in a scene that was specifically created to evaluate the quality of the survey planning algorithm. Second, a simulated TLS scan of a crop field in a precision farming scenario. The results show that HELIOS fulfills its design goals.

46 citations


Journal ArticleDOI
TL;DR: This study develops a process that translates data without any information loss, managing data and metadata in such a way that they do not increase complexity while keeping a strong linkage between them.
Abstract: In big data, data originates in real time from many distributed and heterogeneous sources in the form of audio, video, text and sound, which makes it massive and complex for traditional systems to handle. Data representation therefore needs to be semantically enriched for better utilization, while being kept simple. Such a representation is possible using the Resource Description Framework (RDF) introduced by the World Wide Web Consortium (W3C). However, bringing and transforming rapidly growing data from different sources and formats into RDF form is still an issue. Improvements are needed to cover the transition of information among all applications while keeping storage simple and avoiding unnecessary complexity. In this study, big data is represented by transforming it into Extensible Markup Language (XML) and then into linked RDF triples in real time, making the transformation more data friendly. We have developed a process that translates data without any information loss, managing data and metadata so that they do not increase complexity and keep a strong linkage between them. The metadata is kept generalized so that it remains useful rather than being dedicated to specific types of data sources. The paper includes a model explaining the functionality of the process and the corresponding algorithms describing how it is implemented. A case study shows the transformation of relational database textual data into RDF, and the results are discussed at the end.
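
As a rough illustration of the relational-to-XML-to-RDF step the paper describes, the sketch below serializes database rows first as XML elements and then as RDF N-Triples. The table, column names and namespaces are hypothetical; this does not reproduce the authors' model or algorithms.

```python
import xml.etree.ElementTree as ET

# Hypothetical relational rows (e.g. fetched with a DB-API cursor).
rows = [
    {"id": 1, "name": "Alice", "city": "Lahore"},
    {"id": 2, "name": "Bob",   "city": "Karachi"},
]
BASE = "http://example.org/person/"   # assumed subject namespace
PROP = "http://example.org/schema/"   # assumed property namespace

# Step 1: wrap each row in XML, preserving column names as element tags.
root = ET.Element("persons")
for row in rows:
    person = ET.SubElement(root, "person", id=str(row["id"]))
    for column, value in row.items():
        if column != "id":
            ET.SubElement(person, column).text = str(value)
print(ET.tostring(root, encoding="unicode"))

# Step 2: emit one RDF triple per XML leaf element (N-Triples syntax).
triples = []
for person in root.findall("person"):
    subject = f"<{BASE}{person.get('id')}>"
    for leaf in person:
        triples.append(f'{subject} <{PROP}{leaf.tag}> "{leaf.text}" .')
print("\n".join(triples))
```

Keeping the column-to-tag and tag-to-property mappings generic, rather than hard-coded per source, is one way to read the paper's point about generalized metadata.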

44 citations


Journal ArticleDOI
TL;DR: This survey paper provides a concise and comprehensive review of the methods related to XML-based semi-structured semantic analysis and disambiguation, and describes current and potential application scenarios that can benefit from XML semantic analysis.
Abstract: Since the last two decades, XML has gained momentum as the standard for web information management and complex data representation. Also, collaboratively built semi-structured information resources, such as Wikipedia, have become prevalent on the Web and can be inherently encoded in XML. Yet most methods for processing XML and semi-structured information handle mainly the syntactic properties of the data, while ignoring the semantics involved. To devise more intelligent applications, one needs to augment syntactic features with machine-readable semantic meaning. This can be achieved through the computational identification of the meaning of data in context, also known as (a.k.a.) automated semantic analysis and disambiguation, which is nowadays one of the main challenges at the core of the Semantic Web. This survey paper provides a concise and comprehensive review of the methods related to XML-based semi-structured semantic analysis and disambiguation. It is made of four logical parts. First, we briefly cover traditional word sense disambiguation methods for processing flat textual data. Second, we describe and categorize disambiguation techniques developed and extended to handle semi-structured and XML data. Third, we describe current and potential application scenarios that can benefit from XML semantic analysis, including: data clustering and semantic-aware indexing, data integration and selective dissemination, semantic-aware and temporal querying, web and mobile services matching and composition, blog and social semantic network analysis, and ontology learning. Fourth, we describe and discuss ongoing challenges and future directions, including: the quantification of semantic ambiguity, expanding XML disambiguation context, combining structure and content, using collaborative/social information sources, integrating explicit and implicit semantic analysis, emphasizing user involvement, and reducing computational complexity.

41 citations


Posted Content
TL;DR: In the context of the emergent Web of Data, it is crucial to provide interoperability and integration mechanisms to bridge the gap between the SW and XML worlds.
Abstract: In the context of the emergent Web of Data, a large number of organizations, institutes and companies (e.g., DBpedia, Geonames, PubMed, ACM, IEEE, NASA, BBC) adopt the Linked Data practices and publish their data utilizing Semantic Web (SW) technologies. On the other hand, the dominant standard for information exchange in the Web today is XML. Many international standards (e.g., Dublin Core, MPEG-7, METS, TEI, IEEE LOM) have been expressed in XML Schema, resulting in a large number of XML datasets. The SW and XML worlds and their developed infrastructures are based on different data models, semantics and query languages. Thus, it is crucial to provide interoperability and integration mechanisms to bridge the gap between the SW and XML worlds. In this chapter, we give an overview and a comparison of the technologies and the standards adopted by the XML and SW worlds. In addition, we outline the latest efforts from the W3C groups, including the latest working drafts and recommendations (e.g., OWL 2, SPARQL 1.1, XML Schema 1.1). Moreover, we present a survey of the research approaches which aim to provide interoperability and integration between the XML and SW worlds. Finally, we present the SPARQL2XQuery and XS2OWL Frameworks, which bridge the gap and create an interoperable environment between the two worlds. These Frameworks provide mechanisms for: (a) Query translation (SPARQL to XQuery translation); (b) Mapping specification and generation (Ontology to XML Schema mapping); and (c) Schema transformation (XML Schema to OWL transformation).

38 citations


Proceedings ArticleDOI
Zhen Hua Liu, Beda Hammerschmidt, Doug McMahon, Ying Liu, Chang Hui Joe
14 Jun 2016
TL;DR: In this article, the authors present JSON DataGuide, an auto-computed dynamic soft schema for JSON collections that closes the functional gap between the fixed-schema SQL world and the schema-less NoSQL world.
Abstract: Oracle release 12cR1 supports JSON data management that enables users to store, index and query JSON data along with relational data. The integration of the JSON data model into the RDBMS allows a new paradigm of data management where data is storable, indexable and queryable without upfront schema definition. We call this new paradigm Flexible Schema Data Management (FSDM). In this paper, we present enhancements to Oracle's JSON data management in the upcoming 12cR2 release. We present JSON DataGuide, an auto-computed dynamic soft schema for JSON collections that closes the functional gap between the fixed-schema SQL world and the schema-less NoSQL world. We present a self-contained query friendly binary format for encoding JSON (OSON) to close the query performance gap between schema-encoded relational data and schema free JSON textual data. The addition of these new features makes the Oracle RDBMS well suited to both fixed-schema SQL and flexible-schema NoSQL use cases, and allows users to freely mix the two paradigms in a single data management system.
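
The idea of a "dataguide"-style soft schema can be sketched in a few lines: walk a collection of JSON documents and record, for every path, the set of types observed. The snippet below is a conceptual illustration only; it is not Oracle's JSON DataGuide format or API, and the sample documents are hypothetical.

```python
import json
from collections import defaultdict

def collect_paths(value, path, guide):
    """Record the JSON type seen at each dotted path."""
    if isinstance(value, dict):
        for key, child in value.items():
            collect_paths(child, f"{path}.{key}" if path else key, guide)
    elif isinstance(value, list):
        for child in value:
            collect_paths(child, f"{path}[*]", guide)
    else:
        guide[path].add(type(value).__name__)

docs = [
    '{"name": "widget", "price": 9.5, "tags": ["new", "sale"]}',
    '{"name": "gadget", "price": 12, "stock": {"qty": 3}}',
]
guide = defaultdict(set)
for doc in docs:
    collect_paths(json.loads(doc), "", guide)

for path in sorted(guide):
    print(path, sorted(guide[path]))
# name ['str'] / price ['float', 'int'] / stock.qty ['int'] / tags[*] ['str']
```

Because the soft schema is derived from the data rather than declared upfront, it can evolve as new documents arrive, which is the gap-closing property the abstract emphasizes.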

Journal ArticleDOI
TL;DR: A mobile-agent-based negotiation approach to integrating manufacturing functions in a distributed manner is presented, along with its fundamental framework and functions; the results show that the proposed scheme is very effective and reasonably acceptable for the integration of manufacturing functions.

Journal ArticleDOI
TL;DR: A systematic review of studies on web service security found that there is a great deal of research going on in web services, dealing mostly with attack detection as well as identification of vulnerabilities in the services.

Journal ArticleDOI
TL;DR: A generic data model, specified as an extensible markup language (XML) schema, is proposed for the log files of G/SBAs, along with a set of analysis methods for identifying useful information from the log files.
Abstract: Extracting information efficiently from game/simulation-based assessment (G/SBA) logs requires two things: a well-structured log file and a set of analysis methods. In this report, we propose a generic data model specified as an extensible markup language (XML) schema for the log files of G/SBAs. We also propose a set of analysis methods for identifying useful information from the log files and implement the methods in a package in the Python programming language, glassPy. We demonstrate the data model and glassPy with logs from a game-based assessment, SimCityEDU.
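
A minimal example of the kind of log processing the report describes might look like the following; the XML structure, event names and indicators here are invented for illustration and do not reflect the actual G/SBA schema, the SimCityEDU logs, or the glassPy API.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Hypothetical game log: a flat sequence of timestamped events.
log_xml = """
<log student="s-001">
  <event t="12.4" type="placeZone" detail="residential"/>
  <event t="30.9" type="placeZone" detail="industrial"/>
  <event t="55.0" type="checkMap"  detail="pollution"/>
  <event t="81.2" type="placeZone" detail="residential"/>
</log>
"""

root = ET.fromstring(log_xml)
events = root.findall("event")

# Simple indicators: how often each action occurs and total active time.
counts = Counter(e.get("type") for e in events)
duration = float(events[-1].get("t")) - float(events[0].get("t"))

print(counts)               # Counter({'placeZone': 3, 'checkMap': 1})
print(f"{duration:.1f} s")  # 68.8 s
```

The point of a well-structured, generic log format is exactly that such analyses can be written once against the schema and reused across different games and assessments.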

Journal ArticleDOI
09 Mar 2016-PLOS ONE
TL;DR: Systems like Couchbase are interesting research targets for scalable storage and querying of archetype-based EHR data when population-based use cases are of interest.
Abstract: This study provides an experimental performance evaluation on population-based queries of NoSQL databases storing archetype-based Electronic Health Record (EHR) data. There are few published studies regarding the performance of persistence mechanisms for systems that use multilevel modelling approaches, especially when the focus is on population-based queries. A healthcare dataset with 4.2 million records stored in a relational database (MySQL) was used to generate XML and JSON documents based on the openEHR reference model. Six datasets with different sizes were created from these documents and imported into three single machine XML databases (BaseX, eXistdb and Berkeley DB XML) and into a distributed NoSQL database system based on the MapReduce approach, Couchbase, deployed in different cluster configurations of 1, 2, 4, 8 and 12 machines. Population-based queries were submitted to those databases and to the original relational database. Database size and query response times are presented. The XML databases were considerably slower and required much more space than Couchbase. Overall, Couchbase had better response times than MySQL, especially for larger datasets. However, Couchbase requires indexing for each differently formulated query and the indexing time increases with the size of the datasets. The performances of the clusters with 2, 4, 8 and 12 nodes were not better than the single node cluster in relation to the query response time, but the indexing time was reduced proportionally to the number of nodes. The tested XML databases had acceptable performance for openEHR-based data in some querying use cases and small datasets, but were generally much slower than Couchbase. Couchbase also outperformed the response times of the relational database, but required more disk space and had a much longer indexing time. Systems like Couchbase are thus interesting research targets for scalable storage and querying of archetype-based EHR data when population-based use cases are of interest.

Proceedings ArticleDOI
02 Jun 2016
TL;DR: F# Data is a library that integrates external structured data into F#, using a shape inference algorithm that infers a shape from representative sample documents and then integrates the inferred shape into the F# type system using type providers.
Abstract: Most modern applications interact with external services and access data in structured formats such as XML, JSON and CSV. Static type systems do not understand such formats, often making data access more cumbersome. Should we give up and leave the messy world of external data to dynamic typing and runtime checks? Of course not! We present F# Data, a library that integrates external structured data into F#. As most real-world data does not come with an explicit schema, we develop a shape inference algorithm that infers a shape from representative sample documents. We then integrate the inferred shape into the F# type system using type providers. We formalize the process and prove a relative type soundness theorem. Our library significantly reduces the amount of data access code and it provides additional safety guarantees when contrasted with the widely used weakly typed techniques.
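
To convey what inferring a shape from samples means, here is a language-neutral sketch in Python; the actual F# Data algorithm, its typing rules and the type-provider machinery are considerably more involved, and the sample records below are hypothetical. The sketch unifies the field types seen across samples and marks fields missing from some samples as optional.

```python
def infer_shape(samples):
    """Infer a field -> (type name, optional?) map from sample records,
    mirroring the idea of inferring a shape from representative samples."""
    seen_types = {}
    for sample in samples:
        for field, value in sample.items():
            seen_types.setdefault(field, set()).add(type(value).__name__)

    shape = {}
    for field, types in seen_types.items():
        optional = any(field not in sample for sample in samples)
        if types <= {"int", "float"} and len(types) > 1:
            unified = "float"              # crude numeric unification
        elif len(types) == 1:
            unified = next(iter(types))
        else:
            unified = "any"                # fall back for mixed types
        shape[field] = (unified, optional)
    return shape

samples = [
    {"title": "Rain", "tempC": 12},
    {"title": "Sun",  "tempC": 18.5, "wind": 7},
]
print(infer_shape(samples))
# {'title': ('str', False), 'tempC': ('float', False), 'wind': ('int', True)}
```

In F# Data the inferred shape is then surfaced to the compiler through a type provider, so misspelled or missing fields become compile-time errors rather than runtime failures.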

Proceedings Article
01 May 2016
TL;DR: TEITOK is a web-based framework that combines textual and linguistic annotation within a single TEI-based XML document, provides several built-in NLP tools to automatically (pre)process texts, and is highly customizable.
Abstract: TEITOK is a web-based framework for corpus creation, annotation, and distribution, that combines textual and linguistic annotation within a single TEI based XML document. TEITOK provides several built-in NLP tools to automatically (pre)process texts, and is highly customizable. It features multiple orthographic transcription layers, and a wide range of user-defined token-based annotations. For searching, TEITOK interfaces with a local CQP server. TEITOK can handle various types of additional resources including Facsimile images and linked audio files, making it possible to have a combined written/spoken corpus. It also has additional modules for PSDX syntactic annotation and several types of stand-off annotation.

Journal Article
TL;DR: A framework for knowledge management is presented, which takes into account the different levels of knowledge that exist within e-businesses, and the architecture of a KM system that incorporates enabling technologies such as intelligent agents and XML is discussed.
Abstract: Knowledge Management (KM) is emerging as one of the management tools to gain competitive advantage and e-businesses are beginning to invest in KM initiatives. Though several organizations have reported successful KM projects, there are many failures due to a variety of reasons including the incongruence between strategic and KM objectives, as well as lack of a framework for supporting KM related activities. This paper presents a framework for knowledge management, which takes into account the different levels of knowledge that exist within e-businesses. The architecture of a KM system that incorporates enabling technologies such as intelligent agents and XML is also discussed.

Proceedings Article
29 Aug 2016
TL;DR: A pipeline of NLP tools is presented that accepts natural language text as input and outputs knowledge in a machine-readable format, producing frame-based knowledge as RDF triples or XML, including the word-level alignment with the surface form.
Abstract: We present KNEWS, a pipeline of NLP tools that accepts natural language text as input and outputs knowledge in a machine-readable format. The tool outputs frame-based knowledge as RDF triples or XML, including the word-level alignment with the surface form, as well as first-order logical formulae. KNEWS is freely available for download. Moreover, thanks to its versatility, KNEWS has already been employed for a number of different applications for information extraction and automatic reasoning.

Journal ArticleDOI
TL;DR: This paper proposes a novel labeling scheme that not only completely avoids re-labeling but also improves the performance of determining the structural relationships when XML documents are frequently updated at arbitrary positions.
Abstract: Nowadays several labeling schemes are proposed to facilitate XML query processing, in which structural relationships among nodes could be quickly determined without accessing original XML documents. However, previous node indexing often encounters some troublesome problems when updates take place, such as a large amount of labels requiring re-labeling, huge space requirements for the updated labels, and inefficient determination of structural relationships. In this paper, we propose a novel labeling scheme that not only completely avoids re-labeling but also improves the performance of determining the structural relationships when XML documents are frequently updated at arbitrary positions. The fundamental difference between our scheme and previous ones is that, the gain in update performance of our labeling scheme does not come at the expense of the label size and the query performance. In particular, instead of completely assigning new labels for inserted nodes, the deleted labels are reused in our labeling scheme for encoding newly inserted nodes, which could effectively lower the label size. Moreover, we formally analyze the effectiveness of our proposed labeling scheme. Finally, we complement our analysis with experimental results on a range of real XML data.
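
For readers unfamiliar with labeling schemes, the toy example below shows how prefix (Dewey-style) labels let structural relationships be decided from the labels alone, without touching the document. It illustrates the general idea only, not the authors' update-friendly scheme or its label-reuse mechanism.

```python
def is_ancestor(a, b):
    """a is an ancestor of b iff a's label is a proper prefix of b's."""
    return len(a) < len(b) and b[:len(a)] == a

def is_parent(a, b):
    return is_ancestor(a, b) and len(b) == len(a) + 1

def are_siblings(a, b):
    return a != b and a[:-1] == b[:-1]

# Dewey-style labels: the root is (1,), its second child is (1, 2), etc.
book    = (1,)
chapter = (1, 2)
section = (1, 2, 1)

print(is_ancestor(book, section))      # True
print(is_parent(book, section))        # False
print(are_siblings((1, 1), chapter))   # True
```

The weakness of such naive schemes is that inserting a node may force many existing labels to be renumbered; avoiding exactly that re-labeling, without inflating label size or slowing queries, is the contribution claimed above.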

Journal ArticleDOI
TL;DR: This paper concentrates on a crucial issue in fuzzy data management, fuzzy data modeling in XML, and provides a generic overview of the approaches proposed for modeling fuzzy XML data.

Journal ArticleDOI
01 Aug 2016
TL;DR: This paper outlines a vision of a different kind of interface, one that is built (in part) from the data; the paper focuses on graph databases, but the approach is applicable to several other kinds of databases such as JSON and XML.
Abstract: Visual query interfaces make it easy for scientists and other nonexpert users to query a data collection. Heretofore, visual query interfaces have been statically-constructed, independent of the data. In this paper we outline a vision of a different kind of interface, one that is built (in part) from the data. In our data-driven approach, the visual interface is dynamically constructed and maintained. A data-driven approach has many benefits such as reducing the cost in constructing and maintaining an interface, superior support for query formulation, and increased portability of the interface. We focus on graph databases, but our approach is applicable to several other kinds of databases such as JSON and XML.

Proceedings ArticleDOI
18 Jul 2016
TL;DR: A taxonomy of the types of XML injection attacks is discussed and used to derive four different ways to mutate XML messages, turning them into attacks (tests) automatically; combined with a constraint solver that keeps the mutated messages valid, the approach achieves results much better than what a state-of-the-art tool based on fuzz testing could achieve.
Abstract: XML is extensively used in web services for integration and data exchange. Its popularity and wide adoption make it an attractive target for attackers and a number of XML-based attack types have been reported recently. This raises the need for cost-effective, automated testing of web services to detect XML-related vulnerabilities, which is the focus of this paper. We discuss a taxonomy of the types of XML injection attacks and use it to derive four different ways to mutate XML messages, turning them into attacks (tests) automatically. Further, we consider domain constraints and attack grammars, and use a constraint solver to generate XML messages that are both malicious and valid, thus making it more difficult for any protection mechanism to recognise them. As a result, such messages have a better chance to detect vulnerabilities. Our evaluation on an industrial case study has shown that a large proportion (78.86%) of the attacks generated using our approach could circumvent the first layer of security protection, an XML gateway (firewall), a result that is much better than what a state-of-the-art tool based on fuzz testing could achieve.
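
A very small example of the general idea of mutating a benign XML message into an injection test is sketched below. It is not one of the paper's four mutation operators, and the element names and payload are hypothetical: a field value is replaced by a payload that closes the current element early and smuggles in a new one, and both messages are checked against a parser.

```python
import xml.etree.ElementTree as ET

template = "<order><item>{item}</item><qty>1</qty></order>"

benign  = template.format(item="book")
# Tag-injection payload: closes <item> early and injects an extra element.
payload = "book</item><discount>100</discount><item>book"
attack  = template.format(item=payload)

for label, message in [("benign", benign), ("mutated", attack)]:
    try:
        root = ET.fromstring(message)
        print(label, "parses;", len(root), "child elements")
    except ET.ParseError as err:
        print(label, "rejected:", err)
```

Note that the mutated message is still well-formed XML (it simply gains an unexpected <discount> element), which mirrors the paper's point: attacks that remain valid are the ones most likely to slip past gateways and other protection mechanisms.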

Proceedings ArticleDOI
01 Oct 2016
TL;DR: This technology briefing describes srcML, the toolkit, and the application of XPath and XSLT to query and modify source code.
Abstract: This technology briefing is intended for those interested in constructing custom software analysis and manipulation tools to support research or commercial applications. srcML (srcML.org) is an infrastructure consisting of an XML representation for C/C++/C#/Java source code along with efficient parsing technology to convert source code to-and-from the srcML format. The briefing describes srcML, the toolkit, and the application of XPath and XSLT to query and modify source code. Additionally, a short tutorial of how to use srcML and XML tools to construct custom analysis and manipulation tools will be conducted.
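
The sort of query the briefing covers can be approximated as follows: run an XPath-style expression over a srcML document with Python's standard XML tooling. The fragment below is hand-written in the style of srcML output; the namespace URI and markup details are reproduced from memory and may differ from what the srcML toolkit actually emits, so treat them as assumptions.

```python
import xml.etree.ElementTree as ET

# Assumed srcML namespace and a hand-written srcML-like fragment.
SRC = "http://www.srcML.org/srcML/src"
srcml = f"""
<unit xmlns="{SRC}">
  <function><type><name>int</name></type> <name>max</name>
    <parameter_list>(<parameter><decl><type><name>int</name></type>
      <name>a</name></decl></parameter>)</parameter_list>
  </function>
</unit>
"""

root = ET.fromstring(srcml)
ns = {"src": SRC}
# "What are the names of all functions?" expressed as a path query.
for fn in root.findall(".//src:function", ns):
    print(fn.find("src:name", ns).text)   # max
```

Because the source code is exposed as ordinary XML, the same query could be run with any XPath or XSLT processor, which is what makes srcML convenient for building custom analysis and transformation tools.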

Journal ArticleDOI
TL;DR: This work proposes a black-box fuzzing approach to detect different types of XQuery injection vulnerabilities in web applications driven by native XML databases, and presents a prototype system, "XQueryFuzzer", based on the proposed approach.

19 Nov 2016
TL;DR: In this paper, a generic synchronization framework based on the operational transformation approach is presented that supports synchronisation of text files, calendars, and XML files using the same tool.
Abstract: Synchronisation of replicated shared data is a key issue in collaborative writing systems. Most existing synchronization tools are specific to a particular type of shared data, i.e. text files, calendars, XML files. Therefore, users must use different tools to maintain their different copies up-to-date. In this paper we propose a generic synchronization framework based on the operational transformation approach that supports synchronisation of text files, calendars, XML files by using the same tool. We present how our framework is used to support cooperative writing of XML documents. An implementation is illustrated through the revision control system called So6, which is part of a distributed collaborative technology called LibreSource.
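
The core of the operational transformation approach can be shown with the classic insert-against-insert rule for plain text; the sketch below is that textbook rule only (tie-breaking between equal positions is ignored), not the generic multi-type framework or the So6 implementation described above.

```python
def transform_insert(op, against):
    """Adjust insert `op` so it can be applied after `against` has been
    applied, preserving the user's intent (classic OT rule for text)."""
    pos, text = op
    other_pos, other_text = against
    if other_pos <= pos:
        pos += len(other_text)      # the earlier insert shifted our position
    return (pos, text)

def apply_insert(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

doc = "XML files"
op_a = (0, "shared ")     # site A inserts at the front
op_b = (4, "data ")       # site B concurrently inserts after "XML "

# Both application orders converge on the same document.
left  = apply_insert(apply_insert(doc, op_a), transform_insert(op_b, op_a))
right = apply_insert(apply_insert(doc, op_b), transform_insert(op_a, op_b))
print(left, "|", right)   # shared XML data files | shared XML data files
```

Making a framework generic, as the paper proposes, amounts to defining such transformation functions for each data type (text edits, calendar events, XML tree operations) behind a common interface.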

Journal ArticleDOI
TL;DR: A new mapping approach, known as XAncestor, is presented that consists of two algorithms: an XML mapping algorithm (XtoDB) and a query mapping algorithm (XtoSQL) that translates XPath queries into corresponding SQL queries based on the constructed RDB in order to reduce the query response time.
Abstract: XML has become a common language for data exchange on the Web, so it needs to be managed effectively. There are four central problems in XML data management: capture, storage, retrieval, and exchange. Even though numerous database systems are available, the relational database (RDB) is often used to store and query the content of XML documents. Therefore the processes of mapping from XML to RDB and vice versa occur frequently. Numerous researchers have proposed approaches to map hierarchically structured XML documents into the tabular format of a RDB. However, the previously developed approaches have faced problems in terms of storage and query response time. If the design of a RDB is inefficient, the number of join operations between tables increases when a query is executed, which affects the query response time. To overcome this limitation, this paper proposes a new mapping approach, known as XAncestor, which consists of two algorithms: an XML mapping algorithm (XtoDB) and a query mapping algorithm (XtoSQL). XtoDB maps XML documents to a fixed RDB with less storage space. XtoSQL translates XPath queries into corresponding SQL queries based on the constructed RDB in order to reduce the query response time i.e., the time taken to execute the translated SQL query. XAncestor is then developed as a prototype in order to test its effectiveness. The results of XAncestor are compared with those produced by five similar approaches. The comparison proves that XAncestor performs better than the previously developed approaches in terms of effectiveness and scalability. The correctness of XAncestor is also verified. The paper concludes with some recommendations for further work.
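
To make the XPath-to-SQL idea concrete, here is a deliberately simplified translation of a child-axis-only XPath into SQL over a generic node table (one row per element, with a parent pointer). This is an illustration of the general technique, not XAncestor's fixed relational schema or the XtoSQL algorithm.

```python
def xpath_to_sql(xpath):
    """Translate /a/b/c (child axes only) into a self-join over a generic
    node(id, parent_id, tag) table -- a simplified stand-in for XtoSQL."""
    steps = [s for s in xpath.split("/") if s]
    select = f"SELECT n{len(steps) - 1}.id"
    joins = ["FROM node n0"]
    where = [f"n0.tag = '{steps[0]}'", "n0.parent_id IS NULL"]
    for i, tag in enumerate(steps[1:], start=1):
        joins.append(f"JOIN node n{i} ON n{i}.parent_id = n{i - 1}.id")
        where.append(f"n{i}.tag = '{tag}'")
    return "\n".join([select] + joins + ["WHERE " + " AND ".join(where)])

print(xpath_to_sql("/library/book/title"))
```

The snippet also makes the paper's performance concern visible: a naive node-per-row mapping needs one join per XPath step, which is exactly the cost a better-designed relational layout tries to reduce.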

Journal ArticleDOI
TL;DR: This paper proposes a framework called temporal OWL 2 (τOWL), inspired by the τXSchema framework for XML data, which provides a low-impact solution to the temporal management of Semantic Web ontologies and guarantees logical and physical data independence for temporal schemas.
Abstract: The W3C OWL 2 recommendation is an ontology language for the Semantic Web. It allows defining both schema (i.e., entities, axioms, and expressions) and instances (i.e., individuals) of ontologies. However, OWL 2 lacks explicit support for time-varying schema or for time-varying instances. Hence, knowledge engineers or maintainers of semantics-based Web resources have to use ad hoc techniques to specify OWL 2 time-varying ontologies. In this paper, for a disciplined and systematic approach to the temporal management of Semantic Web ontologies, we propose the adoption of a framework called temporal OWL 2 (τOWL), which is inspired by the τXSchema framework defined for XML data. In a way similar to what happens in τXSchema, τOWL allows creating a temporal OWL 2 ontology from a conventional (i.e., non-temporal) OWL 2 ontology and a set of logical and physical annotations. Logical annotations identify which elements of the ontology can vary over time; physical annotations specify how the time-varying aspects are represented in the OWL 2 document. Using annotations to integrate temporal aspects in the traditional Semantic Web, our framework (1) guarantees logical and physical data independence for temporal schemas and (2) provides a low-impact solution, since it requires neither modifications of existing Semantic Web ontologies, nor extensions to the OWL 2 recommendation and Semantic Web standards. Moreover, since the conventional schema and annotation documents could evolve over time to respond to new applications' requirements, τOWL supports temporal schema versioning by allowing these components to be changed and by keeping track of their evolution through the conventional schema versions and annotation document versions, respectively. Two complete sets of operations are proposed for changing the conventional schema and annotation documents; to complete the picture, a set of operations is also introduced for updating the temporal schema, which must consequently be changed each time one of the mentioned components evolves over time. To show the feasibility of our approach, a prototype tool, named τOWL-Manager, is presented.

Journal ArticleDOI
Xiangguo Zhao1, Xin Bi1, Guoren Wang1, Zhen Zhang1, Hongbo Yang1 
TL;DR: A novel solution to classify uncertain XML documents is proposed, including an uncertain XML document representation and two uncertain learning algorithms based on the Extreme Learning Machine.

Proceedings ArticleDOI
01 Jan 2016
TL;DR: This paper designs and implements a web-based, real-time programmable logic controller (PLC) data monitoring system on EPICS data, based on a browser and server architecture, which also provides data-tip display and a full-screen mode.
Abstract: The recent huge interest in machine-to-machine communication is known as the Internet of Things (IoT), which allows autonomous devices to use the Internet for exchanging data. The Internet and the World Wide Web have caused a revolution in communication between people; they were born from the need to exchange scientific information between instrumentation. Control System Studio (CSS) BOY OPI (Best Operator Yet Operator Interface) is not only an application but also a framework that can be extended with widgets and data sources. The framework provides implementations for all the common functionalities, such as an XML file reader and writer, PV connection handling, abstract widgets, properties, the OPI Runtime and the OPI Editor, and it can be extended using the extension points it provides. Monitoring systems such as CSS are extremely important in the Experimental Physics and Industrial Control System (EPICS); most of them are based on a client/server (C/S) architecture. This paper designs and implements a web-based, real-time programmable logic controller (PLC) data monitoring system for EPICS data. The system is based on a browser and server (B/S) architecture. Using MODBUS/TCP communication, the data are archived in EPICS and then displayed in a real-time chart in a browser (Internet Explorer or Firefox/Mozilla). The chart is refreshed at a regular interval and can be zoomed and adjusted; it also provides data-tip display and a full-screen mode. Acquisition of the data is handled by a multi-data acquisition card with hardwired communication to the PLC through 24 VDC to 5 VDC (and vice versa) electronic circuits.