
Showing papers in "CTIT technical report series in 2001"



Journal Article
TL;DR: A formal execution semantics for UML activity diagrams that is appropriate for workflow modelling, based upon the Statemate semantics of statecharts and extended with transactional properties to deal with data manipulation.
Abstract: In this report we define a formal execution semantics for UML activity diagrams that is appropriate for workflow modelling. Our workflow models express software requirements and therefore assume a perfect implementation. In our semantics, software state changes do not take time. It is based upon the Statemate semantics of statecharts, extended with some transactional properties to deal with data manipulation. Our semantics also deals with real time and with multiple state instances. We first give an informal description of our semantics and then formalise this in terms of labelled transition systems. We compare our semantics with other semantics for UML activity diagrams and workflow modelling by analysing the different choices made in those semantics.

86 citations
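The semantics is formalised in terms of labelled transition systems. As a purely illustrative sketch of that structure (the states, labels, and two-activity workflow below are invented, not taken from the report), a labelled transition system can be represented as follows:

```python
# A minimal labelled transition system (LTS): states, labels, a transition
# relation, and an initial state. Names are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class LTS:
    states: frozenset       # configurations of the activity diagram
    labels: frozenset       # e.g. activity completions, timeouts
    transitions: frozenset  # set of (source, label, target) triples
    initial: str

    def successors(self, state, label):
        """All states reachable from `state` in one `label`-step."""
        return {t for (s, l, t) in self.transitions if s == state and l == label}

# Usage: a toy workflow where activity A must finish before B starts.
lts = LTS(
    states=frozenset({"idle", "A_running", "B_running", "done"}),
    labels=frozenset({"start_A", "finish_A", "finish_B"}),
    transitions=frozenset({
        ("idle", "start_A", "A_running"),
        ("A_running", "finish_A", "B_running"),
        ("B_running", "finish_B", "done"),
    }),
    initial="idle",
)
assert lts.successors("A_running", "finish_A") == {"B_running"}
```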


Journal Article
TL;DR: A survey of the research, development, and standardization efforts in the area of business-to-business e-contracting is presented and an acronym list of the most common abbreviations in e-commerce terminology is provided.
Abstract: The rapid development of IT gives rise to changes in many industries and different spheres of life. It simplifies and speeds up processes and data processing. As a result, it changes the processes and the data involved in them. The area of commerce and contracting in particular undergoes similar changes. The new developments aim at speeding up, facilitating and globalizing the process of contracting. This inevitably leads to new contracting processes and changes the standard approach for contracting. This report presents a survey of the research, development, and standardization efforts in the area of business-to-business e-contracting. The report is divided into three parts. The first part presents a multi-dimensional contracting framework. The second part reviews current standardization processes and developments in the e-commerce field. The third part provides summaries and comments on a selected set of papers on e-contracting. Depending on their goals, focus and content, the papers are categorized in accordance with the contract cycle phases described in one of the dimensions of the contracting framework. Where appropriate, the content of the papers is related to the dimensions of the contracting framework. This report further provides an acronym list of the most common abbreviations in e-commerce terminology.

51 citations


Journal Article
TL;DR: In this article, the authors present a system for automatic indexing and content-based retrieval of multimedia documents. The system uses multi-modal clues obtained from three different multimedia components: audio, video, and superimposed text.
Abstract: Content-based video retrieval is emerging as an important part in the process of utilization of various multimedia documents. In this report we present a novel system for the automatic indexing and content-based retrieval of multimedia documents. We chose the domain of Formula 1 sport videos because the manual annotation of Formula 1 races is complicated and time consuming. Our system uses multi-modal clues, obtained from three different multimedia components: audio, video, and superimposed text. The audio and video feature extraction subsystems are developed to extract important parameters from multimedia documents. We also performed text detection and recognition to extract some semantic information superimposed in the Formula 1 race video. To unify the audio and video clues we employed dynamic Bayesian networks. Many experiments that we carried out are also presented, as well as the results and conclusions drawn from them.

20 citations
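The paper fuses the audio and video clues with dynamic Bayesian networks. The sketch below substitutes a much simpler technique, single-time-slice naive Bayes fusion, to illustrate the probabilistic combination of per-modality evidence that a DBN would additionally chain over time; all events, cues, and probabilities are invented.

```python
# Combine independent per-modality likelihoods P(cue | event) with a prior.
def fuse(prior, likelihoods):
    """Posterior over events given per-modality likelihoods."""
    posterior = {}
    for event, p in prior.items():
        for modality in likelihoods:
            p *= likelihoods[modality][event]
        posterior[event] = p
    total = sum(posterior.values())
    return {e: p / total for e, p in posterior.items()}

prior = {"overtake": 0.2, "crash": 0.05, "normal": 0.75}
likelihoods = {
    "audio": {"overtake": 0.6, "crash": 0.9, "normal": 0.2},  # excited speech
    "video": {"overtake": 0.7, "crash": 0.3, "normal": 0.3},  # fast camera pan
}
print(fuse(prior, likelihoods))  # "overtake" dominates after fusion
```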


Journal Article
TL;DR: The Calculating with Concepts (CC) technique, which has been developed to improve the precision of UML class diagrams and allows formal reasoning based on these diagrams, is introduced.
Abstract: This paper introduces the Calculating with Concepts (CC) technique, which has been developed to improve the precision of UML class diagrams and allows formal reasoning based on these diagrams. This paper aims at showing the industrial benefits of using such a formal and rigorous approach to reason about business processes and software applications in the early phases of the software development process. The paper discusses how the CC technique can be used in the specification of business processes and in the development of their supporting software applications or tools. This paper also illustrates the use of the technique with a realistic case study on tool integration.

16 citations


Journal Article
TL;DR: An integrated approach to information handling and knowledge management in web-based open-ended learning environments that supports both learners and instructors in information structuring and task-oriented processing and usage is discussed.
Abstract: This paper discusses an integrated approach to information handling and knowledge management in web-based open-ended learning environments. It supports both learners and instructors in information structuring and task-oriented processing and usage. AIMS, a web-based intelligent tool for task-based information and performance support, is implemented to exemplify the theoretical assumptions of this approach. AIMS focuses on three important aspects of the information handling process: information structuring, information visualization, and a user-centered approach. We employ concept maps (CM) to build a subject domain ontology and use it as a basis for defining course structures. The domain CM is also used for attractive non-linear visualization and conceptual graphical navigation of the subject domain and the search results, thus allowing for more efficient information searches. In order to provide appropriate adaptation to the individual information needs and preferences of the learners, AIMS models their behavior.

11 citations


Journal Article
TL;DR: The requirements for QoS enhancements to the Bluetooth 1.0 specification and a Quality of Service Framework are presented, and the document concludes with issues for future work.
Abstract: The Quality of Service functions and procedures included in the Bluetooth 1.0 specification have been reviewed. Next, issues associated with providing Quality of Service over a wireless link in general, and Bluetooth in particular, have been investigated. Although the Bluetooth 1.0 specification provides some Quality of Service support, some deficiencies have been identified. The requirements for QoS enhancements to the Bluetooth 1.0 specification and a Quality of Service Framework are presented in this document. The document concludes with issues for future work.

8 citations


Journal Article
TL;DR: In this article, a mathematically derived selectivity model is proposed and experimentally verified on Zipfian-distributed IR databases; once its two parameters have been computed from the data fragmentation (and recomputed after each, usually infrequent, update), the model can forget the data distribution, resulting in fast and quite good selectivity estimation.
Abstract: New application domains cause today's database sizes to grow rapidly, posing great demands on technology. Data fragmentation facilitates techniques (like distribution, parallelization, and main-memory computing) meeting these demands. Also, fragmentation might help to improve the efficient processing of query types such as top-N. Database design and query optimization require a good notion of the costs resulting from a certain fragmentation. Our mathematically derived selectivity model facilitates this. Once its two parameters have been computed from the fragmentation (and recomputed after each, though usually infrequent, update), our model can forget the data distribution, resulting in fast and quite good selectivity estimation. We show experimental verification for Zipfian-distributed IR databases.

7 citations
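As a rough illustration of a two-parameter, distribution-forgetting selectivity model (the formulas and parameter names below are illustrative, not the report's actual model): assume term frequencies follow Zipf's law, f(r) = C / r^s; the selectivity of a fragment holding the k most frequent terms then follows from (C, s) alone, with no need to retain the data distribution itself.

```python
# Fit the two parameters (C, s) once, then estimate fragment selectivity
# from them alone.
def fit_zipf(total_postings, num_terms, s=1.0):
    """Normalising constant C such that sum_{r=1..n} C / r**s = total."""
    harmonic = sum(1.0 / r**s for r in range(1, num_terms + 1))
    return total_postings / harmonic

def fragment_selectivity(C, s, k, total_postings):
    """Estimated fraction of postings in the fragment of the k top terms."""
    covered = sum(C / r**s for r in range(1, k + 1))
    return covered / total_postings

C = fit_zipf(total_postings=1_000_000, num_terms=50_000)
print(fragment_selectivity(C, 1.0, k=500, total_postings=1_000_000))
# With s = 1, the top 1% of terms already covers well over half the postings.
```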


Journal Article
TL;DR: Ongoing work concerning the language modelling and lexicon optimization of a Dutch speech recognition system for Spoken Document Retrieval is described: the collection and normalization of a training data set and the optimization of the recognition lexicon.
Abstract: In this paper, ongoing work concerning the language modelling and lexicon optimization of a Dutch speech recognition system for Spoken Document Retrieval is described: the collection and normalization of a training data set and the optimization of our recognition lexicon. Effects on lexical coverage of the amount of training data, of decompounding compound words and of different selection methods for proper names and acronyms are discussed.

5 citations


Journal Article
TL;DR: The CarSim system as mentioned in this paper uses a template formalism to represent a written accident report and uses a planning component to model the trajectories and temporal values of every vehicle involved in the accident.
Abstract: The problem of generating a 3D simulation of a car accident from a written description can be divided into two subtasks: the linguistic analysis and the virtual scene generation. As a means of communication between these two system parts, we designed a template formalism to represent a written accident report. The CarSim system processes formal descriptions of accidents and creates corresponding 3D simulations. A planning component models the trajectories and temporal values of every vehicle that is involved in the accident. Two algorithms plan the simulation of the accident. The CarSim system contains algorithms for planning collisions with static objects, as well as algorithms for modeling accidents consisting of more than one collision and collisions with vehicles which have stopped.

5 citations
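A hypothetical rendering of what such a template formalism could look like between the linguistic analysis and the scene generation; the field names and vocabulary below are invented for illustration and are not taken from the paper.

```python
# Invented intermediate representation of a written accident report.
from dataclasses import dataclass

@dataclass
class VehicleEvent:
    vehicle: str     # e.g. "car_A"
    action: str      # e.g. "drive_forward", "turn_left", "stop"
    direction: str   # initial heading, e.g. "north"

@dataclass
class AccidentTemplate:
    vehicles: list[VehicleEvent]
    collisions: list[tuple]  # pairs such as ("car_A", "car_B") or ("car_A", "tree")

report = AccidentTemplate(
    vehicles=[VehicleEvent("car_A", "drive_forward", "north"),
              VehicleEvent("car_B", "turn_left", "east")],
    collisions=[("car_A", "car_B")],
)
# A planning component would read this template and assign trajectories and
# temporal values to every vehicle before the 3D simulation is generated.
```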


Journal Article
TL;DR: In this article, the potential of polymer thick-film sensors for use as biometric sensors on smartcards was assessed, and the outline of a novel biometric discrimination method was presented, for which an equal false-acceptance and false-rejection error rate of 2.3% was reported.
Abstract: In this paper the potential of polymer thick-film sensors is assessed for use as biometric sensors on smartcards. Piezoelectric and piezoresistive sensors have been printed on flexible polyester, then bonded to smartcard blanks. The tactile interaction of a person with these sensors has been investigated. It was found that whilst piezoresistive films offer good dynamic and static sensing properties, the relative complexity of their measurement circuitry over that of piezoelectrics favours the use of piezoelectric films in the smartcard arena. The outline of a novel biometric discrimination method is presented, for which an equal false-acceptance and false-rejection error rate of 2.3% is reported.

Journal Article
TL;DR: It is shown that although timing constraints cannot be explicitly represented in LOTOS, the language is suitable for the specification of co-ordination of real-time tasks, which is the main functionality of the real-time kernel.
Abstract: This paper presents and discusses the LOTOS specification of a real-time parallel kernel. The purpose of this specification exercise has been to evaluate LOTOS with respect to its capabilities to model real-time features with a realistic industrial product. LOTOS was used to produce the formal specification of TRANS-RTXC, which is a real-time parallel kernel developed by Intelligent Systems international. This paper shows that although timing constraints cannot be explicitly represented in LOTOS, the language is suitable for the specification of co-ordination of real-time tasks, which is the main functionality of the real-time kernel. This paper also discusses the validation process of the kernel specification and the role of tools in this validation process. We believe that our experience (use of structuring techniques, use of validation methods and tools, etc) is valuable for designers who want to apply formal models in their design or analysis tasks.

Journal Article
TL;DR: A novel theoretical perspective to understand implementation of such technologies based on learning theories is proposed, and it is conjectured that a better organisational learning climate could promote successful implementation of groupware through group learning.
Abstract: Nowadays, more and more information and communication technologies (ICT) gain the characteristics of groupware as they strive to support different aspects of collaborative work. These types of ICT become progressively intertwined in the infrastructures of companies, and therefore their implementation deserves as much attention as their design and development. However, the literature keeps providing examples of failed groupware projects. In this paper we propose a novel theoretical perspective for understanding the implementation of such technologies, based on learning theories: a technology implementation process is regarded as a learning process. The model describes two levels of the implementation process: the user level (individuals and groups) and the organisational level. At the group level it is based on five steps of collaborative learning within a group of users. At the organisational level it is related to the learning climate. By means of a longitudinal case study we have operationalized the constructs from the model into concrete user-group and managerial activities that advance the implementation of groupware. The discussion leads us to conjecture that a better organisational learning climate could promote successful implementation of groupware through group learning. Ultimately, the insights derived from the model should lead to tangible ways to foster the learning climate.

Journal Article
TL;DR: The identification of problems associated with planning, operationalisation and monitoring in group-based learning in higher education and the identification of telematic support options which, in combination with appropriate instructional decisions, have the potential to remedy these problems are identified.
Abstract: Group-based learning is being introduced into many settings in higher education. Is this a sustainable development with respect to the resources required? Under what conditions can group-based learning be applied successfully in distance education and in increasingly flexible campus-based learning? Can telematic support facilitate and enrich courses where group-based learning is applied? These questions formed the basis of the motivation for the research project whose main results are presented here. The goals set for the research were the identification of problems associated with planning, operationalisation and monitoring in group-based learning in higher education and the identification of telematic support options which, in combination with appropriate instructional decisions, have the potential to remedy these problems. The solutions identified were tested in the context of three case studies.

Journal Article
TL;DR: This article discusses the syntax, semantics, and the newsgroup application of DXL, shows how heterogeneous sources can be queried and integrated into a single XML document, and discusses the architecture that is set up to implement DXL.
Abstract: With large volumes of data being exchanged over the Internet, query languages are needed to bridge the gap between databases and the web. DXL provides an extendible framework, designed to exchange data between heterogeneous sources and targets, such as databases and XML documents. One application of DXL, the focus of this article, is to retrieve data from databases and yield the result in XML documents. The major contribution of DXL compared to other query languages, like RXL and XQuery, is that the structure of an output XML document may depend not only on the DXL query but also on the source data. To achieve this, DXL uses a construct clause, where the structure of the XML document is (partially) defined; but unlike other XML query languages, this clause is embedded in a template, which can be called recursively. We demonstrate the power of DXL with a newsgroup example, where each posted message may have arbitrarily nested follow-ups. The extendibility of the framework is ensured by using XML to describe the syntax of DXL. Besides discussing the syntax, semantics, and the newsgroup application of DXL, we will also show how heterogeneous sources can be queried and integrated into a single XML document, and we discuss the architecture that is set up to implement DXL.
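The recursively callable template is the key idea here. The following sketch mimics it for the newsgroup example in plain Python (this is not DXL syntax, and the message structure is invented): each call renders one message and instantiates itself for every follow-up, so the nesting of the output XML follows the source data rather than the query alone.

```python
# Recursive template: output structure is driven by the source data.
def message_template(msg):
    """Render one message and, recursively, all of its follow-ups."""
    inner = "".join(message_template(f) for f in msg.get("followups", []))
    return f"<message subject=\"{msg['subject']}\">{inner}</message>"

thread = {
    "subject": "DXL questions",
    "followups": [
        {"subject": "Re: DXL questions",
         "followups": [{"subject": "Re: Re: DXL questions"}]},
    ],
}
print(message_template(thread))
# <message subject="DXL questions"><message subject="Re: DXL questions">...
```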

Journal Article
TL;DR: In this article, the authors propose Allocational Temporal Logic (ATL) as a formalism to express properties concerning the dynamic allocation (birth) and de-allocation (death) of entities in an object-based system.
Abstract: This paper proposes Allocational Temporal Logic (ATL) as a formalism to express properties concerning the dynamic allocation (birth) and de-allocation (death) of entities, such as the objects in an object-based system. The logic is interpreted on History-Dependent Automata, extended with a symbolic representation for certain cases of unbounded allocation. The paper also presents a simple imperative language with primitive statements for (de)allocation, with an operational semantics, to demonstrate the kind of behaviour that can be modelled. The main contribution of the paper is a tableau-based model checking algorithm for ATL, along the lines of Lichtenstein and Pnueli's algorithm for LTL.
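As a toy illustration of the kind of behaviour in question (this is only a trace over births and deaths of entities, not the paper's History-Dependent Automata or its tableau-based model checking algorithm):

```python
# States carry the set of currently allocated entities; transitions are
# labelled with births ("new") and deaths ("del").
def step(live, label):
    kind, entity = label
    if kind == "new":
        assert entity not in live, "birth of an already-live entity"
        return live | {entity}
    if kind == "del":
        assert entity in live, "death of a non-live entity"
        return live - {entity}
    raise ValueError(label)

trace = [("new", "o1"), ("new", "o2"), ("del", "o1"), ("new", "o1")]
live = frozenset()
for label in trace:
    live = step(live, label)
print(sorted(live))  # ['o1', 'o2'] -- the second 'o1' is a fresh entity that
                     # merely reuses the name after de-allocation
```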

Journal Article
TL;DR: In this article, a number of agency requirements on the software, which are identified and compared with functional requirements, are discussed and further research issues are identified, and results in the area of multi-agent systems may be applicable in the design of information systems for which agency requirements hold.
Abstract: In digital marketplaces, companies are present in the form of their software, which engages in business interactions with other companies. Each organisation that is active in the marketplace is trying to reach its own business goals, which may be in conflict with the goals of other organisations. The software by which an organisation is present in a digital marketplace must act on behalf of this organisation to reach these goals. Thus, there is a relation of agency between the software and the organisation that the software represents. This relation gives rise to a number of agency requirements on the software, which are identified and compared with functional requirements in this report. Results in the area of Multi-Agent Systems may be applicable in the design of information systems for which agency requirements hold. A number of such results are discussed, and further research issues are identified.

Journal Article
TL;DR: A home network is proposed which integrates both real-time and non-real-time capabilities in one coherent, distributed architecture and supports inexpensive, small appliances as well as more expensive, large appliances.
Abstract: This paper proposes a home network which integrates both real-time and non-real-time capabilities in one coherent, distributed architecture. Such a network is not yet available. Our network will support inexpensive, small appliances as well as more expensive, large appliances. The network is based on a new type of real-time token protocol that uses scheduling to achieve optimal token-routing through the network. Depending on the scheduling algorithm, bandwidth utilisations of 100 percent are possible. Token management, to prevent token-loss or multiple tokens, is essential to support a dynamic, plug-and-play configuration. Small appliances, like sensors, would contain low-cost, embedded processors with limited computing power, which can handle lightweight network protocols. All other operations can be delegated to other appliances that have sufficient resources. This provides a basis for transparency, as it separates the controlling and the controlled object; our network will support this. We will show the proposed architecture of such a network and present experiences with, and preliminary research on, our design.
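A minimal sketch of schedule-driven token routing, assuming an earliest-deadline-first policy (the paper leaves the scheduling algorithm open; the node names and slot model below are invented for illustration):

```python
# Pass the single token to the pending request with the earliest deadline.
import heapq

def token_schedule(requests, slots):
    """requests: list of (deadline, node, slots_needed); returns token order."""
    heap = list(requests)
    heapq.heapify(heap)        # ordered by deadline
    order = []
    while heap and len(order) < slots:
        deadline, node, need = heapq.heappop(heap)
        order.append(node)     # node holds the token for one slot
        if need > 1:           # re-queue until its demand is fully served
            heapq.heappush(heap, (deadline, node, need - 1))
    return order

print(token_schedule([(3, "tv", 2), (1, "sensor", 1), (2, "audio", 1)], slots=4))
# ['sensor', 'audio', 'tv', 'tv'] -- every slot carries data, which is why
# utilisation can approach 100 percent while there are pending requests.
```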

Journal Article
TL;DR: In this paper, the TeleTOP method and implementation model are described, and the similarities and differences between the implementation, the adaptation, and the use of the TeleTOP course support environment in the Faculty of Educational Science and Technology and the Faculty of Telematics (computer science and telecommunications networks) are analyzed.
Abstract: At the Faculty of Educational Science and Technology, University of Twente, in The Netherlands, the entire faculty is involved not only in the use of a new WWW-based course-management system (called TeleTOP) but more fundamentally in a new educational approach. In addition we are working with other faculties to support the same progression. How are we doing this? In this article, the TeleTOP Method and implementation model (http://teletop.edte.utwente.nl) are described, and the similarities and differences between the implementation, the adaptation, and the use of the TeleTOP course support environment in the Faculty of Educational Science and the Faculty of Telematics (computer science and networks) are analyzed.

Journal Article
TL;DR: This paper presents a mathematically derived model to predict the quality implications of neglecting information before query execution and constructs a model that can be trained on other document collections for which the necessary quality information is available, or can be obtained quite easily.
Abstract: Efficient, flexible, and scalable integration of full text information retrieval (IR) in a DBMS is not a trivial case. This holds in particular for query optimization in such a context. To facilitate the bulk-oriented behavior of database query processing, a priori knowledge of how to limit the data efficiently prior to query evaluation is very valuable at optimization time. The usually imprecise nature of IR querying provides an extra opportunity to limit the data by a trade-off with the quality of the answer. In this paper we present a mathematically derived model to predict the quality implications of neglecting information before query execution. In particular we investigate the possibility to predict the retrieval quality for a document collection for which no training information is available, which is usually the case in practice. Instead, we construct a model that can be trained on other document collections for which the necessary quality information is available, or can be obtained quite easily. We validate our model for several document collections and present the experimental results. These results show that our model performs quite well, even for the case where we did not train it on the test collection itself.

Journal Article
TL;DR: The results show that mind mapping and the creation of hyperscapes are very effective means to give learners control over multimedia materials and improve motivation.
Abstract: Mainstream research on educational technology has focused on the discovery of more effective ways for conveying “relevant” information to students. A problem that we identified is that students often do not engage with the subject matter, especially when dealing with complex domain representations, even when high quality hypermedia resources are available. With the widespread availability of multimedia content and the emergence of massive information resources, there is a need for powerful and effective learning tools that can handle all kinds of media configurations. Our results show that mind mapping and the creation of hyperscapes are very effective means to give learners control over multimedia materials and improve motivation.

Journal Article
TL;DR: In this paper, a web-based approach for modelling and searching large collections of documents, based on a conceptual schema, is presented. But the main focus in this paper is the evaluation of a retrieval performance experiment, carried out to examine the advances of the web-space search engine, compared to a standard search engine using a widely accepted IR model.
Abstract: Finding relevant information using search engines that index large portions of the World-Wide Web is often a frustrating task. Due to the diversity of the information available, those search engines will have to rely on techniques developed in the field of information retrieval (IR). When focusing on more limited domains of the Internet, large collections of documents can be found that have a highly structured and multimedia character. Furthermore, it can be assumed that the content is more related. This allows more precise and advanced query formulation techniques to be used for the Web, as commonly used within a database environment. The Webspace Method focuses on such document collections, and offers an approach for modelling and searching large collections of documents, based on a conceptual schema. The main focus in this article is the evaluation of a retrieval performance experiment, carried out to examine the advances of the webspace search engine compared to a standard search engine using a widely accepted IR model. A major improvement in retrieval performance, measured in terms of recall and precision, of up to a factor of two can be achieved when searching document collections using the Webspace Method.

Journal Article
TL;DR: In this article, the design of a remote look-up service for MIB item definitions is discussed, which facilitates the retrieval of missing MIB module definitions, as well as definitions of individual MIB items.
Abstract: Despite some deficiencies, the Internet management framework is widely deployed and thousands of Management Information Base (MIB) modules have been defined thus far. These modules are used by implementors of agent software, as well as by managers and management applications, to understand the syntax and semantics of the management information that may be exchanged. At the manager’s side, MIB modules are usually stored in separate files, which are maintained by the human manager and read by the management application. Since maintenance of this file repository can be cumbersome, management applications are often confronted with incomplete and outdated information. To solve this “meta-management” problem, this paper discusses the design of a remote look-up service for MIB item definitions. Such a service facilitates the retrieval of missing MIB module definitions, as well as definitions of individual MIB items. Initially the service may be provided by a single server, but other servers can be added at later stages to improve performance and prevent copyright problems. It is envisaged that vendors of network equipment will also install servers, to distribute their vendor-specific MIBs. The paper describes how the service, which is provided on a best-effort basis, can be accessed by managers and management applications, and how servers internally inform each other about the MIB modules they support.
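A hypothetical client-side sketch of such a look-up (the URL scheme, server address, and query format below are invented for illustration; the paper defines its own access protocol): consult the local repository first, then fall back to remote servers in best-effort fashion.

```python
# Best-effort remote look-up of MIB module definitions with a local cache.
import urllib.request

LOCAL_CACHE = {}  # module name -> MIB module text
SERVERS = ["http://mibs.example.org/lookup"]  # placeholder server address

def lookup_mib(module_name):
    """Return the definition of a MIB module, fetching it if unknown."""
    if module_name in LOCAL_CACHE:
        return LOCAL_CACHE[module_name]
    for server in SERVERS:  # best effort: try servers in turn
        try:
            with urllib.request.urlopen(f"{server}?module={module_name}") as resp:
                text = resp.read().decode()
                LOCAL_CACHE[module_name] = text
                return text
        except OSError:
            continue        # server unreachable, try the next one
    raise KeyError(f"no definition found for {module_name}")
```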

Journal Article
TL;DR: This work uses the business and software architecture of TSI as a case study to validate the approach to ICT architecture and explains the techniques discussed in a textbook by Wieringa.
Abstract: We use the business and software architecture of Travel Service International (TSI) as a case study to validate our approach to ICT architecture. The techniques discussed are explained at length in a textbook by Wieringa.


Journal Article
TL;DR: The focus in this article is on the formulation of complex queries over a collection of related multimedia documents, also called a webspace; the Webspace Search Engine combines search by divergence and search by convergence to formulate the query, using a graphical representation of the webspace schema.
Abstract: To find information on the World-Wide Web (WWW), two approaches are generally followed. Browsing the web from a specific starting point, or web-site map, is called search by divergence. The second approach, search by convergence, is followed when using a search engine. Most search engines use an information retrieval strategy, which requires that the user supplies some keywords to find the relevant information. Due to the diversity and unstructuredness of the WWW, both approaches offer only limited query formulation techniques to find the relevant information. When focusing on smaller domains of the Internet, still large collections of documents have to be dealt with, which are presented on a single web-site or Intranet. There the content is more related and structured, which allows us to apply database techniques to the web. The Webspace Method aims at using DB techniques to model and query such document collections. A semantical level of abstraction is obtained by describing the content of the documents with some high-level concepts, defined in an object-oriented schema. This allows us to bring the power of query formulation as known within a database environment to the web. At the same time, we focus on the integration with Information Retrieval, which allows us to formulate complex content-based queries over a collection of web-based documents containing various types of multimedia. After an introduction to the Webspace Method, the focus in this article will be on the formulation of complex queries over a collection of related multimedia documents, also called a webspace. For that purpose the Webspace Search Engine is built, which combines search by both divergence and convergence to formulate the query, using a graphical representation of the webspace schema. Under the hood, the Webspace Search Engine uses the Data eXchange Language (DXL) to gather the requested information. We will explain DXL's framework for data exchange, and discuss how it is integrated into the Webspace Search Engine. Furthermore, we will show by some examples how, with the help of DXL, specific parts of documents can be retrieved and integrated into the result of the query, based on the concepts defined in the webspace schema. This is in contrast to the average search engine, which just delivers a document's URL.

Journal Article
TL;DR: The design for a new type of non-volatile mass storage memory, based on scanning probe techniques, that combines the low volume and power consumption of the FlashRAM, with the high capacity of the hard disk is discussed.
Abstract: The design for a new type of non-volatile mass storage memory is discussed. This new design, based on scanning probe techniques, combines the low volume and power consumption of the FlashRAM, with the high capacity of the hard disk. The small form factor of the device makes it an excellent candidate for mass storage in handheld embedded systems. Its hierarchical architecture allows us to make a trade-off between data-rate, access time and power consumption. The power consumption scales linearly with the desired data-rate, and is expected to be lower than what can be achieved with competing technologies.

Journal Article
TL;DR: A form of input and output for functional languages is proposed that is in a sense orthogonal to the actual computation: a value which is written out explicitly in the program text by way of typical example is replaced by a special constant that asks the user to type in parts of the value, as needed by the computation.
Abstract: We propose a form of input and output for functional languages that is in a sense orthogonal to the actual computation: certain input and output directives can be added to a completed, fully working program text, and they neither disturb referential transparency nor necessitate changing the types of the program text. The input and output directives change the order of evaluation as little as possible (lazy evaluation remains lazy), though there is sufficient control over the order in which the input and output actions occur to make it acceptable for the user. The basic idea is that a value which is written out explicitly in the program text by way of typical example is replaced by a special constant that asks the user to type in parts of the value, as needed by the computation. The mechanism seems suitable for a large class of so-called interactive programs.
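A rough imperative analogue of this demand-driven input (the paper targets lazy functional languages, so Python can only approximate the idea; the class name and prompt below are invented): parts of a value are requested from the user only at the moment the computation actually demands them.

```python
# A list-like value whose elements are typed in by the user on first demand,
# standing in for the paper's "special constant" replacing an example value.
class AskList:
    def __init__(self):
        self._known = {}

    def __getitem__(self, i):
        if i not in self._known:  # demand drives the input
            self._known[i] = input(f"value for element {i}? ")
        return self._known[i]

xs = AskList()       # replaces an example value written in the program text
# Only the elements the computation touches are ever requested:
print(xs[2], xs[0])  # prompts for element 2, then element 0 -- nothing else
```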