
Showing papers in "CTIT technical report series in 2010"


Journal Article
TL;DR: In this paper, a new family of lightweight block ciphers named KLEIN, which is designed for resource-constrained devices such as wireless sensors and RFID tags, is presented.
Abstract: Resource-efficient cryptographic primitives become fundamental for realizing both security and efficiency in embedded systems like RFID tags and sensor nodes. Among those primitives, the lightweight block cipher plays a major role as a building block for security protocols. In this paper, we describe a new family of lightweight block ciphers named KLEIN, which is designed for resource-constrained devices such as wireless sensors and RFID tags. Compared to related proposals, KLEIN has an advantage in software performance on legacy sensor platforms, while its hardware implementation can be compact as well.

291 citations


Journal Article
TL;DR: In this paper, the authors present case studies that describe how the graph transformation tool GROOVE has been used to model problems from a wide variety of domains, highlighting the wide applicability of GROOVE in particular, and of graph transformation in general.
Abstract: In this paper we present case studies that describe how the graph transformation tool GROOVE has been used to model problems from a wide variety of domains. These case studies highlight the wide applicability of GROOVE in particular, and of graph transformation in general. They also give concrete templates for using GROOVE in practice. Furthermore, we use the case studies to analyse the main strong and weak points of GROOVE.

119 citations


Journal Article
TL;DR: In this article, the authors give an overview of the computational complexity of many variants of the MPE-problem, including enumeration variants, parameterized problems, and approximation strategies, with and without additional (neither observed nor explained) variables.
Abstract: One of the key computational problems in Bayesian networks is computing the maximal posterior probability of a set of variables in the network, given an observation of the values of another set of variables. In its most simple form, this problem is known as the MPE-problem. In this paper, we give an overview of the computational complexity of many problem variants, including enumeration variants, parameterized problems, and approximation strategies to the MPE-problem with and without additional (neither observed nor explained) variables. Many of these complexity results appear elsewhere in the literature; other results have not been published yet. The paper aims to provide a fairly exhaustive overview of both the known and new results.
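
To make the MPE-problem concrete, here is a minimal brute-force sketch in Python; the two-variable network, its variable names, and its probabilities are invented for illustration and are not taken from the report.

```python
from itertools import product

# Toy two-variable network (structure and probabilities invented): Rain -> WetGrass.
P_rain = {True: 0.2, False: 0.8}
P_wet_given_rain = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def joint(rain, wet):
    return P_rain[rain] * P_wet_given_rain[rain][wet]

def mpe(evidence):
    """Brute-force MPE: maximise the joint probability over all assignments
    that are consistent with the observed evidence."""
    best, best_p = None, -1.0
    for rain, wet in product([True, False], repeat=2):
        assignment = {"Rain": rain, "WetGrass": wet}
        if any(assignment[v] != val for v, val in evidence.items()):
            continue
        p = joint(rain, wet)
        if p > best_p:
            best, best_p = assignment, p
    return best, best_p

print(mpe({"WetGrass": True}))   # ({'Rain': True, 'WetGrass': True}, 0.18)
```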

70 citations


Journal Article
TL;DR: A semi-automated approach of log processing is proposed to systematically identify potential process-related threats in SCADA, which is effective in detecting anomalous events that might alter the regular process workflow.
Abstract: SCADA (Supervisory Control and Data Acquisition) systems are used for controlling and monitoring industrial processes. We propose a methodology to systematically identify potential process-related threats in SCADA. Process-related threats take place when an attacker gains user access rights and performs actions which look legitimate but which are intended to disrupt the SCADA process. To detect such threats, we propose a semi-automated approach of log processing. We conduct experiments on a real-life water treatment facility. A preliminary case study suggests that our approach is effective in detecting anomalous events that might alter the regular process workflow.
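
The abstract does not spell out the detection step; purely as an illustration of the general idea (not the authors' method), a profile of (user, action) frequencies learned from historical logs can be used to flag rare or unseen events:

```python
from collections import Counter

def train_profile(log_events):
    """Count how often each (user, action) pair occurs in historical logs."""
    return Counter((e["user"], e["action"]) for e in log_events)

def flag_anomalies(profile, new_events, min_support=5):
    """Flag events whose (user, action) pair was rarely or never seen before."""
    return [e for e in new_events
            if profile[(e["user"], e["action"])] < min_support]

history = [{"user": "operator1", "action": "open_valve"}] * 100
new = [{"user": "operator1", "action": "open_valve"},
       {"user": "operator1", "action": "change_setpoint"}]
print(flag_anomalies(train_profile(history), new))   # flags the rare action
```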

61 citations


Journal Article
TL;DR: In this article, the authors investigate the relation between explanation and trust in the context of computing science and apply the conceptual framework to both AI and information security, and show the benefit of the framework for both fields by means of examples.
Abstract: There is a common problem in artificial intelligence (AI) and information security. In AI, an expert system needs to be able to justify and explain a decision to the user. In information security, experts need to be able to explain to the public why a system is secure. In both cases, the goal of explanation is to acquire or maintain the users' trust. In this paper, we investigate the relation between explanation and trust in the context of computing science. This analysis draws on literature study and concept analysis, using elements from system theory as well as actor-network theory. We apply the conceptual framework to both AI and information security, and show the benefit of the framework for both fields by means of examples. The main focus is on expert systems (AI) and electronic voting systems (security). Finally, we discuss consequences of our analysis for ethics in terms of (un)informed consent and dissent, and the associated division of responsibilities.

59 citations


Journal Article
TL;DR: In this paper, a new HVE scheme based on bilinear groups of prime order was proposed, which supports vectors over any alphabet and can hide both the plaintext and public key used for encryption.
Abstract: A hidden vector encryption scheme (HVE) is a derivation of identity-based encryption, where the public key is actually a vector over a certain alphabet. The decryption key is also derived from such a vector, but this one is also allowed to have ``$\star$'' (or wildcard) entries. Decryption is possible as long as these tuples agree on every position except where a ``$\star$'' occurs. These schemes are useful for a variety of applications: they can be used as a building block to construct attribute-based encryption schemes and sophisticated predicate encryption schemes (e.g. for range or subset queries). Another interesting application -- and our main motivation -- is to create searchable encryption schemes that support queries for keywords containing wildcards. Here we construct a new HVE scheme, based on bilinear groups of prime order, which supports vectors over any alphabet. The resulting ciphertext length is equal to or shorter than that of existing schemes, depending on a trade-off. The length of the decryption key and the computational complexity of decryption are both constant, unlike existing schemes where both depend on the number of non-wildcard symbols associated with the decryption key. Our construction hides both the plaintext and the public key used for encryption. We prove security in a selective model, under the decision linear assumption.
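
The cryptographic construction itself cannot be captured in a few lines, but the matching predicate that decides when decryption is possible can; the following sketch (wildcard symbol and example strings chosen arbitrarily) states it directly:

```python
STAR = "*"   # stands in for the wildcard symbol written as a star in the paper

def matches(key_vector, ct_vector):
    """Decryption succeeds iff the vectors agree on every non-wildcard position."""
    if len(key_vector) != len(ct_vector):
        return False
    return all(k == STAR or k == c for k, c in zip(key_vector, ct_vector))

# Keyword search with wildcards: key "s*cret" matches ciphertext attribute "secret".
print(matches(list("s*cret"), list("secret")))   # True
print(matches(list("s*cret"), list("sacred")))   # False
```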

56 citations


Journal Article
TL;DR: This report describes efforts to distribute a portion of the computer networks research community's network data through the Simpleweb/University of Twente Traffic Traces Data Repository.
Abstract: The computer networks research community lacks shared measurement data. As a consequence, most researchers need to spend a considerable part of their time planning and executing measurements before being able to perform their studies. The lack of shared data also makes it hard to compare and validate results. This report describes our efforts to distribute a portion of our network data through the Simpleweb/University of Twente Traffic Traces Data Repository.

40 citations


Journal Article
TL;DR: This paper looks at OSNs, their associated privacy risks, and existing research into solutions, and helps identify the research directions for the Kindred Spirits project.
Abstract: In recent years, Online Social Networks (OSNs) have become an important part of daily life for many. Users build explicit networks to represent their social relationships, either existing or new. Users also often upload and share a plethora of information related to their personal lives. The potential privacy risks of such behavior are often underestimated or ignored. For example, users often disclose personal information to a larger audience than intended. Users may even post information about others without their consent. A lack of experience and awareness in users, as well as proper tools and design of the OSNs, perpetuate the situation. This paper aims to provide insight into such privacy issues and looks at OSNs, their associated privacy risks, and existing research into solutions. The final goal is to help identify the research directions for the Kindred Spirits project.

35 citations


Journal Article
TL;DR: In this paper, the authors propose to use MapReduce to quickly test new retrieval approaches on a cluster of machines by sequentially scanning all documents and show that sequential scanning is a viable approach to running large-scale information retrieval experiments with little effort.
Abstract: We propose to use MapReduce to quickly test new retrieval approaches on a cluster of machines by sequentially scanning all documents. We present a small case study in which we use a cluster of 15 low cost machines to search a web crawl of 0.5 billion pages showing that sequential scanning is a viable approach to running large-scale information retrieval experiments with little effort. The code is available to other researchers at: http://sourceforge.net/projects/mirex/
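
As a rough illustration of the sequential-scanning idea (not the actual MIREX code, which is available at the link above), the map step scans a document and emits a score for the query, and the reduce step keeps the best-scoring documents:

```python
def map_doc(doc_id, text, query_terms):
    """Map: sequentially scan one document and emit a simple term-count score."""
    tokens = text.lower().split()
    score = sum(tokens.count(t) for t in query_terms)
    if score > 0:
        yield ("results", (score, doc_id))

def reduce_results(_key, scored_docs, k=10):
    """Reduce: keep the k highest-scoring documents."""
    return sorted(scored_docs, reverse=True)[:k]

docs = {"d1": "map reduce makes large scale retrieval experiments easy",
        "d2": "sequential scanning of a web crawl"}
query = ["retrieval", "scanning"]
emitted = [kv for d, t in docs.items() for kv in map_doc(d, t, query)]
print(reduce_results("results", [v for _, v in emitted]))
```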

28 citations


Journal Article
TL;DR: Cyber-crime science is an emerging area of study aiming to prevent cyber-crime by combining security protection techniques from Information Security with empirical research methods used in Crime Science as discussed by the authors.
Abstract: Cyber-crime Science is an emerging area of study aiming to prevent cyber-crime by combining security protection techniques from Information Security with empirical research methods used in Crime Science. Information security research has developed techniques for protecting the confidentiality, integrity, and availability of information assets but is less strong on the empirical study of the effectiveness of these techniques. Crime Science studies the effect of crime prevention techniques empirically in the real world, and proposes improvements to these techniques based on this. Combining both approaches, Cyber-crime Science transfers and further develops Information Security techniques to prevent cyber-crime, and empirically studies the effectiveness of these techniques in the real world. In this paper we review the main contributions of Crime Science as of today, illustrate its application to a typical Information Security problem, namely phishing, explore the interdisciplinary structure of Cyber-crime Science, and present an agenda for research in Cyber-crime Science in the form of a set of suggested research questions.

27 citations


Journal Article
TL;DR: In this paper, an architecture-based method for confidentiality risk assessment in IT outsourcing is adapted to the specification of confidentiality requirements in SLAs, and a case study is presented to evaluate the new method.
Abstract: Today, companies are required to be in control of their IT assets, and to provide proof of this in the form of independent IT audit reports. However, many companies have outsourced various parts of their IT systems to other companies, which potentially threatens the control they have of their IT assets. To provide proof of being in control of outsourced IT systems, the outsourcing client and outsourcing provider need a written service level agreement (SLA) that can be audited by an independent party. SLAs for availability and response time are common practice in business, but so far there is no practical method for specifying confidentiality requirements in an SLA. Specifying confidentiality requirements is hard because in contrast to availability and response time, confidentiality incidents cannot be monitored: attackers who breach confidentiality try to do this unobserved by both client and provider. In addition, providers usually do not want to reveal their own infrastructure to the client for monitoring or risk assessment. Elsewhere, we have presented an architecture-based method for confidentiality risk assessment in IT outsourcing. In this paper, we adapt this method to confidentiality requirements specification, and present a case study to evaluate this new method.

Journal Article
TL;DR: In this article, the authors present an analysis of privacy issues in the context of DDoS attacks and raise awareness on the risks of taking part in them, even though the group behind the attacks claims to be anonymous, the tools they provide do not offer any security services such as anonymization.
Abstract: On November 28, 2010, the world started watching the whistleblower website WikiLeaks as it began publishing part of the 250,000 US Embassy diplomatic cables. These confidential cables provide insight into U.S. international affairs from 274 different embassies, covering topics such as analyses of host countries and leaders and even requests to spy on United Nations leaders. The release of these cables has caused reactions not only in the real world, but also on the Internet. In fact, a cyberwar started just before the initial release: WikiLeaks reported that its servers were experiencing distributed denial-of-service (DDoS) attacks. A DDoS attack consists of many computers trying to overload a server by firing a high number of requests, ultimately leading to service disruption. In this case, the goal was to prevent the release of the embassy cables. After the initial cable release, several companies severed ties with WikiLeaks. One of the first was Amazon.com, which removed the WikiLeaks website from its servers. Next, EveryDNS, the company with which the domain wikileaks.org was registered, dropped the domain entries from its servers. On December 4th, PayPal cancelled the account that WikiLeaks was using to receive online donations. On the 6th, the Swiss bank PostFinance froze the WikiLeaks assets and MasterCard stopped processing payments to the WikiLeaks account. Visa followed MasterCard on December 7th. These reactions caused a group of Internet activists (or "hacktivists") named Anonymous to start a retaliation against PostFinance, PayPal, MasterCard, Visa, Moneybookers.com and Amazon.com, named "Operation Payback". The retaliation was performed as DDoS attacks on the websites of those companies, disrupting their activities (except in the case of Amazon.com) for different periods of time. The Anonymous group consists of volunteers who use a stress-testing tool to perform the attacks. This tool, named LOIC (Low Orbit Ion Cannon), can be found both as a desktop application and as a web page. Even though the group behind the attacks claims to be anonymous, the tools they provide do not offer any security services, such as anonymization. As a consequence, a hacktivist who volunteers to take part in such attacks can easily be traced. This is the case for both current versions of the LOIC tool. Therefore, the goal of this report is to present an analysis of privacy issues in the context of these attacks, and to raise awareness of the risks of taking part in them.

Journal Article
TL;DR: An energy meter is used to measure the differences in PC power consumption while browsing the web normally, and while browsing with ads being blocked, and shows that, on average, the additional energy consumption to display web advertisements is 2.5W.
Abstract: Advertising is an important source of income for many websites. To get the attention of the unsuspecting (and probably uninterested) visitors, web advertisements (ads) tend to use elaborate animations and graphics. Depending on the specific technology being used, displaying such ads on the visitor's screen may require a vast amount of CPU power. Since present-day desktop CPUs can easily use 100W or more, ads may consume a substantial amount of energy. Although it is important for environmental reasons to reduce energy consumption, increasing the number of ads seems to be counterproductive. The goal of this paper is to investigate the power consumption of web advertisements. For this purpose we used an energy meter to measure the differences in PC power consumption while browsing the web normally (thus with ads enabled), and while browsing with ads being blocked. To simulate normal web browsing, we created a browser-based tool called AutoBrowse, which periodically opens a URL from a predefined list. To block advertisements, we used the Adblock Plus extension for Mozilla Firefox. To measure power consumption with other browsers as well, we additionally used the Apache HTTP server and its mod_proxy module to act as an ad-blocking proxy server. The measurements on several PCs and browsers show that, on average, the additional energy consumption to display web advertisements is 2.5W. To put this number into perspective, we calculated that the total amount of energy used to display web advertisements is equivalent to the yearly electricity consumption of nearly 2000 households in the Netherlands. It takes 3.6 "average" wind turbines to generate this amount of energy.
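
The structure of such a back-of-envelope conversion is sketched below; apart from the measured 2.5W, every parameter is a hypothetical placeholder, not a value from the paper:

```python
# Back-of-envelope structure of the calculation. All parameters except the
# measured 2.5W are hypothetical placeholders, NOT the figures used in the paper.
extra_watts        = 2.5          # measured average extra power for ads (from the paper)
browsing_hours_day = 2.0          # hypothetical browsing time per user per day
users              = 10_000_000   # hypothetical number of users
household_kwh_year = 3500.0       # hypothetical yearly consumption of one household

kwh_year = extra_watts * browsing_hours_day * 365 * users / 1000.0
print(f"{kwh_year:,.0f} kWh/year, roughly "
      f"{kwh_year / household_kwh_year:,.0f} households")
```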

Journal Article
TL;DR: A review of the existing literature in this domain and a discussion of various aspects of and requirements for forensic face recognition systems, particularly focusing on the Bayesian framework, is presented in this paper; several issues related to court admissibility and system reliability are also discussed.
Abstract: Besides a few papers which focus on the forensic aspects of automatic face recognition, not much has been published about it, in contrast to the literature on developing new techniques and methodologies for biometric face recognition. In this report, we review forensic facial identification, which is the forensic experts' way of manual facial comparison. Then we review well-known works in the domain of forensic face recognition. Some of these papers describe general trends in forensics [1] and guidelines for manual forensic facial comparison and the training of face examiners who will be required to verify the outcome of an automatic forensic face recognition system [2]. Others propose a theoretical framework for the application of face recognition technology in forensics [3] and automatic forensic facial comparison [4, 5]. The Bayesian framework is discussed in detail, and it is elaborated how it can be adapted to forensic face recognition. Several issues related to court admissibility and the reliability of such systems are also discussed. To date, there is no operational system available which automatically compares the image of a suspect with a mugshot database and provides results usable in court. Biometric face recognition can in most cases be used for forensic purposes, but the issues related to integrating the technology with the legal system remain to be solved. There is a great need for multi-disciplinary research that integrates face recognition technology with existing legal systems. In this report we present a review of the existing literature in this domain and discuss various aspects of and requirements for forensic face recognition systems, particularly focusing on the Bayesian framework.
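
The Bayesian framework referred to here is the standard likelihood-ratio framework for evaluating forensic evidence; in textbook form (this formulation is general, not specific to the report):

```latex
\underbrace{\frac{P(H_p \mid E)}{P(H_d \mid E)}}_{\text{posterior odds}}
  \;=\;
\underbrace{\frac{P(E \mid H_p)}{P(E \mid H_d)}}_{\text{likelihood ratio (LR)}}
  \;\times\;
\underbrace{\frac{P(H_p)}{P(H_d)}}_{\text{prior odds}}
```

Here $H_p$ (same person) and $H_d$ (different persons) are the competing hypotheses and $E$ is the evidence, e.g. a face-comparison score; the forensic expert reports the likelihood ratio, while the prior and posterior odds are left to the court.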

Journal Article
TL;DR: In this paper, the authors formally define two properties of client puzzle schemes, deterministic computation and parallel computation resistance, prove that the repeated-squaring RSW scheme achieves both, and introduce batch verification modes to improve the server's verification efficiency.
Abstract: A (computational) client puzzle scheme enables a client to prove to a server that a certain amount of computing resources (CPU cycles and/or Memory look-ups) has been dedicated to solve a puzzle. Researchers have identified a number of potential applications, such as constructing timed cryptography, fighting junk emails, and protecting critical infrastructure from DoS attacks. In this paper, we first revisit this concept and formally define two properties, namely deterministic computation and parallel computation resistance. Our analysis show that both properties are crucial for the effectiveness of client puzzle schemes in most application scenarios. We prove that the RSW client puzzle scheme, which is based on the repeated squaring technique, achieves both properties. Secondly, we introduce two batch verification modes for the RSW client puzzle scheme in order to improve the verification efficiency of the server, and investigate three methods for handling errors in batch verifications. Lastly, we show that client puzzle schemes can be integrated with reputation systems to further improve the effectiveness in practice.
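
A minimal sketch of the repeated-squaring (RSW) puzzle mentioned above, assuming the standard time-lock construction; the toy primes and parameters are for illustration only:

```python
import secrets
from math import gcd

def create_puzzle(p, q, t):
    """Server side: knowledge of phi(n) gives a fast shortcut to the expected
    answer, so verification is cheap and the answer is unique."""
    n, phi = p * q, (p - 1) * (q - 1)
    while True:
        x = secrets.randbelow(n - 2) + 2
        if gcd(x, n) == 1:
            break
    expected = pow(x, pow(2, t, phi), n)   # x^(2^t) mod n, via 2^t mod phi(n)
    return (n, x, t), expected

def solve_puzzle(puzzle):
    """Client side: t sequential modular squarings (believed hard to parallelise)."""
    n, x, t = puzzle
    y = x % n
    for _ in range(t):
        y = pow(y, 2, n)
    return y

# Toy primes for illustration only; a real deployment would use a large RSA modulus.
puzzle, expected = create_puzzle(1009, 1013, t=10_000)
assert solve_puzzle(puzzle) == expected
```

The answer is unique (deterministic computation), and the t squarings are inherently sequential, which is what parallel computation resistance refers to.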

Journal Article
TL;DR: In this article, a formal workflow model is proposed with data driven coordination and explicating properties of the continuous data processing, which can be used to optimize data workflows, i.e., reducing the computational power for processing the workflows in an engine by reusing intermediate processing results in several workflows.
Abstract: Online data or streaming data are getting more and more important for enterprise information systems, e.g. by integrating sensor data and workflows. The continuous flow of data provided e.g. by sensors requires new workflow models addressing the data perspective of these applications, since continuous data is potentially infinite while business process instances are always finite. In this paper a formal workflow model is proposed with data-driven coordination and explicating properties of the continuous data processing. These properties can be used to optimize data workflows, i.e., reducing the computational power for processing the workflows in an engine by reusing intermediate processing results in several workflows.

Journal Article
TL;DR: This contribution gives a high-level overview of the issues that the emergence of cloud computing as a paradigm raises, both from a computer science and a philosophical perspective.
Abstract: Over the last years, something called "cloud computing" has become a major theme in computer science and information security. Essentially, it concerns delivering information technology as a service, by enabling the renting of software, computing power and storage. In this contribution, we give a high-level overview of the issues that the emergence of cloud computing as a paradigm raises, both from a computer science and a philosophical perspective. We discuss 1) the ideal and limitations of encrypted data processing, 2) the necessity of simulating physical constraints in virtualised infrastructures, 3) the personal equivalent of cloud computing in the form of outsourced identity, and 4) the possibilities for connecting policy and technical level issues by means of a new ethical approach, called informational precaution.

Journal Article
TL;DR: In this paper, the authors try to uncover methods and techniques that can be used to automatically improve search results on queries formulated by children, and a prototype of a query expander is built that implements several of these techniques.
Abstract: The number of children that have access to an Internet connection (at home or at school) is large and growing fast. Many of these children search the web by using a search engine. These search engines do not consider their skills and preferences however, which makes searching difficult. This paper tries to uncover methods and techniques that can be used to automatically improve search results on queries formulated by children. In order to achieve this, a prototype of a query expander is built that implements several of these techniques. The paper concludes with an evaluation of the prototype and a discussion of the promising results.
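
As a purely illustrative sketch of what a query expander can do (the synonym list is invented and the techniques evaluated in the paper are not reproduced here):

```python
# Illustrative only: a tiny rule-based query expander with an invented synonym list.
SYNONYMS = {"dino": ["dinosaur"], "kitty": ["cat", "kitten"]}

def expand_query(query):
    """Append known synonyms of each query term to the original query."""
    terms = query.lower().split()
    expanded = list(terms)
    for t in terms:
        expanded += [s for s in SYNONYMS.get(t, []) if s not in expanded]
    return " ".join(expanded)

print(expand_query("kitty pictures"))   # "kitty pictures cat kitten"
```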

Journal Article
TL;DR: This document tries to give concise, (semi)formal specifications for the second generation electronic passports as used by most EU countries, and for the closely related ISO18013 standard for electronic driving licenses, as a follow-up to making open source Java Card implementations of these standards.
Abstract: This document tries to give concise, (semi)formal specifications for the second generation electronic passports as used by most EU countries, and for the closely related ISO18013 standard for electronic driving licenses. We developed these specifications as a follow-up to making open source Java Card implementations of these standards. Our aim is to provide useful information – implicit in the official specification, but crucial for the overall security – in a simple format that could be useful to anyone implementing these standards, performing security tests, or doing code reviews. More generally, we want to explore useful formats for rigorously specifying the typical complex combinations of security protocols that arise in real applications. In particular, we provide state diagrams which describe the states that are largely implicit in the official specifications, but which have to be explicit in any implementation, and which also provide a basis for systematic model-based testing.

Journal Article
TL;DR: In this paper, the authors analyze the logs from laptop theft in two universities and complement the results with penetration tests, showing that surveillance cameras and access control have a limited role in the security of the organization and that the level of security awareness of the employees plays the biggest role in stopping theft.
Abstract: Organizations rely on physical, technical and procedural mechanisms to protect their physical assets. Of all physical assets, laptops are probably the most troublesome to protect, since laptops are easy to remove and conceal. Organizations open to the public, such as hospitals and universities, are easy targets for laptop thieves, since every day hundreds of people not employed by the organization wander around the premises. The problem security professionals face is how to protect the laptops in such open organizations. In this study, we look at the effectiveness of the security mechanisms against laptop theft in two universities. We analyze the logs from laptop thefts in both universities and complement the results with penetration tests. The results from the study show that surveillance cameras and access control have a limited role in the security of the organization and that the level of security awareness of the employees plays the biggest role in stopping theft. The results of this study are intended to aid security professionals in the prioritization of security mechanisms.

Journal Article
TL;DR: Risk-Based Requirements Elicitation and Prioritization (RiskREP), a method for managing IT security risks by combining the results of a top-down requirements analysis with a bottom-up threat analysis, is presented.
Abstract: Today, companies are required to be in control of the security of their IT assets. This is especially challenging in the presence of limited budgets and conflicting requirements. Here, we present Risk-Based Requirements Elicitation and Prioritization (RiskREP), a method for managing IT security risks by combining the results of a top-down requirements analysis with a bottom-up threat analysis. Top-down, it prioritizes security goals and from there derives verifiable requirements. Bottom-up, it analyzes architectures in order to identify security risks in the form of critical components. Linking these critical components to security requirements helps to analyze the effects of these requirements on business goals, and to prioritize security requirements. The security requirements also are the basis for deriving test cases for security analysis and compliance monitoring.

Journal Article
TL;DR: In this paper, the authors use the sociological framework of actor-network theory to model information security starting from group membership instead of containment, and provide algorithms for threat finding as well as examples.
Abstract: Traditional information security modelling approaches often focus on containment of assets within boundaries. Due to what is called de-perimeterisation, such boundaries, for example in the form of clearly separated company networks, disappear. This paper argues that in a de-perimeterised situation a focus on containment in security modelling is ineffective. Most importantly, the tree structure induced by the notion of containment is insufficient to model the interactions between digital, physical and social aspects of security. We use the sociological framework of actor-network theory to model information security starting from group membership instead of containment. The model is based on hypergraphs, and is also applicable to physical and social security measures. We provide algorithms for threat finding as well as examples.
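
To illustrate modelling by group membership rather than containment, the sketch below represents each group as a hyperedge (a set of members) and searches for a path of shared memberships; the groups and names are invented, and this is not the authors' threat-finding algorithm:

```python
from collections import deque

# Illustrative hypergraph: each hyperedge is a named group with its members
# (names and groups invented for the example, not taken from the paper).
groups = {
    "office_42":   {"alice", "visitor", "laptop"},
    "wifi_guests": {"visitor", "fileserver"},
    "admins":      {"alice", "fileserver"},
}

def reachable(start, target):
    """Breadth-first search over shared group membership."""
    seen, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == target:
            return True
        for members in groups.values():
            if node in members:
                for m in members - seen:
                    seen.add(m)
                    frontier.append(m)
    return False

print(reachable("visitor", "laptop"))   # True: visitor shares office_42 with the laptop
```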

Journal Article
TL;DR: This paper presents an indeterministic approach for handling uncertain decisions in a duplicate detection process by using a probabilistic target schema, where instead of deciding between multiple possible worlds, all these worlds can be modeled in the resulting data.
Abstract: In current research, duplicate detection is usually considered as a deterministic approach in which tuples are either declared as duplicates or not. However, most often it is not completely clear whether two tuples represent the same real-world entity or not. In deterministic approaches, however, this uncertainty is ignored, which in turn can lead to false decisions. In this paper, we present an indeterministic approach for handling uncertain decisions in a duplicate detection process by using a probabilistic target schema. Thus, instead of deciding between multiple possible worlds, all these worlds can be modeled in the resulting data. This approach minimizes the negative impacts of false decisions. Furthermore, the duplicate detection process becomes almost fully automatic and human effort can be reduced to a large extent. Unfortunately, a full-indeterministic approach is by definition too expensive (in time as well as in storage) and hence impractical. For that reason, we additionally introduce several semi-indeterministic methods for heuristically reducing the set of indeterministic handled decisions in a meaningful way.
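
A simplified reading of the indeterministic idea: instead of forcing a yes/no decision per candidate pair, both possible worlds are kept, weighted by the match probability (the paper's probabilistic target schema is considerably richer than this sketch):

```python
def possible_worlds(t1, t2, match_probability):
    """Return both possible worlds for one candidate pair instead of deciding."""
    merged = {**t1, **t2}   # naive merge, purely for illustration
    return [
        {"world": [merged],  "p": match_probability},         # tuples are duplicates
        {"world": [t1, t2],  "p": 1.0 - match_probability},   # tuples are distinct
    ]

for w in possible_worlds({"name": "J. Smith"}, {"name": "John Smith"}, 0.8):
    print(w)
```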

Journal Article
TL;DR: In this paper, it was shown that the performance guarantee of the WSEPT rule is not better than 1.229, which constitutes the first lower bound for WSEPT in this setting; in particular, stochastic scheduling has somewhat nastier worst-case examples than deterministic scheduling.
Abstract: We consider the problem to minimize the weighted sum of completion times in nonpreemptive parallel machine scheduling. In a landmark paper from 1986, Kawaguchi and Kyan [5] showed that scheduling the jobs according to the WSPT rule -also known as Smith's rule- has a performance guarantee of ${1\over 2}(1+\sqrt{2}) \approx 1.207$. They also gave an instance to show that this bound is tight. We consider the stochastic variant of this problem in which the processing times are exponentially distributed random variables. We show, somewhat counterintuitively, that the performance guarantee of the WSEPT rule, the stochastic analogue of WSPT, is not better than 1.229. This constitutes the first lower bound for WSEPT in this setting, and in particular, it shows that even with exponentially distributed processing times, stochastic scheduling has somewhat nastier worst-case examples than deterministic scheduling. In that respect, our analysis sheds new light on the fundamental differences between deterministic and stochastic scheduling.
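
For reference, the WSPT (Smith's) rule on parallel machines is simply list scheduling in non-decreasing order of $p_j/w_j$; a small sketch (job data invented for illustration):

```python
import heapq

def wspt_schedule(jobs, machines):
    """List-schedule jobs in WSPT order (non-decreasing p_j / w_j) on parallel
    machines and return the weighted sum of completion times."""
    jobs = sorted(jobs, key=lambda jw: jw[0] / jw[1])   # jobs given as (p_j, w_j)
    loads = [0.0] * machines                            # current finish time per machine
    heapq.heapify(loads)
    total = 0.0
    for p, w in jobs:
        start = heapq.heappop(loads)    # machine that becomes free earliest
        completion = start + p
        total += w * completion
        heapq.heappush(loads, completion)
    return total

# Two machines, jobs given as (processing time, weight).
print(wspt_schedule([(3, 3), (1, 1), (2, 2)], machines=2))
```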

Journal Article
TL;DR: In this paper, the authors investigate the structure and the order of the cyclic group generated by length-preserving string operations that permute symbol positions, and show how the resulting notion of X-primes relates to other X'-primes and to ordinary prime numbers.
Abstract: Some length-preserving operations on strings only permute the symbol positions in strings; such an operation $X$ gives rise to a family $\{X_n\}_{n\geq2}$ of similar permutations. We investigate the structure and the order of the cyclic group generated by $X_n$. We call an integer $n$ $X$-{\em prime} if $X_n$ consists of a single cycle of length $n$ ($n\geq2$). Then we show some properties of these $X$-primes, particularly, how $X$-primes are related to $X^\prime$-primes as well as to ordinary prime numbers. Here $X$ and $X^\prime$ range over well-known examples (reversal, cyclic shift, shuffle, twist) and some new ones based on the Archimedes spiral and on the Josephus problem.
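
The kind of computation involved can be illustrated with one concrete position-permuting operation, here a perfect out-shuffle; the paper's exact definitions of shuffle, twist, etc. may differ, so this is only a sketch of how the cycle structure and order of $X_n$ are determined:

```python
from math import gcd

def perfect_shuffle(n):
    """One example of a position-permuting operation: the perfect out-shuffle.
    slot i of the result holds the symbol from original position perm[i]."""
    half = (n + 1) // 2
    first, second = list(range(half)), list(range(half, n))
    perm = []
    for i in range(half):
        perm.append(first[i])
        if i < len(second):
            perm.append(second[i])
    return perm

def cycle_lengths(perm):
    """Cycle structure of the permutation i -> perm[i]."""
    seen, lengths = set(), []
    for i in range(len(perm)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j, length = perm[j], length + 1
        lengths.append(length)
    return lengths

def order(perm):
    result = 1
    for c in cycle_lengths(perm):
        result = result * c // gcd(result, c)   # lcm of the cycle lengths
    return result

for n in range(2, 11):
    cls = cycle_lengths(perfect_shuffle(n))
    # n would be "X-prime" for this operation iff the permutation is a single n-cycle.
    print(n, "order =", order(perfect_shuffle(n)), "single cycle:", len(cls) == 1)
```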

Journal Article
TL;DR: In this paper, the authors introduce the notion of Timed-Ephemerizer, which can be regarded as a hybrid primitive by combining Ephemerizer and Timed Release Encryption.
Abstract: The concept of Ephemerizer, proposed by Perlman, is a cryptographic primitive for assured data deletion. With an Ephemerizer protocol, data in persistent storage devices will always be encrypted simultaneously using an ephemeral public key of the Ephemerizer (an entity which will publish a set of ephemeral public keys and periodically delete the expired ones) and the long-term public key of a user. An Ephemerizer protocol enables the user to securely decrypt the encrypted data without leaking any information to the Ephemerizer. So far, no security model has ever been proposed for this primitive and existing protocols have not been studied formally. Not surprisingly, we show that some existing Ephemerizer protocols possess security vulnerabilities. In this paper, we introduce the notion of Timed-Ephemerizer, which can be regarded as a hybrid primitive by combining Ephemerizer and Timed-Release Encryption. Compared with an Ephemerizer protocol, a Timed-Ephemerizer protocol further guarantees that data will only be released after a pre-defined disclosure time. Moreover, we propose a security model for Timed-Ephemerizer and formalize relevant security properties. We also propose a new Timed-Ephemerizer protocol and prove its security in the security model.

Journal Article
Gijs Kant
TL;DR: In this paper, a preliminary comparison is presented for checking isomorphism of pairs of graphs used in graph-based model checking; an existing algorithm that does not compute a canonical form performs better on these graphs, while computing canonical forms seems to scale better for larger graphs.
Abstract: Graph isomorphism checking can be used in graph-based model checking to achieve symmetry reduction. Instead of one-to-one comparing the graph representations of states, canonical forms of state graphs can be computed. These canonical forms can be used to store and compare states. However, computing a canonical form for a graph is computationally expensive. Whether computing a canonical representation for states and reducing the state space is more efficient than using canonical hashcodes for states and comparing states one-to-one is not a priori clear. In this paper these approaches to isomorphism reduction are described and a preliminary comparison is presented for checking isomorphism of pairs of graphs. An existing algorithm that does not compute a canonical form performs better than tools that do for graphs that are used in graph-based model checking. Computing canonical forms seems to scale better for larger graphs.
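
The hashcode-plus-pairwise-check strategy described above can be sketched as follows; this uses networkx and a Weisfeiler-Lehman hash as the (non-canonical) hashcode, which is an illustrative stand-in for the tools compared in the paper:

```python
import networkx as nx
from collections import defaultdict

def bucket_by_hash(graphs):
    """Group state graphs by a (non-canonical) Weisfeiler-Lehman hash; only graphs
    in the same bucket still need a pairwise isomorphism check."""
    buckets = defaultdict(list)
    for g in graphs:
        buckets[nx.weisfeiler_lehman_graph_hash(g)].append(g)
    return buckets

def count_distinct(graphs):
    """Count isomorphism classes: hash first, then compare one-to-one per bucket."""
    distinct = 0
    for bucket in bucket_by_hash(graphs).values():
        representatives = []
        for g in bucket:
            if not any(nx.is_isomorphic(g, r) for r in representatives):
                representatives.append(g)
        distinct += len(representatives)
    return distinct

states = [nx.path_graph(4), nx.path_graph(4), nx.cycle_graph(4)]
print(count_distinct(states))   # 2
```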

Journal Article
TL;DR: A precise logical formalisation of the essentials of the Mifare Classic card, in the language of a theorem prover (PVS), covers the LFSR, the filter function and (parts of) the authentication protocol, thus serving as precise documentation of the card's ingredients and their properties.
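
For readers unfamiliar with the ingredients, the sketch below shows a generic filtered LFSR keystream generator; the register size, tap positions and filter function are placeholders, not the actual Crypto-1 parameters formalised in the paper:

```python
# Generic Fibonacci LFSR with a nonlinear output filter. Taps and filter are
# placeholders, NOT the Crypto-1 parameters of the Mifare Classic cipher.
def filter_output(state, filter_inputs):
    """Toy nonlinear filter: majority of three selected state bits."""
    a, b, c = (state[i] for i in filter_inputs)
    return (a & b) | (a & c) | (b & c)

def lfsr_stream(state_bits, taps, filter_inputs, nbits):
    """Yield nbits of keystream: output f(selected bits), then shift in the feedback."""
    state = list(state_bits)
    for _ in range(nbits):
        yield filter_output(state, filter_inputs)
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = state[1:] + [feedback]

bits = list(lfsr_stream([1, 0, 0, 1, 1, 0, 1, 0], taps=[0, 2, 3, 5],
                        filter_inputs=[1, 4, 7], nbits=8))
print(bits)
```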

Journal Article
TL;DR: In this paper, the authors propose a way to equip graph transformation with compositionality, reaping the same benefits from modularity as enjoyed by process algebra, using the existing concept of graph interface, and show under what circumstances rules can be decomposed into smaller subrules, each working on a subgraph of the complete, whole-world graph.
Abstract: Graph transformation works under a whole-world assumption. In modelling realistic systems, this typically makes for large graphs and sometimes also large, hard to understand rules. From process algebra, on the other hand, we know the principle of reactivity, meaning that the system being modelled is embedded in an environment with which it continually interacts. This has the advantage of allowing modular system specifications and correspondingly smaller descriptions of individual components. Reactivity can alternatively be understood as enabling compositionality: the specifications of components and subsystems are composed to obtain the complete model. In this work we show a way to equip graph transformation with compositionality, reaping the same benefits from modularity as enjoyed by process algebra. In particular, using the existing concept of graph interface, we show under what circumstances rules can be decomposed into smaller subrules, each working on a subgraph of the complete, whole-world graph, in such a way that the effect of the original rule is precisely captured by the synchronisation of subrules.

Journal Article
TL;DR: This paper argues for focusing on the essentials, i.e., the actual observations, described by location, time, owner, instrument, and measurement.
Abstract: Management of sensor data requires metadata to understand the semantics of observations. While e-science researchers have high demands on metadata, they are selective in entering metadata. The claim in this paper is that the focus should be on the essentials, i.e., the actual observations, described by location, time, owner, instrument, and measurement. The applicability of this approach is demonstrated in two very different case studies.
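
A minimal record type capturing the five essentials named above might look as follows (field types and the sample values are illustrative, not taken from the paper):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Tuple

@dataclass
class Observation:
    """Minimal metadata record for one observation, following the five essentials."""
    location: Tuple[float, float]   # (latitude, longitude)
    time: datetime
    owner: str
    instrument: str
    measurement: float              # e.g. a water level in metres

obs = Observation((52.24, 6.85), datetime(2010, 6, 1, 12, 0),
                  "utwente", "pressure_sensor_7", 1.42)
print(obs)
```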