
Showing papers in "Lecture Notes in Computer Science in 2008"


BookDOI
TL;DR: This paper presents a support method which allows the process designer to quantitatively measure the compliance degree of a given process model against a set of control objectives, allowing process designers to comparatively assess the compliance degree of their design as well as be better informed on the cost of non-compliance.
Abstract: Historically, business process design has been driven by business objectives, specifically process improvement. However this cannot come at the price of control objectives which stem from various legislative, standard and business partnership sources. Ensuring the compliance to regulations and industrial standards is an increasingly important issue in the design of business processes. In this paper, we advocate that control objectives should be addressed at an early stage, i.e., design time, so as to minimize the problems of runtime compliance checking and consequent violations and penalties. To this aim, we propose supporting mechanisms for business process designers. This paper specifically presents a support method which allows the process designer to quantitatively measure the compliance degree of a given process model against a set of control objectives. This will allow process designers to comparatively assess the compliance degree of their design as well as be better informed on the cost of non-compliance.

780 citations


Book ChapterDOI
TL;DR: This paper provides an overview of visual analytics, its scope and concepts, addresses the most important research challenges, and presents use cases from a wide variety of application scenarios.
Abstract: In today's applications data is produced at unprecedented rates. While the capacity to collect and store new data rapidly grows, the ability to analyze these data volumes increases at much lower rates. This gap leads to new challenges in the analysis process, since analysts, decision makers, engineers, or emergency response teams depend on information hidden in the data. The emerging field of visual analytics focuses on handling these massive, heterogeneous, and dynamic volumes of information by integrating human judgement by means of visual representations and interaction techniques in the analysis process. Furthermore, it is the combination of related research areas including visualization, data mining, and statistics that turns visual analytics into a promising field of research. This paper aims at providing an overview of visual analytics, its scope and concepts, addresses the most important research challenges and presents use cases from a wide variety of application scenarios.

527 citations



Book ChapterDOI
TL;DR: Attempto Controlled English (ACE) is a controlled natural language, i.e. a precisely defined subset of English that can automatically and unambiguously be translated into first-order logic.
Abstract: Attempto Controlled English (ACE) is a controlled natural language, i.e. a precisely defined subset of English that can automatically and unambiguously be translated into first-order logic. ACE may seem to be completely natural, but is actually a formal language; concretely, it is a first-order logic language with an English syntax. Thus ACE is human and machine understandable. ACE was originally intended to specify software, but has since been used as a general knowledge representation language in several application domains, most recently for the semantic web. ACE is supported by a number of tools, predominantly by the Attempto Parsing Engine (APE) that translates ACE texts into Discourse Representation Structures (DRS), a variant of first-order logic. Other tools include the Attempto Reasoner RACE, the AceRules system, the ACE View plug-in for the Protege ontology editor, AceWiki, and the OWL verbaliser.

264 citations


Journal Article
TL;DR: A distinguisher that uses the value of the Mutual Information between the observed measurements and a hypothetical leakage to rank key guesses, and is effective without any knowledge about the particular dependencies between measurements and leakage as well as between leakage and processed data, which makes it a universal tool.
Abstract: We propose a generic information-theoretic distinguisher for differential side-channel analysis. Our model of side-channel leakage is a refinement of the one given by Standaert et al. An embedded device containing a secret key is modeled as a black box with a leakage function whose output is captured by an adversary through the noisy measurement of a physical observable. Although quite general, the model and the distinguisher are practical and allow us to develop a new differential side-channel attack. More precisely, we build a distinguisher that uses the value of the Mutual Information between the observed measurements and a hypothetical leakage to rank key guesses. The attack is effective without any knowledge about the particular dependencies between measurements and leakage as well as between leakage and processed data, which makes it a universal tool. Our approach is confirmed by results of power analysis experiments. We demonstrate that the model and the attack work effectively in an attack scenario against DPA-resistant logic.
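As a rough illustration of how such a mutual-information distinguisher can rank key guesses, the sketch below estimates the MI between a hypothetical leakage and each trace sample with simple histograms. It is not the paper's implementation: the Hamming-weight-of-(p XOR k) leakage model, the `traces`/`plaintexts` arrays, and the bin count are illustrative assumptions.

```python
import numpy as np

def hamming_weight(x):
    return bin(int(x)).count("1")

def mutual_information(hyp, samples, bins=9):
    # Histogram-based MI estimate between a discrete hypothesis and a measured sample.
    pxy, _, _ = np.histogram2d(hyp, samples, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def rank_key_guesses(traces, plaintexts):
    """traces: (n_traces, n_samples) array; plaintexts: n_traces byte values.

    Scores each 8-bit key guess by the largest MI found over all time samples.
    Here the hypothetical leakage is simply HW(p ^ k); a real attack would
    target e.g. an S-box output.
    """
    scores = np.empty(256)
    for k in range(256):
        hyp = np.array([hamming_weight(p ^ k) for p in plaintexts])
        scores[k] = max(mutual_information(hyp, traces[:, t])
                        for t in range(traces.shape[1]))
    return np.argsort(scores)[::-1]  # most likely key guess first
```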

217 citations


Journal Article
TL;DR: In this paper, three parallelization methods for Monte-Carlo Tree Search (MCTS) are discussed: leaf parallelization, root parallelization, and tree parallelization (to be effective, tree parallelization requires two techniques: adequate handling of local mutexes and virtual loss).
Abstract: Monte-Carlo Tree Search (MCTS) is a new best-first search method that started a revolution in the field of Computer Go. Parallelizing MCTS is an important way to increase the strength of any Go program. In this article, we discuss three parallelization methods for MCTS: leaf parallelization, root parallelization, and tree parallelization. To be effective, tree parallelization requires two techniques: adequate handling of (1) local mutexes and (2) virtual loss. Experiments in 13×13 Go reveal that in the program Mango root parallelization may lead to the best results for a specific time setting and specific program parameters. However, as soon as the selection mechanism is able to handle more adequately the balance of exploitation and exploration, tree parallelization should receive attention too and could become a second choice for parallelizing MCTS. Preliminary experiments on the smaller 9×9 board provide promising prospects for tree parallelization.
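As a hedged sketch of the tree-parallelization idea (a shared tree protected by per-node mutexes, with a virtual loss applied during descent), the fragment below shows how a thread could count an in-flight simulation as a loss so that other threads are steered toward different branches. The `Node` class, the UCT constant, and the missing expansion/playout steps are assumptions, not the Mango implementation.

```python
import math
import threading

class Node:
    def __init__(self, parent=None):
        self.parent, self.children = parent, []
        self.visits, self.wins = 0, 0
        self.lock = threading.Lock()  # local mutex guarding this node's statistics

    def uct(self, c=1.4):
        if self.visits == 0:
            return float("inf")
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits + 1) / self.visits))

def select(root):
    """Descend the shared tree; the visit added on the way down acts as a
    'virtual loss' because no win is credited until the playout finishes."""
    node, path = root, [root]
    with root.lock:
        root.visits += 1
    while node.children:
        node = max(node.children, key=lambda n: n.uct())
        with node.lock:
            node.visits += 1  # virtual loss: visited but (so far) not won
        path.append(node)
    return node, path

def backup(path, result):
    """Replace the virtual loss with the real playout result (1 = win, 0 = loss)."""
    for node in path:
        with node.lock:
            node.wins += result
```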

198 citations


Journal Article
TL;DR: It is concluded that the practical key recovery attack against KeeLoq can be used to subvert the security of real systems and, in some scenarios, may even reveal the master secret used in an entire class of devices by attacking a single device.
Abstract: KeeLoq is a lightweight block cipher with a 32-bit block size and a 64-bit key. Despite its short key size, it is used in remote keyless entry systems and other wireless authentication applications. For example, there are indications that authentication protocols based on KeeLoq are used, or were used, by various car manufacturers in anti-theft mechanisms. This paper presents a practical key recovery attack against KeeLoq that requires 2^16 known plaintexts and has a time complexity of 2^44.5 KeeLoq encryptions. It is based on the principle of slide attacks and a novel approach to meet-in-the-middle attacks. We investigated the way KeeLoq is intended to be used in practice and conclude that our attack can be used to subvert the security of real systems. In some scenarios the adversary may even reveal the master secret used in an entire class of devices from attacking a single device. Our attack has been fully implemented. We have built a device that can obtain the data required for the attack in less than 100 minutes, and our software experiments show that, given the data, the key can be found in 7.8 days of calculations on 64 CPU cores.

134 citations


Journal Article
TL;DR: This paper presents the first application of max-plus to a large-scale problem and verifies its efficacy in realistic settings and provides empirical evidence that max-plus performs well on cyclic graphs, though it has been proven to converge only for tree-structured graphs.
Abstract: Since traffic jams are ubiquitous in the modern world, optimizing the behavior of traffic lights for efficient traffic flow is a critically important goal. Though most current traffic lights use simple heuristic protocols, more efficient controllers can be discovered automatically via multiagent reinforcement learning, where each agent controls a single traffic light. However, in previous work on this approach, agents select only locally optimal actions without coordinating their behavior. This paper extends this approach to include explicit coordination between neighboring traffic lights. Coordination is achieved using the max-plus algorithm, which estimates the optimal joint action by sending locally optimized messages among connected agents. This paper presents the first application of max-plus to a large-scale problem and thus verifies its efficacy in realistic settings. It also provides empirical evidence that max-plus performs well on cyclic graphs, though it has been proven to converge only for tree-structured graphs. Furthermore, it provides a new understanding of the properties a traffic network must have for such coordination to be beneficial and shows that max-plus outperforms previous methods on networks that possess those properties.
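A minimal sketch of the max-plus message-passing idea on a coordination graph is given below; the pairwise payoff matrices, the fixed iteration count, and the message normalization are illustrative assumptions rather than the paper's traffic-control setup.

```python
import numpy as np

def max_plus(edges, payoffs, n_actions, iters=20):
    """Approximate the best joint action on a coordination graph with pairwise
    payoffs f_ij by exchanging locally optimized messages.

    edges:   list of (i, j) agent pairs
    payoffs: dict (i, j) -> (n_actions x n_actions) payoff matrix f_ij[a_i, a_j]
    """
    agents = sorted({a for e in edges for a in e})
    neighbors = {a: sorted({k for e in edges if a in e for k in e if k != a})
                 for a in agents}
    # mu[(i, j)][a_j]: what agent i currently tells neighbor j about j's actions
    mu = {(i, j): np.zeros(n_actions) for i, j in edges}
    mu.update({(j, i): np.zeros(n_actions) for i, j in edges})

    for _ in range(iters):
        for (i, j) in list(mu):
            f = payoffs[(i, j)] if (i, j) in payoffs else payoffs[(j, i)].T
            incoming = sum((mu[(k, i)] for k in neighbors[i] if k != j),
                           np.zeros(n_actions))
            # mu_ij(a_j) = max over a_i of [ f_ij(a_i, a_j) + other incoming messages ]
            msg = (f + incoming[:, None]).max(axis=0)
            mu[(i, j)] = msg - msg.mean()  # keep messages bounded on cyclic graphs

    # each agent picks the action that maximizes the sum of its incoming messages
    return {i: int(sum(mu[(k, i)] for k in neighbors[i]).argmax()) for i in agents}
```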

133 citations


Journal Article
TL;DR: In this paper, the authors apply a similar masking strategy to the most compact (unmasked) S-box to date, yielding the most compact masked S-box so far, with "perfect masking" giving suitable implementations immunity to first-order differential side-channel attacks.
Abstract: Implementations of the Advanced Encryption Standard (AES), including hardware applications with limited resources (e.g., smart cards), may be vulnerable to "side-channel attacks" such as differential power analysis. One countermeasure against such attacks is adding a random mask to the data; this randomizes the statistics of the calculation at the cost of computing "mask corrections." The single nonlinear step in each AES round is the "S-box" (involving a Galois inversion), which incurs the majority of the cost for mask corrections. Oswald et al. [1] showed how the "tower field" representation allows maintaining an additive mask throughout the Galois inverse calculation. This work applies a similar masking strategy to the most compact (unmasked) S-box to date [2]. The result is the most compact masked S-box so far, with "perfect masking" (by the definition of Blomer [3]) giving suitable implementations immunity to first-order differential side-channel attacks.
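The paper's contribution is keeping an additive mask through the tower-field Galois inversion; as a much simpler, hedged illustration of additive (Boolean) masking in general, and not of the compact tower-field construction itself, the sketch below re-randomizes a lookup table so that the lookup returns S(x) ^ m_out rather than the unmasked S(x). The `sbox` table and mask handling are assumptions for illustration.

```python
import secrets

def masked_lookup(sbox, x_masked, m_in, m_out):
    """First-order Boolean masking of a table lookup.

    x_masked = x ^ m_in is the only form of x handed to this routine; the
    recomputed table maps it to S(x) ^ m_out instead of the unmasked S(x).
    """
    masked_table = [sbox[i ^ m_in] ^ m_out for i in range(len(sbox))]
    return masked_table[x_masked]

def demo(sbox, x):
    m_in = secrets.randbelow(len(sbox))   # fresh random input mask
    m_out = secrets.randbelow(len(sbox))  # fresh random output mask
    y_masked = masked_lookup(sbox, x ^ m_in, m_in, m_out)
    assert y_masked ^ m_out == sbox[x]    # unmasking recovers the true S-box output
```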

131 citations


BookDOI
TL;DR: This volume contains the proceedings of INEX 2007, covering the Ad Hoc, Book Search (BookSearch'07), XML-Mining, Entity Ranking, Interactive, Link-the-Wiki, and Multimedia tracks.
Abstract: Ad Hoc Track.- Overview of the INEX 2007 Ad Hoc Track.- INEX 2007 Evaluation Measures.- XML Retrieval by Improving Structural Relevance Measures Obtained from Summary Models.- TopX @ INEX 2007.- The Garnata Information Retrieval System at INEX'07.- Dynamic Element Retrieval in the Wikipedia Collection.- The Simplest XML Retrieval Baseline That Could Possibly Work.- Using Language Models and Topic Models for XML Retrieval.- UJM at INEX 2007: Document Model Integrating XML Tags.- Phrase Detection in the Wikipedia.- Indian Statistical Institute at INEX 2007 Adhoc Track: VSM Approach.- A Fast Retrieval Algorithm for Large-Scale XML Data.- LIG at INEX 2007 Ad Hoc Track: Using Collectionlinks as Context.- Book Search Track.- Overview of the INEX 2007 Book Search Track (BookSearch'07).- Logistic Regression and EVIs for XML Books and the Heterogeneous Track.- CMIC at INEX 2007: Book Search Track.- XML-Mining Track.- Clustering XML Documents Using Closed Frequent Subtrees: A Structural Similarity Approach.- Probabilistic Methods for Structured Document Classification at INEX'07.- Efficient Clustering of Structured Documents Using Graph Self-Organizing Maps.- Document Clustering Using Incremental and Pairwise Approaches.- XML Document Classification Using Extended VSM.- Entity Ranking Track.- Overview of the INEX 2007 Entity Ranking Track.- L3S at INEX 2007: Query Expansion for Entity Ranking Using a Highly Accurate Ontology.- Entity Ranking Based on Category Expansion.- Entity Ranking from Annotated Text Collections Using Multitype Topic Models.- An n-Gram and Initial Description Based Approach for Entity Ranking Track.- Structured Document Retrieval, Multimedia Retrieval, and Entity Ranking Using PF/Tijah.- Using Wikipedia Categories and Links in Entity Ranking.- Integrating Document Features for Entity Ranking.- Interactive Track.- A Comparison of Interactive and Ad-Hoc Relevance Assessments.- Task Effects on Interactive Search: The Query Factor.- Link-the-Wiki Track.- Overview of INEX 2007 Link the Wiki Track.- Using and Detecting Links in Wikipedia.- GPX: Ad-Hoc Queries and Automated Link Discovery in the Wikipedia.- University of Waterloo at INEX2007: Adhoc and Link-the-Wiki Tracks.- Wikipedia Ad Hoc Passage Retrieval and Wikipedia Document Linking.- Multimedia Track.- The INEX 2007 Multimedia Track.

119 citations


Book ChapterDOI
TL;DR: A Mobile-Ambients-based process calculus to describe context-aware computing in an infrastructure-based Ubiquitous Computing setting and a type system enforcing security policies by a combination of static and dynamic checking of mobile agents is provided.
Abstract: We present a Mobile-Ambients-based process calculus to describe context-aware computing in an infrastructure-based Ubiquitous Computing setting. In our calculus, computing agents can provide and discover contextual information and are owners of security policies. Simple access control to contextual information is not sufficient to ensure confidentiality in Global Computing, therefore our security policies regulate agents' rights to the provision and discovery of contextual information over distributed flows of actions. A type system enforcing security policies by a combination of static and dynamic checking of mobile agents is provided, together with its type soundness.

Journal Article
TL;DR: Complex Event Processing (CEP) as mentioned in this paper is a defined set of tools and techniques for analyzing and controlling the complex series of interrelated events that drive modern distributed information systems, which helps IS and IT professionals understand what is happening within the system, quickly identify and solve problems, and more effectively utilize events for enhanced operation, performance, and security.
Abstract: Complex Event Processing (CEP) is a defined set of tools and techniques for analyzing and controlling the complex series of interrelated events that drive modern distributed information systems. This emerging technology helps IS and IT professionals understand what is happening within the system, quickly identify and solve problems, and more effectively utilize events for enhanced operation, performance, and security. CEP can be applied to a broad spectrum of information system challenges, including business process automation, schedule and control processes, network monitoring and performance prediction, and intrusion detection. This talk is about the rise of CEP as we know it today, its historical roots and its current position in commercial markets. Some possible long-term future roles of CEP in the Information Society are discussed along with the need to develop rule-based event hierarchies on a commercial basis to make those applications possible. The talk emphasizes the point that "Rules are everywhere" and that mathematical formalisms cannot express all the forms that are in use in various event processing systems.


BookDOI
TL;DR: This presentation discusses the development of Digital Signature Schemes in Weakened Random Oracle Models, and its application to the Discrete Logarithm Problem with Low Hamming Weight Product Exponents.

Book ChapterDOI
TL;DR: A CUDA-based nonlinear finite element model with an anisotropic visco-hyperelastic constitutive formulation, implemented on a graphics processing unit (GPU), is integrated into the SOFA open source framework.
Abstract: Accurate biomechanical modelling of soft tissue is a key aspect for achieving realistic surgical simulations. However, because medical simulation is a multi-disciplinary area, researchers do not always have sufficient resources to develop an efficient and physically rigorous model for organ deformation. We address this issue by implementing a CUDA-based nonlinear finite element model into the SOFA open source framework. The proposed model is an anisotropic visco-hyperelastic constitutive formulation implemented on a graphical processor unit (GPU). After presenting results on the model's performance we illustrate the benefits of its integration within the SOFA framework on a simulation of cataract surgery.

Book ChapterDOI
TL;DR: The official measures of retrieval effectiveness employed for the Ad Hoc Track at INEX 2007 are described; whereas in earlier years all, but only, XML elements could be retrieved, the result format has been liberalized to arbitrary passages.
Abstract: This paper describes the official measures of retrieval effectiveness that are employed for the Ad Hoc Track at INEX 2007. Whereas in earlier years all, but only, XML elements could be retrieved, the result format has been liberalized to arbitrary passages. In response, the INEX 2007 measures are based on the amount of highlighted text retrieved, leading to natural extensions of the well-established measures of precision and recall. The following measures are defined: The Focused Task is evaluated by interpolated precision at 1% recall (iP[0.01]) in terms of the highlighted text retrieved. The Relevant in Context Task is evaluated by mean average generalized precision (MAgP) where the generalized score per article is based on the retrieved highlighted text. The Best in Context Task is also evaluated by mean average generalized precision (MAgP) but here the generalized score per article is based on the distance to the assessor's best-entry point.
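A hedged sketch of the interpolated-precision idea, expressed in terms of amounts of highlighted text rather than whole documents, is given below; the per-result character counts and the simple max-over-ranks interpolation are assumptions for illustration, not the official INEX evaluation code.

```python
def interpolated_precision(ranked_results, total_relevant_chars, recall_level=0.01):
    """iP[x]: the best precision achieved at any rank whose recall reaches x.

    ranked_results: list of (chars_retrieved, relevant_chars_retrieved) per result,
                    in rank order, where "relevant" means highlighted text.
    """
    retrieved = relevant = 0
    best = 0.0
    for chars, rel_chars in ranked_results:
        retrieved += chars
        relevant += rel_chars
        recall = relevant / total_relevant_chars
        precision = relevant / retrieved
        if recall >= recall_level:
            best = max(best, precision)
    return best

# Example: iP[0.01] for a run of three retrieved passages (hypothetical numbers).
print(interpolated_precision([(500, 120), (400, 0), (300, 200)], 2000))
```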

Book ChapterDOI
TL;DR: The authors evaluate two entity normalization methods based on Wikipedia in the context of both passage and document retrieval for question answering and find that even a simple normalization method leads to improvements of early precision, both for document and passage retrieval.
Abstract: In the named entity normalization task, a system identifies a canonical unambiguous referent for names like Bush or Alabama. Resolving synonymy and ambiguity of such names can benefit end-to-end information access tasks. We evaluate two entity normalization methods based on Wikipedia in the context of both passage and document retrieval for question answering. We find that even a simple normalization method leads to improvements of early precision, both for document and passage retrieval. Moreover, better normalization results in better retrieval performance.

Book ChapterDOI
TL;DR: This paper proposes a solution to the problem of privacy-aware access control in social networks, which enforces access control through a collaboration of selected nodes in the network, and discusses the robustness of the proposed protocols against the main security threats.
Abstract: Access control over resources shared by social network users is today receiving growing attention due to the widespread use of social networks not only for recreational but also for business purposes. In a social network, access control is mainly regulated by the relationships established by social network users. An important issue is therefore to devise privacy-aware access control mechanisms able to perform a controlled sharing of resources while, at the same time, satisfying privacy requirements of social network users with respect to their relationships. In this paper, we propose a solution to this problem, which enforces access control through a collaboration of selected nodes in the network. The use of cryptographic and digital signature techniques ensures that relationship privacy is guaranteed during the collaborative process. In the paper, besides giving the protocols to enforce collaborative access control, we discuss their robustness against the main security threats.

Journal Article
TL;DR: This study hypothesizes that the responses of a VP simulating Post Traumatic Stress Disorder in an adolescent female could elicit a number of diagnostic mental health specific questions that are necessary for differential diagnosis of the condition.
Abstract: Recent research has established the potential for virtual characters to act as virtual standardized patients (VPs) for the assessment and training of novice clinicians. We hypothesize that the responses of a VP simulating Post Traumatic Stress Disorder (PTSD) in an adolescent female could elicit a number of diagnostic mental health specific questions (from novice clinicians) that are necessary for differential diagnosis of the condition. Composites were developed to reflect the relation between novice clinician questions and VP responses. The primary goal in this study was evaluative: can a VP generate responses that elicit user questions relevant for PTSD categorization? A secondary goal was to investigate the impact of psychological variables upon the resulting VP Question/Response composites and the overall believability of the system.

Journal Article
TL;DR: In this paper, a formal definition of round-trip engineering and the semantics of target changes in the context of partial and non-injective transformations is presented, since such transformations cannot be easily reversed to propagate changes.
Abstract: In a model-centric software development environment, a multitude of different models are used to describe a software system on different abstraction layers and from different perspectives. Following the MDA vision, model transformation is used to support the gradual refinement from abstract models into more concrete models. However, target models do not stay untouched but may be changed due to maintenance work or evolution of the software. Therefore, in order to preserve a coherent description of the whole system, it is necessary to propagate certain changes to a target model back to the source model. However, as transformations in general are partial and not injective, they cannot be easily reversed to propagate changes. This paper presents a formal definition of round-trip engineering and the semantics of target changes in the context of partial and non-injective transformations.

BookDOI
TL;DR: This book discusses the challenges and requirements for sensor data-based knowledge discovery solutions in high-priority applications and explores the fusion between heterogeneous data streams from multiple sensor types and applications in science, engineering, and security.
Abstract: Addressing the issues challenging the sensor community, this book presents innovative solutions in offline data mining and real-time analysis of sensor or geographically distributed data. Illustrated with case studies, it discusses the challenges and requirements for sensor data-based knowledge discovery solutions in high-priority applications. The book then explores the fusion between heterogeneous data streams from multiple sensor types and applications in science, engineering, and security. Bringing together researchers from academia, government, and the private sector, this book delineates the application of knowledge modeling in data intensive operations.

Journal Article
TL;DR: In this article, the authors propose a classification of service granularity types that reflect three different interpretations of the term granularity: functionality granularity, data granularity and business value granularity.
Abstract: Service granularity generally refers to the size of a service. The fact that services should be large-sized or coarse-grained is often postulated as a fundamental design principle of service oriented architecture (SOA). However, multiple meanings are put on the term granularity and the impact of granularity on architectural qualities is not always clear. In order to structure the discussion, we propose a classification of service granularity types that reflects three different interpretations. Firstly, functionality granularity refers to how much functionality is offered by a service. Secondly, data granularity reflects the amount of data that is exchanged with a service. Finally, the business value granularity of a service indicates to which extent the service provides added business value. For each of these types, we discuss the impact of granularity on a set of architectural concerns, such as performance, reusability and flexibility. We illustrate each granularity type with small examples and we present some preliminary ideas of how controlling granularity may assist in alleviating some architectural issues as we encounter them in a large-sized bank-insurance company that is currently migrating to SOA.

Journal Article
TL;DR: In this paper, the authors examine the resistance of the popular hash function SHA-1 and its predecessor SHA-0 against dedicated preimage attacks and develop new cryptanalytic techniques to assess the security margin of these hash functions against these attacks.
Abstract: In this paper, we examine the resistance of the popular hash function SHA-1 and its predecessor SHA-0 against dedicated preimage attacks. In order to assess the security margin of these hash functions against these attacks, two new cryptanalytic techniques are developed: Reversing the inversion problem: the idea is to start with an impossible expanded message that would lead to the required digest, and then to correct this message until it becomes valid without destroying the preimage property. P^3 graphs: an algorithm based on the theory of random graphs that allows the conversion of preimage attacks on the compression function to attacks on the hash function with less effort than traditional meet-in-the-middle approaches. Combining these techniques, we obtain preimage-style shortcut attacks for up to 45 steps of SHA-1, and up to 50 steps of SHA-0 (out of 80).

Journal Article
TL;DR: In this article, the authors introduce two definitions of approximations for incomplete decision tables, local and global, such that the corresponding upper approximations are minimal definable sets, and show that local approximations are more precise than global approximations.
Abstract: For completely specified decision tables lower and upper approximations are unique, the lower approximation is the largest definable set contained in the approximated set X and the upper approximation of X is the smallest definable set containing X. For incomplete decision tables the existing definitions of upper approximations provide sets that, in general, are not minimal definable sets. The same is true for generalizations of approximations based on relations that are not equivalence relations. In this paper we introduce two definitions of approximations, local and global, such that the corresponding upper approximations are minimal. Local approximations are more precise than global approximations. Global lower approximations may be determined by a polynomial algorithm. However, algorithms to find both local approximations and global upper approximations are NP-hard. Additionally, we show that for decision tables with all missing attribute values being lost, local and global approximations are equal to one another and that they are unique.
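For the completely specified case that the abstract takes as its starting point, the lower and upper approximations can be read directly off the indiscernibility classes; the sketch below illustrates that baseline (the paper's local and global approximations for incomplete tables are more involved). The dictionary-based table layout is an assumption for illustration.

```python
from collections import defaultdict

def approximations(table, attributes, X):
    """Classical lower/upper approximations in a completely specified decision table.

    table:      dict object_id -> dict attribute -> value
    attributes: attributes inducing the indiscernibility (equivalence) relation
    X:          set of object ids to approximate
    """
    blocks = defaultdict(set)
    for oid, row in table.items():
        blocks[tuple(row[a] for a in attributes)].add(oid)

    lower, upper = set(), set()
    for block in blocks.values():
        if block <= X:    # block entirely inside X: part of the largest definable subset
            lower |= block
        if block & X:     # block intersects X: needed in the smallest definable superset
            upper |= block
    return lower, upper
```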

BookDOI
TL;DR: This volume contains the proceedings of the 5th Hellenic Conference on Artificial Intelligence (SETN 2008), held in Syros, Greece, October 2-4, 2008.
Abstract: Artificial Intelligence: Theories, Models and Applications, 5th Hellenic Conference on AI, SETN 2008, Syros, Greece, October 2-4, 2008, Proceedings (Lecture Notes in Artificial Intelligence).


Journal Article
TL;DR: This paper analyzes the lifting of minimal inequalities derived from lattice-free triangles and proves that the lifting functions are unique for each of the first two categories such that the resultant inequality is minimal for the mixed integer infinite group problem, and shows that it is not necessarily unique in the third category.
Abstract: Recently, Andersen et al. [1] and Borozan and Cornuéjols [3] characterized the minimal inequalities of a system of two rows with two free integer variables and nonnegative continuous variables. These inequalities are either split cuts or intersection cuts derived using maximal lattice-free convex sets. In order to use these minimal inequalities to obtain cuts from two rows of a general simplex tableau, it is necessary to extend the system to include integer variables (giving the two-dimensional mixed integer infinite group problem), and to develop lifting functions giving the coefficients of the integer variables in the corresponding inequalities. In this paper, we analyze the lifting of minimal inequalities derived from lattice-free triangles. Maximal lattice-free triangles in R^2 can be classified into three categories: those with multiple integral points in the relative interior of one of its sides, those with integral vertices and one integral point in the relative interior of each side, and those with non-integral vertices and one integral point in the relative interior of each side. We prove that the lifting functions are unique for each of the first two categories such that the resultant inequality is minimal for the mixed integer infinite group problem, and characterize them. We show that the lifting function is not necessarily unique in the third category. For this category we show that a fill-in inequality (Johnson [11]) yields minimal inequalities for the mixed integer infinite group problem under certain sufficiency conditions. Finally, we present conditions for the fill-in inequality to be extreme.

Book ChapterDOI
TL;DR: This paper relates possible rules to rules by the criteria support and accuracy in NISs, and proposes two strategies of rule generation based on these criteria, which are applied to some data sets with incomplete information.
Abstract: This paper presents a framework of rule generation in Non-deterministic Information Systems (NISs), which follows rough-sets-based rule generation in Deterministic Information Systems (DISs). Our previous work about NISs coped with certain rules, minimal certain rules and possible rules. These rules are characterized by the concept of consistency. This paper relates possible rules to rules by the criteria support and accuracy in NISs. On the basis of the information incompleteness in NISs, it is possible to define new criteria, i.e., minimum support, maximum support, minimum accuracy and maximum accuracy. Then, two strategies of rule generation are proposed based on these criteria. The first strategy is the Lower Approximation strategy, which defines rule generation under the worst condition. The second strategy is the Upper Approximation strategy, which defines rule generation under the best condition. To implement these strategies, we extend the Apriori algorithm in DISs to an Apriori algorithm in NISs. A prototype system is implemented, and this system is applied to some data sets with incomplete information.
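A simplified, hedged reading of the minimum/maximum support idea for a non-deterministic table (set-valued cells) is sketched below: minimum support counts objects on which the rule holds in every possible completion (the worst-case, Lower Approximation view), while maximum support counts objects on which it holds in at least one completion (the best-case, Upper Approximation view). The table layout and descriptor form are assumptions, not the paper's exact definitions or its extended Apriori implementation.

```python
def support_bounds(table, premise, conclusion):
    """Minimum and maximum support of the rule [a = v] => [d = w] in a
    non-deterministic table whose cells are sets of possible values.

    table:      list of rows, each a dict attribute -> set of possible values
    premise:    (a, v) descriptor for the condition attribute
    conclusion: (d, w) descriptor for the decision attribute
    """
    (a, v), (d, w) = premise, conclusion
    certain = possible = 0
    for row in table:
        if row[a] == {v} and row[d] == {w}:
            certain += 1        # rule holds in every completion of this object
        if v in row[a] and w in row[d]:
            possible += 1       # rule holds in at least one completion
    n = len(table)
    return certain / n, possible / n   # (minimum support, maximum support)

# Example with one deterministic and one non-deterministic object (hypothetical data).
rows = [{"colour": {"red"}, "class": {"yes"}},
        {"colour": {"red", "blue"}, "class": {"yes", "no"}}]
print(support_bounds(rows, ("colour", "red"), ("class", "yes")))  # (0.5, 1.0)
```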


Journal Article
TL;DR: In this article, the authors demonstrate how dynamic reconfiguration can realize a range of countermeasures that are standard for software implementations but were so far practically not portable to hardware, and introduce a new class of countermeasure that, to the best of the authors' knowledge, has not been considered before.
Abstract: Dynamically reconfigurable systems are known to have many advantages such as area and power reduction. The drawbacks of these systems are the reconfiguration delay and the overhead needed to provide reconfigurability. We show that dynamic reconfiguration can also improve the resistance of cryptographic systems against physical attacks. First, we demonstrate how dynamic reconfiguration can realize a range of countermeasures which are standard for software implementations and that were practically not portable to hardware so far. Second, we introduce a new class of countermeasure that, to the best of our knowledge, has not been considered so far. This type of countermeasure provides increased resistance, in particular against fault attacks, by randomly changing the physical location of functional blocks on the chip area at run-time. Third, we show how fault detection can be provided on certain devices with negligible area-overhead. The partial bitstreams can be read back from the reconfigurable areas and compared to a reference version at run-time and inside the device. For each countermeasure, we propose a prototype architecture and evaluate the cost and security level it provides. All proposed countermeasures do not change the device's input-output behavior, thus they are transparent to upper-level protocols. Moreover, they can be implemented jointly and complemented by other countermeasures on algorithm-, circuit-, and gate-level.