
Showing papers in "Lecture Notes in Computer Science in 2004"


Book ChapterDOI
TL;DR: This paper proposes PRoPHET, a probabilistic routing protocol for intermittently connected networks and shows that it is able to deliver more messages than Epidemic Routing with a lower communication overhead.
Abstract: In this paper, we address the problem of routing in intermittently connected networks. In such networks there is no guarantee that a fully connected path between source and destination exists at any time, rendering traditional routing protocols unable to deliver messages between hosts. There do, however, exist a number of scenarios where connectivity is intermittent but communication is still desirable, so there is a need for a way to route through networks with these properties. We propose PRoPHET, a probabilistic routing protocol for intermittently connected networks, and compare it to the previously presented Epidemic Routing protocol through simulations. We show that PRoPHET is able to deliver more messages than Epidemic Routing with a lower communication overhead.

1,750 citations
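
The heart of the protocol is a per-destination "delivery predictability" that each node maintains and updates on encounters, ages over time, and propagates transitively. The sketch below follows the commonly cited form of those update rules; the constants P_INIT, BETA and GAMMA are tunable protocol parameters and the values shown are illustrative defaults, not necessarily those used in the paper's evaluation.

```python
# Sketch of PRoPHET-style delivery-predictability bookkeeping (illustrative only).
P_INIT, BETA, GAMMA = 0.75, 0.25, 0.98   # protocol parameters; defaults here are assumptions

class ProphetNode:
    def __init__(self):
        self.p = {}  # destination id -> delivery predictability in [0, 1]

    def on_encounter(self, peer_id, peer_table):
        # Direct update: meeting a node raises our predictability for it.
        old = self.p.get(peer_id, 0.0)
        self.p[peer_id] = old + (1.0 - old) * P_INIT
        # Transitive update: destinations our peer reaches well become more reachable through it.
        for dest, p_peer_dest in peer_table.items():
            if dest == peer_id:
                continue
            old = self.p.get(dest, 0.0)
            self.p[dest] = old + (1.0 - old) * self.p[peer_id] * p_peer_dest * BETA

    def age(self, elapsed_units):
        # Aging: predictabilities decay while nodes are not encountered.
        for dest in self.p:
            self.p[dest] *= GAMMA ** elapsed_units

    def should_forward(self, dest, peer_table):
        # Forwarding heuristic: hand a message over only if the peer is a better relay.
        return peer_table.get(dest, 0.0) > self.p.get(dest, 0.0)
```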


Journal Article
TL;DR: This paper proposes a general indicator-based evolutionary algorithm (IBEA) that can be combined with arbitrary indicators and can be adapted to the preferences of the user and moreover does not require any additional diversity preservation mechanism such as fitness sharing to be used.
Abstract: This paper discusses how preference information of the decision maker can in general be integrated into multiobjective search. The main idea is to first define the optimization goal in terms of a binary performance measure (indicator) and then to directly use this measure in the selection process. To this end, we propose a general indicator-based evolutionary algorithm (IBEA) that can be combined with arbitrary indicators. In contrast to existing algorithms, IBEA can be adapted to the preferences of the user and moreover does not require any additional diversity preservation mechanism such as fitness sharing to be used. It is shown on several continuous and discrete benchmark problems that IBEA can substantially improve on the results generated by two popular algorithms, namely NSGA-II and SPEA2, with respect to different performance measures.

1,625 citations
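
The central mechanism is turning a binary quality indicator directly into fitness for environmental selection. Below is a minimal sketch of that fitness assignment using the additive epsilon indicator; objective normalization, indicator scaling, and IBEA's iterative worst-individual removal with incremental fitness updates are omitted, and the kappa value is illustrative.

```python
import math

def eps_indicator(a, b):
    # Additive epsilon indicator for minimization: smallest eps such that
    # a, shifted by eps, weakly dominates b in every objective.
    return max(fa - fb for fa, fb in zip(a, b))

def ibea_fitness(objectives, kappa=0.05):
    # objectives: one objective vector per individual (minimization).
    # An individual's fitness aggregates how strongly the rest of the population
    # "pushes against" it in indicator terms; larger (less negative) is better.
    fitness = []
    for i, x in enumerate(objectives):
        f = 0.0
        for j, y in enumerate(objectives):
            if i != j:
                f += -math.exp(-eps_indicator(y, x) / kappa)
        fitness.append(f)
    return fitness

print(ibea_fitness([(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]))
```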


Book ChapterDOI
TL;DR: Open MPI provides a unique combination of novel features previously unavailable in an open-source, production-quality implementation of MPI; its component architecture provides both a stable platform for third-party research and support for the run-time composition of independent software add-ons.
Abstract: A large number of MPI implementations are currently available, each of which emphasize different aspects of high-performance computing or are intended to solve a specific research problem. The result is a myriad of incompatible MPI implementations, all of which require separate installation, and the combination of which present significant logistical challenges for end users. Building upon prior research, and influenced by experience gained from the code bases of the LAM/MPI, LA-MPI, and FT-MPI projects, Open MPI is an all-new, production-quality MPI-2 implementation that is fundamentally centered around component concepts. Open MPI provides a unique combination of novel features previously unavailable in an open-source, production-quality implementation of MPI. Its component architecture provides both a stable platform for third-party research and support for the run-time composition of independent software add-ons. This paper presents a high-level overview of the goals, design, and implementation of Open MPI.

1,603 citations


Journal Article
TL;DR: In this paper, the authors proposed a group signature scheme based on the Strong Diffie-Hellman assumption and a new assumption in bilinear groups called the Decision Linear assumption.
Abstract: We construct a short group signature scheme. Signatures in our scheme are approximately the size of a standard RSA signature with the same security. Security of our group signature is based on the Strong Diffie-Hellman assumption and a new assumption in bilinear groups called the Decision Linear assumption. We prove security of our system, in the random oracle model, using a variant of the security definition for group signatures recently given by Bellare, Micciancio, and Warinschi.

1,562 citations


Book ChapterDOI
TL;DR: Privacy and security risks and how they apply to the unique setting of low-cost RFID devices are described, several security mechanisms are proposed, and areas for future research are suggested.
Abstract: Like many technologies, low-cost Radio Frequency Identification (RFID) systems will become pervasive in our daily lives when affixed to everyday consumer items as "smart labels". While yielding great productivity gains, RFID systems may create new threats to the security and privacy of individuals or organizations. This paper presents a brief description of RFID systems and their operation. We describe privacy and security risks and how they apply to the unique setting of low-cost RFID devices. We propose several security mechanisms and suggest areas for future research.

1,516 citations


Journal Article
TL;DR: In this paper, an Atmel ATmega128 at 8 MHz was used to implement ECC point multiplication over fields using pseudo-Mersenne primes as standardized by NIST and SECG.
Abstract: Strong public-key cryptography is often considered to be too computationally expensive for small devices if not accelerated by cryptographic hardware. We revisited this statement and implemented elliptic curve point multiplication for 160-bit, 192-bit, and 224-bit NIST/SECG curves over GF(p) and RSA-1024 and RSA-2048 on two 8-bit microcontrollers. To accelerate multiple-precision multiplication, we propose a new algorithm to reduce the number of memory accesses. Implementation and analysis led to three observations: 1. Public-key cryptography is viable on small devices without hardware acceleration. On an Atmel ATmega128 at 8 MHz we measured 0.81s for 160-bit ECC point multiplication and 0.43s for an RSA-1024 operation with exponent e = 2^16 + 1. 2. The relative performance advantage of ECC point multiplication over RSA modular exponentiation increases with the decrease in processor word size and the increase in key size. 3. Elliptic curves over fields using pseudo-Mersenne primes as standardized by NIST and SECG allow for high performance implementations and show no performance disadvantage over optimal extension fields or prime fields selected specifically for a particular processor architecture.

1,113 citations
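
For context, the ECC operation being timed is scalar point multiplication over GF(p). The following self-contained double-and-add sketch uses a tiny textbook curve (y^2 = x^3 + 2x + 2 over GF(17)), which is not a NIST/SECG curve and has none of the paper's optimizations (hybrid multiplication, pseudo-Mersenne reduction, windowing); it only illustrates the operation itself.

```python
# Toy scalar multiplication over GF(p) in affine coordinates (illustrative only).
p, a, b = 17, 2, 2      # small textbook curve y^2 = x^3 + a*x + b over GF(17)
G = (5, 1)              # a point on that curve
O = None                # point at infinity

def inv(x):
    return pow(x, p - 2, p)            # modular inverse via Fermat's little theorem

def add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                       # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv(2 * y1) % p   # tangent slope (doubling)
    else:
        lam = (y2 - y1) * inv(x2 - x1) % p          # chord slope (addition)
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def scalar_mult(k, P):
    # Left-to-right double-and-add over the bits of k.
    R = O
    for bit in bin(k)[2:]:
        R = add(R, R)
        if bit == '1':
            R = add(R, P)
    return R

print(scalar_mult(7, G))   # some multiple of G on the toy curve
```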


Book ChapterDOI
TL;DR: A variant of temporal logic tailored for specifying desired properties of continuous signals, based on a bounded subset of the real-time logic MITL, augmented with a static mapping from continuous domains into propositions is introduced.
Abstract: In this paper we introduce a variant of temporal logic tailored for specifying desired properties of continuous signals. The logic is based on a bounded subset of the real-time logic MITL, augmented with a static mapping from continuous domains into propositions. From formulae in this logic we create automatically property monitors that can check whether a given signal of bounded length and finite variability satisfies the property. A prototype implementation of this procedure was used to check properties of simulation traces generated by Matlab/Simulink.

1,067 citations
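
As a flavor of what such an offline property monitor does, here is a hedged sketch that checks bounded "always" and "eventually" properties over a sampled, finite-length trace. The real approach handles full MITL-style formulae and finite-variability signals rather than the naive predicate-over-samples view below; the trace and thresholds are made up for illustration.

```python
# Offline monitoring of simple bounded temporal properties over a sampled signal.
# trace: list of (time, value) pairs, assumed sorted by time and of bounded length.

def always(trace, t0, t1, pred):
    # "pred holds at every sample with time in [t0, t1]"
    return all(pred(v) for t, v in trace if t0 <= t <= t1)

def eventually(trace, t0, t1, pred):
    # "pred holds at some sample with time in [t0, t1]"
    return any(pred(v) for t, v in trace if t0 <= t <= t1)

# Example: the signal must stay below 2.5 during the first second and must
# exceed 1.0 at some point between 0.5s and 1.5s.
trace = [(0.0, 0.2), (0.25, 0.8), (0.5, 1.4), (0.75, 1.9), (1.0, 2.2), (1.25, 2.4)]
ok = always(trace, 0.0, 1.0, lambda v: v < 2.5) and eventually(trace, 0.5, 1.5, lambda v: v > 1.0)
print(ok)
```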


Journal Article
TL;DR: Given a finite-state abstraction of a sequential program with potentially recursive procedures and input from the environment, whether there are input sequences that can drive the system into “bad/good” executions is checked.
Abstract: Given a finite-state abstraction of a sequential program with potentially recursive procedures and input from the environment, we wish to check statically whether there are input sequences that can drive the system into bad/good executions. Pushdown games have been used in recent years for such analyses and there is by now a very rich literature on the subject. (See, e.g., [BS92,Tho95,Wal96,BEM97,Cac02a,CDT02].) In this paper we use recursive game graphs to model such interprocedural control flow in an open system. These models are intimately related to pushdown systems and pushdown games, but more directly capture the control flow graphs of recursive programs ([AEY01,BGR01,ATM03b]). We describe alternative algorithms for the well-studied problems of determining both reachability and Büchi winning strategies in such games. Our algorithms are based on solutions to second-order data flow equations, generalizing the Datalog rules used in [AEY01] for analysis of recursive state machines. This offers what we feel is a conceptually simpler view of these well-studied problems and provides another example of the close links between the techniques used in program analysis and those of model checking. There are also some technical advantages to the equational approach. Like the approach of Cachat [Cac02a], our solution avoids the necessarily exponential-space blow-up incurred by Walukiewicz's algorithms for pushdown games. However, unlike [Cac02a], our approach does not rely on a representation of the space of winning configurations of a pushdown graph by (alternating) automata. Only minimal sets of exits that can be forced need to be maintained, and this provides the potential for greater space efficiency. In a sense, our algorithms can be viewed as an automaton-free version of the algorithms of [Cac02a].

1,038 citations


Journal Article
TL;DR: In this article, the authors presented an authentication protocol which serves as a proof of concept for authenticating an RFID tag to a reader device using the Advanced Encryption Standard (AES) as cryptographic primitive.
Abstract: Radio frequency identification (RFID) is an emerging technology which brings enormous productivity benefits in applications where objects have to be identified automatically. This paper presents issues concerning security and privacy of RFID systems which are heavily discussed in public. In contrast to the RFID community, which claims that cryptographic components are too costly for RFID tags, we describe a solution using strong symmetric authentication which is suitable for today's requirements regarding low power consumption and low die-size. We introduce an authentication protocol which serves as a proof of concept for authenticating an RFID tag to a reader device using the Advanced Encryption Standard (AES) as cryptographic primitive. The main part of this work is a novel approach to an AES hardware implementation which encrypts a 128-bit block of data within 1000 clock cycles and has a power consumption below 9 μA on a 0.35 μm CMOS process.

709 citations


BookDOI
TL;DR: An extension to the semi-supervised aligned cluster analysis algorithm (SSACA), a temporal clustering algorithm that incorporates pairwise must-link and cannot-link constraints, is proposed; it adds an exhaustive constraint propagation mechanism to further improve the clustering process.
Abstract: In this paper, we investigate applying semi-supervised clustering to audio-visual emotion analysis, a complex problem that is traditionally solved using supervised methods. We propose an extension to the semi-supervised aligned cluster analysis algorithm (SSACA), a temporal clustering algorithm that incorporates pairwise constraints in the form of must-link and cannot-link. We incorporate an exhaustive constraint propagation mechanism to further improve the clustering process. To validate the proposed method, we apply it to emotion analysis on a multimodal naturalistic emotion database. Results show substantial improvements compared to the original aligned clustering analysis algorithm (ACA) and to our previously proposed semi-supervised approach.

625 citations


Book ChapterDOI
TL;DR: This tutorial introduces the techniques that are used to obtain results in the form of so-called error bounds in statistical learning theory.
Abstract: The goal of statistical learning theory is to study, in a statistical framework, the properties of learning algorithms. In particular, most results take the form of so-called error bounds. This tutorial introduces the techniques that are used to obtain such results.
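
A representative example of the kind of error bound such a tutorial derives: for a finite function class, Hoeffding's inequality combined with a union bound gives a uniform guarantee over the class. The notation below is the generic risk/empirical-risk one, not necessarily the tutorial's.

```latex
% Hoeffding + union bound over a finite class F of {0,1}-loss predictors:
% with probability at least 1 - \delta over an i.i.d. sample of size n,
\forall f \in \mathcal{F}:\qquad
R(f) \;\le\; \widehat{R}_n(f) \;+\; \sqrt{\frac{\log\lvert\mathcal{F}\rvert + \log(1/\delta)}{2n}}
```

Here R(f) is the true risk and R̂_n(f) the empirical risk on the sample; for richer classes the log|F| term is replaced by a capacity measure such as VC dimension or Rademacher complexity.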

Journal Article
TL;DR: This chapter presents the concrete and abstract semantics of timed automata (based on transition rules, regions and zones), decision problems, and algorithms for verification; a detailed description of DBMs (Difference Bound Matrices) is included.
Abstract: This chapter provides a tutorial and pointers to results and related work on timed automata, with a focus on semantical and algorithmic aspects of verification tools. We present the concrete and abstract semantics of timed automata (based on transition rules, regions and zones), decision problems, and algorithms for verification. A detailed description of DBMs (Difference Bound Matrices) is included, as this is the central data structure behind several verification tools for timed systems. As an example, we give a brief introduction to the tool UPPAAL.
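
To make the DBM part concrete: a zone over clocks x_1..x_k is stored as a (k+1)×(k+1) matrix of upper bounds on clock differences, with index 0 acting as the constant-zero clock, and the canonical form is an all-pairs shortest-path closure. The simplified sketch below drops the strict-versus-non-strict distinction that real DBM libraries track for each bound.

```python
INF = float('inf')

def canonical(dbm):
    # dbm[i][j] bounds x_i - x_j <= dbm[i][j]; index 0 is the zero clock.
    # Floyd-Warshall tightens every bound to its implied minimum (canonical form).
    n = len(dbm)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dbm[i][k] + dbm[k][j] < dbm[i][j]:
                    dbm[i][j] = dbm[i][k] + dbm[k][j]
    return dbm

def empty(dbm):
    # A zone is empty iff closure exposes a negative cycle, i.e. a negative diagonal entry.
    return any(dbm[i][i] < 0 for i in range(len(dbm)))

# Zone over one clock x encoding 1 <= x <= 3:
#   x - 0 <= 3   and   0 - x <= -1
zone = [[0, -1],
        [3,  0]]
print(empty(canonical(zone)))   # False: the zone is non-empty
```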

Book ChapterDOI
TL;DR: The Pegasus system that can map complex workflows onto the Grid and takes an abstract description of a workflow and finds the appropriate data and Grid resources to execute the workflow is described.
Abstract: In this paper we describe the Pegasus system that can map complex workflows onto the Grid. Pegasus takes an abstract description of a workflow and finds the appropriate data and Grid resources to execute the workflow. Pegasus is being released as part of the GriPhyN Virtual Data Toolkit and has been used in a variety of applications ranging from astronomy and biology to gravitational-wave science and high-energy physics. A deferred planning mode of Pegasus is also introduced.

Book ChapterDOI
TL;DR: This paper summarizes the main activities of the FVC2004 organization and provides a first overview of the evaluation, including a new category dedicated to "light" systems, characterized by limited computational and storage resources.
Abstract: A new technology evaluation of fingerprint verification algorithms has been organized following the approach of the previous FVC2000 and FVC2002 evaluations, with the aim of tracking the quickly evolving state-of-the-art of fingerprint recognition systems. Three sensors have been used for data collection, including a solid state sweeping sensor and two optical sensors of different characteristics. The competition included a new category dedicated to "light" systems, characterized by limited computational and storage resources. This paper summarizes the main activities of the FVC2004 organization and provides a first overview of the evaluation. Results will be further elaborated and officially presented at the International Conference on Biometric Authentication (Hong Kong) in July 2004.

Book ChapterDOI
TL;DR: An approach to integrate various similarity methods is presented, which determines similarity through rules which have been encoded by ontology experts and are then combined for one overall result.
Abstract: Ontology mapping is important when working with more than one ontology. Typically, similarity considerations are the basis for this. In this paper an approach to integrate various similarity methods is presented. In brief, we determine similarity through rules which have been encoded by ontology experts. These rules are then combined for one overall result. Several small boosting actions are added. All this is thoroughly evaluated with very promising results.

Book ChapterDOI
TL;DR: The First International Signature Verification Competition (SVC2004) was recently organized as a step towards establishing common benchmark databases and benchmarking rules, and the experience gained will be very useful to similar activities in the future.
Abstract: Handwritten signature is the most widely accepted biometric for identity verification. To facilitate objective evaluation and comparison of algorithms in the field of automatic handwritten signature verification, we organized the First International Signature Verification Competition (SVC2004) recently as a step towards establishing common benchmark databases and benchmarking rules. For each of the two tasks of the competition, a signature database involving 100 sets of signature data was created, with 20 genuine signatures and 20 skilled forgeries for each set. Eventually, 13 teams competed for Task 1 and eight teams competed for Task 2. When evaluated on data with skilled forgeries, the best team for Task 1 gives an equal error rate (EER) of 2.84% and that for Task 2 gives an EER of 2.89%. We believe that SVC2004 has successfully achieved its goals and the experience gained from SVC2004 will be very useful to similar activities in the future.

Book ChapterDOI
TL;DR: Experiments show that the recognition performance of a fingerprint system can be improved significantly by using additional user information like gender, ethnicity, and height and a framework for integrating the ancillary information with the output of a primary biometric system is presented.
Abstract: Many existing biometric systems collect ancillary information like gender, age, height, and eye color from the users during enrollment. However, only the primary biometric identifier (fingerprint, face, hand-geometry, etc.) is used for recognition and the ancillary information is rarely utilized. We propose the utilization of “soft” biometric traits like gender, height, weight, age, and ethnicity to complement the identity information provided by the primary biometric identifiers. Although soft biometric characteristics lack the distinctiveness and permanence to identify an individual uniquely and reliably, they provide some evidence about the user identity that could be beneficial. This paper presents a framework for integrating the ancillary information with the output of a primary biometric system. Experiments conducted on a database of 263 users show that the recognition performance of a fingerprint system can be improved significantly (≈ 5%) by using additional user information like gender, ethnicity, and height.
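
To give a feel for such a fusion framework: soft traits act as weak evidence that slightly reweights the primary matcher's score, with small weights so that unreliable soft traits cannot dominate. The weighting scheme and function names below are purely illustrative assumptions, not the paper's Bayesian formulation.

```python
# Illustrative fusion of a primary biometric score with "soft" trait evidence.
import math

def fuse(primary_score, soft_likelihoods, primary_weight=0.9):
    # primary_score: similarity in [0, 1] from the fingerprint matcher.
    # soft_likelihoods: per-trait probability that the observed trait value
    # (e.g. gender, ethnicity, height bucket) matches the claimed identity.
    soft_weight = (1.0 - primary_weight) / max(len(soft_likelihoods), 1)
    # Work in log-space so each soft trait adds a small corrective term.
    score = primary_weight * math.log(max(primary_score, 1e-9))
    for p in soft_likelihoods:
        score += soft_weight * math.log(max(p, 1e-9))
    return score   # higher = stronger support for the claimed identity

print(fuse(0.82, [0.95, 0.7, 0.6]))   # fingerprint score plus three soft traits
```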

Book ChapterDOI
TL;DR: This paper proposes a new position-based routing scheme called Anchor-based Street and Traffic Aware Routing (A-STAR), designed specifically for IVCS in a city environment, and shows significant performance improvement in a comparative simulation study with other similar routing approaches.
Abstract: One of the major issues that affect the performance of Mobile Ad hoc NETworks (MANET) is routing. Recently, position-based routing for MANET is found to be a very promising routing strategy for inter-vehicular communication systems (IVCS). However, position-based routing for IVCS in a built-up city environment faces greater challenges because of potentially more uneven distribution of vehicular nodes, constrained mobility, and difficult signal reception due to radio obstacles such as high-rise buildings. This paper proposes a new position-based routing scheme called Anchor-based Street and Traffic Aware Routing (A-STAR), designed specifically for IVCS in a city environment. Unique to A-STAR is the usage of information on city bus routes to identify an anchor path with high connectivity for packet delivery. Along with a new recovery strategy for packets routed to a local maximum, the proposed protocol shows significant performance improvement in a comparative simulation study with other similar routing approaches.

Book ChapterDOI
TL;DR: Semantic Match as mentioned in this paper is an operator that takes two graph-like structures (e.g., conceptual hierarchies or ontologies) and produces a mapping between those nodes of the two graphs that correspond semantically to each other.
Abstract: We think of Match as an operator which takes two graph-like structures (e.g., conceptual hierarchies or ontologies) and produces a mapping between those nodes of the two graphs that correspond semantically to each other. Semantic matching is a novel approach where semantic correspondences are discovered by computing, and returning as a result, the semantic information implicitly or explicitly codified in the labels of nodes and arcs. In this paper we present an algorithm implementing semantic matching, and we discuss its implementation within the S-Match system. We also test S-Match against three state of the art matching systems. The results, though preliminary, look promising, in particular for what concerns precision and recall.

Book ChapterDOI
TL;DR: It is shown that the multi-swarm optimizer significantly outperforms single population PSO on this problem, and that multi-quantum swarms are superior to multi-charged swarms and SOS.
Abstract: Many real-world problems are dynamic, requiring an optimization algorithm which is able to continuously track a changing optimum over time. In this paper, we present new variants of Particle Swarm Optimization (PSO) specifically designed to work well in dynamic environments. The main idea is to extend the single population PSO and Charged Particle Swarm Optimization (CPSO) methods by constructing interacting multi-swarms. In addition, a new algorithmic variant, which broadens the implicit atomic analogy of CPSO to a quantum model, is introduced. The multi-swarm algorithms are tested on a multi-modal dynamic function – the moving peaks benchmark – and results are compared to the single population approach of PSO and CPSO, and to results obtained by a state-of-the-art evolutionary algorithm, namely self-organizing scouts (SOS). We show that our multi-swarm optimizer significantly outperforms single population PSO on this problem, and that multi-quantum swarms are superior to multi-charged swarms and SOS.
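
The "quantum" variant mentioned above replaces the usual velocity update for some particles with direct resampling around the swarm attractor, which maintains a cloud of diversity near each tracked peak. A hedged sketch of that resampling step follows (uniform sampling inside a ball of radius r_cloud around the swarm's best position; the parameter name and uniform distribution are illustrative assumptions).

```python
import numpy as np

def quantum_position(swarm_best, r_cloud, rng=np.random.default_rng()):
    # Sample a point uniformly inside a d-dimensional ball of radius r_cloud
    # centred on the swarm attractor: random direction times a radius drawn
    # with density u**(1/d), which makes the sample uniform in volume.
    d = len(swarm_best)
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    radius = r_cloud * rng.random() ** (1.0 / d)
    return swarm_best + radius * direction

# Example: re-scatter a 5-dimensional quantum particle around the current attractor.
print(quantum_position(np.zeros(5), r_cloud=0.5))
```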

Journal Article
TL;DR: The Strong Diffie-Hellman assumption has been used in this article to construct a short signature scheme which is existentially unforgeable under a chosen message attack without using random oracles.
Abstract: We describe a short signature scheme which is existentially unforgeable under a chosen message attack without using random oracles. The security of our scheme depends on a new complexity assumption we call the Strong Diffie-Hellman assumption. This assumption has similar properties to the Strong RSA assumption, hence the name. Strong RSA was previously used to construct signature schemes without random oracles. However, signatures generated by our scheme are much shorter and simpler than signatures from schemes based on Strong RSA. Furthermore, our scheme provides a limited form of message recovery.
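
The algebraic core of an SDH-based short signature, shown here in its simplest form as a sketch: the scheme actually proposed in the paper is randomized and differs in detail, adding a second secret value and per-signature randomness to reach full chosen-message security.

```latex
% Simplified core construction over a bilinear group with pairing e and generator g
\text{keys: } x \in \mathbb{Z}_p^{*},\;\; v = g^{x}
\qquad
\text{sign } m:\;\; \sigma = g^{1/(x+m)}
\qquad
\text{verify: } e\!\left(\sigma,\; v \cdot g^{m}\right) = e(g, g)
```

A forger producing a valid signature on a fresh message would output a new pair (m, g^{1/(x+m)}), which is exactly the kind of object the Strong Diffie-Hellman assumption says is hard to compute.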

Book ChapterDOI
TL;DR: The tool described here is the first part of a tool set called GROOVE (GRaph-based Object-Oriented VErification) for software model checking of object-oriented systems using graphs to represent state snapshots; transitions arise from the application of graph production rules.
Abstract: The tool described here is the first part of a tool set called GROOVE (GRaph-based Object-Oriented VErification) for software model checking of object-oriented systems. The special feature of GROOVE, which sets it apart from other model checking approaches, is that it is based on graph transformations. It uses graphs to represent state snapshots; transitions arise from the application of graph production rules. This yields so-called Graph Transition Systems (GTSs) as computational models.

Book ChapterDOI
TL;DR: Results show that adaptive random testing does outperform ordinary random testing significantly (by up to as much as 50%) for the set of programs under study, providing evidence that the intuition is likely to be useful in improving the effectiveness of random testing.
Abstract: In this paper, we introduce an enhanced form of random testing called Adaptive Random Testing. Adaptive random testing seeks to distribute test cases more evenly within the input space. It is based on the intuition that for non-point types of failure patterns, an even spread of test cases is more likely to detect failures using fewer test cases than ordinary random testing. Experiments are performed using published programs. Results show that adaptive random testing does outperform ordinary random testing significantly (by up to as much as 50%) for the set of programs under study. These results are very encouraging, providing evidence that our intuition is likely to be useful in improving the effectiveness of random testing.
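
One common way to realize the "even spread" intuition is fixed-size-candidate-set ART: draw a handful of random candidates and execute the one farthest from everything already tested. A minimal sketch for a one-dimensional numeric input domain follows; the distance metric, candidate-set size, and domain are illustrative, and the paper's experiments of course run real programs rather than a toy loop.

```python
import random

def art_next_test(executed, domain=(0.0, 1.0), k=10):
    # Fixed-size-candidate-set ART: among k random candidates, pick the one whose
    # nearest already-executed test case is farthest away (maximin distance).
    candidates = [random.uniform(*domain) for _ in range(k)]
    if not executed:
        return random.choice(candidates)
    return max(candidates, key=lambda c: min(abs(c - e) for e in executed))

executed = []
for _ in range(5):
    t = art_next_test(executed)
    executed.append(t)           # run the program with input t, check the output, ...
print(executed)                  # inputs end up spread out across the domain
```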

Journal Article
TL;DR: In this paper, a rule-based framework for defining and implementing finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time logics and interval logics is presented.
Abstract: We present a rule-based framework for defining and implementing finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time logics, interval logics, forms of quantified temporal logics, and so on. Our logic, EAGLE, is implemented as a Java library and involves novel techniques for rule definition, manipulation and execution. Monitoring is done on a state-by-state basis, without storing the execution trace.

Journal Article
TL;DR: In this paper, the authors show how to construct a tweakable blockcipher by associating to the i-th coordinate of the tweak space an element α_i ∈ F*_{2^n} and multiplying by α_i when one increments that component of the tweak.
Abstract: We describe highly efficient constructions, XE and XEX, that turn a blockcipher E: K × {0,1}^n → {0,1}^n into a tweakable blockcipher Ẽ: K × T × {0,1}^n → {0,1}^n having tweak space T = {0,1}^n × I, where I is a set of tuples of integers such as I = [1..2^{n/2}] × [0..10]. When tweak T is obtained from tweak S by incrementing one of its numerical components, the cost to compute Ẽ_K^T(M), having already computed some Ẽ_K^S(M'), is one blockcipher call plus a small and constant number of elementary machine operations. Our constructions work by associating to the i-th coordinate of I an element α_i ∈ F*_{2^n} and multiplying by α_i when one increments that component of the tweak. We illustrate the use of this approach by refining the authenticated-encryption scheme OCB and the message authentication code PMAC, yielding variants of these algorithms that are simpler and faster than the original schemes, and yet have simpler proofs. Our results bolster the thesis of Liskov, Rivest, and Wagner [10] that a desirable approach for designing modes of operation is to start from a tweakable blockcipher. We elaborate on their idea, suggesting the kind of tweak space, usage-discipline, and blockcipher-based instantiations that give rise to simple and efficient modes.
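
The efficiency claim comes from how the mask is updated: incrementing the integer component of the tweak multiplies the mask by a fixed field element, which for GF(2^128) with the usual reduction polynomial x^128 + x^7 + x^2 + x + 1 is a shift plus a conditional XOR with 0x87. The sketch below shows only that masking discipline with an abstract blockcipher E passed in as a callable; padding, the second integer index, the XE (mask-on-input-only) variant, and the PMAC/OCB integrations are all omitted.

```python
def gf128_double(block16):
    # Multiply a 128-bit value by x in GF(2^128): shift left, reduce with 0x87 on overflow.
    v = int.from_bytes(block16, 'big') << 1
    if v >> 128:
        v = (v & ((1 << 128) - 1)) ^ 0x87
    return v.to_bytes(16, 'big')

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def xex_encrypt_blocks(E, key, nonce_block, blocks):
    # XEX-style masking: the mask starts from E_K(N) and is doubled once per block,
    # so moving from tweak (N, i) to (N, i+1) costs one doubling, not a blockcipher call.
    delta = E(key, nonce_block)
    out = []
    for m in blocks:
        delta = gf128_double(delta)
        out.append(xor(E(key, xor(m, delta)), delta))   # C_i = E_K(M_i ^ delta) ^ delta
    return out
```

Any 128-bit blockcipher (for example an AES implementation) can be supplied as E; the point of the sketch is only the mask-update discipline, not a usable encryption mode.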

Book ChapterDOI
TL;DR: An introduction to autonomic computing is presented; its overarching goal is to realize computer and software systems and applications that can manage themselves in accordance with high-level guidance from humans.
Abstract: The increasing scale, complexity, heterogeneity and dynamism of networks, systems and applications have made our computational and information infrastructure brittle, unmanageable and insecure. This has necessitated the investigation of an alternate paradigm for system and application design, which is based on strategies used by biological systems to deal with similar challenges – a vision that has been referred to as autonomic computing. The overarching goal of autonomic computing is to realize computer and software systems and applications that can manage themselves in accordance with high-level guidance from humans. Meeting the grand challenges of autonomic computing requires scientific and technological advances in a wide variety of fields, as well as new software and system architectures that support the effective integration of the constituent technologies. This paper presents an introduction to autonomic computing, its challenges, and opportunities.

Journal Article
TL;DR: In this paper, two new mechanisms were added to SPEA2 to improve its searching ability: a more effective crossover mechanism and an archive mechanism to maintain diversity of the solutions in the objective and variable spaces.
Abstract: Multi-objective optimization methods are essential to resolve real-world problems, as most involve several types of objectives. Several multi-objective genetic algorithms have been proposed. Among them, SPEA2 and NSGA-II are the most successful. In the present study, two new mechanisms were added to SPEA2 to improve its searching ability: a more effective crossover mechanism and an archive mechanism to maintain diversity of the solutions in the objective and variable spaces. The new SPEA2 with these two mechanisms was named SPEA2+. To clarify the characteristics and effectiveness of the proposed method, SPEA2+ was applied to several test functions. In the comparison of SPEA2+ with SPEA2 and NSGA-II, SPEA2+ showed good results and the effects of the new mechanisms were clarified. From these results, it was concluded that SPEA2+ is a good algorithm for multi-objective optimization problems.

Journal Article
TL;DR: In this paper, Dinur and Nissim considered a statistical database in which a trusted database administrator monitors queries and introduces noise to the responses with the goal of maintaining data privacy, and they proved that unless the total number of queries is sublinear in the size of the database, a substantial amount of noise is required to avoid a breach, rendering the database almost useless.
Abstract: In a recent paper Dinur and Nissim considered a statistical database in which a trusted database administrator monitors queries and introduces noise to the responses with the goal of maintaining data privacy [5]. Under a rigorous definition of breach of privacy, Dinur and Nissim proved that unless the total number of queries is sub-linear in the size of the database, a substantial amount of noise is required to avoid a breach, rendering the database almost useless. As databases grow increasingly large, the possibility of being able to query only a sub-linear number of times becomes realistic. We further investigate this situation, generalizing the previous work in two important directions: multi-attribute databases (previous work dealt only with single-attribute databases) and vertically partitioned databases, in which different subsets of attributes are stored in different databases. In addition, we show how to use our techniques for datamining on published noisy statistics.
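
The setting, in code form: a curator answers subset-sum queries over binary attributes but perturbs each answer before releasing it. The toy sketch below is only an illustration of that interface; the noise distribution, magnitude, and class/attribute names are assumptions, and the paper's contribution is precisely the analysis of how much perturbation is needed and how it extends to multi-attribute and vertically partitioned databases.

```python
import random

class NoisyStatDB:
    # rows: list of dicts of binary attributes, e.g. {"smoker": 1, "over40": 0}.
    def __init__(self, rows, noise_magnitude):
        self.rows = rows
        self.noise = noise_magnitude

    def query(self, subset_indices, attribute):
        # True subset sum plus a bounded random perturbation.
        true_sum = sum(self.rows[i][attribute] for i in subset_indices)
        return true_sum + random.uniform(-self.noise, self.noise)

db = NoisyStatDB([{"smoker": random.randint(0, 1)} for _ in range(1000)], noise_magnitude=30)
print(db.query(range(0, 500), "smoker"))   # perturbed count of smokers among rows 0..499
```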

Book ChapterDOI
TL;DR: The traditional view on software architecture suffers from a number of key problems that cannot be solved without changing the authors' perspective on the notion of software architecture.
Abstract: This position paper makes the following claims that, in our opinion, are worthwhile to discuss at the workshop. 1) The first phase of software architecture research, where the key concepts are components and connectors, has matured the technology to a level where industry adoption is wide-spread and few fundamental issues remain. 2) The traditional view on software architecture suffers from a number of key problems that cannot be solved without changing our perspective on the notion of software architecture. These problems include the lack of a first-class representation of design decisions and the fact that these design decisions are cross-cutting and intertwined; they lead to high maintenance cost, because of which design rules and constraints are easily violated and obsolete design decisions are not removed. 3) As a community, we need to take the next step and adopt the perspective that a software architecture is, fundamentally, a composition of architectural design decisions. These design decisions should be represented as first-class entities in the software architecture and it should, at least before system deployment, be possible to add, remove and change architectural design decisions against limited effort.

Book ChapterDOI
Yiyu Yao1
TL;DR: This chapter examines the basic principles and issues of granular computing from semantic and algorithmic perspectives, as well as principles and operations of computing and reasoning with granules in a set-theoretic setting.
Abstract: There are two objectives of this chapter. One objective is to examine the basic principles and issues of granular computing. We focus on the tasks of granulation and computing with granules. From semantic and algorithmic perspectives, we study the construction, interpretation, and representation of granules, as well as principles and operations of computing and reasoning with granules. The other objective is to study a partition model of granular computing in a set-theoretic setting. The model is based on the assumption that a finite universe is granulated through a family of pairwise disjoint subsets. A hierarchy of granulations is modeled by the notion of the partition lattice. The model is developed by combining, reformulating, and reinterpreting notions and results from several related fields, including theories of granularity, abstraction and generalization (artificial intelligence), partition models of databases, coarsening and refining operations (evidential theory), set approximations (rough set theory), and the quotient space theory for problem solving.
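
A small sketch of the set-theoretic objects involved: partitions as families of disjoint blocks, the refinement order that organizes them into a lattice, and the meet (coarsest common refinement) that combines two granulations. The code is illustrative and not taken from the chapter.

```python
def refines(p, q):
    # Partition p refines partition q if every block of p lies inside some block of q.
    return all(any(block <= other for other in q) for block in p)

def meet(p, q):
    # Coarsest common refinement: non-empty pairwise intersections of blocks.
    return [a & b for a in p for b in q if a & b]

# Two granulations of the universe {1, ..., 6}.
p = [{1, 2}, {3, 4}, {5, 6}]
q = [{1, 2, 3}, {4, 5, 6}]
print(refines(p, q))   # False: {3, 4} straddles two blocks of q
print(meet(p, q))      # [{1, 2}, {3}, {4}, {5, 6}]
```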