
Showing papers in "Journal of the Brazilian Computer Society in 2004"



Journal ArticleDOI
TL;DR: Cartographic guidelines developed from this research suggest a combination of both masking strategies, and future research should focus on the refinement and further testing of these, and other alternative masking methods.
Abstract: This research proposes cartographic guidelines for presenting confidential point data on maps. Such guidelines do not currently exist, but are important for governmental agencies that disseminate personal data to the public because these agencies have to balance the citizens’ right to know against each citizen’s right to privacy. In an experiment, participants compared an original point pattern of confidential crime locations with the same point pattern being geographically masked. Ten different masking methods were tested. The objective was to identify appropriate geographic masking methods that preserve both the confidentiality of individual locations and the essential visual characteristics of the original point pattern. The empirical testing reported here is a novel approach for identifying various map design principles that would be useful for representing confidential point data on a map. The results of this research show that only two of the ten masking methods that were tested yield satisfactory solutions. The two masking methods include aggregating point locations at either (1) the midpoint of the street segment or (2) the closest street intersection. The cartographic guidelines developed from this research suggest a combination of both masking strategies. Future research should focus on the refinement and further testing of these, and other alternative, masking methods.
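The two recommended masking strategies amount to snapping each confidential point to a nearby aggregation target before the map is made. A minimal sketch of masking by aggregation to the closest street intersection (the coordinates and helper names below are hypothetical illustrations, not the procedure used in the study):

import math

# Hypothetical street intersections (x, y) and confidential incident locations.
intersections = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
incidents = [(12.3, 7.9), (88.1, 95.4), (49.7, 51.2)]

def mask_to_nearest_intersection(point, nodes):
    """Replace a confidential point with the closest street intersection."""
    return min(nodes, key=lambda node: math.dist(point, node))

masked = [mask_to_nearest_intersection(p, intersections) for p in incidents]
print(masked)  # each original location is aggregated to an intersection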

49 citations


Journal ArticleDOI
TL;DR: This paper examines natural-color maps by focusing on the painted map art of Hal Shelton, the person most closely associated with developing the genre during the mid twentieth century, and introduces techniques for designing and producing natural- colors that are economical and within the skill range of most cartographers.
Abstract: This paper examines natural-color maps by focusing on the painted map art of Hal Shelton, the person most closely associated with developing the genre during the mid twentieth century. Advocating greater use of natural-color maps by contemporary cartographers, we discuss the advantages of natural-color maps compared to physical maps made with hypsometric tints; why natural-color maps, although admired, have remained comparatively rare; and the inadequacies of using satellite images as substitutes for natural-color maps. Seeking digital solutions, the paper then introduces techniques for designing and producing natural-color maps that are economical and within the skill range of most cartographers. The techniques, which use Adobe Photoshop software and satellite land cover data, yield maps similar in appearance to those made by Shelton, but with improved digital accuracy. Full-color illustrations show examples of Shelton’s maps and those produced by digital techniques.

36 citations


Journal ArticleDOI
TL;DR: Effective ways of teaching students and professionals how to develop high-quality software following the principles of agile software development are presented.
Abstract: Agile Methods propose a new way of looking at software development that questions many of the beliefs of conventional Software Engineering. Agile methods such as Extreme Programming (XP) have been very effective in producing high-quality software in real-world projects with strict time constraints. Nevertheless, most university courses and industrial training programs are still based on old-style heavyweight methods. This article, based on our experiences teaching XP in academic and industrial environments, presents effective ways of teaching students and professionals how to develop high-quality software following the principles of agile software development. We also discuss related work in the area, describe real-world cases, and discuss open problems not yet resolved.

36 citations


Journal ArticleDOI
TL;DR: A review of The Man Who Flattened the Earth: Maupertuis and the Sciences in the Enlightenment.
Abstract: A review of The Man Who Flattened the Earth: Maupertuis and the Sciences in the Enlightenment, a study of Pierre-Louis de Maupertuis and his place in Enlightenment science.

30 citations


Journal ArticleDOI
TL;DR: This paper presents an alternative form for the classic Douglas-Peucker to produce a simplified polyline which is homeomorphic to the original one.
Abstract: The classic Douglas-Peucker line-simplification algorithm is recognized as the one that delivers the best perceptual representations of the original lines. It may, however, produce a simplified polyline that is not topologically equivalent to the original one consisting of all vertex samples. On the basis of properties of the polyline hulls, Saalfeld devised a simple rule for detecting topological inconsistencies and proposed to solve them by carrying out additional refinements. In this paper, we present an alternative form of the classic Douglas-Peucker algorithm that produces a simplified polyline which is homeomorphic to the original one. Our modified Douglas-Peucker algorithm is based on two propositions: (1) when an original polyline is star-shaped, its simplification from the Douglas-Peucker procedure cannot self-intersect; and (2) for any polyline, two of its star-shaped sub-polylines may only intersect if there is a vertex of one simplified sub-polyline inside the other's corresponding region.
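For reference, a minimal sketch of the classic Douglas-Peucker procedure that the modified algorithm builds on (the topology-preserving refinements proposed in the paper are not included here; the tolerance and sample polyline are illustrative):

import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Classic recursive simplification: keep the endpoints and split at the
    vertex farthest from the chord whenever it exceeds the tolerance."""
    if len(points) < 3:
        return list(points)
    first, last = points[0], points[-1]
    index, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], first, last)
        if d > dmax:
            index, dmax = i, d
    if dmax <= tolerance:
        return [first, last]
    left = douglas_peucker(points[:index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right  # merge without duplicating the split vertex

polyline = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(polyline, 1.0))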

28 citations


Journal ArticleDOI
TL;DR: This article describes and defines Hawaiian cartography, identifies the internal struggles an academic Indigenous Hawaiian cartographer shares with other Indigenous scholars attempting to negotiate different epistemologies, and presents three autoethnographic Hawaiian cartographic projects that are necessary steps in resolving the differences between Western and Indigenous epistemology.
Abstract: Maps, and the ability to spatially organize the place we live, are basic necessities of human survival and may very well be “one of the oldest forms of human communication”. Whether they are derived from scientific or mythological impetus, maps do the same thing – they tell stories of the relationships between people and their places of importance. Every map is a blending of experience, theoretical concepts, and technical craftsmanship; “constructions of reality”; representations of the environment as seen by the societies that create them. The way people experience their environment and express their relationship with it is directly linked to their epistemology, which in turn indicates how knowledge is processed and used. Indigenous and Western science share many similar characteristics, yet are distinctly different in ways that affect how geographical information is communicated. Hawaiian cartography is an “incorporating culture” that privileges processes such as mo‘olelo (stories), oli (chant), ‘olelo no‘eau (proverbs), hula (dance), mele (song) and mo‘o ku ‘auhau (genealogy). This article describes and defines Hawaiian cartography, identifies the internal struggles an academic Indigenous Hawaiian cartographer shares with other Indigenous scholars attempting to negotiate different epistemologies, and presents three autoethnographic Hawaiian cartographic projects that are necessary steps in resolving the differences between Western and Indigenous epistemologies.

26 citations


Journal ArticleDOI
TL;DR: A new method for surface reconstruction and smoothing based on unorganized noisy point clouds without normals is described, which produces a refined triangular mesh that approximates the original point cloud while preserving the salient features of the underlying surface.
Abstract: We describe a new method for surface reconstruction and smoothing based on unorganized noisy point clouds without normals. The output of the method is a refined triangular mesh that approximates the original point cloud while preserving the salient features of the underlying surface. The method has five steps: noise removal, clustering, data reduction, initial reconstruction, and mesh refinement. We also present theoretical justifications for the heuristics used in the reconstruction step.
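As an illustration of the kind of processing involved in the first (noise removal) stage, here is a minimal sketch of a statistical outlier filter on an unorganized point cloud; this is a common heuristic chosen for illustration, not necessarily the specific method used in the paper, and the sample cloud is hypothetical:

import math

def remove_outliers(points, k=8, std_factor=2.0):
    """Drop points whose mean distance to their k nearest neighbours is far
    above the average for the whole cloud (simple statistical-outlier filter)."""
    def knn_mean_dist(p):
        dists = sorted(math.dist(p, q) for q in points if q is not p)
        return sum(dists[:k]) / k
    means = [knn_mean_dist(p) for p in points]
    mu = sum(means) / len(means)
    sigma = math.sqrt(sum((m - mu) ** 2 for m in means) / len(means))
    return [p for p, m in zip(points, means) if m <= mu + std_factor * sigma]

# A small planar patch of 3-D samples plus one stray (noisy) point.
cloud = [(x * 0.1, y * 0.1, 0.0) for x in range(10) for y in range(10)] + [(0.5, 0.5, 4.0)]
print(len(cloud), "->", len(remove_outliers(cloud)))  # the stray point is removed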

24 citations


Journal ArticleDOI
TL;DR: A quantitative study that compares aspect-based and OO solutions for a representative set of design patterns found that most aspect-oriented solutions improve separation of pattern-related concerns, although some aspect-oriented implementations of specific patterns resulted in higher coupling and more lines of code.
Abstract: Design patterns offer flexible solutions to common problems in software development. Recent studies have shown that several design patterns involve crosscutting concerns. Unfortunately, object-oriented (OO) abstractions are often unable to modularize those crosscutting concerns, which in turn decreases system reusability and maintainability. Hence, it is important to verify whether aspect-oriented approaches support improved modularization of the crosscutting concerns in design patterns. Ideally, quantitative studies should be performed to compare object-oriented and aspect-oriented implementations of classical patterns with respect to important software engineering attributes, such as coupling and cohesion. This paper presents a quantitative study that compares aspect-based and OO solutions for a representative set of design patterns. We have used stringent software engineering attributes as the assessment criteria. We have found that most aspect-oriented solutions improve separation of pattern-related concerns, although some aspect-oriented implementations of specific patterns resulted in higher coupling and more lines of code.

20 citations


Journal ArticleDOI
TL;DR: This work introduces a new system-level diagnosis model and an algorithm based on this model: Hi-Comp (Hierarchical Comparison-based Adaptive Distributed System-Level Diagnosis algorithm), the first diagnosis algorithm that is, at the same time, hierarchical, distributed and comparison-based.
Abstract: This work introduces a new system-level diagnosis model and an algorithm based on this model: Hi-Comp (Hierarchical Comparison-based Adaptive Distributed System-Level Diagnosis algorithm). This algorithm allows the diagnosis of systems that can be represented by a complete graph. Hi-Comp is the first diagnosis algorithm that is, at the same time, hierarchical, distributed and comparison-based. The algorithm is not limited to crash fault diagnosis, because its tests are based on comparisons. To perform a test, a processor sends a task to two processors of the system that, after executing the task, send their outputs back to the tester. The tester compares the two outputs; if the comparison produces a match, the tester considers the tested processors fault-free; on the other hand, if the comparison produces a mismatch, the tester considers that at least one of the two tested processors is faulty, but cannot determine which one. Considering a system of N nodes, it is proved that the algorithm's diagnosability is (N-1) and the latency is log₂ N testing rounds. Furthermore, a formal proof of the maximum number of tests required per testing round is presented, which can be O(N³). Simulation results are also presented.
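A minimal sketch of the comparison-based test described above (the task, the node behaviour, and the fault set are hypothetical; the actual algorithm organizes these tests hierarchically and adaptively across the system):

import random

def run_task(node_id, faulty_nodes, task_input):
    """A tested node executes the task; a faulty node returns a corrupted output."""
    result = sum(task_input)                 # the task: a deterministic computation
    if node_id in faulty_nodes:
        result += random.randint(1, 100)     # corrupted output from a faulty node
    return result

def comparison_test(tester, a, b, faulty_nodes, task_input=(1, 2, 3)):
    """The tester sends the same task to nodes a and b and compares the outputs:
    a match means both are considered fault-free; a mismatch means at least one
    of them is faulty, although the tester cannot tell which."""
    out_a = run_task(a, faulty_nodes, task_input)
    out_b = run_task(b, faulty_nodes, task_input)
    return "match: a and b fault-free" if out_a == out_b else "mismatch: a or b faulty"

faulty = {3}
print(comparison_test(tester=0, a=1, b=2, faulty_nodes=faulty))  # match
print(comparison_test(tester=0, a=2, b=3, faulty_nodes=faulty))  # mismatch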

19 citations


Journal ArticleDOI
TL;DR: This paper presents a comprehensive approach to describe, deploy and adapt component-based applications with dynamic non-functional requirements, centered on high-level contracts associated with architectural descriptions, which allow the non-functional requirements to be handled separately during the system design process.
Abstract: This paper presents a comprehensive approach to describe, deploy and adapt component-based applications with dynamic non-functional requirements. The approach is centered on high-level contracts associated with architectural descriptions, which allow the non-functional requirements to be handled separately during the system design process. This helps to achieve separation of concerns, facilitating the reuse of the modules that implement the application in other systems. Besides specifying non-functional requirements, contracts are used at runtime to guide the configuration adaptations required to enforce these requirements. The infrastructure required to manage the contracts follows an architectural pattern, which can be directly mapped to specific components included in a supporting reflective middleware. This allows designers to write a contract and follow standard recipes to insert the extra code required for its enforcement into the supporting middleware.

Journal ArticleDOI
TL;DR: It is argued that there may be important benefits to the users if designers communicate their design vision, and the need to change current design practices to encourage creative and intelligent use of computer applications is pointed out.
Abstract: Empirical studies have revealed that most users are dissatisfied with current help systems. One of the causes for many of the problems found in help systems is a lack of coupling between the process for creating online help and the human-computer interaction design of the application. In this paper, we present a method for building online help based on design models according to a Semiotic Engineering approach. We argue that there may be important benefits to the users if designers communicate their design vision, and we also point to the need to change current design practices to encourage creative and intelligent use of computer applications. We show how this proposal opens a direct communication channel from designers to users, and we hope this will contribute to introducing this new culture.

Journal ArticleDOI
TL;DR: This paper proposes an approach for the construction of dependable component-based systems that integrates two complementary strategies: (i) a global exception handling strategy for inter-component composition and (ii) a local exception Handling strategy for dealing with errors in reusable components.
Abstract: Component-based development (CBD) is recognized today as the standard paradigm for structuring large software systems. However, the most popular component models and component-based development processes provide little guidance on how to systematically incorporate exception handling into component-based systems. The problem of how to employ language-level exception handling mechanisms to introduce redundancy into component-based systems is recognized by CBD practitioners as very difficult and often not adequately solved. As a consequence, the implementation of the redundant exceptional behaviour has a negative impact, instead of a positive one, on the system and its maintainability. In this paper, we propose an approach for the construction of dependable component-based systems that integrates two complementary strategies: (i) a global exception handling strategy for inter-component composition and (ii) a local exception handling strategy for dealing with errors in reusable components. A case study illustrates the application of our approach to a real software system.
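A minimal sketch of how the two strategies can be combined in code (the component, store, and exception names are hypothetical; the paper's architecture-level treatment is richer than this):

class ComponentError(Exception):
    """Exception type raised at component boundaries (inter-component level)."""

class ReusableComponent:
    def fetch(self, key):
        try:
            return self._read_from_store(key)       # internal operation that may fail
        except KeyError:
            return None                             # local handling: mask a recoverable error
        except OSError as err:
            # local handler cannot recover: translate into a boundary exception
            raise ComponentError(f"fetch({key!r}) failed") from err

    def _read_from_store(self, key):
        raise OSError("storage unavailable")        # simulated internal failure

def client():
    component = ReusableComponent()
    try:
        return component.fetch("order-42")
    except ComponentError as err:
        # global strategy: the composition decides how to react (retry, degrade, abort)
        print("degraded mode:", err)
        return None

client()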

Journal ArticleDOI
TL;DR: This paper focuses on defining how to reason about and how to refine NFRs during software development, based on software architecture principles that guide the definition of the proposed refinement rules.
Abstract: Non-functional requirements (NFRs) are rarely taken into account in most software development processes. There are several reasons that help us understand why these requirements are not explicitly dealt with: their complexity, the fact that NFRs are usually stated only informally, their high abstraction level, and the scarce support offered by languages, methodologies and tools. In this paper, we concentrate on defining how to reason about and how to refine NFRs during software development. Our approach is based on software architecture principles that guide the definition of the proposed refinement rules. In order to illustrate our approach, we apply it to an appointment system.

Journal ArticleDOI
TL;DR: A methodology for the formal design of Interactive Multimedia Documents based on the formal description technique RT-LOTOS is referred to, which provides the formal semantics for the dynamic behaviour of the document, consistency checking, and the scheduling of the presentation taking into account the temporal non-determinism of these documents.
Abstract: The flexibility of high level authoring models (such as SMIL 2.0) for the edition of complex Interactive Multimedia Documents can lead authors, in certain cases, to specify synchronization relations which could not be satisfied during the presentation of the document, thus characterizing the occurrence of temporal inconsistencies. For this reason, we need to apply a methodology which provides the formal semantics for the dynamic behaviour of the document, consistency checking, and the scheduling of the presentation taking into account the temporal non-determinism of these documents. This paper refers to a methodology for the formal design of Interactive Multimedia Documents based on the formal description technique RT-LOTOS. In particular, this paper presents an approach applied by our methodology for the automatic translation of SMIL 2.0 documents into RT-LOTOS specifications.

Journal ArticleDOI
TL;DR: The results have shown that the proposed process is as agile as XP; moreover, surveys conducted as part of the experiment pointed out that XWebProcess is more suitable for Web development in dimensions such as requirements gathering, user interface and navigation design, and software testing, therefore leading to better-quality software.
Abstract: Agile software processes emerged to address the issue of building software on time and within the planned budget. To adopt an agile process, it is imperative to analyze and evaluate its effectiveness in supporting high-quality software development while complying with stringent time constraints. In this paper we describe an agile method for Web-based application development (XWebProcess) and an experiment conducted with a group of forty senior undergraduate students to assess the quality/speed effectiveness of the proposed method vis-a-vis the effectiveness of Extreme Programming (XP). The results have shown that the proposed process is as agile as XP; moreover, surveys conducted as part of the experiment pointed out that XWebProcess is more suitable for Web development in dimensions such as requirements gathering, user interface and navigation design, and software testing, therefore leading to better-quality software.

Journal ArticleDOI
TL;DR: This paper addresses the efficient, model-based dependability evaluation of hierarchical control and resource management systems and derives a modeling methodology that not only builds models in a compositional way but also includes capabilities to reduce their solution complexity.
Abstract: Current and future computerized systems and infrastructures are going to be based on the layering of different systems, designed at different times, with different technologies and components, and difficult to integrate. Control systems and resource management systems are increasingly employed in such large and heterogeneous environments as a parallel infrastructure to allow an efficient, dependable and scalable usage of the system components. System complexity turns out to be a paramount challenge from a number of different viewpoints, including dependability modeling and evaluation. Key directions for dealing with system complexity are abstraction and hierarchical structuring of the system functionalities. This paper addresses the efficient dependability evaluation, by a model-based approach, of hierarchical control and resource management systems. We exploited the characteristics of this specific, but important, class of systems and derived a modeling methodology that is not only directed at building models in a compositional way, but also includes capabilities to reduce their solution complexity. The modeling methodology and the resolution technique are then applied to a case study consisting of a resource management system developed in the context of the ongoing European project CAUTION++. The results obtained are useful for understanding the impact of several system component factors on the dependability of the overall system instance.

Journal ArticleDOI
TL;DR: Evidence indicates that Civil War topographers mostly performed the tasks one would expect of them: mapmaking, reconnaissance, and orienteering, and were occasionally required to perform other duties tailored to their individual talents.
Abstract: This study advances knowledge concerning military topographical engineering in the Shenandoah Valley of Virginia during 1861 and 1862 operations. It examines representative historical maps, Union and Confederate official reports, the wartime journals of James W. Abert, Jedediah Hotchkiss, and David Hunter Strother, and a detailed postwar reminiscence by Thomas H. Williamson to illuminate the typical experience of the topographical engineer in early war operations in the Shenandoah. Evidence indicates that Civil War topographers mostly performed the tasks one would expect of them: mapmaking, reconnaissance, and orienteering. They were occasionally required to perform other duties tailored to their individual talents. There is evidence that the role of Confederate topographical engineers was more specific than that of Union officers.


Journal ArticleDOI
TL;DR: The study reveals that the Lakota created maps and utilized other cartographic tools that, while not following a western system of coordinates, grids, and scales, were nonetheless accurate instruments for navigation to important routes, landmarks, hunting grounds, and sacred sites.
Abstract: This article serves as an introduction to traditional cartographic tools and techniques of the Lakota Sioux people of the northern Great Plains. The study reveals that the Lakota created maps and utilized other cartographic tools that, while not following a western system of coordinates, grids, and scales, were nonetheless accurate instruments for navigation to important routes, landmarks, hunting grounds, and sacred sites. The tools and techniques utilized included oral transmission of cartographic data, stories and songs in the oral tradition, stellar cartography, hide maps, petroglyphs, earth scratchings, and various other physical and spiritual markers.

Journal ArticleDOI
TL;DR: A systematic examination of maps that appeared in the print media in the period immediately following September 11 suggests that in addition to illustrating the attacks and the subsequent events, maps cast their own narratives of these events.
Abstract: The attacks of September 11, 2001 on the World Trade Center and the Pentagon were unprecedented in scope if not in their fundamental nature. While the United States moved toward resurrection of Reagan’s Strategic Defense Initiative, known popularly as “Star Wars”, and focused its resources on sophisticated weaponry, terrorists with primitive weapons turned commercial aircraft into guided missiles. The suddenness and enormity of the events, coupled with the fact that so many people were acquainted with victims of the attacks, created a sense of concern and confusion that was more pervasive and ubiquitous than that evoked by either the 1993 bombing of the Trade Center or the 1995 attack on the Murrah Federal Building. In the immediate aftermath, the events of September 11 attracted the sympathies of the entire country, evoked both an outpouring of patriotism and a rhetoric of retribution, and temporarily redefined task saliencies (Wright, 1978) as firefighters and law enforcement officers became heroes of the moment. The media also assumed a heightened level of importance as people turned to television, the Internet, and print for information and for insight and meaning. On September 11, the New York Times recorded over 21 million page views on their site, more than twice the previous record, and a six-month circulation audit by the Times following September 11 showed daily gains of approximately 42,000 newspapers (Robinson, 2002). Since the number of maps appearing in the media has grown rapidly with the advent of desktop computing and electronic publishing technologies (Monmonier, 1989; 2001), it is not surprising that much of the story of September 11 has been illustrated with maps. At the very least, these maps offer distinctive insights that help define both the events and the public reaction, but a paradigm shift that emphasizes their textual nature suggests that in addition to illustrating the attacks and the subsequent events, maps cast their own narratives of these events. Our purpose here is to explore these narratives through a systematic examination of maps that appeared in the print media in the period immediately following September 11.


Journal ArticleDOI
TL;DR: Plan B is a new operating system that attempts to allow applications and their programmers to select and use whatever resources are available without forcing them to deal with the problems created by dynamic, distributed and heterogeneous environments.
Abstract: Nowadays computing environments are made of heterogeneous networked resources, but unlike environments used a decade ago, the current environments are highly dynamic. During a computing session, new resources are likely to appear and some are likely to go offline or to move to some other place. The operating system is supposed to hide most of the complexity of such environments and make it easy to write applications using them. However, that is not the case with our current operating systems. Plan B is a new operating system that attempts to allow applications and their programmers to select and use whatever resources are available without forcing them to deal with the problems created by their dynamic, distributed and heterogeneous environments. It does so by using constraints along with a new abstraction that replaces the traditional file abstraction.

Journal ArticleDOI
TL;DR: This article presents and evaluates a parallelization strategy for implementing a sequence alignment algorithm for long sequences in JIAJIA, a scope consistent software DSM system, and shows good speedups, indicating that the strategy and programming support were appropriate.
Abstract: Distributed Shared Memory systems allow the use of the shared memory programming paradigm in distributed architectures where no physically shared memory exists. Scope consistent software DSMs provide a relaxed memory model that reduces the coherence overhead by ensuring consistency only at synchronization operations, on a per-lock basis. Much of the work in DSM systems is validated by benchmarks, and there are only a few examples of real parallel applications running on DSM systems. Sequence comparison is a basic operation in DNA sequencing projects, and most of the sequence comparison methods used are based on heuristics, which are faster but do not produce optimal alignments. Recently, many organisms have had their DNA entirely sequenced, and this reality presents the need for comparing long DNA sequences, which is a challenging task due to its high demands for computational power and memory. In this article, we present and evaluate a parallelization strategy for implementing a sequence alignment algorithm for long sequences. This strategy was implemented in JIAJIA, a scope consistent software DSM system. Our results on an eight-machine cluster showed good speedups, indicating that our parallelization strategy and programming support were appropriate.
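The core computation behind such an alignment is a dynamic-programming score matrix in which every cell depends only on its left, upper and upper-left neighbours, which is what allows anti-diagonals (or blocks of them) to be computed in parallel. A minimal sequential sketch of that recurrence, with illustrative scoring values; this is not the JIAJIA/DSM implementation itself:

def similarity_matrix(s, t, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch-style global alignment score matrix: cell (i, j) depends
    on (i-1, j), (i, j-1) and (i-1, j-1) only."""
    rows, cols = len(s) + 1, len(t) + 1
    a = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        a[i][0] = i * gap
    for j in range(1, cols):
        a[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = a[i - 1][j - 1] + (match if s[i - 1] == t[j - 1] else mismatch)
            a[i][j] = max(diag, a[i - 1][j] + gap, a[i][j - 1] + gap)
    return a

m = similarity_matrix("GATTACA", "GCATGCU")
print(m[-1][-1])  # optimal global alignment score for the two sequences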

Journal ArticleDOI
TL;DR: This paper presents the application of a formal testing methodology to protocols and services for wireless telephony networks that permits performing conformance and interoperability testing, detecting different kinds of implementation faults, such as output and transmission faults.
Abstract: This paper presents the application of a formal testing methodology to protocols and services for wireless telephony networks. The methodology provides a complete and integrated coverage of all phases of the testing procedure: specification, test generation, and test execution on a given architecture. It permits performing conformance and interoperability testing, detecting different kinds of implementation faults, such as output and transmission faults. The test execution is performed in the framework of a set of architectures capable of dealing with different environments. Telecommunication systems and mobility are the main focus of the application presented in this paper. Two case studies illustrate the application of the methodology to a wireless telephone network: conformance and interoperability testing of Wireless Application Protocol (WAP) protocols and services based on subscriber location.


Journal ArticleDOI
TL;DR: This paper presents two new algorithms specially designed to answer conjunctive and disjunctive operations involving the basic similarity criteria, also providing support for the manipulation of tie lists when the k-Nearest Neighbor query is involved.
Abstract: Search operations in large sets of complex objects usually rely on similarity-based criteria, due to the lack of other general properties that could be used to compare the objects, such as the total order relationship, or even the equality relationship between pairs of objects, commonly used with data in numeric or short text domains. Therefore, similarity between objects is the core criterion to compare complex objects. There are two basic operators for similarity queries: the Range Query and the k-Nearest Neighbors Query. Much research has been done to develop effective algorithms to implement them as standalone operations. However, algorithms to support these operators as parts of more complex expressions involving their composition had not yet been developed. This paper presents two new algorithms specially designed to answer conjunctive and disjunctive operations involving the basic similarity criteria, also providing support for the manipulation of tie lists when the k-Nearest Neighbor query is involved. The new proposed algorithms were compared with combinations of the basic algorithms, both in sequential scan and in the Slim-tree metric access method, measuring the number of disk accesses, the number of distance calculations, and wall-clock time. The experimental results show that the new algorithms perform better than the composition of the two basic operators when answering complex similarity queries in all measured aspects, being up to 40 times faster than the composition of the basic algorithms. This is an essential point to enable the practical use of similarity operators in Relational Database Management Systems.
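A minimal sequential-scan sketch of the two basic operators and a naive conjunctive composition (the data set and distance function are hypothetical; the paper's algorithms answer such compositions in a single pass and also work inside metric access methods such as the Slim-tree):

def range_query(data, center, radius, dist):
    """All objects within the given distance of the query center."""
    return [o for o in data if dist(o, center) <= radius]

def knn_query(data, center, k, dist):
    """The k objects closest to the query center; objects tied with the k-th
    distance are also kept, producing the 'tie list' mentioned above."""
    ranked = sorted(data, key=lambda o: dist(o, center))
    if len(ranked) <= k:
        return ranked
    cutoff = dist(ranked[k - 1], center)
    return [o for o in ranked if dist(o, center) <= cutoff]

def conjunctive_query(data, center, radius, k, dist):
    """Naive composition: objects satisfying BOTH the range and the k-NN predicate."""
    in_range = set(range_query(data, center, radius, dist))
    return [o for o in knn_query(data, center, k, dist) if o in in_range]

distance = lambda a, b: abs(a - b)          # 1-D toy metric
objects = [1, 3, 4, 7, 8, 10, 15]
print(conjunctive_query(objects, center=6, radius=3, k=2, dist=distance))  # [7, 4, 8]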

Journal ArticleDOI
TL;DR: The overall architecture of the Meta-ORB platform is described, which demonstrates this approach, and its two implementations are presented: a proof-of-concept prototype written in Python, and a Java-based implementation aimed at supporting mobile devices.
Abstract: Reflection is now an established technique for achieving dynamic adaptability of middleware platforms. It provides a clean and comprehensive way to access the internals of a platform implementation, allowing its customisation in order to achieve the best performance and adequacy under given operation environments and user requirements. In addition, the use of a runtime component model for the design of the internal platform structure facilitates the identification of the elements to be adapted, as all platform aspects are built in terms of components. The major limitation of this approach, however, is related to the multitude of aspects that make up a middleware platform, together with the requirement of keeping platform consistency after adaptations take place. This paper presents the results of ongoing research contributing to reduce this limitation. The approach is based on the use of a common meta-model, together with meta-information techniques to provide a uniform way to specify and manipulate platform configurations. Both platform configuration and runtime adaptation are always specified using a small number of building blocks defined in the meta-model. The paper also describes the overall architecture of the Meta-ORB platform, which demonstrates this approach, and presents its two implementations: a proof-of-concept prototype written in Python, and a Java-based implementation aimed at supporting mobile devices. The results are also evaluated from a quantitative perspective, according to the requirements of multimedia applications, one of the major areas of application of reflective middleware.

Journal ArticleDOI
TL;DR: This paper provides efficient and robust implementations of slowness oracles based on techniques that have been previously used to implement adaptive failure detection oracles, and shows that by using a slowness oracle that is well matched with a failure detection oracle, one can achieve performance as much as 53.5% better than the alternative that does not use a slowness oracle.
Abstract: Due to their fundamental role in the design of fault-tolerant distributed systems, consensus protocols have been widely studied. Most of the research in this area has focused on providing ways for circumventing the impossibility of reaching consensus on a purely asynchronous system subject to failures. Of particular interest are the indulgent consensus protocols based upon weak failure detection oracles. Following the first works, which were more concerned with the correctness of such protocols, performance issues related to them are now a topic that has gained considerable attention. In particular, a few studies have been conducted to analyze the impact that the quality of service of the underlying failure detection oracle has on the performance of consensus protocols. To achieve better performance, adaptive failure detectors have been proposed. Also, slowness oracles have been proposed to allow consensus protocols to adapt themselves to the changing conditions of the environment, enhancing their performance when there are substantial changes in the load to which the system is exposed. In this paper we further investigate the use of these oracles to design efficient consensus services. In particular, we provide efficient and robust implementations of slowness oracles based on techniques that have been previously used to implement adaptive failure detection oracles. Our experiments on a wide-area distributed system show that by using a slowness oracle that is well matched with a failure detection oracle, one can achieve performance as much as 53.5% better than the alternative that does not use a slowness oracle.
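A minimal sketch of the kind of adaptive timeout estimation such oracles are typically built on (the sliding-window rule, safety margin, and sample delays below are illustrative, not the paper's exact implementation):

from statistics import mean, pstdev

class AdaptiveTimeoutOracle:
    """Estimates the next expected message delay from recent observations.
    Observations like these can feed a failure detection oracle (to decide when
    to suspect a process) or a slowness oracle (to rank processes by speed)."""

    def __init__(self, window=10, margin_factor=2.0):
        self.window = window
        self.margin_factor = margin_factor
        self.delays = []

    def observe(self, delay):
        self.delays.append(delay)
        self.delays = self.delays[-self.window:]    # keep only a sliding window

    def timeout(self):
        if not self.delays:
            return float("inf")                     # no information yet
        return mean(self.delays) + self.margin_factor * pstdev(self.delays)

oracle = AdaptiveTimeoutOracle()
for delay in (0.10, 0.12, 0.11, 0.30, 0.12):        # observed heartbeat delays (seconds)
    oracle.observe(delay)
print(round(oracle.timeout(), 3))                   # adaptive timeout for the next message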

Journal ArticleDOI
TL;DR: Valid Value Tables (VVTs) are a set of tables that may be used to store a semantic model in a geodatabase by defining the valid combinations of coded values that describe the kinds of features in the database.
Abstract: There are many ways to encapsulate semantic models in GIS and cartographic data. A semantic model is the set of terms used to describe features in the database or on a map. For instance, a semantic model defines whether a low-lying saturated area perpetually on the landscape is called a swamp, marsh or bog. In order to make maps with GIS data, some part of the GIS data model must contain the data’s semantic model so a mapmaker can symbolize the data for the map. Valid Value Tables (VVTs) are a set of tables that may be used to store a semantic model in a geodatabase by defining the valid combinations of coded values that describe the kinds of features in the database. Coded values are numbers (requiring relatively small amounts of storage space in a database and low impact on