
Showing papers in "Journal of the Brazilian Computer Society in 2010"


Journal ArticleDOI
TL;DR: Two existing gaps in the computational tools available for geodesign are identified: support for sketching and the implementation of models representing scientific knowledge of how the world works. Two important areas of research are also identified that would address problems that currently impede geodesign.
Abstract: One of the original visions for GIS was as a tool for creating designs, but GIS has evolved in numerous other directions. Definitions of geodesign are reviewed, together with a short history of the concept. A distinction is drawn between Design and design, the latter being addressed through spatial decision support systems, and the former being seen as a superset of the latter. Geodesign also has a strong and well-defined relationship with cartography. The vision of landscape architecture propounded by the late Ian McHarg also provides a foundation for geodesign. Two existing gaps in the computational tools available for geodesign are identified: support for sketching and the implementation of models representing scientific knowledge of how the world works. Two important areas of research are identified that would address problems that currently impede geodesign.

119 citations




Journal ArticleDOI
TL;DR: Six specific project management areas need to be addressed to facilitate successful virtual team operation: Organizational Virtual Team Strategy, Risk Management, Infrastructure, Implementation of a Virtual Team Process, Team Structure and Organization, and Conflict Management.
Abstract: Globally distributed information systems development has become a key strategy for large sections of the software industry. This involves outsourcing projects to third parties or offshoring development to divisions in remote locations. A popular approach when implementing these strategies is the establishment of virtual teams. The justification for embarking on this approach is the potential benefit of labor arbitrage between geographical locations. When implementing such a strategy, organizations must recognize that virtual teams operate differently from collocated teams and therefore must be managed differently. These differences arise from the complex and collaborative nature of information systems development and the impact that distance introduces. Geographical, temporal, cultural, and linguistic distance all negatively impact coordination, cooperation, communication, and visibility in the virtual team setting. In these circumstances, the project management of a virtual team must be carried out differently from that of a collocated team. Results from this research highlight six specific project management areas that need to be addressed to facilitate successful virtual team operation: Organizational Virtual Team Strategy, Risk Management, Infrastructure, Implementation of a Virtual Team Process, Team Structure and Organization, and Conflict Management.

42 citations


Journal ArticleDOI
TL;DR: Trail-Aware is considered an evolution of the simple use of contexts and profiles; a prototype and its application in an educational environment for the distribution of learning objects are presented.
Abstract: In mobile computing environments, tracking users allows applications to adapt to the contexts they visit (Context Awareness). In recent years, the use of context information and user profiles has been considered an opportunity for context-aware content management. The improvement and wide adoption of location systems are stimulating the tracking of users, allowing the use of Trails. A trail is the history of the contexts visited by a user during a period. This article proposes a model for trail management and its application to content management. We consider Trail-Aware an evolution of the simple use of contexts and profiles. The text presents a prototype and its application in an educational environment for the distribution of learning objects based on trails.
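
The trail abstraction described above (the history of contexts a user visits over a period) suggests a simple data model. Below is a minimal sketch in Python; names such as `ContextVisit`, `Trail`, and `contexts_between` are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ContextVisit:
    """One entry in a trail: a context visited by the user."""
    context_id: str          # e.g. "library", "lab-204"
    entered_at: datetime
    left_at: datetime

@dataclass
class Trail:
    """History of the contexts visited by a user during a period."""
    user_id: str
    visits: list[ContextVisit] = field(default_factory=list)

    def record(self, visit: ContextVisit) -> None:
        self.visits.append(visit)

    def contexts_between(self, start: datetime, end: datetime) -> list[str]:
        """Contexts visited within [start, end], e.g. to select learning objects."""
        return [v.context_id for v in self.visits
                if v.entered_at >= start and v.left_at <= end]
```

A content manager could then query a user's trail for the contexts visited during a class period and deliver the learning objects associated with those contexts.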

42 citations


Journal ArticleDOI
TL;DR: This article presents a customizable multi-criteria model for task allocation in global software development projects that includes the development of mechanisms for customization, the incorporation of cause-effect relationships, and the use of probabilistic modeling of uncertainty with Bayesian networks.
Abstract: The allocation of development tasks to sites is one of the most important activities in the management of global software development projects. Its various influences on the risks and benefits of distributed projects require careful consideration of multiple allocation criteria in a systematic way. In practice, however, work is often allocated based on only one single criterion such as cost, and defined processes or algorithms for task allocation are typically not used. Existing research approaches mainly focus on selected aspects such as the minimization of cross-site communication and are difficult to adapt to specific environments. This article presents a customizable multi-criteria model for task allocation in global software development projects. Based on an analysis of the state of the practice, a set of requirements was derived and used for evaluating existing task allocation models from different domains. The Bokhari algorithm was identified as a suitable starting point and modified with respect to the fulfillment of the requirements. The modification includes the development of mechanisms for customization, the incorporation of cause-effect relationships, and the use of probabilistic modeling of uncertainty with Bayesian networks. The application of the model is demonstrated in different scenarios that represent typical hypothetical and real distribution decision problems in industrial contexts. Experience from applying the model to such problems has shown, for instance, that depending on the weight of different criteria, very different task distributions will result. This demonstrates, in consequence, the need for systematic multi-criteria task allocation support in global software development.
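
To make the closing observation concrete (that different criterion weights yield very different task distributions), here is a toy weighted-sum allocation in Python. It is a sketch only: the paper's model builds on a modified Bokhari algorithm with Bayesian networks, neither of which is reproduced here, and the sites, criteria, and scores are invented.

```python
# Toy multi-criteria site selection: each site is scored per criterion
# (higher is better) and criteria are combined by a weighted sum.
sites = {
    "Site A": {"cost": 0.9, "expertise": 0.4, "proximity": 0.8},
    "Site B": {"cost": 0.5, "expertise": 0.9, "proximity": 0.3},
}

def allocate(task: str, weights: dict[str, float]) -> str:
    """Assign the task to the site with the best weighted score."""
    score = lambda s: sum(weights[c] * v for c, v in sites[s].items())
    return max(sites, key=score)

# Different weightings lead to different allocations of the same task:
print(allocate("GUI module", {"cost": 0.7, "expertise": 0.2, "proximity": 0.1}))  # Site A
print(allocate("GUI module", {"cost": 0.1, "expertise": 0.8, "proximity": 0.1}))  # Site B
```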

38 citations


Journal ArticleDOI
TL;DR: A laboratory experiment with master students has been carried out, in order to compare two RE methods; namely, Use Cases and Communication Analysis, and results indicate greater model quality (in terms of completeness and granularity) when Communication Analysis guidelines are followed.
Abstract: Requirements Engineering (RE) is a relatively young discipline, yet many advances have been achieved during the last decades. In particular, numerous RE approaches are proposed in the literature with the aim of understanding a certain problem (e.g. information systems development) and establishing a knowledge base that is shared between domain experts and developers (i.e. a requirements specification). However, there is a growing concern for empirical validations that assess RE proposals and statements. This paper is related to the assessment of the quality of functional requirements specifications, using the Method Evaluation Model (MEM) as a theoretical framework. The MEM distinguishes the actual efficacy and the perceived efficacy of a method. In order to assess the actual efficacy of RE methods, the conceptual model quality framework by Lindland et al. can be applied; in this paper, we focus on the completeness and granularity of requirements models and extend this framework by defining four new metrics (e.g. degree of functional encapsulation completeness with respect to a reference model, number of functional fragmentation errors). In order to assess the perceived efficacy, conventional questionnaires can be used. A laboratory experiment with master students was carried out in order to compare (using the proposed metrics) two RE methods, namely Use Cases and Communication Analysis. With respect to actual efficacy, results indicate greater model quality (in terms of completeness and granularity) when Communication Analysis guidelines are followed. With respect to perceived efficacy, we found that Use Cases was perceived to be slightly easier to use than Communication Analysis. However, Communication Analysis was perceived to be more useful in terms of determining the proper granularity of business processes. The paper discusses these results and highlights some key issues for future research in this area.
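
As an illustration of the kind of metric described above, a completeness score with respect to a reference model can be computed as the fraction of the reference model's functional units that a requirements specification covers. The sketch below is one plausible reading of such a metric, not the paper's exact definition; all names and data are invented.

```python
def completeness(spec_units: set[str], reference_units: set[str]) -> float:
    """Fraction of the reference model's functional units covered by the spec."""
    return len(spec_units & reference_units) / len(reference_units)

reference = {"register order", "cancel order", "notify supplier", "bill client"}
use_case_model = {"register order", "bill client"}
print(completeness(use_case_model, reference))  # 0.5
```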

28 citations


Journal ArticleDOI
TL;DR: It is found that many principles for static map design are less than reliable in a dynamic environment; the principles of static map symbolization and design do not always appear to be effective or congruent graphical representations of change.
Abstract: Maps provide a means for visual communication of spatial information. The success of this communication process largely rests on the design and symbolization choices made by the cartographer. For static mapmaking we have seen substantial research in how our design decisions can influence the legibility of the map’s message; however, we have limited knowledge about how dynamic maps communicate most effectively. Commonly, dynamic maps communicate spatiotemporal information by 1) displaying known data at discrete points in time and 2) employing cartographic transitions that depict changes that occur between these points. Since these transitions are a part of the communication process, we investigate how three common principles of static map design (visual variables, level of measurement, and classed vs. unclassed data representations) relate to cartographic transitions and their abilities to congruently and coherently represent temporal change in dynamic phenomena. In this review we find that many principles for static map design are less than reliable in a dynamic environment; the principles of static map symbolization and design do not always appear to be effective or congruent graphical representations of change. Through the review it becomes apparent that we are in need of additional research in the communication effectiveness of dynamic thematic maps. We conclude by identifying several research areas that we believe are key to developing research-based best practices for communicating about dynamic geographic processes.

27 citations



Journal ArticleDOI
TL;DR: An evaluation of compound terms extraction from a corpus of the domain of Pediatrics, using three different extraction methods, and the quality of the resulting terms according to different methods and cut-off points is analyzed.
Abstract: The need for domain ontologies motivates research on structured information extraction from texts. A foundational part of this process is the identification of domain-relevant compound terms. This paper presents an evaluation of compound term extraction from a corpus in the domain of Pediatrics. Bigrams and trigrams were automatically extracted from a corpus composed of 283 texts from a Portuguese journal, Jornal de Pediatria, using three different extraction methods. Considering that these methods generate a large number of candidates, we analyzed the quality of the resulting terms according to the different methods and cut-off points. The evaluation is reported with metrics such as precision, recall, and f-measure, computed against a manually built reference list of domain-relevant compounds.
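
The evaluation pipeline lends itself to a short sketch: extract candidate n-grams, keep those above a frequency cut-off point, and score them against the reference list with the usual precision/recall/f-measure definitions. This Python fragment is illustrative only; the paper's three extraction methods are not reproduced, and the tokens and reference list are invented.

```python
from collections import Counter

def ngrams(tokens: list[str], n: int) -> Counter:
    """Count all contiguous n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def evaluate(candidates: set, reference: set) -> tuple[float, float, float]:
    """Precision, recall, and f-measure of candidates against a reference list."""
    true_pos = len(candidates & reference)
    precision = true_pos / len(candidates) if candidates else 0.0
    recall = true_pos / len(reference) if reference else 0.0
    f = (2 * precision * recall / (precision + recall)) if true_pos else 0.0
    return precision, recall, f

tokens = "asma brônquica em a infância asma brônquica".split()
bigrams = ngrams(tokens, 2)
# Keep bigrams above a frequency cut-off point as candidate terms:
candidates = {bg for bg, freq in bigrams.items() if freq >= 2}
reference = {("asma", "brônquica")}
print(evaluate(candidates, reference))  # (1.0, 1.0, 1.0)
```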

19 citations


Journal ArticleDOI
TL;DR: Terrain Sculptor is introduced, a software application that prepares generalized terrain models for relief shading based on a succession of raster operations and offers a graphical user interface to adjust the algorithm to various scales and terrain resolutions.
Abstract: Shaded relief derived from high-resolution terrain models often contains distracting terrain details that need to be removed for medium- and small-scale mapping. When standard raster filter operations are applied to digital terrain data, important ridge tops and valley edges are blurred, altering the characteristic shape of these features in the resulting shaded relief. This paper introduces Terrain Sculptor, a software application that prepares generalized terrain models for relief shading. The application uses a generalization methodology based on a succession of raster operations. Curvature coefficients detect and accentuate important relief features. Terrain Sculptor offers a graphical user interface to adjust the algorithm to various scales and terrain resolutions. The freeware application is available at http://www.terraincartography.com/terrainsculptor/.
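
A rough flavor of such a raster pipeline (smooth the terrain, then use a curvature measure to re-accentuate the ridges and valley edges that plain filtering would blur) can be given in a few lines of Python with NumPy and SciPy. This is an assumption-laden sketch: Terrain Sculptor's actual operator sequence, curvature coefficients, and parameters differ.

```python
import numpy as np
from scipy import ndimage

def generalize_dem(dem: np.ndarray, smooth_sigma: float = 4.0,
                   accent: float = 0.5) -> np.ndarray:
    """Toy generalization: smooth the DEM, then re-accentuate relief features
    using a curvature measure (here, the Laplacian of the smoothed surface)."""
    smoothed = ndimage.gaussian_filter(dem, sigma=smooth_sigma)
    curvature = ndimage.laplace(smoothed)   # negative on ridges, positive in valleys
    return smoothed - accent * curvature    # push ridges up, valleys down

dem = np.random.default_rng(0).random((256, 256)).cumsum(axis=0)  # synthetic slope
print(generalize_dem(dem).shape)  # (256, 256)
```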

Journal ArticleDOI
TL;DR: Aspects of NCL usability are analyzed with the Cognitive Dimensions of Notation framework and it is detailed how its design and conceptual model have succeeded in supporting reuse at a declarative level.
Abstract: NCL, the standard declarative language of the Brazilian Terrestrial Digital TV System and ITU-T Recommendation for IPTV Services, provides a high level of reuse in the design of hypermedia applications. In this paper we detail how its design and conceptual model have succeeded in supporting reuse at a declarative level. NCL supports not only static but also running code reuse. It also allows for reuse inside applications, reuse between applications, and reuse of code spans stored in external libraries. For a specification language to promote reuse, however, it must have a number of usability merits. Aspects of NCL usability are thus analyzed with the Cognitive Dimensions of Notation framework.

Journal ArticleDOI
TL;DR: This paper introduces an extension of the UPnP specification called UPnP-UP, which adds user authentication and authorization mechanisms for UPnP devices and applications, and provides the basis to develop customized and secure UPnP pervasive services while maintaining backward compatibility with previous versions of UPnP.
Abstract: The Universal Plug and Play (UPnP) specification defines a set of protocols for promoting pervasive network connectivity of computers and intelligent devices or appliances. Nowadays, the UPnP technology is becoming popular due to its robustness to connect devices and the large number of developed applications. One of the major drawbacks of UPnP is the lack of user authentication and authorization mechanisms. Thus, control points, those devices acting as clients on behalf of a user, and UPnP devices cannot communicate based on user information. This paper introduces an extension of the UPnP specification called UPnP-UP, which allows user authentication and authorization mechanisms for UPnP devices and applications. These mechanisms provide the basis to develop customized and secure UPnP pervasive services, maintaining backward compatibility with previous versions of UPnP.

Journal ArticleDOI
TL;DR: This work presents a master-slave parallel genetic algorithm for the protein folding problem, using the 3D-HP side-chain model, and shows that the parallel GA achieved a good level of efficiency and obtained biologically coherent results, suggesting the adequacy of the methodology.
Abstract: This work presents a master-slave parallel genetic algorithm for the protein folding problem, using the 3D-HP side-chain model (3D-HP-SC). This model is sparsely studied in the literature, although it is more expressive than other lattice models. The proposed fitness function includes information not only about the free energy of the conformation, but also about the compactness of the side chains. Since no benchmark is available to date for this model, a set of 15 sequences was used, based on a simpler model. Results show that the parallel GA achieved a good level of efficiency and obtained biologically coherent results, suggesting the adequacy of the methodology. Future work will include new biologically inspired genetic operators and more experiments to create new benchmarks.
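
In the spirit of the fitness function described (a free-energy term plus side-chain compactness), here is a minimal Python sketch over a cubic lattice. The residue encoding, the hydrophobic-contact energy term, and the compactness term are all simplifying assumptions for illustration; the paper's 3D-HP-SC formulation and weights are not reproduced.

```python
from itertools import combinations

# Hypothetical encoding: each residue is ('H' or 'P', (x, y, z) side-chain site).
def fitness(conformation: list[tuple[str, tuple[int, int, int]]]) -> float:
    def adjacent(a, b):  # lattice neighbours: unit Manhattan distance
        return sum(abs(i - j) for i, j in zip(a, b)) == 1

    # Free-energy proxy: count hydrophobic (H-H) side-chain contacts.
    hh_contacts = sum(
        1 for (ta, pa), (tb, pb) in combinations(conformation, 2)
        if ta == tb == 'H' and adjacent(pa, pb))

    # Compactness: inverse of the mean pairwise distance between side chains.
    dists = [sum(abs(i - j) for i, j in zip(pa, pb))
             for (_, pa), (_, pb) in combinations(conformation, 2)]
    compactness = 1.0 / (sum(dists) / len(dists))

    return hh_contacts + compactness  # weights omitted; the paper combines both terms

seq = [('H', (0, 0, 0)), ('P', (1, 0, 0)), ('H', (1, 1, 0)), ('H', (0, 1, 0))]
print(fitness(seq))  # 2 H-H contacts + 0.75 compactness = 2.75
```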

Journal ArticleDOI
TL;DR: The measurement analysis provides several interesting findings that can have implications for how videos should be retrieved in video sharing websites as well as for advertising systems that need to understand the role that users play when they create content in services such as YouTube.
Abstract: Videos have become a predominant part of users’ daily lives on the Web, especially with the emergence of online video sharing systems such as YouTube. Since users can independently share videos in these systems, some videos can be duplicates (i.e., identical or very similar videos). Despite having the same content, there are some potential context differences in duplicates, for example, in their associated metadata (i.e., tags, title) and their popularity scores (i.e., number of views, comments). Quantifying these differences is important to understand how users associate metadata to videos and to understand possible reasons that influence the popularity of videos, which is crucial for video information retrieval mechanisms, association of advertisements to videos, and performance issues related to the use of caches and content distribution networks (CDNs). This work presents a wide quantitative characterization of the context differences among identical contents. Using a large video sample collected from YouTube, we construct a dataset of duplicates. Our measurement analysis provides several interesting findings that can have implications for how videos should be retrieved in video sharing websites as well as for advertising systems that need to understand the role that users play when they create content in services such as YouTube.


Journal ArticleDOI
TL;DR: This paper argues that the most favorable uses of aspects happen when their code relies extensively on quantified statements, i.e., statements that may affect many parts of a system, and proposes two new metrics to capture in a simple way the amount of quantification employed in the aspects of a given system.
Abstract: In this paper, we argue that the most favorable uses of aspects happen when their code relies extensively on quantified statements, i.e., statements that may affect many parts of a system. When this happens, aspects better contribute to separation of concerns, since the otherwise duplicated and tangled code related to the implementation of a crosscutting concern is confined in a single block of code. We provide in the paper both qualitative and quantitative arguments in favor of quantification. We also propose two new metrics to capture in a simple way the amount of quantification employed in the aspects of a given system. Finally, we describe an Eclipse plugin, called ConcernMetrics, that estimates the proposed metrics directly from the object-oriented code of an existing system, i.e., before crosscutting concerns are extracted to aspects. Our main motivation is to help developers and maintainers to decide in a cost-effective way if it is worthwhile to use aspects in their systems.
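
The intuition behind a quantification metric can be shown in a few lines: an advice whose pointcut matches many join points is highly quantified and likely pays off as an aspect, while one that names a single point gains little. The Python sketch below is purely illustrative; the paper's two metrics are estimated by the ConcernMetrics plugin from object-oriented code, and the join-point data here is invented.

```python
# Hypothetical advices mapped to the join points their pointcuts would match.
advices = {
    "logging":      ["Account.debit", "Account.credit", "Loan.open", "Loan.close"],
    "wire-tap-fix": ["Parser.parseExpr"],
}

def quantification(matched_join_points: list[str]) -> int:
    """Amount of quantification: how many places one advice affects."""
    return len(matched_join_points)

for name, joins in advices.items():
    print(name, quantification(joins))
# logging 4        <- quantified: confines otherwise duplicated code in one place
# wire-tap-fix 1   <- little benefit from extracting an aspect
```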

Journal ArticleDOI
TL;DR: One artist, James Niehues, has produced by far the most maps in current use, and this Colorado School has been key in the development of a classic painted panoramic style of North American ski maps.
Abstract: This article examines mountain ski resort trail maps in North America in 2008. It looks at the styles of maps used by resorts and at the main artists involved in producing the maps. The survey included maps from 428 resorts with additional analysis of maps from the 100 largest resorts. Point of view and creation method are the primary factors in determining the style of each ski trail map. Artists have employed three main types of views for ski mountains: panoramas, profiles, and planimetric maps. Panoramic views are by far the most common type of map (86% of all maps and all of the maps at the top 100 areas). Profile views are used in 8% of the maps and planimetric views in only 6%. Production methods for ski trail maps fall into three main categories: painting, illustrating, and computer rendering. Maps created with painting techniques are the most widespread, in use at 72% of all resorts and at 89% of the top 100 areas. Those created in a hard-edged vector-based illustration style are in use at 20% of resorts and those created through computer modeling and rendering at 3% of resorts. Many artists have created ski trail maps for resorts in North America but one artist, James Niehues, has produced by far the most maps in current use. His maps are in use at over a quarter of all ski areas and at half of the top resorts. Niehues follows in the footsteps of two other Coloradans, Hal Shelton and then Bill Brown, and this Colorado School has been key in the development of a classic painted panoramic style of North American ski maps. Additional research is recommended to provide further details of the history of the maps and their creators as well as to analyze the artists’ terrain manipulations and to look at the growing use of electronic trail maps.

Journal ArticleDOI
TL;DR: An approach based on software visualization that can detect and externalize design evolution made in a software project during its initial development or at any further phase is presented.
Abstract: Software differs from most manufactured products because it is intangible. This characteristic makes it difficult to detect, control, and understand how it evolves. This paper presents an approach based on software visualization that can detect and externalize design evolution made in a software project during its initial development or at any further phase. By using this approach, a developer can be aware of the current state of the software as a whole and can additionally verify if the current design, also called emerging design, is evolving according to the team expectations and leader guidance, preventing problems caused by misunderstandings of the expected software solution. The approach was evaluated with free/open source software (FOSS) projects. The results indicate that the approach behaves as expected when applied to real software development projects, with minor performance bottlenecks.

Journal ArticleDOI
TL;DR: An approach that provides automatic or semi-automatic support for evolution and change management in heterogeneous legacy landscapes where (1) legacy heterogeneous, possibly distributed platforms are integrated in a service oriented fashion and (2) the coordination of functionality is provided at the service level, through orchestration.
Abstract: We present an approach that provides automatic or semi-automatic support for evolution and change management in heterogeneous legacy landscapes where (1) legacy heterogeneous, possibly distributed platforms are integrated in a service oriented fashion, (2) the coordination of functionality is provided at the service level, through orchestration, (3) compliance and correctness are provided through policies and business rules, (4) evolution and correctness-by-design are supported by the eXtreme Model Driven Development paradigm (XMDD) offered by the jABC (Margaria and Steffen in Annu. Rev. Commun. 57, 2004)—the model-driven service oriented development platform we use here for integration, design, evolution, and governance. The artifacts are here semantically enriched, so that automatic synthesis plugins can field the vision of Enterprise Physics: knowledge driven business process development for the end user. We demonstrate this vision along a concrete case study that became over the past three years a benchmark for Semantic Web Service discovery and mediation. We enhance the Mediation Scenario of the Semantic Web Service Challenge along the two central evolution paradigms that occur in practice: (a) Platform migration: substitution of a legacy system by an ERP system and (b) Backend extension: extension of the legacy Customer Relationship Management (CRM) and Order Management System (OMS) backends via an additional ERP layer.

Journal ArticleDOI
TL;DR: This paper describes an instance-based schema matching technique for an OWL dialect, argues that automatic schema matching approaches should store provenance data about matchings, and proposes a data model for storing such provenance data.
Abstract: Schema matching is a fundamental issue in many database applications, such as query mediation and data warehousing. It becomes a challenge when different vocabularies are used to refer to the same real-world concepts. In this context, a convenient approach, sometimes called extensional, instance-based, or semantic, is to detect how the same real-world objects are represented in different databases and to use the information thus obtained to match the schemas. Additionally, we argue that automatic approaches to schema matching should store provenance data about matchings. This paper describes an instance-based schema matching technique for an OWL dialect and proposes a data model for storing provenance data. The matching technique is based on similarity functions and is backed up by experimental results with real data downloaded from data sources found on the Web.
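
The instance-based idea (match two attributes when their instance values overlap strongly) can be sketched in a few lines. The snippet below uses Jaccard similarity and a fixed threshold as stand-ins for the paper's similarity functions, and records a provenance note with each matching, as the paper advocates; all schemas, values, and the threshold are invented.

```python
def jaccard(a: set, b: set) -> float:
    """Overlap of two attribute extensions: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

db1 = {"Person.name": {"ana", "bruno", "carla"}, "Person.cpf": {"111", "222"}}
db2 = {"Client.fullName": {"ana", "bruno", "dora"}, "Client.taxId": {"222", "333"}}

THRESHOLD = 0.3
matches = []
for a1, v1 in db1.items():
    for a2, v2 in db2.items():
        score = jaccard(v1, v2)
        if score >= THRESHOLD:
            # Store provenance alongside the matching, as the paper advocates.
            matches.append({"from": a1, "to": a2, "score": round(score, 2),
                            "provenance": "jaccard over downloaded instances"})
print(matches)  # Person.name ~ Client.fullName (0.5), Person.cpf ~ Client.taxId (0.33)
```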

Journal ArticleDOI
TL;DR: This paper presents a model, called StoryToCode, that supports the design of iTV programs centered on software components using Model Driven Architecture, and allows designing and implementing context-independent applications while considering the specific characteristics of iTV programs.
Abstract: This paper presents a model, called StoryToCode, which supports the design of iTV programs centered on the use of software components. First, StoryToCode transforms a storyboard into an abstract description of a set of elements. It then transforms these elements into source code in a specific programming language. In StoryToCode, a software component is treated as a special element that can be reused in other contexts (web, mobile, and so on). StoryToCode is based on Model Driven Architecture (MDA) and allows designing and implementing context-independent applications while considering the specific characteristics of iTV programs.

Journal ArticleDOI
TL;DR: A graphics-intensive presentation of published maps, providing more than 70 cartographic examples that GIS users can adapt for their own needs, and a guide to creating maps that communicate effectively.
Abstract: Designed Maps: A Sourcebook for GIS Users is a graphics-intensive presentation of published maps, providing more than 70 cartographic examples that GIS users can adapt for their own needs. It is a companion to Cynthia A. Brewer's highly successful Designing Better Maps: A Guide for GIS Users, a comprehensive guide to creating maps that communicate effectively, covering layout design, scales, projections, color selection, font choices, and symbols.

Journal ArticleDOI
TL;DR: This work presents a comparison between three well-established techniques for static gesture recognition, using Nearest Neighbor, Neural Networks, and Support Vector Machines as classifiers, and identifies and discusses a set of relevant criteria that must be observed for the training and evaluation steps, and its relation to the final results.
Abstract: It is common for human beings to use gestures as a means of expression, as a complement to speech, or as a self-contained communication mode. In the field of Human–Computer Interaction, this behavior can be adopted to build alternative interfaces, aiming to ease the relationship between the human element and the computational element. Currently, various gesture recognition techniques are described in the technical literature; however, the validation studies of these techniques are usually performed in isolation, which complicates comparisons between them. To reduce this gap, this work presents a comparison between three well-established techniques for static gesture recognition, using Nearest Neighbor, Neural Networks, and Support Vector Machines as classifiers. These classifiers evaluate a common dataset, acquired from an instrumented glove, and generate results for precision and performance measurements. The results obtained show that the Support Vector Machine classifier presented the best generalization, with the highest recognition rate. In terms of performance, all methods presented evaluation times fast enough to be used interactively. Finally, this work identifies and discusses a set of relevant criteria that must be observed in the training and evaluation steps, and their relation to the final results.
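
The comparison protocol (train the three classifier families on one shared dataset and compare recognition rates) is easy to reproduce in outline with scikit-learn. In the sketch below, synthetic data stands in for the instrumented-glove features, and the hyperparameters are library defaults rather than the paper's settings.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for glove features: 15 sensor readings, 4 gesture classes.
X, y = make_classification(n_samples=600, n_features=15, n_classes=4,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("Nearest Neighbor", KNeighborsClassifier()),
                  ("Neural Network", MLPClassifier(max_iter=1000, random_state=0)),
                  ("SVM", SVC())]:
    clf.fit(X_tr, y_tr)
    print(f"{name}: {clf.score(X_te, y_te):.2%} recognition rate")
```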

Journal ArticleDOI
TL;DR: Non-connective linear cartograms are introduced as a way to represent traffic conditions in urban transportation networks, making segment length a visual variable that works together with color and width to create dramatic visual effects and attract greater attention from readers.
Abstract: Cartograms have the advantage of bringing a greater visual impact to map readers. Geographic locations or spatial relationships of objects are intentionally modified to suit the attributes pertaining to the objects. In area cartograms it is the size of the object that is intentionally modified, while in linear cartograms it is the length or direction. Traffic conditions in urban transportation networks are a highly dynamic phenomenon, changing through time. During highly congested hours, travel speeds are low and travel times are long, and vice versa. In previous studies, traffic conditions were visualized through the color and width of road segments. In this paper, non-connective linear cartograms are introduced as a way to represent traffic conditions. Non-connective linear cartograms are linear cartograms that do not show the connectivity between line segments. The lengths of road segments are modified to represent a specific theme in traffic conditions. When length represents the congestion level, longer segments indicate higher congestion, meaning the road is near maximum capacity. When length represents travel speed, longer segments indicate higher travel speed and, therefore, shorter travel time. When length represents travel time, longer segments indicate longer travel time and, therefore, lower travel speed. In non-connective linear cartograms, the length of a line segment is not limited to the physical length of the road segment it represents. This flexibility makes segment length a visual variable, just like the color and width of a line segment. All three visual variables work together to create dramatic visual effects and attract greater attention from readers.
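
The length-scaling rule is simple enough to state as code: the drawn length of each (disconnected) segment is made proportional to the themed attribute rather than to physical length. In the sketch below, the segment names, attribute values, and scale factors are invented for illustration.

```python
segments = {"I-95 N": {"length_km": 10, "speed_kmh": 30},
            "Main St": {"length_km": 5,  "speed_kmh": 60}}

def drawn_length(seg: dict, theme: str, scale: float = 1.0) -> float:
    """Drawn length is proportional to the themed attribute, not physical length."""
    if theme == "speed":          # longer drawn segment = faster travel
        return scale * seg["speed_kmh"]
    if theme == "travel_time":    # longer drawn segment = longer travel time
        return scale * seg["length_km"] / seg["speed_kmh"] * 60  # minutes
    raise ValueError(theme)

for name, seg in segments.items():
    print(name, drawn_length(seg, "travel_time"))  # I-95 N: 20.0, Main St: 5.0
```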

Journal ArticleDOI
TL;DR: The Library of Virginia’s map collection has grown significantly since the Library opened in 1823, and today approximately 65,000 maps are housed at the Library; they are more than just pretty pictures, as this article attempts to show.
Abstract: The Library of Virginia’s map collection has grown significantly since the Library opened in 1823. Seven maps and four atlases are listed in the 1828 catalog and today approximately 65,000 maps are housed at the Library of Virginia. Rare manuscript collections, valuable “mother” maps of the state, and thousands of maps produced for commercial and federal publications are available for patron use. They are more than just pretty pictures, as this article attempts to show. In fact, this article is based on a presentation I gave in August 2008 at the Library of Virginia during the exhibition “From Williamsburg to Wills’s Creek: the Fry-Jefferson Map of Virginia.”



Journal ArticleDOI
TL;DR: An integrated middleware infrastructure is presented that uses not only idle processor cycles but also unused disk space of shared machines, enabling the reliable distributed storage of application data on those machines in a redundant and fault-tolerant way.
Abstract: Opportunistic computational grids use idle processor cycles from shared machines to enable the execution of long-running parallel applications. Besides computational power, these applications may also consume and generate large amounts of data, requiring an efficient data storage and management infrastructure. In this article, we present an integrated middleware infrastructure that enables the use of not only idle processor cycles, but also the unused disk space of shared machines. Our middleware enables the reliable distributed storage of application data on the shared machines in a redundant and fault-tolerant way. A checkpointing-based mechanism monitors the execution of parallel applications, saves periodic checkpoints on the shared machines, and, in case of node failures, supports application migration across heterogeneous grid nodes. We evaluate the feasibility of our middleware using experiments and simulations. Our evaluation shows that the proposed middleware promotes important improvements in grid data management reliability while imposing a low performance overhead.
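
The checkpointing mechanism can be illustrated with a self-contained sketch: periodically snapshot application state and store each checkpoint redundantly on several shared machines, so that after a node failure the application can resume elsewhere from the latest snapshot. Everything here (machine names, replica count, helpers such as `save_checkpoint`) is an assumption for illustration, not the middleware's API.

```python
import pickle
import random

SHARED_MACHINES = ["node-a", "node-b", "node-c", "node-d"]
REPLICAS = 2
# In-memory stand-in for the unused disk space of shared machines.
storage: dict[str, dict[int, bytes]] = {m: {} for m in SHARED_MACHINES}

def save_checkpoint(step: int, state: dict) -> None:
    """Store the serialized state redundantly on REPLICAS shared machines."""
    blob = pickle.dumps(state)
    for machine in random.sample(SHARED_MACHINES, REPLICAS):
        storage[machine][step] = blob  # redundant, fault-tolerant copies

def restore_latest() -> tuple[int, dict]:
    """After a node failure, fetch the newest checkpoint from any surviving replica."""
    step = max(s for replicas in storage.values() for s in replicas)
    blob = next(r[step] for r in storage.values() if step in r)
    return step, pickle.loads(blob)

state = {"iteration": 0, "partial_sum": 0.0}
for state["iteration"] in range(1, 101):
    state["partial_sum"] += state["iteration"]
    if state["iteration"] % 25 == 0:          # periodic checkpoint
        save_checkpoint(state["iteration"], state)

print(restore_latest())  # resumes from iteration 100 after a simulated failure
```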