
Showing papers in "Journal of Visual Languages and Computing in 2010"


Journal ArticleDOI
TL;DR: A new approach to enrich the topological information by integrating spatial proximity in the topology graph, through the use of weights in adjacency links is presented.
Abstract: Currently, there are large collections of drawings from which users can select the desired ones to insert in their documents. However, locating a particular drawing among thousands is not easy. In our prior work we proposed an approach to index and retrieve vector drawings by content, using topological and geometric information automatically extracted from figures. In this paper, we present a new approach to enrich the topological information by integrating spatial proximity into the topology graph, through the use of weights on adjacency links. Additionally, we developed a web search engine for clip art drawings, where we included the new technique. Experimental evaluation reveals that using topological proximity yields better retrieval results than topology alone. However, the increase in precision was not as high as we expected. To understand why, we analyzed sketched queries performed by users in previous experimental sessions, and we present the resulting conclusions here.
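As a rough illustration of the weighted-adjacency idea described above, the sketch below (hypothetical shape names, weights, and scoring function, not the authors' actual indexing scheme) compares a query graph against an indexed drawing graph whose adjacency links carry spatial-proximity weights:

```python
# Hypothetical sketch: a topology graph whose adjacency links carry
# spatial-proximity weights, and a naive similarity score between a sketched
# query graph and an indexed drawing graph. Illustrative only.

def proximity_weight(gap, max_gap):
    """Map the gap between two shapes' bounding boxes to a weight in [0, 1]:
    touching shapes get 1.0, distant shapes approach 0."""
    return max(0.0, 1.0 - gap / max_gap)

# Each drawing is a weighted adjacency map: shape -> {neighbour: weight}.
indexed_drawing = {
    "house_body": {"roof": 1.0, "door": 0.9, "tree": 0.2},
    "roof": {"house_body": 1.0},
    "door": {"house_body": 0.9},
    "tree": {"house_body": 0.2},
}

query_sketch = {
    "house_body": {"roof": 0.95, "door": 0.8},
    "roof": {"house_body": 0.95},
    "door": {"house_body": 0.8},
}

def similarity(query, drawing):
    """Score shared adjacency links, penalising differences in proximity weight."""
    score, count = 0.0, 0
    for node, neighbours in query.items():
        for other, w_q in neighbours.items():
            w_d = drawing.get(node, {}).get(other)
            if w_d is not None:
                score += 1.0 - abs(w_q - w_d)
                count += 1
    return score / count if count else 0.0

print(f"similarity = {similarity(query_sketch, indexed_drawing):.2f}")
```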

60 citations


Journal ArticleDOI
TL;DR: This work describes an abstract, extendible language used to specify the learning design of a course, which can be transformed into any LD language as required by the execution environment.
Abstract: Learning management systems (LMS) provide an operational environment in which an online course can be created and later executed. Inter-operation between creators and their authoring facilities, on the one hand, and the LMS execution engine, on the other, is based on defining standards and specifications, such as IMS Learning Design (LD). Because an LMS better serves as a course player than as a course creator, a large number of approaches and environments for standards-compliant course authoring have been developed. These approaches and environments raise a number of issues concerning how adaptations are edited and how learning activities are connected with external learning applications and services. These questions have raised concern, mostly because the creators' methods and tools are excessively committed to a particular educational modeling language, and because the language used to describe the course is isolated from the host LMS. This work describes an abstract, extendible language used to specify the learning design of a course, which can be transformed into any LD language as required by the execution environment. The language is used from a generative authoring environment that offers the possibility of editing web services as an additional resource to assess learning activities.

39 citations


Journal ArticleDOI
TL;DR: Differences in cognitive aspects of three visual instructional design languages (E^2ML, PoEML, coUML) are discussed, based on user evaluation, to enable language constructors to improve the usability of visual instructional design languages in the future.
Abstract: The introduction of learning technologies into education is making the design of courses and instructional materials an increasingly complex task. Instructional design languages are identified as conceptual tools for achieving more standardized and, at the same time, more creative design solutions, as well as enhancing communication and transparency in the design process. In this article we discuss differences in cognitive aspects of three visual instructional design languages (E^2ML, PoEML, coUML), based on user evaluation. Cognitive aspects are of relevance for learning a design language, creating models with it, and understanding models created using it. The findings should enable language constructors to improve the usability of visual instructional design languages in the future. The paper concludes with directions for how future research on visual instructional design languages could strengthen their value and enhance their actual use by educators and designers, by synthesizing existing efforts into a unified modeling approach for VIDLs.

36 citations


Journal ArticleDOI
TL;DR: This paper investigates the strategies people use to convey route information in relation to a map by presenting two parallel studies involving human-human and human-computer interaction and reveals that speakers produce systematically different instructions with respect to perspective choice and references to locations across levels of granularity.
Abstract: When conveying information about spatial situations and goals, speakers adapt flexibly to their addressee in order to reach the communicative goal efficiently and effortlessly. Our aim is to equip a dialogue system with the abilities required for such a natural, adaptive dialogue. In this paper we investigate the strategies people use to convey route information in relation to a map by presenting two parallel studies involving human-human and human-computer interaction. We compare the instructions given to a human interaction partner with those given to a dialogue system which reacts by basic verbal responses and dynamic visualization of the route in the map. The language produced by human route givers is analyzed with respect to a range of communicative as well as cognitively crucial features, particularly perspective choice and references to locations across levels of granularity. Results reveal that speakers produce systematically different instructions with respect to these features, depending on the nature of the interaction partner, human or dialogue system. Our further analysis of clarification and reference resolution strategies produced by human route followers provides insights into dialogue strategies that future systems should be equipped with.

34 citations


Journal ArticleDOI
TL;DR: This paper describes the results of an exploratory study in which non-programmers were asked to find and modify the code responsible for specific functionality within unfamiliar programs, and presents two interacting models of how non- programmers approach this problem.
Abstract: Source code on the web is a widely available and potentially rich learning resource for non-programmers. However, unfamiliar code can be daunting to end-users without programming experience. This paper describes the results of an exploratory study in which we asked non-programmers to find and modify the code responsible for specific functionality within unfamiliar programs. We present two interacting models of how non-programmers approach this problem: the Task Process Model and the Landmark-Mapping model. Using these models, we describe code search strategies non-programmers employed and the barriers they encountered. Finally, we propose guidelines for future programming environments that support non-programmers in finding functionality in unfamiliar programs.

33 citations


Journal ArticleDOI
TL;DR: A theoretical application and a categorization, based on a domain-oriented separation of concerns in instructional design, are proposed, and some practical illustrations from experiments with specific DSM tooling are presented.
Abstract: This paper presents, illustrates and discusses theories and practices about the application of a domain-specific modeling (DSM) approach to facilitate the specification of Visual Instructional Design Languages (VIDLs) and the development of dedicated graphical editors. Although this approach still requires software engineering skills, it tackles the need to build VIDLs that allow both visual models for human-interpretation purposes (explicit designs, communication, thinking, etc.) and machine-readable notations for deployment or other instructional design activities. This article proposes a theoretical application and a categorization, based on a domain-oriented separation of concerns in instructional design. It also presents some practical illustrations from experiments with specific DSM tooling. Key lessons learned, as well as observed obstacles and challenges, are discussed in order to further develop such an approach.

32 citations


Journal ArticleDOI
Marian Petre1
TL;DR: Overviews of four empirical studies of professional software developers in high-performing teams provide insight into how experts use visualization in reasoning about software design, and how their requirements for the support of design tasks differ from those for the support of other software development tasks.
Abstract: This paper considers the relationship between mental imagery and software visualization in professional, high-performance software development. It presents overviews of four empirical studies of professional software developers in high-performing teams: (1) expert programmers' mental imagery, (2) how experts externalize their mental imagery as part of teamwork, (3) experts' use of commercially available visualization software, and (4) what tools experts build themselves, how they use the tools they build for themselves, and why they build tools for themselves. Through this series of studies, the paper provides insight into a relationship between how experts reason about and imagine solutions, and their use of and requirements for external representations and software visualization. In particular, it provides insight into how experts use visualization in reasoning about software design, and how their requirements for the support of design tasks differ from those for the support of other software development tasks. The paper draws on theory from other disciplines to explicate issues in this area, and it discusses implications for future work in this field.

27 citations


Journal ArticleDOI
TL;DR: This paper describes the sketch recognition-based LAMPS system for teaching MPS1 by emulating the naturalness and realism of paper-based workbooks, while extending their functionality with automated, human instructor-level critique and assessment.
Abstract: The non-Romanized Mandarin Phonetic Symbols I (MPS1) system is a highly advantageous phonetic system for native English users studying Mandarin Chinese to learn, yet its steep initial learning curve discourages language programs from adopting it, leading them to adopt Romanized phonetic systems instead. Computer-assisted language instruction (CALI) can greatly reduce this learning curve, enabling students to benefit sooner from the long-term advantages of MPS1 usage during the course of Mandarin Chinese study. Unfortunately, the technologies surrounding existing online handwriting recognition algorithms and CALI applications are insufficient for providing a "dynamic" counterpart to the traditional paper-based workbooks employed in the classroom setting. In this paper, we describe our sketch recognition-based LAMPS system for teaching MPS1 by emulating the naturalness and realism of paper-based workbooks, while extending their functionality with automated, human instructor-level critique and assessment.

25 citations


Journal ArticleDOI
TL;DR: DaisyViz is a model-based user interface toolkit that enables end-users to rapidly develop domain-specific information visualization applications without traditional programming; it is based on a user interface model for information (UIMI).
Abstract: While information visualization technologies have transformed our life and work, designing information visualization systems still faces challenges. Non-expert users or end-users need toolkits that allow for rapid design and prototyping and that support unified data structures suitable for different data types (e.g., tree, network, temporal, and multi-dimensional data), various visualizations, and interaction tasks. To address these issues, we designed DaisyViz, a model-based user interface toolkit, which enables end-users to rapidly develop domain-specific information visualization applications without traditional programming. DaisyViz is based on a user interface model for information (UIMI), which includes three declarative models: data model, visualization model, and control model. In the development process, a user first constructs a UIMI with interactive visual tools. The results of the UIMI are then parsed to generate a prototype system automatically. In this paper, we discuss the concept of UIMI, describe the architecture of DaisyViz, and show how to use DaisyViz to build an information visualization system. We also present a usability study of DaisyViz that we conducted. Our findings indicate DaisyViz is an effective toolkit to help end-users build interactive information visualization systems.
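To make the declarative flavor of such a model concrete, here is a hypothetical sketch (the field names, spec structure, and generate_prototype stub are invented for illustration and are not DaisyViz's actual API) of data, visualization, and control models that a toolkit could parse into a prototype:

```python
# Hypothetical sketch of a declarative UIMI-style specification: three models
# (data, visualization, control) that a toolkit could parse to generate an
# interactive visualization prototype. Names and fields are illustrative.
uimi_spec = {
    "data": {
        "source": "sales.csv",
        "structure": "multi-dimensional",   # could also be tree/network/temporal
        "fields": ["region", "month", "revenue"],
    },
    "visualization": {
        "technique": "scatterplot",
        "encodings": {"x": "month", "y": "revenue", "color": "region"},
    },
    "control": {
        "interactions": ["zoom", "filter", "details-on-demand"],
        "filter_field": "region",
    },
}

def generate_prototype(spec):
    """Stand-in for the model parser: report what would be generated."""
    vis = spec["visualization"]
    print(f"Building a {vis['technique']} of {spec['data']['source']}")
    print(f"  encodings: {vis['encodings']}")
    print(f"  interactions: {', '.join(spec['control']['interactions'])}")

generate_prototype(uimi_spec)
```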

25 citations


Journal ArticleDOI
TL;DR: The integrated system is able to detect errors that cannot be found by either of the individual approaches alone, which shows that the integrated system provides an added value beyond the mere combination of its parts.
Abstract: Labels in spreadsheets can be exploited for finding formula errors in two principally different ways. First, the spatial relationships between labels and other cells express simple constraints on the cells' usage in formulas. Second, labels can be interpreted as units of measurement to provide semantic information about the data being combined in formulas, which results in different kinds of constraints. In this paper we demonstrate how both approaches can be combined into an integrated analysis, which is able to find significantly more errors in spreadsheets than each of the individual approaches. In particular, the integrated system is able to detect errors that cannot be found by either of the individual approaches alone, which shows that the integrated system provides an added value beyond the mere combination of its parts. We also compare the effectiveness of this combined approach with several other conceivable combinations of the involved components and identify a system that seems most effective at finding spreadsheet formula errors based on label and unit-of-measurement information.
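The following toy example (invented labels and a deliberately simplified checker, not the paper's actual analysis) illustrates the unit-of-measurement idea: column labels act as units, and a sum over cells with incompatible units is flagged:

```python
# Illustrative sketch: treat column labels as units and flag a formula that
# adds cells whose units do not match.
from dataclasses import dataclass

@dataclass
class Cell:
    value: float
    unit: str        # derived from the row/column label, e.g. "apples"

def check_sum(cells):
    """A SUM over cells is only well-formed if all operands share a unit."""
    units = {c.unit for c in cells}
    if len(units) > 1:
        return f"possible error: mixing units {sorted(units)}"
    return f"ok: total = {sum(c.value for c in cells)} {units.pop()}"

jan_apples = Cell(10, "apples")
feb_apples = Cell(12, "apples")
jan_oranges = Cell(7, "oranges")

print(check_sum([jan_apples, feb_apples]))    # ok
print(check_sum([jan_apples, jan_oranges]))   # flagged
```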

25 citations


Journal ArticleDOI
TL;DR: This paper presents a multicase study comprising three different cases that evaluate the approach from different perspectives, providing a global understanding of the possibilities and limitations of the pattern-based visual design approach.
Abstract: Collage is a pattern-based visual design authoring tool for the creation of collaborative learning scripts computationally modelled with IMS Learning Design (LD). The pattern-based visual approach aims to provide teachers with design ideas that are based on broadly accepted practices. Besides, it seeks to hide the LD notation so that teachers can easily create their own designs. The use of visual representations supports both the understanding of the design ideas and the usability of the authoring tool. This paper presents a multicase study comprising three different cases that evaluate the approach from different perspectives. The first case includes workshops where teachers use Collage. A second case involves the design of a scenario proposed by a third party using related approaches. The third case analyzes a situation where students follow a design created with Collage. The cross-case analysis provides a global understanding of the possibilities and limitations of the pattern-based visual design approach.

Journal ArticleDOI
TL;DR: A graphical software system that provides an automatic support to the extraction of information from web pages by exploiting the visual appearance of the information in the document and is driven by the spatial relations occurring among the elements in the page.
Abstract: In this paper we present a graphical software system that provides an automatic support to the extraction of information from web pages. The underlying extraction technique exploits the visual appearance of the information in the document, and is driven by the spatial relations occurring among the elements in the page. However, the usual information extraction modalities based on the web page structure can be used in our framework, too. The technique has been integrated within the Spatial Relation Query (SRQ) tool. The tool is provided with a graphical front-end which allows one to define and manage a library of spatial relations, and to use a SQL-like language for composing queries driven by these relations and by further semantic and graphical attributes.
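As a rough, hypothetical illustration of querying page elements by spatial relations (the element names, the relation predicate, and the SQL-like query string are invented and do not reflect the SRQ tool's actual syntax):

```python
# Hypothetical sketch: extract a value by its spatial relation to a label,
# in the spirit of an SQL-like spatial query over page elements.
elements = [
    {"id": "label_price", "text": "Price:", "x": 100, "y": 240, "w": 60, "h": 20},
    {"id": "value_price", "text": "$19.99", "x": 170, "y": 240, "w": 70, "h": 20},
    {"id": "footer",      "text": "Contact", "x": 100, "y": 900, "w": 80, "h": 20},
]

def right_of(a, b, max_gap=30):
    """True if element a lies immediately to the right of b on the same row."""
    same_row = abs(a["y"] - b["y"]) < b["h"]
    gap = a["x"] - (b["x"] + b["w"])
    return same_row and 0 <= gap <= max_gap

# Roughly: SELECT e.text FROM page e, page l
#          WHERE l.text = 'Price:' AND RIGHT_OF(e, l)
anchors = [e for e in elements if e["text"] == "Price:"]
hits = [e["text"] for e in elements for l in anchors if right_of(e, l)]
print(hits)   # ['$19.99']
```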

Journal ArticleDOI
TL;DR: A suite of visual languages for specifying access and security policies according to the role-based access control (RBAC) model is proposed, together with a system implementing the proposed visual languages.
Abstract: The definition of security policies in information systems and programming applications is often accomplished through traditional low-level languages that are difficult to use. This is a remarkable drawback if we consider that security policies are often specified and maintained by top-level enterprise managers who would probably prefer to use simplified, metaphor-oriented policy management tools. To support all the different kinds of users, we propose a suite of visual languages to specify access and security policies according to the role-based access control (RBAC) model. Moreover, a system implementing the proposed visual languages is presented. The system provides a set of tools to enable a user to visually edit security policies and to successively translate them into XACML (eXtensible Access Control Markup Language) code, which can be managed by a Policy Based Management System supporting such a policy language. The system and the visual approach have been assessed by means of usability studies and several case studies. The one presented in this paper regards the configuration of access policies for a multimedia content management platform providing video streaming services also accessible through mobile devices.

Journal ArticleDOI
TL;DR: A new concentric-circle visualization method for visualizing multi-dimensional network data that can be used to identify the main features of network attacks, such as DDoS attack, by displaying their recognizable visual patterns is proposed.
Abstract: With the rapid growth of networked data communications in size and complexity, network administrators today are facing more challenges to protect their networked computers and devices from all kinds of attacks. This paper proposes a new concentric-circle visualization method for visualizing multi-dimensional network data. This method can be used to identify the main features of network attacks, such as DDoS attacks, by displaying their recognizable visual patterns. To reduce edge overlaps and crossings, we arrange multiple axes as concentric circles rather than the traditional parallel lines. In our method, we use polycurves to link values (vertexes) rather than the polylines used in the parallel coordinates approach. Some heuristics are applied in our new method in order to improve the readability of views. We discuss the advantages as well as the limitations of our new method. In comparison with parallel coordinates visualization, our approach can reduce edge overlaps and crossings by more than 15%. In the second stage of the method, we further enhance the readability of views by increasing the edge crossing angle. Finally, we introduce our prototype system: a visual interactive network scan detection system called CCScanViewer. It is based on our new visualization approach, and the experiments have shown that the new approach is effective in detecting attack features from a variety of networking patterns, such as the features of network scans and DDoS attacks.
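A minimal sketch of the layout idea, with invented dimensions and a made-up record rather than CCScanViewer's actual implementation: each dimension gets its own concentric circle, and a record becomes a sequence of points that a renderer would join with smooth curves instead of straight polylines:

```python
# Illustrative sketch: place each dimension of a network record on its own
# concentric circle (one circle per axis, radius grows outwards).
import math

dimensions = ["src_ip", "dst_ip", "dst_port", "packet_count"]
radii = {dim: 1.0 + i for i, dim in enumerate(dimensions)}

def axis_point(dim, normalized_value):
    """Map a value in [0, 1] to a point on that dimension's circle."""
    angle = 2 * math.pi * normalized_value
    r = radii[dim]
    return (r * math.cos(angle), r * math.sin(angle))

# A single (already normalized) flow record.
record = {"src_ip": 0.12, "dst_ip": 0.80, "dst_port": 0.02, "packet_count": 0.95}
points = [axis_point(dim, record[dim]) for dim in dimensions]
for dim, (x, y) in zip(dimensions, points):
    print(f"{dim:13s} -> ({x:+.2f}, {y:+.2f})")
# A renderer would connect consecutive points with polycurves to reduce the
# edge overlaps and crossings produced by straight polylines.
```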

Journal ArticleDOI
TL;DR: The new taxonomy focuses on evaluating current animation languages and can be used by algorithm visualization system designers as a tool to compare visualization system languages with each other as well as for designing and implementing new systems and language features.
Abstract: In this paper, we present a taxonomy of algorithm animation languages that augments Price's well-known taxonomy of software visualization. Whereas Price's taxonomy is directed to classifying features and characteristics of visualization systems, the new taxonomy focuses on evaluating current animation languages. The taxonomy can be used by algorithm visualization system designers as a tool to compare visualization system languages with each other as well as for designing and implementing new systems and language features. In addition, the taxonomy provides guidelines to the features that are needed for transferring animations between systems. This is an ongoing project that elaborates upon the work reported on in a briefer version of the taxonomy.

Journal ArticleDOI
TL;DR: This paper proposes a concept and architecture for a generic geometry-based recognizer that not only recognizes single components, but can also understand sketched diagrams as a whole, and can resolve ambiguities by syntactical and semantical analysis.
Abstract: Many of today's recognition approaches for hand-drawn sketches are feature-based, which is conceptually similar to the recognition of hand-written text. While very suitable for the latter (and for other tasks, e.g., entering gestures as commands), such approaches do not easily allow for clustering and segmentation of strokes, which is crucial to their recognition. This results in applications which do not feel natural but impose artificial restrictions on the user regarding how sketches and single components (shapes) are to be drawn. This paper proposes a concept and architecture for a generic geometry-based recognizer. It is designed to address this issue of clustering and segmentation. All strokes are fed into independent preprocessors called transformers that process and abstract the strokes. The results of the transformers are stored in models. Each model is responsible for a certain type of primitive, e.g., a line or an arc. The advantage of models is that different interpretations of a stroke exist in parallel, and there is no need to rate or sort these interpretations. The recognition of a component in the drawing is then decomposed into the recognition of its primitives, which can be directly queried for in the models. Finally, the identified primitives are assembled into the complete component. This process is directed by an automatically computed search plan, which exploits shape characteristics in order to ensure efficient recognition. In several case studies the applicability and generality of the proposed approach are shown, as very different types of components can be recognized. Furthermore, the proposed approach is part of a complete system for sketch understanding. This system not only recognizes single components, but can also understand sketched diagrams as a whole, and can resolve ambiguities by syntactical and semantical analysis. A user study was conducted to obtain recognition rates and runtime data for our recognizer.
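The following simplified sketch (hypothetical tolerances and a deliberately naive rectangle check, not the paper's recognizer) conveys the transformer/model idea: a transformer abstracts strokes into line primitives, and a component recognizer queries the resulting line model:

```python
# Rough sketch of the transformer/model idea, heavily simplified.
def fit_line(stroke):
    """Naive 'transformer': accept a stroke as a line if its points stay close
    to the segment between its endpoints."""
    (x0, y0), (x1, y1) = stroke[0], stroke[-1]
    length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 or 1.0
    def dist(p):
        x, y = p
        return abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0) / length
    if all(dist(p) < 5.0 for p in stroke):          # tolerance in pixels
        return ((x0, y0), (x1, y1))
    return None

def recognize_rectangle(line_model):
    """Query the line model: four lines whose endpoints meet pairwise."""
    if len(line_model) != 4:
        return False
    ends = [pt for line in line_model for pt in line]
    # Eight endpoints should collapse to four (approximate) corners.
    corners = {(round(x, -1), round(y, -1)) for x, y in ends}
    return len(corners) == 4

strokes = [
    [(0, 0), (50, 1), (100, 0)], [(100, 0), (99, 60)],
    [(100, 60), (40, 61), (0, 60)], [(0, 60), (1, 30), (0, 0)],
]
line_model = []
for stroke in strokes:
    line = fit_line(stroke)
    if line:
        line_model.append(line)
print("rectangle?", recognize_rectangle(line_model))
```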

Journal ArticleDOI
TL;DR: Several of the more popular visual tools for multimedia authoring, designed to help non-programmers construct multimedia presentations, are reviewed.
Abstract: Multimedia authoring tools were originally developed more than 20 years ago to help non-programmers construct multimedia presentations, especially in the area of primary education. Since then, both the application scope and the target audience have broadened. In this paper, we review several of the more popular visual tools for multimedia authoring.

Journal ArticleDOI
TL;DR: A machine learning model is developed that uses script traits as features to predict reuse of macros and shows that its accuracy is comparable to that of existing machine learning models for predicting reuse-but with a much simpler structure.
Abstract: To help people find code that they might want to reuse, repositories of end-user code typically sort scripts by number of downloads, ratings, or other information based on prior uses of the code. However, this information is unavailable when the code is new or when it has not yet been reused. Addressing this problem requires identifying reusable code based solely on information that exists when a script is created. To provide such a model for web macro scripts, we identified script traits that might plausibly predict reuse, then used IBM CoScripter repository logs to statistically test how well each corresponded to actual reuse. These tests confirmed that the traits generally did correspond to higher levels of reuse as anticipated. We then developed a machine learning model that uses these traits as features to predict reuse of macros. Evaluating this model on repository logs showed that its accuracy is comparable to that of existing machine learning models for predicting reuse, but with a much simpler structure. Sensitivity analysis revealed that our model is quite robust; its quality is greatly reduced only when parameters are set to such extreme values that the model becomes inordinately selective. Testing the model with individual traits revealed those that provided the best predictions on their own. Based on these results, we outline opportunities for using our model to improve repositories of end-user code.
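As a hedged illustration of trait-based prediction (the trait names, weights, and logistic form below are invented for this sketch; the paper's actual model and features may differ):

```python
# Hypothetical sketch of predicting reuse from traits available at creation time.
import math

def predict_reuse_probability(traits, weights, bias=-2.0):
    """Simple logistic model over creation-time script traits."""
    z = bias + sum(weights[name] * value for name, value in traits.items())
    return 1.0 / (1.0 + math.exp(-z))

weights = {
    "has_description": 1.2,     # author wrote a human-readable description
    "num_steps": 0.05,          # longer scripts may capture more useful tasks
    "uses_parameters": 0.8,     # parameterized scripts generalize better
    "site_popularity": 1.5,     # automates a widely used website
}

new_macro = {"has_description": 1, "num_steps": 12,
             "uses_parameters": 1, "site_popularity": 0.7}
print(f"predicted reuse probability: {predict_reuse_probability(new_macro, weights):.2f}")
```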

Journal ArticleDOI
TL;DR: A novel framework is proposed that collects and processes relevant interaction data during the execution of haptic tasks, enabling the analysis of dependability vs. usability correlations.
Abstract: The dependability of a system commonly refers to its reliability, its availability and its maintainability (RAM), but when this concept is applied to user interfaces there is no common agreement on which aspects of user-system interaction are related to a satisfactory RAM level for the whole system. In particular, when dealing with haptic systems, interface dependability may become a crucial issue in medical and military domains where life-critical systems are to be manipulated, or where costly remote control operations are to be performed, as in industrial process control or in aerospace/automotive engineering and manufacturing. This paper discusses the role of dependability in haptic user interfaces, aiming at the definition of a framework for the assessment of the usability and dependability properties of haptic systems and their possible correlations. The research is based on the analysis of a visual-haptic-based simulator targeted at maintenance activity training for the aerospace industry, which is taken as a case study. As a result, we propose a novel framework able to collect and then process relevant interaction data during the execution of haptic tasks, enabling the analysis of dependability vs. usability correlations.

Journal ArticleDOI
TL;DR: This paper presents an unbiased discussion of the pragmatics of metamodeling tools against the background of this design rationale; the presented tool supports both standard and innovative metamodeling features oriented towards the principle of visual reification.
Abstract: In this article we present a metamodeling tool that is strictly oriented towards the needs of the working domain expert. The working domain expert looks for intuitive metamodeling features. In particular, these features include rich capabilities for specifying the visual appearance of models. Our research has identified an important design rationale for metamodeling tools that we call visual reification, which is the notion that metamodels are visualized the same way as their instances. Our tool supports both standard and innovative metamodeling features oriented towards the principle of visual reification. In this paper we present an unbiased discussion of the pragmatics of metamodeling tools against the background of this design rationale.

Journal ArticleDOI
TL;DR: This special issue intends to explore the use of visual design methodologies and artifacts to support teaching practitioners in their daily work, and to enhance the quality of TEL systems, facilitate sharing ideas, collaboration, reuse, and learning from experience.
Abstract: Submission Deadline: December 14, 2009

The Journal of Visual Languages and Computing (http://www.elsevier.com/locate/jvlc, impact factor: 0.863) invites authors to submit papers for the special issue on Visual Instructional Design Languages. This special issue follows up the VIDLATEL 2009 workshop (http://elearn.pri.univie.ac.at/vidlatel/) on the same topic, but it is also open to contributions that were not presented at the workshop. The special issue is scheduled to appear in December 2010.

Background
Many human activities are supported by the use of visual representations, which enable us to manage complex problems by enhancing our limited cognitive capabilities. Architects, musicians, surgeons and engineers use visual artifacts in their daily practice to plan, design and carry out their endeavors, in the form of music scores, images, diagrams, charts, etc. Visuals can support imagination, creative thinking, communication, discussion, organization, documentation, and formalization of ideas, plans or anything related to the tasks to be accomplished. Similarly, learning experiences, and in particular the difficult process of designing and implementing learning environments, could be supported by the appropriate use of visual artifacts. Achievement of learning is pursued through activities using learning objects, resources and tools. The ever increasing number of existing Technology-Enhanced Learning (TEL) tools, standards, and applications (e.g., Moodle, dotLRN, RELOAD, LAMS, to name a few) provides academic staff with many useful functionalities to design their TEL environments. There are a number of specifications that allow formal representation of learning processes and contents (e.g., SCORM, IMS LD, IMS CP, IMS QTI, etc.) intended to facilitate reuse and interoperability of solutions. Nevertheless, it is well known that these specifications are complex and difficult to work with for average practitioners, i.e., teachers, instructional designers, and facilitators. This special issue intends to explore the use of visual design methodologies and artifacts (languages, notation systems, standards, frameworks, tools, metaphors, applications) to support teaching practitioners in their daily work, to enhance the quality of TEL systems, and to facilitate sharing ideas, collaboration, reuse, and learning from experience.

Topics
We solicit manuscripts that provide original contributions to advance practice and/or theory of visual instructional design languages. Submissions are expected in, but not limited to, the following topics:
• Visual design languages and notation systems for instructional design
• Visual design applications and editing/authoring tools for instructional design
• Narrative techniques and instructional design
• User studies and case studies involving visual instructional design languages and applications
• Computational modeling in visual instructional design languages
• Human factors using visual instructional design languages
• Pattern-based visual instructional design
• Meta-models for visual instructional design languages
• Other topics of relevance to the special issue theme
Please note that the special issue will not include papers about the use of visuals in learning (e.g., related to image-text relationships or cognitive load). Also note that manuscripts are not limited to any particular category, i.e., submissions may be empirical studies, case studies, state-of-the-art reviews, methodological or theoretical, etc. However, each submission must clearly demonstrate its (a) practical value to the practitioner community or (b) sound advancement of theory.

Submissions
Only original papers will be considered. Manuscripts are accepted for review with the understanding that the same work has not been, will not be, and is not at present submitted elsewhere, and that its submission for publication has been approved by all of the authors; further, that any person cited as a personal communication has approved such citation. If your submission is an extended version of one or more conference/workshop papers, please include the original publication(s) in your submission and indicate the changes/improvements made for this special issue. Manuscripts and inquiries should be sent electronically to vidlatel@gmail.com, indicating in the subject header that the submission is intended for the Special Issue on Visual Instructional Design Languages. Manuscripts should be in PDF and follow the formatting instructions of the Journal of Visual Languages & Computing as indicated at http://www.elsevier.com/wps/find/journaldescription.cws_home/622907/authorinstructions. We limit the length of articles submitted to this special issue to 40 double-spaced pages with 3 cm (1.18 in) margins, including figures, tables, references, etc. To help us organize the review process, please indicate your intention to submit a manuscript by sending an abstract to vidlatel@gmail.com by 14 December 2009.

Schedule
Prospective authors are invited to make themselves known to the editors ahead of time by sending an abstract of the planned manuscript, to facilitate the management of submissions and ensure that the authors will be informed of any changes.
• Abstract submission: December 14, 2009
• Full manuscript submission: March 1, 2010
• Notification of review results: April 30, 2010
• Revised manuscripts due: June 14, 2010
• Notification of final decisions: July 9, 2010
• Camera-ready submissions due: August 9, 2010
• Papers sent to publisher: August 15, 2010
The special issue is scheduled to appear in Journal of Visual Languages and Computing, Vol. 21, No. 6, December 2010.

Further Information
Please contact the special issue editors at vidlatel@gmail.com for any inquiries.

Journal ArticleDOI
TL;DR: The diagram sketching tool InkKit is extended to combine software engineering sketches of different types to support software design processes by providing a sketch-based approach that allows the iterative creation of multiple outputs interacting with one another from the inter-linked models.
Abstract: Diagrams are often used to model complex systems: in many cases several different types of diagrams are used to model different aspects of the system. These diagrams, perhaps from multiple stakeholders of different specialties, must be combined to achieve a full abstract representation of the system. Many CAD tools offer multi-diagram integration; however, sketch-based diagramming tools are yet to tackle this difficult integration problem. We extend the diagram sketching tool InkKit to combine software engineering sketches of different types. Our extensions support software design processes by providing a sketch-based approach that allows the iterative creation of multiple interacting outputs from the inter-linked models. We demonstrate that InkKit can generate a functional system, consisting of a user interface with processes to submit and retrieve data from a database, from sketched user interface designs and sketched entity relationship diagrams.

Journal ArticleDOI
TL;DR: A dashboard tool called IssuePlayer is developed to study the trends in which different types of issues are submitted, handled and pile up in software projects, and to use that information to identify process symptoms, e.g., times when the code maintenance team is not responsive enough.
Abstract: This article presents a software visualization framework which can help project managers and team leaders in overseeing issues and their management in software development. To automate the framework, a dashboard tool called IssuePlayer is developed. The tool is used to study the trends in which different types of issues (e.g., bugs, support requests) are submitted, handled and pile up in software projects, and to use that information to identify process symptoms, e.g., the times when the code maintenance team is not responsive enough. The interactive nature of the tool enables identification of the team members who have not been as active as they were expected to be in such cases. The user can play, pause, rewind and forward the issue management histories using the tool. The tool is empirically evaluated by two industrial partners in North America and Europe. The survey and qualitative feedback support the usefulness and effectiveness of the tool in assessing the issue management processes and the performance of team members. The tool can be used complementarily, in parallel with the textual information provided by issue management tools (e.g., BugZilla), to enable team leaders to conduct effective and successful monitoring of issue management in software development projects.

Journal ArticleDOI
TL;DR: A modular application, based on the Petri Net formalism, is presented that allows for the specification of an interaction task and for the reuse of developed blocks in new virtual environment projects.
Abstract: This work presents a methodology to formally model and to build three-dimensional interaction tasks in virtual environments using three different tools: Petri Nets, the Interaction Technique Decomposition taxonomy, and Object-Oriented techniques. User operations in the virtual environment are represented as Petri Net nodes; these nodes, when linked, represent the stages of the interaction process. In our methodology, places represent all the states an application can reach, transitions define the conditions to start an action, and tokens embody the data manipulated by the application. As a result of this modeling process we automatically generate the core of the application's source code. We also use a Petri Net execution library to run the application code. In order to facilitate application modeling, we have adapted Dia, a well-known graphical diagram editor, to support Petri Net creation and code generation. The integration of these approaches results in a modular application, based on the Petri Net formalism, that allows for the specification of an interaction task and for the reuse of developed blocks in new virtual environment projects.
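A minimal Petri net sketch along these lines (illustrative only, with invented places and transitions rather than the authors' generated code) models a simple grab-and-move interaction task:

```python
# Minimal Petri-net sketch: places hold tokens for application states, and a
# transition fires when all of its input places are marked, modelling a simple
# "point -> grab -> move -> release" interaction task.
class PetriNet:
    def __init__(self, places, transitions):
        self.marking = dict(places)                 # place -> token count
        self.transitions = transitions              # name -> (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1

net = PetriNet(
    places={"idle": 1, "pointing": 0, "grabbing": 0, "moving": 0},
    transitions={
        "start_pointing": (["idle"], ["pointing"]),
        "grab_object":    (["pointing"], ["grabbing"]),
        "move_object":    (["grabbing"], ["moving"]),
        "release":        (["moving"], ["idle"]),
    },
)

for t in ["start_pointing", "grab_object", "move_object", "release"]:
    net.fire(t)
    print(t, "->", net.marking)
```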

Journal ArticleDOI
TL;DR: Results show that search-by-match and collaborative filtering are useful approaches for helping users to publish, find, and reuse visual programs similar to topes, and greatly improve the accuracy of search over the traditional keyword-based approach.
Abstract: To help users with automatically reformatting and validating spreadsheets and other datasets, prior work introduced a user-extensible data model called "topes" and a supporting visual programming language. However, no support has existed to date for users to exchange and reuse topes. This functional gap results in wasteful duplication of work as users implement topes that other people have already created. In this paper, a design for a new repository system is presented that supports sharing and finding of topes for reuse. This repository tightly integrates traditional keyword-based search with two additional search methods whose usefulness in repositories of end-user code has gone unexplored to date. The first method is "search-by-match", where a user specifies examples of data, and the repository retrieves topes that can reformat and validate that data. The second method is collaborative filtering, which has played a vital role in repositories of non-code artifacts. The repository's search functionality was empirically tested on a prototype repository implementation by simulating queries generated from real user spreadsheets. This experiment reveals that search-by-match and collaborative filtering greatly improve the accuracy of search over the traditional keyword-based approach, to a recall as high as 95%. These results show that search-by-match and collaborative filtering are useful approaches for helping users to publish, find, and reuse visual programs similar to topes.
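A toy sketch of the search-by-match idea (the tope names, validation patterns, and matching threshold are invented for illustration, not the repository's actual implementation):

```python
# Illustrative sketch of "search-by-match": the user supplies example values,
# and the repository returns the stored topes whose validation patterns accept them.
import re

repository = {
    "US phone number": r"^\(?\d{3}\)?[ -]?\d{3}-\d{4}$",
    "ISO date":        r"^\d{4}-\d{2}-\d{2}$",
    "Email address":   r"^[\w.+-]+@[\w-]+\.[\w.]+$",
}

def search_by_match(examples, min_fraction=1.0):
    """Return topes that validate at least min_fraction of the examples."""
    hits = []
    for name, pattern in repository.items():
        matched = sum(bool(re.match(pattern, ex)) for ex in examples)
        if matched / len(examples) >= min_fraction:
            hits.append(name)
    return hits

print(search_by_match(["(412) 268-3000", "555-867-5309"]))   # ['US phone number']
```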

Journal ArticleDOI
TL;DR: This special issue is comprised of works selected from two related workshops on sketching that highlight a subset of the advances made in the field of sketch computing.
Abstract: Sketching is, in essence, a means of quickly visualizing information. You may want to sketch an example of an object to better explain it to a colleague, or the sketch may be an engineering diagram or model of a computer system. While we have powerful computer tools for formally rendering engineering diagrams and diagram modeling, sketching remains the preferred first visualization method for most designers. Sketch recognition interfaces are computer applications that provide a paper-like interface for people to sketch their ideas. The research challenges include unobtrusive yet intelligent interaction support, translation between sketches and formal representations, and the use of sketch tools for writing and drawing training. Underlying the user interface we need accurate and fast recognition algorithms; the development of better recognizers is one key to better sketch interfaces. Sketch computing is a relatively young field of computer science that can trace its roots to the development of pen-based input in the 1960s; however, interest has increased in recent years as stylus- and touch-enabled displays have become more available. This special issue is comprised of works selected from two related workshops on sketching. The first, titled "Special Track on Sketch Computing", was part of the Visual Languages and Computing 2008 (VLC '08) conference that was held jointly with Distributed Multimedia Systems in Boston, Massachusetts, USA on September 5, 2008. The second workshop, "Sketch Tools for Diagramming", was part of the Visual Languages and Human-Centric Computing 2008 (VL/HCC '08) Conference, held in Herrsching am Ammersee, Germany on September 15, 2008. There are many fine works related to sketch computation in the domain of visual languages and computation, and we are pleased to include in this issue four papers selected from the two workshops above that highlight a subset of the advances made in the field.