
Showing papers presented at "Advanced Visual Interfaces in 1996"


Proceedings ArticleDOI
27 May 1996
TL;DR: This work proposes a visual interface to handle the result of a query, based on a hybrid model for text, that provides several visual representations of the answer and its elements (queries, documents, and text), easing the analysis and the filtering process.
Abstract: Current user interfaces of full-text retrieval systems do not help in filtering the result of a query, which is usually very large. We address this problem and propose a visual interface to handle the result of a query, based on a hybrid model for text. This graphical user interface provides several visual representations of the answer and its elements (queries, documents, and text), easing the analysis and the filtering process.

131 citations


Proceedings ArticleDOI
27 May 1996
TL;DR: In this article, the authors proposed elastic windows with improved spatial layout and rapid multi-window operations, which are achieved by issuing operations on window groups hierarchically organized in a space-filling tiled layout.
Abstract: Most windowing systems follow the independent overlapping windows approach, which emerged as an answer to the application and technology needs of the 1980s. Advances in computers, display technology, and applications demand more functionality from window management systems. Based on these changes and the problems of current windowing approaches, we have updated the requirements for multi-window systems to guide new methods of window management. We propose elastic windows with improved spatial layout and rapid multi-window operations. Multi-window operations are achieved by issuing operations on window groups hierarchically organized in a space-filling tiled layout. Sophisticated multi-window operations and spatial layout dynamics help users to handle fast task-switching and to structure their work environment to their rapidly changing needs. We claim that these multi-window operations and the improved spatial layout decrease the cognitive load on users. Users found our prototype system to be comprehensible and enjoyable as they playfully explored the way multiple windows are reshaped.
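
A minimal sketch of the hierarchical window-group idea described above (Java; the class, fields, and equal-share tiling rule are illustrative assumptions, not taken from the paper):

```java
import java.util.ArrayList;
import java.util.List;

/** A node in an elastic-window hierarchy: a single window or a group whose children
 *  are tiled horizontally or vertically inside the parent's rectangle. */
class WindowNode {
    String title;
    boolean horizontal;                  // tiling direction for child windows
    final List<WindowNode> children = new ArrayList<>();
    double x, y, width, height;          // assigned by layout()

    WindowNode(String title) { this.title = title; }

    /** Space-filling tiled layout: children share the parent's area equally. */
    void layout(double px, double py, double w, double h) {
        x = px; y = py; width = w; height = h;
        if (children.isEmpty()) return;
        double step = (horizontal ? w : h) / children.size();
        for (int i = 0; i < children.size(); i++) {
            WindowNode c = children.get(i);
            if (horizontal) c.layout(px + i * step, py, step, h);
            else            c.layout(px, py + i * step, w, step);
        }
    }

    /** A multi-window operation issued on a group propagates to every window it contains. */
    void closeGroup() {
        System.out.println("closing " + title);
        for (WindowNode c : children) c.closeGroup();
    }
}
```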

89 citations


Proceedings ArticleDOI
Peter Pirolli, Ramana B. Rao
27 May 1996
TL;DR: The Table Lens as discussed by the authors is a visualization for searching for patterns and outliers in multivariate data sets, which supports a lightweight form of exploratory data analysis by integrating a familiar organization, the table, with graphical representations and a small set of direct manipulation operators.
Abstract: The Table Lens is a visualization for searching for patterns and outliers in multivariate datasets. It supports a lightweight form of exploratory data analysis (EDA) by integrating a familiar organization, the table, with graphical representations and a small set of direct manipulation operators. We examine the EDA process as a special case of a generic process, which we call sensemaking. Using a GOMS methodology, we characterize a few central EDA tasks and compare the performance of the Table Lens with one of the best of the more traditional graphical tools for EDA, Splus. This analysis reveals that the Table Lens is more or less on par with Splus in power, while requiring the use of fewer specialized graphical representations. It essentially combines the graphical power of Splus with the direct manipulation and generic properties of spreadsheets and relational database front ends. We also propose a number of design refinements that are suggested by our task characterizations and analyses.
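
A minimal sketch of the focus+context rendering at the heart of the Table Lens idea (Java; the bar encoding and focus rule are illustrative assumptions, not the paper's actual algorithm):

```java
/** Unfocused rows are drawn as compact bars whose length encodes the numeric value;
 *  the focused row shows the number itself at full detail. */
class TableLensSketch {
    static String renderCell(double value, double max, boolean focused) {
        if (focused) return String.format("%8.2f", value);      // full detail in the focus area
        int barLength = (int) Math.round(10 * value / max);     // compressed graphical context
        return "#".repeat(Math.max(0, barLength));
    }

    public static void main(String[] args) {
        double[] column = {3.2, 7.9, 1.4, 9.6};
        for (int row = 0; row < column.length; row++) {
            boolean focused = (row == 1);                        // user's current focus row
            System.out.println(renderCell(column[row], 10.0, focused));
        }
    }
}
```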

63 citations


Proceedings ArticleDOI
27 May 1996
TL;DR: The PPP Persona is presented, a tool which can be used for showing, explaining, and verbally commenting textual and graphical output on a window-based interface that follows the client/server paradigm.
Abstract: Animated agents - whether based on real video, cartoon-style drawings or even model-based 3D graphics - are likely to become integral parts of future user interfaces. We present the PPP Persona, a tool which can be used for showing, explaining, and verbally commenting on textual and graphical output on a window-based interface. The realization of the module follows the client/server paradigm, i.e., client applications can send requests for executing presentation tasks to the server. However, to achieve a lively and appealing behaviour of the animated agent, the server autonomously performs some actions, e.g., to span pauses or to react immediately to user interactions.

60 citations


Proceedings ArticleDOI
27 May 1996
TL;DR: A spreadsheet that uses a visual language for expressing formulae is extended to also incorporate user interface objects, increasing the utility of spreadsheets for investigating "what-if" scenarios.
Abstract: One of the primary uses of spreadsheets is in forecasting future events. This involves investigating "what-if" scenarios --- creating a spreadsheet, experimenting with different values for inputs, and observing how they affect the computed values. Unfortunately, current spreadsheets provide little support for this type of interaction. Data values must be typed in, and computed values can be observed only as numbers, or on simple charts. In this work we extend a spreadsheet that uses a visual language for expressing formulae to also incorporate user interface objects. This allows users to create any type of input and output interface they wish, increasing the utility of spreadsheets for investigating "what-if" scenarios.
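
A minimal sketch of the "what-if" loop the abstract describes, with a UI control driving an input cell (Java; the slider callback, cell names, and formula are illustrative assumptions):

```java
import java.util.function.DoubleSupplier;

/** An input cell wired to a UI control, with a formula cell recomputed whenever the
 *  input changes: the "what-if" loop described above. */
class WhatIfSketch {
    static double principal = 1000.0;                           // input cell, driven by a slider
    static DoubleSupplier interest = () -> principal * 0.05;    // formula cell

    static void onSliderMoved(double newValue) {
        principal = newValue;                                   // user drags the slider
        System.out.println("interest = " + interest.getAsDouble());  // output object redraws
    }

    public static void main(String[] args) {
        onSliderMoved(1000.0);
        onSliderMoved(2500.0);                                  // explore a different scenario
    }
}
```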

56 citations


Proceedings ArticleDOI
27 May 1996
TL;DR: Mocha is a distributed model with a client-server architecture that optimally partitions the software components of a typical algorithm animation system, and leverages the power of the Java language, an emerging standard for distributing interactive platform-independent applications across the Web.
Abstract: In this paper we propose a new model, called Mocha, for providing algorithm animation over the World Wide Web. Mocha is a distributed model with a client-server architecture that optimally partitions the software components of a typical algorithm animation system, and leverages the power of the Java language, an emerging standard for distributing interactive platform-independent applications across the Web. Mocha provides high levels of security, protects the algorithm code, places a light communication load on the Internet, and allows users with limited computing resources to access animations of computationally expensive algorithms. The user interface combines fast responsiveness and user friendliness with the powerful authoring capabilities of hypertext narratives. We describe the architecture of Mocha and show its advantages over previous methods for algorithm animation over the Internet. We also present a prototype of an animation system for geometric algorithms that can be accessed by any user with a WWW browser supporting Java (currently Netscape 2.0 and HotJava) at URL http://www.cs.brown.edu/people/jib/Mocha.html.
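
A minimal sketch of the client/server partitioning the abstract describes: the algorithm runs on the server and emits abstract animation events that a Java client merely renders, so the algorithm code never leaves the server (the event names, bubble-sort example, and in-process callback standing in for the network link are illustrative assumptions, not Mocha's actual protocol):

```java
/** An abstract animation event produced by the server-side algorithm. */
class AnimationEvent {
    final String operation;   // e.g. "compare", "swap"
    final int i, j;           // indices of the elements involved
    AnimationEvent(String operation, int i, int j) { this.operation = operation; this.i = i; this.j = j; }
}

/** The client only interprets and draws events; in Mocha they would arrive over the network. */
interface AnimationClient {
    void handle(AnimationEvent e);
}

class ServerSideBubbleSort {
    static void sort(int[] a, AnimationClient client) {
        for (int pass = 0; pass < a.length - 1; pass++)
            for (int k = 0; k < a.length - 1 - pass; k++) {
                client.handle(new AnimationEvent("compare", k, k + 1));
                if (a[k] > a[k + 1]) {
                    int tmp = a[k]; a[k] = a[k + 1]; a[k + 1] = tmp;
                    client.handle(new AnimationEvent("swap", k, k + 1));
                }
            }
    }

    public static void main(String[] args) {
        sort(new int[]{3, 1, 2}, e -> System.out.println(e.operation + " " + e.i + "," + e.j));
    }
}
```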

42 citations


Proceedings ArticleDOI
27 May 1996
TL;DR: This work seamlessly integrated algorithm animation capabilities into Forms/3, a declarative VPL in which evaluation is the continuous maintenance of a network of one-way constraints, and shows that a VPL that uses this constraint-based evaluation model can provide features not found in other algorithm animation systems.
Abstract: Until now, only users of textual programming languages have enjoyed the fruits of algorithm animation. Users of visual programming languages (VPLs) have been deprived of the unique semantic insights algorithm animation offers, insights that would foster the understanding and debugging of visual programs. To begin solving this shortcoming, we have seamlessly integrated algorithm animation capabilities into Forms/3, a declarative VPL in which evaluation is the continuous maintenance of a network of one-way constraints. Our results show that a VPL that uses this constraint-based evaluation model can provide features not found in other algorithm animation systems.
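
A minimal sketch of the constraint-based evaluation model the abstract refers to: each cell's formula is re-evaluated whenever a cell it depends on changes, and that re-evaluation step is a natural place to attach an animation hook (Java; the Cell class and the printed "animate" message are illustrative assumptions, not Forms/3 itself):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

/** A one-way constraint network: setting a source cell re-evaluates its dependents. */
class Cell {
    private double value;
    private Supplier<Double> formula;                  // null for plain input cells
    private final List<Cell> dependents = new ArrayList<>();

    void setFormula(Supplier<Double> formula, Cell... sources) {
        this.formula = formula;
        for (Cell s : sources) s.dependents.add(this);
        reevaluate();
    }

    void set(double v) { value = v; propagate(); }
    double get() { return value; }

    private void reevaluate() {
        value = formula.get();
        System.out.println("animate: cell updated to " + value);   // animation hook
        propagate();
    }

    private void propagate() { for (Cell d : dependents) d.reevaluate(); }

    public static void main(String[] args) {
        Cell a = new Cell(), b = new Cell();
        a.set(2);
        b.setFormula(() -> a.get() * 10, a);           // one-way constraint: b = a * 10
        a.set(5);                                      // b is re-evaluated automatically
    }
}
```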

31 citations


Proceedings ArticleDOI
27 May 1996
TL;DR: A framework for user-interfaces to databases (IDSs) is proposed which draws from existing research on human computer interaction (HCI) and database systems and a prototype system is presented, showing the potential for automated mapping of a language specification to a fully functional implementation.
Abstract: A framework for user-interfaces to databases (IDSs) is proposed which draws from existing research on human computer interaction (HCI) and database systems. The framework is described in terms of a classification of the characteristic components of an IDS. These components, when progressively refined, may be mapped to a conceptual object-oriented language for the precise specification of the IDS. A prototype system is presented, showing the potential for automated mapping of a language specification to a fully functional implementation. As well as providing general support to any database interface developer, we believe that the framework will prove useful for researching a number of IDS issues.

26 citations


Proceedings ArticleDOI
27 May 1996
TL;DR: A visual interface for computer-supported cooperative work (CSCW) that is an extension of the editor interface of ESCHER, a prototype database system based on the extended non-first-normal-form data model, and discusses its use in applications which require negotiated transactions.
Abstract: This paper introduces a visual interface for computer-supported cooperative work (CSCW). The interface is an extension of the editor interface of ESCHER, a prototype database system based on the extended non-first-normal-form data model. In ESCHER, the nested table approach is the paradigm for presenting data, where presenting includes browsing, editing and querying the database. Interaction is achieved by fingers generalising the well-known cursor concept. When several users are involved, the concept permits synchronous collaboration with the nested table acting as "whiteboard". We discuss its use in applications which require negotiated transactions, i.e. where the isolation principle of ACID-transactions gives way to negotiations. We also give examples of how interactive query formulation in a QBE-like fashion can support the collaboration. The arguments in the paper are mainly supported with screenshots taken from two applications, one of them also with non-textual data types which are seamlessly integrated into the nested tabular display paradigm.

23 citations


Proceedings ArticleDOI
27 May 1996
TL;DR: Some principles that are important in creating useful visualizations of the World Wide Web are discussed: layout, abstraction, focus, and interaction.
Abstract: We discuss some principles that we believe are important in creating useful visualizations of the World Wide Web. They are: layout, abstraction, focus, and interaction. We illustrate these points with examples from the work of our group at the University of Toronto.

21 citations


Proceedings ArticleDOI
27 May 1996
TL;DR: A system supporting pen-based input and diagram recognition that employs a personal digital assistant (PDA) as an intelligent input device for the system, which provides the opportunity to use hardware specially designed for shape recognition and editing in a general diagram recognition system.
Abstract: We present a system supporting pen-based input and diagram recognition that employs a personal digital assistant (PDA) as an intelligent input device for the system. Functionality is distributed between the PDA and the main computer, with the PDA performing low-level shape recognition and editing functions, and the back-end computer performing high-level recognition functions, including recognition of spatial relations between picture elements. This organization provides a number of advantages over conventional pen-based systems employing simple digitizing tablets. It provides the opportunity to use hardware specially designed for shape recognition and editing in a general diagram recognition system, it allows for improved performance through parallel processing, and it allows diagram entry to be performed remotely through use of the PDA front end in the field, with recognized shapes subsequently downloaded to the main diagram recognizer. We discuss the overall organization of the system, as well as the individual pieces and the communication between them, and describe two ongoing projects employing this architecture.

Proceedings ArticleDOI
Stuart K. Card
27 May 1996

Proceedings ArticleDOI
27 May 1996
TL;DR: Vineta is a system prototype allowing navigation through bibliographic data without the typing and revising of keyword-based queries and users can utilize their natural sense of space to interact with the system.
Abstract: Vineta is a system prototype allowing navigation through bibliographic data without the typing and revising of keyword-based queries. Our approach to visualizing documents and terms in navigational retrieval includes the representation of documents and terms as graphical objects, and dynamic positioning of these objects in the 3D virtual navigation space. Users can navigate through this virtual navigation space examining individual documents and clusters of documents at various levels of detail. Users can utilize their natural sense of space to interact with the system.

Proceedings ArticleDOI
27 May 1996
TL;DR: The concept of ZOOM NAVIGATION is presented, a new interaction paradigm to cope with visualization and navigation problems as found in large information and application spaces based on the pluggable zoom, an object-oriented component derived from the variable zoom fisheye algorithm.
Abstract: We present the concept of ZOOM NAVIGATION, a new interaction paradigm to cope with visualization and navigation problems as found in large information and application spaces. It is based on the pluggable zoom, an object-oriented component derived from the variable zoom fisheye algorithm. Working with a limited screen space, we apply a degree-of-interest (DOI) function to guide the level of detail used in presenting information. Furthermore, we determine the user's information and navigation needs by analysing the interaction history. This leads to the definition of the aspect-of-interest (AOI) function. The AOI is evaluated in order to choose one of several information aspects under which an item can be studied. This allows us to change navigational affordance and thereby enhance navigation. In this paper we describe the ideas behind the pluggable zoom and the definition of DOI and AOI functions. The application of these functions is demonstrated within two case studies, the ZOOM ILLUSTRATOR and the ZOOM NAVIGATOR. We discuss our experience with these implemented systems.
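
A minimal sketch of a degree-of-interest computation in the fisheye spirit this work builds on, with the DOI thresholded into a level of detail (Java; the formula, thresholds, and level names are illustrative assumptions, not the paper's variable-zoom algorithm or its AOI function):

```java
/** Fisheye-style DOI: a priori importance minus distance from the current focus. */
class DoiSketch {
    static double degreeOfInterest(double aprioriImportance, int distanceFromFocus) {
        return aprioriImportance - distanceFromFocus;
    }

    /** Map DOI to one of three presentation levels: full detail, label only, or icon only. */
    static String levelOfDetail(double doi) {
        if (doi >= 3) return "full";
        if (doi >= 1) return "label";
        return "icon";
    }

    public static void main(String[] args) {
        for (int dist = 0; dist <= 4; dist++) {
            double doi = degreeOfInterest(4.0, dist);
            System.out.println("distance " + dist + " -> " + levelOfDetail(doi));
        }
    }
}
```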

Proceedings ArticleDOI
27 May 1996
TL;DR: A general framework and layout algorithm that deals with arbitrary types of visual objects, allows objects to be viewed in any one of several different visual representations (at different levels of detail), and uses a small number of user-specified layouts to guide heuristic decisions for automatically deriving many other layouts in a manner that attempts to be consistent with the user's preferences is introduced.
Abstract: Among existing tools for laying out large collections of visual objects, some perform automatic layouts, possibly following some rules prespecified by the user, e.g., graph layout tools, while others let users specify layouts manually, e.g., CAD design tools. Most of them can only deal with specific types of visualizations, e.g., graphs, and some of them allow users to view visual objects at various levels of detail, e.g., tree-structure visualization tools. In this paper, we develop techniques that strike a balance between user specification and automatic generation of layouts, work at multiple granularities, and are generally applicable. In particular, we introduce a general framework and layout algorithm that (a) deals with arbitrary types of visual objects, (b) allows objects to be viewed in any one of several different visual representations (at different levels of detail), and (c) uses a small number of user-specified layouts to guide heuristic decisions for automatically deriving many other layouts in a manner that attempts to be consistent with the user's preferences. The algorithm has been implemented within the OPOSSUM database schema manager and has been rather effective in capturing the intuition of scientists from several disciplines who have used it to design their database and experiment schemas.

Proceedings ArticleDOI
27 May 1996
TL;DR: Mocha is a distributed system with a client-server architecture that optimally partitions the software components of a typical algorithm animation system, and leverages the power of the Java language, an emerging standard for distributing interactive platform-independent applications across the Web.
Abstract: We describe the implementation of a new system, called Mocha, for providing algorithm animation over the World Wide Web. Mocha is a distributed system with a client-server architecture that optimally partitions the software components of a typical algorithm animation system, and leverages the power of the Java language, an emerging standard for distributing interactive platform-independent applications across the Web.

Proceedings ArticleDOI
27 May 1996
TL;DR: This paper examines formal models of interactive systems and cognitive models of users and two forms of non-visual interaction: mathematics for the blind and interaction by smell (nasal interaction).
Abstract: Visual interfaces to computer systems are interactive. The cycle of visual interaction involves both visual perception and action. This paper examines formal models of interactive systems and cognitive models of users. Neither completely captures the special nature of visual interaction. In order to investigate this, the paper examines two forms of non-visual interaction: mathematics for the blind and interaction by smell (nasal interaction). Finally three forms of more pragmatic design-oriented method are considered: information rich task analysis (what information is required), status-event analysis (when it is perceived) and models of information (how to visually interact with it).

Proceedings ArticleDOI
27 May 1996
TL;DR: The flexibility of the VICKI (Visualisation Construction Kit) environment allows users to create IVAs with a level of functionality and an appearance suited to their specific needs.
Abstract: The human acquisition of insight into multivariate data can be greatly enhanced if users can view and interact with that data graphically. Many Interactive Visualisation Artifacts (IVAs) have been developed for such activities, but they tend to focus on a single task. The flexibility of the VICKI (Visualisation Construction Kit) environment allows users to create IVAs with a level of functionality and an appearance suited to their specific needs. This paper introduces the concepts behind VICKI and discusses issues of future development.

Proceedings ArticleDOI
27 May 1996
TL;DR: The goal is to empower individuals involved in design activities using the written medium, by amending it carefully with computational facilities, to preserve the fluidity and swiftness of design activities, and let users dynamically associate marks on the display surface with interpretations that provide interesting operations to the user.
Abstract: Our goal is to empower individuals involved in design activities using the written medium, by amending it carefully with computational facilities. To preserve the fluidity and swiftness of design activities, we let users dynamically associate marks on the display surface with interpretations that provide interesting operations to the user. Inherent to typical computer applications is a very static relationship between internal data structures and presentation. In contrast, applications in our system (we call them interpretations) have to be able to deal with a much more dynamic relationship between those areas. This paper motivates this idea, presents challenges faced by such an approach, explains a framework for designing and implementing such interpretations, and illustrates how exemplary interpretations make use of this framework.

Proceedings ArticleDOI
27 May 1996
TL;DR: An experimental image browser for medical imaging diagnosis implementing the query-by-pictorial-example philosophy for user interface is illustrated and a similarity matching between the query and an image to retrieve is defined.
Abstract: The paper describes a significant part of an experimental system for producing digital medical images, processing them to extract suitable spatial indexes, and storing and retrieving such images by content in order to provide users with an assisted visual browser to navigate a distributed archive. A prerequisite for the system described in this paper is that a physician should be able to manipulate the diagnostic images by simple visual commands that allow content-based access. In particular, the physician has to identify abnormalities (hot spots) in each image by determining their spatial locations, opacities, shapes and geometrical measures. Since our system needs the capability of retrieving images based on the presence of given patterns, it is necessary to define a similarity matching between the query and an image to retrieve. To efficiently perform such a matching, each image is stored together with a collection of metadata that are a very compact representation of the spatial contents of the image. These metadata form the index of the image. We illustrate an experimental image browser for medical imaging diagnosis implementing the query-by-pictorial-example philosophy for the user interface.
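
A minimal sketch of similarity matching over such compact hot-spot metadata (Java; the descriptors, weights, and distance measure are illustrative assumptions, not the system's actual index):

```java
/** A hot spot reduced to a few numeric descriptors, as stored in the image metadata. */
class HotSpot {
    double x, y;        // spatial location (normalized image coordinates)
    double area;        // geometric measure
    double opacity;     // radiological opacity
    HotSpot(double x, double y, double area, double opacity) {
        this.x = x; this.y = y; this.area = area; this.opacity = opacity;
    }
}

class SimilaritySketch {
    /** Smaller score = more similar; the weights are arbitrary placeholders. */
    static double distance(HotSpot query, HotSpot stored) {
        double positional = Math.hypot(query.x - stored.x, query.y - stored.y);
        return positional
             + 0.5 * Math.abs(query.area - stored.area)
             + 0.5 * Math.abs(query.opacity - stored.opacity);
    }

    public static void main(String[] args) {
        HotSpot query = new HotSpot(0.4, 0.6, 0.05, 0.8);
        HotSpot candidate = new HotSpot(0.42, 0.58, 0.06, 0.7);
        System.out.println("score = " + distance(query, candidate));
    }
}
```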

Proceedings ArticleDOI
27 May 1996
TL;DR: This Pictorial Query Language (PQL) for Geographic Information Systems (GIS) makes the formulation of complex queries easier and simplifies the user's approach to the system, while maintaining strong expressive power.
Abstract: In this paper a Pictorial Query Language (PQL) for Geographic Information Systems (GIS) is proposed. The user queries the GIS by drawing symbolic objects, combining them together and selecting the derived result among those proposed by the PQL. The interface used is part of the Scenario GIS, developed in an object-oriented environment. This PQL makes the formulation of complex queries easier and simplifies the user's approach to the system, while maintaining strong expressive power. A brief overview of the data structure types, the operators and the relations among geographic entities is given. The Visual Algebra and its operators are defined, and the pictorial operations associated with this algebra are described. Finally, an example query and its visual composition on the screen are shown.

Proceedings ArticleDOI
27 May 1996
TL;DR: The system developed is easy to use and provides comfortable mechanisms for browsing, manipulating and reusing query results as well as previous queries, thus making effective non-monotonic, progressive query processes feasible.
Abstract: The enormous popularity of the World Wide Web has made putting public access databases on the Web practically mandatory. Forms embedded within Web clients (e.g. Netscape) are therefore emerging as the most common interfaces for database querying. Should this solution be considered completely satisfactory? We highlight some of the important limits we experienced with forms and we propose a convenient alternative solution, based on direct manipulation of icons. The system we have developed is easy to use and provides comfortable mechanisms for browsing, manipulating and reusing query results as well as previous queries, thus making effective non-monotonic, progressive query processes feasible.

Proceedings ArticleDOI
27 May 1996
TL;DR: An original visual language is proposed for the symbolic representation of the semantics induced by the colour quality and arrangement over a painting, based on the concepts of colour semantics introduced by twentieth-century artists and developed to support a visual query paradigm.
Abstract: The availability of large image databases is emphasizing the relevance of filters, which make it possible to focus interest on a small subset of data. Taking advantage of the pictorial features of images, visual specification of such filters provides a powerful and natural way to express content-oriented queries. Albeit direct, the by-example paradigm does not allow the expression of high-level assertions on the pictorial content of images and, specifically, paintings. To support visual interaction without losing expressive power, an original visual language is herein proposed for the symbolic representation of the semantics induced by the colour quality and arrangement over a painting. The proposed language is based on the concepts of colour semantics introduced by twentieth-century artists and is developed to support a visual query paradigm. The present paper formalizes the grammar of the language and describes its implementation in a prototype system for painting retrieval by colour content.

Proceedings ArticleDOI
27 May 1996
TL;DR: This paper investigates the problem of querying a database of images and proposes a visual editor as an interaction tool, developed following a formal model (the PIE model), where properties such as completeness, reachability, and particularly undo, hold.
Abstract: In this paper, we investigate the problem of querying a database of images. In order to improve the communication between human and computer, we propose a visual editor as an interaction tool. Indeed, the simplest way to formulate a query to a database of images is to allow the user to draw a sketch of the picture he is interested in. This sketch will be used to formulate a query within the visual query system. This editor, called VisEd, has been developed following a formal model (the PIE model), in which properties such as completeness, reachability, and particularly undo, hold.

Proceedings ArticleDOI
27 May 1996
TL;DR: Browsing In Time & Space (BITS) is an interface designed to explore virtual ecosystems; it is based on a virtual notepad and pen metaphor and is inspired by the concept of logging.
Abstract: Browsing In Time & Space (BITS) is an interface designed to explore virtual ecosystems. A virtual ecosystem includes a three-dimensional terrain model background, collections of man-made and natural objects, and behavior and interaction rules between the objects and the background. BITS is based on a virtual notepad and pen metaphor and is inspired by the concept of logging. Physical props are used to represent the notepad and the pen. The notepad includes a Time & Space Slider to facilitate time and space traveling, a set of buttons and a list of commands to control the interaction and enable the manipulation of objects, and a Notes Area. The handwritten notes can be referenced in time and space with the use of logging marks. BITS is being implemented on a PC-based architecture using sensors to track the pen's movement and the notepad's position. The major problem with BITS is the poor representation of the notes written on the notepad using the sensor-based tracking system.

Proceedings ArticleDOI
27 May 1996
TL;DR: An implemented system is presented that automatically generates verbal and nonverbal behaviors during a conversation between 3D synthetic agents, with a focus on gaze patterns during speech.
Abstract: We present an implemented system that automatically generates verbal and nonverbal behaviors during a conversation between 3D synthetic agents. Dialogue with its appropriate intonation, as well as the accompanying facial expression, gaze and gesture, is computed. Our system integrates rules linking words and intonation, facial expression and intonation, gesture and words, gesture and intonation, and gaze and intonation, extracted from cognitive science studies. In the present paper we concentrate on gaze patterns during speech.

Proceedings ArticleDOI
27 May 1996
TL;DR: The main features of Virgilio are: being parametric with respect to the explored database, automatically producing a user-oriented view of the dataset, and describing visualized data by means of the VRML language.
Abstract: In this paper we introduce Virgilio, a system which generates VR-based visualizations of complex data objects representing the result of a query. Virgilio takes as input the dataset resulting from a query on a generic database and creates a corresponding visual representation composed of a collection of VRML (VR Modeling Language) scenes. The system uses a repository of real world objects (e.g., rooms, tables, portrait cases) which includes their visual aspect, the types of data they can support, as well as a containment relationship among pairs of objects. Virgilio works in the following way: (i) attribute values of the dataset are displayed on virtual world objects according to the capability of these objects to represent the proper type of data, (ii) semantic relationships among the objects in the dataset are represented using the containment relationship. The main features of Virgilio are: being parametric with respect to the explored database, automatically producing a user-oriented view of the dataset, and describing visualized data by means of the VRML language. A system prototype is currently being implemented. As an example, we provide a set of snapshots showing the scenes built by Virgilio to represent the result of queries defined on a database of musical CD records.
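
A minimal sketch of the mapping step described in (i) and (ii) above: attribute values are routed to virtual-world objects able to display their type, and a containing object represents the record itself (Java; the type-to-object table, object names, and the CD example values are illustrative assumptions, not Virgilio's actual repository):

```java
import java.util.List;
import java.util.Map;

/** Routes each attribute to a display object by data type; a containing object holds them. */
class VirgilioSketch {
    static final Map<String, String> displayObjectForType = Map.of(
            "text", "plaque",
            "image", "portrait case",
            "record", "room");          // containment: a room contains plaques and cases

    static void render(String entity, List<String[]> attributes) {
        System.out.println(displayObjectForType.get("record") + " for " + entity);
        for (String[] attr : attributes) {               // attr = {type, value}
            System.out.println("  " + displayObjectForType.get(attr[0]) + ": " + attr[1]);
        }
    }

    public static void main(String[] args) {
        render("CD: Kind of Blue", List.of(
                new String[]{"text", "Miles Davis, 1959"},
                new String[]{"image", "cover.jpg"}));
    }
}
```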

Proceedings ArticleDOI
27 May 1996
TL;DR: Before the potential of the intelligent pen and paper metaphor can be fully realized, integrated software tools are needed which support the construction of a user interface based on a particular visual language; the PENGUINS toolkit provides such tools.
Abstract: Recent hardware advances have brought interactive graphic tablets and pen-based notepad computers into the marketplace. This new technology, however, while offering great potential, has not yet been very successful. One of the main reasons is that software for pen-based computers is still immature: gesture recognition is poor and few applications make use of the new capabilities of the pen. The aim of the PENGUINS project is to provide tools that help in the development of software for pen-based computers which takes full advantage of the pen's new capabilities. The project is based on the intelligent pen and paper metaphor for human-computer interaction with pen-based computers [7]. In this interaction metaphor, the user communicates with the computer using an application-specific visual language composed of handwritten text and diagrams. This contrasts with the usual state of affairs, in which the input of diagrams is a cumbersome, indirect process, requiring sophisticated menu-based graphic editors with many complex modes. Intelligent pen and paper, however, promises that such input will be able to be given in free form, modelessly, in any order and place, and modified using natural gesture commands. Users will thus be able to express themselves directly in the lingua franca of their application domain using visual languages such as flowcharts. Before the potential of the intelligent pen and paper metaphor can be fully realized, integrated software tools are needed which support the construction of a user interface based on a particular visual language. The PENGUINS toolkit provides such tools.

Proceedings ArticleDOI
27 May 1996
TL;DR: The graphical user interface of Mosaico, an environment for the analysis and conceptual modeling of Object-Oriented database applications, is presented; it is capable of presenting the content of a conceptual model in diagrammatic form.
Abstract: In this paper we present the graphical user interface of Mosaico, an environment for the analysis and conceptual modeling of Object-Oriented database applications. Mosaico is based on a formalism, the Object-Oriented conceptual language TQL++, that appears friendlier than others. Nevertheless, to relieve the designer from knowing the details of TQL++, we developed an iconic interface that guides the construction of a database application specification. The output of the conceptual modeling phase is a knowledge base, which can be verified statically and, once transformed into executable code, can be tested with sample data. Furthermore, Mosaico is capable of presenting the content of a conceptual model in diagrammatic form. This facility has been implemented within an abstract diagram approach, which guarantees a high level of independence with respect to the drawing tool.

Proceedings ArticleDOI
27 May 1996
TL;DR: The concept of modal navigation is discussed as a technique that makes it possible to achieve both simplicity in user interaction and flexibility in tuning navigation styles to the specific needs of different categories of users.
Abstract: Hypermedia applications combine the flexibility of navigation-based access to information, typical of hypertext, with the communication power of multiple media, typical of multimedia systems. By their very nature, hypermedia applications support multimode interaction, i.e., interaction based on a combination of multiple modalities that are induced by different media and different navigation paradigms. The potentially huge number of mode combinations in hypermedia can accommodate a large variety of user needs and tasks. Multimode interaction, however, is intrinsically complex for users if several multimode paradigms coexist within the same application. This paper discusses the concept of modal navigation as a technique that makes it possible to achieve both simplicity in user interaction and flexibility in tuning navigation styles to the specific needs of different categories of users. According to modal navigation, the semantics of navigation commands depends upon the current setting of modes. Various paradigms are discussed for modal navigation that take into account different degrees of user control in the definition of mode configuration and mode resetting. The approach is exemplified by discussing a real-life hypermedia application under development at HOC in cooperation with the Poldi Pezzoli Museum in Milano.