
Showing papers on "Graphics" published in 1997


Book
01 Jan 1997
TL;DR: In this article, an introduction to computational geometry focusing on algorithms is presented, with all techniques related to particular applications in robotics, graphics, CAD/CAM, and geographic information systems.
Abstract: This introduction to computational geometry focuses on algorithms. Motivation is provided from the application areas as all techniques are related to particular applications in robotics, graphics, CAD/CAM, and geographic information systems. Modern insights in computational geometry are used to provide solutions that are both efficient and easy to understand and implement.

4,805 citations


Book
01 Nov 1997
TL;DR: The book/CD package offers readers the opportunity to practice visualization using a complete C++ programming environment developed by the authors.
Abstract: From the Publisher: Visualization is a part of everyday life. From weather map generation to financial modelling to MRI technology in medicine to 3D graphics used in movies like Jurassic Park, examples of visualization abound. The book/CD package offers readers the opportunity to practice visualization using a complete C++ programming environment developed by the authors.

1,973 citations


Journal ArticleDOI
TL;DR: This work describes a heavily modified version of MolScript that has added syntax for describing complicated coloring schemes and also has new graphics commands for controlling the coloring of atoms, bonds, and molecules.
Abstract: Owing to its flexibility, MolScript has become one of the most widely used programs for generating publication-quality molecular graphics. Integration with the Raster3D package, to allow the production of photorealistic rendered images, has increased its popularity still further. However, this intensive use has shown the need for enhancement of some areas of the program, especially for controlling the coloring of atoms, bonds, and molecules. This work describes a heavily modified version of MolScript that has added syntax for describing complicated coloring schemes and also has new graphics commands. Enhancements include drawing split-bond ball-and-stick models, smoothly varying the color of molecules (color ramping), abrupt color changes within secondary structural units, and the creation of dashed bonds. Making use of these added features is simple because all MolScript syntax is still supported and one typically needs only to add a few control commands. The final section of this article suggests some uses for this modified MolScript and provides illustrative examples.
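The color ramping mentioned above boils down to interpolating a color along the residue sequence. A minimal sketch of that idea, assuming a simple linear RGB ramp (MolScript's actual commands and color model are not reproduced here):

```python
# Hedged sketch of color ramping: linearly interpolate an RGB color along a
# chain of N residues. Illustrative only; not MolScript syntax or its renderer.
def color_ramp(n_residues, start=(0.0, 0.0, 1.0), end=(1.0, 0.0, 0.0)):
    colors = []
    for i in range(n_residues):
        t = i / (n_residues - 1) if n_residues > 1 else 0.0
        colors.append(tuple(s + t * (e - s) for s, e in zip(start, end)))
    return colors

# First and last residues get the ramp endpoints; the rest vary smoothly.
print(color_ramp(5)[0], color_ramp(5)[-1])
```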

1,695 citations


Book
01 Jul 1997
TL;DR: This comprehensive introduction develops the fundamental concepts and techniques of implicit surface modeling, rendering, and animating in terms accessible to anyone with a basic background in computer graphics.
Abstract: From the Publisher: Implicit surfaces offer special effects animators, graphic designers, CAD engineers, graphics students, and hobbyists a new range of capabilities for the modeling of complex geometric objects. In contrast to traditional parametric surfaces, implicit surfaces can easily describe smooth, intricate, and articulatable shapes. These powerful yet easily understood surfaces are finding use in a growing number of graphics applications. This comprehensive introduction develops the fundamental concepts and techniques of implicit surface modeling, rendering, and animating in terms accessible to anyone with a basic background in computer graphics. The book provides a thorough overview of implicit surfaces with a focus on their applications in graphics, explains the best methods for designing, representing, and visualizing implicit surfaces, and surveys the latest research. With contributions from seven graphics authorities, this innovative guide establishes implicit surfaces as a powerful and practical tool for animation and rendering.
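To make the contrast with parametric surfaces concrete, here is a minimal sketch of an implicit surface: a "blobby" field summed from point sources, with the surface defined where the field crosses an iso-value. The field function, radius, and iso-value are illustrative assumptions, not material from the book.

```python
# Hedged sketch of an implicit surface: a blobby field built from Gaussian point
# sources. Points where the field exceeds the iso-value are "inside"; the
# surface is the iso-contour. Illustrative only.
import math

def blobby_field(p, sources, strength=1.0, radius=1.0):
    return sum(strength * math.exp(-sum((pi - ci) ** 2 for pi, ci in zip(p, c)) / radius ** 2)
               for c in sources)

def inside(p, sources, iso=0.5):
    return blobby_field(p, sources) > iso

def normal(p, sources, eps=1e-4):
    """Surface normal from the negative field gradient (central differences)."""
    grad = []
    for axis in range(3):
        hi, lo = list(p), list(p)
        hi[axis] += eps
        lo[axis] -= eps
        grad.append((blobby_field(hi, sources) - blobby_field(lo, sources)) / (2 * eps))
    length = math.sqrt(sum(g * g for g in grad)) or 1.0
    return [-g / length for g in grad]

sources = [(0.0, 0.0, 0.0), (1.2, 0.0, 0.0)]      # two blobs that smoothly merge
print(inside((0.6, 0.0, 0.0), sources), normal((0.6, 0.3, 0.0), sources))
```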

745 citations


Proceedings ArticleDOI
30 Apr 1997
TL;DR: This work presents implementations and discussion of six techniques which allow manipulation of remote objects, as well as hybrid techniques which provide distinct advantages in terms of ease of use and efficiency because they consider the tasks of grabbing and manipulation separately.
Abstract: Grabbing and manipulating virtual objects is an important user interaction for immersive virtual environments. We present implementations and discussion of six techniques which allow manipulation of remote objects. A user study of these techniques was performed which revealed their characteristics and deficiencies, and led to the development of a new class of techniques. These hybrid techniques provide distinct advantages in terms of ease of use and efficiency because they consider the tasks of grabbing and manipulation separately. CR Categories and Subject Descriptors: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Virtual Reality; I.3.6 [Computer Graphics]: Methodology and Techniques - Interaction Techniques.

721 citations


Proceedings ArticleDOI
03 Aug 1997
TL;DR: This paper describes the use of DG interfaces for several parameter-setting problems: light selection and placement for image rendering, both standard and image-based; opacity and color transfer-function specification for volume rendering; and motion control for particle-system and articulated-figure animation.
Abstract: Image rendering maps scene parameters to output pixel values; animation maps motion-control parameters to trajectory values. Because these mapping functions are usually multidimensional, nonlinear, and discontinuous, finding input parameters that yield desirable output values is often a painful process of manual tweaking. Interactive evolution and inverse design are two general methodologies for computer-assisted parameter setting in which the computer plays a prominent role. In this paper we present another such methodology. Design Gallery™ (DG) interfaces present the user with the broadest selection, automatically generated and organized, of perceptually different graphics or animations that can be produced by varying a given input-parameter vector. The principal technical challenges posed by the DG approach are dispersion, finding a set of input-parameter vectors that optimally disperses the resulting output-value vectors, and arrangement, organizing the resulting graphics for easy and intuitive browsing by the user. We describe the use of DG interfaces for several parameter-setting problems: light selection and placement for image rendering, both standard and image-based; opacity and color transfer-function specification for volume rendering; and motion control for particle-system and articulated-figure animation. CR Categories: I.2.6 [Artificial Intelligence]: Problem Solving, Control Methods and Search—heuristic methods; I.3.6 [Computer Graphics]: Methodology and Techniques—interaction techniques; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism.
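The dispersion step described above can be illustrated with a simple greedy strategy: evaluate the mapping for many candidate parameter vectors and keep a subset whose output vectors are maximally spread out. The mapping function, distance metric, and sample counts below are stand-ins, not the paper's method.

```python
# Hedged sketch of dispersion via farthest-point sampling: keep k candidates
# whose output-value vectors are spread out. Illustrative only; the real system
# evaluates renderings/animations, not the toy mapping used here.
import random

def output_vector(params):
    return [p * p for p in params]          # stand-in for the expensive mapping

def disperse(candidates, k):
    outputs = [output_vector(c) for c in candidates]
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    chosen = [0]                            # seed with an arbitrary candidate
    while len(chosen) < k:
        best = max((i for i in range(len(candidates)) if i not in chosen),
                   key=lambda i: min(dist(outputs[i], outputs[j]) for j in chosen))
        chosen.append(best)
    return [candidates[i] for i in chosen]

candidates = [[random.random() for _ in range(3)] for _ in range(50)]
gallery = disperse(candidates, 8)           # 8 perceptually spread-out settings
```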

633 citations


Patent
09 Jun 1997
TL;DR: In this article, an apparatus and method are presented for selecting multimedia information such as video, audio, graphics and text residing on a plurality of Data Warehouses, relational database management systems (RDMS) or object-oriented database systems (ODBA) connected to the Internet or other network, and for linking the multimedia information across the Internet, or other networks, to any phrase, word, sentence and paragraph of text.
Abstract: Apparatus and method are disclosed for selecting multimedia information, such as video, audio, graphics and text residing on a plurality of Data Warehouses, relational database management systems (RDMS) or object-oriented database systems (ODBA) connected to the Internet or other network, and for linking the multimedia information across the Internet, or other network, to any phrase, word, sentence and paragraph of text; or numbers; or maps; charts, and tables; or still pictures and/or graphics; or moving pictures and/or graphics; or audio elements contained in documents on an Internet or intranet web site so that any viewer of a web site, or other network resource, can directly access updated information in the Data Warehouse or a database in real time. The apparatus and method each: (i) stores a plurality of predetermined authentication procedures (such as user names and passwords) to gain admittance to Data Warehouses or databases, (ii) stores the Universal Resource Locators of intranet and Internet addresses of a plurality of expert predetermined optimum databases or Data Warehouses containing text, audio, video and graphic information, or multimedia information relating to the information on the web site or other network resource; (iii) stores a plurality of expert-predetermined optimum queries for use in the search engines of each of the pre-selected databases, each query representing a discrete searchable concept as expressed by a word, phrase, sentence or paragraph of text, or any other media such as audio and video on a web site, or other network resource; and (iv) presents to the user the results of a search of the Data Warehouse or database through a graphical user interface (GUI) which coordinates and correlates viewer selection criteria with the expert optimum remote database selection and queries.

623 citations


Proceedings ArticleDOI
30 Apr 1997
TL;DR: This paper presents a set of interaction techniques for use in head-tracked immersive virtual environments, which can be applied to object selection, object manipulation, and user navigation in virtual environments.
Abstract: This paper presents a set of interaction techniques for use in head-tracked immersive virtual environments. With these techniques, the user interacts with the 2D projections that 3D objects in the scene make on his image plane. The desktop analog is the use of a mouse to interact with objects in a 3D scene based on their projections on the monitor screen. Participants in an immersive environment can use the techniques we discuss for object selection, object manipulation, and user navigation in virtual environments. CR Categories and Subject Descriptors: I.3.6 [Computer Graphics]: Methodology and Techniques - Interaction Techniques; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Virtual Reality. Additional Keywords: virtual worlds, virtual environments, navigation, selection, manipulation.
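A toy version of the image-plane idea: project object positions through a pinhole camera and select the object whose projection falls nearest a 2D cursor. The camera model and scene are assumptions for illustration, not the paper's specific techniques.

```python
# Hedged sketch of image-plane selection: project camera-space object centers
# onto the image plane and pick the one nearest the 2D cursor. Illustrative
# only; not the paper's head/hand-driven techniques.
def project(point, focal=1.0):
    x, y, z = point                         # camera space, z > 0 in front of eye
    return (focal * x / z, focal * y / z)

def pick(objects, cursor):
    """objects: name -> camera-space position; cursor: (u, v) on the image plane."""
    def dist2(name):
        u, v = project(objects[name])
        return (u - cursor[0]) ** 2 + (v - cursor[1]) ** 2
    return min(objects, key=dist2)

scene = {"chair": (0.5, 0.0, 4.0), "lamp": (-1.0, 0.5, 2.0)}
print(pick(scene, (0.1, 0.0)))              # object whose projection is nearest
```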

449 citations


Journal ArticleDOI
TL;DR: For linear object classes, it is shown that linear transformations can be learned exactly from a basis set of 2D prototypical views; preliminary evidence is also presented that the technique can effectively "rotate" high-resolution face images from a single 2D view.
Abstract: The need to generate new views of a 3D object from a single real image arises in several fields, including graphics and object recognition. While the traditional approach relies on the use of 3D models, simpler techniques are applicable under restricted conditions. The approach exploits image transformations that are specific to the relevant object class, and learnable from example views of other "prototypical" objects of the same class. In this paper, we introduce such a technique by extending the notion of linear class proposed by the authors (1992). For linear object classes, it is shown that linear transformations can be learned exactly from a basis set of 2D prototypical views. We demonstrate the approach on artificial objects and then show preliminary evidence that the technique can effectively "rotate" high-resolution face images from a single 2D view.
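The linear-class result can be illustrated with a least-squares fit: stack paired prototype views as rows, solve for the linear map between them, and apply that map to a novel view. The data below are random stand-ins, not the paper's face images.

```python
# Hedged sketch: learn a linear map from view A to view B of prototypes, then
# apply it to a novel object's view A. Toy random vectors stand in for
# flattened images; not the paper's experiments.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                        # length of a flattened view vector
true_map = rng.standard_normal((d, d))       # toy ground-truth transform

views_a = rng.standard_normal((20, d))       # 20 prototype views (view A)
views_b = views_a @ true_map                 # their corresponding view B

learned_map, *_ = np.linalg.lstsq(views_a, views_b, rcond=None)   # least squares

novel_a = rng.standard_normal(d)             # single view of a novel class member
novel_b = novel_a @ learned_map              # predicted second view
print(np.allclose(novel_b, novel_a @ true_map, atol=1e-6))
```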

447 citations


Proceedings ArticleDOI
03 Aug 1997
TL;DR: An optimization algorithm for constructing PSC representations for graphics surface models is developed, and the framework is demonstrated on models that are both geometrically and topologically complex.
Abstract: In this paper, we introduce the progressive simplicial complex (PSC) representation, a new format for storing and transmitting triangulated geometric models. Like the earlier progressive mesh (PM) representation, it captures a given model as a coarse base model together with a sequence of refinement transformations that progressively recover detail. The PSC representation makes use of a more general refinement transformation, allowing the given model to be an arbitrary triangulation (e.g. any dimension, non-orientable, non-manifold, non-regular), and the base model to always consist of a single vertex. Indeed, the sequence of refinement transformations encodes both the geometry and the topology of the model in a unified multiresolution framework. The PSC representation retains the advantages of PMs. It defines a continuous sequence of approximating models for runtime level-of-detail control, allows smooth transitions between any pair of models in the sequence, supports progressive transmission, and offers a space-efficient representation. Moreover, by allowing changes to topology, the PSC sequence of approximations achieves better fidelity than the corresponding PM sequence. We develop an optimization algorithm for constructing PSC representations for graphics surface models, and demonstrate the framework on models that are both geometrically and topologically complex. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling - surfaces and object representations.
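The progressive idea can be sketched in a few lines: a coarse base model plus an ordered list of refinement records, where replaying the first k records yields an intermediate level of detail. The sketch below uses a plain PM-style vertex split on a triangle list; the paper's generalized transformation for arbitrary simplicial complexes is not reproduced.

```python
# Hedged sketch of progressive refinement playback (simplified to a PM-style
# vertex split on triangles; not the paper's generalized vertex split).
class Mesh:
    def __init__(self, vertices, faces):
        self.vertices = list(vertices)       # (x, y, z) tuples
        self.faces = list(faces)             # vertex-index triples

def apply_split(mesh, split):
    """split: (parent_index, new_vertex_position, faces_added). The parent index
    would govern attribute inheritance in a fuller version."""
    parent, new_position, faces_added = split
    mesh.vertices.append(new_position)
    mesh.faces.extend(faces_added)

def level_of_detail(base, splits, k):
    mesh = Mesh(base.vertices, base.faces)
    for split in splits[:k]:                 # replay the first k refinements
        apply_split(mesh, split)
    return mesh

base = Mesh([(0, 0, 0)], [])                 # coarsest model: a single vertex
splits = [(0, (1, 0, 0), []),                # add a vertex
          (0, (0, 1, 0), [(0, 1, 2)])]       # add a vertex and a triangle
coarse = level_of_detail(base, splits, 1)
fine = level_of_detail(base, splits, 2)
print(len(coarse.vertices), len(fine.faces))
```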

394 citations


Patent
09 May 1997
TL;DR: In this article, a technique of primitive reprojection is employed to reduce the cost of image generation, with convex graphics primitives from the previous frame serving as the reprojected elements.
Abstract: A computer-implemented method of image generation that makes efficient use of reprojective techniques to reduce the cost of image generation. The method employs a technique of primitive reprojection in which convex graphics primitives are the reprojected elements. The visibility of elements known to be visible in a previous frame is first determined by transformation and depth-comparison rasterization of these elements. Regions of the image that may contain newly visible elements are located by occlusion-exposure transitions in the depth buffer and from incremental view volume motion. In these regions a depth-prioritized, data-access method of visible surface determination, spatial-subdivision ray casting, is employed to identify newly visible primitives which are added to the list of previously visible primitives for rasterization. The method employs a system of classifying objects based on their dynamic occlusive properties to increase the accuracy, efficiency and versatility of the reprojective approach. Because the method employs a hybrid approach to visible surface determination in which newly visible primitives are identified for each frame it can be implemented as a graphics server employing an efficient on-demand, progressive geometry transmission protocol for client-server image generation. This client-server system employs a method of visibility event encoding in which data representing newly visible and newly invisible primitives for each frame are transmitted to a client unit which is a conventional graphics display system. The visibility event codec method can also be used to encode and store information representing a computer animation for later interactive playback.

Proceedings ArticleDOI
Michael Gleicher
30 Apr 1997
TL;DR: This paper presents a method for editing a pre-existing motion such that it meets new needs yet preserves as much of the original quality as possible, and discusses the three central challenges of creating a constraint formulation that is rich enough to be effective, yet simple enough to afford fast solution.
Abstract: In this paper, we present a method for editing a pre-existing motion such that it meets new needs yet preserves as much of the original quality as possible. Our approach enables the user to interactively position characters using direct manipulation. A spacetime constraints solver finds these positions while considering the entire motion. This paper discusses the three central challenges of creating such an approach: defining a constraint formulation that is rich enough to be effective, yet simple enough to afford fast solution; providing a solver that is fast enough to solve the constraint problems at interactive rates; and creating an interface that allows users to specify and visualize changes to entire motions. We present examples with a prototype system that permits interactive motion editing for articulated 3D characters on personal computers. I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism – Animation; I.3.6 [Computer Graphics]: Methodology and Techniques - Interaction Techniques; G.1.6 [Numerical Analysis]: Optimization. Spacetime Constraints, Motion Displacement Mapping.
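One piece of this approach, motion displacement mapping, is easy to sketch: to make one frame of an existing motion meet a new constraint, add a smooth displacement that peaks at the edited frame and falls to zero nearby, so the rest of the motion keeps its original detail. The falloff window and curve below are assumptions; the paper's spacetime solver is not reproduced.

```python
# Hedged sketch of motion displacement mapping: hit a new value at one frame by
# adding a smooth, localized displacement to the original curve. Not the
# paper's spacetime-constraints formulation.
import math

def edit_motion(motion, frame, target, falloff=10):
    """motion: one joint value per frame; move `frame` to `target` smoothly."""
    delta = target - motion[frame]
    edited = []
    for t, value in enumerate(motion):
        d = abs(t - frame)
        weight = 0.0 if d >= falloff else 0.5 * (1.0 + math.cos(math.pi * d / falloff))
        edited.append(value + weight * delta)
    return edited

original = [math.sin(0.2 * t) for t in range(60)]    # a pre-existing motion curve
changed = edit_motion(original, frame=30, target=2.0)
print(round(changed[30], 3), changed[0] == original[0])   # hits target; ends untouched
```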

Book
01 Feb 1997
TL;DR: This presentation discusses the design of the Multimedia Toolset, a set of tools and resources for learning through interactive media, and some of the approaches taken in the development of this toolset.
Abstract: Introduction. LEARNING THROUGH INTERACTIVE MEDIA. 1. The Multimedia Learning Revolution. 2. Resources and Tools for Learning. 3. Simulation and Vicarious Experience. 4. Structured Learning. CONCEPTUAL DESIGN: 5. Strategic Approaches to Educational Multimedia Design. 6. Context and Multimedia Design. 7. Design Action Potential. PRESENTATION DESIGN: 9. The Multimedia Toolset. 10. Text and Graphics. 11. Animation. 12. Sound.

Proceedings ArticleDOI
03 Aug 1997
TL;DR: This work has developed algorithms that use caching and lazy creation of texture and geometry to manage scene complexity and increase locality of reference by dynamically reordering the rendering computation based on the contents of the cache.
Abstract: Simulating realistic lighting and rendering complex scenes are usually considered separate problems with incompatible solutions. Accurate lighting calculations are typically performed using ray tracing algorithms, which require that the entire scene database reside in memory to perform well. Conversely, most systems capable of rendering complex scenes use scan-conversion algorithms that access memory coherently, but are unable to incorporate sophisticated illumination. We have developed algorithms that use caching and lazy creation of texture and geometry to manage scene complexity. To improve cache performance, we increase locality of reference by dynamically reordering the rendering computation based on the contents of the cache. We have used these algorithms to compute images of scenes containing millions of primitives, while storing ten percent of the scene description in memory. Thus, a machine of a given memory capacity can render realistic scenes that are an order of magnitude more complex than was previously possible. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Raytracing
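The caching idea can be sketched with an LRU cache of geometry blocks that are loaded only when first touched, plus a queue of ray work that is reordered so work on already-resident blocks runs first. The block granularity, loader, and scoring below are assumptions, not the paper's renderer.

```python
# Hedged sketch of lazy geometry caching with work reordering: load geometry
# blocks on demand into a bounded LRU cache, and process queued ray work for
# resident blocks before work that would force a new load. Illustrative only.
from collections import OrderedDict

class GeometryCache:
    def __init__(self, capacity, loader):
        self.capacity, self.loader = capacity, loader
        self.blocks = OrderedDict()                  # block_id -> geometry data

    def resident(self, block_id):
        return block_id in self.blocks

    def get(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)        # recently used
        else:
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)      # evict least recently used
            self.blocks[block_id] = self.loader(block_id)   # lazy creation/load
        return self.blocks[block_id]

def render_queue(cache, work_items):
    """work_items: (block_id, ray) pairs; prefer work whose geometry is cached."""
    for block_id, ray in sorted(work_items, key=lambda w: not cache.resident(w[0])):
        geometry = cache.get(block_id)
        # ... intersect `ray` with `geometry` and shade (omitted) ...

cache = GeometryCache(capacity=2, loader=lambda b: f"tessellated block {b}")
render_queue(cache, [("A", 1), ("B", 2), ("A", 3), ("C", 4)])
```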

Proceedings ArticleDOI
03 Aug 1997
TL;DR: The InfiniteReality system architecture is described and novel features designed to handle extremely large texture databases, maintain control over frame rendering time, and allow user customization for diverse video output requirements are presented.
Abstract: The InfiniteRealityTM graphics system is the first general-purpose workstation system specifically designed to deliver 60Hz steady frame rate high-quality rendering of complex scenes. This paper describes the InfiniteReality system architecture and presents novel features designed to handle extremely large texture databases, maintain control over frame rendering time, and allow user customization for diverse video output requirements. Rendering performance expressed using traditional workstation metrics exceeds seven million lighted, textured, antialiased triangles per second, and 710 million textured antialiased pixels filled per second.

Proceedings ArticleDOI
30 Apr 1997
TL;DR: This paper exploits the presence of large occluders in urban and architectural models to design a real-time occlusion culling algorithm that is conservative, i.e., it overestimates the set of visible polygons; it exploits spatial coherence by using a hierarchical data structure; and it exploits temporalCoherence by reusing visibility information computed for previous viewpoints.
Abstract: Efficiently identifying polygons that are visible from a dynamic synthetic viewpoint is an important problem in computer graphics. Typically, visibility determination is performed using the z-buffer algorithm. As this algorithm must examine every triangle in the input scene, z-buffering can consume a significant fraction of graphics processing, especially on architectures that have a low performance or software z-buffer. One way to avoid needlessly processing invisible portions of the scene is to use an occlusion culling algorithm to discard invisible polygons early in the graphics pipeline. In this paper, we exploit the presence of large occluders in urban and architectural models to design a real-time occlusion culling algorithm. Our algorithm has the following features: it is conservative, i.e., it overestimates the set of visible polygons; it exploits spatial coherence by using a hierarchical data structure; and it exploits temporal coherence by reusing visibility information computed for previous viewpoints. The new algorithm significantly accelerates rendering of several complex test models.
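A toy version of the conservative test: a node of the scene hierarchy is culled only when its screen-space bounding rectangle lies entirely inside a single occluder's projected rectangle and entirely behind it; otherwise its children are visited. The axis-aligned rectangles and depths below are stand-ins for the paper's occluder machinery.

```python
# Hedged sketch of conservative hierarchical occlusion culling with large
# occluders: cull a node only if its screen rectangle is fully covered by one
# occluder's rectangle and the node lies entirely behind it. Illustrative only.
class Node:
    def __init__(self, rect, z_near, children=None, polygons=None):
        self.rect = rect                      # (xmin, ymin, xmax, ymax) on screen
        self.z_near = z_near                  # nearest depth of the node's contents
        self.children = children or []
        self.polygons = polygons or []

def covered_by(rect, occluder_rect):
    return (occluder_rect[0] <= rect[0] and occluder_rect[1] <= rect[1] and
            rect[2] <= occluder_rect[2] and rect[3] <= occluder_rect[3])

def visible_polygons(node, occluders):
    """occluders: (rect, depth) pairs. Conservative: may keep hidden polygons."""
    for occluder_rect, occluder_z in occluders:
        if covered_by(node.rect, occluder_rect) and node.z_near > occluder_z:
            return []                         # the whole subtree is occluded
    found = list(node.polygons)
    for child in node.children:
        found += visible_polygons(child, occluders)
    return found

leaf = Node((2, 2, 3, 3), z_near=10, polygons=["far building"])
root = Node((0, 0, 4, 4), z_near=1, children=[leaf], polygons=["near wall"])
print(visible_polygons(root, occluders=[((1, 1, 4, 4), 5)]))   # leaf gets culled
```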

Book
01 Jan 1997
TL;DR: In this article, the authors present a conceptual approach to educational multimedia design and present a multimedia toolset for learning through interactive media, including text and graphics, sound and animation.
Abstract: Introduction. LEARNING THROUGH INTERACTIVE MEDIA. 1. The Multimedia Learning Revolution. 2. Resources and Tools for Learning. 3. Simulation and Vicarious Experience. 4. Structured Learning. CONCEPTUAL DESIGN: 5. Strategic Approaches to Educational Multimedia Design. 6. Context and Multimedia Design. 7. Design Action Potential. PRESENTATION DESIGN: 9. The Multimedia Toolset. 10. Text and Graphics. 11. Animation. 12. Sound.

Journal ArticleDOI
TL;DR: All graphical objects and behaviors of those objects are explicitly represented at run time, so the system can provide a number of high level built-in functions, including automatic display and editing of objects, and external analysis and control of interfaces.
Abstract: The Amulet user interface development environment makes it easier for programmers to create highly interactive, graphical user interface software for Unix, Windows and the Macintosh. Amulet uses new models for objects, constraints, animation, input, output, commands, and undo. The object system is a prototype instance model in which there is no distinction between classes and instances or between methods and data. The constraint system allows any value of any object to be computed by arbitrary code and supports multiple constraint solvers. Animations can be attached to existing objects with a single line of code. Input from the user is handled by "interactor" objects which support reuse of behavior objects. The output model provides a declarative definition of the graphics and supports automatic refresh. Command objects encapsulate all of the information needed about operations, including support for various ways to undo them. A key feature of the Amulet design is that all graphical objects and behaviors of those objects are explicitly represented at run time, so the system can provide a number of high level built-in functions, including automatic display and editing of objects, and external analysis and control of interfaces. Amulet integrates these capabilities in a flexible and effective manner.
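The prototype-instance and constraint ideas can be sketched together: an object's unset slots delegate to its prototype, and a slot may hold a formula that is recomputed from other slots on demand. This is a minimal illustration in the spirit of the description, not Amulet's C++ API.

```python
# Hedged sketch of a prototype-instance object system with constraint slots:
# instances delegate missing slots to their prototype, and a slot may hold a
# formula evaluated against the requesting object. Not Amulet's actual API.
class Obj:
    def __init__(self, prototype=None):
        self.prototype = prototype
        self.slots = {}

    def set(self, name, value):
        self.slots[name] = value              # a plain value or formula(obj) -> value
        return self

    def get(self, name):
        obj = self
        while obj is not None:
            if name in obj.slots:
                value = obj.slots[name]
                return value(self) if callable(value) else value
            obj = obj.prototype               # walk the prototype chain
        raise AttributeError(name)

rectangle = Obj().set("left", 0).set("width", 50)
rectangle.set("right", lambda o: o.get("left") + o.get("width"))   # a constraint

instance = Obj(prototype=rectangle).set("left", 100)   # overrides one slot only
print(instance.get("right"))                           # 150, recomputed on demand
```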

Proceedings ArticleDOI
03 Aug 1997
TL;DR: A computational model of visual masking based on psychophysical data is developed that allows us to choose texture patterns for computer graphics images that hide the effects of faceting, banding, aliasing, noise and other visual artifacts produced by sources of error in graphics algorithms.
Abstract: In this paper we develop a computational model of visual masking based on psychophysical data. The model predicts how the presence of one visual pattern affects the detectability of another. The model allows us to choose texture patterns for computer graphics images that hide the effects of faceting, banding, aliasing, noise and other visual artifacts produced by sources of error in graphics algorithms. We demonstrate the utility of the model by choosing a texture pattern to mask faceting artifacts caused by polygonal tessellation of a flat-shaded curved surface. The model predicts how changes in the contrast, spatial frequency, and orientation of the texture pattern, or changes in the tessellation of the surface will alter the masking effect. The model is general and has uses in geometric modeling, realistic image synthesis, scientific visualization, image compression, and image-based rendering. CR Categories: I.3.0 [Computer Graphics]: General.
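The masking prediction can be pictured with a generic threshold-elevation curve: below a normalized masker contrast of one the detection threshold is unchanged, and above it the threshold grows as a power of the masker contrast. The exponent and numbers below are made-up illustrations, not the paper's fitted psychophysical model.

```python
# Hedged sketch of contrast masking as threshold elevation. Generic textbook
# form with illustrative constants; not the fitted model from the paper.
def threshold_elevation(masker_contrast, exponent=0.7):
    """masker_contrast is normalized by the masker's own detection threshold."""
    return max(1.0, masker_contrast ** exponent)

def artifact_visible(artifact_contrast, base_threshold, masker_contrast):
    """Predict visibility: the artifact must exceed the elevated threshold."""
    return artifact_contrast > base_threshold * threshold_elevation(masker_contrast)

# A faceting artifact visible on a smooth surface can be hidden by a texture
# whose contrast raises the local threshold.
print(artifact_visible(0.03, base_threshold=0.01, masker_contrast=0.0))   # True
print(artifact_visible(0.03, base_threshold=0.01, masker_contrast=8.0))   # False
```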

Journal ArticleDOI
TL;DR: The method, called interpolation synthesis, is based on motion capture data and it provides real time character motion for interactive entertainment or avatars in virtual worlds, which proves useful for both real time graphics and prerendered animation production.
Abstract: Most conventional media depend on engaging and appealing characters. Empty spaces and buildings would not fare well as television or movie programming, yet virtual reality usually offers up such spaces. The problem lies in the difficulty of creating computer generated characters that display real time, engaging interaction and realistic motion. Articulated figure motion for real time computer graphics offers one solution to this problem. A common approach stores a set of motions and lets you choose one particular motion at a time. The article describes a process that greatly expands the range of possible motions. Mixing motions selected from a database lets you create a new motion to exact specifications. The synthesized motion retains the original motions' subtle qualities, such as the realism of motion capture or the expressive, exaggerated qualities of artistic animation. Our method provides a new way to achieve inverse kinematics capability-for example, placing the hands or feet of an articulated figure in specific positions. It proves useful for both real time graphics and prerendered animation production. The method, called interpolation synthesis, is based on motion capture data and it provides real time character motion for interactive entertainment or avatars in virtual worlds.
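Interpolation synthesis can be sketched as a per-frame blend of time-aligned clips, with the blend weight solved so some property of the result (here the final value) meets a specification. The one-joint clips and linear weight solve below are illustrative assumptions.

```python
# Hedged sketch of interpolation synthesis: blend two time-aligned motion clips
# frame by frame, choosing the weight so the final frame hits a target value.
# One joint value per frame stands in for full motion-capture data.
def blend(clip_a, clip_b, w):
    return [[(1 - w) * a + w * b for a, b in zip(frame_a, frame_b)]
            for frame_a, frame_b in zip(clip_a, clip_b)]

def solve_weight(end_a, end_b, target):
    """Pick w so that (1 - w) * end_a + w * end_b == target, clamped to [0, 1]."""
    if end_b == end_a:
        return 0.0
    return max(0.0, min(1.0, (target - end_a) / (end_b - end_a)))

low_reach = [[0.0], [0.3], [0.6], [0.9]]      # two captured "reach" clips with
high_reach = [[0.0], [0.5], [1.0], [1.5]]     # different final hand heights

w = solve_weight(low_reach[-1][0], high_reach[-1][0], target=1.2)
new_clip = blend(low_reach, high_reach, w)
print(w, new_clip[-1][0])                     # 0.5, final frame reaches 1.2
```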

Patent
03 Jul 1997
TL;DR: In this paper, the concept of "animation by example" is used to define both input to and output from graphical objects in an object-oriented system by providing examples of what the user desires the graphical object to do.
Abstract: A system for providing a simple, easy to learn and flexible means of creating user interfaces to products under development without the need of a programming language or the need to learn a large set of complicated commands. The Visual Software Engineering ("VSE") system of the invention uses a simple concept of defining both input to and output from graphical objects in an object-oriented system by providing examples of what the user desires the graphical object to do. This technique is referred to herein as "animation by example". In accordance with this process, the user creates a user interface by drawing the user interface with a graphics editor and then defining the output behavior (i.e., graphics manipulation) of the user interface components by showing each state or frame as an animation. This is accomplished by changing the object using a graphic editor function such as move or rotate and storing each of the frames with the object as a behavior state. Just as with defining the output, the input is defined by giving the graphic object an example of what type of input to look for, and once it finds that input, it tells the object which frame to output or change to. Application code can then drive the animation or read the input by accessing the frame numbers assigned to each of the example frames.

Journal ArticleDOI
TL;DR: Generic ways of programming observer-related behaviour, such as brushing, dynamic re-expression, and dynamic comparison, are outlined and demonstrated to show that specialist dynamic views can be developed rapidly in an open, flexible, and high-level graphic environment.

Patent
16 Dec 1997
TL;DR: In this paper, the authors present a system, apparatus and method for transmitting logging data from a primary location to a remote location in near real time, where the logs can be viewed almost simultaneously at the primary and remote locations, as data is being acquired.
Abstract: The present invention provides a system, apparatus and method for transmitting logging data from a primary location to a remote location in near real time. The logs can be viewed almost simultaneously at the primary and remote locations, as data is being acquired. The present invention also provides for a system for viewing logs in near real time at a primary location and a remote location which includes a first means for reading while writing at the primary location, a second means for reading while writing at the remote location which is identical to the first means for reading while writing, a first file system at the primary location, the first file system having data written to it by the first means for reading while writing as numerical data or graphics data, a first rendering means for reading the graphics data from the first file system and rendering the graphics data so that it can be displayed, a first display means for displaying the rendered graphics data, a first file transfer utility means for transmitting the data from the primary location to the remote location over a communications system, a second file transfer utility means for receiving the data at the remote location, a second file system at the remote location, to which the second file transfer utility means writes the received data using the second means for reading while writing, a second rendering means for reading graphics data from the second file system and rendering the graphics data so that it can be displayed, an input interface means which directs signals from a user input to the second rendering means to adjust the display of the log, and a second display means for displaying the rendered graphics data at the remote location.

Proceedings ArticleDOI
TL;DR: An image and video search engine which utilizes both text-based navigation and content-based technology for searching visually through the catalogued images and videos is introduced.
Abstract: We describe a visual information system prototype for searching for images and videos on the World-Wide Web. New visual information in the form of images, graphics, animations and videos is being published on the Web at an incredible rate. However, cataloging this visual data is beyond the capabilities of current text-based Web search engines. In this paper, we describe a complete system by which visual information on the Web is (1) collected by automated agents, (2) processed in both text and visual feature domains, (3) catalogued and (4) indexed for fast search and retrieval. We introduce an image and video search engine which utilizes both text-based navigation and content-based technology for searching visually through the catalogued images and videos. Finally, we provide an initial evaluation based upon the cataloging of over one half million images and videos collected from the Web.
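The catalogue-and-search pipeline can be sketched with a text record plus one simple visual feature per image (a coarse color histogram), and a query score that mixes keyword overlap with histogram similarity. The feature, weights, and index layout are assumptions, not the prototype's implementation.

```python
# Hedged sketch of combined text + visual search: index each image with keywords
# and a coarse color histogram; score queries by keyword overlap plus histogram
# intersection. Illustrative only; not the described prototype.
def color_histogram(pixels, bins=4):
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        hist[(r * bins // 256) * bins * bins + (g * bins // 256) * bins + (b * bins // 256)] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

def similarity(h1, h2):
    return sum(min(a, b) for a, b in zip(h1, h2))        # histogram intersection

def search(index, keywords, query_hist, top=5):
    def score(entry):
        text = len(keywords & entry["words"]) / (len(keywords) or 1)
        return 0.5 * text + 0.5 * similarity(query_hist, entry["hist"])
    return sorted(index, key=score, reverse=True)[:top]

index = [{"url": "a.gif", "words": {"sunset", "beach"},
          "hist": color_histogram([(250, 120, 40)] * 16)},
         {"url": "b.gif", "words": {"diagram"},
          "hist": color_histogram([(255, 255, 255)] * 16)}]
print(search(index, {"sunset"}, color_histogram([(240, 110, 50)] * 16))[0]["url"])
```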

Journal ArticleDOI
TL;DR: A framework for Visualization and Optimization and the Modeling Life-cycle is presented, with a focus on Hypermedia and Virtual Reality.
Abstract: Preface. 1. Introduction. I. A Framework for Visualization and Optimization. 2. People. 3. Text and Tables. 4. Graphics and Animation. 5. Sound and Touch. 6. Hypermedia and Virtual Reality. II. Visualization and the Modeling Life-cycle. 7. Conceptual Models. 8. Formulation. 9. Algorithm Execution. 10. Solution Analysis. III. Visualization for Optimization. 11. Text. 12. Hypertext. 13. Networks and Graphs. 14. Multiple Dimensions. 15. Animation. 16. Sound, Touch and Virtual Reality. 17. Visualization Tools. 18. Integration. 19. Research and Future Directions. Colophon. Bibliography. Author Index. Subject Index.

Proceedings ArticleDOI
03 Aug 1997
TL;DR: This paper describes the final realization of PixelFlow, along with previously unpublished hardware and software enhancements.
Abstract: PixelFlow is an architecture for high-speed, highly realistic image generation, based on the techniques of object-parallelism and image composition. Its initial architecture was described in [MOLN92]. After development by the original team of researchers at the University of North Carolina, and codevelopment with industry partners, Division Ltd. and Hewlett-Packard, PixelFlow now is a much more capable system than initially conceived and its hardware and software systems have evolved considerably. This paper describes the final realization of PixelFlow, along with hardware and software enhancements heretofore unpublished. CR Categories and Subject Descriptors: C.5.4 [Computer System Implementation]: VLSI Systems; I.3.1 [Computer Graphics]: Hardware Architecture; I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism.
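The image-composition technique behind object-parallel rendering can be shown in software: each node rasterizes its share of the primitives into its own color and depth buffers, and the final image keeps the nearest sample per pixel across all partial images. This is a toy software stand-in, not the PixelFlow hardware compositor.

```python
# Hedged sketch of depth-based image composition: merge per-node partial
# renderings by keeping the nearest sample at each pixel. Software illustration
# only; PixelFlow performs this in hardware over a composition network.
def composite(partials, width, height, background=(0, 0, 0)):
    """partials: list of (color_image, depth_image) produced by each node."""
    color = [[background] * width for _ in range(height)]
    depth = [[float("inf")] * width for _ in range(height)]
    for node_color, node_depth in partials:
        for y in range(height):
            for x in range(width):
                if node_depth[y][x] < depth[y][x]:        # nearer sample wins
                    depth[y][x] = node_depth[y][x]
                    color[y][x] = node_color[y][x]
    return color

node0 = ([[(255, 0, 0), (255, 0, 0)]], [[2.0, 5.0]])      # 1x2 partial image
node1 = ([[(0, 0, 255), (0, 0, 255)]], [[3.0, 1.0]])      # from a second node
print(composite([node0, node1], width=2, height=1))       # red pixel, blue pixel
```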

Journal ArticleDOI
01 Mar 1997
TL;DR: This is the first part of a two-part paper that motivates and evaluates a method for the automatic conversion of images from visual to tactile form and the results of an experimental evaluation are presented and discussed.
Abstract: This is the first part of a two-part paper that motivates and evaluates a method for the automatic conversion of images from visual to tactile form. In this part, a broad-ranging background is provided in the areas of human factors, including the human sensory system, tactual perception and blindness, access technology for tactile graphics production, and image processing techniques and their appropriateness to tactile image creation. In Part II, this background is applied in the development of the TACTile Image Creation System (TACTICS), a prototype for an automatic visual-to-tactile translator. The results of an experimental evaluation are then presented and discussed, and possible future work in this area is outlined.

Proceedings ArticleDOI
20 Jun 1997
TL;DR: An automated system that classifies Web images as photographs or graphics is described, based on statistical observations about the image content of the two types, as well as learning techniques which make use of the vast amount of training data available on the Web.
Abstract: When we search for images in multimedia documents, we often have in mind specific image types that we are interested in; examples are photographs, graphics, maps, cartoons, portraits of people, and so on. This paper describes an automated system that classifies Web images as photographs or graphics. The design of the system is based on statistical observations about the image content of the two types, as well as learning techniques which make use of the vast amount of training data available on the Web. Text associated with the image can be used to further improve the accuracy of the classification. The system is used as a part of Webseer, an image search engine for the Web.
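The kind of image statistics such a classifier can exploit is easy to illustrate: graphics tend to contain few distinct colors and long runs of identical pixels, photographs the opposite. The features, weights, and threshold below are illustrative guesses, not the trained classifier from the paper.

```python
# Hedged sketch of a photograph-vs-graphics test from simple image statistics.
# Features and threshold are illustrative, not the paper's trained model.
def classify(pixels):
    """pixels: 2D list of RGB tuples. Returns 'graphic' or 'photograph'."""
    flat = [p for row in pixels for p in row]
    distinct_ratio = len(set(flat)) / len(flat)           # photos: close to 1

    run_lengths = []
    for row in pixels:
        run = 1
        for a, b in zip(row, row[1:]):
            if a == b:
                run += 1
            else:
                run_lengths.append(run)
                run = 1
        run_lengths.append(run)
    mean_run = sum(run_lengths) / len(run_lengths)         # graphics: long runs

    score = (1 - distinct_ratio) + min(mean_run, 10) / 10  # crude 0..2 score
    return "graphic" if score > 1.0 else "photograph"

flat_logo = [[(255, 0, 0)] * 8 for _ in range(4)]
noisy_photo = [[(x * 31 % 256, y * 57 % 256, (x + y) % 256) for x in range(8)]
               for y in range(4)]
print(classify(flat_logo), classify(noisy_photo))          # graphic photograph
```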

Proceedings ArticleDOI
30 Apr 1997
TL;DR: A new tracing algorithm is described that supports haptic rendering of NURBS surfaces without the use of any intermediate representation; by using this tracing algorithm in conjunction with algorithms for surface proximity testing and surface transitions, a complete haptic rendering system for sculptured models has been developed.
Abstract: A new tracing algorithm is described that supports haptic rendering of NURBS surfaces without the use of any intermediate representation. By using this tracing algorithm in conjunction with algorithms for surface proximity testing and surface transitions, a complete haptic rendering system for sculptured models has been developed. The system links an advanced CAD modeling system with a Sarcos force-reflecting exo-skeleton arm. A method for measuring the quality of the tracking component of the haptic rendering separately from the haptic device and force computation is also described. CR Descriptors: H.1.2 [Models and Principles] User/Machine Systems; C.3 [Special-Purpose and Application-Based Systems] Real-Time Systems; I.3.7 [Computer Graphics] Three-Dimensional Graphics and Realism; I.6.4 [Simulation and Modeling] Types of Simulation - Distributed; F.2.2 [Analysis of Algorithms and Problem Complexity] Nonnumerical Algorithms and Problems; J.6 [Computer-Aided Engineering].
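The tracing component can be pictured as closest-point tracking in the surface's parameter domain: keep the (u, v) of the contact point from the previous servo tick and refine it with a few descent steps toward the new probe position, so the contact point is followed directly on the surface with no polygonal intermediate. A simple sphere patch stands in for a NURBS evaluator; step sizes and iteration counts are illustrative, not the paper's algorithm.

```python
# Hedged sketch of parametric closest-point tracing: warm-start (u, v) from the
# previous tick and take a few numerical gradient steps on squared distance to
# the probe. A sphere patch stands in for a NURBS surface evaluator.
import math

def surface(u, v, radius=1.0):
    return (radius * math.cos(u) * math.cos(v),
            radius * math.sin(u) * math.cos(v),
            radius * math.sin(v))

def dist2(u, v, probe):
    return sum((s - p) ** 2 for s, p in zip(surface(u, v), probe))

def trace_step(u, v, probe, step=0.2, iters=20, h=1e-4):
    for _ in range(iters):
        du = (dist2(u + h, v, probe) - dist2(u - h, v, probe)) / (2 * h)
        dv = (dist2(u, v + h, probe) - dist2(u, v - h, probe)) / (2 * h)
        u, v = u - step * du, v - step * dv
    return u, v

u, v = 0.0, 0.0                                     # contact from the previous tick
for probe in [(1.0, 0.2, 0.1), (1.0, 0.3, 0.15)]:   # probe moves a little each tick
    u, v = trace_step(u, v, probe)
    print(round(u, 3), round(v, 3))                 # traced contact follows the probe
```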

Patent
10 Mar 1997
TL;DR: In this paper, a visual link mechanism is proposed for identifying addresses of locations in a plurality of remote systems wherein the local system is connected through a network to the plurality of distributed systems.
Abstract: A visual link mechanism residing in a local system for identifying addresses of locations in the plurality of remote systems wherein the local system is connected through a network to the plurality of remote systems. The visual link mechanism includes a visual link library and a network access mechanism responsive to a visual link including a displayable graphic icon for accessing the location represented by a selected graphic icon. Various structures of visual links are described, each being an entity existing independently of the system environment in which it resides, and the network access mechanism includes a layout table for storing a plurality of plans for arranging and displaying a plurality of visual link graphic icons in a display, a visual links organizer, a visual link screen saver, and a hash protection mechanism for detecting the unauthorized construction or modification of visual links or other forms of files. Also described is a visual link capture engine for extracting graphics information from a data file and generating a corresponding graphic icon and a display layout generator for generating display layouts of sets of predetermined numbers of displayable visual objects.