
Showing papers on "Graphics published in 2007"


Journal ArticleDOI
TL;DR: This report describes, summarizes, and analyzes the latest research in mapping general-purpose computation to graphics hardware.
Abstract: The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, has made graphics hardware a compelling platform for computationally demanding tasks in a wide variety of application domains. In this report, we describe, summarize, and analyze the latest research in mapping general-purpose computation to graphics hardware. We begin with the technical motivations that underlie general-purpose computation on graphics processors (GPGPU) and describe the hardware and software developments that have led to the recent interest in this field. We then aim the main body of this report at two separate audiences. First, we describe the techniques used in mapping general-purpose computation to graphics hardware. We believe these techniques will be generally useful for researchers who plan to develop the next generation of GPGPU algorithms and techniques. Second, we survey and categorize the latest developments in general-purpose application development on graphics hardware. This survey should be of particular interest to researchers who are interested in using the latest GPGPU applications in their systems of interest.
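As a minimal illustration of the streaming model this report surveys, the sketch below emulates the GPGPU mapping on the CPU: arrays stand in for textures and a pure elementwise function stands in for a fragment program. The SAXPY kernel and all names here are illustrative, not taken from the report.

```python
import numpy as np

# In the GPGPU streaming model, a texture is a read-only array and a fragment
# program is a pure function applied independently at every element. This
# NumPy sketch emulates that mapping on the CPU.

def run_kernel(kernel, *input_streams):
    """Apply an elementwise 'fragment program' over input 'textures'."""
    return kernel(*input_streams)

# Example kernel: SAXPY (y = a*x + y), a classic GPGPU demonstration.
a = 2.0
x = np.arange(4, dtype=np.float32)   # [0, 1, 2, 3]
y = np.ones(4, dtype=np.float32)     # [1, 1, 1, 1]

out = run_kernel(lambda x, y: a * x + y, x, y)
print(out)  # [1. 3. 5. 7.]
```

On a GPU, the same kernel would run once per output pixel across the whole array in parallel; the CPU loop is hidden inside the vectorized NumPy expression.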

1,998 citations


Journal ArticleDOI
TL;DR: The book describes clearly and intuitively the differences between exploratory and confirmatory factor analysis, and discusses how to construct, validate, and assess the goodness of fit of a measurement model in SEM by confirmatory factor analysis.
Abstract: Examples are discussed to show the differences among discriminant analysis, logistic regression, and multiple regression. Chapter 6, “Multivariate Analysis of Variance,” presents advantages of multivariate analysis of variance (MANOVA) over univariate analysis of variance (ANOVA), discusses assumptions of MANOVA, and assesses validations of MANOVA assumptions and model estimation. The authors also discuss post hoc tests of MANOVA and multivariate analysis of covariance. Chapter 7, “Conjoint Analysis,” explains what conjoint analysis does and how it differs from other multivariate techniques. Guidelines for selecting attributes, models, and methods of data collection are presented. Chapter 8, “Cluster Analysis,” studies objectives, roles, and limitations of cluster analysis. Two basic concepts, similarity and distance, are discussed. The authors also discuss details of the five most popular hierarchical algorithms (single-linkage, complete-linkage, average-linkage, the centroid method, and Ward’s method) and three nonhierarchical algorithms (the sequential threshold method, the parallel threshold method, and the optimizing procedure). Profiles of clusters and guidelines for cluster validation are studied as well. Chapter 9, “Multidimensional Scaling and Correspondence Analysis,” introduces two interdependence techniques to display the relationships in the data. The book describes clearly and intuitively the differences between the two techniques and how each is performed. Chapters 10–12 cover topics in SEM. Chapter 10, “Structural Equation Modeling: An Introduction,” introduces SEM and related concepts such as exogenous and endogenous constructs, points out the differences between SEM and other multivariate techniques, and overviews the decision process of SEM.
Chapter 11, “Confirmatory Factor Analysis,” explains the differences between exploratory and confirmatory factor analysis and discusses how to construct, validate, and assess the goodness of fit of a measurement model in SEM by confirmatory factor analysis. Chapter 12, “Testing a Structural Model,” presents some methods of SEM for examining the relationships between latent constructs. This is an excellent book for people in management and marketing. For the Technometrics audience, it does not have much flavor of the physical, chemical, and engineering sciences. For example, partial least squares, a very popular method in chemometrics, is discussed but not in as much detail as other techniques in the book. Furthermore, due to the amount of material covered, the book might be inappropriate for someone who is new to multivariate analysis.

497 citations


Journal ArticleDOI
TL;DR: This paper investigates the effectiveness of animated transitions between common statistical data graphics such as bar charts, pie charts, and scatter plots; it proposes design principles for creating effective transitions and illustrates their application in DynaVis, a visualization system featuring animated data graphics.
Abstract: In this paper we investigate the effectiveness of animated transitions between common statistical data graphics such as bar charts, pie charts, and scatter plots. We extend theoretical models of data graphics to include such transitions, introducing a taxonomy of transition types. We then propose design principles for creating effective transitions and illustrate the application of these principles in DynaVis, a visualization system featuring animated data graphics. Two controlled experiments were conducted to assess the efficacy of various transition types, finding that animated transitions can significantly improve graphical perception.
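The basic building block of such animated transitions can be sketched as staged interpolation between chart states. The minimal example below (not DynaVis itself, whose implementation is not given here) linearly interpolates bar heights between a start and an end configuration:

```python
# Hedged sketch: generate in-between frames for an animated bar-chart
# transition by linearly interpolating each bar's height.

def transition_frames(start, end, n_frames):
    """Yield interpolated bar heights from start to end over n_frames steps."""
    for f in range(n_frames + 1):
        t = f / n_frames  # interpolation parameter in [0, 1]
        yield [s + t * (e - s) for s, e in zip(start, end)]

frames = list(transition_frames([10, 20, 30], [30, 10, 30], 2))
print(frames)  # [[10.0, 20.0, 30.0], [20.0, 15.0, 30.0], [30.0, 10.0, 30.0]]
```

A real system would additionally apply easing (non-linear timing) and stage the transition, e.g. fading out removed marks before moving the rest, which is the kind of design choice the paper's principles address.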

495 citations


Book
03 May 2007
TL;DR: This book provides an introduction to Geometric Algebra that gives a strong grasp of its relationship to linear algebra and its significance for 3D programming of geometry in graphics, vision, and robotics.
Abstract: "Within the last decade, Geometric Algebra (GA) has emerged as a powerful alternative to classical matrix algebra as a comprehensive conceptual language and computational system for computer science. This book will serve as a standard introduction and reference to the subject for students and experts alike. As a textbook, it provides a thorough grounding in the fundamentals of GA, with many illustrations, exercises and applications. Experts will delight in the refreshing perspective GA gives to every topic, large and small." – David Hestenes, Distinguished Research Professor, Department of Physics, Arizona State University

"Geometric Algebra is becoming increasingly important in computer science. This book is a comprehensive introduction to Geometric Algebra with detailed descriptions of important applications. While requiring serious study, it has deep and powerful insights into GA's usage. It has excellent discussions of how to actually implement GA on the computer." – Dr. Alyn Rockwood, CTO, FreeDesign, Inc., Longmont, Colorado

Until recently, almost all of the interactions between objects in virtual 3D worlds have been based on calculations performed using linear algebra. Linear algebra relies heavily on coordinates, however, which can make many geometric programming tasks very specific and complex; often a lot of effort is required to bring about even modest performance enhancements. Although linear algebra is an efficient way to specify low-level computations, it is not a suitable high-level language for geometric programming. Geometric Algebra for Computer Science presents a compelling alternative to the limitations of linear algebra. Geometric algebra, or GA, is a compact, time-effective, and performance-enhancing way to represent the geometry of 3D objects in computer programs. In this book you will find an introduction to GA that will give you a strong grasp of its relationship to linear algebra and its significance for your work.
You will learn how to use GA to represent objects and perform geometric operations on them, and you will begin mastering proven techniques for making GA an integral part of your applications in a way that simplifies your code without slowing it down.

Features:
* Explains GA as a natural extension of linear algebra and conveys its significance for 3D programming of geometry in graphics, vision, and robotics.
* Systematically explores the concepts and techniques that are key to representing elementary objects and geometric operators using GA.
* Covers in detail the conformal model, a convenient way to implement 3D geometry using a 5D representation space.
* Presents effective approaches to making GA an integral part of your programming.
* Includes numerous drills and programming exercises helpful for both students and practitioners.
* Companion web site includes links to GAViewer, a program that will allow you to interact with many of the 3D figures in the book, and Gaigen 2, the platform for the instructive programming exercises that conclude each chapter.

About the Authors: Leo Dorst is Assistant Professor of Computer Science at the University of Amsterdam, where his research focuses on geometrical issues in robotics and computer vision. He earned M.Sc. and Ph.D. degrees from Delft University of Technology and received a NYIPLA Inventor of the Year award in 2005 for his work in robot path planning. Daniel Fontijne holds a Master's degree in Artificial Intelligence and is a Ph.D. candidate in Computer Science at the University of Amsterdam. His main professional interests are computer graphics, motion capture, and computer vision. Stephen Mann is Associate Professor in the David R. Cheriton School of Computer Science at the University of Waterloo, where his research focuses on geometric modeling and computer graphics. He has a B.A. in Computer Science and Pure Mathematics from the University of California, Berkeley, and a Ph.D. in Computer Science and Engineering from the University of Washington.

* The first book on Geometric Algebra for programmers in computer graphics and entertainment computing
* Written by leaders in the field providing essential information on this new technique for 3D graphics
* This full colour book includes a website with GAViewer, a program to experiment with GA

450 citations


Proceedings ArticleDOI
29 Jul 2007
TL;DR: A set of rendering techniques for an autostereoscopic light field display able to present interactive 3D graphics to multiple simultaneous viewers 360 degrees around the display is described, and a multiple-center-of-projection rendering technique for creating perspective-correct images from arbitrary viewpoints around the display is presented.
Abstract: We describe a set of rendering techniques for an autostereoscopic light field display able to present interactive 3D graphics to multiple simultaneous viewers 360 degrees around the display. The display consists of a high-speed video projector, a spinning mirror covered by a holographic diffuser, and FPGA circuitry to decode specially rendered DVI video signals. The display uses a standard programmable graphics card to render over 5,000 images per second of interactive 3D graphics, projecting 360-degree views with 1.25 degree separation at up to 20 updates per second. We describe the system's projection geometry and its calibration process, and we present a multiple-center-of-projection rendering technique for creating perspective-correct images from arbitrary viewpoints around the display. Our projection technique allows correct vertical perspective and parallax to be rendered for any height and distance when these parameters are known, and we demonstrate this effect with interactive raster graphics using a tracking system to measure the viewer's height and distance. We further apply our projection technique to the display of photographed light fields with accurate horizontal and vertical parallax. We conclude with a discussion of the display's visual accommodation performance and discuss techniques for displaying color imagery.

290 citations


Journal ArticleDOI
TL;DR: This work presents a novel streaming CT framework that conceptualizes the reconstruction process as a steady flow of data across a computing pipeline, updating the reconstruction result immediately after the projections have been acquired.
Abstract: The recent emergence of various types of flat-panel x-ray detectors and C-arm gantries now enables the construction of novel imaging platforms for a wide variety of clinical applications. Many of these applications require interactive 3D image generation, which cannot be satisfied with inexpensive PC-based solutions using the CPU. We present a solution based on commodity graphics hardware (GPUs) to provide these capabilities. While GPUs have been employed for CT reconstruction before, our approach provides significant speedups by exploiting the various built-in hardwired graphics pipeline components for the most expensive CT reconstruction task, backprojection. We show that the timings so achieved are superior to those obtained when using the GPU merely as a multi-processor, without a drop in reconstruction quality. In addition, we also show how the data flow across the graphics pipeline can be optimized, by balancing the load among the pipeline components. The result is a novel streaming CT framework that conceptualizes the reconstruction process as a steady flow of data across a computing pipeline, updating the reconstruction result immediately after the projections have been acquired. Using a single PC equipped with a single high-end commodity graphics board (the Nvidia 8800 GTX), our system is able to process clinically-sized projection data at speeds meeting and exceeding the typical flat-panel detector data production rates, enabling throughput rates of 40–50 projections per second for the reconstruction of 512³ volumes.
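For readers unfamiliar with the backprojection step being accelerated, here is an illustrative CPU reference in NumPy. It assumes parallel-beam geometry and nearest-neighbor sampling; a real system like the one described uses filtered projections and fan- or cone-beam geometry.

```python
import numpy as np

# Illustrative CPU sketch of (unfiltered) backprojection: each 1D projection
# is smeared back along its viewing direction and the views are accumulated.

def backproject(sinogram, angles, size):
    """Accumulate each 1D projection back into a size x size image."""
    recon = np.zeros((size, size))
    c = size // 2
    ys, xs = np.mgrid[-c:size - c, -c:size - c]  # pixel coords, origin at center
    for proj, theta in zip(sinogram, angles):
        # detector bin hit by the ray through each pixel for this view
        t = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + c
        valid = (t >= 0) & (t < size)
        recon[valid] += proj[t[valid]]
    return recon / len(angles)

# A single projection with one bright detector bin, viewed from angle 0,
# smears back into a straight line through the volume.
size = 8
proj = np.zeros(size)
proj[size // 2] = 1.0
img = backproject(np.array([proj]), [0.0], size)
print(img[:, size // 2])  # column of ones along the backprojected ray
```

The per-pixel independence of the inner computation is exactly what lets the graphics pipeline's hardwired interpolation units perform it in parallel.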

250 citations


Patent
20 Aug 2007
TL;DR: In this article, an interactive television system with programming-related links is provided, which includes user television equipment on which interactive program guide and non-program-guide applications may be implemented.
Abstract: An interactive television system with programming-related links is provided. The system may include user television equipment on which interactive program guide and non-program-guide applications may be implemented. Information that is displayed in a display screen for a non-program-guide application may be related to programming. A display screen or overlay for programming that is related to the information may be displayed when a user selects the displayed information. The display or overlay for the programming may include advertisements, video, graphics, options, or programming descriptions. The display screen or overlay may have been displayed by the program guide application.

230 citations


Journal ArticleDOI
TL;DR: This survey analyzes multiresolution approaches that exploit a certain semi-regularity of the data, and discusses LOD error metrics and system-level data management aspects of interactive terrain visualization, including dynamic scene management, out-of-core data organization and compression, and numerical accuracy.
Abstract: Rendering high quality digital terrains at interactive rates requires carefully crafted algorithms and data structures able to balance the competing requirements of realism and frame rates, while taking into account the memory and speed limitations of the underlying graphics platform. In this survey, we analyze multiresolution approaches that exploit a certain semi-regularity of the data. These approaches have produced some of the most efficient systems to date. After providing a short background and motivation for the methods, we focus on illustrating models based on tiled blocks and nested regular grids, quadtrees and triangle bin-trees triangulations, as well as cluster-based approaches. We then discuss LOD error metrics and system-level data management aspects of interactive terrain visualization, including dynamic scene management, out-of-core data organization and compression, as well as numerical accuracy.

195 citations


Journal ArticleDOI
TL;DR: This work reformulates dynamic-programming-based alignment algorithms as streaming algorithms in terms of computer graphics primitives and shows that the GPU-based approach allows speedups of more than one order of magnitude with respect to optimized CPU implementations.
Abstract: Sequence alignment is a common and often repeated task in molecular biology. Typical alignment operations consist of finding similarities between a pair of sequences (pairwise sequence alignment) or a family of sequences (multiple sequence alignment). The need for speeding up this treatment comes from the rapid growth rate of biological sequence databases: every year their size increases by a factor of 1.5 to 2. In this paper, we present a new approach to high-performance biological sequence alignment based on commodity PC graphics hardware. Using modern graphics processing units (GPUs) for high-performance computing is facilitated by their enhanced programmability and motivated by their attractive price/performance ratio and incredible growth in speed. To derive an efficient mapping onto this type of architecture, we have reformulated dynamic-programming-based alignment algorithms as streaming algorithms in terms of computer graphics primitives. Our experimental results show that the GPU-based approach allows speedups of more than one order of magnitude with respect to optimized CPU implementations.
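The key observation behind such streaming reformulations is that all dynamic-programming cells on one anti-diagonal are mutually independent. The sketch below (plain Needleman-Wunsch global alignment with assumed match/mismatch/gap scores, not the authors' exact formulation) computes the alignment score by sweeping anti-diagonals, the order in which a GPU would evaluate cells in parallel:

```python
# Sketch of the streaming idea: in the alignment DP matrix, every cell on one
# anti-diagonal depends only on the two previous diagonals, so each diagonal
# can be evaluated in a single parallel pass (done sequentially here).

def align_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score, filled by anti-diagonals."""
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        H[i][0] = i * gap
    for j in range(m + 1):
        H[0][j] = j * gap
    # sweep anti-diagonals d = i + j; cells on one diagonal are independent
    for d in range(2, n + m + 1):
        for i in range(max(1, d - m), min(n, d - 1) + 1):
            j = d - i
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap,
                          H[i][j - 1] + gap)
    return H[n][m]

print(align_score("GATTACA", "GCATGCU"))
```

On a GPU each anti-diagonal becomes one rendering pass over a texture holding the previous diagonals, which is how the DP recurrence maps to graphics primitives.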

178 citations


Patent
29 Mar 2007
TL;DR: In this article, a system and method for entering icons and other pieces of media through an ambiguous text entry interface is presented. The system disambiguates the text entry and presents the user with a pick list of non-textual media associated with it.
Abstract: A system and method for entering icons and other pieces of media through an ambiguous text entry interface. The system receives text entry from users, disambiguates the text entry, and presents the user with a pick list of icons, emoticons, graphics, images, sounds, videos or other non-textual media that are associated with the text entry. The user may select one of the displayed pieces of media, and the text entry may be replaced or supplemented with the piece of media selected by the user. In some cases, the system presents the pick list of media to the user in an order that is related to the probability that the user will select the displayed media.

169 citations


Proceedings ArticleDOI
29 Jul 2007
TL;DR: The concept of visual equivalence is introduced, a new standard for image fidelity in graphics under which images are visually equivalent if they convey the same impressions of scene appearance, even if they are visibly different.
Abstract: Efficient, realistic rendering of complex scenes is one of the grand challenges in computer graphics. Perceptually based rendering addresses this challenge by taking advantage of the limits of human vision. However, existing methods, based on predicting visible image differences, are too conservative because some kinds of image differences do not matter to human observers. In this paper, we introduce the concept of visual equivalence, a new standard for image fidelity in graphics. Images are visually equivalent if they convey the same impressions of scene appearance, even if they are visibly different. To understand this phenomenon, we conduct a series of experiments that explore how object geometry, material, and illumination interact to provide information about appearance, and we characterize how two kinds of transformations on illumination maps (blurring and warping) affect these appearance attributes. We then derive visual equivalence predictors (VEPs): metrics for predicting when images rendered with transformed illumination maps will be visually equivalent to images rendered with reference maps. We also run a confirmatory study to validate the effectiveness of these VEPs for general scenes. Finally, we show how VEPs can be used to improve the efficiency of two rendering algorithms: Light-cuts and precomputed radiance transfer. This work represents some promising first steps towards developing perceptual metrics based on higher order aspects of visual coding.

Patent
30 May 2007
TL;DR: In this article, overall power consumption is reduced by transitioning from the higher power consuming graphics subsystem to the lower power consuming graphics subsystem, while placing the higher power consuming subsystem in a lower power consumption mode.
Abstract: Many computing device may now include two or more graphics subsystems. The multiple graphics subsystems may have different abilities, and may, for example, consume differing amount of electrical power, with one subsystem consuming more average power than the others. The higher power consuming graphics subsystem may be coupled to the device and used instead of, or in addition to, the lower power consuming graphics subsystem, resulting in higher performance or additional capabilities, but increased overall power consumption. By transitioning from the use of the higher power consuming graphics subsystem to the lower power consuming graphics subsystem, while placing the higher power consuming graphics subsystem in a lower power consumption mode, overall power consumption is reduced.

Journal ArticleDOI
TL;DR: In this article, the authors describe a set of rendering techniques for an autostereoscopic light field display able to present interactive 3D graphics to multiple simultaneous viewers 360 degrees around the display.
Abstract: We describe a set of rendering techniques for an autostereoscopic light field display able to present interactive 3D graphics to multiple simultaneous viewers 360 degrees around the display. The di...

01 Jan 2007
TL;DR: This paper presents methods and techniques that take advantage of modern graphics hardware for real-time tracking and recognition of feature points, focusing on the generation of feature vectors from input images in the various stages.
Abstract: With the addition of freely programmable components to modern graphics hardware, graphics processing units (GPUs) become increasingly interesting for general purpose computations, especially due to their parallel buffer processing. In this paper we present methods and techniques that take advantage of modern graphics hardware for real-time tracking and recognition of feature points, focusing on the generation of feature vectors from input images in the various stages. For the generation of feature vectors the Scale Invariant Feature Transform (SIFT) method [Low04a] is used due to its high stability against rotation, scale, and lighting condition changes in the processed images. We present results of the various stages of feature vector generation in our GPU implementation and compare it to the CPU version of the SIFT algorithm. The approach works well on GeForce 6 series graphics boards and above and takes advantage of new hardware features, e.g. dynamic branching and multiple render targets (MRT) in the fragment processor [KF05]. With the presented methods, feature tracking at real-time frame rates can be achieved on the GPU while the CPU is freed for other tasks.

Journal ArticleDOI
TL;DR: This article outlines the architecture and programming model of modern graphics cards for the lattice practitioner, with the goal of exploiting these chips for Monte Carlo simulations.

03 Jul 2007
TL;DR: In this article, the RapidMind general processing on GPU (GPGPU) framework supports evaluating an entire population of a quarter of a million individual programs on a non-trivial problem in 4 seconds.
Abstract: Mackey-Glass chaotic time series prediction and non-nuclear protein classification show the feasibility of evaluating genetic programming populations on SPMD parallel consumer gaming graphics processing units. A C++ framework on a regular diskless Linux KDE desktop equipped with a single nVidia GeForce 8800 GTX graphics processing unit card is demonstrated evolving programs at nearly a billion GP operations per second (895 million GPops). The RapidMind general processing on GPU (GPGPU) framework supports evaluating an entire population of a quarter of a million individual programs on a non-trivial problem in 4 seconds. An efficient reverse polish notation (RPN) tree-based GP is given.
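To make the RPN representation concrete, here is a minimal stack-machine evaluator for such genetic programs. This is a scalar CPU sketch with an illustrative operator set and program; the GPU version evaluates many programs over many fitness cases in parallel.

```python
# Hedged sketch of RPN-based GP evaluation: a program is a flat token list in
# reverse polish notation, executed by a simple stack machine. The same
# program can be mapped over a whole vector of fitness cases, which is the
# data-parallel workload the GPU runs across the population.

def eval_rpn(program, x):
    """Evaluate an RPN program over one input value x."""
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}
    stack = []
    for tok in program:
        if tok == 'x':
            stack.append(x)
        elif tok in ops:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))  # numeric constant
    return stack[0]

prog = ['x', 'x', '*', '1', '+']  # encodes x*x + 1
print([eval_rpn(prog, v) for v in [0, 1, 2, 3]])  # [1.0, 2.0, 5.0, 10.0]
```

The flat token list is what makes RPN attractive here: no tree pointers are needed, so a whole population of programs can be packed into GPU memory as plain arrays.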

Patent
07 Dec 2007
TL;DR: The field-of-view sensory augmentation tools provide updated visual, text, audio, and graphic information associated with the region-of-interest, adjusted for the positional frame of reference of the on-scene or remote personnel viewing the region-of-interest, map, document, or other surface.

Abstract: Systems, devices, and methods to provide tools that enhance the tactical or strategic situation awareness of on-scene and remotely located personnel involved with the surveillance of a region-of-interest using field-of-view sensory augmentation tools. The sensory augmentation tools provide updated visual, text, audio, and graphic information associated with the region-of-interest, adjusted for the positional frame of reference of the on-scene or remote personnel viewing the region-of-interest, map, document, or other surface. Annotations and augmented reality graphics are projected onto, and positionally registered with, objects or regions-of-interest visible within the field of view; a user looking through a see-through monitor may select the projected graphics for editing and manipulation via sensory feedback.

Journal ArticleDOI
TL;DR: In this article, a pixel shader is introduced for display of a high-resolution window over peripherally degraded stimulus, allowing real-time processing of still or streamed images, obviating the need for preprocessing or storage.
Abstract: Advancements in graphics hardware have allowed development of hardware-accelerated imaging displays. This article reviews techniques for real-time simulation of arbitrary visual fields over still images and video. The goal is to provide the vision sciences and perceptual graphics communities techniques for the investigation of fundamental processes of visual perception. Classic gaze-contingent displays used for these purposes are reviewed, and for the first time a pixel shader is introduced for display of a high-resolution window over a peripherally degraded stimulus. The pixel shader advances the current state of the art by allowing real-time processing of still or streamed images, obviating the need for preprocessing or storage.
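The per-pixel logic of such a gaze-contingent shader can be sketched in NumPy as a masked blend of a sharp image and a degraded one around the gaze point. This is only a stand-in for the shader; for simplicity the "degraded" image here is zeros rather than a blurred copy.

```python
import numpy as np

# Sketch of gaze-contingent display logic: blend a sharp image with a
# degraded version using a window mask centered on the gaze point. NumPy
# stands in for the fragment shader, which runs this per pixel, per frame.

def gaze_contingent(sharp, degraded, gaze, radius):
    """Show sharp pixels within `radius` of `gaze`, degraded pixels elsewhere."""
    h, w = sharp.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze[0], ys - gaze[1])
    mask = (dist <= radius).astype(float)  # 1 inside the window, 0 outside
    return mask * sharp + (1 - mask) * degraded

sharp = np.arange(64, dtype=float).reshape(8, 8)
degraded = np.zeros((8, 8))  # stand-in for a blurred copy of `sharp`
out = gaze_contingent(sharp, degraded, gaze=(4, 4), radius=2)
print(out[4, 4], out[0, 0])  # gaze region keeps detail; periphery is degraded
```

A hard binary mask is used for clarity; a smooth falloff (e.g. a Gaussian of the distance) would avoid a visible window border, which is one of the design considerations such displays face.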

Proceedings ArticleDOI
07 Jul 2007
TL;DR: This paper describes the technique of general purpose computing using graphics cards and how to extend this technique to genetic programming and demonstrates the improvement in the performance of genetic programming on single processor architectures which can be achieved by harnessing the computing power of these next generation graphics cards.
Abstract: In recent years the computing power of graphics cards has increased significantly. Indeed, the growth in the computing power of these graphics cards is now several orders of magnitude greater than the growth in the power of computer processor units. Thus these graphics cards are now beginning to be used by the scientific community as low-cost, high-performance computing platforms. Traditional genetic programming is a highly compute-intensive algorithm, but due to its parallel nature it can be distributed over multiple processors to increase the speed of the algorithm considerably. This is not applicable for single processor architectures, but graphics cards provide a mechanism for developing a data-parallel implementation of genetic programming. In this paper we describe the technique of general purpose computing using graphics cards and how to extend this technique to genetic programming. We demonstrate the improvement in the performance of genetic programming on single processor architectures which can be achieved by harnessing the computing power of these next generation graphics cards.

Book
05 Sep 2007
TL;DR: This richly illustrated book describes the use of interactive and dynamic graphics as part of multidimensional data analysis, with chapters on clustering, supervised classification, and working with missing values.
Abstract: This richly illustrated book describes the use of interactive and dynamic graphics as part of multidimensional data analysis. Chapters include clustering, supervised classification, and working with missing values. A variety of plots and interaction methods are used in each analysis, often starting with brushing linked low-dimensional views and working up to manual manipulation of tours of several variables. The role of graphical methods is shown at each step of the analysis, not only in the early exploratory phase, but in the later stages, too, when comparing and evaluating models. All examples are based on freely available software: GGobi for interactive graphics and R for static graphics, modeling, and programming. The printed book is augmented by a wealth of material on the web, encouraging readers to follow the examples themselves. The web site has all the data and code necessary to reproduce the analyses in the book, along with movies demonstrating the examples. The book may be used as a text in a class on statistical graphics or exploratory data analysis, for example, or as a guide for the independent learner. Each chapter ends with a set of exercises. The authors are both Fellows of the American Statistical Association, past chairs of the Section on Statistical Graphics, and co-authors of the GGobi software. Dianne Cook is Professor of Statistics at Iowa State University. Deborah Swayne is a member of the Statistics Research Department at AT&T Labs.

Journal ArticleDOI
TL;DR: This work proposes implementing a parallel EA on consumer graphics cards, which can be found in many PCs, letting more people use the parallel algorithm to solve large-scale, real-world problems such as data mining.
Abstract: We propose implementing a parallel EA on consumer graphics cards, which can be found in many PCs. This lets more people use our parallel algorithm to solve large-scale, real-world problems such as data mining. Parallel evolutionary algorithms run on consumer-grade graphics hardware achieve better execution times than ordinary evolutionary algorithms and offer greater accessibility than those run on high-performance computers.

Journal ArticleDOI
TL;DR: A SIMD algorithm is presented that performs the convolution-based DWT completely on a GPU, which brings us significant performance gain on a normal PC without extra cost.
Abstract: Discrete wavelet transform (DWT) has been heavily studied and developed in various scientific and engineering fields. Its multiresolution and locality nature facilitates applications requiring progressiveness and the capture of high-frequency detail. However, when dealing with enormous data volumes, its performance may degrade drastically. On the other hand, with the recent advances in consumer-level graphics hardware, personal computers nowadays are usually equipped with a graphics processing unit (GPU) based graphics accelerator which offers SIMD-based parallel processing power. This paper presents a SIMD algorithm that performs the convolution-based DWT completely on a GPU, which brings significant performance gains on a normal PC without extra cost. Although the forward and inverse wavelet transforms are mathematically different, the proposed algorithm unifies them into an almost identical process that can be efficiently implemented on the GPU. Different wavelet kernels and boundary extension schemes can be easily incorporated by simply modifying input parameters. To demonstrate its applicability and performance, we apply it to wavelet-based geometric design, stylized image processing, texture-illuminance decoupling, and JPEG2000 image encoding.
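A one-level convolution-based DWT of the kind mapped to the GPU can be sketched as filtering followed by downsampling. The example below uses Haar analysis kernels as an assumption; the paper's method accepts arbitrary kernels and boundary extension schemes as parameters.

```python
import numpy as np

# Illustrative one-level convolution-based DWT: convolve with low- and
# high-pass analysis filters, then keep every other sample. Kernel choice
# (Haar) is an assumption for this demo.

def dwt1d(signal):
    low = np.array([1.0, 1.0]) / np.sqrt(2)    # Haar analysis low-pass filter
    high = np.array([1.0, -1.0]) / np.sqrt(2)  # Haar analysis high-pass filter
    approx = np.convolve(signal, low)[1::2]    # filter, then downsample by 2
    detail = np.convolve(signal, high)[1::2]
    return approx, detail

a, d = dwt1d(np.array([4.0, 4.0, 2.0, 2.0]))
print(a)  # pairwise sums scaled by 1/sqrt(2)
print(d)  # details are zero on piecewise-constant input
```

Because both subbands are plain convolutions plus strided sampling, the forward and inverse transforms reduce to near-identical texture-filtering passes, which is the unification the paper exploits on the GPU.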

Journal ArticleDOI
TL;DR: Research investigated the application of the global positioning system and 3 degree-of-freedom (3-DOF) angular tracking to address the registration problem during interactive visualization of construction graphics in outdoor augmented reality (AR) environments to create an augmented outdoor environment where superimposed graphical objects stay fixed to their real world locations as the user navigates.
Abstract: This paper describes research that investigated the application of the global positioning system and 3 degree-of-freedom (3-DOF) angular tracking to address the registration problem during interactive visualization of construction graphics in outdoor augmented reality (AR) environments. The global position and the three-dimensional (3D) orientation of a user's viewpoint are tracked, and this information is reconciled with the known global position and orientation of superimposed computer-aided design (CAD) objects. Based on this computation, the relative translation and axial rotations between the user's viewpoint and the CAD objects are continually calculated. The relative geometric transformations are then applied to the CAD objects inside a virtual viewing frustum that coincides with the real-world space in the user's view. The result is an augmented outdoor environment where superimposed graphical objects stay fixed to their real-world locations as the user navigates. The algorithms are implemented in a software tool called UM-AR-GPS-ROVER that is capable of interactively placing static and dynamic 3D models at any location in outdoor augmented space. The concept and prototype are demonstrated with an example in which scheduled construction activities for the erection of a structural steel frame are graphically simulated in outdoor AR.
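The core registration computation described above can be sketched in a few lines: given the tracked global position and heading of the viewer and the known global pose of a CAD object, find the object's pose in the viewer's frame so it can be placed inside the viewing frustum. A yaw-only rotation is an illustrative simplification of the full 3-DOF orientation tracking.

```python
import numpy as np

def rot_z(yaw):
    """Rotation matrix for a heading (yaw) angle about the vertical axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def object_in_view_frame(viewer_pos, viewer_yaw, obj_pos, obj_yaw):
    """Relative translation and yaw of a CAD object w.r.t. the viewer."""
    rel_t = rot_z(viewer_yaw).T @ (np.asarray(obj_pos) - np.asarray(viewer_pos))
    rel_yaw = obj_yaw - viewer_yaw
    return rel_t, rel_yaw

# Viewer at the origin with yaw 90 degrees; an object 10 m away along the
# world y axis lands 10 m along the viewer's body x (forward) axis.
t, yaw = object_in_view_frame([0, 0, 0], np.pi / 2, [0, 10, 0], np.pi / 2)
```

Recomputing this relative transform every frame as the GPS and orientation sensors update is what keeps the superimposed graphics registered to their real-world locations.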

Journal ArticleDOI
TL;DR: The most common forms of color vision impairment are discussed; Color Oracle, a new software tool that assists the designer in verifying color schemes, is introduced; Color Oracle filters maps and graphics in real time and integrates efficiently with existing digital workflows.
Abstract: Eight percent of men are affected by color vision impairment – they have difficulties distinguishing between colors and thus confuse certain colors that the majority of people see readily. Designers of maps and information graphics cannot disregard the needs of this relatively large group of media consumers. This article discusses the most common forms of color vision impairment, and introduces Color Oracle, a new software tool that assists the designer in verifying color schemes. Color Oracle filters maps and graphics in real-time and efficiently integrates with existing digital workflows. The paper also discusses color combinations and alternative visual variables for map symbology that those with color vision impairments can distinguish unambiguously. The presented techniques help the cartographer produce maps that are easy to read for those with color vision impairments and can still look good for those with normal color vision.
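The kind of filtering a tool like Color Oracle performs can be sketched as a linear transform of RGB colors that simulates dichromat vision. The matrix below is a crude, widely circulated protanopia approximation used purely for illustration; production tools use more careful models applied in a linearized color space.

```python
import numpy as np

# Illustrative (approximate) protanopia simulation matrix -- an assumption
# for this sketch, not the transform Color Oracle actually uses.
PROTAN = np.array([
    [0.56667, 0.43333, 0.0],
    [0.55833, 0.44167, 0.0],
    [0.0,     0.24167, 0.75833],
])

def simulate_protanopia(rgb):
    """Map an RGB triple (components in 0..1) to its simulated appearance."""
    return np.clip(PROTAN @ np.asarray(rgb, dtype=float), 0.0, 1.0)

pure_red = simulate_protanopia([1.0, 0.0, 0.0])    # loses most of its redness
pure_green = simulate_protanopia([0.0, 1.0, 0.0])  # lands close to the red result
```

Running a color scheme through such a filter and checking that the simulated colors remain distinguishable is exactly the verification step the article recommends.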

Patent
24 Oct 2007
TL;DR: In this article, a computer-implemented method is performed at a portable electronic device with a touch screen display, which includes displaying graphics and an insertion marker at a first location in the graphics on the touch screen, and in response to the detected finger contact, expanding the insertion marker from a first size to a second size on the touchscreen display and expanding a portion of the graphics from an original size to an expanded size.
Abstract: In accordance with some embodiments, a computer-implemented method is performed at a portable electronic device with a touch screen display. The method includes: displaying graphics and an insertion marker at a first location in the graphics on the touch screen display; detecting a finger contact with the touch screen display; and in response to the detected finger contact, expanding the insertion marker from a first size to a second size on the touch screen display and expanding a portion of the graphics on the touch screen display from an original size to an expanded size. The method further includes detecting movement of the finger contact on the touch screen display and moving the expanded insertion marker in accordance with the detected movement of the finger contact from the first location to a second location in the graphics.
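The claimed interaction can be summarized as a small state machine: expand the marker on finger contact, track the finger while it moves, and restore the marker on release. The class below is a hypothetical sketch of that flow; all names and sizes are illustrative, and the patent's accompanying content magnification is only noted in a comment.

```python
class InsertionMarker:
    """Hypothetical insertion marker following the patent's touch flow."""

    def __init__(self, location, size=1.0, expanded_size=2.5):
        self.location = location
        self.size = self.base_size = size
        self.expanded_size = expanded_size

    def on_finger_down(self, touch_point):
        self.size = self.expanded_size   # expand the marker (the patent also
                                         # magnifies the surrounding graphics)

    def on_finger_move(self, touch_point):
        self.location = touch_point      # expanded marker tracks the finger

    def on_finger_up(self):
        self.size = self.base_size       # collapse back to the original size

marker = InsertionMarker(location=(10, 5))
marker.on_finger_down((10, 5))
marker.on_finger_move((42, 5))           # marker moves to the second location
marker.on_finger_up()
```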

Journal ArticleDOI
TL;DR: The computer program DRAWxtl produces crystal structure drawings in the form of an interactive screen representation, as well as VRML files for use on web pages and in classroom teaching, and creates input files for the popular Persistence of Vision Raytracer rendering program for publication-quality graphics.
Abstract: The computer program DRAWxtl produces crystal structure drawings in the form of an interactive screen representation, as well as VRML files for use on web pages and in classroom teaching, and creates input files for the popular Persistence of Vision Raytracer (POV-Ray) rendering program for publication-quality graphics, including generation of stereo pairs. DRAWxtl output produces the standard kinds of graphical representations: spheres, ellipsoids, bonds and polyhedra of any complexity. In addition, it can draw arrows to represent magnetic moments, show capped cones to indicate the location of lone-pair electrons and display Fourier contours in three dimensions. A unique feature of this program is the ability to plot incommensurately modulated and composite structures. This open-source program can be used with operating systems as diverse as Windows (9X, NT, 2000 and XP), Mac OS X, Linux and most other varieties of Unix.

Proceedings Article
01 Jan 2007
TL;DR: In this paper, the authors describe the design of acquisition devices and capture strategies for BRDFs and BSSRDFs, efficient factored representations, and a case study of capturing the appearance of human faces.
Abstract: Algorithms for scene understanding and realistic image synthesis require accurate models of the way real-world materials scatter light. This class describes recent work in the graphics community to measure the spatially- and directionally-varying reflectance and subsurface scattering of complex materials, and to develop efficient representations and analysis tools for these datasets. We describe the design of acquisition devices and capture strategies for BRDFs and BSSRDFs, efficient factored representations, and a case study of capturing the appearance of human faces.
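The "efficient factored representations" mentioned above commonly reorganize a measured 4D BRDF into a 2D matrix (rows indexed by incoming direction, columns by outgoing direction) and compress it with a low-rank factorization. A sketch using a truncated SVD follows; the synthetic "measurements" are illustrative stand-ins, not real reflectance data.

```python
import numpy as np

rng = np.random.default_rng(1)
wi = rng.uniform(size=(64, 3))                    # stand-ins for sampled
wo = rng.uniform(size=(64, 3))                    # incoming/outgoing directions
brdf = np.outer(wi @ [1, 2, 3], wo @ [3, 2, 1])   # rank-1 synthetic BRDF matrix

U, s, Vt = np.linalg.svd(brdf, full_matrices=False)
k = 1                                             # keep only the top factor
approx = (U[:, :k] * s[:k]) @ Vt[:k, :]           # compressed reconstruction
err = np.linalg.norm(brdf - approx) / np.linalg.norm(brdf)
```

The payoff is storage and evaluation cost: the two factors hold 2 x 64 x k values instead of the full 64 x 64 table, and a rendered sample needs only a k-term dot product.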

Proceedings ArticleDOI
18 Jun 2007
TL;DR: This paper presents a graphics processor based implementation of the Finite Difference Time Domain, which uses a central finite differencing scheme for solving Maxwell's equations for electromagnetics and shows how GPUs can be used to greatly speedup FDTD simulations.
Abstract: This paper presents a graphics processor based implementation of the Finite Difference Time Domain (FDTD), which uses a central finite differencing scheme for solving Maxwell's equations for electromagnetics. FDTD simulations can be very computationally expensive and require thousands of CPU hours to solve on traditional general purpose processors. Modern Graphics Processing Units (GPUs) found in desktop computers are programmable and are capable of much higher vector floating-point performance than general purpose CPUs. This paper shows how GPUs can be used to greatly speed up FDTD simulations. The main objective is to leverage GPU processing power for FDTD update calculations and complete computationally expensive simulations in reasonable time. This allows researchers to simulate much longer pulse lengths and larger models than was possible in the past. A new FDTD code was developed to leverage graphics processors using Linux, C, OpenGL, Cg, and commodity GeForce 7 series GPUs. The graphics hardware was accessed through standard OpenGL. The FDTD model space was then transferred to the GPU device memory through OpenGL textures and made host-readable via frame buffer objects exposed by the OpenGL 2.0 application programming interface (API). GPU fragment processors were utilized for the FDTD update computations via Cg fragment programs. For models that were sufficiently large, greater than 140³ cells, the GPU performed FDTD update calculations at least 12 times faster than the execution of the same simulation on a contemporary multicore CPU from Intel or AMD. The use of GPUs shows great promise for high performance computing applications like FDTD that have high arithmetic intensity and limited or no data dependencies in computation streams. Until recently, to use GPUs as a co-processor, the normal CPU-based code needed to be rewritten extensively using the special graphics programming language Cg and OpenGL APIs, which is difficult for non-graphics programmers.
However, newer GPUs, such as NVIDIA's G80, provide a unified shader model for programming GPU processing elements, along with APIs and compiler tools that allow direct programming of graphics hardware without intermediate graphics programming in OpenGL and Cg. Currently, a message passing interface-based parallel GPU FDTD code is being developed and benchmarked on a cluster of G80 GPUs.
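The central-difference update at the heart of FDTD is easiest to see in one dimension: each E and H cell depends only on its immediate neighbors from the previous half-step, which is why the update maps so well onto independent fragment programs. The sketch below uses normalized units and a Courant number of 1/2 as illustrative choices.

```python
import numpy as np

def fdtd_1d(steps=200, n=400):
    """1D Yee-grid FDTD leapfrog with a soft Gaussian source at the center."""
    e = np.zeros(n)        # electric field at integer grid points
    h = np.zeros(n - 1)    # magnetic field at half-integer grid points
    for t in range(steps):
        h += 0.5 * (e[1:] - e[:-1])            # update H from the curl of E
        e[1:-1] += 0.5 * (h[1:] - h[:-1])      # update E from the curl of H
        e[n // 2] += np.exp(-((t - 30) ** 2) / 100.0)  # soft Gaussian pulse
    return e

field = fdtd_1d()   # two pulses propagating outward from the source
```

On the GPU version the paper describes, the `e` and `h` arrays would live in textures and each `+=` line would be one rendering pass over a fragment program, since no cell's update depends on any other cell updated in the same pass.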

Proceedings ArticleDOI
04 Aug 2007
TL;DR: A hardware redundancy-based approach to reliability for general purpose computation on GPUs that requires minimal change to existing GPU architectures, is completely transparent to general graphics, and does not affect the performance of the games that drive the market.
Abstract: General purpose computation on graphics processors (GPGPU) has rapidly evolved since the introduction of commodity programmable graphics hardware. With the appearance of GPGPU computation-oriented APIs such as AMD's Close to the Metal (CTM) and NVIDIA's Compute Unified Device Architecture (CUDA), we begin to see GPU vendors putting financial stakes into this non-graphics, one-time niche market. Major supercomputing installations are building GPGPU clusters to take advantage of massively parallel floating point capabilities, and Folding@Home has even released a GPU port of its protein folding distributed computation client. But in order for GPGPU to truly become important to the supercomputing community, vendors will have to address the heretofore unimportant reliability concerns of graphics processors. We present a hardware redundancy-based approach to reliability for general purpose computation on GPUs that requires minimal change to existing GPU architectures. Upon detecting an error, the system invokes an automatic recovery mechanism that only recomputes erroneous results. Our results show that our technique imposes less than a 1.5× performance penalty and saves energy for GPGPU but is completely transparent to general graphics and does not affect the performance of the games that drive the market.
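The detect-and-recompute scheme can be sketched in host-side pseudocode: run every work item twice, flag mismatches, and recompute only the disagreeing items. The fault-injecting evaluator below is an illustrative stand-in for transient GPU errors, not the paper's hardware mechanism.

```python
import random

def unreliable_square(x, rng, fault_rate=0.05):
    """Square x, occasionally corrupting the result (transient-fault stand-in)."""
    y = x * x
    if rng.random() < fault_rate:
        return y + rng.random()   # corrupted value; never matches a clean result
    return y

def reliable_map(xs, rng):
    """Compute each item twice; recompute only items whose copies disagree."""
    a = [unreliable_square(x, rng) for x in xs]       # primary pass
    b = [unreliable_square(x, rng) for x in xs]       # redundant pass
    out = []
    for x, ya, yb in zip(xs, a, b):
        while ya != yb:                               # mismatch detected
            ya = unreliable_square(x, rng)            # selective recovery:
            yb = unreliable_square(x, rng)            # redo only this item
        out.append(ya)
    return out

rng = random.Random(7)
results = reliable_map(range(100), rng)
```

Since only mismatching items are recomputed, the expected overhead stays close to the 2x duplication cost rather than growing with repeated full reruns, which mirrors the paper's sub-1.5x penalty when the redundant pass overlaps otherwise idle hardware.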

Proceedings ArticleDOI
15 Oct 2007
TL;DR: This paper addresses the practical problem of automating the process of translating figures from mathematics, science, and engineering textbooks to a tactile form suitable for blind students by creating a more detailed workflow, translating actual images, and analyzing the translation time.
Abstract: We address the practical problem of automating the process of translating figures from mathematics, science, and engineering textbooks to a tactile form suitable for blind students. The Tactile Graphics Assistant (TGA) and accompanying workflow is described. Components of the TGA that identify text and replace it with Braille use machine learning, computational geometry, and optimization algorithms. We followed through with the ideas in our 2005 paper by creating a more detailed workflow, translating actual images, and analyzing the translation time. Our experience in translating more than 2,300 figures from 4 textbooks demonstrates that figures can be translated in ten minutes or less of human time on average. We describe our experience with training tactile graphics specialists to use the new TGA technology.