
Showing papers on "Graphics" published in 2005


Proceedings Article
01 Jan 2005
TL;DR: The techniques used in mapping general-purpose computation to graphics hardware will be generally useful for researchers who plan to develop the next generation of GPGPU algorithms and techniques.
Abstract: The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, has made graphics hardware a compelling platform for computationally demanding tasks in a wide variety of application domains. In this report, we describe, summarize, and analyze the latest research in mapping general-purpose computation to graphics hardware. We begin with the technical motivations that underlie general-purpose computation on graphics processors (GPGPU) and describe the hardware and software developments that have led to the recent interest in this field. We then aim the main body of this report at two separate audiences. First, we describe the techniques used in mapping general-purpose computation to graphics hardware. We believe these techniques will be generally useful for researchers who plan to develop the next generation of GPGPU algorithms and techniques. Second, we survey and categorize the latest developments in general-purpose application development on graphics hardware. This survey should be of particular interest to researchers who are interested in using the latest GPGPU applications in their systems of interest.

1,728 citations


Journal ArticleDOI
01 Jul 2005
TL;DR: This paper defines mesh saliency in a scale-dependent manner using a center-surround operator on Gaussian-weighted mean curvatures to capture what most would classify as visually interesting regions on a mesh.
Abstract: Research over the last decade has built a solid mathematical foundation for representation and analysis of 3D meshes in graphics and geometric modeling. Much of this work, however, does not explicitly incorporate models of low-level human visual attention. In this paper we introduce the idea of mesh saliency as a measure of regional importance for graphics meshes. Our notion of saliency is inspired by low-level human visual system cues. We define mesh saliency in a scale-dependent manner using a center-surround operator on Gaussian-weighted mean curvatures. We observe that such a definition of mesh saliency is able to capture what most would classify as visually interesting regions on a mesh. The human-perception-inspired importance measure computed by our mesh saliency operator results in more visually pleasing results in processing and viewing of 3D meshes, compared to using a purely geometric measure of shape, such as curvature. We discuss how mesh saliency can be incorporated in graphics applications such as mesh simplification and viewpoint selection and present examples that show visually appealing results from using mesh saliency.
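The center-surround definition above can be sketched in a few lines: compute the Gaussian-weighted mean curvature at a fine and a coarse scale and take the absolute difference. This is an illustrative CPU sketch, not the paper's implementation; the `2 * sigma` support cutoff and the function names are assumptions.

```python
import math

def gaussian_weighted_curvature(vertices, curvatures, center, sigma):
    """Gaussian-weighted average of mean curvature around a center point."""
    num = den = 0.0
    for v, c in zip(vertices, curvatures):
        d2 = sum((a - b) ** 2 for a, b in zip(v, center))
        if d2 <= (2 * sigma) ** 2:  # finite support; cutoff at 2*sigma is an assumption
            w = math.exp(-d2 / (2 * sigma ** 2))
            num += c * w
            den += w
    return num / den if den else 0.0

def mesh_saliency(vertices, curvatures, sigma):
    """Center-surround operator: |G(sigma) - G(2*sigma)| at every vertex."""
    return [abs(gaussian_weighted_curvature(vertices, curvatures, v, sigma)
                - gaussian_weighted_curvature(vertices, curvatures, v, 2 * sigma))
            for v in vertices]
```

On a region of constant curvature the two scales agree and saliency vanishes; an isolated curvature bump scores high, matching the intuition of a "visually interesting" region.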

703 citations


Journal ArticleDOI
01 Jul 2005
TL;DR: An approach for fast subspace integration of reduced-coordinate nonlinear deformable models that is suitable for interactive applications in computer graphics and haptics, and presents two useful approaches for generating low-dimensional subspace bases: modal derivatives and an interactive sketching technique.
Abstract: In this paper, we present an approach for fast subspace integration of reduced-coordinate nonlinear deformable models that is suitable for interactive applications in computer graphics and haptics. Our approach exploits dimensional model reduction to build reduced-coordinate deformable models for objects with complex geometry. We exploit the fact that model reduction on large deformation models with linear materials (as commonly used in graphics) results in internal force models that are simply cubic polynomials in reduced coordinates. Coefficients of these polynomials can be precomputed for efficient runtime evaluation. This allows simulation of nonlinear dynamics using fast implicit Newmark subspace integrators, with subspace integration costs independent of geometric complexity. We present two useful approaches for generating low-dimensional subspace bases: modal derivatives and an interactive sketching technique. Mass-scaled principal component analysis (mass-PCA) is suggested for dimensionality reduction. Finally, several examples are given from computer animation to illustrate high performance, including force-feedback haptic rendering of a complicated object undergoing large deformations.
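The key observation, that reduced internal forces are cubic polynomials in the reduced coordinates q, can be illustrated directly. The tensor names P1, P2, P3 below are hypothetical stand-ins for the precomputed coefficients; a real implementation would exploit symmetry and vectorization rather than nested loops.

```python
def reduced_internal_force(q, P1, P2, P3):
    """Internal force in an r-dimensional subspace as a cubic polynomial:
    f_i = sum_j P1[i][j] q_j + sum_{j,k} P2[i][j][k] q_j q_k
        + sum_{j,k,l} P3[i][j][k][l] q_j q_k q_l
    P1, P2, P3 are precomputed linear, quadratic, and cubic coefficient tensors."""
    r = len(q)
    f = [0.0] * r
    for i in range(r):
        for j in range(r):
            f[i] += P1[i][j] * q[j]
            for k in range(r):
                f[i] += P2[i][j][k] * q[j] * q[k]
                for l in range(r):
                    f[i] += P3[i][j][k][l] * q[j] * q[k] * q[l]
    return f
```

The evaluation cost depends only on the subspace dimension r, not on the mesh size, which is what makes the runtime independent of geometric complexity.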

381 citations


Journal ArticleDOI
TL;DR: This paper shows how the new floating point GPUs can be exploited to perform both analytical and iterative reconstruction from X-ray and functional imaging data, and decompose three popular three-dimensional (3D) reconstruction algorithms into a common set of base modules.
Abstract: The task of reconstructing an object from its projections via tomographic methods is a time-consuming process due to the vast complexity of the data. For this reason, manufacturers of equipment for medical computed tomography (CT) rely mostly on special application-specific integrated circuits (ASICs) to obtain the fast reconstruction times required in clinical settings. Although modern CPUs have gained sufficient power in recent years to be competitive for two-dimensional (2D) reconstruction, this is not the case for three-dimensional (3D) reconstructions, especially not when iterative algorithms must be applied. The recent evolution of commodity PC computer graphics boards (GPUs) has the potential to change this picture in a very dramatic way. In this paper we will show how the new floating-point GPUs can be exploited to perform both analytical and iterative reconstruction from X-ray and functional imaging data. For this purpose, we decompose three popular 3D reconstruction algorithms (Feldkamp filtered backprojection, the simultaneous algebraic reconstruction technique, and expectation maximization) into a common set of base modules, which can all be executed on the GPU and their output linked internally. Visualization of the reconstructed object is easily achieved since the object already resides in the graphics hardware, allowing one to run a visualization module at any time to view the reconstruction results. Our implementation allows speedups of over an order of magnitude with respect to CPU implementations, at comparable image quality.
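As a toy illustration of the iterative side, here is one SART update for a small dense system Ax ≈ b: the residual of each ray is normalized by its row sum and backprojected, and each unknown's correction is normalized by its column sum. This is an assumption-laden CPU sketch; the paper maps these steps to texture and blending operations on the GPU.

```python
def sart_iteration(A, b, x, lam=1.0):
    """One SART update for Ax ≈ b (A nonnegative, as for line-integral weights)."""
    m, n = len(A), len(x)
    row_sums = [sum(A[i]) for i in range(m)]
    col_sums = [sum(A[i][j] for i in range(m)) for j in range(n)]
    corr = [0.0] * n
    for i in range(m):
        Ax_i = sum(A[i][j] * x[j] for j in range(n))
        if row_sums[i]:
            r = (b[i] - Ax_i) / row_sums[i]   # row-normalized residual
            for j in range(n):
                corr[j] += A[i][j] * r        # backprojection
    return [x[j] + lam * corr[j] / col_sums[j] if col_sums[j] else x[j]
            for j in range(n)]
```

For a consistent system and a relaxation factor in (0, 2), repeated application converges to a solution.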

296 citations


Book ChapterDOI
01 Aug 2005
TL;DR: This chapter reviewed the literature about the interface and content features that affect the potential benefits of animation over static graphics, and proposed some guidelines that designers should consider when designing multimedia instruction including animation.
Abstract: Computer animation has tremendous potential to provide visualizations of dynamic phenomena that involve change over time (e.g., biological processes, physical phenomena, mechanical devices, and historical development). However, the research reviewed in this chapter showed that learners did not systematically take advantage of animated graphics in terms of comprehension of the underlying causal or functional model. This chapter reviewed the literature about the interface and content features that affect the potential benefits of animation over static graphics. Finally, I proposed some guidelines that designers should consider when designing multimedia instruction that includes animation. What Are the Animation Principle and the Interactivity Principle? In the last decade, with the rapid progression of computing capacities and the progress of graphic design technologies, multimedia learning environments have evolved from sequential static text and picture frames to increasingly sophisticated visualizations. Two characteristics appear to be popular among instructional designers and practitioners: the use of animated graphics as soon as the depiction of a dynamic system is involved, and the capability for learners to interact with the instructional material. Conceptions of Animation Despite its extensive use in instructional material, computer animation still is not well understood. Baek and Layne (1988) defined animation as "the process of generating a series of frames containing an object or objects so that each frame appears as an alteration of the previous frame in order to show motion" (p. 132).

276 citations


Proceedings ArticleDOI
06 Nov 2005
TL;DR: This paper proposes using GPUs in approximately the reverse way: to assist in "converting pictures into numbers" (i.e. computer vision) and provides a simple API which implements some common computer vision algorithms.
Abstract: Graphics and vision are approximate inverses of each other: ordinarily Graphics Processing Units (GPUs) are used to convert "numbers into pictures" (i.e. computer graphics). In this paper, we propose using GPUs in approximately the reverse way: to assist in "converting pictures into numbers" (i.e. computer vision). The OpenVIDIA project uses single or multiple graphics cards to accelerate image analysis and computer vision. It is a library and API aimed at providing a graphics hardware accelerated processing framework for image processing and computer vision. OpenVIDIA explores the creation of a general-purpose parallel computer architecture consisting of multiple Graphics Processing Units (GPUs) operating in parallel, built entirely from commodity hardware. It provides a simple API which implements some common computer vision algorithms. Many components can be used immediately and, because the project is Open Source, the code is intended to serve as templates and examples for how similar algorithms are mapped onto graphics hardware. Implemented are image processing techniques (Canny edge detection, filtering), image feature handling (identifying and matching features) and image registration, to name a few.

250 citations


Patent
David R. Blythe1
30 Nov 2005
TL;DR: In this paper, virtual machine monitor (VMM) technology is used to run a first operating system (OS), such as an original OS version, simultaneously with a second OS, such as a new OS version, in separate virtual machines (VMs).
Abstract: Systems and methods for applying virtual machines to graphics hardware are provided. In various embodiments of the invention, while supervisory code runs on the CPU, the actual graphics work items are run directly on the graphics hardware and the supervisory code is structured as a graphics virtual machine monitor. Application compatibility is retained using virtual machine monitor (VMM) technology to run a first operating system (OS), such as an original OS version, simultaneously with a second OS, such as a new version OS, in separate virtual machines (VMs). VMM technology applied to host processors is extended to graphics processing units (GPUs) to allow hardware access to graphics accelerators, ensuring that legacy applications operate at full performance. The invention also provides methods to make the user experience cosmetically seamless while running multiple applications in different VMs. In other aspects of the invention, by employing VMM technology, the virtualized graphics architecture of the invention is extended to provide trusted services and content protection.

246 citations


Book
01 Jan 2005
TL;DR: This tutorial jumps right into the power of R without dragging you through the basic concepts of the language.
Abstract: 1. Preface 2. Introduction and preliminaries 3. Simple manipulations numbers and vectors 4. Objects 5. Factors 6. Arrays and matrices 7. Lists and data frames 8. Reading data from files 9. Probability distributions 10. Loops and conditional execution 11. Writing your own functions 12. Statistical models in R 13. Graphics 14. A sample session 15. Invoking R 16. The command line editor 17. Function and variable index 18. Concept index 19. References

200 citations


Book
01 Jan 2005
TL;DR: Paul Murrell, widely known as the leading expert on R graphics, has developed an in-depth resource that helps both neophyte and seasoned users master the intricacies of R graphics.
Abstract: Extensively updated to reflect the evolution of statistics and computing, the second edition of the bestselling R Graphics comes complete with new packages and new examples. Paul Murrell, widely known as the leading expert on R graphics, has developed an in-depth resource that helps both neophyte and seasoned users master the intricacies of R graphics. New in the second edition: updated information on the core graphics engine, the traditional graphics system, the grid graphics system, and the lattice package; a new chapter on the ggplot2 package; and new chapters on applications and extensions of R graphics, including geographic maps, dynamic and interactive graphics, and node-and-edge graphs. Organized into five parts, R Graphics covers both "traditional" and newer, R-specific graphics systems. The book reviews the graphics facilities of the R language and describes R's powerful grid graphics system. It then covers the graphics engine, which represents a common set of fundamental graphics facilities, and provides a series of brief overviews of some of the major areas of application for R graphics and some of the major extensions of R graphics.

187 citations


Journal ArticleDOI
TL;DR: A particle system for interactive visualization of steady 3D flow fields on uniform grids exploiting features of recent graphics accelerators to advect particles in the graphics processing unit (GPU), saving particle positions in graphics memory, and then sending these positions through the GPU again to obtain images in the frame buffer.
Abstract: We present a particle system for interactive visualization of steady 3D flow fields on uniform grids. For the amount of particles we target, particle integration needs to be accelerated and the transfer of these sets for rendering must be avoided. To fulfill these requirements, we exploit features of recent graphics accelerators to advect particles in the graphics processing unit (GPU), saving particle positions in graphics memory, and then sending these positions through the GPU again to obtain images in the frame buffer. This approach allows for interactive streaming and rendering of millions of particles and it enables virtual exploration of high resolution fields in a way similar to real-world experiments. The ability to display the dynamics of large particle sets using visualization options like shaded points or oriented texture splats provides an effective means for visual flow analysis that is far beyond existing solutions. For each particle, flow quantities like vorticity magnitude and λ2 are computed and displayed. Built upon a previously published GPU implementation of a sorting network, visibility sorting of transparent particles is implemented. To provide additional visual cues, the GPU constructs and displays visualization geometry like particle lines and stream ribbons.
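The advection step has a simple serial analogue: sample the grid velocity at each particle position and step forward in time. This 2D forward-Euler sketch is illustrative only; the paper integrates 3D fields on the GPU, with one particle per fragment.

```python
def sample_bilinear(field, x, y):
    """Bilinearly interpolate a 2D vector field stored as field[j][i] = (u, v)."""
    ny, nx = len(field), len(field[0])
    x = min(max(x, 0.0), nx - 1.000001)   # clamp to the grid interior
    y = min(max(y, 0.0), ny - 1.000001)
    i, j = int(x), int(y)
    fx, fy = x - i, y - j
    def lerp(a, b, t):
        return tuple(a_ + (b_ - a_) * t for a_, b_ in zip(a, b))
    top = lerp(field[j][i], field[j][i + 1], fx)
    bot = lerp(field[j + 1][i], field[j + 1][i + 1], fx)
    return lerp(top, bot, fy)

def advect(particles, field, dt, steps):
    """Forward-Euler advection of all particles through a steady flow field."""
    for _ in range(steps):
        new = []
        for x, y in particles:
            u, v = sample_bilinear(field, x, y)
            new.append((x + u * dt, y + v * dt))
        particles = new
    return particles
```

A GPU version stores particle positions in a texture and performs exactly this sample-and-update in a fragment program, which is what keeps the positions in graphics memory between integration and rendering.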

172 citations


Patent
27 Oct 2005
TL;DR: A method and system for messaging on a computer using a communications interface is described; the interface includes a section that displays graphics representing receivers and senders of messages, and a section that enables the user to create, send, receive, and archive messages.
Abstract: A method and system is described herein for messaging on a computer using a communications interface. The communications interface includes a section which displays graphics representing receivers and senders of messages. The communications interface also includes a section which enables the user to create, send, receive, and archive messages. Messages are created from audio or typed inputs from the user. A user communicates with other users over a network, such as the Internet.

Proceedings ArticleDOI
14 Jun 2005
TL;DR: The results demonstrate that the graphics processors available on a commodity computer system are efficient stream processors and useful co-processors for mining data streams.
Abstract: We present algorithms for fast quantile and frequency estimation in large data streams using graphics processors (GPUs). We exploit the high computation power and memory bandwidth of graphics processors and present a new sorting algorithm that performs rasterization operations on the GPUs. We use sorting as the main computational component for histogram approximation and construction of ε-approximate quantile and frequency summaries. Our algorithms for numerical statistics computation on data streams are deterministic, applicable to fixed or variable-sized sliding windows and use a limited memory footprint. We use the GPU as a co-processor and minimize the data transmission between the CPU and GPU by taking into account the low bus bandwidth. We implemented our algorithms on a PC with an NVIDIA GeForce FX 6800 Ultra GPU and a 3.4 GHz Pentium IV CPU and applied them to large data streams consisting of more than 100 million values. We also compared the performance of our GPU-based algorithms with optimized implementations of prior CPU-based algorithms. Overall, our results demonstrate that the graphics processors available on a commodity computer system are efficient stream processors and useful co-processors for mining data streams.
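The role of sorting in ε-approximate summaries can be shown with a plain CPU sketch: sort the data, then keep every ⌈εn⌉-th element as a summary from which any φ-quantile is answered within εn rank error. The function names here are illustrative; the paper's contribution is performing the sort itself with GPU rasterization operations.

```python
import math

def quantile_summary(stream, eps):
    """Sort, then keep every ceil(eps*n)-th element with its rank: a summary
    of size ~1/eps supporting eps-approximate quantile queries."""
    data = sorted(stream)                     # the GPU version sorts via rasterization
    n = len(data)
    step = max(1, math.ceil(eps * n))
    return [(r, data[r]) for r in range(0, n, step)], n

def query(summary, n, phi):
    """Answer a phi-quantile from the summary: closest stored rank wins."""
    target = phi * (n - 1)
    return min(summary, key=lambda rv: abs(rv[0] - target))[1]
```

Because consecutive stored ranks differ by at most ⌈εn⌉, the returned element's rank is within εn of the requested one, which is the ε-approximate guarantee.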


Journal ArticleDOI
Tong Lin1, Pengwei Hao1
TL;DR: Experimental results show that the SPEC has very low complexity and provides visually lossless quality while keeping competitive compression ratios.
Abstract: We present a compound image compression algorithm for real-time applications of computer screen image transmission. It is called shape primitive extraction and coding (SPEC). Real-time image transmission requires that the compression algorithm should not only achieve high compression ratio, but also have low complexity and provide excellent visual quality. SPEC first segments a compound image into text/graphics pixels and pictorial pixels, and then compresses the text/graphics pixels with a new lossless coding algorithm and the pictorial pixels with the standard lossy JPEG, respectively. The segmentation first classifies image blocks into picture and text/graphics blocks by thresholding the number of colors of each block, then extracts shape primitives of text/graphics from picture blocks. Dynamic color palette that tracks recent text/graphics colors is used to separate small shape primitives of text/graphics from pictorial pixels. Shape primitives are also extracted from text/graphics blocks. All shape primitives from both block types are losslessly compressed by using a combined shape-based and palette-based coding algorithm. Then, the losslessly coded bitstream is fed into a LZW coder. Experimental results show that the SPEC has very low complexity and provides visually lossless quality while keeping competitive compression ratios.
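The first segmentation step, classifying blocks by counting distinct colors, is easy to sketch. The threshold below is an illustrative placeholder, not the value used in SPEC.

```python
def classify_block(block, color_threshold=32):
    """SPEC-style block classification: blocks with few distinct colors are
    treated as text/graphics; color-rich blocks are treated as pictorial.
    `block` is a 2D list of pixel values; the threshold is an assumption."""
    colors = {px for row in block for px in row}
    return "text/graphics" if len(colors) <= color_threshold else "picture"
```

In the full algorithm, text/graphics blocks then go through lossless shape-primitive coding, while picture blocks go to lossy JPEG.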

01 Jan 2005
TL;DR: The study provides a more rounded, albeit partial, view of the online e-Learning user and significantly improves understanding of e-learning user acceptance behavior on the Web.
Abstract: Streaming e-learning systems have become widely available lately. Web-based streaming media, due to its low production cost, is generally the most popular way of providing e-learning services. However, considering the many different media formats (text, graphics, audio, video, and animations) that can be integrated into streaming e-learning, how should a cost-effective streaming media system be implemented on the web? This study proposes an integrated theoretical framework for users' acceptance behavior on web-based streaming e-learning. This study considers the e-learning systems user as both a system user and a learner. Constructs from information systems (Technology Acceptance Model) and human behavior and psychology (Flow Theory) are tested in an integrated theoretical framework of online e-learning users' acceptance behavior. The data collected from our experiment show significant evidence in support of our hypothesis. The analytical results confirm the dual identity of the online e-learning user as a system user and a learner, since both the flow and the perceived usefulness of the e-learning system strongly predict intention to continue using e-learning. The study provides a more rounded, albeit partial, view of the online e-learning user and significantly improves understanding of e-learning user acceptance behavior on the Web. The validated metrics should be valuable to both researchers and practitioners.


Journal ArticleDOI
01 Jul 2005
TL;DR: An efficient approach for end-to-end out-of-core construction and interactive inspection of very large arbitrary surface models; the efficiency and generality of the approach are demonstrated with the interactive rendering of extremely complex heterogeneous surface models on current commodity graphics platforms.
Abstract: We present an efficient approach for end-to-end out-of-core construction and interactive inspection of very large arbitrary surface models. The method tightly integrates visibility culling and out-of-core data management with a level-of-detail framework. At preprocessing time, we generate a coarse volume hierarchy by binary space partitioning the input triangle soup. Leaf nodes partition the original data into chunks of a fixed maximum number of triangles, while inner nodes are discretized into a fixed number of cubical voxels. Each voxel contains a compact direction-dependent approximation of the appearance of the associated volumetric subpart of the model when viewed from a distance. The approximation is constructed by a visibility-aware algorithm that fits parametric shaders to samples obtained by casting rays against the full-resolution dataset. At rendering time, the volumetric structure, maintained off-core, is refined and rendered in front-to-back order, exploiting vertex programs for GPU evaluation of view-dependent voxel representations, hardware occlusion queries for culling occluded subtrees, and asynchronous I/O for detecting and avoiding data access latencies. Since the granularity of the multiresolution structure is coarse, data management, traversal, and occlusion culling costs are amortized over many graphics primitives. The efficiency and generality of the approach are demonstrated with the interactive rendering of extremely complex heterogeneous surface models on current commodity graphics platforms.

Patent
Glenn F. Evans1
27 Jan 2005
TL;DR: In this paper, the authors describe methods and systems for protecting data that is intended for use and processing on video or graphics cards, including a display converter that converts digital data to signals for use in rendering the data on the monitor, and a memory controller configured to receive encrypted data and decrypt it into protected regions of the memory.
Abstract: Methods and systems for protecting data that is intended for use and processed on video or graphics cards are described. In one embodiment, a system comprises a graphics processor unit (GPU) for processing data that is to be rendered on a monitor. Memory is operably associated with the graphics processor unit for holding data that is to be or has been processed by the GPU. A display converter converts digital data to signals for use in rendering the data on the monitor, and a memory controller is configured to receive encrypted data and decrypt the encrypted data into protected regions of the memory.

Proceedings ArticleDOI
09 Oct 2005
TL;DR: The development of the tactile graphics assistant is summarized, which will enable tactile graphics specialists to be more efficient in creating tactile graphics both in batches and individually.
Abstract: Access to graphical images (bar charts, diagrams, line graphs, etc.) that are in a tactile form (representation through which content can be accessed by touch) is inadequate for students who are blind and take mathematics, science, and engineering courses. We describe our analysis of the current work practices of tactile graphics specialists who create tactile forms of graphical images. We propose automated means by which to improve the efficiency of current work practices. We describe the implementation of various components of this new automated process, which includes image classification, segmentation, simplification, and layout. We summarize our development of the tactile graphics assistant, which will enable tactile graphics specialists to be more efficient in creating tactile graphics both in batches and individually. We describe our unique team of researchers, practitioners, and student consultants who are blind, all of whom are needed to successfully develop this new way of translating tactile graphics.

Patent
22 Apr 2005
TL;DR: In this article, a computer system includes an integrated graphics subsystem and a graphics connector for attaching either an auxiliary graphics subsystem or a loopback card, which is used to control a display device.
Abstract: A computer system includes an integrated graphics subsystem and a graphics connector for attaching either an auxiliary graphics subsystem or a loopback card. A first bus connection communicates data from the computer system to the integrated graphics subsystem. With a loopback card in place, data travels from the integrated graphics subsystem back to the computer system via a second bus connection. When the auxiliary graphics subsystem is attached, the integrated graphics subsystem operates in a data forwarding mode. Data is communicated to the integrated graphics subsystem via the first bus connection. The integrated graphics subsystem then forwards data to the auxiliary graphics subsystem. A portion of the second bus connection communicates data from the auxiliary graphics subsystem back to the computer system. The auxiliary graphics subsystem communicates display information back to the integrated graphics subsystem, where it is used to control a display device.

01 Jan 2005
TL;DR: A spectrum of algorithms for rectification of document images for camera-based analysis and recognition of planar surfaces to remove the perspective effect and computing the frontal view needed for a typical document image analysis algorithm is described.
Abstract: In this paper, we describe a spectrum of algorithms for rectification of document images for camera-based analysis and recognition. Clues like document boundaries, page layout information, organisation of text and graphics components, a priori knowledge of the script or selected symbols, etc. are effectively used for removing the perspective effect and computing the frontal view needed for a typical document image analysis algorithm. Appropriate results from projective geometry of planar surfaces are exploited in the rectification process.

Proceedings ArticleDOI
12 Nov 2005
TL;DR: This work presents a streaming algorithm for evaluating an HMM's Viterbi probability, refines it for the specific HMM used in biological sequence search, and demonstrates that this streaming algorithm on graphics processors can outperform available CPU implementations.
Abstract: The proliferation of biological sequence data has motivated the need for an extremely fast probabilistic sequence search. One method for performing this search involves evaluating the Viterbi probability of a hidden Markov model (HMM) of a desired sequence family for each sequence in a protein database. However, one of the difficulties with current implementations is the time required to search large databases. Many current and upcoming architectures offering large amounts of compute power are designed with data-parallel execution and streaming in mind. We present a streaming algorithm for evaluating an HMM’s Viterbi probability and refine it for the specific HMM used in biological sequence search. We implement our streaming algorithm in the Brook language, allowing us to execute the algorithm on graphics processors. We demonstrate that this streaming algorithm on graphics processors can outperform available CPU implementations. We also demonstrate this implementation running on a 16 node graphics cluster.
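The quantity being streamed above is the ordinary Viterbi probability of an HMM: the probability of the single most likely state path for an observation sequence. A minimal reference version (dictionary-based, without the log-space arithmetic or profile-HMM specifics a real sequence-search implementation needs) looks like this:

```python
def viterbi_probability(obs, states, start_p, trans_p, emit_p):
    """Probability of the most likely state path for `obs` under an HMM.
    start_p[s], trans_p[r][s], emit_p[s][o] are ordinary probabilities."""
    # initialize with the first observation
    v = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    # dynamic-programming recurrence over the remaining observations
    for o in obs[1:]:
        v = {s: max(v[r] * trans_p[r][s] for r in states) * emit_p[s][o]
             for s in states}
    return max(v.values())
```

In the database-search setting, this score is computed once per protein sequence, which is why the workload maps well onto data-parallel streaming hardware.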

Proceedings ArticleDOI
31 Jul 2005
TL;DR: This chapter analyzes the technology and architectural trends that motivate the way GPUs are built today and what the authors might expect in the future.
Abstract: Modern technology allows the designers of today's processors to incorporate enormous computation resources into their latest chips. The challenge for these architects is to translate the increase in capability to an increase in performance. The last decade of graphics processor development shows that GPU designers have succeeded spectacularly at this task. In this chapter, we analyze the technology and architectural trends that motivate the way GPUs are built today and what we might expect in the future.

Journal ArticleDOI
Chris Wyman1
01 Jul 2005
TL;DR: This work introduces a simple, image-space approach to refractions that easily runs on modern graphics cards, and allows refraction of a distant environment through two interfaces, compared to current interactive techniques that are restricted to a single interface.
Abstract: Many interactive applications strive for realistic renderings, but framerate constraints usually limit realism to effects that run efficiently in graphics hardware. One effect largely ignored in such applications is refraction. We introduce a simple, image-space approach to refractions that easily runs on modern graphics cards. Our method requires two passes on a GPU, and allows refraction of a distant environment through two interfaces, compared to current interactive techniques that are restricted to a single interface. Like all image-based algorithms, aliasing can occur in certain circumstances, but the plausible refractions generated with our approach should suffice for many applications.
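The per-pixel core of any refraction technique is the vector form of Snell's law. The sketch below shows that operation in isolation (eta = n1/n2; this is the standard formula, not code from the paper):

```python
def refract(incident, normal, eta):
    """Refract a unit incident vector through a surface with unit normal,
    eta = n1/n2. Returns None on total internal reflection."""
    cos_i = -sum(i * n for i, n in zip(incident, normal))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    cos_t = (1.0 - sin2_t) ** 0.5
    return tuple(eta * i + (eta * cos_i - cos_t) * n
                 for i, n in zip(incident, normal))
```

An image-space two-interface method applies this twice per pixel: once entering the object at the front surface and once exiting through the back surface, whose position and normal are read from precomputed textures.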

Journal ArticleDOI
Guobin Shen1, Guangping Gao1, Shipeng Li1, Heung-Yeung Shum1, Ya-Qin Zhang1 
TL;DR: A video decoding framework that involves both the central processing unit (CPU) and the GPU is proposed and initial experimental results show that significant speed-up can be achieved by utilizing the GPU power.
Abstract: Most modern computers or game consoles are equipped with powerful yet cost-effective graphics processing units (GPUs) to accelerate graphics operations. Though the graphics engines in these GPUs are specially designed for graphics operations, can we harness their computing power for more general nongraphics operations? The answer is positive. In this paper, we present our study on leveraging the GPU's graphics engine to accelerate video decoding. Specifically, a video decoding framework that involves both the central processing unit (CPU) and the GPU is proposed. By moving the whole motion compensation feedback loop of the decoder to the GPU, the CPU and GPU have been made to work in parallel in a pipelining fashion. Several techniques are also proposed to overcome the GPU's constraints or to optimize the GPU computation. Initial experimental results show that significant speed-up can be achieved by utilizing the GPU power. We have achieved real-time playback of high-definition video on a PC with an Intel Pentium III 667-MHz CPU and an nVidia GeForce3 GPU.

Journal ArticleDOI
TL;DR: CaveUT, based on the Unreal Tournament game engine, gives developers a high-performance, low-cost VR alternative; applications built with it inherit all the Unreal Engine's capabilities along with Unreal Tournament's authoring support, open source code, content library, and large user community.
Abstract: CaveUT, an open source freeware project, uses game technology to make projection-based virtual reality affordable and accessible. CaveUT works well for low-cost displays and supports real-time spatial tracking and stereographic imaging. Computer games with the most advanced simulation and graphics usually employ a game engine, a commercially available software package that handles basic functions. Based on Unreal Tournament, CaveUT gives developers a high-performance, low-cost VR alternative. VR applications developed with CaveUT inherit all the Unreal Engine's capabilities along with Unreal Tournament's authoring support, open source code, content library, and large user community.

Journal ArticleDOI
TL;DR: Two different simulation algorithms implementing scattering and gathering operations on the GPU are compared with respect to performance and numerical accuracy and GPU specific issues to be considered in simulation techniques showing similar computation and memory access patterns to mass-spring systems are discussed.

Patent
17 May 2005
TL;DR: In this article, a system and method for designing animated visualization interfaces depicting, at a supervisory level, manufacturing and process control information wherein graphical symbols in the visualization interfaces are associated with components of a process control/manufacturing information application.
Abstract: A system and method are described for designing animated visualization interfaces depicting, at a supervisory level, manufacturing and process control information, wherein graphical symbols in the visualization interfaces are associated with components of a process control/manufacturing information application. The system includes a graphical symbol library for maintaining a set of graphical symbol templates, wherein the graphical symbol templates include a graphics definition and a reference to an application component type. The reference facilitates identifying candidate application components for creating an association between a graphical symbol instance created from a graphical symbol template and one of the candidate application components. The system also includes a graphical symbol design environment for selecting the graphical symbol template, specifying an application component corresponding to the component type, and creating an association between the graphical symbol instance and the specified application component.

Journal ArticleDOI
TL;DR: This paper proposes a 2D extension of the angular radial transform, called GART, which allows applying ART to images while ensuring robustness to all possible rotations and to perspective deformations, and a 3D shape descriptor, called 3D ART, which has the same properties as the original transform.

Book ChapterDOI
11 Nov 2005
TL;DR: This work focuses on porting to the GPU the most time-consuming loop, which accounts for nearly 50% of the total execution time; preliminary results show that the loop code achieves a speedup of 3x, while the whole application, with this single loop optimized, achieves a speedup of 1.2x.
Abstract: Bioinformatics applications are one of the most relevant and compute-demanding applications today. While normally these applications are executed on clusters or dedicated parallel systems, in this work we explore the use of an alternative architecture. We focus on exploiting the compute-intensive characteristics offered by the graphics processor (GPU) in order to accelerate a bioinformatics application. The GPU is a good match for these applications as it is an inexpensive, high-performance SIMD architecture. In our initial experiments we evaluate the use of a regular graphics card to improve the performance of RAxML, a bioinformatics program for phylogenetic tree inference. In this paper we focus on porting to the GPU the most time-consuming loop, which accounts for nearly 50% of the total execution time. The preliminary results show that the loop code achieves a speedup of 3x, while the whole application, with a single loop optimization, achieves a speedup of 1.2x.