
Showing papers on "Software rendering published in 2004"


Book
28 Sep 2004
TL;DR: Physically Based Rendering: From Theory to Implementation, Third Edition, describes both the mathematical theory behind a modern photorealistic rendering system and its practical implementation through a method known as 'literate programming', making it an essential resource on physically based rendering.

Abstract: Physically Based Rendering: From Theory to Implementation, Third Edition, describes both the mathematical theory behind a modern photorealistic rendering system and its practical implementation. Through a method known as 'literate programming', the authors combine human-readable documentation and source code into a single reference that is specifically designed to aid comprehension. The result is a stunning achievement in graphics education. Through the ideas and software in this book, users will learn to design and employ a fully featured rendering system for creating stunning imagery. This completely updated and revised edition includes new coverage of ray tracing hair and curve primitives, numerical precision issues with ray tracing, LBVHs, realistic camera models, the measurement equation, and much more. It is a must-have, full-color resource on physically based rendering. It presents up-to-date revisions of the seminal reference on rendering, including new sections on bidirectional path tracing, numerical robustness issues in ray tracing, realistic camera models, and subsurface scattering; provides the source code for a complete rendering system, allowing readers to get up and running fast; includes a unique indexing feature, literate programming, that lists the locations of each function, variable, and method on the page where they are first described; and serves as an essential resource on physically based rendering.

1,612 citations


Proceedings ArticleDOI
08 Aug 2004
TL;DR: This course provides a detailed introduction to general-purpose computation on graphics hardware (GPGPU), emphasizes core computational building blocks ranging from linear algebra to database queries, and reviews the tools, perils, and tricks of the trade in GPU programming.

Abstract: The graphics processor (GPU) on today's commodity video cards has evolved into an extremely powerful and flexible processor. The latest graphics architectures provide tremendous memory bandwidth and computational horsepower, with fully programmable vertex and pixel processing units that support vector operations up to full IEEE floating-point precision. High-level languages have emerged for graphics hardware, making this computational power accessible. Architecturally, GPUs are highly parallel streaming processors optimized for vector operations, with both MIMD (vertex) and SIMD (pixel) pipelines. Not surprisingly, these processors are capable of general-purpose computation beyond the graphics applications for which they were designed. Researchers have found that exploiting the GPU can accelerate some problems by over an order of magnitude over the CPU. However, significant barriers still exist for the developer who wishes to use the inexpensive power of commodity graphics hardware, whether for in-game simulation of physics or for conventional computational science. These chips are designed for and driven by video game development; the programming model is unusual, the programming environment is tightly constrained, and the underlying architectures are largely secret. The GPU developer must be an expert in computer graphics and its computational idioms to make effective use of the hardware, and pitfalls still abound. This course provides a detailed introduction to general-purpose computation on graphics hardware (GPGPU). We emphasize core computational building blocks, ranging from linear algebra to database queries, and review the tools, perils, and tricks of the trade in GPU programming. Finally, we present some interesting and important case studies on general-purpose applications of graphics hardware. The course presenters are experts on general-purpose GPU computation from academia and industry, and have presented papers and tutorials on the topic at SIGGRAPH, Graphics Hardware, Game Developers Conference, and elsewhere.
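
For intuition, the "streaming" model such courses build on treats a fragment program as a kernel applied independently to every element of a stream. Below is a minimal CPU analogue in C++ (illustrative only, not course material); saxpy is chosen as a representative linear-algebra building block.

```cpp
#include <cstddef>
#include <vector>

// CPU analogue of the GPU streaming model: a "kernel" runs independently on
// every element of a stream, just as a fragment program runs once per pixel.
struct Vec4 { float x, y, z, w; };

// Example kernel: saxpy on 4-vectors (y = a*x + y).
Vec4 saxpyKernel(float a, const Vec4& x, const Vec4& y) {
    return { a * x.x + y.x, a * x.y + y.y, a * x.z + y.z, a * x.w + y.w };
}

// The "stream processor": no cross-element dependencies, so every iteration
// could execute in parallel on a SIMD pixel pipeline.
void runKernel(float a, const std::vector<Vec4>& xs, std::vector<Vec4>& ys) {
    for (std::size_t i = 0; i < ys.size(); ++i)
        ys[i] = saxpyKernel(a, xs[i], ys[i]);
}
```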

346 citations


Proceedings ArticleDOI
29 Aug 2004
TL;DR: By combining memory objects with floating-point fragment programs, this work has implemented a particle engine that entirely avoids the transfer of particle data at runtime.
Abstract: We present a system for real-time animation and rendering of large particle sets using GPU computation and memory objects in OpenGL. Memory objects can be used both as containers for geometry data stored on the graphics card and as render targets, providing an effective means for the manipulation and rendering of particle data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform particle manipulation are essential. Our system implements a versatile particle engine, including inter-particle collisions and visibility sorting. By combining memory objects with floating-point fragment programs, we have implemented a particle engine that entirely avoids the transfer of particle data at runtime. Our system can be seen as a forerunner of a new class of graphics algorithms, exploiting memory objects or similar concepts on upcoming graphics hardware to avoid bus bandwidth becoming the major performance bottleneck.
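
For intuition, the per-particle work that such a fragment program performs looks like the following CPU sketch. The integration scheme (symplectic Euler) and the crude ground-plane collision response are illustrative assumptions, not the paper's exact engine; on the GPU this loop body becomes the fragment program and the particle array a render target, so no particle data crosses the bus.

```cpp
#include <vector>

// CPU reference for a per-particle update pass; field names are illustrative.
struct Particle { float px, py, pz, vx, vy, vz; };

// One simulation step: symplectic Euler with gravity and a damped bounce.
void updateParticles(std::vector<Particle>& ps, float dt) {
    const float g = -9.81f;
    for (Particle& p : ps) {
        p.vy += g * dt;           // integrate velocity
        p.px += p.vx * dt;        // integrate position
        p.py += p.vy * dt;
        p.pz += p.vz * dt;
        if (p.py < 0.0f) {        // crude ground-plane collision response
            p.py = 0.0f;
            p.vy = -0.5f * p.vy;  // damped bounce
        }
    }
}
```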

255 citations


Proceedings ArticleDOI
02 Nov 2004
TL;DR: This work presents a first running video see-through augmented reality system on a consumer cell-phone that supports the detection and differentiation of different markers, and correct integration of rendered 3D graphics into the live video stream via a weak perspective projection camera model and an OpenGL rendering pipeline.
Abstract: We present a first running video see-through augmented reality system on a consumer cell-phone. It supports the detection and differentiation of different markers, and correct integration of rendered 3D graphics into the live video stream via a weak perspective projection camera model and an OpenGL rendering pipeline.

209 citations


Proceedings ArticleDOI
08 Aug 2004
TL;DR: This course introduces points as a powerful and versatile graphics primitive, describes algorithms, and discusses current problems and limitations, covering important aspects of point-based graphics.

Abstract: This course introduces points as a powerful and versatile graphics primitive. Speakers present their latest concepts for the acquisition, representation, modeling, processing, and rendering of point-sampled geometry along with applications and research directions. We describe algorithms and discuss current problems and limitations, covering important aspects of point-based graphics.

108 citations


Journal ArticleDOI
TL;DR: This paper revisits and compares a number of recently developed point-based rendering implementations within a common testbed, built on a shared view-dependent level-of-detail (LOD) rendering framework.

100 citations


Journal ArticleDOI
TL;DR: Using a prototype implementation of compression-based multi-resolution rendering, high-quality interactive rendering of very large volume data sets is performed at interactive to real-time frame rates on a single off-the-shelf PC.

88 citations


Proceedings ArticleDOI
19 May 2004
TL;DR: The incremental subrange integration method described allows interactive lookup table generation in O(n²) time without the need for approximation or hardware assistance, and the interpolated pre-integrated lighting algorithm eliminates discontinuities by linearly interpolating illumination along the view direction.

Abstract: Pre-integrated volume rendering is an effective technique for generating high-quality visualizations. The precomputed lookup tables used by this method are slow to compute and cannot include truly pre-integrated lighting due to space constraints. The lighting for pre-integrated rendering is therefore subject to the same sampling artifacts as in standard volume rendering. We propose methods to speed up lookup table generation and minimize lighting artifacts. The incremental subrange integration method we describe allows interactive lookup table generation in O(n²) time without the need for approximation or hardware assistance. The interpolated pre-integrated lighting algorithm eliminates discontinuities by linearly interpolating illumination along the view direction. Both methods are applicable to any pre-integrated rendering method, including cell projection, ray casting, and hardware-accelerated algorithms.
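
To make the table-generation cost concrete, here is a minimal C++ sketch of building a 2D pre-integration table in O(n²) from prefix integrals. It uses the common simplifying assumption that self-attenuation within a ray segment is ignored and omits color; the paper's incremental subrange integration is exact, so treat this only as an illustration of the complexity, not their scheme.

```cpp
#include <cmath>
#include <vector>

// Opacity-only pre-integration table indexed by (front scalar, back scalar).
struct TableEntry { float alpha; };

std::vector<TableEntry> buildTable(const std::vector<float>& extinction,
                                   float segmentLength) {
    const int n = (int)extinction.size();
    // Prefix integral T[k] = integral of extinction over scalar range [0, k).
    std::vector<float> T(n + 1, 0.0f);
    for (int k = 0; k < n; ++k)
        T[k + 1] = T[k] + extinction[k];

    // For a segment from scalar sf to sb, the average extinction is
    // (T[sb]-T[sf])/(sb-sf); each entry then costs O(1), the table O(n^2).
    std::vector<TableEntry> table(n * n);
    for (int sf = 0; sf < n; ++sf)
        for (int sb = 0; sb < n; ++sb) {
            float avg = (sb == sf) ? extinction[sf]
                                   : (T[sb] - T[sf]) / float(sb - sf);
            table[sf * n + sb].alpha = 1.0f - std::exp(-avg * segmentLength);
        }
    return table;
}
```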

83 citations


Proceedings ArticleDOI
19 May 2004
TL;DR: This paper presents a simple approach for rendering isosurfaces of a scalar field using the vertex programming capability of commodity graphics cards, and guarantees the absence of T-junctions by satisfying local bounds in the authors' nested error basis.
Abstract: This paper presents a simple approach for rendering isosurfaces of a scalar field. Using the vertex programming capability of commodity graphics cards, we transfer the cost of computing an isosurface from the Central Processing Unit (CPU), running the main application, to the Graphics Processing Unit (GPU), rendering the images. We consider a tetrahedral decomposition of the domain and draw one quadrangle (quad) primitive per tetrahedron. A vertex program transforms the quad into the piece of isosurface within the tetrahedron (see Figure 2). In this way, the main application is only devoted to streaming the vertices of the tetrahedra from main memory to the graphics card. For adaptively refined rectilinear grids, the optimization of this streaming process leads to the definition of a new 3D space-filling curve, which generalizes the 2D Sierpinski curve used for efficient rendering of triangulated terrains. We maintain the simplicity of the scheme when constructing view-dependent adaptive refinements of the domain mesh. In particular, we guarantee the absence of T-junctions by satisfying local bounds in our nested error basis. The expensive stage of fixing cracks in the mesh is completely avoided. We discuss practical tradeoffs in the distribution of the workload between the application and the graphics hardware. With current GPUs it is convenient to perform certain computations on the main CPU. Beyond the performance considerations, which will change with new generations of GPUs, this approach has the major advantage of completely avoiding the storage in memory of the isosurface vertices and triangles.
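
The per-tetrahedron remapping that the vertex program performs can be illustrated on the CPU: classify the four corner values against the isovalue, interpolate the edge crossings, and emit a (possibly degenerate) quad. This is a simplified sketch; the paper's vertex program works from per-vertex indices and a consistent case ordering, which the loop below does not attempt.

```cpp
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z) };
}

// Returns 0, 3, or 4 isosurface vertices for one tetrahedron; 3 intersections
// are emitted as a degenerate quad, matching "one quad per tetrahedron".
std::vector<Vec3> isoQuad(const std::array<Vec3, 4>& p,
                          const std::array<float, 4>& s, float iso) {
    static const int edges[6][2] = { {0,1},{0,2},{0,3},{1,2},{1,3},{2,3} };
    std::vector<Vec3> out;
    for (auto& e : edges) {
        float a = s[e[0]] - iso, b = s[e[1]] - iso;
        if ((a < 0.0f) != (b < 0.0f))                  // edge crosses the isovalue
            out.push_back(lerp(p[e[0]], p[e[1]], a / (a - b)));
    }
    if (out.size() == 3) out.push_back(out.back());    // triangle -> degenerate quad
    return out;
}
```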

77 citations


Proceedings ArticleDOI
06 Oct 2004
TL;DR: This paper presents a hardware-accelerated solution that further improves extraction performance by explicitly extracting the isosurface geometry in a fragment program while rendering only a single screen-sized quadrilateral.

Abstract: Volume visualization using isosurface extraction is a well-researched topic. Research has demonstrated that even for unstructured grids, peak performances of millions of tetrahedra per second can be achieved by exploiting the parallel processing capabilities of modern GPUs. In this paper we present a hardware-accelerated solution that further improves the extraction performance. In contrast to existing approaches, our technique explicitly extracts the isosurface geometry in a fragment program by rendering only a single screen-sized quadrilateral. The extracted geometry is directly written to an on-board graphics memory object, allowing for direct rendering without further bus transfers. Additionally, the geometry can be manipulated by shader programs or read back to the application for further processing. Examples and application scenarios are given that can benefit from our approach.

70 citations


Patent
26 Feb 2004
TL;DR: In this paper, the authors propose rendering GUI widgets with a generic look and feel by receiving in a display device a master definition of a graphics display, where the master definition includes at least one graphics definition element containing a reference to a protowidget and one or more instance parameter values characterizing an instance of the protowidget.

Abstract: Rendering GUI widgets with a generic look and feel by receiving in a display device a master definition of a graphics display, the master definition including at least one graphics definition element, the graphics definition element including a reference to a protowidget and one or more instance parameter values characterizing an instance of the protowidget, where the protowidget includes a definition of a generic GUI object, including generic display values affecting the overall look and feel of the graphics display; and rendering at least one instance of the protowidget to the graphics display in dependence upon the generic display values and the instance parameter values.

Proceedings ArticleDOI
16 Jun 2004
TL;DR: This work presents an alternative to the image-based approach by splitting the rendering process between client and server and transferring only a small set of 2D line primitives over the network, which are rendered locally by the mobile device.

Abstract: There is a growing interest in providing interactive graphics on mobile devices like PDAs, smartphones, etc. Right now, mobile devices are very limited in graphical resources: basically, only simple 2D rasterization operations are available, and these run completely on the main CPU of the device. The desire to have more complex 3D graphics on these devices often results in a remote rendering solution. Classical remote rendering solutions produce the final images on a server and transfer them to the client device over a wired or wireless network. We present an alternative to this image-based approach by splitting the rendering process between client and server, transferring only a small set of 2D line primitives over the network, which are rendered locally by the mobile device.

Journal ArticleDOI
TL;DR: This work strives for flexible rendering - that is, rendering only interior hierarchy nodes as representatives of their subtrees - resulting in a fast, one-pass shadow-mapping algorithm.
Abstract: We have seen the growing deployment of ubiquitous computing devices and the proliferation of complex virtual environments. As demand for detailed and high-quality geometric models increases, typical scene size (often including scanned 3D objects) easily reaches millions of geometric primitives. Traditionally, vertices and polygons (faces) represent 3D objects. These representations, coupled with the traditional rendering pipeline, don't adequately support display of complex scenes on different types of platforms with heterogeneous rendering capabilities. To accommodate these constraints, we use a packed hierarchical point-based representation for rendering. Point-based rendering offers a simple-to-use level-of-detail mechanism in which we can adapt the number of points rendered to the underlying object's screen size. Our work strives for flexible rendering - that is, rendering only the interior hierarchy nodes as representatives of the subtree. In particular, we avoid traversal of the entire hierarchy and reconstruction of model attributes (such as normals and color information) for interior nodes because both operations can be prohibitively expensive. Flexible rendering also lets us traverse the hierarchy in a specific order, resulting in a fast, one-pass shadow-mapping algorithm.
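
A minimal sketch of that traversal policy follows, with illustrative node fields and an illustrative screen-size test (radius over distance as a crude angular-size proxy). The key property from the abstract is preserved: when an interior node is small enough on screen, its precomputed attributes are drawn and the subtree is never touched.

```cpp
#include <cmath>
#include <vector>

struct PointNode {
    float cx, cy, cz, radius;        // bounding sphere
    float nx, ny, nz;                // precomputed average normal, so attributes
                                     // need not be reconstructed from the subtree
    const PointNode* children[8];    // children[0] == nullptr when leaf
};

// Collect the nodes to splat: stop at any node whose projected size is small.
void renderFlexible(const PointNode* n, const float eye[3],
                    float pixelThreshold, std::vector<const PointNode*>& out) {
    float dx = n->cx - eye[0], dy = n->cy - eye[1], dz = n->cz - eye[2];
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    float projected = n->radius / (dist > 1e-6f ? dist : 1e-6f);
    bool isLeaf = (n->children[0] == nullptr);
    if (isLeaf || projected < pixelThreshold) {
        out.push_back(n);            // interior node acts as its subtree's proxy
        return;                      // the subtree is never traversed
    }
    for (auto* c : n->children)
        if (c) renderFlexible(c, eye, pixelThreshold, out);
}
```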

Patent
30 Jun 2004
TL;DR: In this paper, a system and method for optimizing the performance of a graphics intensive software program for graphics acceleration hardware is presented, which encompasses a procedure that validates the different functions of a 3D acceleration capable video card, decides whether to use the acceleration hardware and optimizes the software application to selectively use the functions that work on the specific video acceleration card.
Abstract: A system and method for optimizing the performance of a graphics-intensive software program for graphics acceleration hardware. This system and method encompass a procedure that validates the different functions of a 3D-acceleration-capable video card, decides whether to use the acceleration hardware, and optimizes the software application to selectively use the functions that work on the specific video acceleration card. Functions checked include sub-pixel positioning, opacity, color replacement, and fog. If these tests are successful, then graphics acceleration is used by the software application. However, if the tests are not successful, the decision is made not to use the graphics accelerator. Those with ordinary skill in the art will realize that it is not necessary to perform all of the tests in a specific order. Additionally, other types of tests could be performed to ensure software application and video card compatibility before the software application uses graphics acceleration to render 3D graphics.

Proceedings ArticleDOI
24 Oct 2004
TL;DR: A complete transmission system for efficient free viewpoint video extraction, representation, coding, and interactive rendering based on a shape-from-silhouette algorithm and algorithms for view-dependent texture mapping is presented.
Abstract: Free viewpoint video provides the possibility to freely navigate within dynamic real world video scenes by choosing arbitrary viewpoints and view directions. So far, related work only considered free viewpoint video extraction, representation, and rendering methods. Compression and transmission has not yet been studied in detail and combined with the other components into one complete system. In this paper, we present such a complete system for efficient free viewpoint video extraction, representation, coding, and interactive rendering. Data representation is based on 3D mesh models and view-dependent texture mapping using video textures. The geometry extraction is based on a shape-from-silhouette algorithm. The resulting voxel models are converted into 3D meshes that are coded using MPEG-4 SNHC tools. The corresponding video textures are coded using an H.264/AVC codec. Our algorithms for view-dependent texture mapping have been adopted as an extension of MPEG-4 AFX. The presented results illustrate that based on the proposed methods a complete transmission system for efficient free viewpoint video can be built.
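
A common way to realize view-dependent texture mapping is to weight each source camera by how well its viewing direction agrees with the virtual viewpoint. The cosine weighting below is an illustrative assumption in plain C++, not necessarily the exact scheme adopted into MPEG-4 AFX; directions are assumed normalized.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Dir { float x, y, z; };   // unit direction vectors

static float dot(const Dir& a, const Dir& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Normalized blend weights over all source cameras for one surface point.
std::vector<float> blendWeights(const Dir& virtualView,
                                const std::vector<Dir>& cameraDirs) {
    std::vector<float> w(cameraDirs.size());
    float sum = 0.0f;
    for (std::size_t i = 0; i < w.size(); ++i) {
        w[i] = std::max(0.0f, dot(virtualView, cameraDirs[i])); // back-facing cameras get 0
        sum += w[i];
    }
    if (sum > 0.0f)
        for (float& wi : w) wi /= sum;   // weights sum to one
    return w;
}
```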

Proceedings ArticleDOI
21 Jun 2004
TL;DR: To render the interactive visualization of the "Boeing 777" model, a highly complex model of 350 million individual triangles, a combination of real-time ray tracing, a low-level out of core caching and demand loading strategy, and a hierarchical, hybrid volumetric/lightfield-like approximation scheme for representing not-yet-loaded geometry is used.
Abstract: With the tremendous advances in both hardware capabilities and rendering algorithms, rendering performance is steadily increasing. Even consumer graphics hardware can render many million triangles per second. However, scene complexity seems to be rising even faster than rendering performance, with no end to even more complex models in sight. In this paper, we are targeting the interactive visualization of the "Boeing 777" model, a highly complex model of 350 million individual triangles, which - due to its sheer size and complex internal structure – simply cannot be handled satisfactorily by today's techniques. To render this model, we use a combination of real-time ray tracing, a low-level out of core caching and demand loading strategy, and a hierarchical, hybrid volumetric/lightfield-like approximation scheme for representing not-yet-loaded geometry. With this approach, we are able to render the full 777 model at several frames per second even on a single commodity desktop PC.
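
The demand-loading idea can be pictured as follows: a ray asks for the geometry of a spatial cell; if the cell is not resident, it is queued for asynchronous loading and the ray shades a cheap, always-resident volumetric proxy instead of stalling. All types and fields in this C++ sketch are illustrative, not taken from the paper.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Triangle { float v[9]; };                    // three packed vertices
struct ProxySample { float color[3]; float opacity; }; // coarse stand-in per cell

class GeometryCache {
public:
    // Returns resident triangles, or null after scheduling an async load.
    const std::vector<Triangle>* acquire(uint64_t cellId) {
        auto it = resident.find(cellId);
        if (it != resident.end()) return &it->second;
        loadQueue.push_back(cellId);   // consumed by loader threads (not shown)
        return nullptr;
    }
    // Tiny approximation used to shade rays whose geometry is not yet loaded.
    const ProxySample& proxyFor(uint64_t cellId) const {
        return proxies.at(cellId);
    }
private:
    std::unordered_map<uint64_t, std::vector<Triangle>> resident;
    std::unordered_map<uint64_t, ProxySample> proxies;
    std::vector<uint64_t> loadQueue;
};
```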

Patent
16 Dec 2004
TL;DR: In this paper, a synchronizing agent detects the local ready event and generates a global ready event after all of the graphics processors have generated local ready events, which is transmitted to each graphics processor, which responds by resuming its rendering activity.
Abstract: Coherence of displayed images is provided for graphics processing systems having multiple processors operating to render different portions of a current image in parallel. As each processor completes rendering of its portion of the current image, it generates a local ready event, then pauses its rendering operations. A synchronizing agent detects the local ready events and generates a global ready event after all of the graphics processors have generated local ready events. The global ready event is transmitted to each graphics processor, which responds by resuming its rendering activity.
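
In software terms, the described synchronization is a counting barrier: local ready events increment a counter, and the last arrival fires the global ready event that releases everyone. The C++ sketch below models the agent with threads; it is only an analogy, since the patent concerns hardware- and driver-level signaling.

```cpp
#include <condition_variable>
#include <mutex>

class FrameBarrier {
public:
    explicit FrameBarrier(int n) : total(n), arrived(0), generation(0) {}

    // Called by each renderer when its image portion is complete.
    void localReady() {
        std::unique_lock<std::mutex> lock(m);
        int gen = generation;
        if (++arrived == total) {        // last one in: fire the global event
            arrived = 0;
            ++generation;
            cv.notify_all();
        } else {                         // pause rendering until global ready
            cv.wait(lock, [&] { return gen != generation; });
        }
    }

private:
    std::mutex m;
    std::condition_variable cv;
    int total, arrived, generation;     // generation counter avoids spurious release
};
```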

Proceedings ArticleDOI
07 Jun 2004
TL;DR: Interactive techniques to control and render scenes using nonlinear projections, implemented in Maya and used in the production of the animation Ryan, demonstrate how geometric and rendering effects resulting from non linear projections can be seamlessly introduced into current production pipelines.
Abstract: Artistic rendering is an important research area in Computer Graphics, yet relatively little attention has been paid to the projective properties of computer generated scenes. Motivated by the surreal storyboard of an animation in production---Ryan---this paper describes interactive techniques to control and render scenes using nonlinear projections. The paper makes three contributions. First, we present a novel approach that distorts scene geometry such that when viewed through a standard linear perspective camera, the scene appears nonlinearly projected. Second, we describe a framework for the interactive authoring of nonlinear projections defined as a combination of scene constraints and a number of linear perspective cameras. Finally, we address the impact of nonlinear projection on rendering and explore various illumination effects. These techniques, implemented in Maya and used in the production of the animation Ryan, demonstrate how geometric and rendering effects resulting from nonlinear projections can be seamlessly introduced into current production pipelines.

Journal ArticleDOI
01 Aug 2004
TL;DR: An algorithm for rendering faceted colored gemstones in real time, using graphics hardware based on a number of controlled approximations of the physical phenomena involved when light enters a stone, which permit an implementation based on the most recent -- yet commonly available -- hardware features such as fragment programs, cube-mapping.
Abstract: We present an algorithm for rendering faceted colored gemstones in real time, using graphics hardware. Beyond the technical challenge of handling the complex behavior of light in such objects, real-time high-quality rendering of gemstones has direct applications in the field of jewelry prototyping, which has now become a standard practice for replacing tedious (and less interactive) wax carving methods. Our solution is based on a number of controlled approximations of the physical phenomena involved when light enters a stone, which permit an implementation based on the most recent -- yet commonly available -- hardware features such as fragment programs and cube mapping.

Proceedings Article
01 Jan 2004
TL;DR: This paper presents a novel implementation of the Fast Fourier Transform called Split-Stream-FFT, which maps the recursive structure of the FFT to the GPU in an efficient way and visualizes large volumetric data set in interactive frame rates on a mid-range computer system.
Abstract: The Fourier volume rendering technique operates in the frequency domain and creates line integral projections of a 3D scalar field. These projections can be efficiently generated in O(N² log N) time by utilizing the Fourier Slice-Projection theorem. However, until now, the mathematical difficulty of the Fast Fourier Transform prevented acceleration by graphics hardware and therefore limited a wider use of this visualization technique in state-of-the-art applications. In this paper we describe how to utilize current commodity graphics hardware to perform Fourier volume rendering directly on the GPU. We present a novel implementation of the Fast Fourier Transform: this Split-Stream-FFT maps the recursive structure of the FFT to the GPU in an efficient way. Additionally, high-quality resampling within the frequency domain is discussed. Our implementation visualizes large volumetric data sets at interactive frame rates on a mid-range computer system.
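
For reference, the projection-slice theorem behind that complexity bound can be stated compactly (axis-aligned projection shown; an arbitrary view direction corresponds to a rotated central slice of the spectrum):

```latex
% A line-integral projection of f along z, and its 2D Fourier transform:
P(x, y) = \int_{-\infty}^{\infty} f(x, y, z)\, dz
\quad \Longrightarrow \quad
\widehat{P}(u, v) = \hat{f}(u, v, 0)
```

After a one-time 3D FFT of the volume, each new view therefore costs only the resampling of an N×N central slice plus one 2D inverse FFT, which is where the O(N² log N) per-projection bound comes from.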

Proceedings ArticleDOI
16 Jun 2004
TL;DR: This paper presents an AR framework that uses both photorealistic and non-photorealistic rendering techniques; prototypes based on these techniques show the advantages and disadvantages of both technologies in combination with Augmented Reality.

Abstract: Today's graphics hardware is becoming more and more powerful. Consequently, virtual scenes can be rendered in very good quality, integrating dynamic behavior, real-time shadows, bump-mapped surfaces, and other photorealistic rendering techniques. On the other hand, non-photorealistic rendering has become popular as well because of its artistic merits. An integration into an AR environment is the logical consequence. In this paper we present an AR framework that uses photorealistic as well as non-photorealistic rendering techniques. The prototypes based on these techniques show the advantages and disadvantages of both technologies in combination with Augmented Reality.

Patent
31 Mar 2004
TL;DR: In this paper, a method and apparatus for rendering three-dimensional graphics using a streaming render-cache with a multi-threading, multi-core graphics processor are disclosed, which includes a streaming rend-cache and a controller to maintain the order in which threads are dispatched to the graphics engine, and to maintain data coherency between the render cache and the main memory.
Abstract: A method and apparatus for rendering three-dimensional graphics using a streaming render-cache with a multi-threading, multi-core graphics processor are disclosed. The graphics processor includes a streaming render-cache and render-cache controller to maintain the order in which threads are dispatched to the graphics engine, and to maintain data coherency between the render-cache and the main memory. The render-cache controller blocks threads from being dispatched to the graphics engine out of order by only allowing one sub-span to be in-flight at any given time.

Proceedings ArticleDOI
07 Jun 2004
TL;DR: This paper presents an interactive non-photorealistic rendering system that stylizes and renders outdoor scenes captured by 3D laser scanning by designing novel data structures and algorithms as well as leveraging new features of commodity graphics hardware.
Abstract: This paper presents an interactive non-photorealistic rendering (NPR) system that stylizes and renders outdoor scenes captured by 3D laser scanning. In order to deal with the large size, complexity and inherent incompleteness of data obtained from outdoor scans, our system represents outdoor scenes using points instead of traditional polygons. Algorithms are then developed to extract, stylize and render features from this point representation. In addition to conveying various NPR styles, our system also promises consistency in animation by maintaining stroke coherence and density. We achieve NPR of large data at interactive rates by designing novel data structures and algorithms as well as leveraging new features of commodity graphics hardware.

Journal ArticleDOI
TL;DR: A platform-independent occlusion culling library for dynamic environments, dPVS, can benefit such applications as CAD and modeling tools, time-varying simulations, and computer games.
Abstract: A platform-independent occlusion culling library for dynamic environments, dPVS, can benefit such applications as CAD and modeling tools, time-varying simulations, and computer games. Visibility optimization is currently the most effective technique for improving rendering performance in complex 3D environments. The primary reason for this is that during each frame the pixel processing subsystem needs to determine the visibility of each pixel individually. Currently, rendering performance in larger scenes is input sensitive, and most of the processing time is wasted on rendering geometry not visible in the final image. Here we concentrate on real-time visualization using mainstream graphics hardware that has a z-buffer as a de facto standard for hidden surface removal. In an ideal system only the complexity of the geometry actually visible on the screen would significantly impact rendering time - 3D application performance should be output sensitive.

Proceedings ArticleDOI
19 May 2004
TL;DR: A new approach that allows the interactive rendering and navigation of procedurally-encoded 3D scalar fields by reconstructing these fields on PC class graphics processing units, and can take advantage of the Moore's Law cubed increase in performance of graphics hardware.
Abstract: While interactive visualization of rectilinear gridded volume data sets can now be accomplished using texture mapping hardware on commodity PCs, interactive rendering and exploration of large scattered or unstructured data sets is still a challenging problem. We have developed a new approach that allows the interactive rendering and navigation of procedurally-encoded 3D scalar fields by reconstructing these fields on PC class graphics processing units. Since the radial basis functions (RBFs) we use for encoding can provide a compact representation of volumetric scalar fields, the large grid/mesh traditionally needed for rendering is no longer required and ceases to be a data transfer and computational bottleneck during rendering. Our new approach will interactively render RBF encoded data obtained from arbitrary volume data sets, including both structured volume models and unstructured scattered volume models. This procedural reconstruction of large data sets is flexible, extensible, and can take advantage of the Moore's Law cubed increase in performance of graphics hardware.
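
The reconstruction itself is just a weighted sum of kernels, which is what the paper evaluates per fragment. A minimal CPU sketch with Gaussian RBFs follows; the single global width and the absence of a polynomial term are simplifying assumptions (real encoders typically fit per-RBF widths).

```cpp
#include <cmath>
#include <vector>

struct RBF { float cx, cy, cz, weight; };   // center and fitted weight

// Evaluate the encoded scalar field at an arbitrary point: no grid required.
float evalField(const std::vector<RBF>& rbfs, float sigma,
                float x, float y, float z) {
    float value = 0.0f;
    float inv2s2 = 1.0f / (2.0f * sigma * sigma);
    for (const RBF& r : rbfs) {
        float dx = x - r.cx, dy = y - r.cy, dz = z - r.cz;
        float d2 = dx * dx + dy * dy + dz * dz;
        value += r.weight * std::exp(-d2 * inv2s2);  // Gaussian kernel
    }
    return value;
}
```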

Proceedings ArticleDOI
02 Jun 2004
TL;DR: The progressive block-based refinement nature of the rendering traversal is well suited to hiding out-of-core data access latency, and lends itself well to incorporating backface, view-frustum, and occlusion culling, as well as compression and view-dependent progressive transmission.

Abstract: We present a simple point-based multiresolution structure for interactive visualization of very large point sampled models on consumer graphics platforms. The structure is based on a hierarchy of precomputed object-space point clouds. At rendering time, the clouds are combined coarse-to-fine with a top-down structure traversal to locally adapt sample densities according to the projected size in the image. Since each cloud is made of a few thousand samples, the multiresolution extraction cost is amortized over many graphics primitives, and host-to-graphics communication effectively exploits on-board caching and object-based rendering APIs. The progressive block-based refinement nature of the rendering traversal is well suited to hiding out-of-core data access latency, and lends itself well to incorporating backface, view-frustum, and occlusion culling, as well as compression and view-dependent progressive transmission. The resulting system allows rendering of complex models at high frame rates (over 60M splats/second), supports network streaming, and is fundamentally simple to implement.
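
A sketch of the coarse-to-fine selection: node clouds are refined in order of projected size until every emitted cloud meets the screen-space tolerance, so each selected node stands for thousands of splats at once. The node fields and the error metric here are illustrative assumptions, not the paper's exact data layout.

```cpp
#include <cmath>
#include <queue>
#include <vector>

struct CloudNode {
    float projectedError(const float eye[3]) const {
        float dx = cx - eye[0], dy = cy - eye[1], dz = cz - eye[2];
        return extent / std::sqrt(dx * dx + dy * dy + dz * dz + 1e-12f);
    }
    float cx, cy, cz, extent;                 // bounding info of the cloud
    std::vector<const CloudNode*> children;   // empty at the finest level
};

// Returns the set of clouds to draw for this viewpoint and tolerance.
std::vector<const CloudNode*> selectClouds(const CloudNode* root,
                                           const float eye[3], float tol) {
    auto coarser = [&](const CloudNode* a, const CloudNode* b) {
        return a->projectedError(eye) < b->projectedError(eye);
    };
    std::priority_queue<const CloudNode*, std::vector<const CloudNode*>,
                        decltype(coarser)> open(coarser);
    std::vector<const CloudNode*> render;
    open.push(root);
    while (!open.empty()) {
        const CloudNode* n = open.top(); open.pop();
        if (n->children.empty() || n->projectedError(eye) <= tol)
            render.push_back(n);              // cloud is fine enough: draw it
        else
            for (auto* c : n->children) open.push(c);
    }
    return render;
}
```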

Proceedings ArticleDOI
08 Aug 2004
TL;DR: A perceptually-based image comparison process that can be used to tell when images are perceptually identical even though they contain some numerical differences is described.
Abstract: This paper describes a perceptually-based image comparison process that can be used to tell when images are perceptually identical even though they contain some numerical differences. The technique has shown much utility in the production testing of rendering software.
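
As a flavor of the idea (explicitly not the paper's actual metric), a toy comparator can work in luminance with a per-pixel tolerance and allow a small fraction of failing pixels; real perceptual metrics additionally model contrast sensitivity and spatial masking. The thresholds below are guesses.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <vector>

struct Rgb { uint8_t r, g, b; };

// True if images differ only within a luminance tolerance for almost all pixels.
bool perceptuallyIdentical(const std::vector<Rgb>& a, const std::vector<Rgb>& b,
                           int lumTolerance = 4, double maxFailFraction = 0.001) {
    if (a.size() != b.size()) return false;
    std::size_t failing = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        // Rec. 601 integer luma approximation.
        int la = (299 * a[i].r + 587 * a[i].g + 114 * a[i].b) / 1000;
        int lb = (299 * b[i].r + 587 * b[i].g + 114 * b[i].b) / 1000;
        if (std::abs(la - lb) > lumTolerance) ++failing;
    }
    return failing <= (std::size_t)(maxFailFraction * a.size());
}
```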

Proceedings ArticleDOI
23 Aug 2004
TL;DR: It is shown that graphics devices parallelize well and provide significant speedup over a CPU implementation, providing an immediately constructible low cost architecture well suited for pattern recognition and computer vision.
Abstract: Pattern recognition and computer vision tasks are computationally intensive, repetitive, and often exceed the capabilities of the CPU, leaving little time for higher level tasks. We present a novel computer architecture which uses multiple commodity computer graphics devices to perform pattern recognition and computer vision tasks many times faster than the CPU. This is a parallel computing architecture that is quickly and easily constructed from the readily available hardware. It is based on parallel processing done on multiple graphics processing units (GPUs). An eigenspace image recognition approach is implemented on this parallel graphics architecture. This paper discusses methods of mapping computer vision algorithms to run efficiently on multiple graphics devices to maximally utilize the underlying graphics hardware. The additional memory and memory bandwidth provided by the graphics hardware allowed for a significant speedup of the eigenspace approach. We show that graphics devices parallelize well and provide significant speedup over a CPU implementation, providing an immediately constructible low cost architecture well suited for pattern recognition and computer vision.
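
Numerically, the eigenspace approach reduces to large dot products (projecting the input image onto eigenimages) followed by a nearest-neighbor search over coefficient vectors, both of which parallelize naturally. A plain C++ reference of those two steps, assuming mean-centered images and precomputed eigenimages; the multi-GPU mapping, tiling, and readback are the paper's actual contribution and are not shown.

```cpp
#include <cstddef>
#include <limits>
#include <vector>

using Vec = std::vector<float>;

// Project a (mean-centered) image onto k eigenimages: k large dot products.
Vec project(const Vec& image, const std::vector<Vec>& eigenimages) {
    Vec coeffs(eigenimages.size(), 0.0f);
    for (std::size_t k = 0; k < eigenimages.size(); ++k)
        for (std::size_t i = 0; i < image.size(); ++i)
            coeffs[k] += eigenimages[k][i] * image[i];
    return coeffs;
}

// Match against stored coefficient vectors by squared Euclidean distance.
int nearest(const Vec& c, const std::vector<Vec>& database) {
    int best = -1;
    float bestD = std::numeric_limits<float>::max();
    for (std::size_t j = 0; j < database.size(); ++j) {
        float d = 0.0f;
        for (std::size_t k = 0; k < c.size(); ++k) {
            float diff = c[k] - database[j][k];
            d += diff * diff;
        }
        if (d < bestD) { bestD = d; best = (int)j; }
    }
    return best;
}
```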

Patent
14 Oct 2004
TL;DR: In this article, the server computer determines whether the client computer is able to generate graphics using the higher-level graphics commands or generates graphics using relatively lower level graphics commands, depending on whether the user can generate graphics with higher level or lower level.
Abstract: A server computer hosts one or more application programs that are accessed by a client computer. Higher-level graphics commands describing graphics images are received from the application programs. The server computer determines whether the client computer is able to generate graphics using the higher-level graphics commands or generates graphics using relatively lower-level graphics commands. The server computer sends higher-level or relatively lower-level graphics commands depending on whether the client computer generates graphics using higher-level or relatively lower-level graphics commands.

Proceedings ArticleDOI
17 Oct 2004
TL;DR: A new partitioning scheme based on the rendering time of the previous frame is proposed in order to achieve load balance among the rendering nodes, and a strategy to assign tiles to rendering nodes that effectively uses the available graphics resources, thus improving rendering performance.
Abstract: In this paper we present a multi-threaded sort-first distributed rendering system. In order to achieve load balance among the rendering nodes, we propose a new partitioning scheme based on the rendering time of the previous frame. The proposed load-balancing algorithm is very simple to implement and works well for both geometry- and rasterization-bound models. We also propose a strategy to assign tiles to rendering nodes that effectively uses the available graphics resources, thus improving rendering performance.
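
The previous-frame-time heuristic can be sketched in a few lines. This hypothetical helper resizes vertical screen strips so that, assuming per-pixel cost stays constant between frames, all nodes would finish simultaneously; a real system would smooth the estimates over several frames and handle the tile-assignment strategy separately.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Given last frame's strip widths and measured render times, return new
// widths that equalize predicted render time across all nodes.
std::vector<float> rebalance(const std::vector<float>& widths,
                             const std::vector<float>& frameTimes) {
    std::size_t n = widths.size();
    std::vector<float> costPerWidth(n);
    float totalWidth = 0.0f, harmonic = 0.0f;
    for (std::size_t i = 0; i < n; ++i) {
        // Estimated cost per unit width, from the previous frame.
        costPerWidth[i] = std::max(frameTimes[i], 1e-6f) / widths[i];
        totalWidth += widths[i];
    }
    for (std::size_t i = 0; i < n; ++i)
        harmonic += 1.0f / costPerWidth[i];
    // Width inversely proportional to measured cost: predicted times equalize,
    // since cost_i * width_i' = totalWidth / harmonic for every node i.
    std::vector<float> next(n);
    for (std::size_t i = 0; i < n; ++i)
        next[i] = totalWidth * (1.0f / costPerWidth[i]) / harmonic;
    return next;
}
```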