Proceedings ArticleDOI

Improving the performance of hierarchical scheduling for rendering

01 Nov 2014-pp 457-460
TL;DR: This paper proposes a hierarchical scheduling policy for rendering to improve its performance relative to presently available methods, and evaluates the approach against existing rendering approaches to show the improvement in performance.
Abstract: With the improvement and development of high-performance computing systems, the need arises to use the resources at hand efficiently. Rendering is one kind of application well suited to high-performance computing. In the modern era of computing, quad-core processors are widely available, yet the processes and data we run on them are in many cases still based on serial algorithms. Because rendering demands substantial computation and involves huge data access, it can be broken into smaller subtasks of the same nature to be executed on different processors. The parallel approach works on the principle of computing these similar subtasks on the different available processors in parallel: at the end of one unit of time, we hold the results of as many subtasks as there are processors. The key concerns are how to share resources efficiently between subtasks and how to balance the load across the processors. In this paper we propose a hierarchical scheduling policy for rendering to improve its performance compared to presently available methods. We evaluate the approach against existing rendering approaches to show the improvement in performance.
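The decomposition the abstract describes can be sketched in a few lines. This is an illustrative stand-in, not the paper's implementation: `render_frame` is a hypothetical placeholder for a real renderer, and a thread pool stands in for the processor pool.

```python
# Illustrative sketch of parallel rendering decomposition: N independent
# frames are farmed out across a fixed pool of workers, so after one "unit
# of time" roughly as many subtasks finish as there are workers.
from concurrent.futures import ThreadPoolExecutor

def render_frame(frame_id):
    """Hypothetical stand-in for a real renderer: returns (frame_id, pixels)."""
    return frame_id, [frame_id * 10 + x for x in range(4)]  # fake pixel data

def render_in_parallel(frame_ids, workers=4):
    # Each worker pulls independent frames of the same nature; results are
    # collected into a frame_id -> pixels mapping.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(render_frame, frame_ids))

frames = render_in_parallel(range(8))
```

How the scheduler shares the pool between competing subtasks and balances load per processor is exactly the question the paper's hierarchical policy addresses.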
Citations
Book ChapterDOI
Qian Li1, Weiguo Wu1, Long Xu1, Jianhang Huang1, Mingxia Feng1 
28 Oct 2016
TL;DR: The proposed statistics-based prediction method improves prediction accuracy to about 60 % and 75.74 % for the training set and test set respectively, which provides a reasonable basis for scheduling jobs efficiently and saving rendering cost.
Abstract: As an interesting commercial application, rendering plays an important role in animation and movie production. Generally, a render farm is used to render masses of images concurrently, exploiting the independence among frames. How to schedule and manage various rendering jobs efficiently is a significant issue for a render farm, so predicting the rendering time of frames is relevant for scheduling: it offers a reference and basis for the scheduling method. In this paper a statistics-based prediction method is presented. Initially, appropriate parameters that affect rendering time are extracted and analyzed by parsing blend-formatted files, which offer a general description of the synthetic scene. Then, sample data are gathered with the open-source software Blender, and the J48 classification algorithm is used to predict rendering time. The experimental results show that the proposed method improves prediction accuracy to about 60 % and 75.74 % for the training set and test set respectively, which provides a reasonable basis for scheduling jobs efficiently and saving rendering cost.
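The pipeline this abstract describes (extract scene parameters, gather timed samples, classify new frames into time buckets) can be sketched as follows. This is a dependency-free illustration: the feature names are hypothetical, and a 1-nearest-neighbour lookup stands in for the J48 decision-tree classifier the paper actually uses.

```python
# Hedged sketch of statistics-based render-time prediction: extract scene
# parameters believed to influence rendering time, then classify a new
# frame by its nearest previously-timed sample (stand-in for Weka's J48).

def features(scene):
    # Hypothetical parameters one might parse from a .blend scene description.
    return (scene["polygons"], scene["lights"], scene["resolution"])

def predict_bucket(samples, scene):
    """samples: list of (feature_tuple, time_bucket) gathered from Blender runs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    f = features(scene)
    return min(samples, key=lambda s: dist(s[0], f))[1]

samples = [((1000, 2, 1080), "short"), ((500000, 8, 2160), "long")]
bucket = predict_bucket(samples, {"polygons": 1200, "lights": 3, "resolution": 1080})
```

A scheduler could then place predicted-"long" frames on dedicated nodes while packing "short" frames together, which is the kind of decision the predicted buckets are meant to inform.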

1 citation


Cites background from "Improving the performance of hierar..."

  • ...Absorption intensity of ambient light [1-10] [50....


  • ...Generally speaking, research focuses on the optimization and design of scheduling policies [9, 26, 19]....


Proceedings ArticleDOI
Mu Kaihui1, Bo Wang1, Chu Qiu1, Pengdong Gao1, Qi Quan1, Yongquan Lu1 
01 Oct 2020
TL;DR: An evaluation experiment between the non-optimized system and the optimized one shows that the optimized system achieves good performance compared to the old system.
Abstract: The rendering cloud platform provides cloud rendering services for animated films, special effects, animated series, effect drawing, creative design and other rendering work. In this paper, an in-depth study is made of the efficiency and time consumption of short-time rendering frames in a rendering cloud platform with a management scheduling node. An optimized dynamic requesting-interval system is proposed to enhance the performance of this rendering platform and achieve maximum utilization of rendering nodes when rendering short-time frames. An evaluation experiment between the non-optimized system and the optimized one shows that the optimized system achieves good performance compared to the old system.
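The abstract does not detail the requesting-interval policy, so the following is only a plausible sketch of the idea: a rendering node polls the scheduling node more frequently while short-time frames keep arriving and backs off exponentially when the queue runs dry, bounded by minimum and maximum intervals. The function name and bounds are assumptions, not the paper's design.

```python
# Hypothetical dynamic requesting-interval policy: halve the polling
# interval while jobs are available (keeps nodes busy on short frames),
# double it when the queue is idle (avoids hammering the scheduler).
def next_interval(current, got_job, min_s=0.5, max_s=30.0):
    if got_job:
        return max(min_s, current / 2)  # poll faster while work is available
    return min(max_s, current * 2)      # back off when the queue is idle

interval = 8.0
for got_job in [True, True, False]:
    interval = next_interval(interval, got_job)
```

The trade-off this balances is the one the paper studies: with short-time frames, a fixed long interval leaves rendering nodes idle between jobs, while a fixed short interval loads the management scheduling node.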

Cites background from "Improving the performance of hierar..."

  • ...The challenges also increase as the nodes, jobs, projects and users increase in a large cloud rendering platform: (1) stability is affected by the number of rendering nodes, the network environment, storage stability and performance, and job complexity [7]; (2) the functional requirements grow greatly to meet the needs of different job types and different users; (3) performance is influenced by the number of rendering nodes, rendering job types, the network environment, storage performance and the scheduling strategy [8]; (4) ever higher security must be provided to protect users’ data in a cloud platform [9]; and (5) the user experience should also be improved to attract new rendering users and improve user retention [10]....


References
Proceedings ArticleDOI
15 Oct 2007
TL;DR: A novel point-based volume rendering technique is introduced in which tiny opaque particles are generated from a 3D scalar field using a user-specified transfer function and the rejection method, and the technique is applied to volume rendering of multiple volume data as well as irregular volume data.
Abstract: In this paper, we introduce a novel point-based volume rendering technique based on tiny particles. In the proposed technique, a set of tiny opaque particles is generated from a given 3D scalar field based on a user-specified transfer function and the rejection method. The final image is then generated by projecting these particles onto the image plane. The particle projection does not need to be in order, since particle transparency values are not taken into account. During the projection stage, only a simple depth-order comparison is required to eliminate occluded particles. This is the main characteristic of the proposed technique and should greatly facilitate distributed processing. Semi-transparency is one of the main characteristics of volume rendering, and in the proposed technique the quantity of projected particles greatly influences the degree of semi-transparency. Sub-pixel processing is used to obtain the semi-transparent effect by controlling the projection of multiple particles onto each pixel area; the final pixel value is obtained by averaging the contributions of the projected particles. To verify its usefulness, we applied the proposed technique to volume rendering of multiple volume data as well as irregular volume data.
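The two core steps of the abstract (rejection sampling of opaque particles against a transfer function, then order-independent projection with a per-pixel depth comparison) can be sketched minimally. This is an illustration under simplifying assumptions: the scalar field is a tiny voxel dictionary, the "projection" is axis-aligned, and no sub-pixel averaging is shown.

```python
# Illustrative sketch of the particle approach: accept opaque particles
# via the rejection method (acceptance probability = transfer function of
# the local scalar), then keep only the nearest particle per pixel.
import random

def generate_particles(field, transfer, n, rng):
    """field: dict (x, y, z) -> scalar; transfer: scalar -> density in [0, 1]."""
    voxels = list(field)
    out = []
    while len(out) < n:
        v = rng.choice(voxels)
        if rng.random() < transfer(field[v]):   # rejection method
            out.append(v)
    return out

def project(particles):
    zbuf = {}
    for x, y, z in particles:
        # Particles are opaque, so no ordering or alpha blending is needed:
        # a simple depth-order comparison per pixel suffices.
        if (x, y) not in zbuf or z < zbuf[(x, y)]:
            zbuf[(x, y)] = z
    return zbuf

rng = random.Random(0)
field = {(x, y, z): z / 3 for x in range(2) for y in range(2) for z in range(4)}
zbuf = project(generate_particles(field, lambda s: s, 50, rng))
```

Because each particle is processed independently and only a per-pixel minimum is kept, the projection parallelizes trivially, which is the property the abstract highlights for distributed processing.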

47 citations

Journal ArticleDOI
TL;DR: SCIVE is a simulation core for intelligent virtual environments based on a semantic net, which ties together the data representations of the various simulation modules, e.g., graphics, physics, audio, haptics or Artificial Intelligence (AI) representations.
Abstract: This paper introduces SCIVE, a Simulation Core for Intelligent Virtual Environments. SCIVE provides a Knowledge Representation Layer (KRL) as a central organizing structure. Based on a semantic net, it ties together the data representations of the various simulation modules, e.g., for graphics, physics, audio, haptics or Artificial Intelligence (AI) representations. SCIVE's open architecture allows seamless integration and modification of these modules. Their data synchronization is widely customizable to support extensibility and maintainability. Synchronization can be controlled through filters, which in turn can be instantiated and parametrized by any of the modules; e.g., the AI component can change an object's behavior to be controlled by the physics module instead of the interaction or keyframe module. This bidirectional inter-module access is mapped by, and routed through, the KRL, which semantically reflects all objects or entities the simulation comprises. Hence, SCIVE allows extensive application design and customization, from low-level core logic, module configuration and flow control to the simulated scene, all on a high-level unified representation layer, while supporting well-known development paradigms commonly found in Virtual Reality applications.

16 citations

Book ChapterDOI
19 Jun 2005
TL;DR: This paper presents a novel and efficient two-phase scheduling method, evaluated both theoretically and via simulation using large and detailed traces collected in DreamWorks Animation's production environment, and reports a surprising performance anomaly involving a workload parameter previously identified as crucial to performance.
Abstract: Recently HP Labs engaged in a joint project with DreamWorks Animation to develop a Utility Rendering Service that was used to render part of the computer-animated feature film Shrek 2. In a companion paper [2] we formalized the problem of scheduling animation rendering jobs and demonstrated that the general problem is computationally intractable, as are severely restricted special cases. We presented a novel and efficient two-phase scheduling method and evaluated it both theoretically and via simulation using large and detailed traces collected in DreamWorks Animation's production environment. In this paper we describe the overall experience of the joint project and greatly expand our empirical evaluations of job scheduling strategies for improving scheduling performance. Our new results include a workload characterization of DreamWorks Animation rendering jobs. We furthermore present parameter sensitivity analyses based on simulations using randomly generated synthetic workloads. Whereas our previous theoretical results imply that worst-case performance can be far from optimal for certain workloads, our current empirical results demonstrate that our scheduling method achieves performance quite close to optimal for both real and synthetic workloads. We furthermore offer advice for tuning a parameter associated with our method. Finally, we report a surprising performance anomaly involving a workload parameter that our previous theoretical analysis identified as crucial to performance. Our results also shed light on performance tradeoffs surrounding task parallelization.

11 citations

Proceedings ArticleDOI
24 Apr 2010
TL;DR: Experiments show that the algorithm of this paper makes the CPU and GPU work together more harmoniously, achieving efficient rendering of large-area three-dimensional terrain.
Abstract: This paper presents a method for large-scale terrain rendering based on GPU programming. The total terrain data is partitioned evenly into many smaller pages, and real-time scheduling of the massive terrain data is realized with a viewpoint-based pre-loading method and page buffer pool management. Following the idea of LOD algorithms, each page is divided into sub-blocks for batch rendering, and the crack handling between sub-blocks of different levels is moved to the pre-processing stage, so that real-time work on the CPU is reduced. Experiments show that the algorithm makes the CPU and GPU work together more harmoniously and realizes efficient rendering of large-area three-dimensional terrain.
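Two of the ideas above (even partitioning into pages, and choosing a coarser level of detail for pages far from the viewpoint) can be sketched briefly. The function names, page size and level mapping here are hypothetical illustrations, not the paper's parameters.

```python
# Minimal sketch of viewpoint-driven terrain paging and LOD selection:
# terrain is cut into equal pages, and each page's detail level grows
# coarser with its distance from the viewpoint.
def page_of(x, y, page_size=256):
    """Map a world coordinate to its (column, row) page index."""
    return (x // page_size, y // page_size)

def lod_level(page, viewpoint, page_size=256, max_level=4):
    """Pick a detail level from the page-center-to-viewpoint distance."""
    px = page[0] * page_size + page_size / 2
    py = page[1] * page_size + page_size / 2
    dist = ((px - viewpoint[0]) ** 2 + (py - viewpoint[1]) ** 2) ** 0.5
    # One coarser level per page-width of distance, clamped to max_level.
    return min(max_level, int(dist // page_size))

page = page_of(700, 130)
level_near = lod_level((0, 0), (128, 128))  # viewpoint inside the page
level_far = lod_level((5, 0), (128, 128))   # five pages away
```

A pre-loading step would then fetch into the page buffer pool the pages whose indices fall near the viewpoint's page, before the camera reaches them.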

6 citations

01 Jul 1995
TL;DR: This paper presents rendering algorithms, developed for MPPs, for polygonal, spherical and volumetric data; the polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach.
Abstract: As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications, rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper describes recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques. Rendering algorithms, developed for massively parallel processors (MPPs), are presented for polygonal, sphere, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

5 citations