
Showing papers on "Graphics" published in 2014


Journal ArticleDOI
TL;DR: The circlize package is presented, which provides an implementation of circular layout generation in R as well as an enhancement of available software; the flexibility of the package rests on low-level graphics functions, so that self-defined high-level graphics can easily be implemented by users for specific purposes.
Abstract: Circular layout is an efficient way for the visualization of huge amounts of genomic information. Here we present the circlize package, which provides an implementation of circular layout generation in R as well as an enhancement of available software. The flexibility of this package is based on the usage of low-level graphics functions such that self-defined high-level graphics can be easily implemented by users for specific purposes. Together with the seamless connection between the powerful computational and visual environment in R, circlize gives users more convenience and freedom to design figures for better understanding genomic patterns behind multi-dimensional data. Availability and implementation: circlize is available at the Comprehensive R Archive Network (CRAN): http://cran.r-project.org/web/packages/circlize/

2,276 citations


Journal ArticleDOI
TL;DR: A new approach to solve the ‘molecular graphics problem’ is described, which shares the work between GPU and multiple CPU cores, generates high-quality results with perfectly round spheres, shadows and ambient lighting and requires only OpenGL 1.0 functionality.
Abstract: SUMMARY: Today's graphics processing units (GPUs) compose the scene from individual triangles. As about 320 triangles are needed to approximate a single sphere (an atom) in a convincing way, visualizing larger proteins with atomic details requires tens of millions of triangles, far too many for smooth interactive frame rates. We describe a new approach to solve this 'molecular graphics problem', which shares the work between GPU and multiple CPU cores, generates high-quality results with perfectly round spheres, shadows and ambient lighting and requires only OpenGL 1.0 functionality, without any pixel shader Z-buffer access (a feature which is missing in most mobile devices). AVAILABILITY AND IMPLEMENTATION: YASARA View, a molecular modeling program built around the visualization algorithm described here, is freely available (including commercial use) for Linux, MacOS, Windows and Android (Intel) from www.YASARA.org. CONTACT: elmar@yasara.org SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

1,026 citations
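
To make the rendering idea concrete, the sketch below ray-casts a single sphere per pixel in plain Python/NumPy: the intersection and the surface normal are computed analytically, which is why impostor-style approaches give perfectly round spheres without any triangle mesh. This is only the textbook principle; YASARA's CPU/GPU work sharing, shadows and ambient lighting are not modelled, and all constants are illustrative.

```python
# Per-pixel ray casting of one sphere: analytic intersection + exact normal,
# i.e. no triangle approximation at all. A toy sketch, not YASARA's algorithm.
import numpy as np

H = W = 200
center, radius = np.array([0.0, 0.0, 5.0]), 1.5           # sphere in camera space
L = np.array([-1.0, -1.0, -1.0]); L /= np.linalg.norm(L)   # direction towards the light

# One view ray per pixel through a simple pinhole camera at the origin, looking down +z.
ys, xs = np.mgrid[0:H, 0:W]
d = np.stack([(xs - W / 2) / W, (ys - H / 2) / H, np.ones((H, W))], axis=-1)
d /= np.linalg.norm(d, axis=-1, keepdims=True)

# Analytic ray-sphere intersection: solve |t*d - center|^2 = radius^2 per pixel.
b = d @ center
disc = b ** 2 - (center @ center - radius ** 2)
hit = disc > 0
t = b - np.sqrt(np.where(hit, disc, 0.0))                  # nearest intersection distance

normal = (t[..., None] * d - center) / radius              # exact per-pixel surface normal
shade = np.clip(normal @ L, 0.0, 1.0) * hit                # simple diffuse shading
print("pixels covered by the sphere:", int(hit.sum()))
```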


Book ChapterDOI
06 Sep 2014
TL;DR: This work describes the publicly available OpenDR framework, which makes it easy to express a forward graphics model, automatically obtain derivatives with respect to the model parameters and optimize over them, and demonstrates the power and simplicity of programming with OpenDR by using it to estimate human body shape from Kinect depth and RGB data.
Abstract: Inverse graphics attempts to take sensor data and infer 3D geometry, illumination, materials, and motions such that a graphics renderer could realistically reproduce the observed scene. Renderers, however, are designed to solve the forward process of image synthesis. To go in the other direction, we propose an approximate differentiable renderer (DR) that explicitly models the relationship between changes in model parameters and image observations. We describe a publicly available OpenDR framework that makes it easy to express a forward graphics model and then automatically obtain derivatives with respect to the model parameters and to optimize over them. Built on a new auto-differentiation package and OpenGL, OpenDR provides a local optimization method that can be incorporated into probabilistic programming frameworks. We demonstrate the power and simplicity of programming with OpenDR by using it to solve the problem of estimating human body shape from Kinect depth and RGB data.

530 citations
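
As a rough illustration of the inverse-graphics loop described above, the following sketch fits the parameters of a toy "renderer" (a soft-edged disc) to an observed image by local, derivative-based optimization. It does not use OpenDR's actual API; scipy's built-in finite-difference gradients stand in for the framework's automatic differentiation, and all names and parameters are illustrative.

```python
# Toy sketch of inverse graphics: recover model parameters by minimizing the
# difference between a rendered image and an observation.
import numpy as np
from scipy.optimize import minimize

H = W = 64
ys, xs = np.mgrid[0:H, 0:W]

def render(params):
    """Forward graphics model: a soft-edged disc with centre (cx, cy) and radius r."""
    cx, cy, r = params
    d = np.hypot(xs - cx, ys - cy)
    return 1.0 / (1.0 + np.exp(d - r))      # smooth edge keeps the image differentiable

observed = render([40.0, 25.0, 12.0])       # "sensor data" produced by unknown parameters

def loss(params):
    return 0.5 * np.sum((render(params) - observed) ** 2)

# Local, gradient-based optimization; derivatives come from finite differences here.
result = minimize(loss, x0=[32.0, 32.0, 8.0], method="L-BFGS-B")
print("recovered (cx, cy, r):", np.round(result.x, 2))
```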



BookDOI
01 May 2014
TL;DR: This book provides an overview of GIS functionality, covering the acquisition of geo-referenced data, data storage and retrieval, spatial data modelling and analysis, and graphics, images and visualisation, including computer graphics technology for display and interaction.
Abstract: Part 1: Introduction. 1. Origins and Applications. 2. Geographical Information Concepts and Spatial Models. 3. GIS Functionality: An Overview.
Part 2: Acquisition of Geo-referenced Data. 4. Coordinate Systems, Transformations and Map Projections. 5. Digitising, Editing and Structuring. 6. Primary Data Acquisition from Ground and Remote Surveys. 7. Data Quality and Data Standards.
Part 3: Data Storage and Retrieval. 8. Computer Data Storage. 9. Database Management Systems. 10. Spatial Data Access Methods for Points, Lines and Polygons.
Part 4: Spatial Data Modelling and Analysis. 11. Surface Modelling and Spatial Interpolation. 12. Optimal Solutions and Spatial Search. 13. Knowledge-Based Systems and Automated Reasoning.
Part 5: Graphics, Images and Visualisation. 14. Computer Graphics Technology for Display and Interaction. 15. Three Dimensional Visualisation. 16. Raster and Vector Interconversions. 17. Map Generalisation. 18. Automated Design of Annotated Maps.

225 citations


Journal ArticleDOI
TL;DR: The concept of position-based dynamics is introduced, dynamic simulation based on shape matching and data-driven upsampling approaches are discussed, and several applications for these methods are presented.
Abstract: The dynamic simulation of mechanical effects has a long history in computer graphics. The classical methods in this field discretize Newton's second law in a variety of Lagrangian or Eulerian ways, and formulate forces appropriate for each mechanical effect: joints for rigid bodies; stretching, shearing or bending for deformable bodies; and pressure or viscosity for fluids, to mention just a few. In recent years, the class of position-based methods has become popular in the graphics community. These kinds of methods are fast, stable and controllable, which makes them well suited for use in interactive environments. Position-based methods are not as accurate as force-based methods in general, but they provide visual plausibility. Therefore, the main application areas of these approaches are virtual reality, computer games and special effects in movies. This state-of-the-art report covers the large variety of position-based methods that were developed in the field of physically based simulation. We will introduce the concept of position-based dynamics, present dynamic simulation based on shape matching and discuss data-driven upsampling approaches. Furthermore, we will present several applications for these methods.

178 citations
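
For readers unfamiliar with the scheme the report surveys, here is a minimal position-based dynamics sketch in Python: positions are predicted from velocities and gravity, distance constraints are projected iteratively on the predicted positions, and velocities are recovered from the positional change. The chain setup, iteration counts and parameters are illustrative, not taken from any particular paper.

```python
# Minimal position-based dynamics (PBD): predict -> project constraints -> update.
# A chain of particles under gravity, connected by distance constraints.
import numpy as np

n, rest_len, dt, iters = 10, 0.1, 1.0 / 60.0, 10
x = np.column_stack([np.arange(n) * rest_len, np.zeros(n)])   # positions (x, y)
v = np.zeros_like(x)                                          # velocities
inv_mass = np.ones(n); inv_mass[0] = 0.0                      # first particle is pinned
gravity = np.array([0.0, -9.81])

for step in range(120):
    # 1) Predict positions from external forces (symplectic Euler).
    v += dt * gravity * (inv_mass[:, None] > 0)
    p = x + dt * v
    # 2) Iteratively project distance constraints on the predicted positions.
    for _ in range(iters):
        for i in range(n - 1):
            d = p[i + 1] - p[i]
            dist = np.linalg.norm(d)
            w = inv_mass[i] + inv_mass[i + 1]
            if dist < 1e-9 or w == 0.0:
                continue
            corr = (dist - rest_len) / (dist * w) * d
            p[i] += inv_mass[i] * corr
            p[i + 1] -= inv_mass[i + 1] * corr
    # 3) Derive velocities from the position change and accept the new positions.
    v = (p - x) / dt
    x = p

print("tip of the chain after 2 s:", np.round(x[-1], 3))
```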


Journal ArticleDOI
27 Jul 2014
TL;DR: The authors believe this work will open a new field of computer graphics if fabricated models can be actuated by the acoustic-potential field (APF), making levitated objects usable in graphic metaphors such as the pixels of raster graphics, the moving points of vector graphics, and animation.
Abstract: We propose a novel graphics system based on the expansion of 3D acoustic-manipulation technology. In conventional research on acoustic levitation, small objects are trapped in the acoustic beams of standing waves. We expand this method by changing the distribution of the acoustic-potential field (APF). Using this technique, we can generate graphics from levitated small objects. Our approach enables many forms of expression, such as expression through physical materials and a non-digital appearance. These kinds of expression are used in many applications, and we aim to combine them with digital controllability. In the current system, multiple particles are levitated together at 4.25-mm intervals. The spatial resolution of the position is 0.5 mm. Particles move at up to 72 cm/s. The allowable density of the material can be up to 7 g/cm³. For this study, we use three APF configurations: a 2D grid, high-speed movement, and combination with motion capture. These are used to realize a floating screen or mid-air raster graphics, mid-air vector graphics, and interaction with levitated objects. This paper reports the details of the acoustic-potential field generator, covering its design, control, performance evaluation, and exploration of the application space. To discuss the various noncontact manipulation technologies in a unified manner, we introduce a concept called "computational potential field" (CPF).

162 citations


Journal ArticleDOI
Alun Evans, Marco Romeo, Arash Bahrehmand, Javi Agenjo, Josep Blat
TL;DR: The first survey of the state of the art in real-time 3D graphics rendering in the browser is presented; it briefly summarises the approaches for remote rendering of 3D graphics before surveying complementary research on data compression methods and notable application fields.

142 citations


Proceedings ArticleDOI
15 Dec 2014
TL;DR: This study compiled 12 graph applications and collected performance and utilization statistics of the core GPU components while running the applications on both a cycle-accurate simulator and a real GPU card, presenting detailed application execution characteristics on GPUs.
Abstract: Large graph processing is now a critical component of many data analytics. Graph processing is used from social networking web sites that provide context-aware services from user connectivity data to medical informatics that diagnoses a disease from a given set of symptoms. Graph processing has several inherently parallel computation steps interspersed with synchronization needs. Graphics processing units (GPUs) are being proposed as a power-efficient choice for exploiting the inherent parallelism. There have been several efforts to efficiently map graph applications to GPUs. However, there have not been many characterization studies that provide an in-depth understanding of the interaction between the GPGPU hardware components and the graph applications that are mapped to execute on GPUs. In this study, we compiled 12 graph applications and collected the performance and utilization statistics of the core components of the GPU while running the applications on both a cycle-accurate simulator and a real GPU card. We present detailed application execution characteristics on GPUs. Then, we discuss and suggest several approaches to optimize GPU hardware for enhancing graph application performance.

88 citations


Journal ArticleDOI
TL;DR: The RNetLogo package delivers an interface to embed the agent-based modeling platform NetLogo into the R environment with headless (no graphical user interface) or interactive GUI mode, which enables the modeler to design simulation experiments, store simulation results, and analyze simulation output in a more systematic way.
Abstract: The RNetLogo package delivers an interface to embed the agent-based modeling platform NetLogo into the R environment with headless (no graphical user interface) or interactive GUI mode. It provides functions to load models, execute commands, push values, and to get values from NetLogo reporters. Such a seamless integration of a widely used agent-based modeling platform with a well-known statistical computing and graphics environment opens up various possibilities. For example, it enables the modeler to design simulation experiments, store simulation results, and analyze simulation output in a more systematic way. It can therefore help close the gaps in agent-based modeling regarding standards of description and analysis. After a short overview of the agent-based modeling approach and the software used here, the paper delivers a step-by-step introduction to the usage of the RNetLogo package by examples.

85 citations


DOI
14 Jan 2014
TL;DR: In this article, the authors briefly review literature addressing visual inspection of graphed single-case data and explore graphics perception, considering strategies for enhancing accuracy in visual displays.
Abstract: In this chapter we briefly review literature addressing visual inspection of graphed single-case data. In doing so, we explore graphics perception, considering strategies for enhancing accuracy in visual displays. We review the emergence of applied behavior analysis, present interpretive methods for visually inspected data in applied behavior analysis, review areas of concern in empirically supporting visual analysis and in relating visual analysis to scientific methods of inference, and discuss the importance of integrating visual analysis with statistical analysis. Our journey explores interactions of stimulus properties with human information processing. Within this context, we identify strategies for enhancing accuracy in visual display.

Journal ArticleDOI
27 Jul 2014
TL;DR: New shading-language abstractions are designed that simplify shader development for this system, along with adaptive techniques that use these mechanisms to reduce the number of instructions performed during shading by more than a factor of three while maintaining high image quality.
Abstract: Due to complex shaders and high-resolution displays (particularly on mobile graphics platforms), fragment shading often dominates the cost of rendering in games. To improve the efficiency of shading on GPUs, we extend the graphics pipeline to natively support techniques that adaptively sample components of the shading function more sparsely than per-pixel rates. We perform an extensive study of the challenges of integrating adaptive, multi-rate shading into the graphics pipeline, and evaluate two- and three-rate implementations that we believe are practical evolutions of modern GPU designs. We design new shading language abstractions that simplify development of shaders for this system, and design adaptive techniques that use these mechanisms to reduce the number of instructions performed during shading by more than a factor of three while maintaining high image quality.
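
The NumPy sketch below illustrates only the sampling-rate split at the heart of multi-rate shading: an "expensive" shading term is evaluated once per 2x2 pixel block and upsampled, while a "cheap" term remains per-pixel. The paper's actual contribution extends the hardware pipeline and the shading language; the terms and image here are invented purely for illustration.

```python
# Conceptual multi-rate shading: evaluate a costly term at quarter rate, a cheap
# term at full rate, and combine. Not the paper's pipeline or language extensions.
import numpy as np

H = W = 256
ys, xs = np.mgrid[0:H, 0:W] / H

def cheap_term(u, v):       # e.g. a simple diffuse/texture term, evaluated per pixel
    return 0.5 + 0.5 * np.cos(20 * u) * np.cos(20 * v)

def expensive_term(u, v):   # e.g. a costly specular/ambient term, evaluated per block
    return np.exp(-((u - 0.5) ** 2 + (v - 0.5) ** 2) / 0.05)

full = cheap_term(xs, ys) * expensive_term(xs, ys)        # reference: everything per pixel

coarse = expensive_term(xs[::2, ::2], ys[::2, ::2])       # quarter-rate evaluation
upsampled = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
multi_rate = cheap_term(xs, ys) * upsampled               # combined at per-pixel rate

print("expensive-term evaluations saved: %.0f%%" % (100 * (1 - coarse.size / xs.size)))
print("mean absolute image error: %.4f" % np.abs(full - multi_rate).mean())
```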

Patent
Frederick Gottesman, David Clune, James Andrews, Gevka Igor, Satwick Shukla
02 Apr 2014
TL;DR: In this article, a method is presented for accessing an electronic image comprising a surface area, dividing the electronic image into a plurality of surfaces, determining that one or more of the surfaces comprise a type of graphics, and determining the percentage of the image's surface area occupied by those surfaces.
Abstract: In one embodiment, a method includes accessing an electronic image comprising a surface area and dividing the electronic image into a plurality of surfaces. The method further includes determining that one or more of the surfaces comprise a type of graphics, and determining a percentage of the surface area of the image that is occupied by the one or more surfaces determined to comprise the type of graphics.

Journal ArticleDOI
27 Jul 2014
TL;DR: An interactive framework is proposed that allows a user to rapidly explore and visualize a large image collection using the medium of average images, summarizing large amounts of visual data by weighted average(s) of the collection, with the weights reflecting user-indicated importance.
Abstract: This paper proposes an interactive framework that allows a user to rapidly explore and visualize a large image collection using the medium of average images. Average images have been gaining popularity as means of artistic expression and data visualization, but the creation of compelling examples is a surprisingly laborious and manual process. Our interactive, real-time system provides a way to summarize large amounts of visual data by weighted average(s) of an image collection, with the weights reflecting user-indicated importance. The aim is to capture not just the mean of the distribution, but a set of modes discovered via interactive exploration. We pose this exploration in terms of a user interactively "editing" the average image using various types of strokes, brushes and warps, similar to a normal image editor, with each user interaction providing a new constraint to update the average. New weighted averages can be spawned and edited either individually or jointly. Together, these tools allow the user to simultaneously perform two fundamental operations on visual data: user-guided clustering and user-guided alignment, within the same framework. We show that our system is useful for various computer vision and graphics applications.
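
The core operation the system builds on is easy to state in code: a weighted average over an image collection, with the weights standing in for user-indicated importance. The sketch below shows just that operation on random stand-in data; the interactive editing, clustering and alignment described in the paper are beyond its scope.

```python
# Weighted average of an image collection; weights model user-indicated importance.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((50, 64, 64, 3))      # stand-in collection of 50 RGB images
weights = rng.random(50)                  # per-image importance set by the user

weighted_avg = np.tensordot(weights, images, axes=1) / weights.sum()
print(weighted_avg.shape)                 # (64, 64, 3): one summarizing average image
```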

Proceedings ArticleDOI
22 Oct 2014
TL;DR: This paper shows that machine learning techniques can build accurate predictive models for GPU acceleration, and applies supervised learning algorithms to infer predictive models, based on dynamic profile data collected via instrumented runs on general purpose processors.
Abstract: Graphics processing units (GPUs) can deliver considerable performance gains over general purpose processors. However, GPU performance improvements vary considerably across applications. Porting applications to GPUs by rewriting code with GPU-specific languages requires significant effort. In consequence, it is desirable to predict which applications would benefit most before porting them to the GPU. This paper shows that machine learning techniques can build accurate predictive models for GPU acceleration. This study presents an approach which applies supervised learning algorithms to infer predictive models, based on dynamic profile data collected via instrumented runs on general purpose processors. For a set of 18 parallel benchmarks, the results show that a small set of easily-obtainable features can predict the magnitude of GPU speedups on two different high-end GPUs, with accuracies varying between 77% and 90%, depending on the prediction mechanism and scenario. For already-ported applications, similar models can predict the best device to run an application with an effective accuracy of 91%.
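
A hedged sketch of the general workflow the abstract describes: collect a few profile-derived features per application and train a supervised model to predict GPU speedup. The feature names, the synthetic data and the choice of a random forest are assumptions for illustration only; the paper's actual features, benchmarks and learners may differ.

```python
# Supervised prediction of GPU speedup from CPU profile features (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# One row per benchmark: [parallel fraction, arithmetic intensity, branch divergence,
# memory-coalescing estimate] gathered from instrumented CPU runs (hypothetical set).
X = rng.random((18, 4))
speedup = 1 + 30 * X[:, 0] * X[:, 1] + rng.normal(0, 1, 18)   # synthetic ground truth

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("cross-validated R^2 per fold:", cross_val_score(model, X, speedup, cv=3).round(2))
model.fit(X, speedup)
print("predicted speedup for a new app:", model.predict([[0.9, 0.7, 0.1, 0.8]]).round(1))
```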

Proceedings ArticleDOI
26 Apr 2014
TL;DR: The system combines projected graphics on an artificial climbing wall and body tracking using computer vision technology to accelerate motor skill learning or to make monotonous parts of the training fun by adding relevant goals and encouraging social collaboration.
Abstract: This paper describes our efforts in developing a novel augmented climbing wall. Our system combines graphics projected onto an artificial climbing wall with body tracking based on computer vision technology. The system is intended to accelerate motor skill learning or to make monotonous parts of training fun by adding relevant goals and encouraging social collaboration. We describe six initial prototypes and the feedback obtained from testing them with intermediate and experienced climbers.

Journal ArticleDOI
TL;DR: An exploration and a design space are presented that characterize the usage and placement of word-scale visualizations within text documents, identifying six important variables that control the placement of the graphics and the level of disruption of the source text.
Abstract: We present an exploration and a design space that characterize the usage and placement of word-scale visualizations within text documents. Word-scale visualizations are a more general version of sparklines--small, word-sized data graphics that allow meta-information to be visually presented in-line with document text. In accordance with Edward Tufte's definition, sparklines are traditionally placed directly before or after words in the text. We describe alternative placements that permit a wider range of word-scale graphics and more flexible integration with text layouts. These alternative placements include positioning visualizations between lines, within additional vertical and horizontal space in the document, and as interactive overlays on top of the text. Each strategy changes the dimensions of the space available to display the visualizations, as well as the degree to which the text must be adjusted or reflowed to accommodate them. We provide an illustrated design space of placement options for word-scale visualizations and identify six important variables that control the placement of the graphics and the level of disruption of the source text. We also contribute a quantitative analysis that highlights the effect of different placements on readability and text disruption. Finally, we use this analysis to propose guidelines to support the design and placement of word-scale visualizations.

Journal ArticleDOI
TL;DR: This paper extends a user-transparent parallel programming model for MMCA to allow the execution of compute-intensive operations on the GPUs present in the cluster, and presents a new optimization approach, called adaptive tiling, to implement a highly efficient, yet flexible, library-based convolution operation for modern GPUs.

Book
12 Mar 2014
TL;DR: This book introduces the basic ideas and main tasks of photogrammetry, from image sources and image orientation through aerial triangulation to the creation of DTMs and ortho images, with included software, worked examples and a step-by-step programme description.
Abstract: Introduction - Basic ideas and main task of photogrammetry.
Image sources - Analogue and digital cameras; short history of photogrammetric evaluation methods.
Geometric principles 1 - Flying height, focal length.
Geometric principles 2 - Image orientation; some definitions; length and angle units.
Included software and data - Hardware requirements, operating system; image material; overview of the software; installation; additional programmes, copyright, data; general remarks.
Scanning of photos - Scanner types; geometric resolution; radiometric resolution; some practical advice; import of the scanned images.
Example 1: A single model - Project definition; model definition; stereoscopic viewing; measurement of object coordinates; creation of DTMs via image matching; ortho images.
Example 2: Aerial triangulation - Aerial triangulation measurement; block adjustment with BLUH; mosaics of DTMs and ortho images.
Example 3: Some special cases - Scanning aerial photos with an A4 scanner; interior orientation without camera parameters; images from a digital camera; an example of close-range photogrammetry.
A view into the future - Photogrammetry in 2020.
Programme description - Some definitions; basic functions; aims and limits of the programme; operating the programme; buttons in the graphics windows; file handling; pre-programmes; aerial triangulation measurement; aerial triangulation with BLUH; processing; display.
Appendix - Codes; GCP positions for tutorial 2.

Journal ArticleDOI
TL;DR: This paper presents an interactive approach for shape co-segmentation via label propagation, which is able to produce error-free results and is very effective at handling out-of-sample data.

Journal ArticleDOI
TL;DR: Using a grid-based method to search for the critical points in the electron density, it is shown how to accelerate such a method with graphics processing units (GPUs), and that one GPU designed for video games can be used without any problem for the application.
Abstract: Using a grid-based method to search for the critical points in the electron density, we show how to accelerate such a method with graphics processing units (GPUs). When the GPU implementation is contrasted with that used on central processing units (CPUs), we found a large difference between the times elapsed by the two implementations: the smallest time is observed when GPUs are used. We tested two GPUs, one designed for video games and the other used for high-performance computing (HPC). On the CPU side, two processors were tested, one used in common personal computers and the other used for HPC, both of the latest generation. Although our parallel algorithm scales quite well on CPUs, the same implementation on GPUs runs around 10³ times faster than 16 CPUs, with any of the tested GPUs and CPUs. We have found that one GPU designed for video games can be used without any problem for our application, delivering a remarkable performance; in fact, this GPU competes against one HPC GPU, in particular when single precision is used. © 2014 Wiley Periodicals, Inc.
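
To illustrate what a grid-based critical-point search involves, the NumPy sketch below evaluates a stand-in "density" (two Gaussians) on a 3-D grid, computes numerical gradients, and keeps grid points where the gradient magnitude is small and locally minimal. It is a serial CPU toy, not the paper's GPU implementation or its real electron-density evaluation; the field and tolerances are invented.

```python
# Grid-based critical-point search on a toy scalar field: flag grid points where
# |grad rho| is small and a local minimum within its 6-neighbourhood.
import numpy as np

ax = np.linspace(-3, 3, 121)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
# Stand-in "density": two Gaussian maxima with a bond-like saddle point between them.
rho = np.exp(-((X - 1) ** 2 + Y ** 2 + Z ** 2)) + np.exp(-((X + 1) ** 2 + Y ** 2 + Z ** 2))

gx, gy, gz = np.gradient(rho, ax, ax, ax)
gmag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)

tol = 0.05                                  # loose tolerance on the gradient magnitude
interior = gmag[1:-1, 1:-1, 1:-1]
neighbours = np.stack([gmag[2:, 1:-1, 1:-1], gmag[:-2, 1:-1, 1:-1],
                       gmag[1:-1, 2:, 1:-1], gmag[1:-1, :-2, 1:-1],
                       gmag[1:-1, 1:-1, 2:], gmag[1:-1, 1:-1, :-2]])
mask = (interior < tol) & (interior <= neighbours.min(axis=0))

idx = np.argwhere(mask) + 1                 # shift back to full-grid indices
print("candidate critical points (x, y, z):")
print(np.column_stack([ax[idx[:, 0]], ax[idx[:, 1]], ax[idx[:, 2]]]).round(2))
```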

Proceedings ArticleDOI
08 Oct 2014
TL;DR: This paper proposes a measurement-based and statistical approach for the probabilistic characterisation of the worst-case execution time of parallel applications on GPUs.
Abstract: The massive computational power of graphics processor units (GPUs), combined with novel programming models such as CUDA, makes them attractive platforms for many parallel applications. This includes embedded and real-time applications, which, however, also have temporal constraints: computations must not only be correct but also completed on time. This poses a challenge because the characterisation of the worst-case temporal behaviour of parallel applications on GPUs is still an open problem. To address this situation, this paper proposes a measurement-based and statistical approach for the probabilistic characterisation of the worst-case execution time of such an application.
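
Measurement-based probabilistic timing analysis is often illustrated with extreme-value statistics: fit a Gumbel model to block maxima of observed execution times and read off a bound at a small exceedance probability. The abstract does not state the paper's exact statistical machinery, so the sketch below only conveys this generic flavour, on synthetic timings.

```python
# Generic measurement-based pWCET illustration: Gumbel fit to block maxima of
# observed execution times (synthetic data, textbook method, not the paper's).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
measured_ms = 10 + rng.gamma(shape=2.0, scale=0.4, size=5000)   # stand-in GPU timings

# Block maxima: keep the maximum of every group of 50 runs, then fit a Gumbel model.
block_max = measured_ms.reshape(-1, 50).max(axis=1)
loc, scale = stats.gumbel_r.fit(block_max)

# pWCET bound at an exceedance probability of 1e-6 per block of runs.
pwcet = stats.gumbel_r.ppf(1 - 1e-6, loc=loc, scale=scale)
print("observed max: %.2f ms, pWCET(1e-6): %.2f ms" % (measured_ms.max(), pwcet))
```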

Journal ArticleDOI
TL;DR: It is found that the fixed-step methods can be faster, while the adaptive-step methods are better in terms of accuracy and robustness.

Patent
Akshay Gadre, Kerri Breslin
28 Aug 2014
TL;DR: In this article, a system comprising a computer-readable storage medium storing at least one program and a computer-implemented method for digital inventories are described, in which a request from a user device at a physical store location linked to an online marketplace is used to determine the availability of a target item at that location.
Abstract: Disclosed are a system comprising a computer-readable storage medium storing at least one program, and a computer-implemented method for digital inventories. An application interface module receives a request message from a user device at a physical store location linked to an online marketplace. The request message indicates a request to determine availability of a target item at the physical store location. The user device is linked to a user. In response to the request message, a database management module accesses inventory data of the online marketplace. An inventory engine determines whether the target item is available at the physical store location. Based on a determination that the target item is not available at the target store, a graphics processing module generates a digital representation of the user and the target item for display within a user interface rendered on the user device.

Journal ArticleDOI
TL;DR: An approach for rapid hologram generation for realistic three-dimensional (3-D) image reconstruction based on the angular tiling concept is proposed, using a new graphics rendering approach integrated with a previously developed layer-based method for hologram calculation.
Abstract: An approach for rapid hologram generation for realistic three-dimensional (3-D) image reconstruction based on the angular tiling concept is proposed, using a new graphics rendering approach integrated with a previously developed layer-based method for hologram calculation. A 3-D object is simplified as layered cross-sectional images perpendicular to a chosen viewing direction, and our graphics rendering approach allows the incorporation of clear depth cues, occlusion, and shading in the generated holograms for angular tiling. The combination of these techniques together with parallel computing reduces the computation time of a single-view hologram for a 3-D image of extended graphics array (XGA) resolution to 176 ms using a single consumer graphics processing unit card.
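
The layer-based part of such a calculation can be sketched in a few lines of NumPy: each depth layer of the object is treated as a complex field, numerically propagated to the hologram plane with the angular-spectrum method, and the contributions are summed. The paper's graphics rendering, occlusion and shading cues, angular tiling and GPU parallelization are not reproduced; all parameters below are illustrative.

```python
# Layer-based hologram sketch: angular-spectrum propagation of a few depth layers.
import numpy as np

N, pitch, wavelength = 512, 8e-6, 532e-9          # samples, pixel pitch (m), wavelength (m)
fx = np.fft.fftfreq(N, d=pitch)
FX, FY = np.meshgrid(fx, fx)
kz = 2 * np.pi * np.sqrt(np.maximum(1 / wavelength ** 2 - FX ** 2 - FY ** 2, 0.0))

def propagate(field, z):
    """Angular-spectrum propagation of a complex field over distance z."""
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

rng = np.random.default_rng(3)
hologram = np.zeros((N, N), dtype=complex)
for z in (0.10, 0.11, 0.12):                      # three depth layers (m)
    layer = np.zeros((N, N))
    layer[rng.integers(0, N, 200), rng.integers(0, N, 200)] = 1.0   # stand-in layer image
    random_phase = np.exp(2j * np.pi * rng.random((N, N)))          # diffuse the layer
    hologram += propagate(layer * random_phase, z)

phase_hologram = np.angle(hologram)               # e.g. for a phase-only SLM
print("phase hologram:", phase_hologram.shape,
      "values in [%.2f, %.2f]" % (phase_hologram.min(), phase_hologram.max()))
```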

Proceedings ArticleDOI
TL;DR: A perspective 3D symbology for a head-tracked HMD is proposed that shows as many "visual-conformal" elements as possible, i.e. display elements that match the outside world.
Abstract: Helicopter guidance in situations where natural vision is reduced is still a challenging task. Besides newly available sensors, which are able to "see" through darkness, fog and dust, display technology remains one of the key issues of pilot assistance systems. As long as we have pilots within aircraft cockpits, we have to keep them informed about the outside situation. "Situational awareness" of humans is mainly powered by their visual channel. Therefore, display systems which are able to cross-fade seamlessly from natural vision to artificial computer vision and vice versa are of greatest interest within this context. Helmet-mounted displays (HMDs) have this property when they apply a head tracker to measure the pilot's head orientation relative to the aircraft reference frame. Together with the aircraft's position and orientation relative to the world's reference frame, the on-board graphics computer can generate images which are perfectly aligned with the outside world. We call image elements which match the outside world "visual-conformal". Published display formats for helicopter guidance in degraded visual environments mostly apply 2D symbologies, which stay far behind what is possible. We propose a perspective 3D symbology for a head-tracked HMD which shows as many visual-conformal elements as possible. We implemented and tested our proposal within our fixed-base cockpit simulator as well as in our flying helicopter simulator (FHS). Recently conducted simulation trials with experienced helicopter pilots give some first evaluation results for our proposal.
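
The geometric idea behind "visual-conformal" symbology can be shown in a few lines: chain the world-to-aircraft and aircraft-to-head transforms reported by navigation and head tracking, then project a world-fixed point with a simple pinhole model so the symbol is drawn where the real object appears. The poses, axis convention and focal length below are invented for illustration and are not the paper's display model.

```python
# Why head tracking enables visual-conformal symbols: world -> aircraft -> head
# transform chain followed by a pinhole projection (all numbers illustrative).
import numpy as np

def rot_z(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])

# Poses: aircraft in the world frame, pilot's head in the aircraft frame (head tracker).
R_wa, t_wa = rot_z(30), np.array([1000.0, 500.0, -100.0])   # aircraft attitude & position
R_ah, t_ah = rot_z(-10), np.array([2.0, 0.0, 1.2])          # head pose from the tracker

landing_pad_world = np.array([1400.0, 800.0, -120.0])       # world-fixed point of interest

# Transform into the head (display) frame: p_head = R_ah^T (R_wa^T (p_world - t_wa) - t_ah)
p_aircraft = R_wa.T @ (landing_pad_world - t_wa)
p_head = R_ah.T @ (p_aircraft - t_ah)

# Pinhole projection onto the HMD image plane (x forward, y right, z down; f in pixels).
f = 800.0
u, v = f * p_head[1] / p_head[0], f * p_head[2] / p_head[0]
print("draw the landing-pad symbol at HMD pixel offset (%.1f, %.1f)" % (u, v))
```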

Proceedings ArticleDOI
03 Nov 2014
TL;DR: This paper presents LiveRender, an open-source gaming system that remedies the problem of poor scalability in cloud gaming systems by implementing a suite of bandwidth optimization techniques including intra-frame compression, inter-frame compression, and caching, establishing what is called compressed graphics streaming.
Abstract: In cloud gaming systems, the game program runs at servers in the cloud, while clients access game services by sending input events to the servers and receiving game scenes via video streaming. In this paradigm, servers are responsible for all performance-intensive operations, and thus suffer from poor scalability. An alternative paradigm is called graphics streaming, in which graphics commands and data are offloaded to the clients for local rendering, thereby mitigating the server's burden and allowing more concurrent game sessions. Unfortunately, this approach is bandwidth consuming, due to large amounts of graphics commands and geometry data. In this paper, we present LiveRender, an open-source gaming system that remedies the problem by implementing a suite of bandwidth optimization techniques including intra-frame compression, inter-frame compression, and caching, establishing what we call compressed graphics streaming. Experimental results show that the new approach is able to reduce bandwidth consumption by 52-73% compared to raw graphics streaming, with no perceptible difference in video quality and reduced response delay. Compared with the video streaming approach, LiveRender achieves a traffic reduction of 40-90% with even improved video quality and substantially smaller response delay, while enabling higher concurrency at the server.
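
As a toy illustration of why inter-frame compression pays off for graphics streaming, the sketch below XORs two consecutive (synthetic) command/geometry buffers that differ in about 1% of their bytes and compresses the delta with zlib. LiveRender's actual codecs, intra-frame compression and caching are not modelled here; the buffer sizes and change rate are made up.

```python
# Inter-frame delta compression idea: successive command buffers are similar,
# so a compressed XOR-delta is far smaller than a raw frame.
import zlib
import numpy as np

rng = np.random.default_rng(4)
prev = rng.integers(0, 256, 200_000, dtype=np.uint8)        # frame N's serialized commands
curr = prev.copy()
idx = rng.integers(0, prev.size, 2_000)
curr[idx] = rng.integers(0, 256, idx.size, dtype=np.uint8)  # frame N+1: ~1% of bytes changed

raw = curr.tobytes()
delta = np.bitwise_xor(curr, prev).tobytes()                # mostly zeros -> compresses well

print("raw frame:        %6d bytes" % len(raw))
print("compressed raw:   %6d bytes" % len(zlib.compress(raw, 9)))
print("compressed delta: %6d bytes" % len(zlib.compress(delta, 9)))
```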

Journal ArticleDOI
TL;DR: This work introduces a novel efficient technique for automatically transforming a generic renderable 3D scene into a simple graph representation named ExploreMaps, where nodes are nicely placed points of view, called probes, and arcs are smooth paths between neighboring probes.
Abstract: We introduce a novel efficient technique for automatically transforming a generic renderable 3D scene into a simple graph representation named ExploreMaps, where nodes are nicely placed points of view, called probes, and arcs are smooth paths between neighboring probes. Each probe is associated with a panoramic image enriched with preferred viewing orientations, and each path with a panoramic video. Our GPU-accelerated unattended construction pipeline distributes probes so as to guarantee coverage of the scene while accounting for perceptual criteria, before finding smooth, good-looking paths between neighboring probes. Images and videos are precomputed at construction time with off-line photorealistic rendering engines, providing a convincing 3D visualization beyond the limits of current real-time graphics techniques. At run-time, the graph is exploited both for creating automatic scene indexes and movie previews of complex scenes and for supporting interactive exploration through a low-DOF assisted navigation interface and the visual indexing of the scene provided by the selected viewpoints. Due to negligible CPU overhead and very limited use of GPU functionality, real-time performance is achieved on emerging web-based environments based on WebGL, even on low-powered mobile devices.

Patent
24 Jul 2014
TL;DR: In this article, a method is disclosed that registers a graphics buffer with a kernel running on a first processor, stores the registered buffer in memory initially without drawing it to a display, and, in response to a trigger, passes the registered graphics buffer directly to a kernel display driver to draw the buffer to the display.
Abstract: One disclosed method includes registering a graphics buffer with a kernel running on a first processor, storing the registered graphics buffer in memory initially without drawing the graphics buffer to a display, and passing the registered graphics buffer to a kernel display driver directly to draw the graphics buffer to the display, in response to a trigger. The method may further include informing a second processor of the registered graphics buffer and receiving the trigger by the kernel as a message from the second processor. The first processor may receive the trigger as a wake command from the second processor while the first processor is in sleep mode. A partial resume of the kernel is then performed while preventing activation of user space on the primary processor, and the graphics buffer is drawn on the display without using an operating system graphics pipeline of the user space.

Patent
Haitao Guo, Kenneth I. Greenebaum, Guy Cote, Singer David W, Alexandros Tourapis
30 Sep 2014
TL;DR: In this article, a method and system are presented for adaptively mixing video components with graphics/UI components, where the video components and graphics or UI components may be of different types, e.g., different dynamic ranges (such as HDR, SDR) and/or color gamuts (such as WCG).
Abstract: A method and system for adaptively mixing video components with graphics/UI components, where the video components and graphics/UI components may be of different types, e.g., different dynamic ranges (such as HDR, SDR) and/or color gamut (such as WCG). The mixing may result in a frame optimized for a display device's color space, ambient conditions, viewing distance and angle, etc., while accounting for characteristics of the received data. The methods include receiving video and graphics/UI elements, converting the video to HDR and/or WCG, performing statistical analysis of received data and any additional applicable rendering information, and assembling a video frame with the received components based on the statistical analysis. The assembled video frame may be matched to a color space and displayed. The video data and graphics/UI data may have or be adjusted to have the same white point and/or primaries.