
Showing papers on "Software rendering published in 2020"


Journal ArticleDOI
TL;DR: Two different but popular software options for SfM modeling used by archaeologists are compared by creating 3D models of the Tyler house and Eyrie house, both mid-19th-century ruins, to evaluate the user experience for non-specialists.

26 citations


Posted Content
TL;DR: In this paper, rendering software is used to generate realistic, synthetic training data to train a state-of-the-art deep neural network for the detection of plastic micro- and nanoparticles.
Abstract: Nanoparticles occur in various environments as a consequence of man-made processes, which raises concerns about their impact on the environment and human health. To allow for proper risk assessment, a precise and statistically relevant analysis of particle characteristics (such as size, shape and composition) is required, which would greatly benefit from automated image analysis procedures. While deep learning shows impressive results in object detection tasks, its applicability is limited by the amount of representative, experimentally collected and manually annotated training data. Here, we present an elegant, flexible and versatile method to bypass this costly and tedious data acquisition process. We show that using rendering software makes it possible to generate realistic, synthetic training data to train a state-of-the-art deep neural network. Using this approach, we derive a segmentation accuracy that is comparable to manual annotations for toxicologically relevant metal-oxide nanoparticle ensembles, which we chose as examples. Our study paves the way towards the use of deep learning for automated, high-throughput particle detection in a variety of imaging techniques such as microscopies and spectroscopies, for a wide variety of studies and applications, including the detection of plastic micro- and nanoparticles.
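The abstract does not name a specific renderer or scene setup, but the general workflow can be sketched as follows: script a renderer to place randomized particle stand-ins, render the image, and keep the known object positions as free ground-truth annotations. The sketch below uses Blender's Python API (bpy); all counts, sizes and paths are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: generate synthetic training images with Blender's Python API (bpy).
# Counts, sizes, and output paths are illustrative assumptions, not the paper's setup.
import random
import bpy

def render_synthetic_sample(index, n_particles=30, out_dir="/tmp/synthetic"):
    # Remove mesh objects from the previous sample, keeping camera and lights.
    for obj in list(bpy.data.objects):
        if obj.type == 'MESH':
            bpy.data.objects.remove(obj, do_unlink=True)

    # Place randomly sized spheres as stand-ins for nanoparticles; the known positions
    # and radii double as ground-truth annotations, so no manual labelling is needed.
    for _ in range(n_particles):
        bpy.ops.mesh.primitive_uv_sphere_add(
            radius=random.uniform(0.02, 0.08),
            location=(random.uniform(-1, 1), random.uniform(-1, 1), 0.0),
        )

    bpy.context.scene.render.filepath = f"{out_dir}/sample_{index:04d}.png"
    bpy.ops.render.render(write_still=True)

for i in range(100):
    render_synthetic_sample(i)
```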

22 citations


Journal ArticleDOI
01 Dec 2020
TL;DR: A novel two-branch network with different initializations in the first layer is designed to capture diverse features, and a gradient-based method is introduced to construct harder negative samples and conduct enhanced training, further improving the generalization of CNN-based detectors.
Abstract: Advanced computer graphics rendering software tools can now produce computer-generated (CG) images with an increasingly high level of photorealism. This makes it more and more difficult to distinguish natural images (NIs) from CG images with the naked eye. For this forensic problem, some CNN (convolutional neural network)-based methods have recently been proposed. However, researchers rarely pay attention to the blind detection (or generalization) problem, i.e., when no training sample is available from the "unknown" computer graphics rendering tools that we may encounter during the testing phase. We observe that detector performance decreases, sometimes drastically, in this challenging but realistic setting. To study this challenging problem, we first collect four high-quality CG image datasets, which will be appropriately released to facilitate the relevant research. Then, we design a novel two-branch network with different initializations in the first layer to capture diverse features. Moreover, we introduce a gradient-based method to construct harder negative samples and conduct enhanced training to further improve the generalization of CNN-based detectors. Experimental results demonstrate the effectiveness of our method in improving the performance for the challenging task of "blind" detection of CG images.
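As a rough illustration of the two-branch idea (not the authors' released code), the sketch below builds two otherwise identical convolutional branches whose first layers are initialized differently, then fuses their features for the NI-vs-CG decision; the "high-pass-like" initialization and all layer sizes are assumptions.

```python
# Illustrative sketch: a two-branch CNN whose branches differ only in first-layer
# initialization, so they tend to capture complementary features.
import torch
import torch.nn as nn

class TwoBranchDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch_a = self._make_branch(init="xavier")
        self.branch_b = self._make_branch(init="highpass")
        self.classifier = nn.Linear(2 * 64, 2)  # natural vs computer-generated

    def _make_branch(self, init):
        first = nn.Conv2d(3, 32, kernel_size=5, padding=2)
        if init == "xavier":
            nn.init.xavier_uniform_(first.weight)
        else:
            # Assumption: a zero-mean, high-pass-like initialization to emphasize
            # residual/noise cues rather than image content.
            nn.init.normal_(first.weight, std=0.2)
            with torch.no_grad():
                first.weight -= first.weight.mean(dim=(2, 3), keepdim=True)
        return nn.Sequential(
            first, nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.classifier(torch.cat([self.branch_a(x), self.branch_b(x)], dim=1))
```

The gradient-based construction of harder negative samples mentioned in the abstract would, in a similar sketch, perturb CG training images along the loss gradient before the enhanced-training pass, but the exact procedure is specific to the paper.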

10 citations


Posted Content
TL;DR: The results show that the effect of the illumination model is important, comparable in significance to the network architecture, and that both light probes capturing natural environmental light and modelled lighting environments can give good results.
Abstract: The use of computer-generated images to train Deep Neural Networks is a viable alternative to real images when the latter are scarce or expensive. In this paper, we study how the illumination model used by the rendering software affects the quality of the generated images. We created eight training sets, each one with a different illumination model, and tested them on three different network architectures: ResNet, U-Net, and a combined architecture developed by us. The test set consisted of photos of 3D-printed objects produced from the same CAD models used to generate the training set. The effect of the other parameters of the rendering process, such as textures and camera position, was randomized. Our results show that the effect of the illumination model is important, comparable in significance to the network architecture. We also show that both light probes capturing natural environmental light and modelled lighting environments can give good results. In the case of light probes, we identified two significant factors affecting performance: the similarity between the light probe and the test environment, and the light probe's resolution. Regarding modelled lighting environments, similarity with the test environment was again identified as a significant factor.
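The illumination models compared in the paper fall into two broad groups, light probes and modelled lighting. A minimal sketch of how one might switch between the two when scripting a renderer is given below, using Blender's Python API; the HDRI path and lamp parameters are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: two ways to light a synthetic training scene in Blender.
import bpy

def use_light_probe(hdri_path):
    # Environment lighting from a captured HDRI light probe.
    world = bpy.context.scene.world
    world.use_nodes = True
    nodes, links = world.node_tree.nodes, world.node_tree.links
    env = nodes.new("ShaderNodeTexEnvironment")
    env.image = bpy.data.images.load(hdri_path)
    links.new(env.outputs["Color"], nodes["Background"].inputs["Color"])

def use_modelled_lighting():
    # A simple modelled environment: one sun lamp roughly matching the test room.
    bpy.ops.object.light_add(type='SUN', location=(0, 0, 5))
    bpy.context.object.data.energy = 3.0
```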

2 citations


Proceedings ArticleDOI
20 Sep 2020
TL;DR: This paper discusses the information necessary to interpret NLOS data sets and reconstructions, noting that the sensing and reconstruction limitations are not yet completely understood.
Abstract: In the past decade, optical non-line-of-sight (NLOS) sensing has evolved from a fundamental idea and laboratory proof-of-principle experiments into an increasingly mature computational sensing method for obtaining information that cannot be accessed with conventional optical means. For instance, reconstruction of NLOS scenes has been demonstrated with large round-trip path lengths, outdoors, and in real time. To reach this state of the art, various powerful reconstruction algorithms and transient rendering software have been developed. Detailed reconstructions of complex scenarios show the feasibility of this technology for various sensing tasks ranging from military and security operations to civilian search and rescue missions and scientific (e.g. archaeology) and medical sensing tasks. Nevertheless, the sensing and reconstruction limitations are not completely understood. This paper starts to discuss the information necessary to interpret NLOS data sets and reconstructions.
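The paper surveys such reconstruction algorithms without fixing one; as a generic, textbook-style illustration of the family it refers to, the sketch below implements simple confocal back-projection of measured transients into a voxel grid. Grid layout and variable names are assumptions, not the authors' method.

```python
# Illustrative sketch of a common NLOS reconstruction approach: confocal back-projection.
import numpy as np

C = 3e8  # speed of light, m/s

def backproject(transients, scan_points, voxels, bin_width):
    """Accumulate transient intensities into a hidden-scene voxel grid.

    transients  -- array (n_scan, n_bins) of photon counts per time bin
    scan_points -- array (n_scan, 3) of laser/sensor positions on the relay wall
    voxels      -- array (n_vox, 3) of hidden-scene voxel centers
    bin_width   -- temporal bin width in seconds
    """
    volume = np.zeros(len(voxels))
    for counts, p in zip(transients, scan_points):
        # Round-trip distance wall -> voxel -> wall (confocal: same point in and out).
        dist = 2.0 * np.linalg.norm(voxels - p, axis=1)
        bins = np.round(dist / (C * bin_width)).astype(int)
        valid = bins < counts.shape[0]
        volume[valid] += counts[bins[valid]]
    return volume
```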

1 citation


Patent
07 Jul 2020
TL;DR: In this article, a USD-based method for efficient hardware-rendered previews in 3D software is proposed. It comprises the following steps: converting USD assets into proxy shape nodes in Maya so as to integrate a USD-based hardware renderer into a Maya window; exporting the material shading network of a model in Maya using a plug-in; and, during export, having the plug-in convert all material networks into corresponding hardware material nodes in USD.
Abstract: The invention provides a USD-based method for efficient hardware-rendered previews in 3D software, which comprises the following steps: converting USD assets into proxy shape nodes in Maya so as to integrate a USD-based hardware renderer into a Maya window; exporting the material shading network of a model in Maya using a plug-in; during export, having the plug-in convert all material networks into corresponding hardware material nodes in USD, i.e., representing the attribute information on the nodes of the original Maya software-rendering shading network with USD hardware material nodes and the upstream nodes connected to them; and finally, recording the USD hardware material nodes and related upstream nodes to realize a real-time material effect in the USD hardware renderer, so that an artist can see material information on the 3D model in real time in a Maya window. The USD hardware rendering engine is integrated into the Maya window, achieving efficient and representative hardware-rendered previews in the Maya window.
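A rough sketch of the material-conversion step, assuming the standard USD and Maya Python APIs rather than the patented plug-in itself: read a few attributes from a Maya shading node and author a corresponding hardware-friendly UsdPreviewSurface material. Node names, prim paths and the fixed roughness value are assumptions.

```python
# Illustrative sketch: convert a Maya shader's attributes into a USD preview material.
from maya import cmds
from pxr import Usd, UsdShade, Sdf

def export_material(maya_shader, stage, mat_path="/Materials/previewMat"):
    material = UsdShade.Material.Define(stage, mat_path)
    shader = UsdShade.Shader.Define(stage, mat_path + "/surface")
    shader.CreateIdAttr("UsdPreviewSurface")

    # Map the Maya shader's color onto the USD preview surface inputs.
    r, g, b = cmds.getAttr(maya_shader + ".color")[0]
    shader.CreateInput("diffuseColor", Sdf.ValueTypeNames.Color3f).Set((r, g, b))
    shader.CreateInput("roughness", Sdf.ValueTypeNames.Float).Set(0.5)

    material.CreateSurfaceOutput().ConnectToSource(shader.ConnectableAPI(), "surface")
    return material

stage = Usd.Stage.CreateNew("previewMaterials.usda")
export_material("lambert1", stage)
stage.GetRootLayer().Save()
```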

1 citation


Patent
30 Jul 2020
TL;DR: In this paper, the authors present a lazy-loading approach that limits the amount of computer resources used when initially viewing a model or drawing, and that also allows setting which components of a rendering software to load or not load when a model file is selected.
Abstract: Systems and methods for improving 2D and/or 3D model execution in a runtime environment are disclosed. The system uses a novel form of lazy loading to limit the amount of computer resources used when initially viewing a model or drawing. The system also allows setting which components of a rendering software to load or not load when a model file is selected. The system provides a picture of the 2D and/or 3D model view in place of the full model upon selection of the model file, and additional attributes and/or rendering software components are called as needed as part of the lazy-loading execution when the model is loaded into the runtime environment.
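A minimal sketch of the lazy-loading idea described above, under the assumption that a lightweight preview image is stored alongside each model file; class and function names are hypothetical, not taken from the patent.

```python
# Minimal sketch: show a stored preview image immediately, defer loading the full model
# (and only the selected renderer components) until it is actually requested.
def load_model(path, components):
    # Stand-in for a real loader that would parse geometry, materials, etc.
    return {"path": path, "components": list(components)}

class LazyModelView:
    def __init__(self, model_path, preview_path, components=("geometry",)):
        self.model_path = model_path
        self.preview_path = preview_path    # lightweight picture shown on file selection
        self.components = components        # user-configurable renderer components to load
        self._model = None                  # full model not loaded yet

    def preview_bytes(self):
        with open(self.preview_path, "rb") as f:
            return f.read()

    @property
    def model(self):
        # The full model is loaded only on first access in the runtime environment.
        if self._model is None:
            self._model = load_model(self.model_path, self.components)
        return self._model
```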

Patent
09 Apr 2020
TL;DR: In this paper, a list of widgets to be rendered on a display is stored in a priority queue; widgets are sent to rendering software components based on a specified priority in the queue, which reflects whether or not the widgets are in the display area.
Abstract: Embodiments of the present disclosure pertain to rendering on a mobile device. In one embodiment, a list of widgets to be rendered on a display is stored in a priority queue. Widgets in the priority queue are sent to rendering software components based on a specified priority in the queue. The priority is based on whether or not the widgets are in the display area of the display. In one embodiment, data for widgets in the queue is retrieved during rendering of other widgets, and priority is based on whether data for a particular widget in the queue is available.
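A minimal sketch of the prioritized widget-rendering scheme, using Python's heapq as the priority queue; the priority rule (visible widgets whose data is ready first) follows the abstract, while the data structures and names are assumptions.

```python
# Minimal sketch: order widget rendering by visibility and data availability.
import heapq
import itertools

_counter = itertools.count()  # tie-breaker so equal priorities keep insertion order

def priority(widget):
    # Lower value = rendered sooner.
    if widget["visible"] and widget["data_ready"]:
        return 0
    if widget["visible"]:
        return 1
    return 2

def enqueue(queue, widget):
    heapq.heappush(queue, (priority(widget), next(_counter), widget))

def render_all(queue):
    while queue:
        _, _, widget = heapq.heappop(queue)
        print(f"rendering {widget['name']}")

queue = []
for w in [{"name": "chart", "visible": True, "data_ready": False},
          {"name": "footer", "visible": False, "data_ready": True},
          {"name": "header", "visible": True, "data_ready": True}]:
    enqueue(queue, w)
render_all(queue)  # header, chart, footer
```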

Journal ArticleDOI
TL;DR: Light has wave-particle properties, but photoreal visualization from CAD-script renders shows only its wave properties; the authors suggest that, similarly to a thermal analysis but distinct from it, simulating the physics of light in standard rendering software with an increasing number (n) of bounces could convey particle properties, as a glass heart object yields a shape similar to the assumed Calabi-Yau variety.
Abstract: Light has wave-particle properties. In photoreal visualization we see only the wave properties of light, in photos produced by CAD-script renders, so where are the particle properties? Similarly to a thermal analysis, but distinct from it, simulating the physics of light in standard rendering software with an increasing number (n) of bounces could convey particle properties, as a glass heart object yields a shape similar to the assumed Calabi-Yau variety.