Author

Bernardino Ruiz

Bio: Bernardino Ruiz is an academic researcher. The author has contributed to research in the topics of texture mapping and visualization. The author has an h-index of 1, having co-authored 1 publication that has received 5 citations.

Papers
Proceedings ArticleDOI
01 Oct 2013
TL;DR: It is shown that it is possible to automatically reproduce realistic-looking virtual objects and scenes, even with photographs taken with an uncalibrated single moving camera under uncontrolled and intentionally variable lighting conditions, by extending the present capabilities of IBM with additional capture and modelling of surface appearance.
Abstract: Existing technologies for contact-less 3D scanning and Image Based Modelling (IBM) methods are extensively used nowadays to digitize cultural heritage elements. With a convenient degree of automation, these methods can properly capture and reproduce shape and basic colour textures. However, there is usually a quite evident lack of fidelity in the resulting appearance of the virtual reproductions when compared with the original items. Even when properly photo-textured, the reproduced surfaces often resemble either plaster or plastic, regardless of the properties of the original materials. What is neither captured nor modelled is the natural dynamic response of the actual materials to changes in observation angle and/or the lighting arrangement. The methodology introduced in this paper aims to improve the three-dimensional digitization and visualization of cultural heritage elements by extending the present capabilities of IBM with additional capture and modelling of surface appearance. We show that it is possible to automatically reproduce realistic-looking virtual objects and scenes, even with photographs taken with an uncalibrated single moving camera under uncontrolled and intentionally variable lighting conditions. This is achieved not only by reconstructing the shape and projecting colour texture maps from photographs, but also by modelling and mapping the apparent optical response of the surfaces to light changes, while also determining the variable distribution of environmental illumination of the original scene. This novel approach integrates Physically Based Rendering (PBR) concepts in a processing loop that combines capture and visualization. Using the information contained in different photographs, where the appearance of the object surface changes with environmental light variations, we show that it is possible to enhance the information contained in the usual colour texture maps with additional layers. This enables the reproduction of finer details of surface normals and relief, as well as effective approximations of the Bidirectional Reflectance Distribution Function (BRDF). The appearance of the surfaces can then be reproduced with a dedicated render engine, providing unusual levels of detail and realism thanks to enriched multi-layer texture maps and custom shading functions. The methodology is introduced with a real case study to illustrate its practical applicability and flexibility: the virtual reproduction of the Lady of Elche was performed only from archived photographs taken at the museum for different documentation purposes, using uncalibrated optics and an uncontrolled studio light arrangement. We also discuss capture on larger architectural elements, with uncontrolled (yet still variable) illumination in outdoor environments, and on challenging items with difficult-to-capture surfaces, such as the brass sculpture of La Regenta, where proper reproduction of surface reflection and environmental lights is a fundamental step towards a good visualization experience. These cases show the feasibility of working with field calibration and initial approximations for the camera model and light-maps, thus addressing the flexibility required for practical field documentation in museum environments or outdoors. The potential for diffusion is shown with the use of open-source software tools for enhanced visualization.
The presented capture methods are integrated with a specific adaptation of open-source GPU-based (Graphics Processing Unit) render engines to produce two flavours of 3D inspection/visualization tools with proper relighting capabilities, able to reveal very subtle details: a quasi-real-time realistic engine (Blender Cycles), which is also the basis for the capture process and is focused on realistic reproduction, and a real-time version based on customized pixel shaders for the visualization of lightweight models in web browsers and other interactive applications.
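The abstract above describes enriching colour texture maps with additional layers (surface normals, an approximated BRDF) that a custom shading function evaluates at render time. As a rough, illustrative Python sketch of that general idea (not the authors' code; the layer names and the simple Blinn-Phong-style lobe standing in for the approximated BRDF are assumptions):

import numpy as np

def relight_texel(albedo, normal, spec_strength, roughness,
                  light_dir, view_dir, light_rgb):
    # Return an RGB value for one texel under a single directional light,
    # using hypothetical per-texel layers: albedo, normal, specular, roughness.
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)

    # Diffuse term driven by the recovered normal layer.
    diffuse = albedo * max(np.dot(n, l), 0.0)

    # Crude glossy lobe as a stand-in for the estimated BRDF layer.
    h = (l + v) / np.linalg.norm(l + v)
    shininess = 2.0 / max(roughness ** 2, 1e-4)
    specular = spec_strength * max(np.dot(n, h), 0.0) ** shininess

    return light_rgb * (diffuse + specular)

# Example: a slightly rough, mostly diffuse texel lit from above.
rgb = relight_texel(albedo=np.array([0.6, 0.5, 0.4]),
                    normal=np.array([0.1, 0.2, 0.97]),
                    spec_strength=0.3, roughness=0.4,
                    light_dir=np.array([0.0, 0.3, 1.0]),
                    view_dir=np.array([0.0, 0.0, 1.0]),
                    light_rgb=np.array([1.0, 1.0, 1.0]))

A real-time shader version would evaluate essentially the same expression per pixel on the GPU, reading the layers from the enriched texture maps.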

5 citations


Cited by
Proceedings ArticleDOI
12 Aug 2016
TL;DR: In this paper, the authors developed a cross-cultural approach for educating people about monuments in Cyprus that are listed on the UNESCO World Heritage List, in order to help the user, in a UX-friendly way, learn about the different phases of each monument, its history, pathology state, architectural value and conservation stage.
Abstract: Digital heritage data are now more accessible through crowdsourcing platforms, social media and blogs. At the same time, evolving technology in 3D modelling, laser scanning and 3D reconstruction is constantly upgrading and multiplying the information that can be obtained from heritage digitisation. The question of how to reuse this information in different contexts arises. Educators and students are potential users of the digital content; developing an adaptable environment of applications and services for them is our challenge. One of the main objectives of the EU Europeana Space project is the development of a holistic approach for educating people (grown-ups and kids) about monuments in Cyprus that are listed on the UNESCO World Heritage List. The challenge was to use Europeana data (pictures and 3D objects) in a way that makes the information on the platform comprehensible to the users. Most of the data carry little metadata and lack descriptions of their history and cultural value (semantics). The proposed model is based on a cross-cultural approach that responds both to the multicultural features of the present era and to contemporary pedagogical and methodological directions. The system uses these innovative digital heritage resources in order to help the user, in a UX-friendly way, learn about the different phases of the monument, its history, pathology state, architectural value and conservation stage. The result is a responsive platform, accessible through smart devices and desktop computers (in the frame of “Bring Your Own Device”, a.k.a. BYOD), where every monument is a different course and every course is addressed to a different age group (from elementary level to adults’ vocational training).

5 citations

Journal ArticleDOI
TL;DR: A practical methodology for appearance acquisition, previously introduced in (Martos and Ruiz, 2013), is demonstrated, applied here specifically to the production of re-illuminable architectural orthoimages and suitable for outdoor environments, where the illumination is variable and uncontrolled.
Abstract: Software tools for photogrammetric and multi-view stereo reconstruction are nowadays in widespread use for the digitization of architectural cultural heritage. Together with laser scanners, these are well-established methods for digitizing the three-dimensional geometric properties of real objects. However, the acquired photographic colour mapping of the resulting point clouds or textured mesh cannot separate the intrinsic surface appearance from the influence of the particular illumination present at the moment of digitization. Acquisition of the actual surface appearance, separated from the existing illumination, is still a challenge for any kind of cultural heritage item, but especially for architectural elements. Methods based on systematic sampling with commuting light patterns in a laboratory set-up are not suitable: immovable and outdoor items are normally limited to the existing and uncontrolled natural illumination. This paper demonstrates a practical methodology for appearance acquisition, previously introduced in (Martos and Ruiz, 2013), applied here specifically to the production of re-illuminable architectural orthoimages. It is suitable for outdoor environments, where the illumination is variable and uncontrolled. In fact, naturally occurring changes in light among different images over the course of the day are actually desired and exploited, producing an enhanced multi-layer dynamic texture that is not limited to a frozen RGB colour map. These layers contain valuable complementary information about the depth of the geometry, fine details of the surface normals and other illumination-dependent parameters, such as direct and indirect light and projected self-shadows, allowing an enhanced and re-illuminable orthoimage representation.
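As an illustration of how such a multi-layer orthoimage could be re-illuminated, the following Python sketch (with assumed layer names and a simple Lambertian-plus-shadow-mask model, not the paper's actual layer format) combines albedo, normal, shadow and ambient layers under a new sun direction:

import numpy as np

def relight_orthoimage(albedo, normals, shadow_mask, ambient, sun_dir, sun_rgb):
    # albedo: HxWx3, normals: HxWx3, shadow_mask: HxW in [0,1], ambient: HxWx3.
    n = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
    l = np.asarray(sun_dir, dtype=float)
    l = l / np.linalg.norm(l)

    # Cosine term from the normal layer, clamped to the upper hemisphere.
    ndotl = np.clip(np.einsum('ijk,k->ij', n, l), 0.0, None)

    # Direct light is gated by the shadow layer; indirect light is kept as-is.
    direct = albedo * (ndotl * shadow_mask)[..., None] * np.asarray(sun_rgb)
    return np.clip(direct + ambient * albedo, 0.0, 1.0)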

3 citations

Journal ArticleDOI
TL;DR: A new methodology based on the combination of photogrammetric and stereo-photometric techniques is described that allows creating virtual replicas reproducing relief at a micrometric scale, with a geometric resolution of up to 7 microns, to provide quality 3D printing by additive manufacturing methods.
Abstract: This paper describes a new methodology, based on the combination of photogrammetric and stereo-photometric techniques, that allows creating virtual replicas reproducing relief at a micrometric scale, with a geometric resolution of up to 7 microns. The finest details of the texture obtained by photogrammetric methods are translated into the relief of the mesh to provide quality 3D printing by additive manufacturing methods. These results open new possibilities for virtual and physical reproduction of archaeological items that require great accuracy and geometric resolution.
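A common way to recover relief at this level of detail is classic photometric stereo, which may differ in detail from the stereo-photometric pipeline used in the paper. The sketch below (illustrative only) estimates per-pixel normals and scaled albedo from a stack of images taken under known light directions; the normal field could then be integrated into a displacement map applied to the photogrammetric mesh:

import numpy as np

def photometric_stereo(images, light_dirs):
    # images: (K, H, W) grayscale stack; light_dirs: (K, 3) unit light vectors.
    k, h, w = images.shape
    i = images.reshape(k, -1)                    # (K, H*W) intensities
    l = np.asarray(light_dirs, dtype=float)      # (K, 3)

    # Lambertian model I = L @ (albedo * n); solve in the least-squares sense.
    g, *_ = np.linalg.lstsq(l, i, rcond=None)    # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-8)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)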

2 citations

Journal ArticleDOI
TL;DR: To demonstrate the efficiency of the render farm implementation, scalability tests were performed using a 360° equirectangular model; the work aims to achieve highly complex renderings in less time in support of the research.
Abstract: Nowadays, photorealistic images are in demand for the presentation of scientific models, so rendering tools are used to convert three-dimensional models into highly realistic images. The problem arises when the three-dimensional model becomes larger and more complex: the time needed to generate an image grows considerably due to hardware limitations. To address this problem, a render farm is implemented, consisting of a set of computers interconnected by a high-speed network, where each participating computer renders a strip of the global image, with the aim of reducing the processing time of highly complex images. The research was carried out on a high-performance Beowulf cluster at the Universidad de Ciencias y Humanidades, using a total of 18 computers. To demonstrate the efficiency of the render farm implementation, scalability tests were performed using a 360° equirectangular model with a total of 67 million pixels; the goal is to achieve highly complex renderings in less time in support of the research.
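The strip-based distribution described above can be sketched in a few lines of Python (purely illustrative: the 11584 × 5792 resolution is an assumption chosen to match the reported ~67 million pixels at a 2:1 aspect ratio, and multiprocessing stands in for the networked Beowulf nodes):

import numpy as np
from multiprocessing import Pool

WIDTH, HEIGHT, NODES = 11584, 5792, 18   # assumed ~67 Mpixel panorama, 18 workers

def render_strip(bounds):
    # Placeholder per-node job: fill one horizontal strip of the panorama.
    row0, row1 = bounds
    strip = np.zeros((row1 - row0, WIDTH, 3), dtype=np.float32)
    # ... each node would invoke the actual renderer on its strip here ...
    return row0, strip

def render_farm():
    edges = np.linspace(0, HEIGHT, NODES + 1, dtype=int)
    jobs = list(zip(edges[:-1], edges[1:]))
    frame = np.zeros((HEIGHT, WIDTH, 3), dtype=np.float32)
    with Pool(NODES) as pool:
        for row0, strip in pool.imap_unordered(render_strip, jobs):
            frame[row0:row0 + strip.shape[0]] = strip
    return frame

if __name__ == "__main__":
    panorama = render_farm()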

2 citations

01 Jan 2015
TL;DR: In this article, a new methodology based on the combination of photogrammetric and stereo-photometric techniques is described for creating virtual replicas reproducing relief at a micrometric scale, with a geometric resolution of up to 7 microns.
Abstract: This paper describes a new methodology, based on the combination of photogrammetric and stereo-photometric techniques, that allows creating virtual replicas reproducing relief at a micrometric scale, with a geometric resolution of up to 7 microns. The finest details of the texture obtained by photogrammetric methods are translated into the relief of the mesh to provide quality 3D printing by additive manufacturing methods. These results open new possibilities for virtual and physical reproduction of archaeological items that require great accuracy and geometric resolution.