J Real-Time Image Proc (2010) 5:109–120
DOI 10.1007/s11554-009-0137-x

SPECIAL ISSUE

Rendering techniques for mixed reality
Thomas Gierlinger • Daniel Danch
André Stork



Received: 12 November 2008 / Accepted: 7 October 2009 / Published online: 11 November 2009
Springer-Verlag 2009

Abstract In mixed reality (MR) design review, the aesthetics of a virtual prototype is assessed by integrating a virtual model into a real-world environment and inspecting the interaction between the model and the environment (lighting, shadows and reflections) from different points of view. The visualization of the virtual model has to be as realistic as possible to provide a solid basis for this assessment, and interactive rendering speed is mandatory to allow the designer to examine the scene from arbitrary positions. In this article we present a real-time rendering engine specifically tailored to the needs of MR visualization. The renderer utilizes pre-computed radiance transfer to calculate dynamic soft-shadows, high dynamic range images and image-based lighting to capture incident real-world lighting, approximate bidirectional texture functions to render materials with self-shadowing, and frame post-processing filters (bloom filter and an adaptive tone mapping operator). The proposed combination of rendering techniques provides a trade-off between rendering quality and required computing resources which enables high quality rendering in mobile MR scenarios. The resulting image fidelity is superior to radiosity-based techniques because glossy materials and dynamic environment lighting with soft-shadows are supported. Ray tracing-based techniques provide higher quality images than the proposed system, but they require a cluster of computers to achieve interactive frame rates, which prevents these techniques from being used in mobile MR (especially outdoor) scenarios. The renderer was developed in the European research project IMPROVE (FP6-IST-2-004785) and is currently extended in the MAXIMUS project (FP7-ICT-1-217039) where hybrid rendering techniques which fuse PRT and ray tracing are developed.

T. Gierlinger (&) · D. Danch
Fraunhofer IGD, Darmstadt, Germany
e-mail: thomas.gierlinger@igd.fraunhofer.de

A. Stork
TU Darmstadt, Darmstadt, Germany

Keywords Mixed reality rendering · Real-time rendering · High dynamic range imaging · Image-based lighting · Pre-computed radiance transfer · Approximate bidirectional texture function · Adaptive tone mapping

1 Introduction
Design review is concerned with the assessment of different prototypes of a product in terms of functionality,
feasibility, and aesthetics to select the most suitable one for
production. Since the construction of real-world prototypes
is cost-intensive, the trend in the industry is to base as
many decisions as possible on virtual prototypes which can
be generated at significantly lower costs. If a virtual prototype is used to review the aesthetics of a design, the
challenge is to visualize the model as realistically as possible while still allowing the designer to interact with the
virtual scene to assess the model from every possible point
of view. Furthermore, it is often meaningful (if not necessary) to review the prototype in the context of some realworld location, e.g., a virtual building at the construction
site. Mixed reality (MR) rendering techniques provide the
means for integrating virtual content into real-world
environments.
This article presents the MR rendering engine which we
developed in the European project IMPROVE. One aim of
IMPROVE was to develop hard- and software components to better support designers in assessing the aesthetics of
virtual prototypes. There were representatives of two end-user groups involved in the project, namely Fiat Elasis
from the automotive industry and Page\Park Architects
from the architectural domain. The end-users defined two
major application scenarios with respect to MR: The
automotive designers requested the possibility to review a
virtual car in a real-world showroom using an optical see-through HMD (this is what we call the indoor scenario). In
this scenario the designer assesses the appearance of the
virtual car by viewing it in the HMD while simultaneously
moving through the room. The designer is interested in the
properties of the car materials, including reflections which
emphasize the shape of the car. The architects wanted to be
able to walk around on a real construction site and review
the model of a planned building in MR at the same place
where it will be built—again using an optical see-through
HMD (this is what we call the outdoor scenario). In this
scenario, the architect views the virtual building in the
HMD and simultaneously moves along the site. The focus
in this scenario is on the appearance of the building due to
the current lighting and on shadows that are cast onto the
environment. Note that this scenario inherently requests
dynamic lighting of the virtual model (since the outdoor
lighting conditions may change during the review session)
and support for soft-shadows (resulting from an overcast
sky). The application scenarios led to the following
requirements for our rendering engine:
1. Realistic rendering, i.e., (a) support for realistic materials including reflections and (b) methods to capture and re-use real-world lighting.
2. Support for dynamic lighting including soft-shadows.
3. Interactive rendering speed.

The remaining part of this article is organized as follows: in Sect. 2, we provide an overview of related work
with respect to MR rendering systems as well as MR
rendering techniques. Section 3 describes the details of our
approach to meet the aforementioned requirements, starting
with an overview of the rendering pipeline, followed by an
in-depth description of the utilized algorithms. Benchmarks
of our rendering system with models of different sizes are
given in Sect. 4. Section 5 provides a short overview of
current and future work and Sect. 6 concludes the article.

2 Related work
In this section, we give a short overview of MR systems
with emphasis on the different approaches to image generation and afterwards we review the characteristics of
different rendering techniques to assess their ability to meet
the requirements of our renderer.


2.1 MR rendering systems
MR design review has been investigated before, e.g., by
Klinker et al. [19], however, the HMDs used were rather
bulky and the rendering techniques employed were missing
important features to integrate the virtual objects into real
scenes convincingly; most importantly, dynamic soft-shadows
were missing. The ARIS Project [1] developed an MR
rendering system which uses high dynamic range images
(HDRIs) to capture incident lighting. They project this
lighting information onto an approximation of the real-world geometry and calculate a radiosity solution for scene
elements that are not directly visible. Furthermore, they
calculate approximate soft-shadows by generating shadow
maps for the most important light sources of the radiosity
mesh. The resulting shadow maps are combined using a
differential rendering technique to generate approximate
soft-shadows. The quality of the shadows depends on the
number of light sources used and generating high quality
soft-shadows involves a huge number of rendering passes.
The advantage of the technique is the support of local light
sources. Wald et al. [34] employ their interactive ray
tracing system to MR rendering. While this approach
delivers high quality results, a cluster of PCs is needed to
perform the rendering at interactive frame rates. Franke
and Jung [10] utilize pre-computed radiance transfer (PRT)
in their X3D-based MR system. Their system is similar to
the one proposed by the authors in the sense that they
support soft-shadows due to dynamic environment lighting
(which is the main feature of PRT). However, they do not
support the advanced material model proposed in this
article and they do not provide performance measurements
for complex models.
2.2 MR rendering techniques
The data required to render a virtual model can be divided
into three classes, namely the geometry of the model, the
material of surfaces and the incident lighting. In this section, we provide an overview of possible approaches to
acquire materials and incident lighting. We also outline
different rendering algorithms that generate images based
on these data and assess them in terms of suitability for
mobile MR applications, i.e., according to the requirements
stated in the introduction we search for the algorithm that
delivers the best image quality (in terms of the most
complete set of supported lighting effects) while still being
fast enough on today’s hardware to deliver interactive
frame rates. The geometry of the model is assumed to be
provided by a CAD/computer aided styling (CAS) program
(this is the design proposal generated by CAS in the
automotive industry or a CAD model generated by an
architect).


2.2.1 Acquisition of incident lighting

To convincingly merge virtual models with real scenes it
is important to have consistent lighting on the virtual
models, i.e., shadows must fall into the same direction as
those of real-world objects and the light sources used for
rendering the virtual models must represent the real-world
lighting situation as accurately as possible. One possibility
to capture a real-world lighting situation for real-time
rendering is to approximate all lights in a real scene by
standard computer graphics light sources like point lights,
directional lights or spot lights. However, for most real
scenes this approach is inefficient since one would have to
use a huge number of light sources to achieve a good
approximation. This is especially true for scenes that
contain area light sources (like windows or the sky).
Debevec [7] proposes the use of photographs to capture
real-world lighting which can be utilized to render virtual
objects [image-based lighting (IBL)]. This approach is
very efficient since only a single image is needed to
describe the lighting environment—independent of the
number of light sources that are present in the real scene.
One problem with the use of photographs to capture the
wide variety of light intensities in real-world environments is the limited dynamic range of standard (off-the-shelf) cameras. Depending on the actual exposure
settings a captured image may either lose details in dark
parts of the scene where the sensor is underexposed (and
the image is constant black) or in bright parts of the scene
where the sensor is overexposed (and the image shows
constant white). High dynamic range imaging (HDRI)
addresses this problem by capturing the whole range of
real-world intensities by either using an image series or
special HDR sensors. In the first approach, an image
series with different exposure settings is generated and the
correctly exposed parts of the images of this series are
combined into the resulting HDRI. In the second approach
the camera sensor itself is capable of detecting a high
dynamic range of intensities. One example of such a
sensor is the SpheroCamHDR by Spheron [33] which can
produce a high resolution HDR environment image in a
single shot.
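As an illustration of the image-series approach, the following minimal sketch shows a Debevec-style merge of a bracketed exposure series. It assumes the input images are already linearized (camera response removed) and registered; it is not the tool chain we use later (Sect. 3.1 relies on HDRShop, AHDRIA and the SpheroCam HDR).

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge a bracketed exposure series into one HDR radiance map.

    `images` are float arrays in [0, 1], assumed linearized and registered.
    A hat-shaped weight discounts under- and over-exposed pixels, as in
    Debevec-style HDR assembly.
    """
    numerator = np.zeros_like(images[0], dtype=np.float64)
    denominator = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        weight = 1.0 - np.abs(2.0 * img - 1.0)   # favour mid-range pixels
        numerator += weight * (img / t)           # per-image radiance estimate
        denominator += weight
    return numerator / np.maximum(denominator, 1e-6)
```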
2.2.2 Material acquisition

The common way to describe materials for real-time rendering is to work with some analytical expression of the bidirectional reflectance distribution function (BRDF, see [22]). The BRDF is a four-dimensional function which returns the fraction of incident light (from a given direction) which is reflected into an outgoing direction. This approach to material definition is valid only for homogeneous materials, i.e., materials that do not have any spatially varying properties. Spatially varying effects are usually approximated via texture maps (see [3]). While this approach delivers good results for flat surfaces, it does not capture effects of the meso-structure of a material. Although the standard texture mapping technique has been extended to store geometric information of the underlying surface (see [2]), these techniques are not capable of capturing self-shadows (i.e., shadows cast from the material's meso-structure onto itself). However, materials with meso-structure are quite common in car or building interiors, e.g., all sorts of textiles, where threads cast shadows onto their neighbors. Dana et al. [6] present an approach to capture materials with meso-structure and self-shadowing. They capture images of a material sample for all possible combinations of incident light direction and viewing direction. The resulting set of images is termed bidirectional texture function (BTF). Renderings generated using BTFs deliver high-quality results. However, the BTF approach has two major problems: first, the acquisition procedure is complex. Second, the amount of generated data is hard to use in real-time rendering, since the memory on a graphics board is rather limited. There are approaches to compress BTFs to better fit the needs of real-time rendering (see [21]), but the acquisition process remains time consuming. Kautz [16] proposes to use only a small sub-set of a full BTF [termed approximate BTF (ABTF)] for real-time rendering. This vastly simplifies the acquisition process. Although using ABTFs yields images of lesser quality than BTFs, acquiring ABTFs is far more practical, which justifies their use for real-time rendering of materials with self-shadows.

2.2.3 Rendering algorithms
To synthesize physically correct images of a virtual scene,
the propagation of light through the scene has to be simulated. Light is emitted from light sources, travels through
space until it hits a surface in the scene where it is reflected
and/or refracted according to the material properties of the
surface. The reflected (and refracted) light continues its
way through space into the new direction until the next
surface is hit and the next interaction according to the
surface material occurs. This process recurs until the light
finally reaches the sensor of a virtual camera where it forms
the image. Kajiya [15] formalizes the propagation of light
in the rendering equation, the formula that has to be solved
by every rendering algorithm that strives to produce
physically correct images. The full numerical solution of
the rendering equation is computationally intensive and cannot currently be done in real-time. Therefore, all real-time
rendering algorithms approximate the rendering equation
to some extent. This approximation manifests itself in a
limited set of lighting effects that is captured by a specific
algorithm. In the following we characterize different rendering algorithms to assess their suitability for MR
rendering, especially for mobile MR which requires the
algorithm to run in real-time on a single (wearable) PC.
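For reference, the rendering equation formalized by Kajiya [15] can be written in its common hemispherical form as

$$L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, \mathrm{d}\omega_i,$$

where $L_o$ is the radiance leaving point $x$ in direction $\omega_o$, $L_e$ is the emitted radiance, $f_r$ is the BRDF, $L_i$ is the incident radiance and $n$ is the surface normal; the algorithms characterized below differ in which parts of this integral they evaluate, and how.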
Rasterization Today, the commonly used algorithm for
generating real-time 3D graphics is rasterization where a
local lighting model is evaluated at the vertices of triangles
that compose the scene geometry. The triangles are then
projected onto the image plane of a virtual camera and the
shading of the vertices is interpolated across the interior of
the projected triangle. This approach is very efficient and
hardware support in form of graphics processing units
(GPUs) is available on nearly all current commodity PCs.
Standardized programming interfaces like OpenGL (see
[24]) exist which allow for easy access to features of the
underlying graphics hardware. However, the use of a local
lighting model, where the effect of a light source on a
triangle is calculated without taking the other triangles in
the scene into account, disregards any global lighting
effects like shadows or color bleeding (i.e., diffuse light
interreflection between triangles). The generation of hard
shadows can be integrated into the rasterization framework
by applying additional algorithms like shadow mapping
(see [37]) in additional rendering passes. This approach
works well for abstract computer graphics light sources
like point lights and directional lights. However, the light
sources present in real-world environments are hard to
model with these abstract lights, especially area light
sources like windows would require a huge number of
lights to be placed on the area of a window to create a good
approximation. Consequently, correct soft-shadows are
hard to integrate into this rendering approach and introduce
a severe decrease in performance. Reflections can be
approximated by environment mapping (see [3]). Glossy
reflections can be approximated by pre-filtered environment maps (see [17]). Characteristics of rasterization are as
follows:

• real-time capability: yes;
• dynamic lighting: yes;
• indirect diffuse lighting: no;
• soft-shadows: limited;
• reflections: approximated by environment mapping;
• glossy materials: yes (for local lighting).

Radiosity Radiosity is a rendering method introduced by
Goral et al. [11] which simulates the global diffuse light
transfer in a scene. This algorithm restricts all light sources
and materials in the scene to diffuse characteristics. Under
this assumption it is possible to divide the scene into small
surface elements (patches) and to construct a system of
linear equations that describes the diffuse light transfer in
the scene. Solving this linear system involves the calculation of the inverse of a high-dimensional matrix, which cannot be done in real-time. However, once the solution is
found, the diffuse global illumination in the whole scene,
including soft-shadows and diffuse indirect lighting, is
known. The result can then be explored in interactive walkthroughs because diffuse lighting is view-independent.
Since the calculation of a radiosity solution cannot be done
in real-time, it is usually generated in an offline pre-process
and the result is stored with the model. One problem with
the radiosity method is the fact that a solution, once found,
is only valid as long as the scene configuration (i.e., position and orientation of geometries, material properties, and
position, orientation and intensity of light sources) does not
change. If the scene configuration, e.g., the incident lighting, changes, then the time-consuming pre-process to find a
new radiosity solution has to be repeated. This implies that
dynamic lighting is not possible with the classical radiosity
approach. Note that reflections can be approximated in the
radiosity method by overlaying the diffuse lighting results
with environment mapping. Several approaches have been
proposed to speed up the radiosity pre-process (e.g., [12]).
Drettakis and Sillion [8] construct a line-space hierarchy
for fast identification of those parts of a scene that are
affected by changes of the geometry. This approach allows
them to move a small number of objects and to update the
radiosity solution interactively. Keller [18] presents a stochastic approach to calculate a radiosity solution. His
instant radiosity algorithm emits light particles from the
light sources and traces them through the scene where they
generate secondary light sources [termed virtual point
lights (VPLs)] that are used to approximate indirect illumination. Depending on the number of VPLs generated,
this method can be applied in real-time. However, it does
not support glossy materials. Sillion et al. [31] present a
method to integrate glossy and highly specular materials
into the progressive radiosity framework. Recent work on a
GPU-based implementation of progressive radiosity (see
[35]) indicates that this approach is currently not possible
in real-time for non-trivial scenes. For complex scenes the
radiosity method can be summarized as follows. Characteristics of radiosity:

• real-time capability: yes (for static scenes or small number of dynamic objects);
• dynamic lighting: generally no, but possible with instant radiosity;
• indirect diffuse lighting: yes;
• soft-shadows: yes;
• reflections: approximated by environment mapping for real-time application;
• glossy materials: no (not in real-time).
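For reference, the linear system mentioned above takes, in its classical discrete form (Goral et al. [11]),

$$B_i = E_i + \rho_i \sum_{j} F_{ij}\, B_j,$$

where $B_i$ is the radiosity of patch $i$, $E_i$ its self-emission, $\rho_i$ its diffuse reflectance and $F_{ij}$ the form factor between patches $i$ and $j$; collecting these equations for all patches yields the matrix system whose pre-computed solution gives the view-independent diffuse illumination.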

Ray tracing In contrast to radiosity, which provides a
view-independent approximation to the light distribution in
a scene, ray tracing, as introduced by Whitted [36], is a view-dependent technique. Rays originating from the eye
of a virtual observer are traced through the scene until they
hit a surface. At the hit-point, the direct lighting which is
reflected back to the observer is calculated by tracing
additional rays to each light source (termed shadow feelers). If a shadow feeler reaches the light source without
intersecting the geometry of the scene, then the contribution of the light source to the shading of the hit-point is
calculated. Otherwise the hit-point is in shadow with
respect to the light source and the light source does not
contribute to the shading at the hit-point. Afterwards,
secondary rays (into the ideal reflected and/or refracted
direction) are generated at the hit-point and traced further
through the scene to account for the light that is specularly
reflected or refracted towards the observer. This Whitted-style ray tracing can consequently account for hard-shadows, ideal reflections and ideal refractions, but not for
soft-shadows and glossy reflections/refractions. Cook et al.
[5] extend Whitted-style ray tracing for support of soft-shadows and glossy reflections amongst others. They
accomplish this in the distributed ray tracing algorithm by
distributing multiple rays to evaluate an effect, e.g., multiple shadow feelers are distributed on area light sources to
capture soft-shadows. However, this approach requires tracing more rays than Whitted-style ray tracing and is
therefore slower. Even more rays have to be traced in path
tracing (see [15]) where Monte Carlo integration is used to
evaluate the rendering equation. Path tracing-based techniques can deliver the most accurate images since all
possible light paths are captured by the algorithm. However, the method is computationally intensive and cannot
be applied in real-time.
Jensen [14] proposes photon mapping, an algorithm
that is significantly more efficient than path tracing and
still able to capture color bleeding and caustics. In this
approach, photons are emitted from light sources and traced
through the scene. If a surface of the scene is hit by a
photon, then it is stored in a KD tree (a data structure that
supports fast nearest neighbor searches). The photon is then
reflected or refracted according to the material properties at
the hit-point and traced further through the scene. After the
KD tree has been built with a sufficient number of traced
photons, it is used in a ray tracing pass to calculate diffuse
interreflections (caustics and color bleeding) based on a
density estimation of the stored photons. While this
technique is faster than path tracing, it is slower than
Whitted-style ray tracing due to the additional resources
needed for KD tree construction and density estimation.
Wald et al. [34] show that ray tracing-based techniques can
produce interactive results if a cluster of PCs is used.
Although the computing power of CPUs and GPUs has
increased since their work, and even a first dedicated ray
tracing accelerator has been developed (the Caustic One board by Caustic Graphics [4]), real-time ray tracing of
complex scenes at high resolutions on a single PC is not yet
possible. The Caustic One board does a remarkable job at
accelerating ray tracing (an increase of performance by a
factor of 20 over CPU-based ray tracing is reported),
however, interactive speed of 3–5 frames per second for a
complex scene has been demonstrated only for low display
resolutions (namely 640 × 480 pixels). The same is true for
current GPU-based ray tracers. Zhou et al. [38] report
interactive frame rates for their GPU-based ray tracer at a
resolution of 1,024 × 1,024 (5–32 fps), but the scenes used
were rather small (11–300 k triangles). They also provide a
photon mapping implementation that is reported to be 10
times faster than a CPU implementation, but the absolute
frame rates of 10 fps at a resolution of 800 × 600 for model
sizes below 20k triangles do not yet allow for real-time
rendering with complex models at high resolutions. Characteristics of ray tracing-based techniques are as follows:

• real-time capability: limited (small scenes or low resolutions);
• dynamic lighting: yes;
• indirect diffuse lighting: yes;
• soft-shadows: yes;
• reflections: yes;
• glossy materials: yes.

Pre-computed radiance transfer PRT is a real-time rendering algorithm proposed by Sloan et al. [32] to render
low-frequency global illumination effects under dynamic
environmental lighting. The algorithm consists of two
passes: a pre-process and the actual run-time calculations.
The pre-process calculates transfer functions at points on a
model (either at the vertices or on a per-texel basis). These
transfer functions encode global illumination effects like
shadowing and indirect diffuse lighting and they are projected onto the function space of the spherical harmonic
(SH) functions, yielding a set of projection coefficients that
are stored together with the model and re-used at run-time.
As with radiosity, this pre-process cannot be done in real-time for complex models. The run-time calculations use a
description of the lighting environment which is projected
onto the SH basis and applied to the stored transfer functions. The core of the technique is the projection of all
functions relevant to the lighting calculation onto the SH
basis. To calculate the light reflected towards the viewer at
a given point requires an integration over all incident
lighting directions. As it turns out, this reduces to computing a simple dot product of projection coefficients in the
SH basis. This dot product can be performed in real-time and allows for rendering scenes with dynamic
environmental lighting and low-frequency global illumination effects in a convincing manner. Note that this
approach supports glossy reflections (see [20]) but not highly specular, mirror-like reflections. The latter may be
approximated by environment mapping. Characteristics of
PRT are as follows:

• real-time capability: yes (for static scenes);
• dynamic lighting: yes (for environmental lighting);
• indirect diffuse lighting: yes;
• soft-shadows: yes;
• reflections: approximated by environment mapping for real-time application;
• glossy materials: yes.
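The dot-product reduction mentioned above follows from the orthonormality of the SH basis: if the incident lighting $L$ and a transfer function $T$ are each approximated by their projections onto the first $n^2$ basis functions $y_i$, with coefficients $l_i$ and $t_i$, then

$$\int_{\Omega} L(\omega)\, T(\omega)\, \mathrm{d}\omega \approx \sum_{i=1}^{n^2} l_i\, t_i,$$

which is the per-vertex dot product that is evaluated at run-time.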

Fig. 1 Comparison of rendering techniques

3 Approach
Figure 1 summarizes the characteristics of the rendering
algorithms presented in the previous section. From comparing the features of these rendering techniques we conclude that the most appropriate technique to support the
requirements of advanced materials, dynamic environment
lighting and interactive rendering speed is PRT because
rasterization does not support global illumination effects,
radiosity cannot handle glossy materials and ray tracing is
not yet fast enough on a single PC which prevents its
applicability in our outdoor mobile MR scenario. We use
HDRIs to capture incident lighting because of the efficiency of the approach and its natural integration with
PRT. ABTFs were incorporated into our MR rendering
engine because we needed a practical technique to capture
materials with meso-structure for the rendering of car
interiors. Our rendering engine is based on the OpenSG
scenegraph (see [25]), which we extended to support PRT
and IBL. ABTFs are supported through special shaders
developed using the OpenGL Shading Language (see [29]).
The workflow for using our rendering engine is depicted in
Fig. 2. First a model is loaded into the VRED (see
http://www.vred.org) editor and materials with PRT shaders are applied. Then we run our PRT pre-processor on the
model which calculates the projection coefficients. The
result is stored with the model. For real-time rendering, the
model with projection coefficients is loaded together with a
lighting environment in form of an HDR environment map.
The lighting environment is projected onto the SH basis.
Afterwards the actual PRT rendering is done by calculating
the dot product of the projection coefficients of the transfer
functions on the model and the lighting environment in a
vertex shader. The resulting image is stored in a framebuffer object and handed over to our post-processing
pipeline which is required to map the HDR rendering result
to low dynamic range display hardware.
In the following sections, we provide a more detailed
description of the rendering techniques we use in our MR
renderer together with some implementation details.


Fig. 2 Workflow in our renderer. Model preparation (left) and real-time rendering (right)

3.1 High dynamic range imaging
As discussed before, HDRI enables the efficient acquisition
of real-world environmental lighting. To capture an HDRI
using a standard digital camera it is necessary to take a
series of photographs at different exposure times. These
images are then combined to form an HDRI (see [7] for
details), which can be done using HDRShop (see [13]). The
acquisition of the photo series can be done manually, but
this usually leads to incorrectly registered images due to
small movements of the camera when pushing the shutter
release. It is possible to register the images in a post-process (e.g., using pfstools, see [27]), but an easier way to get
correctly registered images is to remote-control the camera.
AHDRIA (see [23]) is a software package specifically
developed for this purpose. To capture a full environment
image in one shot, it is possible to utilize a mirror sphere
located at the position where the virtual object will be
placed. However, this approach results in rather low quality
environment images due to the limited resolution provided
by off-the-shelf cameras. For this reason we use a
SpheroCam HDR (see [33]), which is a special high resolution HDR environment camera. The resulting images
have a resolution of 11,000 × 5,500 pixels. From these
images we generate a low resolution light probe image (128 × 128 to 512 × 512 pixels) which we actually use for the lighting calculation in the renderer and a high resolution cube map which we use for specular reflections.

Fig. 3 Building model in different lighting environments. Model courtesy of Page\Park Architects, light probes courtesy of Paul Debevec

3.2 Pre-computed radiance transfer
In our renderer we pre-calculate shadowed transfer at every
vertex of a model (i.e., we project the visibility function at
every vertex). The ith projection coefficient Ti at a vertex is
calculated as
$$T_i = \int_{\Omega} v(\omega)\, \cos\theta\; y_i(\omega)\, \mathrm{d}\omega$$

where $v(\omega)$ is the visibility function at the vertex, $\omega$ is an incident lighting direction from the upper hemisphere $\Omega$, $\theta$ is the angle between the surface normal and the incident lighting direction, and $y_i(\omega)$ is the $i$th SH basis function.
The pre-process is done on the CPU and the resulting
projection coefficients are stored in the texture coordinate
sets of the model. We utilize the Galileo ray tracer (see
[30]) to evaluate the visibility function. During run-time we
project a lighting environment provided as an HDR light
probe image onto the SH basis and upload the projection
coefficients as uniform variables to the graphics board. The
calculation of the dot product of transfer coefficients and
lighting coefficients is performed locally on the graphics
card using a vertex program. The accuracy of the global
illumination effects can be controlled by the number of
basis functions (and hence the number of resulting projection coefficients) used during the projection process.
Due to the limited number of available texture coordinate
sets on current graphics hardware we currently support a maximum of 5 SH bands (i.e., 25 coefficients) in our
implementation. The images presented in this article are
generated using three SH bands (i.e., 9 coefficients). We
choose three SH bands for approximation because this
setting already delivers plausible shadows in our test
scenes and causes minimal run-time overhead. For a discussion of shadow quality with respect to the order of SH
approximation we refer the reader to the original paper by
Sloan et al. [32]. Figure 3 shows two renderings of a
building with different environmental lighting. Note the
soft-shadows below the building. The lighting environment
can be changed interactively. The composition of the
shadows cast by the model with the background image
works as follows: first of all we have a white plane below
the model that acts as the shadow receiver. For this plane
we calculate the color due to the environment lighting
(without the geometry of the building casting shadows).
This unshadowed color is calculated once per frame on the
CPU since it is the same color for all vertices of the plane
(it is only dependent on the plane normal and the actual
lighting environment). We then compute the shadowed
color for the plane on the GPU and derive the change in
illumination as (unshadowedColor - shadowedColor)/unshadowedColor. The result is then multiplied with the color values at each pixel of the background image.
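The following sketch illustrates the two PRT stages described above, the Monte Carlo projection of the per-vertex shadowed transfer (which the pre-processor performs with the Galileo ray tracer) and the run-time dot product with the SH-projected light probe, plus the shadow-receiver ratio used for compositing. The helper names (`sh_basis`, `ray_hits_geometry`) are hypothetical, and the CPU/NumPy formulation stands in for the actual vertex-shader implementation:

```python
import numpy as np

N_BANDS = 3                    # 3 SH bands = 9 coefficients, as in the article
N_COEFFS = N_BANDS ** 2

def project_vertex_transfer(position, normal, sh_basis, ray_hits_geometry,
                            n_samples=625):
    """Monte Carlo estimate of the shadowed transfer coefficients T_i.

    `sh_basis(i, direction)` evaluates the i-th real SH basis function and
    `ray_hits_geometry(origin, direction)` is a visibility query against the
    model; both are assumed helpers, not part of the paper's API.
    Uniform sampling over the sphere; directions below the horizon and
    occluded directions contribute zero.
    """
    coeffs = np.zeros(N_COEFFS)
    for _ in range(n_samples):
        d = np.random.normal(size=3)
        d /= np.linalg.norm(d)                    # uniform direction on the sphere
        cos_theta = float(np.dot(normal, d))
        if cos_theta <= 0.0 or ray_hits_geometry(position, d):
            continue                              # v(omega) * cos(theta) = 0
        for i in range(N_COEFFS):
            coeffs[i] += cos_theta * sh_basis(i, d)
    return coeffs * (4.0 * np.pi / n_samples)     # Monte Carlo normalization

def shade_vertex(transfer_coeffs, light_coeffs):
    """Run-time PRT shading: a dot product per color channel.

    `light_coeffs` has shape (3, N_COEFFS): the HDR light probe projected
    onto the SH basis once per frame.
    """
    return light_coeffs @ transfer_coeffs         # -> RGB radiance

def shadow_factor(unshadowed, shadowed):
    """Relative darkening for the shadow receiver below the model,
    following (unshadowedColor - shadowedColor) / unshadowedColor."""
    return (unshadowed - shadowed) / np.maximum(unshadowed, 1e-6)
```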
3.3 Approximate bidirectional texture functions
As mentioned earlier, the approximate bidirectional texture
function (ABTF) algorithm proposed by Kautz [16] is a
practical approach to the acquisition and rendering of
materials that exhibit a significant degree of self-shadowing. Although the resulting rendering quality is not as good as the one achieved using full BTFs, it is better than
standard 2D texturing and the acquisition process is relatively simple. The algorithm is based on the assumption
that the material under consideration has the following
properties: first, it is diffuse, i.e., light is reflected equally
in all directions, and second, it is isotropic, i.e., the
appearance does not change if the material sample is
rotated about the surface normal. Using these simplifying
assumptions it is possible to capture a material using very
few photographs. Since the material is diffuse, it is sufficient to capture images only from a single viewing direction (i.e., we can fix the camera position/orientation), and
since the material is isotropic, it is sufficient to sample the
incident light direction only on an arc around the material
sample (as opposed to the whole hemisphere when capturing a full BTF). The set-up we use for material acquisition is depicted in Fig. 4. The material sample is placed
vertically on a table, a camera is pointed orthogonally at
the material and a tripod with a rotating arm is used to
move a light source on an arc around the material. Using
this set-up we capture a few (7–10) images of the material
under varying lighting. A resulting image series is shown in
Fig. 5. These source images are then re-sampled to change
linearly with average intensity and stored as a 3D texture
map (i.e., the average intensity of the slices of the 3D
texture varies linearly in the depth dimension (r-coordinate) and trilinear interpolation can be used to look up a
value in the 3D texture with respect to average intensity).
Due to limitations of current graphics boards we are bound
to a maximum resolution of 512 × 512 pixels for slices of a 3D texture (Kautz calls this the shading map). The re-sampled image stack is shown in Fig. 6. To render with an
ABTF it is first necessary to calculate the intensity of a
point on the model. This intensity value is then used to
perform a lookup into the shading map. While Kautz uses
the Phong lighting model to calculate the intensity values,
we apply PRT instead. Figure 7 compares 2D texture
mapping to the ABTF approach.¹ The top row visualizes
the different slices of the shading map that are used during
rendering (left). The standard 2D texturing uses only a
single image (right). The bottom row shows a close-up of a
car seat with ABTF rendering (left) as compared to 2D
texturing (right). Note that the ABTF approach introduces
more variety into the rendered image and shows some
amount of self-shadowing. It must be noted though, that the
assumptions made by the ABTF approach are not strictly
valid for materials with meso-structure. Materials that cast
shadows onto themselves are not isotropic. Consequently
self-shadows are generally cast into the "wrong" direction.
¹ The screenshots were made using the VRED editor which we currently use for shader development and material application with OpenSG models.


Fig. 6 Re-sampled image stack that we use as 3D texture

Fig. 7 Comparison of ABTF rendering (left) and standard 2D texture
mapping (right)

However, the overall amount of self-shadows is correctly
approximated, which by itself provides the observer with a
better intuition of the material structure.
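A sketch of the shading-map construction and lookup described above: the slice stacking and the depth-linear (nearest in u/v) interpolation mirror what the 3D texture hardware provides, and the intensity driving the lookup would come from the PRT result in our renderer (Kautz uses Phong). Function names and array layout are illustrative:

```python
import numpy as np

def build_shading_map(source_images):
    """Stack re-sampled source images into a 3D texture ('shading map').

    `source_images` is a list of equally sized image arrays whose average
    intensity is assumed to increase linearly from slice to slice (this is
    what the re-sampling step establishes).
    """
    return np.stack(source_images, axis=0)        # shape: (slices, h, w[, 3])

def abtf_lookup(shading_map, u, v, intensity):
    """Look up the ABTF value at texture coordinates (u, v) for a lighting
    intensity in [0, 1], interpolating linearly between the two
    neighbouring slices (the depth part of the GPU's trilinear filtering)."""
    n_slices, h, w = shading_map.shape[:3]
    r = np.clip(intensity, 0.0, 1.0) * (n_slices - 1)
    lo = int(np.floor(r))
    hi = min(lo + 1, n_slices - 1)
    frac = r - np.floor(r)
    x = min(int(u * (w - 1)), w - 1)
    y = min(int(v * (h - 1)), h - 1)
    return (1.0 - frac) * shading_map[lo, y, x] + frac * shading_map[hi, y, x]
```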
3.4 Frame post-processing: adaptive tone mapping
and bloom filter
Our rendering engine implements a full HDR rendering
pipeline, i.e., we use floating point numbers for all calculations. Lighting input is specified via HDR light probe
images, lighting calculations are performed using floating
point calculations in vertex and fragment shaders and the
result is written to a floating point framebuffer object. As
usual for HDR rendering, a tone mapping operator (TMO)
has to be applied to the rendering result to map the HDR
framebuffer content to a low dynamic range display device.
For performance reasons, the three-channel RGB color
values in the framebuffer are first converted to single-channel luminance values. The TMO is then applied to the
luminance values and the result is mapped back to RGB
colors. We follow Reinhard et al. [28] who propose a TMO that adapts to the actual image content. The adaptation is
performed using the log-average luminance as the key (i.e.,
"middle-gray value") of the scene. The log-average
luminance is calculated as
$$\bar{L}_w = \exp\!\left(\frac{1}{N}\sum_{x,y} \log\bigl(\delta + L_w(x, y)\bigr)\right)$$

where $\bar{L}_w$ is the log-average luminance, $L_w(x, y)$ is the world luminance and $\delta$ is a small value to avoid the singularity at black pixels (we use 0.0001). World luminances are then scaled to display luminances $L(x, y)$ via

$$L(x, y) = \frac{a}{\bar{L}_w}\, L_w(x, y)$$
where a is the target to which the key value is mapped.
However, instead of using the log-average luminance as the
key we just take the average luminance for efficiency
reasons. The average luminance of the image is currently
calculated on the CPU. Subsequent mappings are calculated locally on the GPU using framebuffer objects. After
conversion of the framebuffer content to luminance values
and before application of the TMO we insert an additional
bloom filter that qualitatively simulates the cross-talk of the
human eye’s receptors when exposed to very bright light.
To do this, we first find those parts of the luminance image
whose values are above some user defined threshold (since
we do not work with absolute radiance values of calibrated
light probes this value has to be provided by the user). The resulting image defines the bloom sources, on which we
apply a separated Gaussian filter multiple times. Afterwards the filtered image is scaled and added to the original
luminance image. The full post-processing pipeline is
shown in Fig. 8.
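A CPU sketch of this post-processing chain (bloom on thresholded luminance followed by the average-luminance variant of Reinhard's operator); in the renderer these stages run as fragment shaders on framebuffer objects, and the threshold, scale and filter parameters below are illustrative defaults, not the values used in the system:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def postprocess_luminance(lum, key_a=0.18, bloom_threshold=1.0,
                          bloom_scale=0.2, bloom_sigma=4.0, passes=3):
    """Bloom + adaptive tone mapping on an HDR luminance image.

    Bloom: luminance above `bloom_threshold` is blurred (Gaussian filter,
    applied `passes` times) and added back, scaled by `bloom_scale`.
    Tone mapping: world luminance is scaled by key_a / average luminance
    (the plain average is used instead of the log-average, as in the text).
    """
    bloom = np.where(lum > bloom_threshold, lum, 0.0)   # bloom sources
    for _ in range(passes):
        bloom = gaussian_filter(bloom, sigma=bloom_sigma)
    lum_bloomed = lum + bloom_scale * bloom

    avg = lum_bloomed.mean() + 1e-6                     # adaptation key
    display = (key_a / avg) * lum_bloomed               # display luminance
    return np.clip(display, 0.0, 1.0)
```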
Table 1 Benchmarks for the Lotus scene

Number of triangles                        115,000
Number of vertices                         71,000
Number of transformed vertices             130,000
Pre-processing time (preview quality)      25 s
Pre-processing time (medium quality)       125 s
Pre-processing time (high quality)         8 min
Rendering speed                            68 fps

Table 2 Benchmarks for the BMW scene

Number of triangles                        1.0 million
Number of vertices                         770,000
Number of transformed vertices             1.4 million
Pre-processing time (preview quality)      7 min
Pre-processing time (medium quality)       41 min
Pre-processing time (high quality)         165 min
Rendering speed                            50 fps

Fig. 8 Frame post-processing pipeline

Table 3 Benchmarks for the building scene

Number of triangles                        1.5 million
Number of vertices                         700,000
Number of transformed vertices             3.3 million
Pre-processing time (preview quality)      5 min
Pre-processing time (medium quality)       28 min
Pre-processing time (high quality)         106 min
Rendering speed                            25 fps


Fig. 9 Difference in shadow
quality: preview (left), medium
(middle) and high (right)

4 Results
In this section, we present benchmarks for three scenes
rendered using our system. The first model is a Lotus
courtesy of DMI (http://dmi.chez-alice.fr), the second
model is a BMW courtesy of BMW and the third one is a
building which was provided by Page\Park Architects. The
timings for pre-processing and rendering are shown in
Tables 1, 2, and 3.
The computer used for pre-processing was an Intel
Core2Duo 6850 (3 GHz) with 2 GB of RAM running
Windows Vista x64. The machine that was used for rendering was an Intel Core2Quad 3 GHz with 8 GB RAM
and a Geforce 8800GT (1 GB RAM). The rendering was
done using nine coefficients (3 SH bands) for the representation of the environment light and the transfer functions at a display resolution of 1,280 × 1,024 pixels with 2× supersampling antialiasing. We achieve real-time frame rates even for large models at high display resolutions. The transfer coefficients were calculated in the pre-process using 100 visibility samples per vertex for preview
quality, 625 visibility samples for medium quality and
2500 samples for high quality. The difference in shadow
quality is shown in Fig. 9. The square in the top left of each
image shows a close-up of the area highlighted by the
corresponding red rectangle. Note that the close-ups have
been contrast-enhanced to exaggerate the differences for
printing. As can be seen, the low quality shadows are quite
noisy, while the medium quality setting shows a much
smoother transition from shadowed to unshadowed areas.
The difference between the medium quality and high
quality settings is not as visible as the difference between
low quality and medium quality. However, the noise in the
high quality setting is even lower than in the medium
quality setting.
Figure 10 shows the dependency of our rendering
engine on the model size. To simulate models of different
complexity, we have loaded the BMW model multiple
times. As can be seen from the figure, the performance of
the rendering engine is approximately linear in the number
of triangles in the model. However, it must be noted that
the structure of the model (number of transform nodes in the scenegraph) also plays a significant role in the final rendering performance.

Fig. 10 Dependency of our rendering engine on the model size (simulated by loading the BMW model multiple times)

5 Current and future work
The work on the presented MR rendering engine is currently continued in the European research project MAXIMUS (FP7-ICT-1-217039). Our goal there is to further
extend the quality of the rendering by lifting some of the
restrictions of PRT and by integrating the renderer into a
full HDR pipeline (from HDR material acquisition via
HDR rendering to HDR display) by supporting measured
HDR BRDFs and output to an HDR projector. One drawback of the presented rendering approach is the approximation of highly specular effects (i.e., mirror-like
reflections) by environment mapping. Although we argue
that full ray tracing (with color bleeding and soft-shadows)
of complex scenes at high resolutions is not yet possible in
real-time on a single PC, we believe that fast ray tracing
can be used to evaluate a subset of light paths in real-time,
namely the mirror-like reflections. We are currently
working on hybrid rendering techniques that utilize different algorithms for different classes of light paths. Our
approach is to fuse PRT and ray tracing by handling low-frequency light transport with PRT and high-frequency
light transport with ray tracing. This approach is inspired
by Sillion et al. [31], but we leverage the strength of PRT
instead of radiosity. In Fig. 11, a first result is depicted which illustrates the validity of the approach. The left image shows low-frequency lighting and soft-shadows due to PRT, the image in the middle depicts ray traced reflections, and the image on the right shows the composited result which includes soft-shadows as well as correct reflections. Note that this rendering was not done in real-time since a CPU-based ray tracer was used. We expect interactive frame rates from utilizing GPU-based ray tracing. Our second line of future work involves investigating the extension of PRT towards dynamic scenes and local light sources.

Fig. 11 Hybrid rendering. Low-frequency light transport in PRT (left), high-frequency reflections in ray tracing (middle) and composed result (right)


6 Conclusions
We have shown how PRT, IBL and ABTFs can be adapted
and integrated into a rendering system that is suitable for
(mobile) MR rendering. The utilization of image-based
techniques for lighting and material acquisition allows for
consistent integration of virtual objects into real-world
environments and by utilizing PRT the rendering with
global illumination effects can be performed in real-time
on commodity hardware.
Acknowledgments The research leading to these results has
received funding from the European Community’s Sixth and Seventh
Framework Programme (under grant agreement FP6-IST-2-004785
and FP7-ICT-1-217039).

References


1. ARIS Project: European research project ARIS, augmented
reality image synthesis through illumination reconstruction and
its integration in interactive and shared mobile ar-systems for e(motion)-commerce applications. http://aris-ist.intranet.gr (2004)
2. Blinn, J.F.: Simulation of wrinkled surfaces. In: SIGGRAPH ’78:
Proceedings of the 5th Annual Conference on Computer Graphics
and Interactive Techniques, pp. 286–292. ACM Press, New York
(1978). doi:http://doi.acm.org/10.1145/800248.507101
3. Blinn, J.F., Newell, M.E.: Texture and reflection in computer
generated images. In: SIGGRAPH ’76: Proceedings of the 3rd
Annual Conference on Computer Graphics and Interactive
Techniques, pp. 266–266. ACM Press, New York (1976). doi:
http://doi.acm.org/10.1145/563274.563322
4. Caustic Graphics.: (2009). http://caustic.com/docs_faq.php
5. Cook, R.L., Porter, T., Carpenter, L.: Distributed ray tracing. In: SIGGRAPH '84: Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques, pp. 137–145. ACM Press, New York (1984). doi:http://doi.acm.org/10.1145/800031.808590
6. Dana, K.J., Nayar, S.K., Ginneken, B.V., Koenderink, J.J.: Reflectance and texture of real-world surfaces. In: CVPR '97: Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition (CVPR '97), p. 151. IEEE Computer Society, Washington, DC (1997)
7. Debevec, P.: Rendering synthetic objects into real scenes: bridging traditional and image-based graphics with global illumination and high dynamic range photography. In: SIGGRAPH '98: Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, pp. 189–198. ACM Press, New York (1998). doi:http://doi.acm.org/10.1145/280814.280864
8. Drettakis, G., Sillion, F.X.: Interactive update of global illumination using a line-space hierarchy. In: SIGGRAPH '97: Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, pp. 57–64. ACM Press, New York (1997). doi:http://doi.acm.org/10.1145/258734.258772
9. Fiat Elasis: (2009). http://www.elasis.it
10. Franke, T., Jung, Y.: Precomputed radiance transfer for x3d based mixed reality applications. In: Web3D '08: Proceedings of the 13th International Symposium on 3D Web Technology, pp. 7–10. ACM Press, New York (2008). doi:http://doi.acm.org/10.1145/1394209.1394213
11. Goral, C.M., Torrance, K.E., Greenberg, D.P., Battaile, B.: Modeling the interaction of light between diffuse surfaces. In: SIGGRAPH '84: Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques, pp. 213–222. ACM Press, New York (1984). doi:http://doi.acm.org/10.1145/800031.808601
12. Hanrahan, P., Salzman, D., Aupperle, L.: A rapid hierarchical radiosity algorithm. In: SIGGRAPH '91: Proceedings of the 18th Annual Conference on Computer Graphics and Interactive Techniques, pp. 197–206. ACM Press, New York (1991). doi:http://doi.acm.org/10.1145/122718.122740
13. HDRShop: (2009). http://www.hdrshop.com
14. Jensen, H.W.: Realistic image synthesis using photon mapping. A. K. Peters Ltd, Natick (2001)
15. Kajiya, J.T.: The rendering equation. SIGGRAPH Comput. Graph. 20(4), 143–150 (1986). doi:http://doi.acm.org/10.1145/15886.15902
16. Kautz, J.: Approximate bidirectional texture functions. In: Pharr, M. (ed.) GPU Gems 2, pp. 177–187. Addison-Wesley, Reading (2005)
17. Kautz, J., McCool, M.D.: Approximation of glossy reflection with prefiltered environment maps. In: Graphics Interface, pp. 119–126 (2000)
18. Keller, A.: Instant radiosity. In: SIGGRAPH '97: Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, pp. 49–56. ACM Press, New York (1997). doi:http://doi.acm.org/10.1145/258734.258769
19. Klinker, G., Dutoit, A.H., Bauer, M., Bayer, J., Novak, V., Matzke, D.: Fata morgana—a presentation system for product design. In: ISMAR '02: Proceedings of the 1st International Symposium on Mixed and Augmented Reality, p. 76. IEEE Computer Society, Washington, DC (2002)

20. Lehtinen, J., Kautz, J.: Matrix radiance transfer. In: I3D ’03:
Proceedings of the 2003 Symposium on Interactive 3D Graphics,
pp. 59–64. ACM Press, New York (2003). doi:http://doi.acm.org/
10.1145/641480.641495
21. Müller, G., Meseth, J., Klein, R.: Fast environmental lighting for
local-pca encoded btfs. In: CGI ’04: Proceedings of the Computer
Graphics International, pp. 198–205. IEEE Computer Society,
Washington, DC (2004). doi:http://dx.doi.org/10.1109/CGI.2004.49
22. Nicodemus, F.E., Richmond, J.C., Hsia, J.J., Ginsberg, I.W.,
Limperis, T.: Geometrical considerations and nomenclature for
reflectance. Final Report National Bureau of Standards, Washington, DC (1977)
23. O’Malley, S.M.: A simple, effective system for automated capture of high dynamic range images. In: ICVS ’06: Proceedings of
the Fourth IEEE International Conference on Computer Vision
Systems, p. 15. IEEE Computer Society, Washington, DC (2006).
doi:http://dx.doi.org/10.1109/ICVS.2006.8
24. OpenGL Architecture Review Board, Shreiner, D., Woo, M.,
Neider, J., Davis, T.: OpenGL(R) Programming Guide: The
Official Guide to Learning OpenGL(R), Version 2.1. Addison-Wesley, Reading (2007)
25. OpenSG.: (2009). http://opensg.vrsource.org
26. Page\Park Architects: (2009). http://www.pagepark.co.uk
27. pfstools (2009): http://pfstools.sourceforge.net
28. Reinhard, E., Stark, M., Shirley, P., Ferwerda, J.: Photographic
tone reproduction for digital images. In: SIGGRAPH ’02: Proceedings of the 29th Annual Conference on Computer Graphics
and Interactive Techniques, pp. 267–276. ACM Press, New York
(2002). doi:http://doi.acm.org/10.1145/566570.566575
29. Rost, R.J.: OpenGL(R) Shading Language. Addison-Wesley,
Redwood City (2004)
30. Shirley, P., Morley, R.K.: Realistic Ray Tracing. A. K. Peters
Ltd, Natick (2003)

31. Sillion, F.X., Arvo, J.R., Westin, S.H., Greenberg, D.P.: A global
illumination solution for general reflectance distributions. In:
SIGGRAPH ’91: Proceedings of the 18th Annual Conference on
Computer Graphics and Interactive Techniques, pp. 187–196.
ACM Press, New York (1991). doi:http://doi.acm.org/10.1145/
122718.122739
32. Sloan, P.P., Kautz, J., Snyder, J.: Precomputed radiance transfer
for real-time rendering in dynamic, low-frequency lighting
environments. In: SIGGRAPH ’02: Proceedings of the 29th
Annual Conference on Computer Graphics and Interactive
Techniques, pp. 527–536. ACM Press, New York (2002). doi:
http://doi.acm.org/10.1145/566570.566612
33. Spheron: (2009). http://www.spheron.com
34. Wald, I., Dietrich, A., Benthin, C., Efremov, A., Dahmen, T.,
Günther, J., Havran, V., Seidel, H.P., Slusallek, P.: Applying ray
tracing for virtual reality and industrial design. In: Proceedings of
the 2006 IEEE Symposium on Interactive Ray Tracing, pp. 177–
185 (2006)
35. Wallner, G.: GPU radiosity for triangular meshes with support of
normal mapping and arbitrary light distributions. J. WSCG 16(1–
3), 1–8 (2008)
36. Whitted, T.: An improved illumination model for shaded display.
Commun. ACM 23(6), 343–349 (1980). doi:http://doi.acm.org/
10.1145/358876.358882
37. Williams, L.: Casting curved shadows on curved surfaces. SIGGRAPH Comput. Graph. 12(3), 270–274 (1978). doi:http://doi.
acm.org/10.1145/965139.807402
38. Zhou, K., Hou, Q., Wang, R., Guo, B.: Real-time kd-tree construction on graphics hardware. In: SIGGRAPH Asia’08: ACM
SIGGRAPH Asia 2008 papers, pp. 1–11. ACM Press, New York
(2008). doi:http://doi.acm.org/10.1145/1457515.1409079


Documents similaires


Fichier PDF art 3a10 1007 2fs11554 009 0137 x
Fichier PDF hdr vdp article original 2005 pdf
Fichier PDF j 1467 8659 2010 01841 x
Fichier PDF volumetric rendering raymarching
Fichier PDF breakdown sheet
Fichier PDF u4s75ou


Sur le même sujet..