Realtime Ray Tracing and Interactive Global Illumination - Scientific ...

Chapter 2: An Introduction to Ray Tracing

clearly separated the shading process from the actual ray tracing, and allowed the shading process to be described in a convenient, easy-to-use high-level language. Shaders could be written independently of the application and of other shaders, and the effects of different shaders could easily be combined in a plug-and-play manner.

2.2.4.1 The Shader Concept

Originally, ray tracers usually supported only a single kind of “shader” that was attached to all objects at the same time. This function for computing the “color” of a ray typically consisted of a physically motivated lighting/material model that was the same for all primitives, and that could be parameterized by material properties or textures.

Today, however, ray tracers usually follow a much more general concept in which each primitive may have its own function for “shading” a ray (i.e., for computing its color). This function can be completely independent of all other surfaces, and does not have to obey any physical properties at all. All that is required to implement this concept is that each object has one shader, and that this shader alone is responsible for computing the color of each ray hitting that object. This concept allows for a “plug and play” mechanism in which different shaders can cooperate in rendering an image, without any shader having to know anything about the other ones.

For example, a scene might contain a sphere with a specular metal shader, as well as another object with a shader simulating wood with procedural methods. To shade a ray hitting the metal sphere, the metal shader simply casts a ray in the reflection direction, and recursively calls the shader of the respective hit point to compute that reflected ray’s color. This way, the wooden object can be reflected in the metal sphere without the metal shader having to know anything about a “wood” shader at all.
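The metal/wood cooperation described above can be sketched in a few lines. The following is a minimal toy model, not code from any actual ray tracer: all names (trace, Plane, MetalShader, WoodShader) are hypothetical, the geometry is reduced to two axis-aligned walls, and the “wood” shader just returns a constant color in place of a real procedural texture. The point is the dispatch structure: the tracer only finds the closest hit and delegates to that object’s shader, and the metal shader obtains the reflected color by calling the tracer recursively.

```python
EPS = 1e-6
BACKGROUND = (0.0, 0.0, 0.0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(d, n):
    # mirror direction d about normal n:  d - 2 (d . n) n
    k = 2.0 * dot(d, n)
    return tuple(di - k * ni for di, ni in zip(d, n))

class Plane:
    """Toy geometry: a plane z = const with a fixed normal and its own shader."""
    def __init__(self, z, normal, shader):
        self.z, self.normal, self.shader = z, normal, shader

    def intersect(self, org, direction):
        if abs(direction[2]) < EPS:
            return None
        t = (self.z - org[2]) / direction[2]
        return t if t > EPS else None

class WoodShader:
    # stand-in for a procedural wood shader; just returns a brownish constant
    def shade(self, org, direction, hit, normal, scene, depth):
        return (0.4, 0.25, 0.1)

class MetalShader:
    # casts a reflection ray and asks whatever it hits for that ray's color;
    # it never needs to know which shaders the other objects carry
    def shade(self, org, direction, hit, normal, scene, depth):
        return trace(hit, reflect(direction, normal), scene, depth + 1)

def trace(org, direction, scene, depth=0):
    if depth > 4:                      # cap recursion (mirror facing mirror)
        return BACKGROUND
    best_t, best_obj = None, None
    for obj in scene:                  # closest-hit search
        t = obj.intersect(org, direction)
        if t is not None and (best_t is None or t < best_t):
            best_t, best_obj = t, obj
    if best_obj is None:
        return BACKGROUND
    hit = tuple(o + best_t * d for o, d in zip(org, direction))
    # the only material decision the tracer makes: delegate to the shader
    return best_obj.shader.shade(org, direction, hit, best_obj.normal,
                                 scene, depth)

scene = [
    Plane(-1.0, (0.0, 0.0, 1.0), WoodShader()),    # wooden wall behind camera
    Plane(+1.0, (0.0, 0.0, -1.0), MetalShader()),  # mirror wall in front
]

color = trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene)
print(color)  # the mirror bounces the ray back onto the wooden wall
```

Swapping in a different material is purely plug-and-play: replacing `WoodShader` by any other object with a `shade` method changes what the mirror reflects without touching `trace` or `MetalShader`.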

This kind of abstraction can also be applied to light sources, the camera, or the environment. Each of these concepts is described by a separate kind of shader, resulting in camera, surface, light, environment, volume, and pixel shaders (also see Figure 2.3).

Camera Shaders: Camera shaders are responsible for generating and casting the primary rays. Typically, ray tracers use a perspective pinhole camera (see e.g. [Glassner89]), but other effects like fish-eye lenses or realistic lens systems are easy to implement, too. More importantly, it is also possible to compute advanced effects like motion blur and depth of field by using camera shaders that simulate real cameras with finite lens aperture and
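A pinhole camera shader of the kind mentioned above can be sketched as follows. The interface (`PinholeCamera`, `primary_ray`) is hypothetical, not taken from any particular system: the camera shader alone decides how a pixel coordinate maps to a primary ray, so a fish-eye mapping or a finite-aperture lens model could be substituted without changing the rest of the tracer.

```python
import math

def normalize(v):
    l = math.sqrt(sum(x * x for x in v))
    return tuple(x / l for x in v)

class PinholeCamera:
    """Sketch of a camera shader: maps pixel coordinates to primary rays
    through a single point (the pinhole), looking down the +z axis."""
    def __init__(self, origin, fov_deg, width, height):
        self.origin = origin
        self.width, self.height = width, height
        # half-extent of the image plane at unit distance from the pinhole
        self.tan_half = math.tan(math.radians(fov_deg) / 2.0)

    def primary_ray(self, px, py):
        # map the pixel centre to [-1, 1] screen coordinates, y pointing up
        sx = (2.0 * (px + 0.5) / self.width - 1.0) * self.tan_half
        sy = (1.0 - 2.0 * (py + 0.5) / self.height) * self.tan_half
        return self.origin, normalize((sx, sy, 1.0))

cam = PinholeCamera(origin=(0.0, 0.0, 0.0), fov_deg=90.0, width=4, height=4)
org, direction = cam.primary_ray(1, 1)
```

A depth-of-field camera shader would differ only inside `primary_ray`: jitter the ray origin over a lens disc and aim each ray at the corresponding point on the focal plane, so that only objects near that plane appear sharp.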
