
found that it is extremely difficult to identify useful transfer functions in this 4D space. The importance function fits more naturally the workflow people follow when visualizing time-dependent data: features of interest are extracted and displayed just as in existing packages for 3D visualization, and only then does the user adjust the parameters to explore the time dimension. Currently, we enforce that the importance function maps to opacity or color saturation. While any such mapping could in principle be defined, the limited choice in this domain appears more useful than the complex mappings available through the more general 4D representation.
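To make the restricted mapping concrete, the following C++ sketch shows one way an importance value in [0, 1] could modulate opacity on top of an ordinary transfer-function lookup. The `classify()` stub and the multiplicative rule are illustrative assumptions, not the exact mapping used in the system.

```cpp
#include <algorithm>

struct RGBA { float r, g, b, a; };

// Stand-in 1D transfer function: a simple grey ramp, purely illustrative.
RGBA classify(float scalar)
{
    float a = std::clamp(scalar, 0.0f, 1.0f);
    return {a, a, a, a};
}

// The importance of the current time step scales the opacity produced by the
// ordinary transfer function; the spatial classification itself is untouched.
RGBA classifyWithImportance(float scalar, float importance)
{
    RGBA c = classify(scalar);
    c.a *= std::clamp(importance, 0.0f, 1.0f);
    return c;
}
```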

4 IMPLEMENTATION

As mentioned earlier, a problem with existing chronophotographic visualization methods is that they require reconstruction of the entire volume on each change of the window function. This effectively makes it impossible to explore a data set's temporal evolution interactively. To circumvent this drawback, we propose a different approach that reconstructs the volume on the fly during the ray cast traversal. This approach is based on a novel volume rendering mode integrated into the out-of-core volume rendering system ImageVis3D [8].
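The following C++ sketch illustrates the general idea of rebuilding a ray sample from the individual time steps at lookup time, so that a change of the window function needs no precomputation. The `sampleTimeStep` and `window` callbacks and the window-weighted maximum are assumptions chosen for illustration, not the system's actual reconstruction rule.

```cpp
#include <algorithm>
#include <functional>

// Hedged sketch of on-the-fly temporal reconstruction during ray casting:
// each sample is rebuilt from the time steps covered by the current window.
float reconstructSample(const std::function<float(int)>& sampleTimeStep,
                        const std::function<float(int)>& window,
                        int tBegin, int tEnd)
{
    float value = 0.0f;
    for (int t = tBegin; t < tEnd; ++t)
        value = std::max(value, window(t) * sampleTimeStep(t));
    return value;  // window-weighted maximum over the selected time steps
}
```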

4.1. Bricking

The basic idea for visualizing large volumes interactively on commodity hardware is to build a bricked level-of-detail (LoD) hierarchy of the data, such as an octree. To do so, the volume is first cut into smaller bricks, each representing a cube-shaped sub-volume of the original data set. To visualize a data set that is too large to fit into memory, the renderer traverses the data brick by brick, submitting each brick to the graphics processing unit (GPU) and compositing the per-brick visualizations into the final image. This effectively decouples the size of the data set from the available memory and makes rendering of arbitrarily sized data possible.
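A minimal CPU-side sketch of the bricking step, assuming a dense 8-bit volume in memory; all names and the linear voxel layout are illustrative, not ImageVis3D's actual data structures.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Cut a dense volume into cube-shaped bricks of edge length 'brickSize' so a
// renderer can upload and composite one brick at a time.
struct Brick {
    int ox, oy, oz;               // brick origin in voxel coordinates
    int sx, sy, sz;               // brick extent (edge bricks may be smaller)
    std::vector<uint8_t> voxels;  // copied sub-volume data
};

std::vector<Brick> brickVolume(const std::vector<uint8_t>& volume,
                               int dimX, int dimY, int dimZ, int brickSize)
{
    std::vector<Brick> bricks;
    for (int bz = 0; bz < dimZ; bz += brickSize)
        for (int by = 0; by < dimY; by += brickSize)
            for (int bx = 0; bx < dimX; bx += brickSize) {
                Brick b{bx, by, bz,
                        std::min(brickSize, dimX - bx),
                        std::min(brickSize, dimY - by),
                        std::min(brickSize, dimZ - bz),
                        {}};
                b.voxels.reserve(std::size_t(b.sx) * b.sy * b.sz);
                for (int z = 0; z < b.sz; ++z)
                    for (int y = 0; y < b.sy; ++y)
                        for (int x = 0; x < b.sx; ++x)
                            b.voxels.push_back(
                                volume[(std::size_t(bz + z) * dimY + (by + y)) * dimX
                                       + (bx + x)]);
                bricks.push_back(std::move(b));
            }
    return bricks;
}
```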

By itself this does not necessarily yield interactive performance, as a large number of bricks may have to be traversed. To circumvent this issue the volume is stored at multiple resolutions: a number of down-sampled versions of the volume are generated in a pre-processing step. At run time the renderer first computes the most appropriate resolution based on the size of the volume on screen. Next, it traverses only the bricks of that down-sampled resolution, considering only those bricks that lie in the view frustum. With this approach the amount of data required to render a frame is effectively decoupled from the data set size and depends only on the screen resolution.
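A rough sketch of such a resolution selection, assuming level 0 is the full resolution and every coarser level halves the voxel count per axis; the inputs and the rounding rule are assumptions for illustration, not the renderer's actual heuristic.

```cpp
#include <algorithm>
#include <cmath>

// Pick the LoD whose projected voxel size roughly matches one screen pixel.
int selectLod(float projectedVoxels,   // volume extent on screen, in voxels
              float projectedPixels,   // volume extent on screen, in pixels
              int coarsestLod)
{
    // ratio > 1 means more voxels than pixels, so a coarser level suffices
    float ratio = projectedVoxels / std::max(projectedPixels, 1.0f);
    int lod = int(std::floor(std::log2(std::max(ratio, 1.0f))));
    return std::clamp(lod, 0, coarsestLod);
}
```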

Our novel traversal is based on the same data structure as described above, but instead of pre-determining a resolution and then traversing all visible bricks, we generate the list of required bricks during the ray cast traversal. This allows us to discard many more bricks than view frustum culling alone would permit.
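The following conceptual CPU-side sketch, not the actual GPU kernel, illustrates why this discards more bricks: a ray records only the bricks it really enters, and once its opacity saturates the remaining, occluded bricks are never requested. The helpers and the 0.95 threshold are assumptions.

```cpp
#include <array>
#include <set>
#include <vector>

using BrickId = std::array<int, 4>;  // {lod, x, y, z}

// March one ray brick-by-brick in front-to-back order, collecting the bricks
// it actually needs; early ray termination skips everything behind opaque material.
void traverseRay(const std::vector<BrickId>& bricksAlongRay,
                 float (*compositeBrick)(const BrickId&),  // opacity contribution
                 std::set<BrickId>& requested,
                 float& rayOpacity)
{
    for (const BrickId& id : bricksAlongRay) {
        if (rayOpacity >= 0.95f) break;       // early ray termination
        requested.insert(id);                 // brick is actually needed this frame
        rayOpacity += (1.0f - rayOpacity) * compositeBrick(id);
    }
}
```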

4.2. GPU-Based LoD Traversal

The idea combines a GPU-based multi-grid traversal algorithm, brick storage in a 3D texture atlas, and a hashing function implemented on the GPU.
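As an illustration of the hashing component, the sketch below shows the kind of hash a shader might use to insert a brick ID into a fixed-size GPU hash table that reports required bricks back to the host. The mixing constants and the table size are arbitrary assumptions, not the system's actual hash.

```cpp
#include <cstdint>

// Map a (lod, x, y, z) brick ID to a slot in a fixed-size request table.
uint32_t hashBrick(uint32_t lod, uint32_t x, uint32_t y, uint32_t z,
                   uint32_t tableSize)
{
    uint32_t h = lod;
    h = h * 73856093u ^ x;
    h = h * 19349663u ^ y;
    h = h * 83492791u ^ z;
    return h % tableSize;  // collisions would be resolved by probing
}
```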
