osgVolume Rendering

Phase 1 — Project Proposal

VolView in CAVE

Project Background

Rendering and interacting with volumetric data has been limited in the traditional desktop environment because of the huge data size and the nature of volumetric data.

The CAVE, with its huge display real estate, gives us the capability to build an application that makes it easier for people to perceive volume data and to cooperate with each other in understanding it.

A traditional volume viewer provides four views at a time:
1. Top view;
2. Profile;
3. Front view;
4. Interactive view
[Image: the VolView volume viewer]

This design aims to provide a working environment for a single user. The limitation comes from the limited perspectives it offers, and there is no easy way to support multi-user cooperation; yet in volume data analysis, cooperation is a practical way to handle the sheer amount of information in the data.

Objective

This project aims to develop an application that renders volumetric data inside the CAVE.

To take full advantage of the huge display space the CAVE provides, I will try to show multiple perspectives of the same volumetric data simultaneously.

This application consists of two parts:

The volume data — showing the whole volume data inside the CAVE.

Displaying the whole volume data set would be helpful in a cooperation scenario, where everyone might need the bigger picture of the data while concentrating on their own work.

The small multiples — different perspectives of the volume data.

These small multiples should be interactive, so each participant has their own place to work and manipulate the data without interfering with their colleagues.

In this environment, the small multiples become more important, because they are the main place for people to work, while the main view serves mostly as a platform for keeping track of others' progress. The main view would also encourage individuals in the CAVE to work together and help each other.

Implementation

Environment: CAVE
Platform: OmegaLib + osgVolume

The main challenge in this project will be integrating osgVolume into omegaLib. Although omegaLib does include osgVolume in its package, there is no corresponding module integrated into its Python package, so exposing one will be the first phase of my work.
Then I need to design the interaction for the small multiples and a good way to integrate them with the main view.

One last thing: I need to come up with a new name for this application. Stealing the name from VolView makes me feel guilty.


 

Phase 2 — Project Implementation

[Image: coverPic]

[Image: volDraw]

Application Structure

There are two major libraries used to build the application:

omegaLib
openSceneGraph

Tech jargon:

1. How does it work?

I'm using osgVolume to handle the volume rendering part. Some interesting things I did with it (a minimal setup sketch follows this list):

four available shading models: standard, light, isosurface, and maximum intensity projection
real-time adjustable alpha function, sample density, and transparency values
real-time hacked clipping
volume rotation and scaling
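
For readers who have not used osgVolume before, here is a minimal sketch of the kind of setup I believe sits behind the features above; the file name and parameter values are placeholders, not the project's actual code.

    #include <string>
    #include <osg/ref_ptr>
    #include <osgDB/ReadFile>
    #include <osgVolume/Volume>
    #include <osgVolume/VolumeTile>
    #include <osgVolume/Layer>
    #include <osgVolume/Property>
    #include <osgVolume/RayTracedTechnique>

    osg::ref_ptr<osgVolume::Volume> buildVolume(const std::string& file)
    {
        // Any osgDB plugin that yields a 3D osg::Image works here (e.g. the dicom plugin).
        osg::ref_ptr<osg::Image> image = osgDB::readImageFile(file);
        if (!image) return 0;

        osg::ref_ptr<osgVolume::ImageLayer> layer = new osgVolume::ImageLayer(image.get());
        layer->rescaleToZeroToOneRange();

        // The run-time adjustable values live in properties attached to the layer.
        osgVolume::CompositeProperty* cp = new osgVolume::CompositeProperty;
        cp->addProperty(new osgVolume::AlphaFuncProperty(0.02f));      // alpha function threshold
        cp->addProperty(new osgVolume::SampleDensityProperty(0.005f)); // ray sampling step
        cp->addProperty(new osgVolume::TransparencyProperty(1.0f));    // overall transparency
        layer->addProperty(cp);

        osg::ref_ptr<osgVolume::VolumeTile> tile = new osgVolume::VolumeTile;
        tile->setLayer(layer.get());
        tile->setVolumeTechnique(new osgVolume::RayTracedTechnique); // GPU ray casting

        osg::ref_ptr<osgVolume::Volume> volume = new osgVolume::Volume;
        volume->addChild(tile.get());
        return volume;
    }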

At the same time, omegaLib handles all of the UI and events:

a sceneManager that draws the osgVolume content together with omegaLib's cyclops module
handling all user interaction with the wand, including passing clipping commands
omegaToolkit handling the user menu
a widget displaying the currently active transfer function
setting and updating the transfer function passed to osgVolume
small multiples displaying individual slices of the CT data
a coordinate symbol indicating the current rotation of the volume

The overall structure of the scene graph in this app looks like this:
[Diagram: scene graph structure]
The reason for hacking the osgVolume node into Cyclops::SceneManager's root node, instead of using an omegaOsg::OsgModule, is that the OsgModule would write its own current root node into the root node of the whole scene graph, and the sceneManager would do the same thing. This would cause a conflict, and only one part of the scene could be shown on the screen.
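
To make the idea concrete, here is a sketch only of how I read that hack; the accessor for the Cyclops OSG root is an assumption on my part (declared but not shown), and the real application may reach that node differently.

    #include <osg/Group>
    #include <osg/Node>

    // Hypothetical helper: returns Cyclops::SceneManager's underlying OSG root node.
    osg::Group* cyclopsOsgRoot();

    void attachVolume(osg::Node* volumeNode)
    {
        // Hang the wrapped osgVolume node directly under the existing Cyclops root,
        // instead of registering a second omegaOsg::OsgModule that would fight with
        // the sceneManager over the scene graph's root node.
        cyclopsOsgRoot()->addChild(volumeNode);
    }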

2. Recap with even more nonsense.

How does the osgVolume part work?

Basically I wrapped an osg::Node containing the osgVolume part, which handles everything within itself, and I exposed several interfaces to Python that make it possible to exchange data between the Python code and the myvolume module. Here's a list of all the interfaces I exposed to Python:
[Image: volumeheader, the list of interfaces exposed to Python]
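
The screenshot above is the authoritative list; purely for illustration, a wrapper of this kind might declare roughly the following. The class and method names below are hypothetical, reconstructed from the features described in this write-up, not copied from the real myvolume header.

    #include <string>
    #include <osg/Group>
    #include <osg/Vec4>

    // Hypothetical shape of the wrapper node exposed to Python.
    class MyVolumeNode : public osg::Group
    {
    public:
        bool loadVolume(const std::string& file);      // build layer/tile under this group
        void setShadingModel(int index);               // 0=standard 1=light 2=mip 3=isosurface
        void setAlphaFunc(float value);                // alpha function threshold
        void setSampleDensity(float value);            // ray sampling step
        void setTransparency(float value);             // overall transparency
        void setTransferFunctionColor(float value, const osg::Vec4& color); // edit a transfer function stop
        void clip(int axis, float fraction);           // the locator-based clipping hack
    };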

So osgVolume?

osgVolume is a very nice tool to use. However, the lack of several pieces of functionality makes it way less competitive compared with other volume rendering tools like VTK and Open Inventor:
Pros:
-Very nice shading models
-Useful tools for interaction and statistics monitoring from the OSG library
-Wide range of input file formats supported (although this requires compiling the osg-plugins, which is a pain)
-Rich rendering feature options
Cons:
-Very poor documentation, if there is any at all. Not cool, bro
-No fully functional clipping implemented. (OK, I know I have clipping kind of running in my app, but it's just a hack: I'm manipulating the _locator object, which acts as a bounding box of sorts, but I cannot apply rotation or translation to it. Come on, I need a clipping plane! See the sketch below.)
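
For the curious, here is my own sketch of what such a locator hack can boil down to; it assumes the tile owns a Locator separate from the layer's, and the axis choice and names are placeholders rather than the application's real code.

    #include <osg/Matrixd>
    #include <osgVolume/VolumeTile>
    #include <osgVolume/Locator>

    // Keep only a fraction (0,1] of the volume along the local z axis. The layer keeps
    // its original locator, so the texture mapping stays fixed while the tile's bounding
    // box shrinks, which reads as an axis-aligned clip on screen.
    void clipAlongZ(osgVolume::VolumeTile* tile, const osg::Matrixd& fullTransform, double fraction)
    {
        osg::Matrixd clipped = osg::Matrixd::scale(1.0, 1.0, fraction) * fullTransform;
        tile->getLocator()->setTransform(clipped);
        tile->setDirty(true); // ask the volume technique to rebuild with the new extent
    }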

One last thing for those who are really interested: the volume technique I'm using in my application is the ray-traced technique; osgVolume also provides a fixed-function technique. Well, if that means anything to you.
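
Picking the technique is a single call on the VolumeTile (tile as in the first sketch above):

    tile->setVolumeTechnique(new osgVolume::RayTracedTechnique);        // GPU ray casting, used here
    // tile->setVolumeTechnique(new osgVolume::FixedFunctionTechnique); // slice-based fallback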

Application Features

Four shading models (from top left to bottom right: standard, light, maximum intensity projection, and isosurface):

[Images: standard, light, mip, isosurface]

You would notice that I used an identical transfer function for all of them, but for the isosurface model I used a different alphaFunc to get a more obvious view.
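
Under assumptions similar to the first sketch (layer from that sketch, and transferFunction an osg::TransferFunction1D set up elsewhere), the four variants can be expressed the way OSG's own osgvolume example does it: one SwitchProperty holding a CompositeProperty per shading model, with the isosurface variant carrying its own, larger threshold.

    osgVolume::SwitchProperty* shading = new osgVolume::SwitchProperty;

    osgVolume::AlphaFuncProperty* alpha  = new osgVolume::AlphaFuncProperty(0.02f);
    osgVolume::SampleDensityProperty* sd = new osgVolume::SampleDensityProperty(0.005f);
    osgVolume::TransparencyProperty* tp  = new osgVolume::TransparencyProperty(1.0f);
    osgVolume::TransferFunctionProperty* tf = new osgVolume::TransferFunctionProperty(transferFunction);

    {   // 0: standard
        osgVolume::CompositeProperty* cp = new osgVolume::CompositeProperty;
        cp->addProperty(alpha); cp->addProperty(sd); cp->addProperty(tp); cp->addProperty(tf);
        shading->addProperty(cp);
    }
    {   // 1: light
        osgVolume::CompositeProperty* cp = new osgVolume::CompositeProperty;
        cp->addProperty(alpha); cp->addProperty(sd); cp->addProperty(tp); cp->addProperty(tf);
        cp->addProperty(new osgVolume::LightingProperty);
        shading->addProperty(cp);
    }
    {   // 2: maximum intensity projection
        osgVolume::CompositeProperty* cp = new osgVolume::CompositeProperty;
        cp->addProperty(alpha); cp->addProperty(sd); cp->addProperty(tp); cp->addProperty(tf);
        cp->addProperty(new osgVolume::MaximumIntensityProjectionProperty);
        shading->addProperty(cp);
    }
    {   // 3: isosurface, with its own higher threshold instead of the shared alphaFunc
        osgVolume::CompositeProperty* cp = new osgVolume::CompositeProperty;
        cp->addProperty(sd); cp->addProperty(tp); cp->addProperty(tf);
        cp->addProperty(new osgVolume::IsoSurfaceProperty(0.15f));
        shading->addProperty(cp);
    }
    shading->setActiveProperty(0); // switch at run time to change the shading model
    layer->addProperty(shading);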

Three pictures show the difference in alphaFunc (from left to right, then bottom left, the alphaFunc value decreases):

[Images: standard_alphaFunc_comp, standard_alphaFunc, standard_alphaFunc_comp2]

Pictures showing the different effects of the transfer function (light, light, maximum intensity projection, standard):

[Images: tf0, tf1, tf1_mip, tf1_standard]

I kept the alphaFunc unchanged; by changing the transfer function, the new color scheme shows up in the visualization.
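
A sketch of how such an update can look, assuming the active TransferFunctionProperty was created around an osg::TransferFunction1D that is edited in place (its backing image is what the ray-traced shader samples, so editing the stops should recolor the volume without rebuilding the tile). The stop values below are placeholders, not the app's real transfer functions.

    #include <osg/Vec4>
    #include <osg/TransferFunction>

    void applyGrayToRedRamp(osg::TransferFunction1D* tf)
    {
        tf->clear();                                            // reset the existing stops
        tf->setColor(0.0f, osg::Vec4(0.0f, 0.0f, 0.0f, 0.0f));  // low intensities fully transparent
        tf->setColor(0.5f, osg::Vec4(0.7f, 0.7f, 0.7f, 0.3f));  // mid range semi-transparent gray
        tf->setColor(1.0f, osg::Vec4(1.0f, 0.2f, 0.2f, 1.0f));  // high intensities opaque red
    }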

Pictures showing volume clipping:
[Images: clip0, clip1, clip2, clip3]

You can see how the volume gets clipped in two directions independently.
How to:
The clipping is basically controlled like freeFly: by pressing a button you can move the wand freely in space, and according to your movement the volume gets clipped.

Pictures showing different sample densities (the right one has a relatively lower sample density):

[Images: original, original_sampleDensity]
Clearly there are more zigzag artifacts on the volume on the right.

Application Running in the CAVE

[Photos: pic1, pic2, pic3, pic4]

Application Installation

The source code of this application contains three components:
1. omegaLib is the library that everything is built upon. You need to download and build it on your computer first.
2. The program source contains two components:

i. C++ code
CMakeLists.txt, osgvolume.h and osgvolume.cpp are the source files for the Python module. You need to use CMake to generate the project and compile it on your computer. After that, you should move the resulting pyd module to a directory where omegaLib's Python (or the Python installed on your own machine) can locate it.
ii. Python
cave.py, tomography.py and tf.py are the Python scripts that should be loaded by orun to launch the application.
To be more specific, you should run orun -s cave.py

Future Work

Possible future work includes:

GUI

During testing, it became clear that a better way to configure the transfer function is really needed in this application. It might be easy enough to get a GUI via HTML once the relevant module is included in omegaLib.

More content

After moving the program into the CAVE, it appears that the huge space of the CAVE has not been fully utilized. There should be something like multiple volume renderings or more statistics augmenting the walls.

Interactive smallMultiples

Currently the small multiples have not been put to real use. Instead of just displaying a slice of the image, it might make sense to interact with the small multiples as well as with the volume data itself. This would be tricky to do and would require rewriting the myvolume source to expose more interfaces for such interaction.