IEEE Visualization 2004 Tutorial
Interactive Texture-Based Flow Visualization

2.5D

 
Abstract:

Interactive texture-based flow visualization has become an active field of research in the last three or four years. Recent progress in this field has led to efficient vector field visualization methods and, in particular, to improved techniques for time-dependent data. This tutorial covers approaches for vector fields given on 2D planes, on surfaces, and within 3D volumes. Both the theoretical background and the GPU-oriented implementations of many of these techniques are presented, along with a demonstration of their usefulness by means of typical applications.

The 2.5D part focuses on recent developments for texture-based flow visualization on surfaces. Both ISA and IBFVS work in image space to avoid problems of older approaches that require a time-consuming computation in object space or a parameterization of the surface. A brief comparison between object-space, image-space, and parameterization approaches serves as an introduction to this part. Special attention is paid to the application of these techniques to real-world examples from CFD, because a good choice of surface is essential for an intuitive and effective visualization.

This page contains the animations that were used in the 2.5D portion of the tutorial.

Supplementary Material: Interactive Texture-Based Flow Visualization by Gordon Erlebacher, Robert S. Laramee, and Daniel Weiskopf at IEEE Visualization (Vis 2004), tutorial, October 15-19, 2004, Austin, Texas (PDF file, ~20 MB)

The central web page with a description of each part can be found here.

MPEG Animations Used in 2.5D Portion of the Tutorial: (Click on images for MPEG animation)

[1] Visualization of flow at the surface of a combustion chamber with two intake ports. The method used here is ISA, which generates a dense, time-dependent representation of the flow, in this case resembling spot noise. (650 frames)

[2] This animation illustrates the noise injection and advection process for both ISA and IBFVS, with no edge detection and no image overlay. (350 frames)
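The injection-and-advection cycle shown in [2] can be sketched as a per-frame image-space update: backward-advect the previous frame along the flow field, then blend in a small fraction of fresh noise. The following is a minimal NumPy sketch of that idea only; the actual ISA/IBFVS implementations run on the GPU with bilinear texture fetches, and all names here are ours.

```python
import numpy as np

def advect_and_inject(image, flow, noise, alpha=0.1):
    """One image-space iteration in the spirit of ISA / IBFVS:
    backward-advect the previous frame along the flow, then blend
    in a fraction alpha of fresh noise.  Sketch only: nearest-
    neighbour lookup stands in for the GPU's bilinear fetch."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # backward lookup: where did each pixel's value come from?
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    advected = image[src_y, src_x]
    return (1.0 - alpha) * advected + alpha * noise

# toy example: constant flow of one pixel per frame to the right
h, w = 4, 8
flow = np.zeros((h, w, 2)); flow[..., 0] = 1.0
image = np.zeros((h, w)); image[:, 0] = 1.0   # bright column at x = 0
noise = np.zeros((h, w))
out = advect_and_inject(image, flow, noise, alpha=0.0)
# with alpha = 0 the bright column moves one pixel to x = 1
```

Iterating this update with a nonzero alpha is what produces the dense, continually refreshed patterns seen in the animation.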

[3] This animation illustrates the edge detection and blending process of ISA. For the first 100 animated frames, edge detection and blending are disabled; for the second 100 animated frames, they are enabled. (320 frames total)

[4] This animation also illustrates the edge detection and blending process of ISA. For the first 100 animated frames, edge detection and blending are disabled; for the second 100 animated frames, they are enabled. (320 frames total)

[5] This animation illustrates the edge detection and blending process of ISA and how it prevents the background color from "bleeding" into the resulting animation. For the first 100 frames, edge detection and blending are disabled; for the second 100 frames, they are enabled. (200 frames total)
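The bleeding problem in [3]-[5] arises because pixels just inside the object's silhouette may fetch background color during image-space advection. A rough sketch of the remedy, assuming a per-pixel coverage mask (the function name and the cheap 4-neighbour edge test are ours, not the paper's actual edge detector):

```python
import numpy as np

def fill_silhouette_edges(advected, noise, coverage):
    """Sketch of ISA-style edge handling: covered pixels that touch
    an uncovered neighbour may have fetched background colour during
    advection, so they are re-filled with fresh noise instead."""
    # a pixel is an "edge" pixel if it is covered (1) but has at
    # least one uncovered (0) 4-neighbour
    pad = np.pad(coverage, 1, constant_values=0)
    neighbour_min = np.minimum.reduce([
        pad[:-2, 1:-1], pad[2:, 1:-1], pad[1:-1, :-2], pad[1:-1, 2:]])
    edge = (coverage == 1) & (neighbour_min == 0)
    out = advected.copy()
    out[edge] = noise[edge]
    return out
```

Replacing the edge pixels with noise (or blending toward it) keeps the background color from creeping inward frame after frame.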

[6] This animation illustrates the need for noise injection: here, texture is advected without any noise injection. During the course of the tutorial, we asked André Neuebauer why noise injection is needed. (200 frames)

[7] This animation illustrates both the temporal and spatial characteristics of the noise that is continually blended into the scene: no advection and no image overlay are applied, but edge detection is enabled. (200 frames)

[8] This animation also illustrates noise advection, with edge detection enabled but without an image overlay. (200 frames)

[9] This animation illustrates the notion of texture clipping. Texture clipping is applied to eliminate artifacts resulting from a quadrilateral advection mesh. The first 50 frames show noise injection only. The second 100 frames show noise advection with no texture clipping, and in the last 100 frames texture clipping is enabled. Note that this illustration is exaggerated somewhat. (250 frames)
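The artifacts addressed in [9] occur when advected texture coordinates step outside the valid range of their quad in the advection mesh and sample texels that belong elsewhere. A minimal sketch of the clipping idea, under our own naming (the real implementation operates on GPU texture coordinates per quad):

```python
import numpy as np

def clipped_lookup(texture, coords, fallback):
    """Texture-clipping sketch: advected (u, v) coordinates that
    leave the valid [0, 1] range are not sampled (which would pull
    in foreign texels) but replaced by a fallback noise value."""
    u, v = coords[..., 0], coords[..., 1]
    inside = (u >= 0) & (u <= 1) & (v >= 0) & (v <= 1)
    h, w = texture.shape
    xi = np.clip((u * (w - 1)).round().astype(int), 0, w - 1)
    yi = np.clip((v * (h - 1)).round().astype(int), 0, h - 1)
    return np.where(inside, texture[yi, xi], fallback)
```

Out-of-range lookups thus fall back to injected noise instead of smearing unrelated texture content across quad boundaries.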

[10] This animation illustrates the application of the image overlay. The first 50 frames show noise advection and edge blending with no image overlay; the image overlay is applied in the next 50 frames. (100 frames total)

[11] This animation illustrates that the opacity of the image overlay is arbitrary and can be set by the user. (200 frames)
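The overlay step in [10] and [11] amounts to a standard alpha composite of the shaded image over the flow texture, with the opacity as a free user parameter. A one-line sketch (function name ours):

```python
import numpy as np

def apply_overlay(flow_tex, overlay, opacity=0.5):
    """Composite the image overlay (e.g. shaded geometry) over the
    flow texture; any opacity in [0, 1] is valid, the choice is a
    matter of taste, as animation [11] shows."""
    return opacity * overlay + (1.0 - opacity) * flow_tex
```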

[12] This animation puts all the elements together. The first 50 frames show the velocity mask. The second 50 frames show the noise injection. The third 50 frames show the noise advection. The fourth 50 frames enable the edge detection and blending. The last 50 frames enable the image overlay. (250 frames total)

[13] Moving on to the applications, this animation shows the visualization of flow at the surface of a cooling jacket. Note that the amount of texture smearing reflects velocity magnitude. (300 frames)

[14] The same cooling jacket data set, with a color map used to highlight areas of low velocity magnitude, which are trouble areas in the case of a cooling jacket. (615 frames)

[15] The visualization of weather patterns at the surface of the earth. (63 frames)

[16] IBFVS used to enhance the perception of a torso surface. (64 frames)

[17] The visualization of blood flow at the intersection of three blood vessels. An LIC-like result has been generated. (650 frames)

[18] Here the user zooms the view toward the center of an intake port. Spot-noise textures are computed on the fly, and a velocity mask is used to dim the high-spatial-frequency noise in regions of low velocity. (600 frames)
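The velocity mask used in [12] and [18] can be sketched as a simple per-pixel attenuation: scale the noise by the velocity magnitude, normalized by some reference speed, so that nearly stagnant regions are dimmed rather than filled with distracting high-frequency texture. Names and the linear ramp are our assumptions:

```python
import numpy as np

def velocity_mask(noise, speed, v_ref):
    """Velocity-mask sketch: attenuate noise by the normalised
    velocity magnitude so low-speed regions fade out instead of
    showing full-contrast high-frequency noise."""
    mask = np.clip(speed / v_ref, 0.0, 1.0)
    return noise * mask
```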

[19] The visualization of a time-dependent surface mesh composed of 79K polygons (at its largest) with dynamic geometry and topology. This intake valve and piston cylinder can also be used to analyze the formation of wall film, the term used to describe the liquid buildup on surfaces. (ISA, 900 frames)

This page is maintained by Robert S. Laramee. 
In case of questions, comments, collaboration ideas, etc., please mail to Laramee "at" VRVis.at