Tuesday, 22 April 2008
FreeImage library
Downloaded the FreeImage library to convert bitmaps/JPEGs to TGA texture files, so that the JPEG normal maps I find on Google can be turned into TGA textures from which I can create the depth and cone maps.
As FreeImage would require something like Visual C++ 2005, which I don't currently have installed, I'll have to leave it till tomorrow, as installing it with SP1 will take at least an hour.
Wednesday, 16 April 2008
Normal to depth mapping
I've just downloaded normal2depth.exe from Fabio Policarpo's Quake4 Relief Mapping to change the normal maps to depth maps.
Once that's done, I'll use depth2cone.exe to change the resulting depth map into a cone map, which will then let me use the cone texture map with Fabio's relaxed cone step mapping shader program.
Just a small update, while I'm testing w.bloggar for my future posts.
Tangent Space
Reading the article on tangent space has made a lot of things much clearer, especially:
1. How lighting works per vertex and per pixel.
2. What the RGB values of a normal map correspond to, and how they are calculated.
3. And of course, what tangent space actually is.
This makes Fabio Policarpo's relaxed cone step mapping shader program much easier to understand, and I think it will make the concept of shaders much easier for me in general. A well-written article, easy to understand.
Summary:
Deriving the world-to-tangent-space transformation matrix:
The u, v, n vectors are the tangent (T), binormal (B) and normal (N).
Matrix for T, B, N:
M_wt = (Tx, Ty, Tz; Bx, By, Bz; Nx, Ny, Nz)
The same point in tangent space (Pos_ts) will be:
Pos_ts = Pos_ws x M_wt
Creating tangent space matrix for a face:
Step 1: Calculate any two edge vectors
Step 2: Calculate the T vector
Step 3: Calculate the normal vector (N)
Step 4: Calculate the binormal vector (B)
Step 5: Build M_wt from T, B, N (see the sketch below)
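Here's a quick sketch of those five steps in Python/NumPy, just to convince myself I follow the article; face_tangent_space and the UV-based derivation of T are my own stand-ins, not Fabio's actual code:

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def face_tangent_space(p0, p1, p2, uv0, uv1, uv2):
    # Step 1: two edge vectors of the face
    e1, e2 = p1 - p0, p2 - p0
    duv1, duv2 = uv1 - uv0, uv2 - uv0
    # Step 2: T points along the direction of increasing u
    r = 1.0 / (duv1[0] * duv2[1] - duv2[0] * duv1[1])
    T = normalize(r * (duv2[1] * e1 - duv1[1] * e2))
    # Step 3: N is the face normal
    N = normalize(np.cross(e1, e2))
    # Step 4: B completes the frame (handedness conventions vary)
    B = np.cross(N, T)
    # Step 5: rows of M_wt are T, B, N
    return np.vstack([T, B, N])

# Transforming a world-space vector is then three dot products,
# the same products as Pos_ts = Pos_ws x M_wt above:
M_wt = face_tangent_space(np.zeros(3), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
                          np.zeros(2), np.array([1.0, 0.0]), np.array([0.0, 1.0]))
print(M_wt @ np.array([0.0, 0.0, 1.0]))  # -> [0. 0. 1.], along the face normal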
Creating tangent space matrix for a vertex:
We need to split each face's matrix across the vertices that define the face.
This is done by calculating the average of the vectors from all the faces that share that vertex.
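Continuing the sketch above (the faces / face_frames inputs are hypothetical, just however the mesh happens to store them):

import numpy as np

def vertex_tangent_space(vertex_index, faces, face_frames):
    # Average T, B, N over every face that shares this vertex;
    # faces is a list of vertex-index triples, face_frames the
    # matching 3x3 matrices from face_tangent_space above
    shared = [frame for face, frame in zip(faces, face_frames)
              if vertex_index in face]
    avg = sum(shared) / len(shared)
    # Re-normalize each row, since averaged unit vectors shrink
    return np.vstack([row / np.linalg.norm(row) for row in avg])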
Per-pixel lighting using pixel and vertex shaders:
Note: this is for bump mapping
The vertex shader - calculating the light vector
Step 1: Calculate the light vector; we assume that both the light position and the vertex position are in the world space coordinate system
Step 2: Convert the light vector from the world space coordinate system to a vector in the tangent space coordinate system. Assuming that you have already calculated the tangent space matrix for each vertex and passed it into the vertex shader via the texture channels, this is just a simple matrix multiplication.
Step 3: Normalize the light vector. This light vector is passed to the pixel shader in a previously decided texture stage. One very big advantage of doing this is that the light vector gets interpolated across the face for free.
Note: The interpolated light vector is no longer normalized when it reaches the pixel shader
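The same three steps as a Python sketch (the real thing lives in the vertex shader, of course; this is just my mental model of it):

import numpy as np

def vertex_shader_light(light_pos_ws, vertex_pos_ws, M_wt):
    # Step 1: light vector in world space, from vertex towards light
    L_ws = light_pos_ws - vertex_pos_ws
    # Step 2: world -> tangent space; with rows T, B, N this is just
    # three dot products against the per-vertex matrix
    L_ts = M_wt @ L_ws
    # Step 3: normalize before it is handed to the interpolators;
    # the rasterizer then interpolates it across the face for free
    return L_ts / np.linalg.norm(L_ts)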
Pixel Shader - calculating diffuse brightness for each pixel
Step 1: Get the light vector
Step 2: Get the normal at the texel
Step 3: Calculate the surface brightness
Step 4: Modulate with the diffuse colour (sketch below)
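And the matching pixel-shader steps sketched the same way; the 127.5 decode assumes the usual RGB = 127.5 x (n + 1.0) normal-map encoding mentioned in the aside below:

import numpy as np

def pixel_shader_diffuse(L_ts_interp, texel_rgb, diffuse_rgb):
    # Step 1: the interpolated light vector arrives denormalized
    # (the note above), so re-normalize it first
    L = L_ts_interp / np.linalg.norm(L_ts_interp)
    # Step 2: decode the tangent-space normal from the texel,
    # mapping 0..255 back to -1..1
    n = np.asarray(texel_rgb) / 127.5 - 1.0
    n /= np.linalg.norm(n)
    # Step 3: Lambertian surface brightness, clamped at zero
    brightness = max(0.0, float(n @ L))
    # Step 4: modulate with the diffuse colour
    return brightness * np.asarray(diffuse_rgb)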
Aside:
Why does a normal map always look bluish?
The z component of a surface normal never points backwards:
Vn.z > 0.0
Since the map stores In = 127.5 x (Vn + 1.0), this results in
In.B > 127.5
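Plugging a few unit normals through that encoding shows the effect:

import numpy as np

for n in ([0.0, 0.0, 1.0], [0.7, 0.0, 0.714], [-0.6, 0.6, 0.53]):
    print(n, '->', (127.5 * (np.asarray(n) + 1.0)).round().astype(int))
# A flat face encodes to roughly (128, 128, 255), the classic
# light blue; every z > 0 puts the blue channel above 127.5.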
Another point to remember is that a normal map is always encoded in the tangent space coordinate system instead of the world space coordinate system.
Now that the RGB values of normal maps are clear, it's on to changing normal maps to depth maps, and then to cone maps, in order to use them with Fabio's relaxed cone step mapping shader program.
Tuesday, 15 April 2008
Relaxed Cone Step Mapping Shader

After reading about relaxed cone step mapping in GPU Gems 3, chapter 18, I'm having a look at the shader program written by Fabio Policarpo, the author of that chapter.
Having worked on the simple shaders discussed a few posts earlier, I'm trying to work out how he has used the vertex and pixel shaders to come up with the results for the relaxed cone map. The image here shows the final results of using this kind of cone step mapping.
Currently I'm reading up on tangent space to get a better idea of it; searching around, it seems it's used a lot for per-pixel lighting.
Tangent space: coordinate system in which the texture coordinates for a face are specified. Z-axis is the face normal.
Why use tangent space?
Certain per-pixel lighting techniques and many other shaders require normals and other height information declared at each pixel. This means that we have one normal vector at each texel, and the n axis will vary from texel to texel.
Let's see where tangent space takes me; I'll be posting more about texture maps soon: converting normal maps to depth maps, and then depth maps to cone maps, in order to use them on objects in FX Composer.
Texture Mapping
My project involves working on a model of London, mainly looking at different texture maps to use on the model. We've been discussing which map to use, and it seems likely that I'll be using relaxed cone step mapping, in order to give the flat surfaces some kind of 3D relief while optimizing things further.
I've read a few papers, the first one on block mapping, which stores block maps in small fixed-size texture chunks rendered through ray casting. It uses block maps for far-away geometry, while using textured polygons for buildings closer to the viewer.
Another technique is relaxed cone step mapping, which is really an evolution of relief mapping. Relief mapping can use two different search techniques:


1. Binary search - repeatedly takes the halfway point between an upper and a lower sample until it converges on the point where the ray pierces the surface. The problem with binary search is that it can end up choosing the wrong intersection, as shown in the picture.
2. Linear search - takes small steps along the ray until it falls under the surface, then runs a binary search, using the under-surface point and the previous point as inputs, to get the desired intersection. (A sketch of both searches follows this list.)
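Here's how I picture the combination, as a hedged one-dimensional Python sketch; ray_depth_at(t) gives the ray's depth at parameter t, and surface(t) stands in for the depth texture lookup (both hypothetical names of mine):

def surface(t):
    # Stand-in for sampling the depth map; a flat relief at half depth
    return 0.5

def intersect_relief(ray_depth_at, steps=16, refinements=8):
    # Linear search: fixed steps until the ray falls under the surface
    t, dt = 0.0, 1.0 / steps
    while t < 1.0 and ray_depth_at(t) < surface(t):
        t += dt
    # Binary search: one endpoint above the surface, one below,
    # halving the interval towards the intersection
    lo, hi = t - dt, t
    for _ in range(refinements):
        mid = 0.5 * (lo + hi)
        if ray_depth_at(mid) < surface(mid):
            lo = mid  # still above the surface
        else:
            hi = mid  # under the surface
    return 0.5 * (lo + hi)

print(intersect_relief(lambda t: t))  # ray hits the flat relief near t = 0.5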
Cone step mapping - a single search based on a cone map, without binary or linear search. A cone map associates a circular cone with each texel of the depth texture, where the angle of each cone is the maximum angle that would not cause the cone to intersect the height field.
Relaxed cone step mapping - basically cone step mapping followed by a binary search. As binary search requires one input to be under and another to be over the relief surface, the constraints of cone step mapping can be relaxed. The cones are therefore allowed to intersect the surface, making the radius as large as possible under the following constraint: as a viewing ray travels inside a cone, it cannot pierce the relief more than once.
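A matching sketch of the cone-stepping loop itself, again 1D and hypothetical: cone_ratio(t) stands in for the cone map (radius per unit height), surface(t) is reused from the previous sketch, and the relaxed variant would follow the loop with the binary search above:

def cone_ratio(t):
    # Stand-in for the cone map lookup: a 45-degree cone everywhere
    return 1.0

def cone_step_march(ray_depth_at, ray_slope, max_steps=32):
    t = 0.0
    for _ in range(max_steps):
        c = cone_ratio(t)
        d = surface(t) - ray_depth_at(t)  # height still above the surface
        if d <= 0.0:
            break  # at (or just past) the surface; relaxed CSM refines here
        # Largest safe step: advance to where the ray exits the cone
        # whose apex sits on the surface below the current sample
        t += c * d / (1.0 + c * ray_slope)
    return t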
Vertex and Pixel shaders

These first few posts are not going to be in order, but, oh well, I'll get there. I started this a few weeks ago, so I need to catch up with the posting.
First I had to read up on vertex and pixel shaders and how they replace certain parts of the graphics pipeline. The picture shows how this is done, where:
1. the vertex shader replaces the fixed-function transform & lighting stage
2. the pixel shader replaces the texture stages
Vertex shader: called for each vertex data tuple (position, normal, vertexColor, textureCoord 0...n). Its outputs are per-vertex parameters, which are linearly interpolated across the pixels of each triangle of a mesh.
As a note from the Rocket Commander tutorials: calculate as much as possible in the vertex shader, since it runs on far fewer points than the pixel shader.
Pixel shader: also called a fragment shader; it gets executed on each pixel. Its input is the interpolated set of parameters, and its output is the final colour to be passed to the frame buffer.
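As a toy illustration of that hand-off (nothing to do with a real pipeline, just the interpolation idea):

import numpy as np

def interpolate(outputs, bary):
    # What the rasterizer does per covered pixel: blend the three
    # per-vertex shader outputs with barycentric weights
    a, b, c = (np.asarray(o, dtype=float) for o in outputs)
    w0, w1, w2 = bary
    return w0 * a + w1 * b + w2 * c

# Three per-vertex light vectors, normalized by the vertex shader
L = [(0.0, 0.0, 1.0), (0.7, 0.0, 0.7), (0.0, 0.7, 0.7)]
mid = interpolate(L, (1/3, 1/3, 1/3))
print(mid, np.linalg.norm(mid))  # length < 1: exactly why the pixel
                                 # shader has to re-normalize it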
Shaders: tiny programs written in a JavaScript/C-like language, which run natively on the graphics card.