Normal Mapping

Normal mapping is a great way of showing more detail on a 3D model by simulating the way surface detail responds to light. It is a 2D effect, so it will not change the shape of an object, but inside the profile outlines it can simulate a tremendous amount of extra detail. This is very useful for real-time game engines, where processing power is a limiting factor, or for animations, where render time can be the limiting factor.

Original and normal mapped object

Normal mapping is not new, but with the latest graphics cards it has become more and more affordable. It is very similar to bump mapping in that both achieve the same effect. However, bump mapping simulates vertical offsets relative to the direction of the face it is projected on: new directions are derived from the height differences between neighboring pixels. Normal mapping, on the other hand, uses colors to indicate directional offsets, making it far more efficient. It does not need its neighbors to determine an angle; a single pixel is enough. The red channel in a normal map encodes the left-right axis of the normal, the green channel encodes the up-down axis, and the blue channel encodes vertical depth.
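To make the color encoding concrete, here is a minimal sketch of the common convention: each axis of a unit normal vector is remapped from the -1 to 1 range into the 0 to 255 color range (the function names here are just illustrative, not from any particular tool):

```python
def encode_normal(x, y, z):
    """Map a unit normal (x, y, z) to an 8-bit RGB pixel."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in (x, y, z))

def decode_normal(r, g, b):
    """Map an 8-bit RGB pixel back to a normal vector."""
    return tuple(c / 255 * 2.0 - 1.0 for c in (r, g, b))

# A normal pointing straight out of the surface, (0, 0, 1), becomes the
# familiar light blue of a "flat" tangent-space normal map:
print(encode_normal(0.0, 0.0, 1.0))  # (128, 128, 255)
```

This is also why a tangent-space normal map looks predominantly blue: most pixels point more or less straight out of the surface.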

This makes for a pretty interesting image that is, unfortunately, impossible to paint properly by hand (unlike bump maps). That means you have to generate a normal map, which can be done in several ways: you can bake it, render it, or convert a height/bump map.

Baking

Baking is the process of capturing the surface detail of a source object and transferring it to a target object. During the baking of a normal map, the software compares the two surfaces and determines which colors are needed where on the target surface to compensate for the difference in shape between the two objects.

Source object and the target object partially enclosing it

Baking is done by casting rays from the low poly surface (also called the target, since it will ultimately receive the normal map) onto the high poly surface (also called the source). The direction of the rays is determined by the vertex normals, and they can be cast in both directions, forward and backward. If you have a smooth surface (or one smoothing group), the normals are "averaged"; when you have a hard edged surface, the normals point in the direction of the face normal, i.e. straight up from the face.
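A hugely simplified sketch of the ray cast at the heart of this process, reduced to a single ray against a single source triangle (this uses the classic Möller-Trumbore intersection test; a real baker does this for every texel against the whole source mesh, so treat it as an illustration only):

```python
def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Distance t along the ray to the triangle, or None on a miss."""
    def sub(a, b):   return tuple(p - q for p, q in zip(a, b))
    def dot(a, b):   return sum(p * q for p, q in zip(a, b))
    def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:                 # ray parallel to the triangle: no hit
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    return t if t > eps else None    # only accept hits in front of the texel

# A texel at the origin, firing along its vertex normal (0, 0, 1),
# hits a source triangle floating 0.5 units above it:
t = ray_triangle((0, 0, 0), (0, 0, 1),
                 (-1, -1, 0.5), (1, -1, 0.5), (0, 1, 0.5))
print(t)  # 0.5
```

At each hit, the baker samples the source surface's normal and writes it into the map at that texel; a miss is exactly the kind of "blind spot" described below.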

This is an important difference to be aware of when you start baking, because when your edges are hard, the vertex normals for one face point in a different direction than the vertex normals of its neighbors, meaning you get "blind spots" in the projection.

Soft and hard edged projections seen from the side

Another thing to be aware of: in order to bake a map properly, every single polygon that is going to be included in the bake needs its own space in the UV layout. You cannot have overlapping UVs, since the software will not know which of the faces you want used in the calculation and will instead draw them all at once.

Since the target surface is one big, very complex projector, you should treat every face on it as if it were an inverse UV projector, sucking up the image instead of projecting it. Looking through a face on the target mesh, as if it were a window, you should be able to see every polygon on your source mesh pointing more or less in your direction. If a source poly is perpendicular to you, or pointing away from you, it won't be captured on the normal map. You also need to see enough of each source poly so it can be given enough pixels to clearly define it on the normal map.

Check projection directions and visible surface detail

All in all baking is the most time consuming method but it will also give you the best results. It's worth it!

Rendering

Another way of making a normal map is by rendering it.

Rendering a normal map can be done by applying a normal material to a surface. This material will, at render time, color the surface relative to the camera you're rendering from. This method does not use a source and target mesh and is, in a way, like making a texture map. You model your surface detail as you would paint it in Photoshop, top down, with only relatively small directional changes; any polys perpendicular to the camera will of course not be visible and therefore require their own little bit of space in the render, just like a texture map.

Converting a height/bump map

Converting a black and white image can be done in Photoshop using the Nvidia Photoshop plugin, which allows you to paint a black and white image and encode it into a normal map. This is in a way similar to the render method and not suited for complex shapes. You can, however, use this method to add extra detail to a baked normal map, like surface textures, stitching, scars, pocks, etc., if you can't be bothered to model everything down to the last millimeter.
Some baking software will let you put a bump map with this kind of detail on your source mesh and bake it along with everything else, which gives a much more accurate result.
Do ask yourself whether the resolution of your normal map will be able to display this kind of detail at all, though.
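The conversion itself boils down to comparing each pixel's height to its neighbors and turning the slope into a normal. A hedged sketch of that idea, with a tiny grid of floats standing in for the grayscale image (the function and its `strength` parameter are illustrative, not the plugin's actual algorithm):

```python
import math

def height_to_normal(height, x, y, strength=1.0):
    """Derive a unit normal at (x, y) from height differences."""
    h, w = len(height), len(height[0])
    # Clamp at the borders so edge pixels reuse their own row/column.
    left  = height[y][max(x - 1, 0)]
    right = height[y][min(x + 1, w - 1)]
    up    = height[max(y - 1, 0)][x]
    down  = height[min(y + 1, h - 1)][x]
    dx = (left - right) * strength
    dy = (up - down) * strength
    length = math.sqrt(dx * dx + dy * dy + 1.0)
    return (dx / length, dy / length, 1.0 / length)

flat = [[0.5] * 4 for _ in range(4)]
print(height_to_normal(flat, 1, 1))  # (0.0, 0.0, 1.0): a flat surface
```

Raising the hypothetical `strength` value exaggerates the slopes, which is why bump-to-normal conversions usually expose a similar control.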

One thing to bear in mind is that whenever you make a change to your normal map in Photoshop, you will need to run the "normalize only" option in the Nvidia plugin. You need to do this because a normal map has to be mathematically correct in order to work properly (it has to do with the dot product of the combined pixel values; all very boring tech stuff and too in-depth for this chapter).
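What such a normalize pass effectively does can be sketched like this (my own illustration, not the plugin's code): decode the pixel back into a vector, force it back to unit length so the lighting dot products come out right, and re-encode it.

```python
import math

def renormalize_pixel(r, g, b):
    """Re-unit-length a painted normal map pixel, 'normalize only' style."""
    x, y, z = (c / 255 * 2.0 - 1.0 for c in (r, g, b))
    length = math.sqrt(x * x + y * y + z * z) or 1.0
    x, y, z = x / length, y / length, z / length
    return tuple(round((c * 0.5 + 0.5) * 255) for c in (x, y, z))

# An already-correct "flat" pixel passes through unchanged:
print(renormalize_pixel(128, 128, 255))  # (128, 128, 255)
```

Hand-painted or blended pixels usually decode to vectors that are too long or too short, which is why skipping this step gives oddly bright or dark lighting.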

Tangent, Object- and World space

The surface detail or offsets simulated by a normal map are always relative to either the polygon it is projected on (tangent space), the object's orientation (object space), or the world (world space).

Tangent space

Tangent space is computationally the most expensive one for real-time game engines, but also the most flexible. It looks at each target face's direction as defined by its vertex normals and calculates the new lighting solution by offsetting it with the information from the normal map. This is the only type of normal mapping you can use on deformable objects such as skinned or boned characters. It also allows you to tile your map or re-use elements of your mesh in other places, rotating and twisting these as you see fit. This is a great way of freeing up more texture space on your normal map.
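That per-face offsetting can be sketched as a basis change: the sampled tangent-space normal is rotated into world space using the face's tangent, bitangent, and normal vectors (the TBN basis). A minimal sketch, assuming the usual convention where those three vectors form the columns of the matrix:

```python
def tangent_to_world(n_ts, tangent, bitangent, normal):
    """world = n.x * T + n.y * B + n.z * N (columns of the TBN matrix)."""
    return tuple(
        n_ts[0] * t + n_ts[1] * b + n_ts[2] * n
        for t, b, n in zip(tangent, bitangent, normal)
    )

# On a face whose TBN lines up with the world axes, a "flat" map
# normal (0, 0, 1) simply comes out as the face normal:
print(tangent_to_world((0, 0, 1), (1, 0, 0), (0, 1, 0), (0, 0, 1)))  # (0, 0, 1)
```

Because the basis travels with each face, the same map pixel keeps working however the face is rotated or deformed, which is exactly what makes tangent space safe for skinned characters and tiled maps.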

Object space

An object space normal map only looks at the orientation of the entire model it is applied to when calculating the per-pixel directions on its surface. This is great for non-deformable objects like, say, barrels and crates. It will allow you to duplicate elements on your mesh as long as you do not change their orientation.

When an object space normal map tells a face it is supposed to receive light coming from the left of the object, it will always do so, even if the face is pointing away from the light source.
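Put another way: the stored normal is fixed relative to the model, so when the model turns, the engine has to rotate the sampled normal by the same model rotation before lighting it. An illustrative sketch for a rotation around the vertical axis:

```python
import math

def rotate_y(v, degrees):
    """Rotate a vector around the Y axis, like a model turning in place."""
    a = math.radians(degrees)
    x, y, z = v
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

stored = (0.0, 0.0, 1.0)       # object-space normal baked into the map
world = rotate_y(stored, 90)   # the model turned 90 degrees
print(world[0])                # ~1.0: the normal now points along +X
```

Since this is one rotation for the whole rigid object, it is cheap; it just falls apart the moment individual faces start deforming independently, which is why skinned characters need tangent space instead.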

World space

World space normal maps are great for objects that don't ever move or deform like, say, the world.

Tangent space (left) and object space (right) normal maps of the same cube

Putting it into practice

Since this is not a character modeling tutorial, I will not go into the authoring process of modeling a high poly character. One tip I can give you, however: work in small parts when working on the high poly mesh. This way you can limit the number of polys per object you are working on, making it all a lot less painful when your total polycount approaches the 500,000 mark. For normal map generation purposes you don't even have to attach and weld the different sub-components together; as long as they fit seamlessly you should be fine.

I started out with a rough version of the low poly in-game model ("low" is rather relative, since it's aimed at next-gen platforms that can easily deal with 6000-8000 poly character models; in any case it's lower than the source mesh, so "low poly" is still apt).

Making a rough low poly version helps you to get started and will give you an idea of  where your detail should go and how you are going to capture it in the baking process.

This is of course not a rule and every method is valid as long as the end result is good.

Then I subdivided the organic components, such as the arms and legs, and started working into them. I used a poly proxy object/mesh smoothing on the trousers, using zig-zag patterns to create the creases. The arms were imported into Z-Brush 2 for some quick soft wrinkling. Z-Brush is great for organic stuff; it can, however, be a bit tricky to get used to.

The head I modeled entirely in Z-Brush 2 using Z-spheres. I later remodeled the low poly version in Maya to make sure I got the right edge loops for facial animation.

The hard surface detail I kept fairly low poly compared to the rest; there's no point in making your objects unnecessarily heavy, it will only slow you down and make it harder to reshape them when you feel like it. And you will want to do that, since you will find yourself going back and forth between your source and target meshes.

Like I said earlier, cut your model up into smaller, manageable parts; this way you can stay focused and organized.

When the high poly version was finally ready, I remodeled the entire low poly model from scratch, snapping its vertices directly onto the high poly model. Of course, you need to make sure you place them strategically, which can be a bit challenging when you also need to worry about your target mesh's layout for deformation later. Another good reason to rough it out first.

Finished low poly model, 6611 polys

When your low poly model is finally finished, you need to unwrap it properly, assessing each component and giving it the right amount of texture space. I generally give the face a bit more resolution, since it is the main focal point on any character. I also decided to tile the ammo belt boxes, since the detail was so fine it needed extra space on the map to capture it all. I did this by baking only a few boxes and then re-attaching the remaining boxes and mapping them to the normal map. Duplicating the boxes would also have been possible, but in this case it was not the fastest solution.

Then it's time to start baking. Interestingly, there aren't that many normal map baking plugins out there. For Maya there's Turtle's Surface Transfer Editor (which I used, and it's great) and Maya's own proprietary tool (which is next to useless); then there's also Microwave, which, allegedly, is also pretty good. Max 7 comes with its own pretty accomplished normal mapping tools, and then there's a bunch of standalone normal mapping tools, such as Crytek's PolyBump and ATI's normal mapping tool.
No doubt all the other big packages out there have their own solutions for this, but since I have no experience with them I can't recommend anything.

Anyway, back to baking: this is when the errors start to appear. Look out for extreme and hard edged color differences; these can be fixed by painting them away in Photoshop (use the "normalize only" option in Nvidia's Photoshop plugin before saving) or by pushing the target mesh inward or outward a bit, globally or locally. Don't worry too much about ruining your low poly object's shape, you can always fix it later when the normal map is finished; it may not be entirely accurate, but no one's gonna notice. Besides, you are an Artiste, right? Whatever you made was intentional, even the unintentional.

Final tangent space normal map for the character

Low poly character (left) and high poly character models
