Another way of making a normal map is by rendering it.
Rendering a normal map can be done by applying a normal material to a surface. During render time, this material colors the surface relative to the camera you're rendering from. This method does not use a source and target mesh and is in a way like making a texture map: you model your surface detail as you would paint it in Photoshop, top down, with only relatively small directional changes. Any polys perpendicular to the camera will of course not be visible and therefore require their own little bit of space in the render, just like on a texture map.
Converting a height/bump map
Converting a black and white image can be done in Photoshop using the Nvidia Photoshop plugin, which lets you paint a black and white height image and encode it into a normal map. This is in a way similar to the render method and not suited for complex shapes. You can, however, use this method to add extra detail to a baked normal map, like surface textures, stitching, scars, pocks etc., if you can't be bothered to model everything down to the last millimeter.
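The conversion the plugin performs can be sketched in a few lines: the slope of the painted height image at each pixel is turned into a tilted normal vector and packed into RGB. This is a minimal illustration, not the plugin's actual algorithm; the function name and `strength` parameter are my own.

```python
import numpy as np

def height_to_normal(height, strength=1.0):
    """Convert a 2D height map (values 0..1) to an 8-bit RGB normal map.

    The slope at each pixel is estimated with central differences;
    a steeper slope tilts the normal further away from straight up.
    """
    # Gradients of the height field give the surface slope in y and x.
    dy, dx = np.gradient(height.astype(np.float64))
    nx = -dx * strength
    ny = -dy * strength
    nz = np.ones_like(height, dtype=np.float64)
    # Normalize each per-pixel vector to unit length.
    length = np.sqrt(nx * nx + ny * ny + nz * nz)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    # Remap from [-1, 1] to 0..255 for storage in an ordinary image.
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)

# A completely flat height map yields the familiar uniform bluish
# color, since every normal points straight at the viewer.
flat = np.zeros((4, 4))
print(height_to_normal(flat)[0, 0])  # -> [127 127 255]
```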
Some bake software will allow you to slap a bump map for this kind of detail on your source mesh and bake that with it which gives a much more accurate result.
You have to wonder if the resolution of your normal map will be able to display this kind of detail at all though.
One thing to bear in mind is that whenever you make a change to your normal map in Photoshop, you will need to run the "normalize only" option in the Nvidia plugin. You need to do this because a normal map needs to be mathematically correct in order to work properly: each pixel has to encode a unit-length vector (this has got something to do with the dot product of combined pixel values, all very boring tech stuff and too in depth for this chapter).
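What "normalize only" does can be sketched like this: decode every pixel back to a vector, rescale it to length 1, and re-encode it. This is my own minimal reimplementation for illustration, not the plugin's code.

```python
import numpy as np

def normalize_only(normal_map_rgb):
    """Re-normalize an 8-bit normal map after hand editing.

    Painting or blending in an image editor breaks the unit-length
    guarantee each pixel's vector must satisfy, which is what the
    plugin's "normalize only" pass restores.
    """
    # Decode 0..255 channels back to vectors in [-1, 1].
    v = normal_map_rgb.astype(np.float64) / 255.0 * 2.0 - 1.0
    # Rescale every vector to length 1 (guard against zero vectors).
    length = np.linalg.norm(v, axis=-1, keepdims=True)
    v = v / np.maximum(length, 1e-8)
    # Re-encode to 8-bit RGB.
    return np.clip((v * 0.5 + 0.5) * 255.0, 0, 255).round().astype(np.uint8)
```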
Tangent, Object- and World space
The surface detail or offsets simulated by a normal map are always relative to either the polygon it's projected on (tangent space), the object's orientation (object space), or the world (world space).
Tangent space is computationally the most expensive one for real time game engines but also the most flexible one. It looks at each target face's direction as defined by its vertex normals and calculates the new lighting solution by offsetting it with the information from the normal map. This is the only type of normal mapping that you can use on deformable objects such as skinned or boned characters. It also allows you to tile your map or re-use elements of your mesh in other places, rotating and twisting these as you see fit. This is a great way of freeing up more texture space on your normal map.
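The per-face offset described above boils down to a small matrix multiply in the shader: the face's tangent, bitangent and vertex normal form a basis (often called TBN), and the sampled map value is expressed in that basis. A minimal sketch, with made-up example vectors:

```python
import numpy as np

def perturb_normal(tangent, bitangent, vertex_normal, sampled):
    """Turn a tangent-space normal-map sample into a world-space normal.

    The TBN basis comes from the target face, so the same map sample
    gives correct lighting however the face is rotated or deformed,
    which is why tangent space works on skinned characters and
    allows tiling and mirrored re-use.
    """
    tbn = np.column_stack([tangent, bitangent, vertex_normal])
    n = tbn @ sampled
    return n / np.linalg.norm(n)

# A "flat" sample of (0, 0, 1) simply returns the face's own normal,
# which is why undisturbed areas of a tangent-space map look blue.
up = perturb_normal(np.array([1.0, 0.0, 0.0]),
                    np.array([0.0, 1.0, 0.0]),
                    np.array([0.0, 0.0, 1.0]),
                    np.array([0.0, 0.0, 1.0]))
print(up)  # -> [0. 0. 1.]
```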
An object space normal map only looks at the orientation of the entire model it's applied to when calculating the per pixel directions on its surface. This is great for non-deformable objects like, say, barrels and crates. It will allow you to duplicate elements on your mesh as long as you do not change their orientation.
When an object space normal map tells a face it's supposed to receive light coming from the left of the object, it will always do so, even if the face is pointing away from the light source.
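In code terms, an object-space sample is only transformed by the model's own orientation, never by the individual face, which is exactly why faces may be moved but not rotated relative to each other. A small sketch with a hypothetical 90-degree yaw matrix:

```python
import numpy as np

def object_space_normal(model_rotation, sampled):
    """Object-space sample -> world space.

    Only the whole model's rotation matrix is applied; no per-face
    basis is involved, so the map breaks if you rotate a duplicated
    element within the mesh.
    """
    return model_rotation @ sampled

# Hypothetical rotation: the model yawed 90 degrees around its up axis.
yaw90 = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])

# A sample that pointed along the object's +X now lights as +Y in
# the world, regardless of which way any individual face points.
print(object_space_normal(yaw90, np.array([1.0, 0.0, 0.0])))
```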
World space normal maps are great for objects that don't ever move or deform like, say, the world.
Tangent space (left) and object space (right) normal maps of the same cube