We initially white-boxed the scene in 3ds Max using placeholder objects to establish camera and object placement. When a direction became clear, we were in a good position to take the basic geometry a step or two further before using it as a basis for reassembling the scene in ZBrush.
We've had a lot of success using sculpted 'digital maquettes' for character development: the immediate visual feedback and short iteration times mean this method is a great way to start injecting detail and mood at an early stage. This project was the perfect opportunity to expand this process for environment assets (Fig.09).
As our models progressed, we made extensive use of ZBrush's ability to export meshes at low subdivision levels, edit them externally and then re-apply the high-level detail. Non-destructive tweaks to the underlying topology and UV maps are essential when preparing sculpts for clean texture extraction and practical use in renders.
We brought meshes back into 3ds Max (via the .obj file format) at a variety of resolutions, baking out bump and displacement maps for the more visually important elements.
Displacement maps in Mental Ray almost need a whole 'Making Of' to themselves! Establishing a robust workflow was far from trivial, so here are some pointers for anyone else attempting this:
- Retopologise and re-UV your underlying meshes before baking. A good, clean mesh with roughly equal quads and clean UVs is the key to getting a good bake.
- Use Mudbox to bake. A bit cheeky perhaps, but in our experience - compared to ZBrush 3.1 - Mudbox 2009 is quicker, more reliable and easier to tweak to get good maps. Your mileage may of course vary.
- Use 16-bit displacement maps. Displacement should ideally be stored as 32-bit floating-point textures, which hold actual world-unit offsets, but in practice we couldn't get Mental Ray to use these in a remotely stable way - most of our renders either crashed or produced crazy artefacts (Fig.10 - 11).
- Use material displacement rather than the Displace modifier. The former optimises what it does based on the screen-space size of the displaced object; the latter requires modifier-based mesh subdivision which will eat up all your memory before you get to a decent poly resolution - especially if you're displacing many objects in a scene. Of course, depending on the density of your low-res mesh you may need to meshsmooth it a little - do this within the NURMS section of the Editable Poly.
- Worry lots about gamma. Displacement textures represent distances and as such should never have gamma or colour correction applied at any point. Opening, editing and converting them in Photoshop can introduce this if you have colour profiles enabled, and 3ds Max and Mental Ray have numerous sneaky ways of applying it behind your back - be it through the overall application preferences, the bitmap material node, the Mental Ray map manager and so on.
- Test out your maps as you make them. As you start to establish your displacement pipeline, produce comparison renders of your displaced meshes against their high-res sources. It's often the case that the whole mesh will have receded or bloated (particularly with gamma issues) and laying one on top of the other is the best way of spotting these problems.
When it comes to shaders we use whatever tricks and techniques we know to produce a good result, and then optimise the network and its textures to keep both the scene and render times manageable. In this case, textures were created from a combination of photographs and hand-drawn elements, layered with baked cavity maps and a lot of good old-fashioned grunge (Fig.12 - unfinished and unlit).
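The bit-depth and gamma pitfalls above are easy to demonstrate with a few numbers. The sketch below is plain Python with no particular 3D package assumed; encode_16bit/decode_16bit are hypothetical helpers written only to mimic how a fixed-range displacement texture stores offsets. It shows why 16-bit precision is usually enough for sculpt detail, and how a single accidental 2.2 gamma pass turns a "neutral" texel into a spurious offset of several scene units - exactly the whole-mesh bloat or recession described above.

```python
# Illustrative sketch only: these helpers are made up for this article and
# mimic a fixed-range displacement texture, where 0.5 means "no offset".

def encode_16bit(offset, max_disp):
    """Map a world-unit offset in [-max_disp, +max_disp] to a 16-bit texel."""
    t = (offset / max_disp) * 0.5 + 0.5      # normalise to [0, 1]
    return round(t * 65535)

def decode_16bit(texel, max_disp):
    """Recover the world-unit offset from a 16-bit texel value."""
    return ((texel / 65535.0) - 0.5) * 2.0 * max_disp

max_disp = 10.0                               # assume a +/- 10 unit range

# Quantisation step over a 20-unit range: ~0.0003 units at 16 bits,
# versus ~0.08 units at 8 bits - why 16-bit maps usually hold up.
step_16 = (2.0 * max_disp) / 65535
step_8 = (2.0 * max_disp) / 255

# A round trip through the texture loses at most one quantisation step.
recovered = decode_16bit(encode_16bit(1.234, max_disp), max_disp)

# Accidental gamma: a neutral 0.5 texel pushed through a 2.2 gamma curve
# becomes ~0.73 - no longer "no offset", but a bogus world-unit shift.
after_gamma = 0.5 ** (1.0 / 2.2)
spurious_offset = (after_gamma - 0.5) * 2.0 * max_disp
```

Laying a displaced mesh over its high-res source, as suggested above, is how an error on the scale of `spurious_offset` shows up in practice.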
In addition to 3ds Max, we use a variety of tools for UV mapping and shader development. For more detail on these have a look at our tools page: http://slidelondon.com/iv
The result was a rich and stylised setting for our scene, full of juicy detail but light enough for easy manipulation and quick renders. From here we moved onto lighting.
Lighting & Rendering
When it came to lighting this scene we wanted to concentrate on achieving an evocative and expressive feel using a simple Mental Ray Sun + Sky setup (with light portals), rather than a traditional point/spot rig.
While we anticipated slightly more difficulty in getting the exact look we wanted, we knew we had enough tricks up our sleeves to get over any hurdles. We also liked the idea of establishing a robust and predictable lighting rig which would hold up to full scene animation with a minimum of post work required.
As an added bonus, Sun + Sky setups can also be pretty snappy to render.
Our approach here reflected the freedom of an internal project: we started off with a goal in mind but allowed the finer details to evolve over the course of production (Fig.13 - 14).
Creatively, we wanted to achieve a rich image with strong contrast in both tone and colour. The central characters needed to stand out against the detail in the environment, and overall the piece should have an oppressive feel to hint at the crowded and threatening back-alley surroundings beyond the frame. Light and shadow were to play a big part in the composition.
To explore the lighting we used a combination of concept sketches and paintovers of 'low quality' renders. By flipping between these methods, we could easily identify and flag up key areas which weren't working, and get a clear idea of what needed to change to fix them. At this stage we were making extensive use of two great tools: LPM and PhotoStudio.
Lukas Lepicovsky's LPM (http://www.lukashi.com/LPM.php) is a render pass manager for Max which allows you to set off multiple render variations without having to remember the complex per-render tweaks every time (Fig.15). This was really useful for crafting our own specific render elements, such as a tweaked direct-light-only pass, and it also let us create and then automatically re-use high-quality/low-resolution Final Gather maps in a completely hands-off way - keeping render times down when we needed to iterate the most.