It was only once all the elements of the model were finished, retopologised and I'd even unwrapped the UVs that I decided to make a big change. I was much more satisfied with the head than with the rest of the body. As the body was quite tall in proportion, the head had less importance overall. I thought that giving him a small body and a big head would place much more emphasis on his face. After a very basic sketch I made a few tests in ZBrush and became convinced that it was the right choice. It meant redesigning the body and accessories, but I think that was natural as I was never completely satisfied with the old ones. When I look back on them now, I'm sure that time wasn't wasted!
I generated Normal and Displacement maps out of ZBrush and used them in combination to get faster render times in animation. I generated a displacement map from level 0 in ZBrush and used it to displace the model, which was subdivided three or four times by a subdivision approximation in mental ray. Then I generated a normal map from the corresponding level in ZBrush so that it only contained the details from, for example, the fourth to the eighth level.
The first column shows what the high-res model looked like. The second column shows how it would look if I were using the Normal map baked from level 0. As you can see, it doesn't look right because the displacement map deforms the geometry and the normal map fakes the same deformation, which results in a double transformation and the surface not reacting properly to the light. The correct solution is shown in the third column. You can see the differences between the Normal maps: where the displacement finishes, the normal map takes over (Fig.06).
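The idea behind splitting the detail levels can be sketched numerically. This is only a toy 1D illustration; the slope values are invented for the example and are not taken from the actual maps.

```python
# Toy 1D illustration of why the normal map must be baked only from the
# detail levels that the displacement map does NOT already cover.
# All numbers here are made up for the example.

# Slope of the surface at one sample point, split by detail scale:
coarse_slope = 0.30   # broad forms: levels 0-4, carried by the displacement
fine_slope   = 0.05   # micro detail: levels 4-8, carried by the normal map
true_slope   = coarse_slope + fine_slope

# Correct setup (third column): the displacement moves the geometry by the
# coarse detail; the normal map only fakes the fine detail on top of it.
shading_slope_correct = coarse_slope + fine_slope

# Wrong setup (second column): the normal map was baked from level 0, so it
# re-encodes the coarse slope that the displacement has already applied.
shading_slope_wrong = coarse_slope + (coarse_slope + fine_slope)

print(round(shading_slope_correct - true_slope, 6))  # zero shading error
print(round(shading_slope_wrong - true_slope, 6))    # coarse forms counted twice
```

The error in the wrong setup is exactly the coarse slope applied a second time, which is the "double transformation" visible in the second column.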
As the eyes of the character are so big, one displacement map was not enough to support so much stretching in the area of the eyelids. For this reason I re-sculpted this portion of the face with the eyes open. That gave me two different displacement maps that I could blend between as the blend-shape controller went from closed to open. The problem was that there are four sliders controlling the upper and lower lids of each eye separately. Instead of having eight displacement maps (two for each controller), I went for another solution. I painted four different masks for the eyelid areas, kept just my two original displacement maps, and masked them by area to locally control each eyelid before recombining them, one on top of the other (Fig.07).
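The mask-based blend can be sketched as a per-pixel interpolation between the two maps, weighted by each slider inside its painted region. This is my reconstruction of the idea, not production code; the function name, map values and masks are all made up for the sketch, with flat lists standing in for grayscale images.

```python
def lerp(a, b, t):
    """Linear interpolation between two values."""
    return a + (b - a) * t

def blend_displacement(closed_map, open_map, masks, sliders):
    """Blend two full-face displacement maps per pixel.

    masks:   one mask per eyelid controller (1.0 inside its painted eyelid
             region, 0.0 elsewhere)
    sliders: one value per controller, 0.0 = lid closed, 1.0 = lid open
    """
    out = []
    for p in range(len(closed_map)):
        # Each pixel is only affected by the sliders whose masks cover it.
        weight = sum(m[p] * s for m, s in zip(masks, sliders))
        weight = min(weight, 1.0)  # clamp where painted masks overlap slightly
        out.append(lerp(closed_map[p], open_map[p], weight))
    return out

# Four sample pixels: left upper lid, left lower lid, right upper lid, cheek.
closed_map = [0.2, 0.3, 0.2, 0.5]
open_map   = [0.8, 0.6, 0.8, 0.5]
masks = [
    [1, 0, 0, 0],  # left upper eyelid mask
    [0, 1, 0, 0],  # left lower eyelid mask
    [0, 0, 1, 0],  # right upper eyelid mask
    [0, 0, 0, 0],  # right lower eyelid mask (region not sampled here)
]
sliders = [1.0, 0.0, 0.5, 0.0]  # left upper lid open, right upper halfway

result = blend_displacement(closed_map, open_map, masks, sliders)
print(result)
# Left upper lid fully from the open map, left lower lid still closed,
# right upper lid halfway between the two, cheek untouched by any mask.
```

Only two source maps are ever stored; the four masks do the work of localising each controller, which is what keeps the setup at two displacement maps instead of eight.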
The texturing took me quite a lot of time. As the character was designed to be the main character of an animated short film, I wanted to make sure that he had enough resolution to cover every camera angle, even close-ups. I worked with textures up to 8K in resolution for the face. I then converted all the textures to the .map format to let mental ray pull only what it needed from this huge amount of data at render time. I worked in Mudbox for the texturing because the layer system in ZBrush is a bit poor and because I was not able to run Mari on my computer at the time.
As my computer was not very powerful and I was working at large resolutions, I had to split my work into small groups. First I did the base for the skin, exporting the first layers to a PSD file where I kept them separated and clean. Then I combined the result and exported it as a JPG back to Mudbox to create a new set of layers to add on top of the others in Photoshop. It's not a very smooth method, but it worked for me and let me use a lot of layers at large resolutions. I'm definitely looking forward to working with Mari in the future!
I think that it is important to think the details over in terms of different scales. If you go too deep into micro details, they all melt together and look flat from a non-macro angle. Equally, representing only broad details may look good from far away, but will suffer in close-ups. Every level needs its own amount of variation for the model to look detailed from any distance.
For the shading, I worked exclusively with the mia_material and misss_fast_skin shaders of mental ray, which matched all of my needs. The only problem with these shaders is that they don't display properly in the viewport. For example, the cornea wouldn't show its transparency, so the pupil was not visible in the viewport while animating. To solve that, I connected some standard Maya shaders to the rigged character to get transparency in the viewport and let the animators see where the character is looking. I then connected the render shader to the shading group so that the mental ray shader is only used at render time, while keeping an animator-friendly viewport (Fig.08).
The rig was created by Ceyhan Kapusuz. As the character is going to be animated, we needed a fully controllable and animator-friendly rig, so the one we used was pretty standard and robust. For the character to be able to hit extreme poses we used the poseDeformer plugin to achieve nicer deformations and volume preservation. The facial rig is based on a mix of blend-shapes and local deformers. We also faked a muscle system using influence objects, deformed by jiggle or nCloth, that drive the main geometry. It was faster and more controllable than a full muscle system. In the future Ceyhan might put together a more complete presentation of the rig.
One important point while presenting a character, I think, is to make them look alive. The best way to create a pose that conveys this impression is to build up a small scenario that justifies why the character is standing there with that pose and expression. If you know why the character is there and how they behave, you can start to think about all the details that would show it to the spectator. It gives you a base for deciding where to direct the look, how to shape the mouth, how to position the arms, etc.
In my case I wanted to show the curious and playful nature of the character, so I decided to depict a moment where he is delighted because he's finally found something he's been searching for for a long time. If you decide how your character would react to a situation by giving him a past and a reason for his behaviour, it will help you keep a coherent balance between all the elements of the scene and bring life to your picture.
I first made some rough poses. Then I chose the one I liked the most and refined it for the still frame (Fig.09).
For his face, I wanted an expression of wonder. The only references I was able to find for this kind of expression were of babies. References often come from unexpected sources!
There's nothing special about the render; I just used very common techniques to render this picture (Fig.10).
As the character is going to be integrated into live-action footage in the short film, I decided to use the same technique for this still frame. The goal of the still was to present the character before the film gets released and to create interest around him. It was also a good test to check that all the shaders were okay; I actually had to make a few changes while rendering this picture, which I then applied to the base setup. It was a good way to see whether the character would be well received before the release of the actual movie. The feedback has been pretty positive so far and has motivated us to keep moving forward with this project (Fig.11).
If you want to see more of my projects and follow the development of the upcoming ones, you can visit my blog. You can also take a look at the blog of Ceyhan Kapusuz for some information about rigging and about his own projects: http://ceyhankapusuz.wordpress.com/