Making of 'Unexpected Gifts'

Nicolas Brunet explains the process he used to create his short film based on object tracking and facial motion capture...

Unexpected Gifts is a VFX-based short film. It's a proof of concept for a future short film that utilizes object tracking and facial motion capture effects. A video compilation of the VFX explained in this making of is available at the end of the article.

Just like the many 'unboxing videos' you can watch on YouTube, Nicolas gets a unique gift and unveils it on camera. (Subtitles are available in many languages via the CC options.)

The making of Unexpected Gifts

Experiment time

I have had an interest in object tracking from a single camera, and in facial motion capture, for a few months now. After many failures and some good results, I finally understood the shooting conditions needed to get the desired result from both processes.


Object tracking tests


DIY time

With the basic workflow and process in mind, I could jump straight into building the main prop of the film – a wooden Keyblade based on the CG model I created in 3ds Max – with the help of a friend.

Working on building the main prop

Tracking

For the first tests, the wooden Keyblade only had black dots painted on it. I had trouble with the tracking process using these alone, so I painted white dots on it as well, and then everything worked fine.
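
To illustrate why that contrast matters, here is a minimal sketch of marker tracking with OpenCV, not the MatchMover/Movimento pipeline used for the film; the video file name is a placeholder. Corner detectors score features by local contrast, which is why black dots alone can give weak corners on wood while adjacent black-and-white dots track reliably.

```python
# Sketch: tracking painted dots across frames with sparse optical flow.
# Assumes OpenCV (pip install opencv-python); "keyblade.mp4" is a placeholder.
import cv2

cap = cv2.VideoCapture("keyblade.mp4")
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Corner detection keys on local contrast: black dots alone on wood can score
# poorly, while black-and-white dot pairs produce strong, trackable corners.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                 qualityLevel=0.01, minDistance=10)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade follows each dot from the previous frame.
    points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    points = points[status.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray
```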

I also wanted to shoot the entire process in one take: the Keyblade, head-tracking and facial motion capture, all at the same time. Unfortunately, my compositing skills aren't good enough to remove a heavy facial mocap rig from footage like this. I'd like to make some changes to the design of the helmet in the future.

HDRI and lighting

The film was shot with my old hacked GH1 and a 20mm f/1.7 lens. I also shot a 360-degree HDR image with the GH1, a Peleng 8mm fisheye lens and a Nodal Ninja 3 pano head, which was used for lighting and reflections in the rendering process. For additional light sources I used photometric lights.
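
For readers who want to build a similar HDR, here is a minimal sketch of merging bracketed exposures with OpenCV's Debevec calibration; the article doesn't say which tool was used for the merge, and the file names and shutter times below are placeholders.

```python
# Sketch: merging bracketed exposures into a single HDR image with OpenCV.
# File names and shutter times are placeholders for an actual bracket set.
import cv2
import numpy as np

files = ["pano_-2ev.jpg", "pano_0ev.jpg", "pano_+2ev.jpg"]
images = [cv2.imread(f) for f in files]
times = np.array([1/250, 1/60, 1/15], dtype=np.float32)  # shutter speeds (s)

# Recover the camera response curve, then merge to linear radiance.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

# A Radiance .hdr file keeps the full dynamic range for image-based lighting.
cv2.imwrite("environment.hdr", hdr)
```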

3D scanning the head

The moon head is based on a scan of my own face made for an earlier project; it was achieved with Agisoft Photoscan.

3D scanning the teeth

The same is true for the teeth, for which I had help from my dentist. I painted the replica she created with a random pattern to get more detail in the scan, as I had read on the CG Feedback forum that this method works really well. I had never captured that much detail in a small-scale object like this before.
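
If you'd rather print a random pattern than paint one by hand, a few lines of Python can generate a speckle sheet. This is only an illustration of the idea, not part of the actual process used here; the resolution and dot density are arbitrary choices.

```python
# Sketch: generating a random speckle pattern for photogrammetry.
# NumPy + Pillow only; size and dot density are arbitrary.
import numpy as np
from PIL import Image

rng = np.random.default_rng(seed=1)
w, h = 2048, 2048
canvas = np.full((h, w), 255, dtype=np.uint8)  # white background

# Stamp ~20,000 small dark squares at random positions and sizes.
# Irregular, non-repeating detail is what feature matching locks onto.
for _ in range(20000):
    x, y = rng.integers(0, w - 8), rng.integers(0, h - 8)
    s = rng.integers(2, 8)
    canvas[y:y + s, x:x + s] = rng.integers(0, 80)

Image.fromarray(canvas).save("speckle.png")
```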

I could get models up to 7 million polygons with the painted pattern method

The fake world

The second half of the short introduces full-CG shots. Most of the elements are based on real objects populating my desk and room; the Revoltech Rei Ayanami model I made a few years ago proved useful.

The objects have pretty simple topology. The blanket is another model scanned with Agisoft Photoscan and my GH1. For the crumpled paper-ball modeling process, I recommend following Matt Chandler's nice tutorial. All the models use Arch & Design materials.

Some of the wire meshes used in the model room

Texturing

I changed my workflow here for a faster one. Until now, when working on a complex model with multiple parts, I kept separate texture files for each part. For Terra's armor and Keyblade, I instead merged related objects together (armor elements, suit elements and so on), extracted the UV information, then detached the elements again. That way I got a consistent look from one part of the armor to another. I feel a little inexperienced for not having used this method until now; I thought it was only used on single low-poly meshes for videogames.
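
As a rough sketch of the merge-then-unwrap step, assuming 3ds Max's pymxs Python bindings are available (check the exact calls against your version's documentation); detaching the elements again afterwards is left to the UI.

```python
# Sketch: attach selected poly objects so one Unwrap UVW covers them all,
# giving consistent texel density across armor parts. Assumes pymxs
# (3ds Max 2017+); verify these calls against your version's docs.
from pymxs import runtime as rt

# Snapshot the selection first: attached nodes disappear from the scene,
# which would invalidate the live selection set mid-loop.
sel = list(rt.selection)
target = sel[0]
rt.convertToPoly(target)

for node in sel[1:]:
    rt.convertToPoly(node)
    rt.polyop.attach(target, node)  # merge each part into the first object

# One unwrap modifier on the merged mesh keeps UV scale consistent; after
# texturing, the elements can be detached again by hand in the UI.
rt.addModifier(target, rt.Unwrap_UVW())
```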

Terra's main armor textures

Camera animation

The motion for each camera used a different method. In the second shot it's a key-framed camera with noise controllers on position and rotation. For the third shot, I filmed a part of my room full of contrasting elements with my GH1, then tracked the footage in MatchMover to get realistic hand-held motion. I adjusted the camera angle here and there to match Terra's animation.
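
The noise-controller idea is straightforward to reproduce outside Max: layer small band-limited jitter on top of a smooth keyframed path. Here is a minimal NumPy sketch of the concept (an illustration only, not the actual 3ds Max Noise controller); all frequencies and amplitudes are arbitrary.

```python
# Sketch: adding hand-held-style jitter to a smooth keyframed camera path.
# Pure NumPy illustration of the noise-controller idea; values are arbitrary.
import numpy as np

fps, seconds = 25, 4
t = np.linspace(0.0, seconds, fps * seconds)

# Smooth "keyframed" trajectory: a simple dolly along X with a slight lift.
base = np.stack([t * 0.5, 0.05 * t, np.zeros_like(t)], axis=1)

# Band-limited jitter: sum a few low-frequency sines with random phases,
# roughly what a fractal noise controller layers onto each axis.
rng = np.random.default_rng(7)
jitter = np.zeros_like(base)
for freq, amp in [(0.8, 0.01), (2.1, 0.005), (5.3, 0.002)]:
    phase = rng.uniform(0, 2 * np.pi, size=3)
    jitter += amp * np.sin(2 * np.pi * freq * t[:, None] + phase)

camera_positions = base + jitter  # per-frame positions to feed the camera
```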

The noise controllers' effect on the initial key-framed trajectory

Motion blur tricks

Rendering motion blur takes ages, even with unified sampling. Exporting a proper velocity pass to be used in post with plug-ins, and without artifacts, is close to impossible; the Spotmask plug-in exists, of course, but it takes a very long time to render its passes in mental ray.

My solution: I composited all the elements together, rendered them as an image sequence and projected that onto the original geometry of my 3D scene using camera mapping, then turned on motion blur. I used this method for the third shot and it worked great; at 10 seconds per frame I got real 3D motion blur.
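
Camera mapping here amounts to projecting each vertex through the render camera and reusing the resulting screen position as a texture coordinate. Below is a small sketch of that projection math under a simple pinhole-camera assumption; the function, matrices and sample values are hypothetical, not 3ds Max's actual Camera Map modifier.

```python
# Sketch: computing camera-mapping UVs by projecting vertices through a
# pinhole camera. Illustrates the idea behind camera projection; the
# matrices and points below are placeholders.
import numpy as np

def camera_map_uvs(vertices, view_matrix, focal, width, height):
    """Project world-space vertices to normalized [0,1] screen UVs."""
    # To camera space (row vectors, homogeneous coordinates).
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    cam = homo @ view_matrix.T
    # Perspective divide: screen x,y are proportional to x/z and y/z.
    x = focal * cam[:, 0] / -cam[:, 2]
    y = focal * cam[:, 1] / -cam[:, 2]
    # Map from the image plane to the [0,1] UV range.
    u = x / width + 0.5
    v = y / height + 0.5
    return np.stack([u, v], axis=1)

# Usage: verts is (N,3); view is a 4x4 world-to-camera matrix; the focal
# length and film back are in the same units (e.g. mm on a 36x24 frame).
verts = np.array([[0.0, 0.0, -5.0], [1.0, 1.0, -6.0]])
view = np.eye(4)
uvs = camera_map_uvs(verts, view, focal=35.0, width=36.0, height=24.0)
```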

The camera-mapped scene

Set the Shutter Offset to 0 to avoid artifacts

Conclusion

In the end it was a fun project to work on. I learned a lot of new tricks and I'm still learning how to perform good video-based motion capture. I'd like to thank 3dtotal for the opportunity to talk about my project on their website, and I hope to come back soon with a new short film. For now, here is a video compilation of all the processes explained in this making of. Thanks for watching!

All the processes explained in this making of

Related links

Visit Nicolas Brunet's website for more inspiration
A range of software was used to create this project, including 3ds Max, mental ray, Photoshop, After Effects, Agisoft Photoscan, iPi Recorder, MatchMover and Movimento

Need help with 3ds Max? Try 3ds Max Projects
