When I prepare a mask, I do some render tests with numeric values and, when I'm happy with the look, I convert those values into a base color in my maps, and then add brighter or darker areas according to the effect I need.
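The conversion itself is just arithmetic; here is a hypothetical sketch (the 0.35 test value and the ±0.1 offsets are invented for illustration) of turning a shader value that looked right in render tests into an 8-bit base color, plus brighter and darker variants for worn or dirty areas:

```python
# Hypothetical sketch: map a 0..1 shader value that passed render tests
# to an 8-bit base gray for the texture map, plus variants.

def value_to_8bit(v: float) -> int:
    """Map a 0..1 shader value to a clamped 8-bit channel value."""
    return max(0, min(255, round(v * 255)))

def variants(base: float, offset: float = 0.1) -> dict:
    """Base color plus one brighter and one darker variant."""
    return {
        "base": value_to_8bit(base),
        "bright": value_to_8bit(min(1.0, base + offset)),
        "dark": value_to_8bit(max(0.0, base - offset)),
    }
```

Painting from those fixed values keeps the map consistent with the look that was approved in the render tests.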
I used the same method to prepare all the other textures, sharing the maps as much as I could (Fig.12).
All the shaders were pretty simple. Since my friend was not going to use V-Ray for his renders, I tried to achieve every aspect of the shaders with just masks and textures. For example, for the car paint I needed a clean base with subtle, blurry reflections, and then a coat for the sharper, stronger ones. I also needed metal scratches with high specular values and, on top, flat-shaded dirt and mud to cover the reflections underneath.
I used V-Ray's car paint shader, which already has a base layer and a coat layer, with a mask to cut off the reflections and specular from the coat (Fig.13).
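As a rough per-pixel illustration of the mask idea (not V-Ray's actual shader math; the sample values are invented): where the dirt mask is 1, the coat's reflection contributes nothing and only the flat dirt color remains.

```python
# Hypothetical per-pixel sketch of masking a coat layer. Where the dirt
# mask is 1, the coat's reflection/specular are cut off and the flat
# dirt color covers the paint underneath.

def shade(base_color: float, coat_reflection: float,
          dirt_color: float, dirt_mask: float) -> float:
    """Blend base + masked coat, then lay flat dirt on top by the mask."""
    coat = coat_reflection * (1.0 - dirt_mask)   # mask cuts off the coat
    paint = base_color + coat                    # clean car-paint result
    return paint * (1.0 - dirt_mask) + dirt_color * dirt_mask

clean = shade(0.30, 0.25, 0.12, 0.0)  # no dirt: base plus full coat
dirty = shade(0.30, 0.25, 0.12, 1.0)  # full dirt: flat dirt color only
```

The same mask drives both effects, which is what keeps the shader simple enough to reproduce without V-Ray.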
As I was testing the shaders I started to set up the final scene with proper lighting, cameras and environment. I wanted to place the vehicle in a real urban environment, modified slightly to look a little more like the original City 17 of Half-Life 2, which resembles a huge prison camp. On www.smcars.net I found a free set of photos I liked, plus an HDR image I could use for my image-based lighting.
I made a selection of five shots, and I tried to match the photo angle with my cameras (Fig.14). I discarded the last two mainly because of bad lighting: one was lit straight-on by the sun, so it looked pretty flat; the other was shot against the sun and had the opposite problem.
There are various camera-matching methods, depending on what you have in the original image. In my case I didn't have a reference cube, so I assumed that the shots were taken on a tripod 1.50 meters tall (except for the second camera). I created different V-Ray cameras and tried to match the existing vanishing points with a grid placed at 0,0,0.
It was trial and error, but every image had quite a recognizable grid on the ground, so it wasn't very difficult. The scale was another guess, but the parking-lot lines helped.
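The vanishing-point part of that matching boils down to simple geometry; a sketch, assuming you pick two image-space lines that are parallel in the real scene, such as two parking-lot stripes (the pixel coordinates below are made up):

```python
# Hypothetical sketch: a vanishing point is the intersection of two
# image-space lines that are parallel in the real scene. Each line is
# given by two pixel coordinates.

def line_coeffs(p, q):
    """Line through p and q as (a, b, c) with a*x + b*y + c = 0."""
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def intersect(l1, l2):
    """Intersection of two (a, b, c) lines; None if they are parallel."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None  # parallel in the image as well: no finite point
    x = (b1 * c2 - b2 * c1) / det
    y = (c1 * a2 - c2 * a1) / det
    return (x, y)

# Two ground stripes converging toward the horizon (invented pixels):
vp = intersect(line_coeffs((0, 100), (50, 90)),
               line_coeffs((0, 140), (50, 120)))  # -> (200.0, 60.0)
```

Matching the rendered grid's convergence against a point found this way is exactly the kind of trial-and-error alignment described above, just made explicit.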
When I work with a scene that will produce more than one still image, I create an animation: one frame per camera, and I key every value I change between them. In this case I had three cameras, so I set keys for the three positions and rotations of the APC, and keyed slight adjustments to the positions of the lights, the depth-of-field values, the angle of the HDR map, and even shader values where needed. It may not be the most orthodox way to handle this, but it works for me: it lets me maintain a single scene for all the renders and gives me an easy restore point when I experiment.
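Outside any particular 3D package, the idea can be sketched like this (the frame numbers and values are invented for illustration):

```python
# Hypothetical sketch of the "one frame per camera" trick: every value
# that differs between shots is keyed on that shot's frame, so jumping
# to a frame restores the complete setup for that camera's render.

shot_keys = {
    0: {"apc_heading": 0.0,   "hdri_rotation": 35.0, "dof_focus_m": 8.5},
    1: {"apc_heading": 90.0,  "hdri_rotation": 35.0, "dof_focus_m": 12.0},
    2: {"apc_heading": 215.0, "hdri_rotation": 80.0, "dof_focus_m": 6.0},
}

def restore(frame: int) -> dict:
    """Jumping to a frame acts as a restore point for experiments."""
    return dict(shot_keys[frame])
```

One scene file holds every shot's state, and any experiment can be undone by simply scrubbing back to the keyed frame.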
The light setup is very basic, as I just had to match the existing lighting of the backplates. I used one HDRI for image-based lighting and one VrayLight to simulate the sun. I didn't use a VraySun or a directional light (which are usually better suited to daylight conditions) because I preferred the shadows generated by the VrayLight.
One of the great advantages of linear workflow is that there is no need to pump up the values of lights to get decent results. I kept both my lights invisible, since I didn't want them to interfere with my HDRI, and let only the IBL cast reflections (Fig.15).
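That linear-workflow point shows up directly in the numbers: a physically plausible linear value looks dark if displayed raw, but the standard sRGB encode applied at the end of a linear pipeline brightens the midtones, so lights don't need inflated intensities. A small sketch of the sRGB transfer function:

```python
# Sketch of why linear lights don't need pumping up: the sRGB encode
# applied for display lifts the midtones of a linear render.

def linear_to_srgb(v: float) -> float:
    """Standard sRGB transfer function for a linear value in 0..1."""
    if v <= 0.0031308:
        return 12.92 * v
    return 1.055 * v ** (1 / 2.4) - 0.055

mid = linear_to_srgb(0.18)  # middle gray displays near 50% brightness
```

Without this encode, the same 0.18 linear value would display at 18%, which is what used to push people toward unnaturally bright lights in non-linear setups.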