
The future of indie filmmaking


By 3dtotal staff

Web: www.kevinmargo.com
Web: www.constructfilm.com

Date Added: 10th August 2014

Blur Studio's Kevin Margo offers a glimpse into the making of his groundbreaking short movie, 'Construct', and looks towards what's next...



Kevin Margo is a visual effects and CG supervisor at Blur Studio in Culver City, California. Now 11 years into his time at Blur, and the director of his own groundbreaking short movie, 'Construct', Kevin talks to 3dtotal about the future of independent filmmaking.


What do you think sets Blur apart from other studios?

I think it starts with the goal and the culture of the company. Blur was formed by Tim Miller, and the intent was to do the type of work that he wanted to do and wanted for his future company. He's a videogame, comic book and sci-fi fan, and very often he's passing up projects that look more appealing financially but aren't as appealing content-wise. He's always trying to get projects in the door that are going to be exciting and motivate the team, and in turn you're going to get the best work out of the team because of that. Hopefully that kind of steamrolls into a company that's producing a lot of cool content, which attracts more talent to join us and do the same.
Web: http://www.blur.com/

More broadly, we really try to make it about the artists instead of about the machine or the corporation. For instance, we get good artists in the door but then don't suffocate them with a lot of rules, boundaries and divisions of labor. We try to keep the roles as wide and broad as possible, within reason, given the needs of some sort of pipeline to execute a project. Our character modelers will sculpt, UV, texture and do all the shading, and often do all the hair; at larger studios that's typically the responsibility of maybe two or three people. Here we have one artist doing that. On the scene assembly side, the guys modeling the environments, and texturing and lighting them, are also the ones doing scene assembly, so they're lighting, rendering and compositing those scenes too. We've tried to make it so the artists have a sense of ownership over what they're doing, and I feel like that gives everybody more motivation and you get better work out of them.


Does that mean they have to work harder, though?

It's an open-ended invitation if they would like to work harder and put more into it to make it look all the better. I've heard years of stories from other companies, so I can't speak firsthand, but I know over the past several years Blur has become attentive to not overworking artists, and has really tried to structure schedules and bidding as accurately as possible to avoid those sorts of situations. But we put the cool content in front of the artists, and if they're having fun and enjoying themselves modeling superheroes, they might wind up staying a little extra just to put a little more polish on their character. It's something they can throw up on their website and claim as wholly their own.

What are the stand-out projects you've worked on since you started at Blur?

The first project I supervised was very near and dear to my heart: the cinematics for the Aeon Flux videogame. I was a huge fan of the MTV Liquid Television cartoon, so that was a nice way to start supervising. Working on Rockfish, when I started at Blur, was a great experience. It got shortlisted for the Oscars; it didn't get nominated, but it was really exciting to be part of something that got a lot of industry attention. More recently, working on the Halo 4 live-action commercial, which had a ton of visual effects in it and was produced by David Fincher and directed by Tim Miller, was an exciting experience. After years of doing only CG cinematics, and exploring live action in my own life through 5D independent filmmaking, being able to bring those two worlds together for the first time at Blur on a live-action/CG hybrid commercial was exciting.

Blur also did the opening four-minute prologue to Thor: The Dark World, and that was my first feature film visual effects supervision. It was intense and exciting, and I learned a lot in that process.


Can you tell us how you've seen the CG and VFX industry evolve?

From a financial and business model perspective I can't say it's been the healthiest of evolutions. From the technical side, it's exciting to see how heavily CG and visual effects are being leaned on to produce tent-pole films. It's great to see them working their way into, and ingraining themselves more firmly in, the broader production process, which kind of leads up to the 'Construct' stuff that I'm trying to do.

Just on a technical level, in the 11 years I've been at Blur: when I started there, Brazil was the renderer of choice. It was pretty attractive at the time, but it didn't do indirect illumination or glossy reflections, so there were a lot of limitations. We've made two renderer changes since I've been there: we went from Brazil to mental ray, and then we stumbled into limitations with its implementation in 3ds Max. That eventually led us to V-Ray, which is where we're at now, getting the benefits of all these great features and of a deep collaboration with the Chaos Group guys, which lets us help drive the development of their software. That's been really exciting.

"It's not using live action - but I'm trying to replicate it as closely as possible"


Tell us about your teaser for the short movie, 'Construct'. What's the story?

'Construct' is essentially a one-minute extraction from an eight-minute short film that I'm still working on. The world of 'Construct' is a near-future society where AI and robotics have developed to a point where you're seeing androids insert themselves or try to assimilate into broader society, and there are political and social tensions because of that.

268_tid_Sc003_S0009.jpg

Can you tell us about how it's using live action and VFX?

It's not using live action, but I'm trying to replicate it as closely as possible. Chaos Group and I are developing V-Ray RT for MotionBuilder. I had done a live-action short film three or four years ago, but at the same time at Blur I was doing all this CG-only stuff, and I've been trying to find a way to merge the two processes as closely as possible and bring them together to create a new, really cool, efficient workflow for producing virtual content.

Basically, we ported V-Ray RT into MotionBuilder, so we had our performers moving around inside the motion-capture volume, and then we had a camera operator also filming the scene. And when I say 'camera operator', it was really just a monitor on sticks with a few input commands. It wasn't a live-action camera; it was a tracked viewing device that, when pumped through MotionBuilder, showed the robot characters with V-Ray lighting and shading. We were ray tracing, or path tracing, on top of this MotionBuilder scene in real time, through the camera that we were capturing in the volume at the same time.
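In workflow terms, the loop Margo describes is simple: read the tracked camera's pose, update the mocap-driven scene, spend the rest of the frame budget path tracing, and display the result. Here is a minimal sketch of that loop; every collaborator object (tracker, renderer, scene, display) is a hypothetical duck-typed stand-in, not the actual V-Ray RT or MotionBuilder API.

```python
# Illustrative sketch of the per-frame virtual-camera loop described above.
# All collaborator objects are hypothetical stand-ins, not real V-Ray RT
# or MotionBuilder classes.

import time

class VirtualCameraLoop:
    def __init__(self, tracker, renderer, scene, display):
        self.tracker = tracker    # streams the pose of the 'monitor on sticks'
        self.renderer = renderer  # progressive path tracer (stand-in for V-Ray RT)
        self.scene = scene        # mocap performers retargeted onto the robot rigs
        self.display = display    # the operator's monitor inside the volume

    def run(self, fps=24.0):
        frame_budget = 1.0 / fps
        while self.tracker.is_live():
            start = time.monotonic()
            # 1. Pose the virtual camera from the tracked viewing device.
            self.renderer.set_camera(self.tracker.pose())
            # 2. Apply the latest motion-capture frame to the characters.
            self.scene.apply_mocap_frame()
            # 3. Path-trace with whatever time remains in the frame budget;
            #    at 24 fps that's only a sample or two per pixel, hence the grain.
            remaining = frame_budget - (time.monotonic() - start)
            self.display.show(self.renderer.render_progressive(time_budget=remaining))
```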


"That's the really exciting thing we're doing: bringing all these real-world lighting and shading, physically-based rendering algorithms into the motion capture volume, and enabling all of that to happen in real time"


The result is that inside the volume the camera operator can see through his monitor viewport a real-time path-traced representation of the performers in front of him, as robots in the scene, in the set of 'Construct'. That's the really exciting thing we're doing: bringing all these real-world lighting and shading, physically-based rendering algorithms into the motion capture volume, and enabling all of that to happen in real time.

Once you do that it's really exciting, because now you can start replicating real-world camera concepts like depth of field, target distance, f-stops, shutter speed and white balance. And lighting, too... You can block out scenes, figure out camera angles and work out where your characters are going to go, and then you can think about: 'What do I want out of this shot? What f-stop do I want? How do I want to frame this shot? How do I want to use the lighting, and a lighting toolkit through V-Ray, to light this shot in a naturalistic manner?'
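Those camera controls map onto standard photographic optics. As a concrete example, here's the usual thin-lens depth-of-field calculation a physically based virtual camera is effectively doing when you dial in a focal length, f-stop and focus distance; the 0.03 mm circle of confusion is just a common full-frame convention, not anything specific to V-Ray.

```python
# Standard thin-lens depth-of-field math, the kind a physically based
# virtual camera exposes. The formulas are the usual photographic ones.

def depth_of_field(focal_mm, f_stop, focus_m, coc_mm=0.03):
    """Return (near, far, hyperfocal) in metres for a thin-lens camera."""
    f = focal_mm / 1000.0   # focal length in metres
    c = coc_mm / 1000.0     # circle of confusion in metres
    s = focus_m             # focus (target) distance in metres

    hyperfocal = f * f / (f_stop * c) + f
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    far = (s * (hyperfocal - f) / (hyperfocal - s)
           if s < hyperfocal else float("inf"))
    return near, far, hyperfocal

# A 50 mm lens at f/2.8 focused 3 m away keeps roughly 2.7-3.3 m sharp:
print(depth_of_field(50, 2.8, 3.0))
```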


The end result is a real-time image that's a little noisy due to the technology limitations right now. You're only able to shoot a sample or two into a pixel per frame, so it's a little grainy, but the idea that we're using, in real time, the same rendering algorithms that scale all the way up to a final delivery production frame is super exciting.

"In two or three seconds you're staring at something that looks buttery, creamy smooth, GI-lit, and photorealistic, which is super exciting"


The other thing is, the way V-Ray RT works is progressive, so it'll throw one or two samples into a pixel and it'll look noisy. But if you essentially freeze the scene state and let V-Ray run for another few seconds, it has time to catch up and throws another 50 or 100 samples into each pixel, so in two or three seconds you're staring at something that looks buttery, creamy smooth, GI-lit and photorealistic, which is super exciting. The idea is that real time is coarse and grainy, but you're able to frame and compose an image and get a rough sense of what you're looking at through the camera in front of you. So you can work in real time in coarse form, then hit pause and go offline, or go back to your studio, and let it render to a final delivery frame, which could take a few minutes or an hour. It's the scalability that's really exciting.
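The 'catching up' behavior is just Monte Carlo averaging: each new sample is folded into a running per-pixel mean, and the noise falls off roughly as one over the square root of the sample count. A toy accumulator (assuming nothing about V-Ray's internals) makes the effect visible:

```python
# Why the image "catches up" when you freeze the scene: a progressive
# renderer keeps averaging new samples into each pixel, and Monte Carlo
# noise falls off as 1/sqrt(N).

import numpy as np

class ProgressiveFramebuffer:
    def __init__(self, height, width):
        self.accum = np.zeros((height, width, 3))
        self.n = 0

    def add_sample(self, sample):
        """Fold one noisy per-pixel radiance estimate into the running mean."""
        self.n += 1
        self.accum += (sample - self.accum) / self.n  # incremental average

# Simulate: true radiance 0.5 everywhere, plus heavy per-sample noise.
rng = np.random.default_rng(0)
fb = ProgressiveFramebuffer(4, 4)
for spp in (1, 2, 100):
    while fb.n < spp:
        fb.add_sample(0.5 + rng.normal(0.0, 0.3, size=(4, 4, 3)))
    print(f"{fb.n:3d} spp, RMS error {np.sqrt(np.mean((fb.accum - 0.5)**2)):.3f}")
# 1-2 spp is the grainy live view; ~100 spp is the "buttery smooth" pause.
```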

Until this point all these real-time tools, like MotionBuilder, have used OpenGL viewports. Try replicating the quality of V-Ray in an OpenGL viewport: you can't. That's what this addresses. Before, it was a diverged, broken pipeline. At a certain point, when you decided to go into MotionBuilder, you had these assets made, and you would have had to tear them apart; it would not have looked identical, the shaders might have been broken, or the texture maps were low resolution, whatever. It was not a one-to-one representation of what you would have been getting in your final production frame out of V-Ray. This expands the umbrella that V-Ray can exist in, and it creates a clearer communication and understanding of what a scene will be, early in production.


How does that change the way film production works? Is there quite a lot of setting up?

It will create a lot of efficiencies. I think what's going to happen is that there's going to be one big swirling pot of production, and there are going to be creative gains from that, because now you have more freedom and flexibility to explore different ideas without it being overly costly, if you embrace the idea that there's a little more front-end work to prepare the assets so they look this good in MotionBuilder in real time. MotionBuilder has typically been thought of as a motion capture and pre-visualization tool; what this suggests is that it becomes your central production tool, around which everything swarms and into which everything feeds. You find and create the film inside MotionBuilder, because you can get imagery that looks so awesome out of it now. You can look at it as either delaying the start of pre-vis or shooting, or as accelerating the development of your assets to a higher quality earlier in production, so you can get into motion capture with good-looking assets. They're two ways of looking at the same thing: the end result is a more singular process rather than a stepped, linear progression. It's all happening at the same time.

There are a lot of creative benefits from that. The fact that a director on set can have access to, and begin to visualize, all these different elements at the same time is really freakin' awesome. Ask live-action directors making films with a lot of visual effects, and they'll tell you they feel very divorced and removed from the visual effects pipeline. It's also extremely inefficient: if you're a live-action director filming something, three months later you finally see the result of the visual effects added into your shot, only to realize that's not how you imagined the lighting would work. This enables you to give attention to the camera and lighting decisions at the same time you're capturing the performances. You know that three months later, when all the simulations happen and the super-high-detail model is refined, at its core it's going to look identical to what you were looking at in the motion capture volume three months earlier. That can alleviate a lot of unexpected notes and costly revisions.


I also feel that a lot of visual effects departments have to put themselves in a position of maximum flexibility all the way up until delivery, and laying the foundation for that is expensive. The reason comp departments exist is that you need to spit out 30 passes just in case somebody wants to change a few elements of a scene. A whole department, and render passes that are probably irrelevant and unused 99 percent of the time, have to exist just in case something needs to change at the last minute. If you can enable the director to see the shot in its entirety, nearer to the final product, earlier in production, then you might not need that amount of flexibility on the back end, because those decisions have already been established.
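For readers outside VFX, the flexibility those passes buy works roughly like this: light-select passes sum back to the full beauty render in linear color space, so a compositor can re-balance an individual light without sending the shot back for a re-render. A minimal illustration; the pass names and values here are made up.

```python
# Toy illustration of light-select render passes: in linear color space
# the per-light passes add up to the beauty render, so one light can be
# re-graded in comp without re-rendering. Names and values are made up.

import numpy as np

h, w = 4, 4
passes = {
    "key":  np.full((h, w, 3), 0.6),   # contribution of the key light
    "fill": np.full((h, w, 3), 0.2),   # contribution of the fill light
    "rim":  np.full((h, w, 3), 0.1),   # contribution of the rim light
}

beauty = sum(passes.values())  # what the renderer delivered

# Last-minute note arrives: 'the fill is too hot'. Scale that one pass
# and re-sum; no trip back to the render farm needed.
regraded = passes["key"] + 0.5 * passes["fill"] + passes["rim"]
print(np.allclose(beauty - regraded, 0.5 * passes["fill"]))  # True
```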

So there are all these great creative and efficiency benefits, but put it into the hands of the wrong people, people who fail to plan or don't have a clear vision, and it can get very costly very quickly and become just as messy as any other technology that claims to have benefits.


How does it integrate with live-action?

The way we're trying to set this whole paradigm up is to replicate the live-action workflow. Currently it's operating entirely in a CG environment, which is cool because there's a lot of creative freedom and flexibility there, but there are many ways this could be applied moving forward. Suppose the time is taken to prepare assets that are representative of live-action costumes, like the suits of armor and everything, lit by V-Ray IES lights that replicate real-world lighting, with shaders that replicate real-world material parameters. When all of this is replicating the live-action experience, you can start making creative decisions inside the volume that have direct implications for a live-action set.

Let's say you know you're going to fly to Hong Kong to film a huge action sequence downtown, and you only have a few days to shoot the whole scene. If you can replicate things down to the lighting and physical shader level in your virtual production inside MotionBuilder, and you can block out all your scenes and you know the time of day you're going to be shooting, you can have a very solid understanding of what you're going to need and how you're going to shoot on set the day you get there. You can know that for this close-up, you're going to need this 4Bank Kino Flo 10 feet from the character as a fill light, because they're standing in a shaded alley. You can start to understand and draw comparisons between real-world lighting parameters and your virtual toolkit. And your camera properties too: V-Ray cameras have a lot of parameters that replicate live-action cameras (shutter speed, white balance, film gate, f-stop information), and all that stuff can be carried over into the live-action experience.
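The comparison between a virtual light and a real unit rests on physical quantities: intensity falling off with the inverse square of distance, and exposure set by f-stop, shutter and ISO. A small sketch of both calculations; the luminous-intensity figure below is a made-up placeholder, not a real 4Bank Kino Flo spec.

```python
# Two calculations behind matching a virtual light to a real fixture:
# inverse-square falloff for an idealized point-ish source, and the
# photographic exposure value set by f-stop, shutter and ISO.

import math

def illuminance_lux(intensity_cd, distance_m):
    """Inverse-square law for an idealized point source."""
    return intensity_cd / (distance_m ** 2)

def exposure_value(f_stop, shutter_s, iso=100):
    """Photographic EV, normalized to ISO 100."""
    return math.log2(f_stop ** 2 / shutter_s) - math.log2(iso / 100)

# Hypothetical fill unit of 2,000 cd placed 10 ft (~3.05 m) from the subject:
print(illuminance_lux(2000, 3.05))           # ~215 lux on the subject
print(exposure_value(2.8, 1 / 48, iso=800))  # EV for a typical cine setup
```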


Are the studios interested in it?

There's been a lot of industry attention and a lot of buzz. I've received emails from people I haven't talked to in years saying, 'Everybody's talking about this at this place or that place.' I know people have seen it and are seeing the potential that's there. I'm just the driver of the technology, not the one actually building it (that's Chaos Group); I came to them with the idea and the workflow that I wanted to see happen, and that motivated them to develop V-Ray for MotionBuilder. I'm sure Chaos Group are getting a lot of attention too, with people calling them up and saying, 'When do we get to use this?'

How does it feel to be behind it all?

It's definitely exciting. All of this came from a desire to create a workflow that resonated with me. Through experiencing the frustrations of a standard visual effects pipeline, and realizing there are things about it that frustrate me, I tried to take the best out of it and put that together to create something new. It was all driven by the fact that I want a workflow that will enable me to tell the types of stories I want to tell.



"The thing that I'm really excited and hopeful about is that when this process matures a little more, that it becomes what the DSLRs did for independent filmmaking in the
last decade"


The other thing that I'm really excited and hopeful about is that when this process matures a little more, it becomes what DSLRs were for independent filmmaking in the last decade. That technology got to the point where it enabled an individual or a small team to create content very quickly that could grab people's attention, hold its own and compete against more costly-looking productions. I'm hoping that when this stuff starts to mature a little more, visual effects artists will realize, 'Hey, I know all these tools, and this is a way for me to tell my own stories and create my own stuff.' I'm hoping it's going to empower visual effects artists to tell their own stories.

Then, because more people have access to this technology, you get all these really cool, quirky, unique perspectives and stories being told with technology that, up until this point, was so expensive and so inaccessible that only summer blockbusters could use it. If I can go into a motion capture volume, use this software and get this kind of rendering quality as just one guy with a small team, not a huge studio, then you can have a lot of really cool, unique stories being told that happen to look like Avatar or something.


I guess you could imagine bedroom directors setting this up and coming up with some amazing productions?

Totally. There's a Kinect plug-in for MotionBuilder, and using the Kinect sensor essentially means you have your own motion capture studio. The fidelity is not going to be the greatest, but the idea is that I can take this out of the motion capture volume and do it in my living room. I can be capturing performances with this quality of rendering, in real time, in my apartment.

You're a fine artist by education. How does that translate to where you are at the moment?

I went to the Maryland Institute College of Art as a fine artist doing painting and drawing: representational painting, some abstract work, but very much from the angle of the fine artist. I've always had a technical mindset and a process-oriented nature. Photoshop 1 or 2.0 had just come out in my sophomore year of college, and I started exploring it; the technology aspect and the process aspect kind of engaged me. That led to me taking some 3D courses towards the end of college, which led to a job at a game company after college, followed by Blur.

While I was in college it was pretty much a 50/50 thing. I was still doing all this fine art, painting and drawing, but also exploring the technology; Photoshop and 3ds Max had just come out, so I was exploring those as well. What I really latched onto on the fine art side was plein air landscape painting: these one- or two-hour sitting paintings, where you go out and really quickly capture the moment, and then the lighting and weather change and you're done. It's a kind of passive, responsive relationship to creating art. I'm just going to go out there, find something that I respond to, and very organically and fluidly let it translate through me onto this small canvas for an hour or so, and then it's done and I move on. I'm not trying to assert some grander design or conceptual content into the piece, or overthink it from a designer's standpoint. I'm being passive in that manner, essentially trying to take in the world in front of me and translate it onto the canvas.


That also led to an interest in photography, in that I could go out and frame and compose the world in front of me very quickly, which was very appealing. I think that ties in with what I'm trying to do now: that idea of being an observer. With this motion capture volume, I can go in there with a monitor and essentially observe a performance or a scene that looks really good, fully path-traced. If I can observe it and respond to it in a passive, observational sense, just let it come in, compose to it, and then have it saved on disk, there lives the shot and the moment! There wasn't a whole lot of proactive energy or intent that went into creating that moment. It just existed in front of me and I was able to transcribe it. It's a very cinematic approach.

How are the new offices in Culver City working out?

So far so good! We moved in about a year ago and the space is really beautiful. It's an Eric Owen Moss building (he's a famous architect), with a lot of light and atmosphere, and it has a nice creative vibe and energy to it. The transition's gone well and we've filled it up really quickly, bringing on about 30 or 40 people these last few months. We're working on 60 minutes of material for a big Microsoft project, so we've had to expand quite rapidly and it's a very active, busy time right now.

What does the future hold for Blur and you personally?

We are working on cutscenes for Microsoft's Halo 2 re-envisioning for Xbox One. It's a huge project and we're using it as an opportunity to flesh out our pipeline a little more and put some attention into tools that we previously didn't have time for, and hopefully that lays the foundation for even bigger projects moving forward. For me, on the side, it's about finishing 'Construct' and seeing where that can lead, and hopefully putting myself in a position where I can tell more of my kind of stories in the future.

Have you got an ETA for 'Construct'?

Don't hold me to it, but I'm going to say late fall or early wintertime, but things could change. It's definitely an ambitious undertaking, but the team and I were able to crank out a minute pretty fast and right now I'm just laying the foundation for the rest of the seven minutes. The last few months have been a little sluggish just because it hasn't felt like I've produced a lot more content, but I know we're building out the rest of the assets and the rest of the environments, and I hope that in the late summer and fall that's all going to accelerate and steamroll since we'll have all these assets produced, and then the minutes of footage will start rolling in.


Related links

Check out Kevin Margo's site
Click to find out more about 'Construct'
 
Related Items

Interview

Thor: The Dark World sequences by Blur Studio

Blur Studio creates a 3-minute prologue sequence for Thor: The Dark World, as well as the awesome end-title sequence that closes the film! Read on to find out more ...


Interview

Career path interview with Bruno Câmara

Bruno Câmara is a freelancer with many years of experience as a 3D generalist. We catch up with him to discuss his career path in this interview. ...


Interview

Interview with Pacific Rim CG supervisor, Julian Sarmiento

Pacific Rim CG supervisor, Julian Sarmiento, talks to us about the unique experience of making Guillermo del Toro's prologue for his latest film. ...


Interview

10 concept vehicles with a mean streak!

Ever wish you could drive your fantasy car? Here are some mean-looking cars, tanks and bikes we wish we could take out for a spin! ...
