What has been the hardest part of developing a successful stereo photogrammetry process?
Money. I started a business when I was in debt (which I have now paid off, finally!) so I could have been doing this two or three years ago, but it took time to build the system, save up, and research different solutions and software. First came 2 cameras, then 4, then 18, then 32. I'm now running 74 cameras. It involved a lot of trial and error, a lot of late nights, sacrificing my social life, a huge amount of patience, research, risk and careful, diligent bookkeeping!
You have to be prepared to do the leg work, especially if, like me, you don't have a strong technical, financial or academic background. You have to take some big risks, but the key is research: understanding the market and what others are doing. Ironically, while I was trying to study and build the system, Google searches kept turning up my own work. Everywhere I looked, figuratively and literally, my 3D face and body stared back at me! There are so few others out there doing this kind of research who are willing (and confident enough) to share their exploits and results, so it was a huge learning curve. Because I don't have funding or investment, I have no board of directors or self-imposed NDAs, so I can share most of what I do. I finally started to see good results this year, and I really encourage others to get involved and share their results and research data as well; it helps the industry grow and evolve, instead of hoarding secrets and innovations. Personally, I can't stand the idea of patents; they stifle growth.
How complicated is it to get the raw data into a useable format ready for 3D rendering and animation?
It's a good question; I see a lot of incorrect assumptions online and in forums about how to handle this kind of scan data. It's really quite trivial. Thanks to the amazing new tools in programs like ZBrush (Remesh, AutoUV) and other applications, it's super-easy to retopologize a mesh, or to conform an existing low-poly frame (already UV mapped and grouped) to the scan and re-project, transferring over the details. The whole process can be handled in 10-20 minutes. Luckily the data this system can now produce is 90% cleaner than it used to be, so very minimal clean-up is needed.
What differentiates your company from ones offering a similar service?
It's just me and a few dedicated freelancers overseas, like a talented artist called Alexander Tomchuk. My overheads are very low compared to other studios, and all projects get my full attention. My company isn't founded on technology or money; it was founded purely on a passion for creativity, art and, specifically, the replication of digital humans, something I continue to try to excel at every day. There are also other studios dabbling in similar forms of capture, like Ten24, who are also getting interesting results. It's not just the big VFX studios and research institutes anymore; it's becoming more mainstream and anyone can do it.
Technology is constantly evolving, so after having developed your system how do you anticipate it changing in the future?
I'm already looking into grant funding and investment, so the next phase is to go 4D, capturing at 60-120fps, which is now quite feasible with current technology, but the amount of data involved is astronomical. There is also the looming specter of the next generation of Kinect and TOF (time-of-flight) hardware, which is just on the horizon. Soon I anticipate people will be able to do extremely high-resolution scanning from the comfort of their living rooms. The current generation of Kinects isn't quite there yet (even with the use of ReconstructMe, which is amazing), as the Kinect was really designed for motion analysis and its IR laser pattern is too coarse for fine 3D reconstruction. The camera sensor resolution is also too low at the moment, but I think the next iterations (if Microsoft don't hold back to milk the franchise) will be incredible.
How would you like to see your efforts and technology best used?
I would like to see it used to improve virtual character content in games and film. This technology, specifically used for face capture, is fairly new; some studios are starting to test and adopt it, but I think it will soon become commonplace in most studio pipelines until the next iteration of technology kicks in.
This kind of change and progression is a good thing. It will help tell more engaging and emotional stories, as I think that, at some point, games and film will merge.
From IR, I would like to say thank you to 3DTotal for the opportunity of this interview. Sample scans and 3D models are available to purchase at www.triplegangers.com