Tue, 05 Feb 2019
Applications of Motion & Performance Capture
Mocap. What is it? Well, to give you a simple idea, here is a clip from a short video we produced for fun a few months back. In this short promo piece, we had an actor dance around the office and, through computer magic, turned them into a hairy dance monster.
In the past three decades, the use of computer-generated imagery has pushed creativity beyond all known limits. Early CGI characters tended to look quite wooden in their movement because computers smoothly interpolated between a few key poses.
In the 90s, systems were developed that allowed a precisely measured array of cameras to capture footage of an actor wearing markers strategically placed on their body. By comparing the relative position of each marker across the camera feeds, a point location could be calculated in three-dimensional space for each one. Drawing lines between the points created a basic ‘skeleton’ which could then drive a digital character, giving it the natural movement of a living actor. Of course, there is much more to creating a convincing-looking character, but that’s beyond the scope of this article. This technology became known as ‘Motion Capture’, or simply ‘Mocap’ for short.
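To give a flavour of the maths involved, here is a deliberately simplified sketch in Python. Real mocap systems solve for marker positions across many calibrated cameras at once; this toy version takes just two cameras, each reporting a ray from its optical centre toward the same marker, and estimates the marker's 3D position as the midpoint of the closest points on those two rays. All the names and numbers here are illustrative, not from any real system.

```python
# Toy two-camera triangulation (midpoint method).
# A ray is an origin point plus a direction vector, both 3D tuples.

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def along(p, d, t): return tuple(x + t * y for x, y in zip(p, d))

def triangulate(o1, d1, o2, d2):
    """Midpoint of the closest points on rays o1 + t*d1 and o2 + s*d2."""
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only if the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1, p2 = along(o1, d1, t), along(o2, d2, s)
    return tuple((x + y) / 2 for x, y in zip(p1, p2))

# Two cameras a metre either side of the origin, both sighting a
# marker two metres apart in depth at (0, 0, 5):
marker = triangulate((-1.0, 0.0, 0.0), (1.0, 0.0, 5.0),
                     (1.0, 0.0, 0.0), (-1.0, 0.0, 5.0))
print(marker)  # → (0.0, 0.0, 5.0)
```

In practice the rays never intersect exactly because of sensor noise, which is why production systems use many cameras and a least-squares solve rather than a single midpoint, but the underlying idea is the same.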
In the years since, the technology and methods for motion capture have improved, allowing for greater levels of fidelity and even the capture of several actors at once. Along the way, methods have been devised for capturing an actor’s facial performance as well as the movements of their body. This is known as ‘Full Performance Capture’, as it takes the actor’s entire performance and digitally recreates it inside the computer. Every nuanced detail and emotion can then be transplanted onto a digital avatar.
Besides film and television, this exciting technology has had many medical, sports and military applications. Likewise, mocap – and later performance capture – has been instrumental in video game development, giving greater and greater depth to the characters placed under your control. Modern games require many hundreds of animations per character, and using mocap to produce them saves a lot of time. With real-time graphics achieving pseudo-cinematic quality, characters are able to emote in ever more life-like ways thanks to the talented actors driving them.
Why stop there, though? Technology has improved to the point that devices such as Microsoft’s Kinect use cameras to track and compute a player’s position and movement in the room, allowing them to use their entire body as a controller. This is done more simply, using a camera and an infrared depth sensor, without the need for any physical trackers. Similarly, home VR systems such as Oculus Rift, Vive and PlayStation VR use cameras to track a player’s movements, immersing them in the game world like never before. These systems rely on trackable headsets and controllers (as well as gyroscopes and accelerometers in the devices) to capture and process the movements of the player with low enough latency to convince them that they’re genuinely somewhere else.
Applications for motion and performance capture are ever-growing, but it is in video that we find the best implementation. Having dabbled with trackerless mocap in the past with some excellent results, we're sold on the benefits of the technology and have invested heavily in our own mocap suits, which offer us a far greater degree of flexibility and dynamism in the work we’re able to produce. We remain excited about the practical uses of mocap, and the ways in which we can incorporate it into our creative pipeline.