MotionScan

  Process type: Motion capture
  Product(s): L.A. Noire
  Leading companies: Depth Analysis; Team Bondi
  Year of invention: 2011
  Developer(s): Depth Analysis

MotionScan is a motion capture technology developed by Australian company Depth Analysis, a sister company of Team Bondi.[1] It was first showcased at E3 2010, and its use was first popularized by the successful 2011 video game L.A. Noire.[2] Unlike other motion capture technologies, MotionScan relies on 32 high-definition cameras to capture an actor's performance without requiring them to wear a special suit.[2][3]

Technology

Processing

MotionScan captures the views from 32 cameras arranged as stereo pairs trained on the face of an actor. The video streams from these camera pairs are processed using stereo vision techniques to produce a 3D model for each frame of the combined footage. The output also includes a composite texture map built from all of the views and a normal map. Processing runs in the background while capture continues, and various parameters can be tuned depending on the quality and resolution the customer requires. Once these parameters have been selected, the data is placed in a server queue for processing, and turnaround time depends on the number of processors available. Depth Analysis uses a 64-blade cluster to process data, but more blades can be added if the customer requires a quicker turnaround. No user intervention is required during the processing stage, so the data is not altered or interpreted in any way.[4]
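
As an illustration of the stereo reconstruction step described above, the following Python sketch estimates per-pixel depth for a single rectified stereo pair using OpenCV block matching. This is not Depth Analysis code: the camera parameters (focal length and baseline) and the use of block matching are assumptions for the example only.

  import cv2
  import numpy as np

  # Illustrative sketch only; not the MotionScan pipeline.
  # Assumes a rectified 8-bit grayscale stereo pair from one camera pair,
  # with a known focal length (in pixels) and baseline (in metres).

  def depth_from_stereo_pair(left_gray, right_gray, focal_px, baseline_m):
      # Block matching estimates horizontal disparity between the two views.
      matcher = cv2.StereoBM_create(numDisparities=128, blockSize=15)
      disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

      # Depth is inversely proportional to disparity: Z = f * B / d.
      depth = np.zeros_like(disparity)
      valid = disparity > 0
      depth[valid] = focal_px * baseline_m / disparity[valid]
      return depth  # per-pixel depth map for this frame

  # A full system would fuse the depth maps from every stereo pair, for every
  # frame, into a single textured head mesh with an accompanying normal map.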

Tools

Depth Analysis has developed a tools pipeline for game customers and CG productions. A plug-in for Autodesk 3ds Max and MotionBuilder allows CG customers building cut scenes for games or CGI films to visualize the captured heads of actors in real time within MotionBuilder, which means that CGI teams can quickly light their shots using close approximations of the final dialogue and performance. For game teams, Depth Analysis provides a server infrastructure that allows designers and artists to visualize lines of dialogue and types of performance, and to mix, match and trim these lines for in-game use. Depth Analysis captured 50 hours' worth of raw data for L.A. Noire and used over 21 hours of final dialogue in the game. MotionScan footage is played back in-game using code that can plug into any game engine, and is compressed and decompressed using a video codec.[5]
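
The in-game playback described above can be sketched in outline: compressed per-frame face data is decoded on demand and applied to the character's head mesh. The Python example below is purely illustrative; the frame format (zlib-compressed vertex positions) and the FacePerformance class are assumptions for this sketch rather than MotionScan's actual interface, which uses a video codec.

  import zlib
  import numpy as np

  # Illustrative sketch only: a hypothetical per-frame playback wrapper.
  # Each "frame" here is a compressed blob holding vertex positions for the
  # head mesh; a real system would also carry texture and normal-map data.

  class FacePerformance:
      def __init__(self, compressed_frames, vertex_count, fps=30):
          self.frames = compressed_frames  # list of zlib-compressed byte blobs
          self.vertex_count = vertex_count
          self.fps = fps

      def decode_frame(self, index):
          # Decompress one frame and reshape it into (N, 3) vertex positions.
          raw = zlib.decompress(self.frames[index])
          return np.frombuffer(raw, dtype=np.float32).reshape(self.vertex_count, 3)

      def vertices_at(self, time_seconds):
          # Map playback time to the nearest captured frame.
          index = min(int(time_seconds * self.fps), len(self.frames) - 1)
          return self.decode_frame(index)

  # In an engine integration, vertices_at() would be called once per rendered
  # frame and the result uploaded to the head mesh's vertex buffer.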

References

  1. Crecente, Brian (November 13, 2010). "Why Gameplay In L.A. Noire’s New Trailer May Not Matter". Kotaku. Retrieved 4 September 2011.
  2. Horner, Kyle. "The 'Fallout' of MotionScan". 1UP.com. Retrieved 4 September 2011.
  3. Peckham, Matt (4 February 2011). "In L.A. Noire Asking Questions Trumps Firing Bullets". PCWorld. Retrieved 4 September 2011.
  4. "Processing". Depth Analysis. Retrieved 4 September 2011.
  5. "Tools & Pipeline". Depth Analysis. Retrieved 4 September 2011.