A few years ago, if you asked most people about motion capture, they would describe actors in tight body suits, with bright lights or reflective balls attached as motion markers. These markers let performers move around a stage while cameras tracked their motion, which was then digitized using image-processing software. While this approach makes it easier for motion capture systems to track people, it has a number of inherent drawbacks.
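At its core, the marker-based tracking described above comes down to locating bright blobs in each camera frame. The following sketch is purely illustrative (no real system's pipeline works on a toy 5×5 grid): it thresholds bright pixels in a hypothetical grayscale frame and takes their centroid as the marker position.

```python
# Illustrative sketch of marker detection: threshold bright pixels in a
# grayscale frame (values 0-255) and return the centroid of the blob.
# The frame, threshold value, and function name are all hypothetical.

def find_marker(frame, threshold=200):
    """Return the (row, col) centroid of pixels at or above threshold, or None."""
    bright = [(r, c)
              for r, row in enumerate(frame)
              for c, value in enumerate(row)
              if value >= threshold]
    if not bright:
        return None  # No marker visible in this frame.
    row_mean = sum(r for r, _ in bright) / len(bright)
    col_mean = sum(c for _, c in bright) / len(bright)
    return (row_mean, col_mean)

# A 5x5 frame with a bright 2x2 "marker" in the lower-right corner.
frame = [
    [10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10],
    [10, 10, 10, 250, 250],
    [10, 10, 10, 250, 250],
]
print(find_marker(frame))  # -> (3.5, 3.5)
```

A production system would repeat this per camera and per marker, then triangulate the 2D centroids from multiple calibrated cameras into 3D positions.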
First of all, the markers need to be applied very precisely to specific motion points; otherwise the tracking becomes inaccurate, which makes the results useless for subsequent animation as well as for other applications such as medical diagnostics. The time and effort needed to position these markers precisely adds significantly to the ongoing costs of motion capture. And when subjects are uncooperative, as is often the case with children undergoing clinical imaging, placing the markers and making sure they stay in place is next to impossible.
Image source: http://news.cnet.com/i/bto/20081203/_MG_5488.jpg
Secondly, the use of markers limits the types of motion that can be tracked. For example, when two or more people interact in a scene, there is a strong possibility that some of the markers will be occluded from the cameras' view. When this happens, the motion capture system stops working properly for lack of input data, which limits traditional motion capture to situations where occlusion does not occur, such as solo scenes or scenes with little or no interpersonal contact. Clearly this is at odds with the needs of the animation and gaming industries.
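One common way to cope with brief occlusions is to interpolate a marker's position across the frames where it disappeared. The sketch below is a simplified illustration of that idea, not any vendor's actual algorithm: it fills short gaps in a 2D marker track by linear interpolation between the last and next frames where the marker was visible.

```python
# Illustrative sketch of occlusion gap-filling: None marks frames where
# the marker was hidden; short gaps are filled by linear interpolation.
# The function name and data layout are hypothetical.

def fill_gaps(track):
    """Linearly interpolate occluded frames in a list of (x, y) positions."""
    filled = list(track)
    for i, pos in enumerate(track):
        if pos is not None:
            continue
        # Nearest frames on either side where the marker was visible.
        before = next((j for j in range(i - 1, -1, -1)
                       if track[j] is not None), None)
        after = next((j for j in range(i + 1, len(track))
                      if track[j] is not None), None)
        if before is None or after is None:
            continue  # Gap at the start or end of the capture; leave it.
        t = (i - before) / (after - before)
        (bx, by), (ax, ay) = track[before], track[after]
        filled[i] = (bx + t * (ax - bx), by + t * (ay - by))
    return filled

track = [(0.0, 0.0), None, None, (3.0, 3.0)]
print(fill_gaps(track))  # -> [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
```

Interpolation only works for short gaps where the motion is roughly linear; long occlusions or sharp direction changes during the gap still defeat it, which is why markerless systems that track the body itself are so attractive.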
However, the launch of Microsoft’s Kinect platform in October 2010 changed the public perception of what a motion capture system could accomplish. Natural body gestures replaced waving batons and joysticks, providing a level of interactivity that home users had never experienced before. Still, this basic motion capture capability was only a shadow of what was already possible with a professional-grade system. Most commercial systems today still use old-fashioned lights and other markers, as these simplify processing, but the first markerless system actually reached the market in 2007, a full three years before Microsoft launched Kinect.
Of course, the most obvious application of motion capture systems is the creation of avatars for the gaming and animation industries. However, these systems are also finding surprising uses in a number of other inventive applications now that time-consuming marker placement is no longer needed. For example, BMC, a leading Swiss bicycle manufacturer, uses motion capture technology to track a customer’s precise movements and build a model of their physical behavior. This model is then used to customize each bicycle so that it is adapted perfectly to the individual rider.
Another area where motion capture is starting to play a larger role is military training. For instance, trainers can be inserted in real time into simulated training environments, providing more realistic and interactive training for soldiers. A further advantage is that the trainers and soldiers do not have to be in the same location, which drives down costs and makes the best use of scarce subject matter experts.
Motion capture is also being used for non-military training. For example, Marshall University’s 3D Visualization Lab is working on virtual-world simulators for mine safety and rescue training. Clearly, given the dangers of real-world situations, a simulated environment is a pragmatic way of providing training that saves lives. There is government interest in this also – Marshall University received $4 million from the federal government to develop this technology after two mining disasters occurred in 2006.
Image source: http://www.wired.com/images_blogs/photos/uncategorized/2007/08/29/holodeck.jpg
More broadly, there is significant investment in motion capture research today, and there seem to be few limits to the creativity of the academic community. For example, researchers at the University of Michigan have recently been working on total-immersion environments that allow users of interactive applications to enter complete virtual worlds that interact with them on a detailed level. While the concept of virtual reality has been around for decades, recent advances in software algorithms and computer processing power have finally made it possible for people to suspend their disbelief and behave in a virtual world in much the same way that they would in a real one. Let’s be clear – we have not reached the point of having a Star Trek holodeck, but at the pace things are advancing, who knows what the future holds.