The lab is the first of its kind. Supported by a grant from the National Science Foundation, Carnegie Mellon University's School of Computer Science has built a $250,000 motion capture facility. It will be used for offline and real-time motion capture, as well as wireless virtual reality, in a 10'x20' space.

Equipment
The equipment we are using includes a Vicon 512 motion capture system with the Vicon Workstation software, as well as the Vicon Tarsus real-time server.

The Vicon 512 system has eight high-resolution cameras, which are used to find reflective markers attached to the subject in the space. These 3D marker positions are then distinguished from one another and given their appropriate labels (e.g. "clavicle", "left front head") by either the Workstation software or the real-time server. The real-time server can then map these points onto a simplified skeleton figure. We then send the position and orientation of each bone over to our real-time display system, which applies them to a 3D virtual character. We can then broadcast an NTSC signal to the subject's head-mounted display (currently a Sony Glasstron monitor) so that the subject can see himself or herself in virtual reality.
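To illustrate the data flow, here is a minimal sketch (in C++, with made-up names; it is not our actual display code or the Vicon Tarsus protocol) of how the display side might take a position and orientation for each labeled bone and apply it to the matching joint of a virtual character each frame:

    // Illustrative sketch only; structures and names are assumptions.
    #include <cstdio>
    #include <map>
    #include <string>

    // One labeled bone as it might arrive from the real-time server:
    // a position in the capture space and an orientation quaternion.
    struct BoneTransform {
        float px, py, pz;       // position (e.g. millimeters)
        float qw, qx, qy, qz;   // orientation as a quaternion
    };

    // The virtual character is just a set of named joints whose
    // transforms are overwritten every frame.
    struct Character {
        std::map<std::string, BoneTransform> joints;

        // Apply one frame of capture data: copy each bone transform
        // onto the joint with the same name.
        void applyFrame(const std::map<std::string, BoneTransform>& frame) {
            for (const auto& bone : frame) {
                joints[bone.first] = bone.second;
            }
        }
    };

    int main() {
        Character avatar;

        // A fake single-bone frame standing in for real-time server data.
        std::map<std::string, BoneTransform> frame;
        frame["clavicle"] = BoneTransform{0.0f, 1400.0f, 120.0f,
                                          1.0f, 0.0f, 0.0f, 0.0f};

        avatar.applyFrame(frame);

        const BoneTransform& b = avatar.joints["clavicle"];
        std::printf("clavicle at (%.1f, %.1f, %.1f)\n", b.px, b.py, b.pz);
        return 0;
    }

In the actual system, the renderer would read these joint transforms each frame to pose the 3D character before the NTSC signal is sent to the head-mounted display.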

The Guide
See our 'zoom' guide for information on how to use the equipment in the space, and our marker placement guide to learn where all those reflective markers go.

more photos
hardware diagram
