MIT researchers have developed a system for gesture-based computing using an ordinary webcam, a pair of brightly colored lycra gloves, and software that combs a database of images. The multicolored gloves cost about a dollar to make, a bargain for Minority Report-style interfacing.
The system works by translating gestures made with the gloved hand into corresponding gestures of a 3-D model of the hand on screen, with barely any discernible lag. The glove is covered with 20 irregularly shaped patches in 10 different colors that the system can reliably distinguish. After trying dots and patches of various shapes and colors, the team found that the current design provides the best results, says Robert Wang, a graduate student in the Computer Science and Artificial Intelligence Laboratory who developed the new system together with Jovan Popović, an associate professor of electrical engineering and computer science.
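One building block such a system needs is classifying each webcam pixel as one of the glove's distinguishable colors. A minimal sketch of that step, assuming a nearest-color match in RGB space, might look like the following; the palette values and the distance threshold here are hypothetical placeholders, not the actual glove colors.

```python
# Hypothetical reference colors for a few of the glove's patches.
# A real glove would have 10 such entries, chosen to be far apart
# under the camera's lighting conditions.
GLOVE_PALETTE = {
    "red": (200, 30, 30),
    "green": (30, 180, 60),
    "blue": (40, 60, 200),
}

def classify_pixel(rgb, palette=GLOVE_PALETTE, max_dist=80.0):
    """Return the name of the nearest palette color, or None if the
    pixel is farther than max_dist from every reference color
    (i.e., it is treated as background)."""
    best_name, best_d2 = None, max_dist ** 2
    for name, ref in palette.items():
        d2 = sum((a - b) ** 2 for a, b in zip(rgb, ref))
        if d2 < best_d2:
            best_name, best_d2 = name, d2
    return best_name
```

Running the classifier over every frame pixel yields a color-labeled silhouette of the glove, which is what makes the irregular multicolor pattern useful: each patch arrangement looks different from each hand pose.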
Previous prototypes used reflective or colored tape attached to the fingertips, but that meant only the fingertips were registered. The new system captures the 3-D configuration of the whole hand and fingers, according to Wang. “We get how your fingers are flexing.”
As for the algorithm, once a webcam has captured an image of the glove, the software searches a database of several hundred megabytes of stored glove images for the closest match. Once it finds one, it looks up the corresponding hand position and returns an answer in a fraction of a second, since there is no need to calculate the relative positions of the fingers, palm, and back of the hand on the fly.
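The lookup idea described above can be sketched as a nearest-neighbor search: pair each stored image descriptor with a precomputed hand pose, then answer a query by finding the closest descriptor and returning its pose. The descriptors and pose labels below are toy stand-ins for illustration, not the actual data the MIT system uses.

```python
import math

def nearest_pose(query, database):
    """database is a list of (descriptor, pose) pairs. Return the pose
    whose descriptor is closest to the query in Euclidean distance --
    no per-frame joint-angle computation required."""
    best_pose, best_d = None, math.inf
    for descriptor, pose in database:
        d = math.dist(query, descriptor)
        if d < best_d:
            best_pose, best_d = pose, d
    return best_pose

# Toy database: three descriptors, each tagged with a stored hand pose.
db = [
    ((0.0, 0.0, 1.0), "open palm"),
    ((1.0, 0.2, 0.1), "closed fist"),
    ((0.5, 0.9, 0.3), "pointing index"),
]
```

A query descriptor such as `(0.9, 0.1, 0.2)` would land nearest the "closed fist" entry. The trade-off is the one the article notes: memory for speed, since a large precomputed database replaces on-the-fly geometric calculation.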
Applications for the low-budget gesture-based technology include video games and 3-D manipulation of models of commercial products or large civic structures.