A recent study highlights how hidden assumptions in motion capture technology shape AI systems applied to the human body. Researchers from the University of Michigan, including Abigail Jacobs, an assistant professor at the School of Information and Center for the Study of Complex Systems, point out that inaccurate depictions of the human body can make certain AI applications unsafe for people who do not match the assumed body types. These assumptions, which often treat the “standard” or “representative” body as that of a healthy adult man, have been carried into AI through motion capture technologies. The practice dates back to methods from the 1930s and continues to shape modern systems, potentially harming people whose bodies do not conform to these narrow standards.
Motion capture systems, used in applications ranging from video game animation to health diagnostics and workplace ergonomics, rely on sensors and cameras to collect data about how a body moves. That data is then fitted to “digital skeletons” modeled on computers. The study finds, however, that these systems often rest on stylized and flawed assumptions about human bodies, undermining their accuracy and safety. The authors draw a parallel to the development of color photography, which was initially calibrated to best capture lighter skin tones and consequently represented darker-skinned people poorly.
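To make the idea of a “digital skeleton” concrete, the sketch below shows one common way such a model can be represented: a hierarchy of named joints with 3D positions, into which captured marker data would be fitted. This is a minimal illustrative Python example; the joint names, coordinates, and structure are assumptions for illustration, not the model used by any particular system or by the study.

```python
from dataclasses import dataclass, field

@dataclass
class Joint:
    """One joint in a hypothetical digital skeleton."""
    name: str
    position: tuple            # (x, y, z) position in meters
    children: list = field(default_factory=list)

def build_skeleton():
    # A toy pelvis-to-hand chain; real systems track dozens of joints,
    # and the assumed proportions encode a particular "standard" body.
    hand = Joint("hand", (0.6, 1.2, 0.0))
    elbow = Joint("elbow", (0.3, 1.3, 0.0), [hand])
    shoulder = Joint("shoulder", (0.0, 1.4, 0.0), [elbow])
    return Joint("pelvis", (0.0, 1.0, 0.0), [shoulder])

def count_joints(root):
    # Recursively walk the hierarchy, counting this joint plus descendants.
    return 1 + sum(count_joints(child) for child in root.children)

skeleton = build_skeleton()
print(count_joints(skeleton))  # 4 joints in this toy chain
```

The key point the study raises applies even to a toy model like this: whatever body proportions are baked into the template skeleton become the reference against which every captured body is measured.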
The implications reach beyond the entertainment and health industries into areas such as vehicle safety, where crash test dummies modeled on male bodies have contributed to higher injury rates for women and children. The researchers propose an analytical framework for scrutinizing and mitigating these hidden assumptions in technology design and development, calling for a more inclusive approach that accurately represents the diversity of human bodies.