Projection mapping has for years transformed buildings – from Buckingham Palace to H&M’s Oxford Circus flagship – as well as store windows and all manner of objects into canvases for spectacular shows. Facial projection mapping, however, has been less widespread, first appearing on our radar in 2014 when we previewed Jean Paul Gaultier’s retrospective, Inside the World of Jean Paul Gaultier, at the Barbican in London. The French fashion legend’s face was beamed onto the heads of mannequins wearing his most iconic looks, narrating the exhibit to visitors as if he were there in person.
We were reminded of it again while strolling past Galeries Lafayette’s extravagant Christmas windows on a visit to Paris, where the mannequins winked at admiring passers-by, and we were intrigued by the idea of animating mannequins through realistic projection mapping.
“I loved that it brought the faces to life in a spectacularly life-like way,” Nude Mannequins’ Stefan Parsons, who saw the Galeries Lafayette windows, told us. “We often use realistic faces on mannequins, like the bespoke makeup and wigs used on our project for Missguided’s first store, but these faces are difficult to change in any quick or economical way. You could change their clothes every day, so it is a great idea that projection allows this to happen easily with faces too.”
Taking projection onto static “faces” to a new level, we were recently mesmerised by a dance performance video that uses real-time facial projection mapping to change the look of the dancers’ faces. Over the course of the video, the dancers are made to look like skulls, big-toothed clowns and terrifying dolls.
The video features the Japanese dance duo Ayabambi, who appeared in the video for Madonna’s 2015 single ‘Bitch, I’m Madonna’ as well as in New York-based designer Alexander Wang’s autumn/winter campaign the same year.
Researchers at the University of Tokyo developed a high-speed projector capable of displaying 1,000 frames per second – claimed to be the world’s fastest. By pairing the projector with a 3D-mapping system and precise sensor tracking, the video’s creators were able to change the look of the dancers’ faces, as well as the video’s overall aesthetic, in real time.
While this facial-tracking form of projection mapping is far more sophisticated than anything likely to be needed on a mannequin, it makes us excited about how the technique might develop in future.
Watch the video here.
Photo credit: Claire Mead