ML-driven facial interaction
Video courtesy of Tableau Films
A dense grid of 468 landmarks generated by an ML model allowed us to pinpoint the location and attachment points of each of our 26 facial muscles.
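The 468-landmark grid suggests a face-mesh model of the MediaPipe Face Mesh kind. A minimal sketch of how muscle attachment points could be derived from such a mesh is below; the landmark indices and muscle names are hypothetical placeholders, not the project's actual rig data.

```python
# Sketch: deriving muscle attachment points from a 468-landmark face mesh.
# The landmark indices below are illustrative placeholders only.

MUSCLE_ATTACHMENTS = {
    # muscle name -> (origin landmark indices, insertion landmark indices)
    "zygomaticus_major_L": ([447, 366], [291, 375]),
    "frontalis": ([10, 338, 109], [66, 296]),
}

def attachment_point(landmarks, indices):
    """Average the 3D positions of the given landmark indices."""
    pts = [landmarks[i] for i in indices]
    n = len(pts)
    return tuple(sum(p[axis] for p in pts) / n for axis in range(3))

def muscle_endpoints(landmarks, muscle):
    """Return (origin, insertion) 3D points for one muscle."""
    origin_idx, insertion_idx = MUSCLE_ATTACHMENTS[muscle]
    return (attachment_point(landmarks, origin_idx),
            attachment_point(landmarks, insertion_idx))
```

Averaging a small cluster of neighboring landmarks, rather than using a single vertex, makes each attachment point more stable against per-frame landmark jitter.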
A separate ML model estimates visitors' emotions, and whether they may be yawning, enabling live feedback.
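The installation's actual yawn detector is not documented here, but a common heuristic is the mouth aspect ratio computed from a handful of lip landmarks. A minimal sketch, with an illustrative threshold:

```python
# Sketch: yawn detection via the mouth aspect ratio (MAR), a common
# landmark-based heuristic. The threshold value is illustrative, not
# taken from the installation.
import math

def mouth_aspect_ratio(upper_lip, lower_lip, left_corner, right_corner):
    """Vertical mouth opening divided by horizontal mouth width."""
    opening = math.dist(upper_lip, lower_lip)
    width = math.dist(left_corner, right_corner)
    return opening / width

def is_yawning(mar, threshold=0.6):
    # A wide-open mouth (high MAR) sustained over several frames would
    # indicate a yawn; this checks a single frame against the threshold.
    return mar > threshold
```

In practice the single-frame check would be combined with temporal smoothing, since talking briefly produces high MAR values too.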
Human after all
Visitors' reactions to the synthetic emotions are analyzed in real time, providing a glimpse into our inner workings.
The feed from a Sony FCB block camera is funneled through an HDMI capture interface, split into four concurrent streams, and fed into three separate applications: two Python apps that run ML-based face and emotion detection, and a main Unity application that communicates with these side apps.
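The transport between the Python side apps and the Unity app is not specified; one plausible sketch is sending per-frame detection results as JSON datagrams over localhost UDP, which Unity can read with a standard `UdpClient`. The port number and message fields here are assumptions for illustration.

```python
# Sketch: streaming detection results from a Python side app to the main
# Unity application as JSON over localhost UDP. The port and message
# schema are hypothetical, not the installation's documented protocol.
import json
import socket

UNITY_ADDR = ("127.0.0.1", 9000)  # hypothetical port the Unity app listens on

def send_detection(sock, emotion, yawning, face_visible, addr=UNITY_ADDR):
    """Serialize one frame's results and send them as a single datagram."""
    msg = json.dumps({
        "emotion": emotion,        # e.g. "happy", "surprised"
        "yawning": yawning,        # bool from the yawn detector
        "face_visible": face_visible,
    }).encode("utf-8")
    sock.sendto(msg, addr)

# Usage: each side app would call send_detection() once per processed frame.
```

UDP suits this kind of live feedback loop: a dropped frame of detection data is harmless, and the next datagram supersedes it anyway.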