Our web applications take advantage of high-end standardized technologies such as WebGL and WebRTC, and are built to always match the requirements of modern mobile browsers.

In our demo of glasses virtual try-on in augmented reality, a neural network detects the user's face, then the web application overlays a 3D rendering of a glasses mesh. The neural network was designed and trained from scratch with our deep learning framework.

Our algorithm is more flexible and easier to improve than classic face detection, which is often based on active shape models. The detection is global: the neural network takes an image as input and simultaneously outputs whether it contains a face, the face orientation, and how to move the input to better match the face. It even learns the ambient and directional lighting, so the 3D glasses model can be lit coherently. Deep-learning-based face detection and tracking is also very robust to poor lighting conditions and image noise.
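One way to picture this multi-task output is as a single vector that the application decodes into named fields. The sketch below is illustrative only: the slot order, sizes, and threshold are hypothetical, not the actual model specification.

```javascript
// Decode one forward pass of the network into a structured detection.
// The slot layout (1 score + 3 rotation + 3 offset + 4 lighting values)
// is a made-up example, not the real model's output format.
function decodeDetection(output) {
  return {
    isFace: output[0] > 0.5,        // face / no-face score
    rotation: output.slice(1, 4),   // face orientation (rx, ry, rz)
    offset: output.slice(4, 7),     // how to move the input to match the face
    lighting: output.slice(7, 11),  // ambient + directional light terms
  };
}

// Example with made-up numbers for a single detection:
const detection = decodeDetection(
  [0.97, 0.1, -0.2, 0.0, 2.5, -1.0, 0.02, 0.8, 0.1, 0.4, 0.3]
);
console.log(detection.isFace); // true
```

Predicting all of these quantities in one pass is what makes the detection "global": there is no separate landmark-fitting stage, so every field stays consistent frame to frame.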

This demo was designed from the start to fit any web browser. It became the first virtual try-on system compatible with Safari when Apple released iOS 11 on September 19, 2017: for the first time, web apps could access the camera through ‘getUserMedia’ in the iOS environment.
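Camera access in the browser goes through the standard ‘getUserMedia’ API. A minimal sketch of how a try-on page might request the front camera (the element id is a hypothetical example, and the call must run in a secure context, i.e. HTTPS):

```javascript
// Request the front-facing camera and pipe the stream into a <video> element.
// 'camera-feed' is an illustrative element id, not from the original demo.
async function startCamera() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'user' }, // front camera, natural for try-on
    audio: false,
  });
  const video = document.getElementById('camera-feed');
  video.srcObject = stream;
  await video.play();
  return video;
}
```

On iOS, the page typically has to start playback from a user gesture (e.g. a tap), so a call like this is usually wired to a button's click handler.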