Write once, run everywhere
We have created a deep learning technology that brings the power of artificial neural networks into web apps. It is so fast that we can analyze a video feed in real time in the web browser.
After more than 2 years of research and development, we completed a highly optimized end-to-end system that frees us from platform constraints. Our computer vision solution works on websites (in the web browser), in mobile applications (as Progressive Web Applications), in desktop applications (with Electron) and even on embedded hardware (using the Nvidia Jetson).
We invented a hardware-agnostic computer vision solution. Now we aim to deploy it widely, to make existing cameras smarter.
Real-time video analysis for webapps
For multiple use cases, such as object detection and emotion detection, through regular cameras.
Our webapps take advantage of high-end standardized technologies like WebGL and WebRTC, while always staying within the capabilities of modern mobile browsers.
In our demonstration of virtual try-on in augmented reality, the webapp detects the user's face and overlays virtual glasses. We first designed a neural network from scratch within our WebGL deep learning technology. This neural network takes an image as input and outputs whether it contains a face, the position and rotation of that face, and even lighting parameters. We can then render the glasses at the right place, with the right orientation, and coherently lit. This process must run for each new frame of the video feed, so dozens of times per second. That is why efficiency and optimization matter. It marks a major breakthrough in the virtual try-on (VTO) area.
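The per-frame pipeline described above can be sketched as a render loop. The function names `detectFace` and `renderGlasses` are hypothetical placeholders standing in for the unpublished Jeeliz internals; only the loop structure is illustrated here.

```javascript
// Hypothetical per-frame tracking loop: run the face detector on the
// current video frame, render the glasses if a face is found, then
// schedule the next frame. Runs dozens of times per second in a browser.
function startTrackingLoop(video, detectFace, renderGlasses) {
  function onFrame() {
    // detectFace is assumed to return:
    // { isFace, position, rotation, lighting }
    const result = detectFace(video);
    if (result.isFace) {
      renderGlasses(result.position, result.rotation, result.lighting);
    }
    requestAnimationFrame(onFrame); // sync with the display refresh rate
  }
  requestAnimationFrame(onFrame);
}
```

Using `requestAnimationFrame` rather than a timer keeps the detection and rendering synchronized with the browser's repaint cycle.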
This demo was designed from the start to work in any web browser. It was the first virtual try-on system compatible with Safari when Apple released iOS 11 on September 19, 2017. For the first time, webapps could capture the video feed from the camera in the iOS environment, using the MediaStream API.
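Capturing the camera feed with the MediaStream API boils down to a `getUserMedia` call. A minimal sketch (the constraints shown are illustrative, not Jeeliz's actual configuration):

```javascript
// Minimal sketch: request the camera via the MediaStream API
// (navigator.mediaDevices.getUserMedia) and attach the resulting
// stream to a <video> element, so each frame can then be analyzed.
async function startCamera(videoElement) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'user' }, // front camera on mobile devices
    audio: false
  });
  videoElement.srcObject = stream;
  await videoElement.play();
  return stream;
}
```

This is the call that only became available to webapps on iOS with the release of iOS 11.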
We bet on GPUs
A 100% GPU workflow, highly optimized and designed for large-scale deployment.
Betting on the client GPU is relevant given the increasing GPU power available on modern computers and phones. GPUs are less affected by the slowing of Moore's law than CPUs because they can scale horizontally, so their computing power still increases substantially with each new device generation.
By transforming neural network weights into WebGL textures and implementing common layers as fragment shaders, we can use the graphics capabilities of browsers designed for 3D games to speed up the execution of neural networks.
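A minimal sketch of this idea, assuming a simple packing layout (the function name and layout are illustrative, not Jeeliz's actual internals): weights are packed four per texel into the RGBA channels of a floating-point texture, and a layer's activation becomes a fragment shader that runs once per output pixel.

```javascript
// Sketch: pack a layer's weights into RGBA texture data, 4 weights
// per texel, so the GPU can read them as an ordinary 2D texture.
function packWeightsToRGBA(weights) {
  // Pad to a multiple of 4 so each texel (R, G, B, A) holds 4 weights.
  const texelCount = Math.ceil(weights.length / 4);
  const data = new Float32Array(texelCount * 4); // zero-padded
  data.set(weights);
  // Upload with gl.texImage2D(..., gl.RGBA, gl.FLOAT, data) in a
  // WebGL context supporting float textures (OES_texture_float).
  return { data, width: texelCount, height: 1 };
}

// A layer then becomes a fragment shader: e.g. a ReLU activation
// applied to 4 packed values at once, one texel per fragment.
const reluFragmentShader = `
  precision highp float;
  uniform sampler2D uSource; // packed activations from previous layer
  varying vec2 vUV;
  void main(void) {
    vec4 x = texture2D(uSource, vUV);
    gl_FragColor = max(x, 0.0); // ReLU on all 4 channels
  }
`;
```

Rendering a full-screen quad with such a shader evaluates the layer for every output neuron in parallel, which is exactly the workload GPUs were built for.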
By designing a 100% GPU workflow, we always take advantage of the maximum computational power available on a device.
Why is Jeeliz technology so unique?
Unlike Tensorflow.js, Jeeliz libraries are already able to solve real-life problems.
Read more: Why is our technology unique?