Write once, run everywhere

Deep learning based computer vision

We have created a deep learning technology that brings the power of artificial neural networks into web apps. It is so fast that we can analyze a video feed in real time, in the web browser.

After more than two years of research and development, we completed a highly optimized end-to-end system that removes all kinds of platform constraints. Our computer vision solution works on websites (in the web browser), in mobile applications (as Progressive Web Apps), in desktop applications (with Electron) and even on embedded hardware (using the NVIDIA Jetson).

We invented a hardware-agnostic computer vision solution. Now we aim to deploy it widely, to make existing cameras smarter.

Real-time video analysis for webapps
For use cases like object detection, emotion detection and more, through regular cameras.

Object detection in the browser

Our webapps take advantage of high-end standardized technologies like WebGL and WebRTC, and are always built to match the requirements of modern mobile browsers.

In our demonstration of virtual try-on in augmented reality, the webapp detects the user's face and overlays virtual glasses. We first designed a neural network from scratch within our WebGL deep learning technology. This neural network takes an image as input and outputs whether it contains a face, the position and rotation of the face, and even lighting parameters. We can then render the glasses in the right place, with the right orientation and coherent lighting. This process must run for each new frame of the video feed, dozens of times per second. That’s why efficiency and optimization matter. It marks a major breakthrough in the VTO area.
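As a sketch of this per-frame pipeline, the function below maps a hypothetical detector output to an overlay placement in pixels. The field names (`isFace`, `x`, `y`, `s`, rotation angles) and the detector and renderer calls are assumptions for illustration, not Jeeliz's actual API:

```javascript
// Map one frame's detection output to a glasses overlay placement.
// Hypothetical detection format: coordinates in [-1, 1], scale relative
// to the viewport, rotation as Euler angles in radians.
function placeGlasses(detection, viewportW, viewportH) {
  if (detection.isFace < 0.5) return null; // no face detected this frame
  return {
    left: (detection.x * 0.5 + 0.5) * viewportW,  // [-1, 1] -> pixels
    top: (0.5 - detection.y * 0.5) * viewportH,   // y axis points up
    scale: detection.s,
    rotation: [detection.rx, detection.ry, detection.rz]
  };
}

// In a browser, this would run inside a requestAnimationFrame loop,
// once per video frame (detector and renderGlasses are hypothetical):
//   function loop() {
//     const det = detector.run(videoElement);
//     const pose = placeGlasses(det, canvas.width, canvas.height);
//     if (pose) renderGlasses(pose);
//     requestAnimationFrame(loop);
//   }
```

Running the placement step dozens of times per second is only feasible if the detection itself keeps up, which is exactly where the GPU optimizations described below come in.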

This demo was initially designed to work in any web browser. It was the first virtual try-on system compatible with Safari when Apple released iOS 11 on September 19th, 2017. For the first time, webapps could capture the video feed from the camera using the MediaStream API in the iOS environment.
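A minimal sketch of that capture step using the standard MediaStream API (`getUserMedia`); the specific constraints shown are illustrative choices, not necessarily the ones the demo uses:

```javascript
// Request the front camera; audio is not needed for a try-on demo.
const constraints = {
  video: { facingMode: 'user', width: { ideal: 640 }, height: { ideal: 480 } },
  audio: false
};

// Attach the camera stream to a <video> element (browser-only code).
function startCamera(videoElement) {
  return navigator.mediaDevices.getUserMedia(constraints).then(stream => {
    videoElement.srcObject = stream;
    return videoElement.play();
  });
}
```

On iOS, this only works from a secure (HTTPS) context and, since iOS 11, only in Safari-based browsers.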

We bet on GPUs
A highly optimized, 100% GPU workflow designed for large-scale deployment.

Using our unique deep learning framework, programmed in JavaScript/WebGL, we can create and train outstanding artificial neural networks that run entirely on the GPU, without friction.

Betting on the client GPU is relevant given the increasing GPU power available on modern computers and phones. GPUs are less affected by the slowdown of Moore’s law than CPUs because they can scale horizontally, so their computing power still increases significantly with each new device generation.

By transforming neural network weights into WebGL textures and implementing common layers as fragment shaders, we can use the graphics capabilities of browsers designed for 3D games to speed up the execution of neural networks.
By designing a 100% GPU workflow, we always take advantage of the maximum amount of computational power available in a device.
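As an illustration of the idea (the packing layout and shader below are a sketch, not Jeeliz's actual implementation): weights are flattened into RGBA float texels, four values per texel, then read back inside a fragment shader that computes one neuron output per pixel.

```javascript
// Pack a flat weight array into RGBA float texel data, 4 weights per
// texel, zero-padded, ready to upload with gl.texImage2D.
function packWeightsRGBA(weights, texWidth) {
  const texHeight = Math.ceil(weights.length / (4 * texWidth));
  const data = new Float32Array(4 * texWidth * texHeight);
  data.set(weights);
  return { data, texWidth, texHeight };
}

// Illustrative fragment shader: each output pixel accumulates a dot
// product between input activations and one weight row, then applies
// a ReLU activation. The layer size (64 texels) is a made-up example.
const DENSE_RELU_FS = `
  precision highp float;
  uniform sampler2D uInput;    // previous layer activations
  uniform sampler2D uWeights;  // weights packed as RGBA floats
  uniform float uTexWidth;     // weight texture width in texels
  varying vec2 vUV;            // which neuron this pixel computes
  void main() {
    float acc = 0.0;
    for (float i = 0.0; i < 64.0; i += 1.0) {
      vec2 uv = vec2((i + 0.5) / uTexWidth, vUV.y);
      acc += dot(texture2D(uInput, uv), texture2D(uWeights, uv));
    }
    gl_FragColor = vec4(max(acc, 0.0)); // ReLU
  }
`;
```

Rendering this shader into a framebuffer-attached texture produces the layer's activations, which then serve as `uInput` for the next layer, keeping the whole forward pass on the GPU.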

Why is Jeeliz technology so unique?
Unlike TensorFlow.js, Jeeliz libraries are already able to solve real-life problems.

We are often asked: why should I use Jeeliz libraries when TensorFlow.js seems to do the same job? TensorFlow.js also runs in the browser in JavaScript/WebGL, its developer community is huge and it is made by Google…

It is true that TensorFlow.js is great. But that framework was built with a top-down approach: JavaScript/WebGL is just another way to train and run deep learning models in the browser. In our case, we started from what we have in a web development environment: the JavaScript/WebGL workflow. We then constrained the neural network structure so that we could apply advanced optimizations. That is why our technology is ready for real-time video processing.

Read more: Why is our technology unique?