Cutting-edge technology and APIs to unlock the potential of webcams.
We built a computer vision solution that is hardware agnostic.
A single solution that runs on mobiles, desktops and embedded systems.
At JEELIZ, we have created a WebGL deep learning technology that brings the power of artificial neural networks into web apps. As a result, we can perform extremely fast and highly accurate real-time video feed analysis in a web app context.
After more than 1.5 years of R&D, we completed a highly optimized end-to-end system, freeing us from platform constraints and providing a workable computer vision solution for all kinds of devices.
We invented a hardware-agnostic computer vision solution. Now, we aim to spread it to help make existing cameras smarter.
We bring real-time video feed analysis into simple web apps.
For use cases such as object detection and emotion detection, through regular cameras.
Our web apps take advantage of high-end standardized technologies like WebGL and WebRTC, taking care to always match the requirements of modern mobile browsers.
In our demo of virtual try-on in augmented reality, the web app detects the user's face and overlays virtual glasses. To do so, we designed a neural network from scratch within our WebGL deep learning technology. Combined with our highly efficient implementation, it marks a major breakthrough in the virtual try-on (VTO) area: for the first time, every frame of a video feed can be analyzed in real time.
This demo was initially designed to run in any web browser. It was the first virtual try-on system compatible with Safari when Apple released iOS 11 on September 19, 2017. For the first time, web apps were able to communicate with the camera through ‘getUserMedia’ in the iOS environment.
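As a minimal sketch of that camera access (the wrapper name `getCameraStream` and its injectable `mediaDevices` parameter are ours, for illustration; the underlying API is the standard `navigator.mediaDevices.getUserMedia`):

```javascript
// Minimal sketch of browser camera access via getUserMedia.
// The `mediaDevices` parameter defaults to the browser's implementation,
// but can be injected (e.g. a mock when running outside a browser).
async function getCameraStream(
  constraints = { video: { facingMode: 'user' }, audio: false },
  mediaDevices = globalThis.navigator && globalThis.navigator.mediaDevices
) {
  if (!mediaDevices || typeof mediaDevices.getUserMedia !== 'function') {
    throw new Error('getUserMedia is not supported in this environment');
  }
  // Resolves to a MediaStream that can be attached to a <video> element:
  //   videoElement.srcObject = stream;
  return mediaDevices.getUserMedia(constraints);
}
```

The returned video element then becomes the input to the analysis pipeline, frame by frame.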
We run deep learning algorithms directly in the browser.
A 100% GPU workflow, highly optimized and designed for large-scale deployment.
This approach has become increasingly relevant as modern computers and phones ship with ever more GPU power for day-to-day, highly parallel computation tasks, like running sophisticated games.
By transforming neural network weights into WebGL textures and implementing common layers as fragment shaders, we can use the graphics capabilities of browsers designed for 3D games to speed up the execution of neural networks.
By designing a 100% GPU workflow, we always take advantage of the full computational power available on a device.
Feel free to contact us if you want to know more… 🙂