Jeeliz AR is a JavaScript library that detects objects in a video feed and returns their precise position. Under the hood, our custom deep learning algorithms parse and analyze each video frame, looking for a target object.

Currently, our network can detect chairs, mugs, bicycles and cats (see the online demo). Let’s see together how to use the Jeeliz AR API in our everyday projects.

Project setup

For starters, we’ll want to add the Jeeliz AR scripts to our project. Let’s first clone the Jeeliz AR repository from GitHub. The files we’ll link in our head section are located in the ./dist/ and ./helpers/ folders. In the head section of our index.html file, let’s add:

<!-- JEEAR SCRIPT -->
<script type="text/javascript" src="./jeelizAR.js"></script>

Great! We can now access all the methods of the JeelizAR API.
We’ll also need the JeelizMediaStreamAPIHelper, located in the ./helpers/ folder of the JeelizAR repository.

<script src="./helpers/JeelizMediaStreamAPIHelper.js"></script>

Okay, we can now start experimenting with the Jeeliz AR API.

Getting the video stream

In this project, we’ll showcase the basic detection feature of the JeelizAR API. To do so, we’ll add a canvas element with the “debugJeeARCanvas” id to our body:

<canvas id="debugJeeARCanvas"></canvas>

Now, in a script tag, we’ll create a video element, using the get_videoElement() method of the JeelizMediaStreamAPIHelper.

const DOMVIDEO = JeelizMediaStreamAPIHelper.get_videoElement();

We’ll now use the get() method to get the video feed from the webcam.

JeelizMediaStreamAPIHelper.get(DOMVIDEO, init, () => {
  alert('Cannot get video bro :(');
}, {
  video: true, // mediaConstraints
  audio: false
});

The second parameter passed to the get() method is the success callback, executed once the helper successfully gets the video stream.
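The last parameter holds the media constraints, which the helper hands to the browser’s getUserMedia() call. As a variant, here is a minimal sketch (assuming the helper forwards this constraints object unchanged) that prefers the rear camera on mobile devices, usually what we want for AR:

// Hypothetical variant: prefer the environment-facing (rear) camera.
// These are standard MediaStream API constraints, assumed to be
// forwarded as-is by JeelizMediaStreamAPIHelper.get().
JeelizMediaStreamAPIHelper.get(DOMVIDEO, init, () => {
  alert('Cannot get video bro :(');
}, {
  video: {
    facingMode: { ideal: 'environment' }, // rear camera if available
    width: { ideal: 1280 },
    height: { ideal: 720 }
  },
  audio: false
});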

Initializing the JEEAR API

Now that we have our video stream, we need to pass it to the JEEAR API in order to initialize it. In the init() callback, we call the .init() method of the JEEAR API, passing it our canvas id and the video element we previously created.

function init() {
  JEEARAPI.init({
    canvasId: 'debugJeeARCanvas',
    video: DOMVIDEO,
    callbackReady: (errLabel) => {
      if (errLabel) {
        alert('An error happened bro: ' + errLabel);
      } else {
        load_neuralNet();
      }
    }
  });
}

In case of error, the callbackReady function receives an error label, errLabel. Otherwise, everything went right and we can now load the neural network that will analyze the video stream.
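As an aside, instead of an alert() we could fail more gracefully using plain DOM APIs. A minimal sketch (the handle_initError() helper below is our own, not part of the API):

// Hypothetical fallback: hide the debug canvas and show a plain
// message when the engine cannot start. No specific errLabel value
// is assumed here; we simply display whatever the callback provides.
function handle_initError(errLabel) {
  document.getElementById('debugJeeARCanvas').style.display = 'none';
  const msg = document.createElement('p');
  msg.textContent = 'Jeeliz AR could not start: ' + errLabel;
  document.body.appendChild(msg);
}

We would then call handle_initError(errLabel) instead of alert() inside callbackReady.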

Loading our neural network

The neural network will analyze the video stream it receives and detect the objects it has been trained to label. Let’s create the load_neuralNet() function, which will load the neural network file located in the ./neuralNets/ folder of the JeelizAR repository:

function load_neuralNet() {
  JEEARAPI.set_NN('./neuralNets/basic4.json', (errLabel) => {
    if (errLabel) {
      console.log('ERROR: cannot load the neural net', errLabel);
    } else {
      iterate();
    }
  });
}

If everything goes well, the neural network should now be loaded, and we only need to observe the detection state of the JEEAR API to see whether an element is detected.

Detecting elements

The debug canvas we created at the beginning of this tutorial should now display the video feedback from the JEEAR API, with the detection window moving all over the canvas. In order to use the results provided by the neural network, we’ll create the iterate() function, which will get the detection state from the JEEAR API and log it to the console:

function iterate() {
  const detectState = JEEARAPI.detect(3);
  if (detectState.label) {
    console.log(detectState.label, 'IS DETECTED YEAH !!!');
  }
  // analyze the next video frame
  window.requestAnimationFrame(iterate);
}

Along with the label, the position of the element is included in the detection state. This is what we’ll use in a future tutorial to add AR elements and animations.
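As a teaser, here is a minimal sketch of how those results could drive a simple HTML overlay. The positionScale field and its [x, y, sx, sy] layout (normalized viewport coordinates) are assumptions based on the library’s demos; check the repository README for the exact shape of the detection state:

// Hypothetical usage: move an absolutely positioned <div> over the
// detected object. The detectState.positionScale layout assumed here
// is [x, y, sx, sy] in normalized viewport coordinates; refer to the
// JeelizAR README for the actual detection state format.
const overlay = document.getElementById('arOverlay'); // our own <div>, positioned over the canvas

function place_overlay(detectState) {
  if (!detectState.label) {
    overlay.style.display = 'none';
    return;
  }
  const [x, y, sx, sy] = detectState.positionScale;
  overlay.style.display = 'block';
  overlay.style.left = (100 * (x - sx / 2)) + '%';
  overlay.style.top = (100 * (1 - y - sy / 2)) + '%'; // WebGL-style y axis points up
  overlay.style.width = (100 * sx) + '%';
  overlay.style.height = (100 * sy) + '%';
}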
Our neural nets can be trained to detect anything you want. Please get in touch here if you need a custom network for your web app.