Learning where you are looking (in the browser)
Updated 29 JAN 2020: Updated TFJS version to 1.5 and fixed an issue
Today, let’s talk about how to learn a complex computer vision problem right in the browser! Thanks to TensorFlow’s JavaScript library, this is easy to realize. Instead of training our own model and serving it as a finished product, we will let the user collect their own data and then train the model right there, on the client machine. Absolutely no server is necessary!
Try out the complete project here. This requires a modern browser, a webcam, and a mouse. I did not have the time to tweak this to work on tablets/phones 😨. And of course, things get much harder when the camera is not stationary.
The Idea
Let’s predict where on the website a user is looking by watching their eyes through the webcam!
In the browser, we can easily get access to the user’s webcam. Taking the whole image would be too large an input for the net, and it would have to do a lot of work before it could even find out where the eyes are. This might be fine in a model that we train offline and deploy on a server, but to be trained and used in the browser, this would be too daunting a task.
In order to help the network, we can provide it with only the part of the image around the user’s eyes. This rectangle surrounding the eyes can be located using a third party library. So the first part of the pipeline looks like this:
The JS library I use to detect and locate the face is called clmtrackr. It’s not perfect, but pretty lightweight, fast and overall great out of the box.
With only this small but relevant image as input, a simple convolutional neural network shouldn’t have much trouble learning the problem:
This blog post describes a fully working but minimal version of this idea. To see the complete thing in action with many additional features, check out my GitHub repository.
Preparation
First off, download `clmtrackr.js` from its repository. We will start with an empty HTML file that only imports jQuery, TensorFlow.js, `clmtrackr.js`, and a `main.js` that we will be working on later:
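A bare-bones version of that file could look like this (the CDN URLs and exact versions are just one working combination; any TensorFlow.js 1.5.x build should do):

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Eye tracking in the browser</title>
</head>
<body>
  <!-- The video, canvases and buttons from the next sections go here -->

  <script src="https://code.jquery.com/jquery-3.4.1.min.js"></script>
  <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.5.1/dist/tf.min.js"></script>
  <script src="clmtrackr.js"></script>
  <script src="main.js"></script>
</body>
</html>
```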
Streaming from the webcam
We need to get the user’s permission to activate the webcam and render its video stream to the page. In this tutorial, I’m skipping a lot of backwards-compatibility-related boilerplate code. We’ll simply assume that our users are surfing on the latest Chrome 😁
First, add this to your HTML (inside the body but above the script tags):
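For example (the id and the 400×300 size are my choices; the rest of the code will assume them):

```html
<video id="webcam" width="400" height="300" autoplay></video>
```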
Now let’s start with our main.js:
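Here’s a minimal sketch of how it could start, with an `onStreaming()` callback that we’ll extend as we go:

```javascript
// main.js
const video = document.getElementById('webcam');

function onStreaming(stream) {
  // Show the webcam stream inside the video element.
  video.srcObject = stream;
  // We will add more code to this function later on.
}

// Ask for permission and start streaming.
navigator.mediaDevices.getUserMedia({ video: true }).then(onStreaming);
```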
Try it out! Your browser should be asking for permissions, then stream your face live onto the page.
We can add more code to the `onStreaming()` function later on.
Finding your face
Next, let’s use clmtrackr.js to find your face in the video. First, initialize the tracker right underneath `const video = ...`:
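Two lines are enough here (this uses the default face model that ships with clmtrackr):

```javascript
const ctrack = new clm.tracker();
ctrack.init();
```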
Now in `onStreaming()`, we can let the tracker work on the video stream by adding:
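That is, a single call that points the tracker at our video element:

```javascript
ctrack.start(video);
```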
And that’s it! Your face should be detected!
… don’t believe me? Okay, let’s draw a shape around your face to prove it.
For that, we need a way to draw over the video element. Drawing in general is done via the `<canvas>` tag in HTML. So we need to create an overlaid canvas element right on top of the video. Inside the HTML, add this under the existing `<video>` element:
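For example (the absolute positioning assumes the video sits in the page’s top left corner; if yours doesn’t, adjust the offsets or put both elements inside a shared positioned wrapper):

```html
<canvas id="overlay" width="400" height="300" style="position: absolute; top: 0; left: 0;"></canvas>
```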
(Feel free to move the inline style to a dedicated stylesheet).
This adds a canvas of the same size as the video. The CSS guarantees that the two are rendered at exactly the same position.
Now, each time the browser renders a frame, we want to draw something to the canvas. Running a method at each frame is done via `requestAnimationFrame()`. Before we draw anything, we should clear the canvas of its current content. Then, finally, we can tell clmtrackr to draw straight to the canvas. Here’s the code; add it underneath `ctrack.init()`:
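A sketch of that loop (I grab the overlay canvas and its 2D context once, outside the loop):

```javascript
const overlay = document.getElementById('overlay');
const overlayCC = overlay.getContext('2d');

function trackingLoop() {
  // Schedule the next run of this function at the next browser frame.
  requestAnimationFrame(trackingLoop);

  const currentPosition = ctrack.getCurrentPosition();
  overlayCC.clearRect(0, 0, overlay.width, overlay.height);

  if (currentPosition) {
    // Draw the tracked face model directly onto the overlay canvas.
    ctrack.draw(overlay);
  }
}
```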
Now call `trackingLoop()` inside `onStreaming()`, right after `ctrack.start()`. It will re-run itself at each frame.
Refresh your browser. Your face should get a funny green mask in the video. Sometimes you have to move around a bit for it to capture your face correctly.
Cropping the eyes
Next, we need to crop a rectangle around the eyes and put it in a separate canvas.
Luckily, clmtrackr gives us not only the location of the face, but of 70 facial features. By looking at its docs, we can select the exact points we’re looking for:
Let’s define the eyes as the rectangle touching points 23, 28, 24 and 26, expanded by 5px in each direction. This rectangle should cover everything important as long as the user doesn’t tilt their head too far (or is standing on their head 🙃).
We need another canvas to capture this cropped image before we can use it. It can simply be set to 50x25 pixels. Don’t worry, the actual rectangle will resize to fit in there. A little bit of deformation is okay.
Add this new eyes-canvas to the HTML:
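For example:

```html
<canvas id="eyes" width="50" height="25"></canvas>
```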
Next, we need a function that returns the x and y coordinates as well as the width and height of the rectangle surrounding the eyes. As input, it takes the array of positions we get from clmtrackr; note that each position has an x and a y component. Add it to your JS file:
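Here’s one way to write it (which point index supplies which edge follows the clmtrackr point map above; tweak the indices if your rectangle looks off):

```javascript
function getEyesRectangle(positions) {
  // Expand the bounding points by 5px in each direction.
  const minX = positions[23][0] - 5;
  const maxX = positions[28][0] + 5;
  const minY = positions[24][1] - 5;
  const maxY = positions[26][1] + 5;

  const width = maxX - minX;
  const height = maxY - minY;

  return [minX, minY, width, height];
}
```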
So now, in each frame, we want to extract the eyes rectangle, stroke it in red on the overlay canvas, and then copy that part of the video over to the new eyes canvas. Note that the video might internally have a different resolution than the size it is displayed at, so we add some resizing factors to account for that. Replace the if-block inside `trackingLoop()`:
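A version of that block could look like this (I look up the eyes canvas inside the loop for brevity; feel free to cache it):

```javascript
if (currentPosition) {
  // Draw the face outline as before.
  ctrack.draw(overlay);

  // Get the eyes rectangle and stroke it in red on the overlay.
  const eyesRect = getEyesRectangle(currentPosition);
  overlayCC.strokeStyle = 'red';
  overlayCC.strokeRect(eyesRect[0], eyesRect[1], eyesRect[2], eyesRect[3]);

  // The video's internal resolution may differ from its displayed size,
  // so rescale the rectangle before cropping from the video.
  const resizeFactorX = video.videoWidth / video.width;
  const resizeFactorY = video.videoHeight / video.height;

  // Copy the (rescaled) eye region into the small eyes canvas.
  const eyesCanvas = document.getElementById('eyes');
  const eyesCC = eyesCanvas.getContext('2d');
  eyesCC.drawImage(
    video,
    eyesRect[0] * resizeFactorX, eyesRect[1] * resizeFactorY,
    eyesRect[2] * resizeFactorX, eyesRect[3] * resizeFactorY,
    0, 0, eyesCanvas.width, eyesCanvas.height
  );
}
```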
With this, you should see a red rectangle around your eyes and that part of the image cropped onto a second canvas. If your eyes are bigger than mine, toy around with the `getEyesRectangle()` function.
Collecting data
There are many ways to collect data. I decided to use the mouse in conjunction with the keyboard. The user moves the cursor over the page, following it with their eyes, and hits the space key whenever an example should be recorded. That way, it is easy to create a large dataset quickly.
Tracking mouse movement
To know where in the window the mouse is located at any time, we add an event listener to `document.onmousemove`. The handler also normalizes the coordinates to be within `[-1, 1]`:
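One way to do it (I keep the latest normalized position in a small `mouse` object so we can read it later when capturing examples):

```javascript
// The latest mouse position, normalized to [-1, 1] in both dimensions.
const mouse = {
  x: 0,
  y: 0,

  handleMouseMove: function(event) {
    // Map pixel coordinates to [-1, 1]:
    mouse.x = (event.clientX / $(window).width()) * 2 - 1;
    mouse.y = (event.clientY / $(window).height()) * 2 - 1;
  },
};

document.onmousemove = mouse.handleMouseMove;
```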
Capturing images
In order to capture an image from a canvas and store it as a tensor, TensorFlow.js offers the helper function `tf.browser.fromPixels()`. Let’s use it to store and then normalize an image from our eyes canvas:
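For example (the scaling from `[0, 255]` to roughly `[-1, 1]` is my choice; it just has to stay consistent with whatever the model is trained on):

```javascript
function getImage() {
  // Capture the current content of the eyes canvas as a tensor.
  return tf.tidy(function() {
    const image = tf.browser.fromPixels(document.getElementById('eyes'));
    // Add a batch dimension:
    const batchedImage = image.expandDims(0);
    // Normalize pixel values from [0, 255] to roughly [-1, 1]:
    return batchedImage.toFloat().div(tf.scalar(127)).sub(tf.scalar(1));
  });
}
```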
Note that `tf.tidy()` makes TensorFlow.js clean up our mess after we’re done.
We could simply store all examples in one big training set, but in machine learning it is important to make sure that your model is not just learning the data by heart. That’s why we should save some examples in a separate validation set. Then we can check how the model fares on unseen data and make sure it’s not overfitting to the training set. For this, I split off 20% to go into the validation set.
Here’s the code to add new data points:
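Here’s a sketch (the two subsets live as growing tensors inside a `dataset` object; `tf.keep()` stops `tf.tidy()` from disposing them):

```javascript
const dataset = {
  train: { n: 0, x: null, y: null },
  val: { n: 0, x: null, y: null },
};

function captureExample() {
  // Take the latest image of the eyes and the current mouse position,
  // and add them to either the training or the validation set.
  tf.tidy(function() {
    const image = getImage();
    const mousePos = tf.tensor1d([mouse.x, mouse.y]).expandDims(0);

    // Put roughly 20% of the examples into the validation set:
    const subset = dataset[Math.random() > 0.2 ? 'train' : 'val'];

    if (subset.x == null) {
      // First example of this subset: just keep the tensors around.
      subset.x = tf.keep(image);
      subset.y = tf.keep(mousePos);
    } else {
      // Otherwise, concatenate the new example onto the existing tensors.
      const oldX = subset.x;
      const oldY = subset.y;
      subset.x = tf.keep(oldX.concat(image, 0));
      subset.y = tf.keep(oldY.concat(mousePos, 0));
      oldX.dispose();
      oldY.dispose();
    }

    subset.n += 1;
  });
}
```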
Finally, we need to bind this function to the space key:
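For example (I use `keyup` so that holding the key down doesn’t flood the dataset with near-identical examples):

```javascript
document.addEventListener('keyup', function(event) {
  // Space key:
  if (event.key === ' ') {
    captureExample();
    event.preventDefault();
  }
});
```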
Now, each time you hit space, an image with the corresponding mouse position should be added to one of the datasets.
Training a model
Let’s create a simple convolutional neural network. TensorFlow.js provides a Keras-like API for this. The network should have a conv layer, max-pooling, and finally a dense layer with two output values (the screen coordinates). In between, I added dropout as a regularizer and `flatten` to convert the 2D data into 1D. Training is done with the Adam optimizer.
I arrived at these values after toying around on my MacBook Air. Feel free to experiment with them or to add more layers!
Here’s the code for the model:
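Here’s a sketch of such a model; treat the kernel size, filter count, dropout rate and learning rate as starting points rather than gospel:

```javascript
let currentModel = null;

function createModel() {
  const model = tf.sequential();

  // One convolution over the 25x50 RGB eye image:
  model.add(tf.layers.conv2d({
    kernelSize: 5,
    filters: 20,
    strides: 1,
    activation: 'relu',
    inputShape: [25, 50, 3],
  }));

  model.add(tf.layers.maxPooling2d({
    poolSize: [2, 2],
    strides: [2, 2],
  }));

  // Flatten to 1D and regularize with dropout:
  model.add(tf.layers.flatten());
  model.add(tf.layers.dropout({ rate: 0.2 }));

  // Two outputs: the x and y screen coordinates.
  // tanh keeps them within [-1, 1], matching the normalized mouse position.
  model.add(tf.layers.dense({
    units: 2,
    activation: 'tanh',
  }));

  model.compile({
    optimizer: tf.train.adam(0.0005),
    loss: 'meanSquaredError',
  });

  return model;
}
```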
Before training the network, we set a fixed epoch number and a variable batch size (since we might be dealing with very small datasets).
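A training function along these lines does the job (20 epochs and a batch size of roughly 10% of the training set are my choices; make sure you have captured a few examples into both subsets before calling it):

```javascript
function fitModel() {
  if (currentModel == null) {
    currentModel = createModel();
  }

  // Scale the batch size with the dataset, but keep it within sane bounds:
  let batchSize = Math.floor(dataset.train.n * 0.1);
  batchSize = Math.max(4, Math.min(batchSize, 64));

  currentModel.fit(dataset.train.x, dataset.train.y, {
    batchSize: batchSize,
    epochs: 20,
    shuffle: true,
    validationData: [dataset.val.x, dataset.val.y],
    callbacks: {
      onEpochEnd: async function(epoch, logs) {
        console.log('Epoch', epoch, 'loss:', logs.loss, 'val loss:', logs.val_loss);
      },
    },
  });
}
```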
Let’s add a button on the page to trigger this:
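For example:

```html
<button id="train">Train!</button>
```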
And JS:
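A one-line click handler with jQuery does it:

```javascript
$('#train').click(function() {
  fitModel();
});
```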
Time to predict!
Now that we can collect a dataset and have a model set up, we can start predicting where the user is looking. Let’s display this with a green sphere that moves over the screen.
First, let’s add the sphere by itself:
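A plain, absolutely positioned `<div>` with some inline styling does the trick (again, feel free to move the style to a stylesheet):

```html
<div id="target" style="position: absolute; width: 40px; height: 40px; border-radius: 50%; background-color: lightgreen;"></div>
```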
To move the sphere, we periodically pass the current eyes image to the neural network and ask: “Where are they looking?”. The model answers with two coordinates that we should move the sphere to:
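A sketch of that loop; the mapping from `[-1, 1]` back to pixel coordinates simply mirrors the normalization we applied to the mouse position:

```javascript
function moveTarget() {
  if (currentModel == null) {
    // Nothing to do until the model has been trained at least once.
    return;
  }
  tf.tidy(function() {
    const image = getImage();
    const prediction = currentModel.predict(image);
    const coords = prediction.dataSync();  // [x, y], both in [-1, 1]

    // Map the normalized coordinates back to page coordinates:
    const $target = $('#target');
    const x = (coords[0] + 1) / 2 * ($(window).width() - $target.outerWidth());
    const y = (coords[1] + 1) / 2 * ($(window).height() - $target.outerHeight());

    $target.css('left', x + 'px');
    $target.css('top', y + 'px');
  });
}

setInterval(moveTarget, 100);
```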
I set the interval to be 100 ms. If your computer is slower than mine, you might want to increase that.
Done!
Alright! We have implemented all the moving parts.
Try them out: Move your mouse over the browser window, follow it with your eyes, and spam the space key. Then hit the train button from time to time. The green sphere should start following your eyes around! It may be off at first, but with around 50 training examples, repeated training, and some luck, it should get pretty good.
Find the complete code of this tutorial here.
While this is quite cool already, there is much room for improvement! What if the user moves their head or changes their posture? It would be great to have some additional features, like the size, position and angle of the eyes rectangle. I added these and many more ideas to the code in my repository. You can try out the full version here and look at the code here. Some of the additional features I built:
- Additional features as mentioned above
- Convert images to grayscale
- CoordConv (info)
- Heatmap to check where the model succeeds or fails
- Ability to save/load the dataset
- Ability to save/load the model
- Retain weights with lowest validation loss after training
- Better UI with a short tutorial
Thanks for reading! Please leave your comments, feedback and ideas below.