Facial Recognition with JavaScript

Reynaldo Ayala
3 min read · Aug 13, 2020

As technology develops, we’re stepping into a new world where facial recognition is just another piece of everyday technology. As a new software developer, I began thinking of all the great applications I could build and stumbled across an API that helps you detect and recognize human faces.

This API, known as face-api.js, can be found on GitHub, and the repository walks you through a complete setup for JavaScript.

If you just want to try out the demo and show some friends, I have also listed a demo link.

Let’s take a look at some code and see how we can set up this API in Visual Studio Code. Our index.html is going to look pretty simple: we just turn on the webcam and render a video.

The <video> element is where we call on the webcam.
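Here is a minimal sketch of what that index.html can look like. The script names (face-api.min.js and main.js), the stylesheet name (style.css), and the video dimensions are my own assumptions based on the setup described in the rest of this post, so adjust them to match your files.

<!DOCTYPE html>
<html>
  <head>
    <title>Facial Recognition with JavaScript</title>
    <!-- the face-api library and our script; defer so they run after the DOM loads -->
    <script defer src="face-api.min.js"></script>
    <script defer src="main.js"></script>
    <link rel="stylesheet" href="style.css" />
  </head>
  <body>
    <!-- autoplay + muted lets the webcam stream start playing on its own -->
    <video id="video" width="720" height="560" autoplay muted></video>
  </body>
</html>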

Moving forward, we’re now going to style it a little with some CSS so the video renders in the middle of the screen.
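The original stylesheet was shared as a screenshot, so here is a small style.css sketch that does roughly the same thing; the flexbox centering and the absolutely positioned canvas overlay are assumptions on my part.

body {
  margin: 0;
  padding: 0;
  /* center the video in the middle of the screen */
  display: flex;
  justify-content: center;
  align-items: center;
  height: 100vh;
}

canvas {
  /* draw the detection boxes directly on top of the video */
  position: absolute;
}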

Our main.js is going to have a bit more code, so I’ve made it available here for you to copy and paste.

// grab the <video> element from index.html
const video = document.getElementById("video");

// keep a rolling window of recent age predictions so the number doesn't jump around
let predictedAges = [];

// load all of the models from the build-folder/models directory, then start the webcam
Promise.all([
  faceapi.nets.tinyFaceDetector.loadFromUri("build-folder/models"),
  faceapi.nets.faceLandmark68Net.loadFromUri("build-folder/models"),
  faceapi.nets.faceRecognitionNet.loadFromUri("build-folder/models"),
  faceapi.nets.faceExpressionNet.loadFromUri("build-folder/models"),
  faceapi.nets.ageGenderNet.loadFromUri("build-folder/models")
]).then(startVideo);

function startVideo() {
  // legacy getUserMedia API; it pipes the webcam stream into the video element
  navigator.getUserMedia(
    { video: {} },
    stream => (video.srcObject = stream),
    err => console.error(err)
  );
}

video.addEventListener("playing", () => {
  // create a canvas that sits on top of the video and matches its dimensions
  const canvas = faceapi.createCanvasFromMedia(video);
  document.body.append(canvas);
  const displaySize = { width: video.width, height: video.height };
  faceapi.matchDimensions(canvas, displaySize);

  // run detection every 100 ms
  setInterval(async () => {
    const detections = await faceapi
      .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks()
      .withFaceExpressions()
      .withAgeAndGender();
    const resizedDetections = faceapi.resizeResults(detections, displaySize);

    // clear the previous frame's drawings before drawing the new ones
    canvas.getContext("2d").clearRect(0, 0, canvas.width, canvas.height);
    faceapi.draw.drawDetections(canvas, resizedDetections);
    faceapi.draw.drawFaceLandmarks(canvas, resizedDetections);
    faceapi.draw.drawFaceExpressions(canvas, resizedDetections);

    // only draw the age label when at least one face was detected
    if (resizedDetections.length > 0) {
      const age = resizedDetections[0].age;
      const interpolatedAge = interpolateAgePredictions(age);
      const bottomRight = {
        x: resizedDetections[0].detection.box.bottomRight.x - 50,
        y: resizedDetections[0].detection.box.bottomRight.y
      };
      new faceapi.draw.DrawTextField(
        [`${faceapi.utils.round(interpolatedAge, 0)} years`],
        bottomRight
      ).draw(canvas);
    }
  }, 100);
});

// average the last 30 age predictions to get a steadier estimate
function interpolateAgePredictions(age) {
  predictedAges = [age].concat(predictedAges).slice(0, 30);
  const avgPredictedAge =
    predictedAges.reduce((total, a) => total + a) / predictedAges.length;
  return avgPredictedAge;
}

To make all of this work, you’re going to need to add the face-api.min.js library and a couple of models to your project. This part is a little hard to copy and paste, so I have listed a YouTube video that helps you through the process.

When I first ran this in my web browser, it gave me an error saying it couldn’t fetch the models. I fixed this by creating a folder called build-folder, putting all the models in it, and pointing to that path in my main.js.
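For reference, this is roughly how my project was laid out once everything was in place; the exact names of the model weight files depend on what you download from the face-api.js repository.

project/
├── index.html
├── style.css
├── main.js
├── face-api.min.js
└── build-folder/
    └── models/
        ├── tiny_face_detector_model-weights_manifest.json
        ├── tiny_face_detector_model-shard1
        ├── face_landmark_68_model-weights_manifest.json
        └── ... (the rest of the downloaded model files)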

After you set up face-api.min.js and all your models, you can go to your terminal, open index.html in your browser, and see if everything is working the way it should.

With this new piece of technology, I am excited to see what you all create and wish you the best in your future endeavors.
