Ready to Order

FACE MASK DETECTOR

The weather is getting better and people are going out. I was one of them last week: I went to a restaurant where they ask customers to put their masks on before the waiter comes close to the table. Inspired by this situation, I've created a system for the tables.

This is how it works: whenever a customer puts their mask on, a green light on their table lights up, so it's easy for waiters to see if they're ready to order. The only thing you need to do is put your mask on.

For this mask detection project, I've used the ML5 image classifier to create two classes, “with_mask“ and “without_mask“. Then, I've uploaded my model to my Arduino Nano 33 BLE Sense to light up LEDs. I've also designed an interface that detects whether a person is wearing a mask or not. Depending on the result, a label at the bottom of the video appears to alert the user to put their mask on. I've used different emojis for each class.
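
For the LED part, here is a minimal sketch of what the Arduino side can look like, under one assumption of mine: the browser classifier's result reaches the board over serial as a single character ('1' for “with_mask“, '0' for “without_mask“). The actual protocol in my sketch may differ. On the Nano 33 BLE Sense the onboard RGB LED pins are active LOW:

// "Ready to Order" light: turn the onboard green LED on when the
// "with_mask" class is reported. The '1'/'0' protocol is an assumption.
void setup() {
  Serial.begin(9600);
  pinMode(LEDG, OUTPUT);     // onboard green LED (active LOW)
  digitalWrite(LEDG, HIGH);  // start with the light off
}

void loop() {
  if (Serial.available() > 0) {
    char c = Serial.read();
    if (c == '1') digitalWrite(LEDG, LOW);        // mask on: ready to order
    else if (c == '0') digitalWrite(LEDG, HIGH);  // mask off: still deciding
  }
}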

DEMO

MASK ON - Ready to Order

MASK OFF - Still deciding

Final Project Ideation

The recent coronavirus pandemic has pushed people around the world to new challenges. More than 40% of Americans have now gotten at least one dose of the vaccine, but there are still a lot of people out there who have not yet gotten any shots, so wearing a mask is still a crucial way of protecting ourselves from the virus. I still haven't seen any efficient face mask detection applications, which must now be in high demand for public transportation, densely populated areas, residential districts, and enterprises to ensure safety. It might not even be legal in the USA to use face detection technology in everyday public settings. Anyway, I want to create an application to detect face mask use.

I want to build this project with my Raspberry Pi, and I'm planning to use TensorFlow Lite. To train my model, I'm going to use the ML5 image classifier to create two classes, “with_mask“ and “without_mask“. Then I'll transfer the trained model to my Raspberry Pi 4.
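
I haven't built this part yet, but the inference step on the Pi could look roughly like the C++ sketch below, using the TensorFlow Lite C++ API. The model filename, input preparation, and class order are all placeholders for whatever my converted model ends up being:

// Rough sketch of TensorFlow Lite inference on the Raspberry Pi 4.
// "mask_detector.tflite" and the two-element output layout are assumptions.
#include <cstdio>
#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  auto model = tflite::FlatBufferModel::BuildFromFile("mask_detector.tflite");
  if (!model) { std::fprintf(stderr, "Failed to load model\n"); return 1; }

  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  interpreter->AllocateTensors();

  float* input = interpreter->typed_input_tensor<float>(0);
  // ... fill `input` with a normalized camera frame here ...

  interpreter->Invoke();

  float* output = interpreter->typed_output_tensor<float>(0);
  // Assumed class order: output[0] = "with_mask", output[1] = "without_mask".
  std::printf(output[0] > output[1] ? "with_mask\n" : "without_mask\n");
  return 0;
}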

WORKFLOW:

workflow_ML.png

Requirements List:

  • Arducam UNO/Mega 2560 camera module

  • Raspberry Pi 4

  • Teachable Machine

  • TensorFlow

  • Google Colab

Question: Is it possible to use the Arducam UNO/Mega 2560 with a Raspberry Pi 4, or do I need an adapter?

Running Models

MAGIC WAND

I’ve tried the Magic Wand example to get results from the model and added some code to light up LEDs; a simplified sketch of that addition is below.
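
The LED code goes in the example's output handler. This is only a sketch of the kind of change I made in <arduino_output_handler.cpp>, with illustrative pin numbers (my actual wiring differed; see DEBUGGING below):

// Simplified sketch of the LED addition to arduino_output_handler.cpp:
// light one LED per recognized gesture. Pin numbers are illustrative.
#include "Arduino.h"
#include "output_handler.h"

const int kWingPin = 2;
const int kRingPin = 3;
const int kSlopePin = 4;

void HandleOutput(tflite::ErrorReporter* error_reporter, int kind) {
  static bool is_initialized = false;
  if (!is_initialized) {
    pinMode(kWingPin, OUTPUT);
    pinMode(kRingPin, OUTPUT);
    pinMode(kSlopePin, OUTPUT);
    is_initialized = true;
  }
  // As I understand the example, gestures are reported as:
  // 0 = Wing, 1 = Ring, 2 = Slope.
  digitalWrite(kWingPin, kind == 0 ? HIGH : LOW);
  digitalWrite(kRingPin, kind == 1 ? HIGH : LOW);
  digitalWrite(kSlopePin, kind == 2 ? HIGH : LOW);
}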

DEMO

For this one I’ve used the Arduino_LSM9DS1 library version 1.1.0 and the Arduino_TensorFlowLite library version 2.1.0-ALPHA. When I opened the serial monitor, it was supposed to show “Magic Starts!”, but I didn’t see it. I found it easy to get the "Wing" gesture, but hard to get the "Ring" and "Slope" gestures. This experiment helped me understand that TensorFlow Lite can run a 20-kilobyte neural network model that recognizes gestures with an accelerometer, and that it is designed to run on systems such as microcontrollers.
Link to GitHub

Gestures:

  1. Wing

  2. Ring

  3. Slope

Screen Shot 2021-04-14 at 3.44.15 PM.png

The following video shows the demo:

DEBUGGING

  • The TensorFlow Lite include path at the beginning of the code was different between our class's GitHub page and the example in Arduino's library. I was getting the following error: <c_api.h: No such file or directory>. Then I realized they were different examples: one was magicwand_LED and the other was the original one.

Screen Shot 2021-04-14 at 4.14.48 PM.png

  • Another problem was an LED pinout mistake of mine in <arduino_output_handler.cpp>.

  • There’s an error I saw in my first trial, but never saw again.

Screen Shot 2021-04-14 at 12.17.06 PM.png

Thumb up and down classifier

Classification and Teachable Machine

I’ve used a Teachable Machine image classifier model to train my two classes, “Peace“ and “Okay“, then sent the results to my Arduino board to light green/blue LEDs. When running the P5 sketch in the web editor, it was important to remember to update the <portname> and the <poseModelUrl>, and to update the class names.
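
On the Arduino side, the receiver can be as simple as the sketch below, assuming the P5 sketch writes a single character per classification ('p' for “Peace“, 'o' for “Okay“); the characters and pin numbers here are placeholders for whatever my actual sketches used:

// Receiver sketch: green LED for "Peace", blue LED for "Okay".
// The serial protocol and pin numbers are assumptions for illustration.
const int kGreenPin = 2;
const int kBluePin = 3;

void setup() {
  Serial.begin(9600);
  pinMode(kGreenPin, OUTPUT);
  pinMode(kBluePin, OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    char label = Serial.read();
    digitalWrite(kGreenPin, label == 'p' ? HIGH : LOW);
    digitalWrite(kBluePin, label == 'o' ? HIGH : LOW);
  }
}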


DEMO:

Demo on class “Peace”

Demo on class “Okay“

I took ~1000 image samples for each of my classes, which made it easy to recognize my hand gestures after training the model. My results were quite accurate.

Exporting model - Teachable Machine

Below, you can see my output.

DEMO RESULT:

TensorFlow Lite Micro

I’ve started by running the pre-trained micro_speech inference example, which simply uses a neural network on the Arduino board to recognize simple voice commands like “yes“/“no“. For this example I’ve used the Arduino Nano 33 BLE Sense, which supports TensorFlow Lite Micro. I’ve worked with other microcontrollers, but I found the Nano 33 BLE very impressive for running trained models directly on the board, with its variety of onboard sensors: voice, motion, environmental, and light.

  • micro_speech – speech recognition using the onboard microphone

In this example, the Arduino board flashes the onboard LED either green or red. Here I'm using TensorFlow Lite Micro to recognize voice keywords; it has a simple vocabulary of “yes” and “no.”
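
That color logic lives in the example's command responder. Here is a simplified version of it, as I understand the example (the real file also turns the LED off again after a timeout). On the Nano 33 BLE Sense the RGB LED pins are active LOW:

// Simplified version of arduino_command_responder.cpp from micro_speech:
// green for "yes", red for "no".
#include "Arduino.h"
#include "command_responder.h"

void RespondToCommand(tflite::ErrorReporter* error_reporter,
                      int32_t current_time, const char* found_command,
                      uint8_t score, bool is_new_command) {
  static bool is_initialized = false;
  if (!is_initialized) {
    pinMode(LEDR, OUTPUT);
    pinMode(LEDG, OUTPUT);
    pinMode(LEDB, OUTPUT);
    is_initialized = true;
  }
  if (is_new_command) {
    error_reporter->Report("Heard %s (%d)", found_command, score);
    digitalWrite(LEDG, found_command[0] == 'y' ? LOW : HIGH);  // "yes" -> green
    digitalWrite(LEDR, found_command[0] == 'n' ? LOW : HIGH);  // "no" -> red
  }
}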

Demo:

Next, I’ve tried to capture sensor data with my microcontroller. Basically, I’ve used ML to enable the Arduino board to recognize gestures: I’ve captured motion data from the Arduino Nano 33 BLE Sense board, imported it into TensorFlow to train a model, and deployed the resulting classifier onto the board.
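
The capture step streams raw IMU readings over serial as CSV so they can be saved and fed to the Colab training notebook. Here is a condensed sketch of that step using the Arduino_LSM9DS1 API (the actual capture sketch I used also waits for a motion threshold before recording):

// Stream accelerometer + gyroscope samples as CSV for training.
#include <Arduino_LSM9DS1.h>

const int kSamplesPerGesture = 119;  // ~1 second at the IMU's 119 Hz rate

void setup() {
  Serial.begin(9600);
  while (!Serial);
  if (!IMU.begin()) {
    Serial.println("Failed to initialize IMU!");
    while (1);
  }
  Serial.println("aX,aY,aZ,gX,gY,gZ");  // CSV header for the training script
}

void loop() {
  float aX, aY, aZ, gX, gY, gZ;
  int samples = 0;
  while (samples < kSamplesPerGesture) {
    if (IMU.accelerationAvailable() && IMU.gyroscopeAvailable()) {
      IMU.readAcceleration(aX, aY, aZ);
      IMU.readGyroscope(gX, gY, gZ);
      Serial.print(aX); Serial.print(',');
      Serial.print(aY); Serial.print(',');
      Serial.print(aZ); Serial.print(',');
      Serial.print(gX); Serial.print(',');
      Serial.print(gY); Serial.print(',');
      Serial.println(gZ);
      samples++;
    }
  }
  Serial.println();  // blank line separates one gesture recording from the next
  delay(1000);       // brief pause before the next capture window
}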

  • magic_wand – gesture recognition using the onboard IMU

Demo:

Sensor data: flex and punch gesture movements

TinyML_epochs

TinyML gesture model_google colab