Using OpenMV's robust libraries and built-in machine learning models, let's explore how to run object detection on trails and see how much they get used!
Happy first day of fall, SparkFans!
This summer, amidst the pandemic, I've tried to spend as much time as I can outdoors on trails. I’m from a small town in Colorado, where I grew up riding and racing mountain bikes on the local trails. The youth bike program that I rode with often partnered with another local organization that built and maintained trails, so as riders, we could give back and build more trails for the greater community.
Over the years, as trails in my hometown and throughout Colorado have become more popular, I’ve noticed that some local governments are funding efforts to determine what type of trail user is most prominent and on which trails. By understanding whether one trail sees more bikers than hikers, or whether a trail generally sees very little traffic at all, trail building organizations can focus their efforts to ensure that pre-existing trails fit the needs of the user. Usually, this research is conducted by paying someone to sit at a trail and manually tally what kind of users pass by.
This is a perfect place for technology to survey and compile data about trail users instead of collecting it manually. Specifically, it's a prime example of where the OpenMV Cam H7 Plus could run object recognition and tally each time a person is detected.
The basic starting point in the OpenMV IDE is configuring the sensor. You can adjust the contrast, window size, and other image settings for whatever you might be viewing.
```python
import sensor

sensor.reset()                          # initialize the camera sensor
sensor.set_pixformat(sensor.GRAYSCALE)  # person detection runs on grayscale
sensor.set_contrast(3)
sensor.set_gainceiling(16)
sensor.set_framesize(sensor.QVGA)       # 320x240 resolution
sensor.set_windowing((240, 240))        # crop to a square window
sensor.skip_frames(time=2000)           # let the sensor settle
```
The other important piece of the script is the frame rate (FPS) clock - it determines how many snapshots the camera takes and over what time frame.
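The clock pattern looks like this - a minimal sketch using OpenMV's MicroPython `time.clock()` object, which runs on the camera itself rather than on a desktop Python interpreter:

```python
import sensor, time

clock = time.clock()         # OpenMV/MicroPython frame-rate clock
while True:
    clock.tick()             # mark the start of a new frame
    img = sensor.snapshot()  # take a picture
    print(clock.fps())       # frames per second since the last tick()
```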
For this specific use case, I turned to TensorFlow to load Google's Person Detection Model to see if a person is in view. The person detection network is built into the OpenMV Cam's firmware, so it has already been trained to classify images as containing a person, not containing a person, or unsure whether there is a person.
```python
import tf

# Load the person-detection network built into the camera's firmware.
net = tf.load('person_detection')
labels = ['unsure', 'person', 'no_person']
```
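Putting the sensor setup, the FPS clock, and the model together, the main loop might look like the sketch below. It's based on OpenMV's bundled person-detection example; the exact `classify()` call varies slightly between firmware versions, so treat this as a template rather than a drop-in script:

```python
import sensor, time, tf

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((240, 240))
sensor.skip_frames(time=2000)

net = tf.load('person_detection')
labels = ['unsure', 'person', 'no_person']

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    # classify() returns one result per window; pair each confidence
    # score with its label and keep the most confident one.
    for obj in net.classify(img):
        scores = list(zip(labels, obj.output()))
        label, confidence = max(scores, key=lambda s: s[1])
        print(label, confidence, clock.fps())
```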
I found the biggest issue to be the sheer amount of data the model produces; it's overwhelming to sift through everything saved to the SD card. The camera needs to be running to determine if there's a person in the frame, but ideally it would sit in a low-power sleep mode and only start the frame rate clock once it detects a person in view. Otherwise, it returns thousands of data points.
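One way to keep the log manageable in the meantime is to only write to the SD card when the 'person' score clears a threshold. This is a sketch under my own assumptions - the file name, CSV layout, and `log_detection` helper are hypothetical, not part of the original project:

```python
THRESHOLD = 0.8  # only keep confident detections

def log_detection(timestamp, label, confidence, path='detections.csv'):
    """Append one CSV row, but only for confident person detections."""
    if label == 'person' and confidence >= THRESHOLD:
        with open(path, 'a') as f:
            f.write('%s,%s,%.2f\n' % (timestamp, label, confidence))
        return True
    return False
```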
However, since OpenMV is built on Python, it is easy to develop a visualization in an open-source notebook environment like Jupyter that queries those data points and returns only the values where a person was detected above a certain threshold (e.g., 80 percent certainty).
The other issue is that the person detection model is limited to what Google's machine learning library has already been trained to see, so tracking a person on a bicycle isn't completely accurate - a cyclist is a different object than just a person. To accurately distinguish different trail users, you'd have to retrain the model to classify each kind of trail user individually. So for the purposes of this project, it's most useful to simply determine how many people are on a trail at a time.
That said, the model as it stands is quite accurate, and is ideal for collecting data over a short time span. If you are interested in a further explanation of the ML model, would like to see the code for the project, or just want to see the visualization of how many people were detected, let me know in the comments and we can post it!
Happy fall - go out and build some machine vision projects with the OpenMV Cam H7 Plus! I can't even express how fun this module is, so happy hacking!