How to Detect Clownfish in a Video with Make Sense and YOLOv5

In this tutorial, you’ll learn how to use Make Sense and YOLOv5 to build a custom dataset, then train and test an object detection model with Machine Learning.

YOLOv5 and Make Sense

YOLO is an acronym for “you only look once”, and YOLOv5 is an open-source initiative from Ultralytics to contribute to research in the field of vision AI methods. It is now one of the most famous object detection algorithms.

Make Sense is a free-to-use online tool for labeling photos. It is used to easily create datasets for training Machine Learning algorithms.

Creating labels with Make Sense

For this tutorial, our goal is to train a Machine Learning algorithm to detect clownfish. For that, we’ll use Make Sense to manually build a custom dataset with 21 images for training and 9 for validation. Here are the steps:

  • Go to makesense.ai and click Get Started. A new page will appear for you to upload the images you would like to label. We’ll label all the training and validation images.
  • After loading the images, choose Object Detection. A new window will pop up for you to create the labels you’ll need. Create a label called “clownfish” and click Start Project.
  • Now, for each image that was uploaded, you’ll need to manually select the feature you’re interested in detecting, which for us is the clownfish, and choose the “clownfish” label.
  • Now, after all the images are labeled, you’ll need to export the labels. Go to Actions » Export Annotations, choose the option containing the YOLO format, and export. That will give you a .zip file with one .txt file for each image you’ve labeled.
  • Finally, you’ll need to separate the training data from the validation data. Create a folder called train_data with images/train, images/val, labels/train, and labels/val subfolders, and sort the images and label files into their respective folders. In the end, compress the train_data folder to a .zip file.
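The expected layout can be sketched programmatically; a minimal Python sketch, assuming the standard YOLOv5 dataset convention (the sample file name fish_001.txt and the box values are invented for illustration):

```python
import os

# Sketch of the train_data layout YOLOv5 expects: images/ and labels/,
# each split into train/ and val/ subfolders.
for kind in ("images", "labels"):
    for split in ("train", "val"):
        os.makedirs(os.path.join("train_data", kind, split), exist_ok=True)

# Each exported label file holds one line per bounding box:
#   <class-id> <x-center> <y-center> <width> <height>
# with coordinates normalized to [0, 1]. The file name and values
# below are made up for illustration.
with open(os.path.join("train_data", "labels", "train", "fish_001.txt"), "w") as f:
    f.write("0 0.512 0.430 0.210 0.180\n")
```

Every image in images/train (or images/val) should have a label file with the same stem in the matching labels/ subfolder.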

Training a model with YOLOv5 on Google Colab

Now, we’ll use the Colab Notebook version of YOLOv5 to train and test our model for detecting the clownfish. I suggest you open the notebook and save a copy of it to your own Drive. Then, follow these steps:

  • First, go to the Setup section of the notebook. This cell installs all the packages and dependencies you’ll need to run YOLOv5. Before running this setup, create a new cell below it and type the command !unzip -q ../train_data.zip -d ../, but don’t run it yet. It will be used once we have uploaded our train_data.zip file to the notebook.
  • Run the setup cell. Once the setup is done, you should see a folder called yolov5 in the Files tab of your notebook.
  • Upload the train_data.zip file you’ve created before.
  • Then, run the cell created earlier to unzip the file. A new folder called train_data should appear in the Files tab once that’s done (it might take some time).
  • Now it’s time to train the model. Go to the 3. Train section of the notebook. The third cell in this section is the one we’ll use to train our model; it’s currently configured to train on the coco128 dataset, so we’ll need to make some changes.
  • First, you’ll have to create a new .yaml file with the code below, using VS Code or any other text editor you prefer. Save it as custom_data.yaml and upload it to yolov5/data in your notebook.
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../train_data  # dataset root dir
train: images/train  # train images (relative to 'path')
val: images/val  # val images (relative to 'path')
test:  # test images (optional)

# Classes
nc: 1  # number of classes
names: ['clownfish']
  • Now, change the --data argument in the train cell to point to your custom_data.yaml (for example, !python train.py --img 640 --batch 16 --epochs 100 --data custom_data.yaml --weights yolov5s.pt). You can also change the values for --batch and --epochs as you’d like. Run this cell.
  • After training, the results will appear in yolov5/runs/train/exp, and a new folder (exp2, exp3, …) is created each time the model is trained. So each run preserves earlier results, making them easier to compare.
  • Now, go to the 1. Inference section of the notebook. That’s where you can test the model you’ve trained with an input of your choice. Upload the file you want to use as input to the notebook and pass its path to --source. Then, change the --weights path to the weights of the model you’ve just trained (runs/train/exp/weights/best.pt, if this was your first run). Run this cell.
  • The result will appear in yolov5/runs/detect/exp and you can download it.
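Since each training run gets its own exp folder, it can be handy to grab the newest one automatically. A small helper, assuming the exp/exp2/exp3 naming convention described above (latest_run is a hypothetical convenience, not part of the YOLOv5 notebook):

```python
from pathlib import Path

# YOLOv5 names its run directories exp, exp2, exp3, ...
# Pick the newest run by its numeric suffix.
def run_index(p):
    suffix = p.name[len("exp"):]
    return int(suffix) if suffix else 1  # bare "exp" is run 1

def latest_run(root="yolov5/runs/train"):
    runs = [p for p in Path(root).glob("exp*") if p.is_dir()]
    return max(runs, key=run_index, default=None)
```

Sorting by the numeric suffix (rather than folder modification time) keeps the result deterministic, e.g. exp10 correctly sorts after exp2.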

Results

The model was trained three times, with different values for batch size and epochs. The batch size is the number of samples processed before the model’s weights are updated, and the number of epochs is the number of complete passes through the training dataset during learning.
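As a rough way to compare the three runs, the number of gradient updates can be estimated from the batch size and epoch count (a back-of-the-envelope sketch; weight_updates is a hypothetical helper, not part of YOLOv5):

```python
import math

# Rough intuition for the batch/epochs trade-off: the number of weight
# updates is epochs * ceil(images / batch). This tutorial uses 21
# training images.
def weight_updates(n_images, batch, epochs):
    return epochs * math.ceil(n_images / batch)

print(weight_updates(21, 16, 50))   # 100 updates
print(weight_updates(21, 16, 100))  # 200 updates
print(weight_updates(21, 20, 150))  # 300 updates
```

So the third configuration performs roughly three times as many updates as the first, which is consistent with it detecting the clownfish more accurately.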

In this example, as the batch size and epochs increased, the model became more accurate in detecting the clownfish in the image below.

Result with --batch 16 --epochs 50
Result with --batch 16 --epochs 100
Result with --batch 20 --epochs 150

YOLOv5 also allows us to test our model with a video input. The result below was achieved by using an excerpt from this YouTube video to test our third model (batch 20, epochs 150).

References

[1] https://docs.ultralytics.com/

[2] https://github.com/SkalskiP/make-sense

[3] https://www.youtube.com/watch?v=6_HvIN6wFVo
