How to Train YOLOv5 Object Detection on Custom Data

Derrick Mwiti
5 min read

One of the most popular choices for object detection models is the YOLO family of models. YOLO stands out because it is a single-stage detector: it predicts bounding boxes and class probabilities in a single forward pass, which simplifies the detection pipeline and makes it fast. Initially developed in Darknet, a deep learning framework written in C, YOLO was later adapted to PyTorch by Ultralytics. This adaptation makes it easy to customize the model with the Ultralytics package and to fine-tune it on a custom dataset. Overall, YOLO is a highly versatile and effective tool for object detection.

This article provides a comprehensive guide to fine-tuning the YOLOv5 model with Ultralytics. We will walk through the process step by step and then explore how to make the model smaller for deployment using SparseML. By the end of this article, you will understand how to fine-tune YOLOv5 and deploy it efficiently.


Getting Started

First, install the necessary packages:

pip install wandb "sparseml[yolov5]" roboflow ultralytics deepsparse

Training YOLOv5 With Ultralytics

To train a YOLOv5 object detection model with Ultralytics, you need the data in the YOLOv5 format:

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco128  # dataset root dir
train: images/train2017  # train images (relative to 'path') 128 images
val: images/train2017  # val images (relative to 'path') 128 images
test:  # test images (optional)

# Classes (80 COCO classes)
names:
  0: person
  1: bicycle
  2: car
  ...
  77: teddy bear
  78: hair drier
  79: toothbrush

The data is passed with a YAML file that contains:

  • Path to the data containing the images and txt files with their annotations
  • Name of the classes

You can use tools like Roboflow to annotate the dataset and download it in the required format. In this case, let's download the blood cell count and detection (BCCD) dataset, which has already been annotated.

Obtain Data From Roboflow

Download the dataset using the Roboflow package:

from roboflow import Roboflow
rf = Roboflow(api_key="your_api_key")
project = rf.workspace("joseph-nelson").project("bccd")
dataset = project.version(4).download("yolov5")
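
The download call returns a dataset object, and a quick way to confirm where the files landed on disk (referenced as BCCD-4 in the rest of this article) is shown below. This is a minimal sketch that assumes the location attribute exposed by the Roboflow Python package.

# Print the folder the dataset was downloaded to (attribute assumed from the
# Roboflow package); it should contain data.yaml and the image splits.
print(dataset.location)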

Train the YOLOv5 Model

Set up experiment tracking using Weights and Biases.

import wandb

# Log training runs to the 'yolov5' project; the first run may prompt you
# to authenticate with your W&B API key (wandb.login()).
wandb.init(project='yolov5')

Train a YOLOv5 small model with the ultralytics package by passing it the path to the dataset's data.yaml file.

from ultralytics import YOLO

# Load a model
model = YOLO('yolov5s.pt')  # load a pretrained model (recommended for training)
# Train the model
model.train(data='BCCD-4/data.yaml', epochs=100, imgsz=640)

Run Inference on the Trained Model

Once training is complete, you can run inference with the trained YOLOv5 model.

from ultralytics import YOLO
model = YOLO('runs/detect/train/weights/last.pt')
results = model.predict("BCCD-4/test/images/BloodImage_00227_jpg.rf.d1790b0cdc042312d1e0af86a5c13519.jpg",save=True)
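
Beyond saving the annotated image, you can work with the predictions programmatically. The sketch below iterates over the Results objects returned by recent versions of the ultralytics package and prints each detection's class, confidence, and box coordinates.

# Iterate over the predictions returned by model.predict() above.
for result in results:
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()        # box corners in pixels
        confidence = float(box.conf[0])              # confidence score
        class_name = result.names[int(box.cls[0])]   # class index -> name
        print(f"{class_name}: {confidence:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")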

Sparsifying the YOLOv5 Model

Next, let's reduce the size of the trained YOLOv5 model using SparseML. Reducing the model's size makes it faster during deployment.

In the following example, the dense YOLOv5s model trained on the blood cell dataset above is sparsified and fine-tuned on the same dataset using SparseML.

The sparseml.yolov5.train command expects the following arguments:

  • weights: the checkpoint to start pruning from. This can be a path to a local model or a SparseZoo stub.
  • data: the path to the dataset YAML file.
  • recipe: the pruning hyperparameters. This can be a SparseZoo stub or the path to a local YAML file.

sparseml.yolov5.train \
  --weights /content/runs/detect/train/weights/last.pt \
  --data /content/BCCD-4/data.yaml \
  --cfg yolov5s.yaml \
  --recipe zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned85_quant-none

Check your experiment on Weights and Biases for the training results.

Deploy the YOLOv5 Model With DeepSparse

DeepSparse is an inference runtime offering GPU-class performance on CPUs and APIs to integrate machine learning into your application.

Sparsification is a powerful technique for optimizing models for inference, reducing the compute needed with a limited accuracy tradeoff. DeepSparse is designed to take advantage of model sparsity, so you can deploy models on commodity CPUs with the flexibility and scalability of software and the best-in-class performance of hardware accelerators, allowing you to standardize operations and reduce infrastructure costs.

Export YOLOv5 Model to ONNX

To deploy the model with DeepSparse, you need to convert the model to ONNX.

sparseml.yolov5.export_onnx \
    --weights yolov5_runs/train/exp2/weights/last.pt \
    --dynamic

Perform Object Detection

DeepSparse provides pipelines for computer vision and NLP that wrap the model with proper pre- and post-processing to run performantly on CPUs using sparse models. Run the YOLOv5 model using the exported ONNX file.

from deepsparse import Pipeline
image_path = "BCCD-4/valid/images/BloodImage_00335_jpg.rf.a6e7e0bdb343a8c39c49bae71c2e864a.jpg"
model_stub = "yolov5_runs/train/exp2/DeepSparse_Deployment/last.onnx"
images = [image_path]
class_names = {"0":"Platelets","1":"RBC","2":"WBC"}
yolo_pipeline = Pipeline.create(
    task="yolo",
    model_path=model_stub,
    class_names=class_names
)
pipeline_outputs = yolo_pipeline(images=images, iou_thres=0.6, conf_thres=0.001)
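
The pipeline returns structured detections for each input image. Below is a rough sketch for inspecting them, assuming the per-image boxes, scores, and labels fields of the DeepSparse YOLO output schema.

# Inspect detections for the first (and only) image; field names are
# assumed from the DeepSparse YOLO output schema.
for box, score, label in zip(
    pipeline_outputs.boxes[0], pipeline_outputs.scores[0], pipeline_outputs.labels[0]
):
    print(label, round(score, 3), box)  # class name, confidence, [x1, y1, x2, y2]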

Use the YOLO utils from DeepSparse to annotate the image:

from deepsparse.yolo.utils import annotate_image
from PIL import Image
import numpy as np

# Load the original image and draw the pipeline's predictions on it
image = Image.open(image_path)
sparse_annotation = annotate_image(image=image, prediction=pipeline_outputs)
PIL_image = Image.fromarray(np.uint8(sparse_annotation)).convert('RGB')
PIL_image  # displays the annotated image in a notebook
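
If you are running outside a notebook, you can also write the annotated image to disk; the filename below is just an example.

# Save the annotated image to disk (the filename is arbitrary).
PIL_image.save("BloodImage_00335_annotated.jpg")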

Final Thoughts

This article shows how easy it is to train object detection models with the ultralytics package. Furthermore, you have learned how to reduce the model's size using SparseML and deploy it with DeepSparse. Are you interested in learning more about model optimization? Check out these resources from Neural Magic.  


Whenever you're ready, there are two ways I can help you:

If you're looking to accelerate your career, I'd recommend starting with an affordable ebook:

Writing for Data Scientists: The exact path I followed to get technical work that pays between $250 and $500 from machine learning companies such as Comet, Neptune, cnvrg, Paperspace, Layer, Neural Magic, Determined, Activeloop, and many more. Get your copy.

Data Science and Machine Learning Ebook: I offer numerous free and paid data science and machine learning ebooks to help you in your data science and machine learning career. Check them out.
