This guide offers developers, educators, and hobbyists a comprehensive overview of deploying advanced object detection algorithms on compact hardware platforms. If you want to explore the integration of AI with embedded systems, especially running the YOLOv10 algorithm on the UNIHIKER IoT Python single-board computer, you will find it particularly useful. You will learn how to set up the environment, optimize the model for performance, and implement practical applications, making it a valuable resource for sharpening your skills in both AI and embedded-system development.
The YOLO (You Only Look Once) series is currently one of the mainstream on-device object detection algorithm families, first proposed by Joseph Redmon and others. Over time, multiple versions have been released, with each version improving on its predecessors in performance and speed.
This article introduces YOLOv10 running on the UNIHIKER. YOLOv10, proposed by the research team from Tsinghua University, follows the design principles of the YOLO series and is dedicated to creating a real-time end-to-end high-performance object detector. YOLOv10 addresses the shortcomings of the YOLO series in post-processing and model architecture. By eliminating non-maximum suppression (NMS) operations and optimizing the model architecture, YOLOv10 significantly reduces computational overhead while achieving state-of-the-art performance. Extensive experiments on standard object detection benchmarks show that YOLOv10 significantly outperforms previous state-of-the-art models in terms of computation-accuracy trade-offs across various model scales. As shown in the figure below, YOLOv10-S / X is 1.8 times / 1.3 times faster than RT-DETR R18 / R101 with similar performance. Compared to YOLOv9-C, YOLOv10-B achieves a 46% reduction in latency with the same performance. Additionally, YOLOv10 demonstrates extremely high parameter utilization efficiency. YOLOv10-L / X has 1.8 times and 2.3 times fewer parameters, respectively, outperforming YOLOv8-L / X by 0.3 AP and 0.5 AP. YOLOv10-M, with 23% and 31% fewer parameters, achieves similar AP to YOLOv9-M / YOLO-MS.
Comparison of YOLOv10 performance parameters
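To make the NMS point concrete, the post-processing step that YOLOv10 eliminates can be sketched in a few lines of plain Python. This is a minimal illustration with made-up boxes and a typical 0.5 IoU threshold, not actual YOLOv10 code:

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy NMS: repeatedly keep the highest-scoring box and drop
    # any remaining box that overlaps it too much.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

# Two near-duplicate detections of one object plus one distinct object:
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the near-duplicate box is suppressed
```

Because YOLOv10 is trained to emit one box per object, this whole pruning pass, and its latency cost on a small CPU, disappears from the inference path.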
The UNIHIKER - IoT Python Single Board Computer is a new generation of domestically produced open-source hardware designed specifically for Python learning and usage. It adopts a single-board computer architecture and integrates an LCD color screen, Wi-Fi and Bluetooth, various common sensors, and a rich set of expansion interfaces. It also ships with a Linux operating system and a Python environment preloaded with common Python libraries, allowing students and teachers to start Python teaching in just two steps.
The UNIHIKER Board is based on the RK3308 Arm 64-bit quad-core processor clocked at 1.2GHz, equipped with 512MB of DDR3 memory and 16GB of eMMC storage, and runs the Debian 10 operating system. It also supports 2.4GHz Wi-Fi and Bluetooth 4.0 via the RTL8723DS chip. The board additionally integrates a GD32VF103C8T6 RISC-V coprocessor clocked at 108MHz, with 64KB of Flash and 32KB of SRAM.
The UNIHIKER Board has various onboard components, including a Home button, A/B buttons, a 2.8-inch touch-enabled color screen with a resolution of 240x320, a capacitive silicon microphone, a PT0603 phototransistor light sensor, a passive buzzer, and a blue LED. It also has an ICM20689 six-axis sensor combining a three-axis accelerometer and a three-axis gyroscope.
In terms of interfaces, the UNIHIKER Board offers multiple connection options. It has a USB Type-C port for connecting the CPU to a PC for programming or for powering the main board, and a USB Type-A port for connecting external USB devices. The board also features a microSD card slot for expanding storage, 3-pin I/O supporting three 10-bit PWM channels and two 12-bit ADC channels, a separate 4-pin I2C interface, and 19 independent I/O gold fingers compatible with micro:bit, supporting various communication protocols and functions.
In this article, we will use the Unihiker Board developed by DFRobot to run YOLOv10 and attempt to accelerate it by converting to ONNX format.
Deploying YOLOv10 on the Unihiker Board has significant practical and educational implications:
1. Portability and deployment flexibility: The compact size of the Unihiker Board makes it suitable for embedding into space-constrained devices, enabling portable object detection deployments. Compared to large computers, the Unihiker Board is more suited for on-site and mobile scenarios.
2. Cost-effectiveness: The Unihiker Board is relatively low-cost, suitable for budget-conscious projects and educational purposes. By running YOLOv10 on the Unihiker Board, low-cost object detection application development and experimentation can be conducted.
3. Learning and experimental platform: The Unihiker Board provides a wealth of interfaces and onboard components, making it suitable as a learning and experimental platform. By running YOLOv10 on the Unihiker Board, students and developers can gain a deep understanding of the integration of embedded systems and artificial intelligence, learning about optimization and acceleration algorithms in resource-constrained environments.
4. Technical challenges and innovation: Running YOLOv10 on the resource-limited Unihiker Board involves overcoming challenges related to computational performance and memory limitations. This provides developers with an opportunity to explore and innovate, trying various optimization techniques such as model compression, quantization, and hardware acceleration.
To successfully run YOLOv10 on the UNIHIKER Board, we will use the library provided by Ultralytics for deployment. First, we need to ensure that the Python environment on the UNIHIKER Board meets the requirements for YOLOv10, specifically upgrading to Python version 3.8 or higher. For this purpose, we recommend using MiniConda for version management, allowing easy switching and management of different Python environments.
The steps are as follows:
1. Install MiniConda: First, download and install MiniConda on the UNIHIKER Board. MiniConda is a lightweight Python distribution specifically designed to simplify the management of Python environments and package installations.
2. Create a new environment: Use MiniConda to create a new virtual environment with Python 3.8 or higher. This ensures that our deployment of YOLOv10 is not affected by the default system Python environment, avoiding compatibility issues.
3. Activate the environment: Activate the newly created virtual environment to make it the current working environment.
4. Install the Ultralytics library: In the activated virtual environment, install the YOLO library provided by Ultralytics using the pip command. This will download and install all necessary dependencies and components, enabling us to smoothly run YOLOv10.
By following these steps, we can successfully deploy YOLOv10 on the UNIHIKER Board, fully utilizing its powerful object detection capabilities. This method not only ensures efficient operation of YOLOv10 but also facilitates environment management and version control, providing a stable foundation for subsequent development and experimentation. Additionally, by using MiniConda for version management, we can more flexibly address different projects' Python environment requirements, improving development efficiency.
Below are the detailed steps:
In the terminal, enter:
python --version
The terminal displays:
Python 3.7.3
Ultralytics does not support lower versions of Python; thus, it is necessary to upgrade Python. We choose to use MiniConda for version management and upgrading.
Note: Do not use Anaconda, as it may cause errors when running on the UNIHIKER Board.
In the terminal, enter:
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-aarch64.sh
After the download is complete, the terminal displays:
Saved “Miniforge3-Linux-aarch64.sh” [74300552/74300552]
In the terminal, enter:
sudo bash Miniforge3-Linux-aarch64.sh
Follow the prompts, pressing ENTER or entering yes where required. The terminal finally displays:
Added mamba to /root/.bashrc
==> For changes to take effect, close and re-open your current shell. <==
Thank you for installing Miniforge3!
In the terminal, enter:
source ~/.bashrc
After installation, enter in the terminal:
conda
The terminal displays:
In the terminal, enter:
conda activate
The terminal prompt changes from root@unihiker: to (base)root@unihiker:, indicating that Conda has been activated successfully.
Name the environment "yolo" and choose Python version 3.11. In the terminal, enter:
conda create -n yolo python==3.11
When prompted during the process, enter 'y'.
After the environment is created, the terminal displays:
In the terminal, enter:
conda activate yolo
The terminal prompt changes from (base)root@unihiker: to (yolo)root@unihiker:, indicating that the yolo environment has been activated successfully.
In the terminal, enter:
pip install ultralytics
After completion, the terminal displays:
In the terminal, enter:
pip install pillow
If already installed, the terminal displays:
Requirement already satisfied: pillow in /root/miniforge3/envs/yolo/lib/python3.11/site-packages (10.3.0)
In the terminal, enter:
pip install opencv-python
If already installed, the terminal displays:
Requirement already satisfied: opencv-python in /root/miniforge3/envs/yolo/lib/python3.11/site-packages (4.9.0.80)
Requirement already satisfied: numpy>=1.21.2 in /root/miniforge3/envs/yolo/lib/python3.11/site-packages (from opencv-python) (1.26.4)
In the terminal, enter:
pip install huggingface
In the terminal, enter:
pip install huggingface_hub
YOLOv10 project URL: https://github.com/THU-MIG/yolov10
In the terminal, enter:
git clone https://github.com/THU-MIG/yolov10.git
Download the YOLOv10 project. Then enter the directory by typing in the terminal:
cd yolov10
Weight file: yolov10n.pt
Create a file named quick_start.py and write the following sample code:
Python
from ultralytics import YOLO
# Load a pretrained YOLO model (recommended for training)
model = YOLO("yolov10n.pt")
# Perform object detection on an image using the model
results = model("https://ultralytics.com/images/bus.jpg")
# Save results to disk
results[0].save(filename="result_bus.jpg")
In the terminal, enter:
conda activate yolo
Confirm the terminal display:
In the terminal, enter:
python quick_start.py
The terminal display shows:
Using the native YOLOv10n model for single image inference takes about 7 seconds.
The image "bus.jpg", which is used for YOLO inference, was downloaded automatically, as shown below:
Finally, the model inference results are stored as "result_bus.jpg":
When running the native YOLOv10n, the speed is quite slow, so we need to convert it to ONNX format to speed up its operation.
This section describes how to convert yolov10n.pt to ONNX format to accelerate its operation. Here are the detailed steps:
In the terminal, enter:
cd yolov10
Create a file named export_onnx.py and write the following code:
Python
from ultralytics import YOLO
# Load the official pretrained model
model = YOLO("yolov10n.pt")
# Export the model to ONNX format
model.export(format="onnx")
In the terminal, enter:
conda activate yolo
Confirm the terminal display:
In the terminal, enter:
python export_onnx.py
The terminal displays:
You can see that the converted file "yolov10n.onnx" was automatically saved.
Create a file named predict_onnx.py and write the following code:
Python
from ultralytics import YOLO
# Load the exported ONNX model
onnx_model = YOLO("yolov10n.onnx", task="detect")
# Run inference
results = onnx_model("https://ultralytics.com/images/bus.jpg")
# Save results to disk
results[0].save(filename="result_bus_onnx.jpg")
In the terminal, enter:
python predict_onnx.py
The terminal displays:
The directory generates the prediction result:
You can see that using the ONNX model for object detection takes about 6.5 seconds, which is 0.5 seconds faster than the native model.
It can be seen that the current inference speed is still slow.
If you want to increase the speed of inference, you can reduce the size of the input image. When exporting the ONNX model, we can set the parameter imgsz to specify the size of the input image. If the size of the input image is uncertain, we can also set the dynamic parameter to True. In this case, the exported ONNX model can accept any size of the image input for inference. The specific steps are as follows:
In the terminal, input:
cd yolov10
Modify export_onnx.py as follows:
Python
from ultralytics import YOLO
# Load the official pretrained model
model = YOLO("yolov10n.pt")
# Export the model, allowing dynamic input sizes
model.export(format="onnx", dynamic=True)
In the terminal, enter:
conda activate yolo
Confirm the terminal display:
In the terminal, enter:
python export_onnx.py
The terminal displays:
You can see that the file "yolov10n.onnx" was automatically saved.
Update predict_onnx.py with the following code:
Python
from ultralytics import YOLO
import cv2
# Load the exported ONNX model
onnx_model = YOLO("yolov10n.onnx", task="detect")
image = cv2.imread("bus.jpg")
print(image.shape)
# Run inference at the original (default 640) input size
print("original")
results = onnx_model("bus.jpg")
results[0].save(filename="bus_640.jpg")
print(256)
results = onnx_model("bus.jpg", imgsz=256)
results[0].save(filename="bus_256.jpg")
print(128)
results = onnx_model("bus.jpg", imgsz=128)
results[0].save(filename="bus_128.jpg")
print(64)
results = onnx_model("bus.jpg", imgsz=64)
results[0].save(filename="bus_64.jpg")
In this code, we test inference at the original input size and at input sizes of 256, 128, and 64.
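The per-size timings reported below can be measured by wrapping each inference call with a small stdlib helper such as the following (a hypothetical utility, not part of the original script):

```python
import time

def time_call(fn, *args, repeats=3, **kwargs):
    # Best-of-N wall-clock time for fn(*args, **kwargs); taking the best
    # run reduces noise from caching and background load on the board.
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args, **kwargs)
        best = min(best, time.perf_counter() - t0)
    return best
```

For example, `time_call(onnx_model, "bus.jpg", imgsz=128)` would return the best-of-three latency for the 128-pixel case.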
In the terminal, enter:
python predict_onnx.py
The terminal displays:
The original image size is 1080×810, and the maximum prediction size of the native YOLOv10n is 640, which takes about 6 seconds. The results are as follows:
When the input size is 448, the time taken is about 2.5 seconds. The results are as follows:
When the input size is 320, the time taken is about 1.2 seconds. The results are as follows:
When the input size is 256, the time taken is about 0.8 seconds.
When the input size is 128, the time taken is about 0.4 seconds.
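These timings follow a simple back-of-the-envelope model: convolution cost scales roughly with the number of input pixels, i.e. with imgsz squared, so halving the input size roughly quarters the compute. A quick sketch (an approximation that ignores fixed per-image overhead):

```python
def relative_cost(imgsz, base=640):
    # Convolutional FLOPs scale roughly with pixel count (imgsz ** 2);
    # fixed overheads such as pre/post-processing are ignored here.
    return (imgsz / base) ** 2

for size in (640, 448, 320, 256, 128):
    print(size, round(relative_cost(size), 3))
```

The measured times shrink somewhat more slowly than imgsz squared alone predicts, which is consistent with a fixed per-image overhead on the board.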
In summary:
Note: Performance is based on the bus.jpg image. If you have your own dataset, further testing may be needed.
In this test, the accuracy judgment standard is:
| Accuracy | Very Good | Good | Average | Poor | Very Poor |
| --- | --- | --- | --- | --- | --- |
| Difference in number of objects recognized vs. Ground Truth | 0 | 1 | 2 | 3 | >3 |
| Size | Time Taken | Accuracy |
| --- | --- | --- |
| 640 | 6s | Very Good |
| 448 | 2.5s | Very Good |
| 320 | 1.2s | Good |
| 256 | 0.7s | Good |
| 128 | 0.28s | Average |
| 64 | Unsupported | Unsupported |
Using official code, the comparison of image size and running time is as follows:
Note: Accuracy is tested with bus.jpg. If you have your own dataset, further testing may be needed.
| Size | yolov8n Time Taken | yolov8n Accuracy | yolov10n Time Taken | yolov10n Accuracy |
| --- | --- | --- | --- | --- |
| 640 | 22s | Very Good | 6s | Very Good |
| 448 | 3.5s | Very Good | 2.5s | Very Good |
| 320 | 2.2s | Very Good | 1.2s | Good |
| 256 | 0.8s | Very Good | 0.7s | Good |
| 128 | 0.4s | Good | 0.28s | Average |
| 64 | 0.1s | Poor | Unsupported | Unsupported |
Using the Unihiker Board, YOLOv10n can be run with simple code, and the speed is slightly faster than YOLOv8n.
Although YOLOv10 is developed based on Ultralytics, simply installing the Ultralytics library does not allow direct running of YOLOv10; it is still necessary to clone the official repository.
For image object detection on the Unihiker Board, consider an input resolution of 448: processing one image takes about 2.5 seconds, and the accuracy is very good. For video object detection, given the board's computational power, an input resolution of 128 is recommended, where one frame takes about 0.28 seconds but accuracy is only average; YOLOv8n at an input resolution of 128 is also worth considering, as its accuracy is better at that size. Again, accuracy figures are based on bus.jpg; if you have your own dataset, further testing may be needed.
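To see what these per-frame latencies mean for video, the YOLOv10n times from the table above translate into throughput as follows (a quick sketch; capture and display overhead are ignored):

```python
# Seconds per frame for YOLOv10n, taken from the timing table above.
latency = {640: 6.0, 448: 2.5, 320: 1.2, 256: 0.7, 128: 0.28}

def fps(size):
    # Upper bound on achievable frames per second at a given input size.
    return 1.0 / latency[size]

for size in sorted(latency, reverse=True):
    print(size, round(fps(size), 2))
```

Even at an input size of 128, roughly 3.6 FPS is the ceiling, which is why 128 is the practical choice for video on this board.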
We will continue to optimize the running of YOLOv10n on the Unihiker Board, stay tuned.
If you need any help or want to join more discussions, feel free to join our Discord: https://discord.gg/PVAWBMPwsk