This tutorial guides you through running YOLOv8 on the UNIHIKER, a next-gen open-source hardware platform designed for Python learning and usage. YOLOv8, the latest object detection model by Ultralytics, combines high accuracy and speed, making it ideal for real-time applications. However, its resource demands pose challenges on lightweight devices. The UNIHIKER, equipped with an RK3308 Arm 64-bit quad-core processor, provides a compact, cost-effective solution for deploying YOLOv8. You'll learn to set up the environment, install necessary libraries, and optimize performance by converting the model to ONNX format. This project is perfect for those seeking practical experience in embedded systems and AI.
As object detection technology continues to expand into various fields, more industrial and commercial users are turning to YOLO for real-time detection, object tracking, and other applications. In 2023, Ultralytics released YOLOv8, which has attracted significant attention. YOLOv8 offers higher detection accuracy and speed but requires substantial computational resources, potentially causing lagging issues on lightweight computing devices. While high-performance computers can meet YOLO's demands, they are often bulky and inconvenient for portability and deployment.
YOLOv8 (You Only Look Once version 8) is the latest version of the YOLO object detection model launched by Ultralytics, offering improved detection accuracy and speed over earlier versions.
YOLOv8 is widely used in fields such as intelligent security, autonomous driving, industrial inspection, and medical image analysis, significantly boosting automation and intelligence levels in these areas.
The UNIHIKER is a next-generation, domestically-produced open-source hardware platform specifically designed for Python learning and usage. It features a single-board computer architecture with an integrated LCD color screen, WiFi, Bluetooth, various common sensors, and numerous expansion interfaces. It also comes with a built-in Linux operating system and Python environment, pre-installed with commonly used Python libraries, making it easy for teachers and students to conduct Python teaching with just two simple steps.
The UNIHIKER is based on the RK3308 Arm 64-bit quad-core processor, with a main frequency of up to 1.2GHz. It is equipped with 512MB DDR3 memory and a 16GB eMMC hard drive, running the Debian 10 operating system. It supports 2.4G Wi-Fi and Bluetooth 4.0, utilizing the RTL8723DS chip. The UNIHIKER also integrates a GD32VF103C8T6 RISC-V coprocessor, with a main frequency of 108MHz, 64KB Flash, and 32KB SRAM.
The UNIHIKER includes various onboard components such as a Home button, A/B buttons, and a 2.8-inch touch-enabled color screen with a resolution of 240x320. The device also features a capacitive silicon microphone, a PT0603 phototransistor light sensor, a passive buzzer, and a blue LED. Additionally, it has an ICM20689 six-axis sensor, which includes a three-axis accelerometer and a three-axis gyroscope.
In terms of interfaces, the UNIHIKER offers multiple connectivity options. It has a USB Type-C interface for connecting the CPU to a PC for programming or powering the main board. There is also a USB Type-A interface for connecting external USB devices. Moreover, the board includes a microSD card slot for expanding storage, a 3Pin I/O supporting three 10-bit PWM and two 12-bit ADC channels, an independent 4Pin I2C interface, and 19 independent I/O golden fingers compatible with micro:bit, supporting various communication protocols and functions.
In this article, we will use the UNIHIKER, developed by DFRobot, to run YOLOv8 and attempt to accelerate it by converting it to the ONNX format.
Deploying YOLOv8 on the UNIHIKER has significant practical and educational implications:
To successfully run YOLOv8 on the UNIHIKER, we will use the library provided by Ultralytics for deployment. First, we need to ensure that the Python environment on the UNIHIKER meets the requirements for YOLOv8, specifically that the Python version is upgraded to 3.8 or higher. We recommend using MiniConda for version management, as it allows for easy switching and management of different Python environments.
The general steps are as follows:
Following these steps, we can successfully deploy YOLOv8 on the UNIHIKER, leveraging its powerful object detection capabilities. This method ensures efficient operation of YOLOv8 and facilitates environment management and version control, providing a stable foundation for subsequent development and experimentation. Additionally, using MiniConda for version management allows us to flexibly meet different project requirements for Python environments, enhancing development efficiency.
Here are the detailed steps:
In the terminal, input:
python --version
The terminal should display:
Python 3.7.3
Since Ultralytics does not support lower Python versions, we need to upgrade Python. We choose to use MiniConda for version management and upgrading.
Note: Do not use Anaconda as it may cause errors on the UNIHIKER.
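As a cross-check from within Python itself, here is a small sketch that reports whether the interpreter meets the 3.8 minimum Ultralytics requires:

```python
import sys

# Ultralytics requires Python 3.8 or higher; report whether the current
# interpreter qualifies.
ok = sys.version_info >= (3, 8)
print(f"Python {sys.version.split()[0]} - {'OK' if ok else 'too old for Ultralytics'}")
```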
In the terminal, input:
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-aarch64.sh
After downloading, the terminal should display:
Saved "Miniforge3-Linux-aarch64.sh" [74300552/74300552]
In the terminal, input:
sudo bash Miniforge3-Linux-aarch64.sh
Follow the prompts to press the ENTER key or type yes as needed. Finally, the terminal should display:
Added mamba to /root/.bashrc
==> For changes to take effect, close and re-open your current shell. <==
Thank you for installing Miniforge3!
In the terminal, input:
source ~/.bashrc
When the installation is complete, input:
conda
The terminal will display conda's usage information, confirming that the installation succeeded.
In the terminal, input:
conda activate
You should see the prompt change from root@unihiker:~# to (base) root@unihiker:~#. The (base) prefix means you have successfully activated conda.
Name the environment yolo and select Python version 3.11. In the terminal, input:
conda create -n yolo python=3.11
During the process, the terminal will display:
Input y to proceed. After setting up the environment, the terminal will display:
In the terminal, input:
conda activate yolo
You should see the prompt change from (base) root@unihiker:~# to (yolo) root@unihiker:~#, meaning the yolo environment is now active.
In the terminal, input:
pip install ultralytics
After completion, the terminal displays:
In the terminal, input:
pip install pillow
If installed, the terminal displays:
Requirement already satisfied: pillow in /root/miniforge3/envs/yolo/lib/python3.11/site-packages (10.3.0)
In the terminal, input:
pip install opencv-python
If installed, the terminal displays:
Requirement already satisfied: opencv-python in /root/miniforge3/envs/yolo/lib/python3.11/site-packages (4.9.0.80)
Requirement already satisfied: numpy>=1.21.2 in /root/miniforge3/envs/yolo/lib/python3.11/site-packages (from opencv-python) (1.26.4)
The YOLOv8 object detection model comes in five variants, all trained on the COCO dataset. The suffixes and corresponding model performance are as follows:
n: Nano (ultra-lightweight)
s: Small
m: Medium
l: Large
x: Extra Large
Each model variant offers a balance between performance and computational requirements, allowing you to choose the one that best fits your specific needs and available resources.
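As a rough guide to how the variants differ in footprint, here is a small sketch using approximate parameter counts from the Ultralytics model zoo (ballpark figures, not exact):

```python
# Approximate parameter counts in millions for each YOLOv8 variant
# (ballpark figures from the Ultralytics model zoo).
PARAMS_M = {"n": 3.2, "s": 11.2, "m": 25.9, "l": 43.7, "x": 68.2}

def fp32_weights_mb(variant: str) -> float:
    # Rough FP32 weight size: 4 bytes per parameter, params given in millions.
    return PARAMS_M[variant] * 4

for v in "nsmlx":
    print(f"yolov8{v}: ~{fp32_weights_mb(v):.0f} MB of FP32 weights")
```

On a board with 512MB of RAM, this makes clear why only the nano variant is practical.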
Due to the limited performance of the Unihiker, we will only use the native YOLOv8n for image object detection tasks. YOLOv8n is the lightest model in the YOLOv8 series, capable of performing object detection tasks with lower computational resources and faster speed while maintaining accuracy, making it very suitable for resource-constrained application scenarios.
Below are the specific steps:
In the terminal, input:
mkdir yolo
cd yolo
Create a file named quick_start.py
Sample code:
Python
from ultralytics import YOLO
# Load a pretrained YOLO model (recommended for training)
model = YOLO("yolov8n.pt")
# Perform object detection on an image using the model
results = model("https://ultralytics.com/images/bus.jpg")
# Save results to disk
results[0].save(filename="result_bus.jpg")
In the terminal, input:
conda activate yolo
You should see the terminal display:
(yolo) root@unihiker:~/yolo#
In the terminal, input:
python quick_start.py
You should see the terminal display:
Using the native YOLOv8n model for inference on a single image takes approximately 27 seconds, which is relatively slow. At this point, several new files have appeared in our directory:
You can see that Ultralytics automatically downloaded yolov8n.pt, which is the weight file for YOLOv8n. It also downloaded the image bus.jpg, which is the picture we prepared for YOLO inference, as shown below:
Finally, the model ran inference and saved the results as result_bus.jpg:
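The ~27-second figure was measured by eye; to time inference yourself, a minimal stdlib helper works (the model call in the comment is the one from quick_start.py):

```python
import time

def timed(fn, *args, **kwargs):
    # Run fn and return (result, elapsed seconds).
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - t0

# Usage with the quick_start.py model (assumed to be loaded already):
# results, seconds = timed(model, "bus.jpg")
# print(f"inference took {seconds:.1f} s")
```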
When running the native YOLOv8n, the speed is very slow, so we need to convert its format to accelerate its performance. This section describes how to convert yolov8n.pt to ONNX format to speed up execution. Below are the detailed steps:
In the terminal, input:
cd yolo
Create a file named export_onnx.py with the following code:
Python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt")  # load an official model
# Export the model
model.export(format="onnx")
In the terminal, input:
conda activate yolo
You should see the terminal display change to:
(yolo) root@unihiker:~/yolo#
In the terminal, input:
python export_onnx.py
You should see the terminal display:
As you can see, to convert formats, you need to install the ONNX library, which Ultralytics does automatically. Finally, the converted file is automatically saved as yolov8n.onnx:
Create the predict_onnx.py file and write the following code:
Python
from ultralytics import YOLO
# Load the exported ONNX model
onnx_model = YOLO("yolov8n.onnx", task="detect")
# Run inference
results = onnx_model("https://ultralytics.com/images/bus.jpg")
# Save results to disk
results[0].save(filename="result_bus_onnx.jpg")
In the terminal, input:
python predict_onnx.py
You should see the terminal display:
Prediction results are generated in the directory:
As you can see, object detection with the onnx model takes about 20 seconds, which is 7 seconds faster than the native model.
It can be seen that the current inference speed is still slow.
If you want to increase the speed of inference, you can reduce the size of the input image. When exporting the ONNX model, we can set the parameter imgsz to specify the size of the input image. If the size of the input image is uncertain, we can also set the dynamic parameter to True. In this case, the exported ONNX model can accept any size of the image input for inference. The specific steps are as follows:
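To see what a given imgsz means for the network input, here is a sketch of the aspect-preserving resize that YOLO-style preprocessing performs (a simplification, not the exact Ultralytics implementation, which also pads the shorter side to a stride multiple):

```python
def scaled_shape(w: int, h: int, imgsz: int):
    # Scale the longer side down to imgsz, keeping the aspect ratio;
    # returns the (width, height) the image is resized to before padding.
    r = imgsz / max(w, h)
    return round(w * r), round(h * r)

# bus.jpg is 810x1080; at the smaller input sizes the network sees:
for size in (640, 320, 128):
    print(size, scaled_shape(810, 1080, size))
```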
In the terminal, input:
cd yolo
Modify export_onnx.py to the following code:
Python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load an official model
# Export the model
model.export(format="onnx", dynamic=True)
In the terminal, input:
conda activate yolo
You should see the terminal display: (yolo) root@unihiker:~/yolo#
In the terminal, input:
python export_onnx.py
You should see the terminal display:
As you can see, to convert formats, you need to install the ONNX library, which Ultralytics does automatically. Finally, the converted file is automatically saved as yolov8n.onnx:
Overwrite predict_onnx.py with the following code:
Python
from ultralytics import YOLO
import cv2
# Load the exported ONNX model
onnx_model = YOLO("yolov8n.onnx", task="detect")
image = cv2.imread('bus.jpg')
print(image.shape)
# Run inference at the original export size (640)
print('original')
results = onnx_model("bus.jpg")
results[0].save(filename='bus_640.jpg')
# Repeat at progressively smaller input sizes
for size in (448, 320, 256, 128, 64):
    print(size)
    results = onnx_model("bus.jpg", imgsz=size)
    results[0].save(filename=f"bus_{size}.jpg")
This code runs inference at the original input size and at input sizes of 448, 320, 256, 128, and 64.
In the terminal, input:
python predict_onnx.py
You should see the terminal display:
The size of the original image is 1080*810, and the maximum predicted size of the original yolov8n is 640, which takes about 22 seconds. The results are as follows:
If the input size is 448, it takes about 3.5 seconds. The results are as follows:
If the input size is 320, it takes about 2.2 seconds. The results are as follows:
If the input size is 256, it takes about 0.8 seconds. The results are as follows:
When the input size is 128, it takes about 0.4 seconds.
If the input size is 64, it takes about 0.1 seconds.
The results can be summarized as follows:

Input size     Inference time (approx.)
640 (default)  22 s
448            3.5 s
320            2.2 s
256            0.8 s
128            0.4 s
64             0.1 s
Note: these timings were measured on bus.jpg; run more tests if you have your own dataset.
As you can see, you can run YOLOv8n on the UNIHIKER with just a few lines of code.
If you use the UNIHIKER for still-image object detection, consider a 448 input resolution; processing an image then takes about 3.5 seconds, with good detection quality.
If the UNIHIKER is used for fast detection or video detection, its limited computing power makes a 128 input resolution advisable; processing an image then takes about 0.4 seconds.
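For video use, the per-image time also determines the frame rate you can sustain. The helper below is a hypothetical sketch (frame_plan and its camera_fps default are illustrative, not part of any library):

```python
def frame_plan(per_frame_s: float, camera_fps: float = 30.0):
    # Given the measured per-image inference time, estimate the achievable
    # detection rate and how many camera frames to skip between detections
    # so the pipeline keeps up with the camera.
    detect_fps = 1.0 / per_frame_s
    skip = max(0, round(camera_fps / detect_fps) - 1)
    return detect_fps, skip

# At ~0.4 s per image (imgsz=128): about 2.5 detections per second,
# skipping 11 of every 12 camera frames at 30 fps.
print(frame_plan(0.4))
```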
If you need any help or want to join more discussions, feel free to join our Discord: https://discord.gg/PVAWBMPwsk