YOLOv8s Models
Overview
YOLOv8s is a variant of the YOLOv8 (You Only Look Once version 8) family of object detection models, recognized for its advancements in speed, accuracy, and ease of use. Developed by Ultralytics, YOLOv8 represents the latest iteration in the YOLO series, building upon the successes of previous versions such as YOLOv4 and YOLOv5, with a focus on modern deep learning practices and integration with popular frameworks.
Model Conversion Flow
Precondition
Note
It is recommended to use Python 3.7 when working with these models, as it offers better compatibility with the required libraries and frameworks.
Before you begin, ensure that the NeuroPilot Converter Tool is installed. If you haven’t installed it yet, please follow the instructions in the “Install and Verify NeuroPilot Converter Tool” section of the same guide.
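To confirm that the converter tool is available in your environment, you can check that it imports cleanly (this assumes the package exposes a __version__ attribute):
python3 -c "import mtk_converter; print(mtk_converter.__version__)"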
Clone the YOLOv5 repository:
The export script needed for conversion is available in the YOLOv5 repository. Clone it using the following command:
git clone https://github.com/ultralytics/yolov5.git
cd yolov5
git reset --hard 485da42
Install Python packages and dependencies:
pip3 install -r requirements.txt
pip3 install torch==1.9.0 torchvision==0.10.0
Note
The mtk_converter.PyTorchConverter only supports PyTorch versions between 1.3.0 and 2.0.0. If a newer version (for example, 2.3.1) is installed, the converter raises a runtime error, so install the compatible torch and torchvision versions listed above before converting.
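You can quickly confirm that the installed versions fall within the supported range:
python3 -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"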
Get Source Model
Download the YOLOv8s model:
Use the following wget command to download the YOLOv8s model into the YOLOv5 source code directory:
wget https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt
Export the PyTorch model to TorchScript:
Use the following command to convert the model from PyTorch format to TorchScript:
python3 export.py --weights yolov8s.pt --img-size 640 640 --include torchscript
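Optionally, you can verify the exported TorchScript module before handing it to the converter. A minimal sketch using a random input:
import torch

# Load the exported TorchScript module and switch to inference mode.
model = torch.jit.load('yolov8s.torchscript')
model.eval()

# Run one random 640x640 input through the model and inspect the output shapes.
with torch.no_grad():
    outputs = model(torch.randn(1, 3, 640, 640))

# The export may return a single tensor or a tuple of tensors; handle both cases.
if isinstance(outputs, (list, tuple)):
    for o in outputs:
        print(getattr(o, 'shape', type(o)))
else:
    print(outputs.shape)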
Converting Model for Deployment
Quant8 Conversion Process
Convert to TFLite format:
The following script demonstrates how to convert the YOLOv8s model to a quantized TFLite format:
Data Generation: A generator function creates random input data for calibration.
Model Loading: The YOLOv8s model is loaded from a TorchScript file.
Quantization: The model is configured for quantization with specified input value ranges.
Conversion: The quantized model is converted to TFLite format and saved.
python3 convert_to_tflite_quantized.py
import mtk_converter
import numpy as np

# Generate 100 batches of random input data for quantization calibration.
def data_gen():
    for i in range(100):
        yield [np.random.randn(1, 3, 640, 640).astype(np.float32)]

# Load the TorchScript model with a fixed [1, 3, 640, 640] input shape.
converter = mtk_converter.PyTorchConverter.from_script_module_file(
    'yolov8s.torchscript', [[1, 3, 640, 640]],
)

# Enable quantization, set the expected input value range, and attach the calibration data.
converter.quantize = True
converter.input_value_ranges = [(-1.0, 1.0)]
converter.calibration_data_gen = data_gen

# Convert and save the quantized TFLite model.
_ = converter.convert_to_tflite(output_file='yolov8s_quant.tflite')
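Random data is only a placeholder for calibration; quantization accuracy generally improves when the generator yields real, preprocessed images. A hedged sketch, assuming a local calibration_images/ directory (a hypothetical path) and a [-1, 1] normalization that matches the input_value_ranges configured above:
import glob
import numpy as np
from PIL import Image

def data_gen():
    # Hypothetical local calibration set; adapt the path and image count to your data.
    for path in sorted(glob.glob('calibration_images/*.jpg'))[:100]:
        img = Image.open(path).convert('RGB').resize((640, 640))
        # HWC uint8 -> CHW float32 in [-1, 1] to match input_value_ranges.
        arr = np.asarray(img, dtype=np.float32) / 127.5 - 1.0
        arr = arr.transpose(2, 0, 1)[np.newaxis, ...]  # [1, 3, 640, 640]
        yield [arr]
Assign this generator to converter.calibration_data_gen exactly as in the script above.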
Convert to DLA format:
Download the NeuroPilot SDK tools:
Download NeuroPilot SDK All-In-One Bundle:
Visit the download page: NeuroPilot Downloads
Extract the Bundle:
tar zxvf neuropilot-sdk-basic-<version>.tar.gz
Setting Environment Variables:
export LD_LIBRARY_PATH=/path/to/neuropilot-sdk-basic-<version>/neuron_sdk/host/lib
Convert the TFLite model to DLA format:
Use the NeuroPilot Converter Tool to convert your TFLite model into the DLA format. The following example shows how to convert an INT8 TFLite model to DLA format using the specified architecture (mdla3.0):
/path/to/neuropilot-sdk-basic-<version>/neuron_sdk/host/bin/ncc-tflite --arch=mdla3.0 yolov8s_quant.tflite
Note
To ensure compatibility with your device, please download and use NeuroPilot SDK version 6. Other versions might not be fully supported.
FP32 Conversion Process
Convert to TFLite format:
The following script demonstrates how to convert the YOLOv8s model to a non-quantized (FP32) TFLite format:
Data Generation: Similar to the quantization process, a generator function creates random input data for conversion.
Model Loading: The YOLOv8s model is loaded from a TorchScript file.
Conversion: The model is converted to TFLite format without quantization and saved.
python3 convert_to_tflite.py
import mtk_converter
import numpy as np

# Generator of random example inputs with the expected [1, 3, 640, 640] shape.
def data_gen():
    for i in range(100):
        yield [np.random.randn(1, 3, 640, 640).astype(np.float32)]

# Load the TorchScript model with a fixed [1, 3, 640, 640] input shape.
converter = mtk_converter.PyTorchConverter.from_script_module_file(
    'yolov8s.torchscript', [[1, 3, 640, 640]],
)

# Set the expected input value range and the example data generator.
converter.input_value_ranges = [(-1.0, 1.0)]
converter.calibration_data_gen = data_gen

# Convert and save the FP32 TFLite model (no quantization).
_ = converter.convert_to_tflite(output_file='yolov8s.tflite')
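Before converting to DLA, you can optionally sanity-check the FP32 TFLite file with the TensorFlow Lite interpreter. A minimal sketch, assuming the tensorflow package is installed on the host:
import numpy as np
import tensorflow as tf

# Load the converted FP32 model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path='yolov8s.tflite')
interpreter.allocate_tensors()

# Feed one random [1, 3, 640, 640] input and run a single inference.
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp['index'], np.random.randn(*inp['shape']).astype(np.float32))
interpreter.invoke()

# Print each output tensor's name and shape (expected: [1,84,8400] plus three feature maps).
for out in interpreter.get_output_details():
    print(out['name'], interpreter.get_tensor(out['index']).shape)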
Convert to DLA format:
Setting Environment Variables:
export LD_LIBRARY_PATH=/path/to/neuropilot-sdk-basic-<version>/neuron_sdk/host/lib
Convert the TFLite model to DLA format:
Use the NeuroPilot Converter Tool to convert your FP32 TFLite model into the DLA format. The following example shows how to convert an FP32 TFLite model to DLA format using the specified architecture (mdla3.0) and enabling relaxed FP32 operations:
/path/to/neuropilot-sdk-basic-<version>/neuron_sdk/host/bin/ncc-tflite --arch=mdla3.0 --relax-fp32 yolov8s.tflite
Model Information
Note
The models and benchmark data mentioned below have been processed using the mtk_converter.
General Information
The information in the table below is sourced from the Detection section of the Ultralytics repository.
| Property | Value |
|---|---|
| Category | Detection |
| Input Size | 640x640 |
| FLOPs (B) | 28.6 |
| #Params (M) | 11.2 |
| Training Framework | PyTorch |
| Inference Framework | TFLite |
Pre-converted Model
Deployable Model
| Model Type | Download Link | Supported Backend |
|---|---|---|
| Quant8 Model package | | CPU,GPU,ArmNN,Neuron Stable Delegate,NeuronSDK |
| Float32 Model package | | CPU,GPU,ArmNN,Neuron Stable Delegate,NeuronSDK |
Model Properties
YOLOv8s-quant8
Inputs
| Property | Value |
|---|---|
| Name | input.49 |
| Tensor | int8[1,3,640,640] |
| Identifier | 35 |
| Quantization | Linear |
| Quantization Range | -1.0039 ≤ 0.0078 * q ≤ 0.9961 |
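The quantization range above corresponds to a linear mapping real ≈ 0.0078 * q with a zero point of 0, so a preprocessed float image must be converted to int8 before inference. A minimal sketch (the scale is taken from the table; in practice, read the exact scale and zero point from the model's input details at runtime):
import numpy as np

def quantize_input(float_chw, scale=0.0078, zero_point=0):
    # float_chw: float32 array shaped [1, 3, 640, 640], values roughly in [-1, 1].
    q = np.round(float_chw / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)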
Outputs
| Name | Tensor | Identifier | Quantization | Quantization Range |
|---|---|---|---|---|
| 80 | int8[1,84,8400] | 378 | Linear | -10.1582 ≤ 2.5395 * (q + 124) ≤ 637.4246 |
| 77 | int8[1,144,80,80] | 37 | Linear | -18.2789 ≤ 0.1115 * (q - 36) ≤ 10.1426 |
| 78 | int8[1,144,40,40] | 270 | Linear | -17.3353 ≤ 0.1008 * (q - 44) ≤ 8.3653 |
| 79 | int8[1,144,20,20] | 155 | Linear | -23.8304 ≤ 0.1288 * (q - 57) ≤ 9.0169 |
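All outputs use the same linear scheme, real = scale * (q - zero_point); for the [1, 84, 8400] head, the range above implies a scale of about 2.5395 and a zero point of -124. A minimal sketch of recovering float predictions (read the exact per-tensor scale and zero point from the output details at runtime rather than hard-coding them):
import numpy as np

def dequantize_output(q, scale=2.5395, zero_point=-124):
    # q: int8 array returned by the quantized model, e.g. shaped [1, 84, 8400].
    return scale * (q.astype(np.float32) - zero_point)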
YOLOv8s-fp32
Inputs
| Property | Value |
|---|---|
| Name | input.49 |
| Tensor | float32[1,3,640,640] |
| Identifier | 145 |
Outputs
| Name | Tensor | Identifier |
|---|---|---|
| 80 | float32[1,84,8400] | 78 |
| 77 | float32[1,144,80,80] | 235 |
| 78 | float32[1,144,40,40] | 73 |
| 79 | float32[1,144,20,20] | 343 |
Benchmark Results
Note
The benchmark results shown below were measured with performance mode enabled. These numbers are for reference only, as actual performance may vary depending on the hardware and platform used.
Please note the following limitations:
The G350 does not support the Neuron Stable Delegate (APU) or the APU (MDLA), because the hardware does not include these accelerators.
Running this model on the G350 with ArmNN may crash because the model is too large for the platform to handle.
YOLOv8s-quant8
| Run model (.tflite) 10 times | CPU (Thread:8) | GPU | ARMNN(GpuAcc) | ARMNN(CpuAcc) | Neuron Stable Delegate | NeuronSDK |
|---|---|---|---|---|---|---|
| G350 | 1030.54 ms (Thread:4) | 1306.24 ms | 752.383 ms | 730.644 ms | N/A | N/A |
| G510 | 360.921 ms | 437.340 ms | 232.064 ms | 178.786 ms | 29.564 ms | 25.51 ms |
| G700 | 169.113 ms | 301.178 ms | 162.038 ms | 160.934 ms | 27.336 ms | 17.01 ms |
| G1200 | 170.408 ms | 207.360 ms | 104.634 ms | 88.477 ms | 29.590 ms | 28.04 ms |
YOLOv8s-fp32
| Run model (.tflite) 10 times | CPU (Thread:8) | GPU | ARMNN(GpuAcc) | ARMNN(CpuAcc) | Neuron Stable Delegate | NeuronSDK |
|---|---|---|---|---|---|---|
| G350 | 2191.58 ms (Thread:4) | 1273.35 ms | 1145.45 ms | N/A | N/A | N/A |
| G510 | 832.005 ms | 410.635 ms | 373.830 ms | 428.679 ms | 68.295 ms | 70.95 ms |
| G700 | 465.654 ms | 284.420 ms | 258.681 ms | 361.914 ms | 49.723 ms | 50.04 ms |
| G1200 | 415.078 ms | 190.626 ms | 164.658 ms | 209.437 ms | 55.250 ms | 55.84 ms |
Run Benchmark Tools
This section will guide you on how to execute the benchmark tool with different delegates and hardware configurations.
First, push your TFLite model to the target device:
adb push <your_tflite_model> /usr/share/label_image/
Make sure to replace <your_tflite_model> with the actual path of your TFLite model.
Next, open an ADB shell to the target device:
adb shell
After this, you can execute the following commands directly from the shell.
Execute on CPU (8 threads)
To execute the benchmark on the CPU using 8 threads, run the following command:
benchmark_model --graph=/usr/share/label_image/<your_tflite_model> --num_threads=8 --num_runs=10
Execute on GPU, with GPU delegate
To execute the benchmark on the GPU using the TensorFlow Lite GPU delegate, run the following command:
benchmark_model --graph=/usr/share/label_image/<your_tflite_model> --use_gpu=1 --allow_fp16=0 --gpu_precision_loss_allowed=0 --num_runs=10
Execute on GPU, with Arm NN delegate
To execute the benchmark on the GPU using the Arm NN delegate, use the following command:
benchmark_model --graph=/usr/share/label_image/<your_tflite_model> --external_delegate_path=/usr/lib/libarmnnDelegate.so.29 --external_delegate_options="backends:GpuAcc" --num_runs=10
Execute on CPU, with Arm NN delegate
To run the benchmark on the CPU using the Arm NN delegate, use the following command:
benchmark_model --graph=/usr/share/label_image/<your_tflite_model> --external_delegate_path=/usr/lib/libarmnnDelegate.so.29 --external_delegate_options="backends:CpuAcc" --num_runs=10
Execute on APU, with Neuron Delegate
For executing on the APU using the Neuron delegate, run the following command:
benchmark_model --stable_delegate_settings_file=/usr/share/label_image/stable_delegate_settings.json --use_nnapi=false --use_xnnpack=false --use_gpu=false --min_secs=20 --graph=/usr/share/label_image/<your_tflite_model>
Note
If you are using the G350 platform, please make the following adjustments:
For CPU-based benchmarks, change the --num_threads parameter to 4:
benchmark_model --graph=/usr/share/label_image/<your_tflite_model> --num_threads=4 --use_xnnpack=0 --num_runs=10
For all benchmarks (CPU, GPU, Arm NN), add the parameter --use_xnnpack=0 to disable the XNNPACK delegate, as shown in the example below.
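For example, a GPU benchmark on the G350 would use the same command as above with XNNPACK disabled:
benchmark_model --graph=/usr/share/label_image/<your_tflite_model> --use_gpu=1 --allow_fp16=0 --gpu_precision_loss_allowed=0 --use_xnnpack=0 --num_runs=10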
Neuron SDK
Follow these steps to benchmark your TensorFlow Lite model using the Neuron SDK with MDLA 3.0:
Transfer the Model to the Device:
Use adb to push your TFLite model to the device:
adb push <your_tflite_model> /usr/share/benchmark_dla/
Access the Device Shell:
Connect to your device’s shell:
adb shell
Navigate to the Benchmark Directory:
Change to the directory where the model is stored:
cd /usr/share/benchmark_dla/
Run the Benchmark:
Execute the benchmarking script with the following command:
python3 benchmark.py --file <your_tflite_model> --target mdla3.0 --profile --options='--relax-fp32'
Description:
The benchmark.py script runs a performance evaluation on your model using MDLA 3.0.
The --file parameter specifies the path to your TFLite model.
The --target mdla3.0 option sets the target hardware to MDLA 3.0.
The --profile flag enables profiling to provide detailed performance metrics.
The --options='--relax-fp32' option allows relaxation of floating-point precision to improve compatibility with MDLA.