Accuracy Evaluation
Overview
This page provides a comprehensive comparison of the accuracy of YOLOv5s models across various formats and conversion processes. It includes:
Validation metrics for the original PyTorch model, and for the INT8 and FP32 TFLite models converted with the open-source converter.
Evaluation results for the TFLite models converted using the NeuroPilot Converter tool.
Performance metrics for models evaluated on DLA devices, tested on the G700 platform.
Note
Python 3.7 is recommended when working with these models, as it offers the best compatibility with the required libraries and frameworks.
This page provides an end-to-end example specifically for the YOLOv5s model. For additional information on other models, please visit the Model Hub for more details and resources.
Accuracy Comparison
| Model Validation Type | PyTorch Model (PC) | INT8 TFLite (Source, PC) | FP32 TFLite (Source, PC) | INT8 TFLite (MTK, PC) | FP32 TFLite (MTK, PC) | INT8 DLA (MTK, Device) | FP32 DLA (MTK, Device) |
|---|---|---|---|---|---|---|---|
| P (Precision) | 0.709 | 0.723 | 0.669 | 0.659 | 0.669 | 0.633 | 0.667 |
| R (Recall) | 0.634 | 0.583 | 0.661 | 0.638 | 0.661 | 0.652 | 0.661 |
| mAP@50 | 0.713 | 0.675 | 0.712 | 0.699 | 0.699 | 0.698 | 0.712 |
| mAP@50-95 | 0.475 | 0.416 | 0.472 | 0.461 | 0.458 | 0.459 | 0.472 |
Note
Source: The TFLite model converted using an open source converter.
MTK: The TFLite model converted using the NeuroPilot Converter Tool.
PC: The model’s accuracy calculated on a PC.
Device: The model’s accuracy calculated on the G700 platform.
End-to-End Conversion Flow and Accuracy Evaluation
This section provides detailed steps and results for verifying the accuracy of YOLOv5s models in different formats and after various conversion processes.
Source Model Evaluation
Note
This evaluation is performed on a PC. Please ensure that the necessary dependencies and hardware requirements are met for successful execution.
PyTorch Model Evaluation:
Get PyTorch source model
git clone https://github.com/ultralytics/yolov5
cd yolov5
git reset --hard 485da42
pip install -r requirements.txt
Evaluate the model
python val.py --weights yolov5s.pt --data coco128.yaml --img 640
Note
Description of the parameters used in the above command:
--weights: Specifies the path to the PyTorch model weight file.
--data: Specifies the data configuration file.
--img: Specifies the input image size.
Result
| Metric | Value |
|---|---|
| P (Precision) | 0.709 |
| R (Recall) | 0.634 |
| mAP@50 (Mean Average Precision at IoU=0.50) | 0.713 |
| mAP@50-95 (Mean Average Precision at IoU=0.50:0.95) | 0.475 |
Note
Description of the metrics in the result table:
P (Precision): The precision of the model, indicating the percentage of true positive predictions among all positive predictions.
R (Recall): The recall of the model, indicating the percentage of true positive predictions among all actual positives.
mAP@50 (Mean Average Precision at IoU=0.50): The mean average precision calculated at an Intersection over Union (IoU) threshold of 0.50.
mAP@50-95 (Mean Average Precision at IoU=0.50:0.95): The mean average precision calculated over multiple IoU thresholds ranging from 0.50 to 0.95.
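As a quick illustration of the P and R metrics above (a standalone sketch, not part of the YOLOv5 tooling), precision and recall can be computed directly from raw detection counts:

```python
# Toy illustration of the P and R metrics above (not part of val.py).
def precision(tp: int, fp: int) -> float:
    # Fraction of predicted positives that are correct.
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    # Fraction of actual positives that were detected.
    return tp / (tp + fn) if (tp + fn) else 0.0

# Example: 70 true positives, 30 false positives, 40 missed objects.
p = precision(70, 30)  # 0.70
r = recall(70, 40)     # ~0.636
print(round(p, 3), round(r, 3))
```

mAP then averages the per-class average precision over the precision-recall curve at the stated IoU threshold(s).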
TFLite Model Evaluation
Note
This evaluation is performed on a PC. Please ensure that the necessary dependencies and hardware requirements are met for successful execution.
From Open-Source Evaluation
INT8 Model:
Export the model to TFLite with INT8 quantization:
python export.py --weights yolov5s.pt --include tflite --int8
Note
Description of the parameters used in the export command:
--weights: Specifies the path to the PyTorch model weight file.
--include tflite: Specifies that the model should be exported to TensorFlow Lite format.
--int8: Converts the model to INT8 quantized format; if not specified, the model is exported in FP32 format by default.
Validate Model:
python val.py --weights yolov5s-int8.tflite --data coco128.yaml --img 640
Result:
| Metric | Value |
|---|---|
| P (Precision) | 0.723 |
| R (Recall) | 0.583 |
| mAP@50 (Mean Average Precision at IoU=0.50) | 0.675 |
| mAP@50-95 (Mean Average Precision at IoU=0.50:0.95) | 0.416 |
FP32 Model:
Download the required scripts:
Before applying the patch, download and extract the necessary scripts and patches:
wget https://mediatek-aiot.s3.ap-southeast-1.amazonaws.com/aiot/download/model-zoo/scripts/model_conversion_YOLOv5s_example_20240916.zip
unzip -j model_conversion_YOLOv5s_example_20240916.zip
Export the model to TFLite with FP32 precision:
git apply export_fp32.patch
python export.py --weights yolov5s.pt --include tflite
Note
The export_fp32.patch modifies the export script to support exporting the model in FP32 (32-bit float) TFLite format instead of FP16 (16-bit float). The changes include:
Changing the output filename to indicate FP32 format.
Updating the supported types to use tf.float32 for higher precision.
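Conceptually, the change to the supported types is similar to the following (an illustrative sketch, not the literal patch contents; the exact variable names in export.py may differ):

```diff
- converter.target_spec.supported_types = [tf.float16]
+ converter.target_spec.supported_types = [tf.float32]
```

With no float16 restriction, the TFLite converter keeps weights and activations in 32-bit float, which is why the exported file is validated as an FP32 model.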
Validate Model:
python val.py --weights yolov5s-fp32.tflite --data coco128.yaml --img 640
Result:
| Metric | Value |
|---|---|
| P (Precision) | 0.669 |
| R (Recall) | 0.661 |
| mAP@50 (Mean Average Precision at IoU=0.50) | 0.712 |
| mAP@50-95 (Mean Average Precision at IoU=0.50:0.95) | 0.472 |
From NeuroPilot Converter Tool
INT8 Model:
Before you begin, ensure that the NeuroPilot Converter Tool is installed. If you haven’t installed it yet, please follow the instructions in the “Install and Verify NeuroPilot Converter Tool” section of the same guide.
Export and convert the model using the following commands:
git apply Fix_yolov5_mtk_tflite_issue.patch
python export.py --weights yolov5s.pt --img-size 640 640 --include torchscript
Prepare the calibration dataset:
Run the following command to prepare the calibration data:
python prepare_calibration_data.py
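Calibration data for post-training quantization is typically a small set of representative, preprocessed model inputs. The following is a stdlib-only sketch of the general idea; the real prepare_calibration_data.py uses actual preprocessed COCO images, and the file names and layout here are hypothetical:

```python
import random
import struct
from pathlib import Path

# Sketch: write a few fake 640x640x3 float32 inputs as raw binaries.
# Real calibration data would be preprocessed images, not random noise.
out_dir = Path("calibration_data")
out_dir.mkdir(exist_ok=True)

num_samples, size = 2, 640 * 640 * 3
for i in range(num_samples):
    values = [random.random() for _ in range(size)]  # stand-in for pixel data
    (out_dir / f"sample_{i}.bin").write_bytes(struct.pack(f"{size}f", *values))

print(sorted(p.name for p in out_dir.glob("*.bin")))
```

The quantization tool then runs these samples through the model to estimate activation ranges, which determine the INT8 scale and zero-point parameters.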
Convert the model to INT8 TFLite format:
After preparing the calibration data, convert the model to INT8 format:
python convert_to_quant_tflite.py
Note
The Fix_yolov5_mtk_tflite_issue.patch adds support for MTK TensorFlow Lite (MTK TFLite) in the YOLOv5 model export script. It includes:
Adding mtk_tflite as a supported export format.
Modifying the Detect module’s forward method to only include convolution operations.
Implementing post-processing operations for MTK TFLite.
Extending the DetectMultiBackend class to handle MTK TFLite models.
Validate the MTK INT8 TFLite model:
python val.py --weights yolov5s_int8_mtk.tflite --data coco128.yaml --img 640
Result:
| Metric | Value |
|---|---|
| P (Precision) | 0.659 |
| R (Recall) | 0.638 |
| mAP@50 (Mean Average Precision at IoU=0.50) | 0.699 |
| mAP@50-95 (Mean Average Precision at IoU=0.50:0.95) | 0.461 |
FP32 Model:
Export and convert the model:
python export.py --weights yolov5s.pt --img-size 640 640 --include torchscript
python convert_to_tflite.py
Validate the MTK FP32 TFLite model:
python val.py --weights yolov5s_mtk.tflite --data coco128.yaml --img 640
Result:
| Metric | Value |
|---|---|
| P (Precision) | 0.669 |
| R (Recall) | 0.661 |
| mAP@50 (Mean Average Precision at IoU=0.50) | 0.699 |
| mAP@50-95 (Mean Average Precision at IoU=0.50:0.95) | 0.458 |
DLA Model Evaluation
Note
This evaluation is performed on the G700 platform.
To ensure compatibility with your device, download and use NeuroPilot SDK version 6; other versions might not be fully supported.
NeuroPilot SDK tools Download:
Download NeuroPilot SDK All-In-One Bundle:
Visit the download page: NeuroPilot Downloads
Extract the Bundle:
tar zxvf neuropilot-sdk-basic-<version>.tar.gz
Setting Environment Variables:
export LD_LIBRARY_PATH=/path/to/neuropilot-sdk-basic-<version>/neuron_sdk/host/lib
INT8 DLA Model:
Convert the INT8 TFLite model to DLA format:
/path/to/neuropilot-sdk-basic-<version>/neuron_sdk/host/bin/ncc-tflite --arch=mdla3.0 yolov5s_int8_mtk.tflite
Prepare and push the files to the device:
python prepare_evaluation_dataset.py
adb shell mkdir /tmp/yolov5
adb shell mkdir /tmp/yolov5/device_outputs
adb push yolov5s_int8_mtk.dla /tmp/yolov5
adb push evaluation_dataset /tmp/yolov5
adb push run_device.sh /tmp/yolov5
Run the evaluation on the device:
adb shell
cd /tmp/yolov5
sh run_device.sh
Pull the results and validate:
adb pull /tmp/yolov5/device_outputs .
python val_int8_inference.py --weights yolov5s_int8_mtk.tflite
Note
val_int8_inference.py is a specialized script for validating INT8 quantized TFLite models on MediaTek hardware. It handles quantized inputs and outputs, integrates tightly with MediaTek's TFLite executor, and performs the necessary dequantization.
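The dequantization step follows the standard affine quantization scheme. A minimal sketch (the actual scale and zero-point come from the TFLite output tensor's quantization parameters; the values below are hypothetical):

```python
# Minimal sketch of affine dequantization as applied to INT8 TFLite outputs.
# scale and zero_point would normally be read from the output tensor's
# quantization parameters; the values used here are hypothetical.
def dequantize(q_values, scale, zero_point):
    # real_value = scale * (quantized_value - zero_point)
    return [scale * (q - zero_point) for q in q_values]

raw = [-128, 0, 127]                   # example INT8 outputs
print(dequantize(raw, 0.25, -128))     # -> [0.0, 32.0, 63.75]
```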
Result:
| Metric | Value |
|---|---|
| P (Precision) | 0.633 |
| R (Recall) | 0.652 |
| mAP@50 (Mean Average Precision at IoU=0.50) | 0.698 |
| mAP@50-95 (Mean Average Precision at IoU=0.50:0.95) | 0.459 |
FP32 DLA Model:
Setting Environment Variables:
export LD_LIBRARY_PATH=/path/to/neuropilot-sdk-basic-<version>/neuron_sdk/host/lib
Convert the FP32 TFLite model to DLA format:
/path/to/neuropilot-sdk-basic-<version>/neuron_sdk/host/bin/ncc-tflite --arch=mdla3.0 --relax-fp32 yolov5s_mtk.tflite
Prepare and push the files to the device:
python prepare_evaluation_dataset_fp32.py
adb shell mkdir /tmp/yolov5
adb shell mkdir /tmp/yolov5/device_outputs_fp32
adb push yolov5s_mtk.dla /tmp/yolov5
adb push evaluation_dataset_fp32 /tmp/yolov5
adb push run_device_for_fp32.sh /tmp/yolov5
Run the evaluation on the device:
adb shell
cd /tmp/yolov5
sh run_device_for_fp32.sh
Pull the results and validate:
adb pull /tmp/yolov5/device_outputs_fp32 .
python val_fp32_inference.py --weights yolov5s_mtk.tflite
Note
val_fp32_inference.py is a specialized script for validating FP32 TFLite models on MediaTek hardware. It integrates with MediaTek's TFLite executor and supports binary input/output processing, with a specific focus on FP32 TFLite inference.
Result:
| Metric | Value |
|---|---|
| P (Precision) | 0.667 |
| R (Recall) | 0.661 |
| mAP@50 (Mean Average Precision at IoU=0.50) | 0.712 |
| mAP@50-95 (Mean Average Precision at IoU=0.50:0.95) | 0.472 |
Troubleshooting
Resolving PyTorch Version Incompatibility
During the process of converting the model to TFLite format using the following command:
python convert_to_quant_tflite.py
You might encounter the following error:
RuntimeError: `PyTorchConverter` only supports 2.0.0 > torch >= 1.3.0. Detected an installation of version v2.4.0+cu121. Please install the supported version.
Cause: This error occurs because the installed PyTorch version is incompatible with the PyTorchConverter, which requires a torch version of at least 1.3.0 and below 2.0.0.
Solution: To resolve this issue, install a compatible version of PyTorch by running the following command:
pip3 install torch==1.9.0 torchvision==0.10.0
This ensures that the correct version of PyTorch is used for the conversion process.
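The required range can be checked before running the conversion. The following is a standalone sketch of the version check implied by the error message (the converter performs an equivalent check internally):

```python
# Standalone sketch of the version check implied by the error message:
# the converter requires 1.3.0 <= torch < 2.0.0.
def version_tuple(v: str):
    # Strip any local/build suffix such as "+cu121" before parsing.
    return tuple(int(x) for x in v.split("+")[0].split(".")[:3])

def is_supported(v: str) -> bool:
    return (1, 3, 0) <= version_tuple(v) < (2, 0, 0)

print(is_supported("2.4.0+cu121"))  # False: triggers the RuntimeError above
print(is_supported("1.9.0"))        # True: the version installed by the fix
```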