.. include:: /keyword.rst

===================
Accuracy Evaluation
===================

Section
=======

- `Overview`_
- `Accuracy Comparison`_
- `End-to-End Conversion Flow and Accuracy Evaluation`_
- `Source Model Evaluation`_
- `PyTorch Model Evaluation`_
- `TFLite Model Evaluation`_
- `From Open-Source Evaluation`_
- `From NeuroPilot Converter Tool`_
- `DLA Model Evaluation`_

.. _Overview:

Overview
========

This page provides a comprehensive comparison of the accuracy of YOLOv5s models across various formats and conversion processes. It includes:

- Validation metrics for the original PyTorch model, as well as the Quant8 and FP32 TFLite models converted with the open-source converter.
- Evaluation results for the TFLite models converted using the NeuroPilot Converter Tool.
- Performance metrics for models evaluated on DLA devices, tested on the G700 platform.

.. note::

   For better compatibility, it is recommended to use **Python 3.7** when working with these models, as it has higher compatibility with certain libraries and frameworks.

This page provides an end-to-end example specifically for the YOLOv5s model. For additional information on other models, please visit the :doc:`Model Hub ` for more details and resources.

.. figure:: /_asset/sw_yocto_ml-guide_neuron-dev-flow_model-converter_acc-eval.png
   :align: center
   :width: 80%

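
Since the note above recommends Python 3.7, it helps to keep the evaluation work in a dedicated environment. The following is a minimal sketch, assuming conda is available (a plain ``venv`` works just as well if a Python 3.7 interpreter is already installed); the environment name ``yolov5-eval`` is only an example:

.. code-block:: bash

   # Create and activate an isolated Python 3.7 environment (example name).
   conda create -n yolov5-eval python=3.7 -y
   conda activate yolov5-eval

   # Confirm the interpreter version before installing the YOLOv5 requirements.
   python --version
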


.. _Accuracy Comparison:

Accuracy Comparison
===================

.. csv-table:: YOLOv5s Model Accuracy Comparison
   :class: longtable
   :file: /_asset/tables/accuracy_comparison.csv
   :width: 100%
   :widths: 16 12 12 12 12 12 12 12

.. note::

   - **Source**: The TFLite model converted using an open-source converter.
   - **MTK**: The TFLite model converted using the NeuroPilot Converter Tool.
   - **PC**: The model's accuracy calculated on a PC.
   - **Device**: The model's accuracy calculated on the G700 platform.

.. _End-to-End Conversion Flow and Accuracy Evaluation:

End-to-End Conversion Flow and Accuracy Evaluation
==================================================

This section provides detailed steps and results for verifying the accuracy of YOLOv5s models in different formats and after various conversion processes.

.. _Source Model Evaluation:

Source Model Evaluation
-----------------------

.. note::

   This evaluation is performed on a PC. Please ensure that the necessary dependencies and hardware requirements are met for successful execution.

.. _PyTorch Model Evaluation:

- **PyTorch Model Evaluation**:

  1. Get the PyTorch source model:

     .. code-block:: bash

        git clone http://github.com/ultralytics/yolov5
        cd yolov5
        git reset --hard 485da42
        pip install -r requirements.txt

  2. Evaluate the model:

     .. code-block:: bash

        python val.py --weights yolov5s.pt --data coco128.yaml --img 640

     .. note::

        Description of the parameters used in the above command:

        - ``--weights``: Specifies the path to the PyTorch model weight file.
        - ``--data``: Specifies the data configuration file.
        - ``--img``: Specifies the input image size.

  3. Result:

     +-------------------------------------------------------+---------+
     | Metric                                                | Value   |
     +=======================================================+=========+
     | P (Precision)                                         | 0.709   |
     +-------------------------------------------------------+---------+
     | R (Recall)                                            | 0.634   |
     +-------------------------------------------------------+---------+
     | mAP\@50 (Mean Average Precision at IoU=0.50)          | 0.713   |
     +-------------------------------------------------------+---------+
     | mAP\@50-95 (Mean Average Precision at IoU=0.50:0.95)  | 0.475   |
     +-------------------------------------------------------+---------+

     .. note::

        Description of the metrics in the result table:

        - `P (Precision)`: The precision of the model, indicating the percentage of true positive predictions among all positive predictions.
        - `R (Recall)`: The recall of the model, indicating the percentage of true positive predictions among all actual positives.
        - `mAP@50 (Mean Average Precision at IoU=0.50)`: The mean average precision calculated at an Intersection over Union (IoU) threshold of 0.50.
        - `mAP@50-95 (Mean Average Precision at IoU=0.50:0.95)`: The mean average precision calculated over multiple IoU thresholds ranging from 0.50 to 0.95.

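
Before moving on to the TFLite conversions, a quick functional check of the source weights can catch environment problems early. This is an optional sketch that runs single-image inference on a sample image bundled with the YOLOv5 repository; detections are saved under ``runs/detect/`` by default:

.. code-block:: bash

   # Optional sanity check: run inference on one bundled sample image
   # with the source PyTorch weights before converting them.
   python detect.py --weights yolov5s.pt --source data/images/bus.jpg --img 640
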

.. _TFLite Model Evaluation:

TFLite Model Evaluation
-----------------------

.. note::

   This evaluation is performed on a PC. Please ensure that the necessary dependencies and hardware requirements are met for successful execution.

.. _From Open-Source Evaluation:

From Open-Source Evaluation
^^^^^^^^^^^^^^^^^^^^^^^^^^^

- **INT8 Model**:

  1. Export the model to TFLite with INT8 quantization:

     .. code-block:: bash

        python export.py --weights yolov5s.pt --include tflite --int8

     .. note::

        Description of the parameters used in the export command:

        - ``--weights``: Specifies the path to the PyTorch model weight file.
        - ``--include tflite``: Specifies that the model should be exported to TensorFlow Lite format.
        - ``--int8``: Converts the model to INT8 quantized format; if not specified, the model is converted to FP32 format by default.

  2. Validate the model:

     .. code-block:: bash

        python val.py --weights yolov5s-int8.tflite --data coco128.yaml --img 640

  3. Result:

     +-------------------------------------------------------+---------+
     | Metric                                                | Value   |
     +=======================================================+=========+
     | P (Precision)                                         | 0.723   |
     +-------------------------------------------------------+---------+
     | R (Recall)                                            | 0.583   |
     +-------------------------------------------------------+---------+
     | mAP\@50 (Mean Average Precision at IoU=0.50)          | 0.675   |
     +-------------------------------------------------------+---------+
     | mAP\@50-95 (Mean Average Precision at IoU=0.50:0.95)  | 0.416   |
     +-------------------------------------------------------+---------+

- **FP32 Model**:

  1. Download the required scripts:

     Before applying the patch, download and extract the necessary scripts and patches:

     .. code-block:: bash

        wget https://mediatek-aiot.s3.ap-southeast-1.amazonaws.com/aiot/download/model-zoo/scripts/model_conversion_YOLOv5s_example_20240916.zip
        unzip -j model_conversion_YOLOv5s_example_20240916.zip

  2. Export the model to TFLite with FP32 precision:

     .. code-block:: bash

        git apply export_fp32.patch
        python export.py --weights yolov5s.pt --include tflite

     .. note::

        The `export_fp32.patch `_ modifies the export script to support exporting the model in FP32 (32-bit float) TFLite format instead of FP16 (16-bit float). The changes include:

        - Changing the output filename to indicate FP32 format.
        - Updating the supported types to use `tf.float32` for higher precision.

  3. Validate the model:

     .. code-block:: bash

        python val.py --weights yolov5s-fp32.tflite --data coco128.yaml --img 640

  4. Result:

     +-------------------------------------------------------+---------+
     | Metric                                                | Value   |
     +=======================================================+=========+
     | P (Precision)                                         | 0.669   |
     +-------------------------------------------------------+---------+
     | R (Recall)                                            | 0.661   |
     +-------------------------------------------------------+---------+
     | mAP\@50 (Mean Average Precision at IoU=0.50)          | 0.712   |
     +-------------------------------------------------------+---------+
     | mAP\@50-95 (Mean Average Precision at IoU=0.50:0.95)  | 0.472   |
     +-------------------------------------------------------+---------+

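
Before repeating the evaluation with the NeuroPilot-converted models, it can be useful to confirm what the open-source exports actually contain. The following is a minimal sketch, assuming the TensorFlow Python package used for the export is still installed; it only prints the input/output tensor types and quantization parameters of the exported file (swap in ``yolov5s-fp32.tflite`` to inspect the FP32 export):

.. code-block:: bash

   python - <<'EOF'
   # Print input/output tensor types and quantization parameters of the export.
   import tensorflow as tf

   interpreter = tf.lite.Interpreter(model_path="yolov5s-int8.tflite")
   interpreter.allocate_tensors()
   for detail in interpreter.get_input_details() + interpreter.get_output_details():
       print(detail["name"], detail["dtype"], detail["quantization"])
   EOF
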

.. _From NeuroPilot Converter Tool:

From NeuroPilot Converter Tool
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- **INT8 Model**:

  Before you begin, ensure that the :doc:`NeuroPilot Converter Tool ` is installed. If you haven't installed it yet, please follow the instructions in the "Install and Verify NeuroPilot Converter Tool" section of the same guide.

  1. Export and convert the model using the following commands:

     .. code-block:: bash

        git apply Fix_yolov5_mtk_tflite_issue.patch
        python export.py --weights yolov5s.pt --img-size 640 640 --include torchscript

  2. Prepare the calibration dataset:

     Run the following command to prepare the calibration data:

     .. code-block:: bash

        python prepare_calibration_data.py

  3. Convert the model to INT8 TFLite format:

     After preparing the calibration data, convert the model to INT8 format:

     .. code-block:: bash

        python convert_to_quant_tflite.py

     .. note::

        The `Fix_yolov5_mtk_tflite_issue.patch `_ adds support for MTK TensorFlow Lite (MTK TFLite) in the YOLOv5 model export script. It includes:

        - Adding `mtk_tflite` as a supported export format.
        - Modifying the `Detect` module's forward method to only include convolution operations.
        - Implementing post-processing operations for MTK TFLite.
        - Extending the `DetectMultiBackend` class to handle MTK TFLite models.

  4. Validate the MTK INT8 TFLite model:

     .. code-block:: bash

        python val.py --weights yolov5s_int8_mtk.tflite --data coco128.yaml --img 640

  5. Result:

     +-------------------------------------------------------+---------+
     | Metric                                                | Value   |
     +=======================================================+=========+
     | P (Precision)                                         | 0.659   |
     +-------------------------------------------------------+---------+
     | R (Recall)                                            | 0.638   |
     +-------------------------------------------------------+---------+
     | mAP\@50 (Mean Average Precision at IoU=0.50)          | 0.699   |
     +-------------------------------------------------------+---------+
     | mAP\@50-95 (Mean Average Precision at IoU=0.50:0.95)  | 0.461   |
     +-------------------------------------------------------+---------+

- **FP32 Model**:

  1. Export and convert the model:

     .. code-block:: bash

        python export.py --weights yolov5s.pt --img-size 640 640 --include torchscript
        python convert_to_tflite.py

  2. Validate the MTK FP32 TFLite model:

     .. code-block:: bash

        python val.py --weights yolov5s_mtk.tflite --data coco128.yaml --img 640

  3. Result:

     +-------------------------------------------------------+---------+
     | Metric                                                | Value   |
     +=======================================================+=========+
     | P (Precision)                                         | 0.669   |
     +-------------------------------------------------------+---------+
     | R (Recall)                                            | 0.661   |
     +-------------------------------------------------------+---------+
     | mAP\@50 (Mean Average Precision at IoU=0.50)          | 0.699   |
     +-------------------------------------------------------+---------+
     | mAP\@50-95 (Mean Average Precision at IoU=0.50:0.95)  | 0.458   |
     +-------------------------------------------------------+---------+

.. _DLA Model Evaluation:

DLA Model Evaluation
^^^^^^^^^^^^^^^^^^^^

.. note::

   This evaluation is performed on the **G700 platform**. To ensure compatibility with your device, please download and use **NeuroPilot SDK version 6**. Other versions might not be fully supported.

- **NeuroPilot SDK Tools Download**:

  1. Download the NeuroPilot SDK All-In-One Bundle:

     Visit the download page: `NeuroPilot Downloads `_

  2. Extract the bundle:

     .. code-block:: bash

        tar zxvf neuropilot-sdk-basic-<version>.tar.gz

  3. Set the environment variables:

     .. code-block:: bash

        export LD_LIBRARY_PATH=/path/to/neuropilot-sdk-basic-<version>/neuron_sdk/host/lib

- **INT8 DLA Model**:

  1. Convert the INT8 TFLite model to DLA format:

     .. code-block:: bash

        /path/to/neuropilot-sdk-basic-<version>/neuron_sdk/host/bin/ncc-tflite --arch=mdla3.0 yolov5s_int8_mtk.tflite

  2. Prepare and push the files to the device:

     .. code-block:: bash

        python prepare_evaluation_dataset.py
        adb shell mkdir /tmp/yolov5
        adb shell mkdir /tmp/yolov5/device_outputs
        adb push yolov5s_int8_mtk.dla /tmp/yolov5
        adb push evaluation_dataset /tmp/yolov5
        adb push run_device.sh /tmp/yolov5

  3. Run the evaluation on the device:

     .. code-block:: bash

        adb shell
        cd /tmp/yolov5
        sh run_device.sh

  4. Pull the results and validate:

     .. code-block:: bash

        adb pull /tmp/yolov5/device_outputs .
        python val_int8_inference.py --weights yolov5s_int8_mtk.tflite

     .. note::

        The `val_int8_inference.py `_ script is a specialized script for validating INT8 quantized TFLite models on MediaTek hardware, with dedicated handling for quantized inputs and outputs. It tightly integrates with MediaTek's TFLite executor and includes dequantization processes.


  5. Result:

     +-------------------------------------------------------+---------+
     | Metric                                                | Value   |
     +=======================================================+=========+
     | P (Precision)                                         | 0.633   |
     +-------------------------------------------------------+---------+
     | R (Recall)                                            | 0.652   |
     +-------------------------------------------------------+---------+
     | mAP\@50 (Mean Average Precision at IoU=0.50)          | 0.698   |
     +-------------------------------------------------------+---------+
     | mAP\@50-95 (Mean Average Precision at IoU=0.50:0.95)  | 0.459   |
     +-------------------------------------------------------+---------+

- **FP32 DLA Model**:

  1. Set the environment variables:

     .. code-block:: bash

        export LD_LIBRARY_PATH=/path/to/neuropilot-sdk-basic-<version>/neuron_sdk/host/lib

  2. Convert the FP32 TFLite model to DLA format:

     .. code-block:: bash

        /path/to/neuropilot-sdk-basic-<version>/neuron_sdk/host/bin/ncc-tflite --arch=mdla3.0 --relax-fp32 yolov5s_mtk.tflite

  3. Prepare and push the files to the device:

     .. code-block:: bash

        python prepare_evaluation_dataset_fp32.py
        adb shell mkdir /tmp/yolov5
        adb shell mkdir /tmp/yolov5/device_outputs_fp32
        adb push yolov5s_mtk.dla /tmp/yolov5
        adb push evaluation_dataset_fp32 /tmp/yolov5
        adb push run_device_for_fp32.sh /tmp/yolov5

  4. Run the evaluation on the device:

     .. code-block:: bash

        adb shell
        cd /tmp/yolov5
        sh run_device_for_fp32.sh

  5. Pull the results and validate:

     .. code-block:: bash

        adb pull /tmp/yolov5/device_outputs_fp32 .
        python val_fp32_inference.py --weights yolov5s_mtk.tflite

     .. note::

        The `val_fp32_inference.py `_ script is a specialized script for validating FP32 TFLite models on MediaTek hardware, featuring integration with MediaTek's TFLite executor and support for binary input/output processing, with a specific focus on FP32 TFLite inference.

  6. Result:

     +-------------------------------------------------------+---------+
     | Metric                                                | Value   |
     +=======================================================+=========+
     | P (Precision)                                         | 0.667   |
     +-------------------------------------------------------+---------+
     | R (Recall)                                            | 0.661   |
     +-------------------------------------------------------+---------+
     | mAP\@50 (Mean Average Precision at IoU=0.50)          | 0.712   |
     +-------------------------------------------------------+---------+
     | mAP\@50-95 (Mean Average Precision at IoU=0.50:0.95)  | 0.472   |
     +-------------------------------------------------------+---------+

Troubleshooting
===============

Resolving PyTorch Version Compatibility
---------------------------------------

During the process of converting the model to TFLite format using the following command:

.. code-block:: bash

   python convert_to_quant_tflite.py

You might encounter the following error:

.. code-block:: text

   RuntimeError: `PyTorchConverter` only supports 2.0.0 > torch >= 1.3.0. Detected an installation of version v2.4.0+cu121. Please install the supported version.

**Cause:**

This error occurs because the installed PyTorch version is incompatible with the `PyTorchConverter`. The converter requires a PyTorch version between 1.3.0 and 2.0.0.

**Solution:**

To resolve this issue, install a compatible version of PyTorch by running the following command:

.. code-block:: bash

   pip3 install torch==1.9.0 torchvision==0.10.0

This ensures that the correct version of PyTorch is used for the conversion process.

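
After reinstalling, a quick check confirms that the interpreter now picks up the downgraded packages; the versions printed should match the ones installed by the command above (possibly with a platform suffix):

.. code-block:: bash

   # Confirm the downgraded versions are the ones Python now imports.
   python3 -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"
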

Resolving `NCC-TFLite` Shared Library Error
-------------------------------------------

During the process of converting the TFLite model to DLA format using the following command:

.. code-block:: bash

   ncc-tflite --arch=mdla3.0 yolov5s_int8_mtk.tflite

You might encounter the following error:

.. code-block:: text

   ../../neuropilot-sdk-basic-6.0.5-build20240103/neuron_sdk/host/bin/ncc-tflite: error while loading shared libraries: libtinfo.so.5: cannot open shared object file: No such file or directory

**Cause:**

This error occurs because the `libtinfo.so.5` library is missing from your system.

**Solution:**

To resolve this issue, install the missing library by running the following command:

.. code-block:: bash

   sudo apt-get install libtinfo5

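
Once the package is installed, you can confirm that the missing dependency now resolves before rerunning the conversion. This is a minimal check using ``ldd``; adjust the SDK path to match your installation:

.. code-block:: bash

   # Verify that ncc-tflite's shared-library dependencies now resolve.
   ldd /path/to/neuropilot-sdk-basic-<version>/neuron_sdk/host/bin/ncc-tflite | grep libtinfo
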