Install NeuroPilot Packages

This page shows you how to install the firmware, tools, and libraries for the machine learning hardware on Genio 510, Genio 700, and Genio 1200.

Note

Genio 350 does not support these NeuroPilot packages.

Important

Make sure you’ve added the Genio package PPA to your board before continuing.

NeuroPilot Hardware Packages

To enable the machine learning accelerator on Genio 510, Genio 700, and Genio 1200, follow the instructions below. There are no machine learning packages for Genio 350.

First, install the platform-dependent firmware package:

# Genio 510 only
sudo apt install mediatek-apusys-firmware-genio510

# Genio 700 only
sudo apt install mediatek-apusys-firmware-genio700

# Genio 1200 only
sudo apt install mediatek-apusys-firmware-genio1200

Important

Install only the mediatek-apusys-firmware-<platform> package that matches your platform. Installing the firmware package for a different platform breaks the video codec driver.
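If you provision boards automatically, you can derive the correct firmware package name from the board’s device-tree model string (exposed at /proc/device-tree/model on Arm Linux). The sketch below is illustrative only: the exact model strings vary by board and image, so the substrings it matches against are assumptions to verify on your hardware.

```python
# Sketch: pick the matching apusys firmware package from a device-tree
# model string. The substrings checked ("genio510" etc. after normalizing
# case and spaces) are assumptions -- confirm against your board's actual
# /proc/device-tree/model contents before relying on this.

def firmware_package(model: str) -> str:
    """Map a device-tree model string to the apusys firmware package name."""
    normalized = model.lower().replace(" ", "")
    for platform in ("genio510", "genio700", "genio1200"):
        if platform in normalized:
            return f"mediatek-apusys-firmware-{platform}"
    raise ValueError(f"no NeuroPilot firmware package for: {model!r}")

# On the board (path is standard on Arm Linux, absent elsewhere):
# with open("/proc/device-tree/model") as f:
#     print(firmware_package(f.read()))
```

Feeding the result to apt keeps the “one firmware package per platform” rule from the admonition above enforced by construction.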

Then install the Neuron runtime packages. These packages work on Genio 510, Genio 700 and Genio 1200:

# for Genio 510, Genio 700 and Genio 1200
sudo apt install mediatek-libneuron mediatek-neuron-utils mediatek-libneuron-dev
sudo reboot
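Before or after the reboot, you may want to confirm that all three runtime packages really landed. A minimal sketch, assuming only the standard dpkg -l output format (installed packages carry the ii status, and package names may include an :arch suffix):

```python
import subprocess

# The three Neuron runtime packages installed above.
NEURON_PACKAGES = (
    "mediatek-libneuron",
    "mediatek-neuron-utils",
    "mediatek-libneuron-dev",
)

def missing_packages(dpkg_output: str, wanted=NEURON_PACKAGES):
    """Return the packages from `wanted` not marked installed ('ii') in dpkg -l output."""
    installed = set()
    for line in dpkg_output.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "ii":
            # dpkg may append the architecture, e.g. "mediatek-libneuron:arm64".
            installed.add(fields[1].split(":")[0])
    return [p for p in wanted if p not in installed]

# On the board:
# out = subprocess.run(["dpkg", "-l"], capture_output=True, text=True).stdout
# print(missing_packages(out) or "all Neuron runtime packages installed")
```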

After the system reboots, check that the machine learning accelerator is up and running:

sudo vpu5_test -a ks -l 10
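In an automated setup (for example a provisioning pipeline), you might capture this command’s output and check it programmatically. The sketch below assumes only that the tool prints PASS on success, as noted below; it makes no assumption about the rest of the output format.

```python
import subprocess

def accelerator_ok(output: str) -> bool:
    """Return True if the vpu5_test output reports PASS and no FAIL."""
    # Assumption: success is indicated by the literal word PASS in the
    # output, and any FAIL line indicates a problem.
    return "PASS" in output and "FAIL" not in output

# On the board:
# result = subprocess.run(["sudo", "vpu5_test", "-a", "ks", "-l", "10"],
#                         capture_output=True, text=True)
# print("accelerator OK" if accelerator_ok(result.stdout) else "check failed")
```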

The test result should be PASS. To try a simple example, run the bundled benchmark program:

# This is a workaround for benchmark_dla packages
sudo mkdir -p /usr/share/benchmark_dla
sudo cp /usr/share/neuropilot/benchmark_dla/* /usr/share/benchmark_dla/

# Install required packages
sudo apt install python3-pip
sudo pip3 install numpy

# Run the example program:
sudo python3 /usr/share/benchmark_dla/benchmark.py --auto

The result should look like this:

root@mtk-genio:/usr/share/benchmark_dla# python3 benchmark.py --auto
2023-07-31 07:04:19,029 [INFO] /usr/share/benchmark_dla/ssd_mobilenet_v1_coco_quantized.tflite, mdla3.0, avg inference time: 2.53
/usr/share/benchmark_dla/ssd_mobilenet_v1_coco_quantized.tflite, mdla3.0, avg inference time: 2.53
2023-07-31 07:04:24,499 [INFO] /usr/share/benchmark_dla/ssd_mobilenet_v1_coco_quantized.tflite, vpu, avg inference time: 46.14
/usr/share/benchmark_dla/ssd_mobilenet_v1_coco_quantized.tflite, vpu, avg inference time: 46.14
2023-07-31 07:04:26,113 [INFO] /usr/share/benchmark_dla/ResNet50V2_224_1.0_quant.tflite, mdla3.0, avg inference time: 6.04
/usr/share/benchmark_dla/ResNet50V2_224_1.0_quant.tflite, mdla3.0, avg inference time: 6.04
2023-07-31 07:04:37,041 [INFO] /usr/share/benchmark_dla/ResNet50V2_224_1.0_quant.tflite, vpu, avg inference time: 91.79
/usr/share/benchmark_dla/ResNet50V2_224_1.0_quant.tflite, vpu, avg inference time: 91.79
2023-07-31 07:04:37,701 [INFO] /usr/share/benchmark_dla/mobilenet_v2_1.0_224_quant.tflite, mdla3.0, avg inference time: 1.04
/usr/share/benchmark_dla/mobilenet_v2_1.0_224_quant.tflite, mdla3.0, avg inference time: 1.04
2023-07-31 07:04:40,504 [INFO] /usr/share/benchmark_dla/mobilenet_v2_1.0_224_quant.tflite, vpu, avg inference time: 22.83
/usr/share/benchmark_dla/mobilenet_v2_1.0_224_quant.tflite, vpu, avg inference time: 22.83
2023-07-31 07:04:42,069 [INFO] /usr/share/benchmark_dla/inception_v3_quant.tflite, mdla3.0, avg inference time: 6.61
/usr/share/benchmark_dla/inception_v3_quant.tflite, mdla3.0, avg inference time: 6.61
2023-07-31 07:04:55,195 [INFO] /usr/share/benchmark_dla/inception_v3_quant.tflite, vpu, avg inference time: 115.31
/usr/share/benchmark_dla/inception_v3_quant.tflite, vpu, avg inference time: 115.31

This example runs several machine learning networks and reports the average inference time of each. For more information, see the IoT Yocto Neuron SDK documentation.
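The log format above lends itself to quick post-processing. As an illustration, the following sketch parses lines of that shape and computes the MDLA-over-VPU speedup per model; it assumes the exact “path, device, avg inference time: N” layout shown in the sample output, and nothing else.

```python
import re
from collections import defaultdict

# Matches the benchmark.py log lines shown above, e.g.:
# /usr/share/benchmark_dla/inception_v3_quant.tflite, vpu, avg inference time: 115.31
LINE = re.compile(
    r"(?P<model>\S+\.tflite), (?P<device>\S+), avg inference time: (?P<ms>[\d.]+)"
)

def speedups(log: str) -> dict:
    """Return {model: vpu_time / mdla_time} parsed from benchmark.py output."""
    times = defaultdict(dict)
    for m in LINE.finditer(log):
        times[m["model"]][m["device"]] = float(m["ms"])
    return {
        model: round(devs["vpu"] / devs["mdla3.0"], 1)
        for model, devs in times.items()
        if "vpu" in devs and "mdla3.0" in devs
    }
```

Applied to the sample run above, this shows the MDLA executing each network roughly 15 to 22 times faster than the VPU.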