Tensorflow-lite
About tflite
We use tensorflow-lite as the inference engine for machine learning. Tensorflow-lite can offload computation to the available hardware through the use of delegates. We support several delegates:
gpu delegate
armnn delegate
nnapi delegate (on supported platforms, see NNAPI)
Tflite models
The image installs one tensorflow-lite model that can be used with the Python image recognition demo or the benchmark_model tool:
/usr/share/label_image/mobilenet_v1_1.0_224_quant.tflite: quantized model
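As a quick sanity check, the model can be inspected with the Python tflite runtime. The following is a minimal sketch, assuming the tflite_runtime package described in the next section, that prints the quantized input and output tensor details.
# Minimal sketch: inspect the quantized model with the Python tflite runtime.
from tflite_runtime.interpreter import Interpreter
interpreter = Interpreter(model_path="/usr/share/label_image/mobilenet_v1_1.0_224_quant.tflite")
interpreter.allocate_tensors()
# For this model the input is a quantized uint8 tensor of shape [1, 224, 224, 3].
print("input :", interpreter.get_input_details()[0])
print("output:", interpreter.get_output_details()[0])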
Python image recognition demo
The image installs a Python demo application for image recognition in the /usr/share/label_image directory.
It is the upstream label_image.py, modified according to the explanations available here.
We also added three flags, --use_gpu, --use_armnn and --use_nnapi, to allow using the gpu, armnn and nnapi delegates. The gpu/armnn/nnapi delegates are loaded through the experimental load_delegate API.
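For illustration, the sketch below shows the load_delegate pattern those flags rely on; the delegate library path is a placeholder, not the actual name shipped in the image.
# Minimal sketch of the load_delegate pattern behind the --use_* flags.
# DELEGATE_LIB is a placeholder: substitute the delegate shared object
# actually shipped in your image (gpu, armnn or nnapi delegate).
from tflite_runtime.interpreter import Interpreter, load_delegate
MODEL = "/usr/share/label_image/mobilenet_v1_1.0_224_quant.tflite"
DELEGATE_LIB = "/path/to/delegate.so"  # placeholder path
delegate = load_delegate(DELEGATE_LIB)
interpreter = Interpreter(model_path=MODEL, experimental_delegates=[delegate])
interpreter.allocate_tensors()
# interpreter.invoke() then runs the supported operations on the delegate.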
To run the demo, use the following commands:
cd /usr/share/label_image
python3 label_image.py --label_file labels_mobilenet_quant_v1_224.txt --image grace_hopper.jpg --model_file mobilenet_v1_1.0_224_quant.tflite # run on the cpu
python3 label_image.py --label_file labels_mobilenet_quant_v1_224.txt --image grace_hopper.jpg --model_file mobilenet_v1_1.0_224_quant.tflite --use_gpu # run on the gpu
python3 label_image.py --label_file labels_mobilenet_quant_v1_224.txt --image grace_hopper.jpg --model_file mobilenet_v1_1.0_224_quant.tflite --use_armnn # run on the gpu, using the armnn delegate
python3 label_image.py --label_file labels_mobilenet_quant_v1_224.txt --image grace_hopper.jpg --model_file mobilenet_v1_1.0_224_quant.tflite --use_nnapi # run using the nnapi delegate
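For reference, the same CPU inference can be done directly from Python. The sketch below is a minimal, illustrative version, assuming NumPy and Pillow are available in the image (not guaranteed by this page) and that it is run from /usr/share/label_image.
# Minimal end-to-end sketch of the CPU case, run from /usr/share/label_image.
# Assumes NumPy and Pillow are installed; not guaranteed by this page.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter
interpreter = Interpreter(model_path="mobilenet_v1_1.0_224_quant.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
# Resize the test image to the model's 224x224 input and add a batch dimension.
img = Image.open("grace_hopper.jpg").convert("RGB").resize((224, 224))
interpreter.set_tensor(inp["index"], np.expand_dims(np.asarray(img, dtype=np.uint8), 0))
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]
labels = [line.strip() for line in open("labels_mobilenet_quant_v1_224.txt")]
print("top-1:", labels[int(np.argmax(scores))])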
The image should already contain the packages needed for Python development, such as the Python tflite runtime and pip3.
benchmark_model tool
We can also use the benchmark_model tool to measure performance.
To run benchmark_model with the gpu delegate, use the following command:
benchmark_model --graph=/usr/share/label_image/mobilenet_v1_1.0_224_quant.tflite --use_gpu=1
To run benchmark_model with the armnn delegate, use the following command:
benchmark_model --external_delegate_path=/usr/lib(64)/libarmnnDelegate.so.25 --external_delegate_options="backends:GpuAcc,CpuAcc" --graph=/usr/share/label_image/mobilenet_v1_1.0_224_quant.tflite --num_runs=1
Note
You should adapt the command depending on whether your system supports multilib, i.e. use /usr/lib/libarmnnDelegate.so.25 or /usr/lib64/libarmnnDelegate.so.25.
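The armnn external delegate can also be loaded from Python. The sketch below is a hedged example that reuses the library path and backend options from the benchmark_model command above (adjust lib/lib64 and the version suffix for your system).
# Minimal sketch: load the armnn delegate as an external delegate from Python,
# with the same backend selection as the benchmark_model command above.
from tflite_runtime.interpreter import Interpreter, load_delegate
armnn = load_delegate(
    "/usr/lib/libarmnnDelegate.so.25",  # or /usr/lib64/libarmnnDelegate.so.25
    options={"backends": "GpuAcc,CpuAcc"},
)
interpreter = Interpreter(
    model_path="/usr/share/label_image/mobilenet_v1_1.0_224_quant.tflite",
    experimental_delegates=[armnn],
)
interpreter.allocate_tensors()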
To run benchmark_model with the nnapi delegate, use the following command:
benchmark_model --graph=/usr/share/label_image/mobilenet_v1_1.0_224_quant.tflite --use_nnapi=1 --num_runs=1