Model Visualization

This section introduces how to use the model visualization tool Netron, or the command-line tool ncc-tflite, to inspect tensor information.

Because the version of Neuron API currently supported on IoT Yocto is v5.0.1, users cannot embed model metadata when compiling the DLA model, nor read metadata back at runtime with NeuronRuntime_getMetadata. That is why, in Example: Integrate with Multimedia Frameworks, the scale and zero_point values in ConvertArrayToFixedPoint and ConvertArrayToFloatingPoint are hard-coded constants.

Users must have a clear understanding of the model information to implement inference correctly. The following describes several ways to obtain useful model information.
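For reference, the scale and zero_point parameters mentioned above describe an affine (asymmetric) quantization scheme: a real value x is stored as round(x / scale) + zero_point, clamped to the uint8 range. A minimal sketch of this arithmetic (the quantize/dequantize helper names are illustrative, not part of the Neuron API):

```python
# Illustrative sketch of affine (asymmetric) uint8 quantization.
# These helpers are not part of the Neuron API; they only show the
# arithmetic that the per-tensor scale/zero_point parameters describe.

def quantize(x, scale, zero_point):
    """Map a real value to its uint8 fixed-point representation."""
    q = round(x / scale) + zero_point
    return max(0, min(255, q))  # clamp to the uint8 range

def dequantize(q, scale, zero_point):
    """Map a uint8 fixed-point value back to a real value."""
    return (q - zero_point) * scale
```

With the constants used later in this section (scale ≈ 0.0078, zero_point = 128), a real value of 0.0 is stored as 128, and a stored 128 dequantizes back to exactly 0.0.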

Netron

The model visualized below is the same one used in AI Demo App - mobilenet_ssd_pascal_quant.tflite, which can be found in /usr/share/gstinference_example/object_detection.

  • After opening the model in Netron, users can see a full view of it.


Full view of mobilenet_ssd_pascal_quant.tflite

Note

Netron supports not only TensorFlow Lite but also various other frameworks, such as ONNX, Caffe, Keras, Darknet, etc.

  • You can see that Netron provides a lot of useful information about each tensor. For example, the input tensor, Preprocessor/sub, is a 3-channel image of size 300 × 300, and its quantization parameters are zero_point = 128 and scale = 0.0078.


Information for Specific Tensor
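The displayed quantization parameters can be interpreted directly. With zero_point = 128 and scale ≈ 0.0078 (about 1/127.5), the dequantized range of a raw uint8 input pixel is roughly [-1, 1), which matches the common MobileNet preprocessing x / 127.5 - 1. A quick check (the variable names below are illustrative):

```python
# Recover the real-valued range that the uint8 input tensor represents,
# using the scale and zero_point values shown by Netron.
scale, zero_point = 0.0078, 128

real_min = (0 - zero_point) * scale    # value encoded by raw pixel 0
real_max = (255 - zero_point) * scale  # value encoded by raw pixel 255

print(real_min, real_max)  # roughly -1.0 and 1.0
```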

ncc-tflite

The Neuron compiler (ncc-tflite) provides a simple command-line interface for users to obtain model information. The following examples show how to use it for model visualization.

  • Show input and output tensors of the TFLite model. For example:

$ ncc-tflite --show-io-info mobilenet_ssd_pascal_quant.tflite

# of input tensors: 1
[0]: Preprocessor/sub
├ Type: kTfLiteUInt8
├ AllocType: kTfLiteArenaRw
├ Shape: {1,300,300,3}
├ Scale: 0.00787402
├ ZeroPoint: 128
└ Bytes: 270000

# of output tensors: 2
[0]: concat_1
├ Type: kTfLiteUInt8
├ AllocType: kTfLiteArenaRw
├ Shape: {1,1917,21}
├ Scale: 0.141151
├ ZeroPoint: 159
└ Bytes: 40257
[1]: Squeeze
├ Type: kTfLiteUInt8
├ AllocType: kTfLiteArenaRw
├ Shape: {1,1917,4}
├ Scale: 0.06461
├ ZeroPoint: 173
└ Bytes: 7668
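As a sanity check, the Bytes field reported for each tensor is simply the product of the shape dimensions times the element size (1 byte per element for kTfLiteUInt8). For example, in Python:

```python
from math import prod

# Verify the "Bytes" field reported by ncc-tflite for the uint8 tensors
# above: product of shape dimensions times 1 byte per element.
assert prod([1, 300, 300, 3]) * 1 == 270000  # Preprocessor/sub
assert prod([1, 1917, 21]) * 1 == 40257      # concat_1
assert prod([1, 1917, 4]) * 1 == 7668        # Squeeze
```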
  • Show tensors and nodes in the TFLite model. For example:

$ ncc-tflite --show-tflite mobilenet_ssd_pascal_quant.tflite

Tensors:
[0]: BoxPredictor_0/BoxEncodingPredictor/BiasAdd
├ Type: kTfLiteUInt8
├ AllocType: kTfLiteArenaRw
├ Shape: {1,19,19,12}
├ Scale: 0.06461
├ ZeroPoint: 173
└ Bytes: 4332
[1]: BoxPredictor_0/BoxEncodingPredictor/Conv2D_bias
├ Type: kTfLiteInt32
├ AllocType: kTfLiteMmapRo
├ Shape: {12}
├ Scale: 7.66586e-05
├ ZeroPoint: 0
└ Bytes: 48
[2]: BoxPredictor_0/BoxEncodingPredictor/weights_quant/FakeQuantWithMinMaxVars
├ Type: kTfLiteUInt8
├ AllocType: kTfLiteMmapRo
├ Shape: {12,1,1,512}
├ Scale: 0.00325812
├ ZeroPoint: 132
└ Bytes: 6144

...
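The Bytes field in this listing follows the same rule, with the element size depending on the tensor type: kTfLiteUInt8 is 1 byte per element, while kTfLiteInt32 is 4 bytes. A quick check in Python:

```python
from math import prod

# Verify the "Bytes" field for the tensors listed above; note the
# element size differs by type (uint8 = 1 byte, int32 = 4 bytes).
assert prod([1, 19, 19, 12]) * 1 == 4332  # uint8 BiasAdd output
assert prod([12]) * 4 == 48               # int32 Conv2D_bias
assert prod([12, 1, 1, 512]) * 1 == 6144  # uint8 quantized weights
```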

Note

For the details of ncc-tflite, please refer to the Neuron Compiler section.