AI Demo App
Overview
IoT Yocto has integrated two open-source GStreamer streaming pipeline frameworks for building complex neural network applications.
GstInference:
It is an open-source project from RidgeRun Engineering that provides a framework for integrating deep learning inference into GStreamer. You can either use one of the included elements to run out-of-the-box inference with the most popular deep learning architectures, or leverage the base classes and utilities to support your own custom architecture.
GstInference is built on R²Inference, a C/C++ abstraction layer for a variety of machine learning frameworks. With R²Inference, a single C/C++ application can work with models from different frameworks, which is useful for executing inference on different hardware resources such as the CPU, GPU, or AI-optimized accelerators.
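As a rough illustration of how these elements compose, the sketch below follows the tiny YOLO v2 example pattern from the upstream GstInference documentation: the camera feed is teed into a model branch and a bypass branch of the tinyyolov2 element, and the resulting inference metadata is drawn by inferenceoverlay. The model location, layer names, and display sink are placeholders to adapt to your model and platform.

    # Tee the video into the model branch (scaled for the network) and the
    # bypass branch (full resolution), then overlay the detections.
    gst-launch-1.0 \
        v4l2src device=/dev/video0 ! videoconvert ! tee name=t \
        t. ! videoscale ! queue ! net.sink_model \
        t. ! queue ! net.sink_bypass \
        tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow \
            backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
        net.src_bypass ! inferenceoverlay ! videoconvert ! waylandsink sync=false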
NNStreamer:
It is an open-source project that was initially developed by Samsung and then transferred to the LF AI Foundation as an incubation project. It provides a set of GStreamer plugins that allow GStreamer developers to adopt neural network models easily and efficiently, and neural network developers to manage neural network pipelines and their filters with the same ease.
NNStreamer introduces a new GStreamer stream data type (other/tensor) and a set of GStreamer elements (plugins) to construct media stream pipelines with neural network models. It is well documented on its online documentation site and supports well-known neural network frameworks including TensorFlow, TensorFlow Lite, Caffe2, PyTorch, OpenVINO, and ARMNN.
IoT Yocto provides a new tensor_filter sub-plugin for the Neuron SDK. Users can use tensor_filter_neuronsdk to create GStreamer media stream pipelines that leverage the Genio platform’s powerful AI hardware accelerators, such as the MDLA. You can find the implementation of tensor_filter_neuronsdk in the IoT Yocto NNStreamer source ($BUILD_DIR/tmp/work/armv8a-poky-linux/nnstreamer/$PV/git/ext/nnstreamer/tensor_filter/tensor_filter_neuronsdk.cc).
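As a minimal sketch, assuming a model already compiled to a .dla file for the MDLA and a 224x224 RGB input (both illustrative assumptions; the exact caps depend on your model), a classification pipeline using the Neuron SDK tensor filter could look like:

    # Capture, scale to the model input size, convert frames to tensors,
    # and run inference on the AI accelerator through the Neuron SDK filter.
    gst-launch-1.0 \
        v4l2src device=/dev/video0 ! videoconvert ! videoscale \
        ! video/x-raw,width=224,height=224,format=RGB \
        ! tensor_converter \
        ! tensor_filter framework=neuronsdk model=mobilenet_v1.dla \
        ! tensor_sink

Here tensor_converter turns raw video frames into the other/tensor stream type, and tensor_filter dispatches each tensor to the Neuron SDK sub-plugin. In a real application, tensor_sink would typically be replaced by a tensor_decoder that maps the output tensor back to labels or an overlay.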
For a detailed introduction and usage guide for these frameworks, please refer to the following: