CPU Guidelines
General Restrictions
- Data layout supports NHWC only.
- Dynamic weights on `Conv2D` and `FullyConnected` layers are unsupported.
- Dynamic tensor shapes are unsupported.
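
Below is a minimal sketch of an export path that stays within these restrictions. It assumes a TensorFlow/Keras workflow, which this document does not mandate; the model architecture and file names are illustrative only. The input shape is fully static, the layout is NHWC (the Keras default for convolutions), and the `Conv2D`/`Dense` (FullyConnected) weights become constants in the converted flatbuffer rather than runtime-provided tensors.

```python
import tensorflow as tf

# Fully specified NHWC input shape with a fixed batch size,
# so the converted model has no dynamic tensor shapes.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3), batch_size=1),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# Converting from a Keras model freezes the Conv2D / FullyConnected
# weights into constants inside the .tflite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```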
Supported OPs Specification
| OP Name | TFLite OP | NNAPI | Data Type | Restrictions |
|---|---|---|---|---|
| Abs | ABS | ABS | FP32 | |
| ArgMax | ARG_MAX | ARGMAX | FP32 | |
| AvgPooling | AVERAGE_POOL_2D | AVERAGE_POOL_2D | FP32 | |
| BatchToSpace | BATCH_TO_SPACE_ND | BATCH_TO_SPACE_ND | FP32 | |
| Concat | CONCATENATION | CONCATENATION | FP32 | |
| Conv2D | CONV_2D | CONV_2D | FP32 | |
| DepthToSpace | DEPTH_TO_SPACE | DEPTH_TO_SPACE | FP32 | Block size should be greater than 1. |
| DepthwiseConv2D | DEPTHWISE_CONV_2D | DEPTHWISE_CONV_2D | FP32 | Requires constant filter and bias. |
| Dequantize | DEQUANTIZE | DEQUANTIZE | Input - UINT8 | Per-channel quantization is unsupported. |
| ElementWiseAdd | ADD | ADD | FP32 | |
| ElementWiseDiv | DIV | DIV | FP32 | |
| ElementWiseMul | MUL | MUL | FP32 | |
| ElementWiseSub | SUB | SUB | FP32 | |
| Elu | ELU | ELU | FP32 | |
| Equal | EQUAL | EQUAL | FP32 | |
| FullyConnected | FULLY_CONNECTED | FULLY_CONNECTED | FP32 | Requires constant filter and bias. |
| Greater | GREATER | GREATER | FP32 | |
| GreaterEqual | GREATER_EQUAL | GREATER_EQUAL | FP32 | |
| GroupConv2D | Composite pattern of CONV_2D | GROUPED_CONV_2D | FP32 | |
| HardSwish | HARD_SWISH | HARD_SWISH | FP32 | |
| InstanceNorm | | INSTANCE_NORMALIZATION | FP32 | Only 4D input is supported. |
| Less | LESS | LESS | FP32 | |
| LessEqual | LESS_EQUAL | LESS_EQUAL | FP32 | |
| LogicalAnd | LOGICAL_AND | LOGICAL_AND | BOOL | |
| L2Norm | L2_NORMALIZATION | L2_NORMALIZATION | FP32 | Only 4D input is supported. |
| L2Pooling | L2_POOL_2D | L2_POOL_2D | FP32 | |
| MaxPooling | MAX_POOL_2D | MAX_POOL_2D | FP32 | |
| Maximum | MAXIMUM | MAXIMUM | FP32 | |
| Mean | MEAN | MEAN | FP32 | |
| Minimum | MINIMUM | MINIMUM | FP32 | |
| Neg | NEG | NEG | FP32 | |
| NotEqual | NOT_EQUAL | NOT_EQUAL | FP32 | |
| Pad | PAD | PAD | FP32 | Padding with a non-zero value is supported. |
| PRelu | PRELU | PRELU | FP32 | Per-channel quantization is unsupported. |
| QLSTM | LSTM | QUANTIZED_16BIT_LSTM | UINT8 | Only constant weights are supported. |
| Quantize | QUANTIZE | QUANTIZE | Input - FP32 | |
| ReLU | RELU | RELU | FP32 | |
| Reshape | RESHAPE | RESHAPE | FP32 | Requantization is unsupported. |
| Resize::BILINEAR | RESIZE_BILINEAR | RESIZE_BILINEAR | FP32 | Requantization is unsupported. |
| RSqrt | RSQRT | RSQRT | FP32 | |
| Sigmoid | LOGISTIC | LOGISTIC | FP32 | |
| SoftMax | SOFTMAX | SOFTMAX | FP32 | Only the last dimension of the input tensor is supported as the axis. |
| SpaceToBatch | SPACE_TO_BATCH_ND | SPACE_TO_BATCH_ND | FP32 | Block size should be greater than 1. |
| SpaceToDepth | SPACE_TO_DEPTH | SPACE_TO_DEPTH | FP32 | Block size should be greater than 1. |
| Sqrt | SQRT | SQRT | FP32 | |
| StridedSlice | STRIDED_SLICE | STRIDED_SLICE | FP32 | Rank greater than 4 is not supported. |
| Tanh | TANH | TANH | FP32 | |
| Transpose | TRANSPOSE | TRANSPOSE | FP32 | Only rank-4 input tensors are supported. |
| TransposeConv2D | TRANSPOSE_CONV | TRANSPOSE_CONV_2D | FP32 | Requires constant filter and bias. |
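
The "TFLite OP" column can be transcribed into a lookup set for pre-screening a model before delegation. The sketch below is not part of any official API; the helper name `unsupported_ops` and the way the op names are obtained for a concrete model (for example with a flatbuffer schema tool or a model analyzer) are assumptions left to the reader.

```python
# Transcribed from the "TFLite OP" column above. GroupConv2D and InstanceNorm
# are matched as composite patterns rather than single builtin ops, so they
# are not listed here.
SUPPORTED_TFLITE_OPS = {
    "ABS", "ARG_MAX", "AVERAGE_POOL_2D", "BATCH_TO_SPACE_ND", "CONCATENATION",
    "CONV_2D", "DEPTH_TO_SPACE", "DEPTHWISE_CONV_2D", "DEQUANTIZE", "ADD",
    "DIV", "MUL", "SUB", "ELU", "EQUAL", "FULLY_CONNECTED", "GREATER",
    "GREATER_EQUAL", "HARD_SWISH", "LESS", "LESS_EQUAL", "LOGICAL_AND",
    "L2_NORMALIZATION", "L2_POOL_2D", "MAX_POOL_2D", "MAXIMUM", "MEAN",
    "MINIMUM", "NEG", "NOT_EQUAL", "PAD", "PRELU", "LSTM", "QUANTIZE",
    "RELU", "RESHAPE", "RESIZE_BILINEAR", "RSQRT", "LOGISTIC", "SOFTMAX",
    "SPACE_TO_BATCH_ND", "SPACE_TO_DEPTH", "SQRT", "STRIDED_SLICE", "TANH",
    "TRANSPOSE", "TRANSPOSE_CONV",
}

def unsupported_ops(model_op_names):
    """Return the builtin op names that do not appear in the table above."""
    return sorted(set(model_op_names) - SUPPORTED_TFLITE_OPS)

# Hypothetical usage: op names extracted from a model by an external tool.
print(unsupported_ops(["CONV_2D", "SOFTMAX", "GATHER"]))  # -> ['GATHER']
```

Note that passing this check only means each op has a supported counterpart; the per-op restrictions and data types in the table still apply.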