CPU Guidelines

General Restrictions

  1. Only the NHWC data layout is supported.

  2. Dynamic weights on Conv2D and FullyConnected layers are unsupported.

  3. Dynamic tensor shapes are unsupported.
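To make the NHWC restriction concrete, the sketch below shows how the flat buffer offset of an element differs between NHWC (channels innermost) and NCHW layouts. The helper names are illustrative only, not part of any runtime API:

```python
def nhwc_offset(n, h, w, c, H, W, C):
    # Flat offset of element (n, h, w, c) in an NHWC buffer:
    # channels vary fastest, so the C values of one pixel are contiguous.
    return ((n * H + h) * W + w) * C + c

def nchw_offset(n, c, h, w, C, H, W):
    # For comparison: in NCHW each channel's full H x W plane is contiguous.
    return ((n * C + c) * H + h) * W + w
```

For example, element (0, 0, 0, 1) of a 1×2×2×3 tensor sits at flat offset 1 in NHWC but at offset 4 in NCHW, which is why tensors laid out as NCHW must be transposed before they can run on this backend.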

Supported OPs Specification

| OP Name | TFLite OP | NNAPI | Data Type | Restrictions |
|---|---|---|---|---|
| Abs | ABS | ABS | FP32, FP16 | |
| ArgMax / ArgMin | ARG_MAX / ARG_MIN | ARGMAX / ARGMIN | FP32, FP16, UINT8 | |
| AvgPooling | AVERAGE_POOL_2D | AVERAGE_POOL_2D | FP32, UINT8 | |
| BatchToSpace | BATCH_TO_SPACE_ND | BATCH_TO_SPACE_ND | FP32, FP16, UINT8 | |
| Concat | CONCATENATION | CONCATENATION | FP32, UINT8 | |
| Conv2D | CONV_2D | CONV_2D | FP32, FP16, UINT8 | |
| DepthToSpace | DEPTH_TO_SPACE | DEPTH_TO_SPACE | FP32, FP16, UINT8 | Block size must be greater than 1. Requantization is unsupported. |
| DepthwiseConv2D | DEPTHWISE_CONV_2D | DEPTHWISE_CONV_2D | FP32, FP16, UINT8 | Requires constant filter and bias. |
| Dequantize | DEQUANTIZE | DEQUANTIZE | Input: UINT8; Output: FP32 | Per-channel quantization is unsupported. |
| ElementWiseAdd | ADD | ADD | FP32, FP16, UINT8 | |
| ElementWiseDiv | DIV | DIV | FP32, FP16, UINT8 | |
| ElementWiseMul | MUL | MUL | FP32, FP16, UINT8 | |
| ElementWiseSub | SUB | SUB | FP32, FP16, UINT8 | |
| Elu | ELU | ELU | FP32, FP16 | |
| Equal | EQUAL | EQUAL | FP32, FP16, UINT8, INT32 | |
| FullyConnected | FULLY_CONNECTED | FULLY_CONNECTED | FP32, FP16, UINT8 | Requires constant filter and bias. Per-channel quantization is unsupported. |
| Greater | GREATER | GREATER | FP32, FP16, UINT8, INT32 | |
| GreaterEqual | GREATER_EQUAL | GREATER_EQUAL | FP32, FP16, UINT8, INT32 | |
| GroupConv2D | Composite pattern of CONV_2D | GROUPED_CONV_2D | FP32, FP16, UINT8 | |
| HardSwish | HARD_SWISH | HARD_SWISH | FP32, FP16, UINT8 | |
| InstanceNorm | | INSTANCE_NORMALIZATION | FP32 | Only 4D input is supported. |
| Less | LESS | LESS | FP32, FP16, UINT8, INT32 | |
| LessEqual | LESS_EQUAL | LESS_EQUAL | FP32, FP16, UINT8, INT32 | |
| LogicalAnd / LogicalOr / LogicalNot | LOGICAL_AND / LOGICAL_OR / LOGICAL_NOT | LOGICAL_AND / LOGICAL_OR / LOGICAL_NOT | BOOL | |
| L2Norm | L2_NORMALIZATION | L2_NORMALIZATION | FP32 | Only 4D input is supported. Only the last dimension of the input tensor is supported as the axis. |
| L2Pooling | L2_POOL_2D | L2_POOL_2D | FP32 | |
| MaxPooling | MAX_POOL_2D | MAX_POOL_2D | FP32, UINT8 | |
| Maximum | MAXIMUM | MAXIMUM | FP32, FP16, UINT8 | |
| Mean | MEAN | MEAN | FP32, FP16, UINT8 | |
| Minimum | MINIMUM | MINIMUM | FP32, FP16, UINT8 | |
| Neg | NEG | NEG | FP32, FP16 | |
| NotEqual | NOT_EQUAL | NOT_EQUAL | FP32, FP16, UINT8, INT32 | |
| Pad | PAD / PADV2 | PAD / PAD_V2 | FP32, FP16, UINT8 | Padding with non-zero values is supported. |
| PRelu | PRELU | PRELU | FP32, FP16, UINT8 | Per-channel quantization is unsupported. |
| QLSTM | LSTM | QUANTIZED_16BIT_LSTM | UINT8 | Only constant weights are supported. |
| Quantize | QUANTIZE | QUANTIZE | Input: FP32; Output: UINT8 | |
| ReLU / ReLU1 / ReLU6 | RELU / RELU_N1_TO_1 / RELU6 | RELU / RELU1 / RELU6 | FP32, FP16, UINT8 | |
| Reshape | RESHAPE | RESHAPE | FP32, FP16, UINT8 | Requantization is unsupported. |
| Resize::BILINEAR / Resize::NEAREST | RESIZE_BILINEAR / RESIZE_NEAREST_NEIGHBOR | RESIZE_BILINEAR / RESIZE_NEAREST_NEIGHBOR | FP32, FP16, UINT8 | Requantization is unsupported. Align-corners and half-pixel-centers modes are unsupported. |
| RSqrt | RSQRT | RSQRT | FP32, FP16 | |
| Sigmoid | LOGISTIC | LOGISTIC | FP32, FP16, UINT8 | |
| SoftMax | SOFTMAX | SOFTMAX | FP32, UINT8 | Only the last dimension of the input tensor is supported as the axis. |
| SpaceToBatch | SPACE_TO_BATCH_ND | SPACE_TO_BATCH_ND | FP32, FP16, UINT8 | Block size must be greater than 1. |
| SpaceToDepth | SPACE_TO_DEPTH | SPACE_TO_DEPTH | FP32, FP16, UINT8 | Block size must be greater than 1. |
| Sqrt | SQRT | SQRT | FP32, FP16 | |
| StridedSlice | STRIDED_SLICE | STRIDED_SLICE | FP32, FP16, UINT8 | Rank greater than 4 is unsupported. |
| Tanh | TANH | TANH | FP32, FP16, UINT8 | |
| Transpose | TRANSPOSE | TRANSPOSE | FP32, FP16, UINT8 | Only rank-4 input tensors are supported. |
| TransposeConv2D | TRANSPOSE_CONV | TRANSPOSE_CONV_2D | FP32, FP16, UINT8 | Requires constant filter and bias. |
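The Quantize and Dequantize entries map FP32 values to and from UINT8 using TFLite's standard per-tensor affine scheme, real = scale × (q − zero_point). A minimal sketch of that arithmetic, assuming a single per-tensor scale and zero point (per-channel parameters are unsupported here, per the table); the function names are illustrative:

```python
def quantize(x, scale, zero_point):
    # FP32 -> UINT8: affine quantization, clamped to the UINT8 range [0, 255].
    q = round(x / scale) + zero_point
    return max(0, min(255, q))

def dequantize(q, scale, zero_point):
    # UINT8 -> FP32: inverse affine mapping with the same per-tensor parameters.
    return scale * (q - zero_point)
```

For instance, with scale 0.02 and zero point 128, the real value 0.5 quantizes to 153, and dequantizing 153 recovers 0.5 (up to rounding). Because a Reshape, Resize, or DepthToSpace op cannot requantize, the input and output quantization parameters of those ops must match.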