ConvNeXt Models
Overview
ConvNeXts, built using standard ConvNet components, offer strong competition to Transformers in both accuracy and scalability. They achieve 87.8% top-1 accuracy on ImageNet and surpass Swin Transformers in COCO detection and ADE20K segmentation, all while maintaining the simplicity and efficiency characteristic of traditional ConvNets.
Getting Started
Follow these steps to load and convert ConvNeXt models with PyTorch and TorchVision.
Install Required Libraries:
Ensure you have the necessary libraries installed:
pip install torch torchvision
Load and Convert ConvNeXt Model:
Load a pretrained ConvNeXt Base model using PyTorch and TorchVision. Create a dummy input tensor for tracing, trace the model to convert it to TorchScript, and finally save the traced model.
import torch
import torchvision

# Load an ImageNet-pretrained ConvNeXt Base model.
# (torchvision >= 0.13 prefers weights=torchvision.models.ConvNeXt_Base_Weights.DEFAULT.)
model = torchvision.models.convnext_base(pretrained=True)

# Create a dummy input tensor for tracing.
trace_data = torch.randn(1, 3, 224, 224)

# Trace the model on CPU in eval mode, then save it as TorchScript.
trace_model = torch.jit.trace(model.cpu().eval(), trace_data)
torch.jit.save(trace_model, 'convnext_base.pt')
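As a quick sanity check (our addition, not part of the original steps), the saved TorchScript module can be reloaded and run on a dummy input to confirm it produces ImageNet-sized logits:

import torch

# Reload the traced module and run a forward pass on a dummy input.
model = torch.jit.load('convnext_base.pt')
model.eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))
print(out.shape)  # expected: torch.Size([1, 1000])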
How It Works
Before you begin, ensure that the NeuroPilot Converter Tool is installed.
Quant8 Conversion Process
Generate Calibration Data:
The following script creates a directory named data and generates 100 batches of random input data, each saved as a .npy file. This data is used for calibration during the quantization process.
import os
import numpy as np

# Create the calibration directory (idempotent if it already exists).
os.makedirs('data', exist_ok=True)

# Generate 100 random batches matching the model's 1x3x224x224 input.
for i in range(100):
    data = np.random.randn(1, 3, 224, 224).astype(np.float32)
    np.save(f'data/batch_{i}.npy', data)
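Random tensors are enough to exercise the conversion pipeline, but post-training quantization is generally more accurate when calibrated on representative inputs. Below is a minimal sketch that builds the same .npy batches from real images; the images/ directory and the [-1, 1] pixel scaling (chosen to match the --input_value_ranges=-1,1 flag used in the next step) are our assumptions:

import glob
import os

import numpy as np
from PIL import Image

os.makedirs('data', exist_ok=True)

for i, path in enumerate(sorted(glob.glob('images/*.jpg'))[:100]):
    img = Image.open(path).convert('RGB').resize((224, 224))
    # Scale pixels to [-1, 1] to match --input_value_ranges=-1,1 below.
    x = np.asarray(img, dtype=np.float32) / 127.5 - 1.0
    x = x.transpose(2, 0, 1)[None]  # HWC -> NCHW with a batch dimension
    np.save(f'data/batch_{i}.npy', x)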
Convert to Quantized TFLite Format:
Use the following command to convert the model to a quantized TFLite format using the generated calibration data:
mtk_pytorch_converter \
--input_script_module_file=convnext_base.pt \
--output_file=convnext_base_ptq_quant.tflite \
--input_shapes=1,3,224,224 \
--quantize=True \
--input_value_ranges=-1,1 \
--calibration_data_dir=data/ \
--calibration_data_regexp=batch_.*\.npy
FP32 Conversion Process
To convert the model to a non-quantized (FP32) TFLite format, use the following command:
mtk_pytorch_converter \
--input_script_module_file=convnext_base.pt \
--output_file=convnext_base.tflite \
--input_shapes=1,3,224,224
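To verify the FP32 conversion, the resulting model can be run with the TensorFlow Lite interpreter on a dummy input. This check is our addition and assumes the tensorflow Python package is installed:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='convnext_base.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a random input matching the converted 1x3x224x224 shape.
x = np.random.randn(1, 3, 224, 224).astype(np.float32)
interpreter.set_tensor(inp['index'], x)
interpreter.invoke()
logits = interpreter.get_tensor(out['index'])
print(logits.shape)  # expected: (1, 1000)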
Model Details
General Information
| Property | Value |
|---|---|
| Category | Classification |
| Input Size | 224x224 |
| GFLOPS | 15.36 |
| #Params (M) | 88.59 |
| Training Framework | PyTorch |
| Inference Framework | TFLite |
| Quant8 Model package | Download |
| Float32 Model package | Download |
Model Properties
Quant8
Format: TensorFlow Lite v3
Description: Exported by NeuroPilot converter v7.14.1+release
Inputs
| Property | Value |
|---|---|
| Name | x.4 |
| Tensor | int8[1,3,224,224] |
| Identifier | 83 |
| Quantization | Linear |
| Quantization Range | -1.0039 ≤ 0.0078 * q ≤ 0.9961 |
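The quantization range encodes the linear mapping real = 0.0078 * q (zero point 0), so a float input in roughly [-1, 1] is quantized by dividing by the scale, rounding, and clamping to the int8 range. A small illustrative helper (the quantize_input name and the use of the rounded 0.0078 scale are ours):

import numpy as np

def quantize_input(x, scale=0.0078, zero_point=0):
    # real = scale * (q - zero_point)  =>  q = real / scale + zero_point
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

x = np.random.uniform(-1.0, 1.0, size=(1, 3, 224, 224)).astype(np.float32)
print(quantize_input(x).dtype)  # int8, values within [-128, 127]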
Outputs
| Property | Value |
|---|---|
| Name | 1383 |
| Tensor | int8[1,1000] |
| Identifier | 306 |
| Quantization | Linear |
| Quantization Range | -1.8282 ≤ 0.0294 * (q + 66) ≤ 5.6910 |
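The output mapping real = 0.0294 * (q + 66) corresponds to a zero point of -66. In practice the exact scale and zero point should be read from the interpreter's tensor details rather than hard-coded. A minimal end-to-end sketch for the quantized model (our addition; assumes the tensorflow package is installed):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='convnext_base_ptq_quant.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Quantize a [-1, 1] float input with the input tensor's own parameters.
scale_in, zp_in = inp['quantization']
x = np.random.uniform(-1.0, 1.0, size=(1, 3, 224, 224)).astype(np.float32)
q_in = np.clip(np.round(x / scale_in) + zp_in, -128, 127).astype(np.int8)

interpreter.set_tensor(inp['index'], q_in)
interpreter.invoke()
q_out = interpreter.get_tensor(out['index'])

# Dequantize: real = scale * (q - zero_point); here zero_point is -66.
scale_out, zp_out = out['quantization']
logits = scale_out * (q_out.astype(np.float32) - zp_out)
print(int(np.argmax(logits)))  # predicted ImageNet class index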
FP32
Format: TensorFlow Lite v3
Description: Exported by NeuroPilot converter v7.14.1+release
Inputs
| Property | Value |
|---|---|
| Name | x.4 |
| Tensor | float32[1,3,224,224] |
| Identifier | 600 |
Outputs
| Property | Value |
|---|---|
| Name | 1383 |
| Tensor | float32[1,1000] |
| Identifier | 84 |
Performance Benchmarks
ConvNeXt-quant8
| Run model (.tflite) 10 times | CPU (Thread:8) | GPU | ARMNN(GpuAcc) | ARMNN(CpuAcc) | Neuron Stable Delegate(APU) | APU(MDLA) | APU(VPU) |
|---|---|---|---|---|---|---|---|
| G350 | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| G510 | N/A | N/A | N/A | N/A | N/A | 55.03 ms | N/A |
| G700 | N/A | N/A | N/A | N/A | N/A | 38.22 ms | N/A |
| G1200 | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
ConvNeXt-fp32
- Widespread: CPU only, light workload.
- Performance: CPU and GPU, medium workload.
- Ultimate: CPU, GPU, and APUs, heavy workload.