Fixed-Point Model Inference ERROR
Hello, following
https://doc.kneron.com/docs/#toolchain/manual_1_overview/#14-floating-point-model-preparation
I am testing the conversion of a yolov8n model to NEF format.
When the flow reaches 1.5.2. Fixed-Point Model Inference, the following error occurs:
File "/workspace/devDir/test3.py", line 57, in <module>
    fixed_point_inf_results = ktc.kneron_inference(input_data, bie_file=bie_path, input_names=["images"], platform=630)
  File "/workspace/miniconda/envs/onnx1.13/lib/python3.9/site-packages/ktc/inferencer.py", line 35, in kneron_inference
    res = e2e.kneron_inference(*args, **kwargs)
  File "/workspace/E2E_Simulator/python_flow/kneron_inference.py", line 71, in kneron_inference
    output = dynasty.dynasty_inference(
  File "/workspace/E2E_Simulator/python_flow/utils/dynasty.py", line 58, in dynasty_inference
    float_dict, fixed_dict = inference_dynasty_fx(
  File "/workspace/miniconda/envs/onnx1.13/lib/python3.9/site-packages/sys_flow/inference.py", line 398, in inference_dynasty_fx
    _, input_list, input_fns = dynasty.np2txt(
  File "/workspace/miniconda/envs/onnx1.13/lib/python3.9/site-packages/sys_flow/dynasty_v3.py", line 317, in np2txt
    assert set(input_nodes) == set(input_np.keys()), \
AssertionError: ERROR: input name does not match: onnx input (['images', '/model.22/Constant_9_output_0', '/model.22/Constant_12_output_0']) vs given np (dict_keys(['images']))
How can I resolve this problem?
The model I used is attached.
Comments
Hello,
Has your yolov8n onnx model been through onnx optimization?
The latest version of the toolchain no longer supports ktc.onnx_optimizer.onnx2onnx_flow, so please run:
python /workspace/libs/kneronnxopt/optimize.py -o [name of your output model] [the model to optimize, i.e. yolov8n.onnx]
to optimize the onnx model, then use that model for the conversion.
Details: Kneronnxopt - Document Center
If you already used an optimized model and still get the error, please provide the python script you use for conversion, thanks!
Thanks for your help. My test procedure is as follows:
1. Install ultralytics
2. python3 export.py

from ultralytics import YOLO

# Load an official model
model = YOLO('yolov8n.pt')

# Export the model
model.export(format='onnx', opset=11)

3. Optimize the model
4. python3 onnx2nef.py
import os

import cv2
import numpy as np
import onnx
import ktc


def preprocess(input_file):
    oimg = cv2.imread(input_file)
    img = cv2.cvtColor(oimg, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (640, 640))
    image_data = np.array(img) / 255.0
    image_data = np.transpose(image_data, (2, 0, 1))
    image_data = np.expand_dims(image_data, axis=0).astype(np.float32)
    return image_data


optimized_m = onnx.load("yolov8n-o.onnx")
km = ktc.ModelConfig(12345, "abcd", "630", onnx_model=optimized_m)
eval_result = km.evaluate()

input_data = [preprocess("/workspace/examples/mobilenetv2/images/000007.jpg")]
floating_point_inf_results = ktc.kneron_inference(input_data, onnx_file='yolov8n-o.onnx', input_names=["images"])
print(floating_point_inf_results)

raw_images = os.listdir("/workspace/examples/mobilenetv2/images")
input_images = [preprocess("/workspace/examples/mobilenetv2/images/" + image_name) for image_name in raw_images]
input_mapping = {"images": input_images}
bie_path = km.analysis(input_mapping, threads=8, output_dir='./')

fixed_point_inf_results = ktc.kneron_inference(input_data, bie_file=bie_path, input_names=["images"], platform=630)
print(fixed_point_inf_results)

nef_path = ktc.compile([km])
binary_inf_results = ktc.kneron_inference(input_data, nef_file=nef_path, input_names=["images"], platform=630)
print(binary_inf_results)

Hello,
The yolov8n onnx model contains a Softmax operator, and we do not support Softmax.
Please refer to the operators Kneron supports: Hardware Supported Operators - Document Center (kneron.com)
We suggest cutting the operators under the Concat node, from Slice downward (including Slice), out of the onnx model, and implementing those operators as a postprocess function.
With the cut onnx model, the model conversion will succeed.
Also, because the onnx model's input is 1x3x384x384, the resize in the preprocess function needs to be img = cv2.resize(img, (384, 384))
Thank you for your assistance. I will reach out again if any further questions come up.