Unable to compile an ONNX model into NEF format

Hello,

I am using the Kneron toolchain and want to compile an ONNX model into a NEF. I checked the supported operators on this page, https://doc.kneron.com/docs/#toolchain/appendix/operators/, and have already removed all operators that the KL720 does not support, but I still get the following errors:

use toolchain binaries


use toolchain binaries

========================================

Processing model: best

INFO:kneronnxopt:The input model has been checked. Start simplifying.

Checking 0/1...

INFO:kneronnxopt:Start independent preprocessing optimization passes.

INFO:kneronnxopt:Start shape preparation and optimizations.

INFO:kneronnxopt:Start operator related optimizations.

INFO:kneronnxopt:Start pattern related optimizations.

INFO:kneronnxopt:Start independent postprocessing optimization passes.

INFO:kneronnxopt:Comparing inference result between original/optimized ONNX... (add flag '--skip_check' if want to skip this step)

WARNING:kneronnxopt.checker:

Arrays are not almost equal to 4 decimals


Mismatched elements: 99 / 210000 (0.0471%)

Max absolute difference: 0.00024414

Max relative difference: 5.7531443e-06

 x: array([1.1304e+01, 1.7465e+01, 2.1873e+01, ..., 3.7253e-06, 2.3246e-06,

    3.4571e-06], dtype=float32)

 y: array([1.1304e+01, 1.7465e+01, 2.1873e+01, ..., 3.7253e-06, 2.3246e-06,

    3.4571e-06], dtype=float32)

WARNING:kneronnxopt:The optimized model might not be correct.

INFO:kneronnxopt:The input model has been checked. Start simplifying.

Checking 0/1...

INFO:kneronnxopt:Start independent preprocessing optimization passes.

INFO:kneronnxopt:Start shape preparation and optimizations.

INFO:kneronnxopt:Start operator related optimizations.

INFO:kneronnxopt:Start pattern related optimizations.

INFO:kneronnxopt:Start independent postprocessing optimization passes.

INFO:kneronnxopt:Comparing inference result between original/optimized ONNX... (add flag '--skip_check' if want to skip this step)

INFO:kneronnxopt.checker:Two models have the same behaviour.

ERROR:root: Failure for model "best_optimized/best_optimized" when running "kdp720/compiler frontend"



===========================================

=      report on flow status    =

===========================================


                                kdp720             general
category        case            compiler frontend  compiler_cfg  clean_opt  nef_model_id  model oversize  post_clean  onnx size (MB)
best_optimized  best_optimized  Err: 134           ✓             ✓          32768         ✓               ✓           9




[2025-11-25 15:24:10] [debug] [Thread: 834] [/projects_src/kneron-piano_v2/dynasty/floating_point/floating_point/src/common/BaseInferencerImpl.cpp:63] 

start to create operators....

[2025-11-25 15:24:10] [debug] [Thread: 834] [/projects_src/kneron-piano_v2/dynasty/floating_point/floating_point/src/common/BaseInferencerImpl.cpp:102] 

The model allocated space: 585.29252 Mbytes( including workspace: 0.0 Mbytes)

Section 3 E2E simulator finished.

ERROR:root: Failure for model "best_optimized/best_optimized" when running "kdp720/compiler frontend"



===========================================

=      report on flow status    =

===========================================


                                kdp720             general
category        case            compiler frontend  compiler_cfg  clean_opt  nef_model_id  model oversize  post_clean  onnx size (MB)
best_optimized  best_optimized  Err: 134           ✓             ✓          32768         ✓               ✓           9




Workflow failed!

Quantization model generation failed. See above message for details.

My model and compile script are attached.

I hope you can help me figure out where the problem is that keeps the compilation from succeeding.
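
For reference, here is roughly how I listed the operator types in the model to compare them against that page (a minimal sketch using the onnx Python package):

import onnx
from collections import Counter

# "best.onnx" is the model named "best" in the log above.
model = onnx.load("best.onnx")
op_counts = Counter(node.op_type for node in model.graph.node)

# Print each operator type and how often it appears, to check against the
# KL720 column of the supported-operator table.
for op_type, count in sorted(op_counts.items()):
    print(f"{op_type}: {count}")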

Comments

  • Hello,

    Thank you for reporting this issue. We suggest that you cut off the operators below the Concat node in the optimized model before running the conversion, as shown in the figure below. You can then add the removed operators back into your postprocessing code.
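
    One possible way to make that cut (a rough sketch only, not an official Kneron tool; it uses the onnx Python package, and the tensor name below is a placeholder you would replace with the actual output tensor name of the Concat node, e.g. as shown in Netron):

    import onnx
    from onnx.utils import extract_model

    model_path = "best_optimized.onnx"   # assumed filename of the optimized model
    cut_path = "best_optimized_cut.onnx"

    # The original graph inputs stay as the inputs of the cut model.
    model = onnx.load(model_path)
    input_names = [i.name for i in model.graph.input]

    # Keep the graph only up to the Concat output; everything after that point
    # is dropped here and has to be re-implemented in the postprocess code.
    extract_model(
        model_path,
        cut_path,
        input_names=input_names,
        output_names=["concat_output"],  # placeholder: the Concat node's output tensor name
    )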

