Unable to compile an ONNX model into NEF format

Hello,

I am using the Kneron toolchain and want to compile an ONNX model into NEF. I checked the supported operators on this page: https://doc.kneron.com/docs/#toolchain/appendix/operators/ , and I have already removed the operators that the KL720 does not support, but I still get the following error:

use toolchain binaries


========================================

Processing model: best

INFO:kneronnxopt:The input model has been checked. Start simplifying.

Checking 0/1...

INFO:kneronnxopt:Start independent preprocessing optimization passes.

INFO:kneronnxopt:Start shape preparation and optimizations.

INFO:kneronnxopt:Start operator related optimizations.

INFO:kneronnxopt:Start pattern related optimizations.

INFO:kneronnxopt:Start independent postprocessing optimization passes.

INFO:kneronnxopt:Comparing inference result between original/optimized ONNX... (add flag '--skip_check' if want to skip this step)

WARNING:kneronnxopt.checker:

Arrays are not almost equal to 4 decimals


Mismatched elements: 99 / 210000 (0.0471%)

Max absolute difference: 0.00024414

Max relative difference: 5.7531443e-06

 x: array([1.1304e+01, 1.7465e+01, 2.1873e+01, ..., 3.7253e-06, 2.3246e-06,

    3.4571e-06], dtype=float32)

 y: array([1.1304e+01, 1.7465e+01, 2.1873e+01, ..., 3.7253e-06, 2.3246e-06,

    3.4571e-06], dtype=float32)

WARNING:kneronnxopt:The optimized model might not be correct.

INFO:kneronnxopt:The input model has been checked. Start simplifying.

Checking 0/1...

INFO:kneronnxopt:Start independent preprocessing optimization passes.

INFO:kneronnxopt:Start shape preparation and optimizations.

INFO:kneronnxopt:Start operator related optimizations.

INFO:kneronnxopt:Start pattern related optimizations.

INFO:kneronnxopt:Start independent postprocessing optimization passes.

INFO:kneronnxopt:Comparing inference result between original/optimized ONNX... (add flag '--skip_check' if want to skip this step)

INFO:kneronnxopt.checker:Two models have the same behaviour.

ERROR:root: Failure for model "best_optimized/best_optimized" when running "kdp720/compiler frontend"



===========================================

=      report on flow status    =

===========================================


                     kdp720        general                            

               compiler frontend compiler_cfg clean_opt nef_model_id model oversize post_clean onnx size (MB)

category    case                                                     

best_optimized best_optimized     Err: 134      ✓     ✓    32768       ✓     ✓       9




[2025-11-25 15:24:10] [debug] [Thread: 834] [/projects_src/kneron-piano_v2/dynasty/floating_point/floating_point/src/common/BaseInferencerImpl.cpp:63] 

start to create operators....

[2025-11-25 15:24:10] [debug] [Thread: 834] [/projects_src/kneron-piano_v2/dynasty/floating_point/floating_point/src/common/BaseInferencerImpl.cpp:102] 

The model allocated space: 585.29252 Mbytes( including workspace: 0.0 Mbytes)

Section 3 E2E simulator finished.

Workflow failed!

Quantization model generation failed. See above message for details.

My model and compilation script are attached.

I hope you can help me figure out where the problem is that prevents the compilation from succeeding.

Comments

  • Hello,

    Thank you for reporting this issue. We suggest cutting the operators below the Concat in the optimized model before converting, as shown in the attached figure. You can then add the removed operators back into your post-processing.


  • Hello,

    I have a follow-up question.

    I am trying to deploy a YOLOv8-based object detection model to the Kneron KL720.

    Since the KL720 hardware does not support the Softmax operator, I replaced it with a 4th-order Taylor expansion as an approximation.

    According to https://doc.kneron.com/docs/#toolchain/appendix/operators/ , the operators below the Concat in my optimized model all meet the hardware operator constraints,

    and I would like to run the entire model on the NPU.

    May I ask which constraint is not met, causing the conversion to fail?

  • Hello,

    Although all of the model's operators are supported, the Slice in the ONNX model prevents the Toolchain KL720 compiler from converting it. After we cut the operators away, the conversion succeeded. Because the KL720 is an older platform, fixing these bugs would take considerable time, so we suggest cutting the operators or switching to a different model. We apologize for the inconvenience.

  • Understood.

    Thank you.

  • Sorry, one follow-up question:

    so is this a compiler bug,

    or is there a problem with my ONNX model that fails to meet the compilation constraints?
