How to interpret the inference result of the LittleNet example?

Hi,

I can run the LittleNet example and get this result:


Section 3 E2E simulator result:

[array([[[[-12.30762 ,  8.2846937]]]])]

Section 4 E2E simulator result:========================================]100% 0.002000s

[array([[[[-0.01231082, 0.03299299]]]]), array([[[[-13.682867 ,  8.6220808]]]])]

[tool][info][batch_compile.cc:543][BatchCompile] compiling output.bie

[tool][info][batch_compile.cc:574][LayoutBins] Re-layout binaries

[tool][info][batch_compile.cc:623][LayoutBins] output start: 0x6000a820, end: 0x6000a820

[tool][info][batch_compile.cc:543][BatchCompile] compiling output.bie

[tool][info][batch_compile.cc:733][CombineAllBin] Combine all bin files of all models into all_models.bin

[tool][info][batch_compile.cc:809][WriteFwInfo] Generate firmware info to fw_info.txt & fw_info.bin

[tool][info][batch_compile.cc:675][VerifyOutput]

=> 1 models

[tool][info][batch_compile.cc:683][VerifyOutput]   id: 1001

[tool][info][batch_compile.cc:684][VerifyOutput]   version: 0x1

[tool][info][batch_compile.cc:689][VerifyOutput]   addr: 0x60000000, size: 0xa800

[tool][info][batch_compile.cc:689][VerifyOutput]   addr: 0x6000a800, size: 0x20

[tool][info][batch_compile.cc:689][VerifyOutput]   addr: 0x6000a820, size: 0x0

[tool][info][batch_compile.cc:689][VerifyOutput]   addr: 0x6000a820, size: 0x474

[tool][info][batch_compile.cc:689][VerifyOutput]   addr: 0x6000aca0, size: 0x14ec0

[tool][info][batch_compile.cc:689][VerifyOutput]   addr: 0x6001fb60, size: 0xa0

[tool][info][batch_compile.cc:692][VerifyOutput]


[tool][info][batch_compile.cc:696][VerifyOutput]  end addr 0x6001fc00,

[tool][info][batch_compile.cc:698][VerifyOutput] total bin size 0x153e0

current node is an NPU INPUT NODE

current node is a output NODE

Info: set output buffer [6000a800, 6000a820).

CSIM Version: 8ac3ec2

---------- start npu ----------

---------- dump output node ----------

done

Section 5 E2E simulator result:

[array([[[[-13.68286679,  8.62208044]]]])]


Then I tried using another tool (Matlab in this case) to import LittleNet.onnx and export LittleNet_convert.onnx. Running that in the simulation, I got the result below:

Section 3 E2E simulator result:

[array([[[[-0.01243921, 0.03222339]]]])]

Section 4 E2E simulator result:========================================]100% 0.003000s

[array([[[[-0.01231082, 0.03299299]]]]), array([[[[-13.682867 ,  8.6220808]]]])]

[tool][info][batch_compile.cc:543][BatchCompile] compiling output.bie

[tool][info][batch_compile.cc:574][LayoutBins] Re-layout binaries

[tool][info][batch_compile.cc:623][LayoutBins] output start: 0x6000a820, end: 0x6000a820

[tool][info][batch_compile.cc:543][BatchCompile] compiling output.bie

[tool][info][batch_compile.cc:733][CombineAllBin] Combine all bin files of all models into all_models.bin

[tool][info][batch_compile.cc:809][WriteFwInfo] Generate firmware info to fw_info.txt & fw_info.bin

[tool][info][batch_compile.cc:675][VerifyOutput]

=> 1 models

[tool][info][batch_compile.cc:683][VerifyOutput]   id: 1001

[tool][info][batch_compile.cc:684][VerifyOutput]   version: 0x1

[tool][info][batch_compile.cc:689][VerifyOutput]   addr: 0x60000000, size: 0xa800

[tool][info][batch_compile.cc:689][VerifyOutput]   addr: 0x6000a800, size: 0x20

[tool][info][batch_compile.cc:689][VerifyOutput]   addr: 0x6000a820, size: 0x0

[tool][info][batch_compile.cc:689][VerifyOutput]   addr: 0x6000a820, size: 0x474

[tool][info][batch_compile.cc:689][VerifyOutput]   addr: 0x6000aca0, size: 0x14ec0

[tool][info][batch_compile.cc:689][VerifyOutput]   addr: 0x6001fb60, size: 0xa0

[tool][info][batch_compile.cc:692][VerifyOutput]


[tool][info][batch_compile.cc:696][VerifyOutput]  end addr 0x6001fc00,

[tool][info][batch_compile.cc:698][VerifyOutput] total bin size 0x153e0

current node is an NPU INPUT NODE

current node is a output NODE

Info: set output buffer [6000a800, 6000a820).

CSIM Version: 8ac3ec2

---------- start npu ----------

---------- dump output node ----------

done

Section 5 E2E simulator result:

[array([[[[-0.01231082, 0.03299299]]]])]


Two questions:

1. What is the meaning of the inference result here?

2. Is there any flow/usage issue that prevents me from getting the same result with my own ONNX?

In kenron_onnx.7z, you will find the ONNX files below (a quick opset sanity check is sketched after the list):

  1. LittleNet.onnx (original in the package)
  2. LittleNet_convert.onnx (using Matlab to import LittleNet.onnx and export LittleNet_convert.onnx)
  3. LittleNet_convert_opset9.onnx (using python /workspace/libs/ONNX_Convertor/optimizer_scripts/onnx1_3to1_4.py LittleNet_new.onnx LittleNet_new_opset9.onnx)
  4. LittleNet_convert_opset11.onnx (using python /workspace/libs/ONNX_Convertor/optimizer_scripts/onnx1_4to1_6.py LittleNet_new_opset9.onnx LittleNet_new_opset11.onnx)
  5. LittleNet_convert_opset11_o2o.onnx (using python /workspace/libs/ONNX_Convertor/optimizer_scripts/onnx2onnx.py LittleNet_convert_opset11.onnx -o LittleNet_convert_opset11_o2o.onnx --add-bn -t); this is the one I use for comparison against the original LittleNet.onnx

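For reference, here is a minimal sanity check of the converted files (a sketch, assuming the standard onnx Python package and the file names listed above): it loads each model, runs the ONNX checker, and prints the opset each file actually declares, so you can confirm the 1.3 -> 1.4 -> 1.6 upgrade steps landed where expected.

import onnx

# File names as listed above; adjust the paths if the files live elsewhere.
files = [
    "LittleNet.onnx",
    "LittleNet_convert.onnx",
    "LittleNet_convert_opset9.onnx",
    "LittleNet_convert_opset11.onnx",
    "LittleNet_convert_opset11_o2o.onnx",
]

for path in files:
    model = onnx.load(path)
    onnx.checker.check_model(model)  # raises if the graph is structurally invalid
    opsets = {imp.domain or "ai.onnx": imp.version for imp in model.opset_import}
    print(path, opsets)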

Thanks,

Owen


Comments

  • As shown in the photo below, the output of LittleNet.onnx is an array with dimension 2x1x1, and you will get a different inference result with a different image input.


    If you want to run your own model, please refer to the document center for how to convert and compile your model into Kneron's format.

    And you can get the inference result by using the simulator or by running the model on our device directly.
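    Regarding question 1, here is a minimal sketch of how such a 2x1x1 output can be read, assuming the two values are per-class scores (a hypothetical interpretation; the example itself does not label them):

import numpy as np

out = np.array([[[[-13.68286679, 8.62208044]]]])  # Section 5 result quoted above
scores = out.reshape(-1)                          # flatten the 2x1x1 output to two values
probs = np.exp(scores - scores.max())
probs /= probs.sum()                              # softmax over the two scores
print("predicted class:", int(scores.argmax()), "probabilities:", probs)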

  • Hi,

Sorry, I may have confused you. I was using the same image input on two different ONNX models (one is the original, the other was simply imported/exported by Matlab and expected to be identical) but got different results in the simulator:

    test1: python python_api_workflow.py

test2: python python_api_workflow_matlab.py

These two runs gave different inference results, so I guess something is wrong in the ONNX convert/upgrade flow (please refer to my last post and attachment). Please help comment, thanks.
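For what it's worth, one way to separate a model difference from a simulator-flow difference is to run both ONNX files directly in onnxruntime on one identical input (a sketch, assuming onnxruntime is installed; file names taken from the attachment):

import numpy as np
import onnxruntime as ort

sess_a = ort.InferenceSession("LittleNet.onnx")
sess_b = ort.InferenceSession("LittleNet_convert_opset11_o2o.onnx")

inp = sess_a.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # substitute 1 for any dynamic dim
x = np.random.rand(*shape).astype(np.float32)                # one shared input for both models

out_a = sess_a.run(None, {inp.name: x})[0]
out_b = sess_b.run(None, {sess_b.get_inputs()[0].name: x})[0]
print("max abs diff:", np.abs(out_a - out_b).max())

If the outputs already differ here, the cause is in the model conversion itself rather than in the Kneron flow.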


    Regards,

    Owen

  • Hi,

    I've attached my Python script for reference.


  • The two models LittleNet.onnx and LittleNet_convert.onnx are different. That's why you got different results.

    • The weights of the two models are different.
    • And the op-types of the last layers are different, too.

    Maybe you can check the flow in Matlab to clarify why the model was edited.
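    A rough way to confirm both points with the plain onnx package (a sketch; initializer names may not match one-to-one between the two files):

import numpy as np
import onnx
from onnx import numpy_helper

a = onnx.load("LittleNet.onnx")
b = onnx.load("LittleNet_convert.onnx")

# Weights: compare initializers that share a name, and list the unmatched ones.
wa = {t.name: numpy_helper.to_array(t) for t in a.graph.initializer}
wb = {t.name: numpy_helper.to_array(t) for t in b.graph.initializer}
for name in sorted(set(wa) & set(wb)):
    if wa[name].shape != wb[name].shape or not np.allclose(wa[name], wb[name]):
        print("weights differ:", name)
print("only in LittleNet.onnx:", sorted(set(wa) - set(wb)))
print("only in LittleNet_convert.onnx:", sorted(set(wb) - set(wa)))

# Op types: compare the last few nodes of each graph.
print("last ops:", [n.op_type for n in a.graph.node[-3:]],
      "vs", [n.op_type for n in b.graph.node[-3:]])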

  • Hi Ethon,

    Thanks for pointing that out. After some flow modification, I can now get the same result.


    Owen
