Problems converting ONNX to NEF for the KL720 inside Docker

Objective: run a NEF file built from my custom YOLO model on the KL720.

-----------------------------------------------------------------

1. PFA the ONNX file for my model: <kneron_720/yolo.onnx>

2. NEF conversion script: <kneron_720/nef_conversion.py>

3. ONNX-to-NEF conversion failed: <kneron_720/error_log_screenshot> <error_log.txt>

4. I am facing errors in the evaluation step as well as in BIE conversion.

5. Nodes in the ONNX model: <kneron_720/nodes_in_onnx.png>

Observation: all of the nodes are compatible with the KL720.

Please guide me on how to proceed; I need to run the model on a KL720 device.

PFA the ONNX inferencing code <Preprocess_inference_postprocess_for_multiple_images.py> (pre- and post-processing operations).

All the details are available in the zip file: <kneron_720.zip>

Comments

  • Hi Haresh,

    Sorry for the late reply. Just in case, please check that you have gone through the following steps:

    1. Use onnx-simplifier to simplify your original ONNX model (note: the original, not the optimized one)
    2. Run onnx2onnx
    3. Since there are nodes that aren't supported by the KL720 (such as ReduceMax, Gather, Exp, etc.), you could use the model editor (editor.py) to cut off all the nodes after Transpose (including Transpose itself), then reproduce the cut-off operations in your post-process function. Alternatively, you could find a way to replace these nodes with operators that the KL720 does support.
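    The cutting step above is normally done with editor.py from the toolchain, but the same idea can be sketched with generic ONNX graph surgery via `onnx.utils.extract_model`. The tiny Relu→Transpose model and all tensor names below are made up for illustration; in practice you would load your real yolo.onnx and pass the name of the tensor just before Transpose as the new output:

    ```python
    import onnx
    from onnx import TensorProto, helper
    from onnx.utils import extract_model

    # Build a tiny stand-in model (Relu -> Transpose). In practice you would
    # load your real model with onnx.load("yolo.onnx"); names are illustrative.
    x = helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 3, 4, 4])
    mid = helper.make_tensor_value_info("relu_out", TensorProto.FLOAT, [1, 3, 4, 4])
    y = helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 4, 4, 3])
    graph = helper.make_graph(
        [
            helper.make_node("Relu", ["x"], ["relu_out"]),
            helper.make_node("Transpose", ["relu_out"], ["y"], perm=[0, 2, 3, 1]),
        ],
        "demo", [x], [y], value_info=[mid],
    )
    onnx.save(helper.make_model(graph), "full.onnx")

    # Cut everything from Transpose onward by re-declaring the intermediate
    # tensor "relu_out" as the graph output.
    extract_model("full.onnx", "cut.onnx", ["x"], ["relu_out"])

    cut = onnx.load("cut.onnx")
    print([n.op_type for n in cut.graph.node])  # Transpose is gone
    ```

    Whatever you cut here (Transpose, Reshape, etc.) then has to be replicated on the host in your post-process function.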

    I see that you have already cut the unsupported nodes off, but you could also move Transpose and Reshape into your post-process function, which gives the remaining graph a much better chance of being accelerated.
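    Moving Transpose and Reshape into the post-process is cheap on the host side. A minimal NumPy sketch, where the (1, 255, H, W) COCO-style YOLO head shape and the anchor/attribute counts are assumptions about the model, not something confirmed in this thread:

    ```python
    import numpy as np

    def postprocess_head(raw, num_anchors=3, num_attrs=85):
        """Replicate the cut-off Transpose + Reshape nodes on the host.

        raw: NPU output in NCHW layout, e.g. (1, 255, H, W) for a COCO YOLO
        head (255 = 3 anchors * 85 attributes). Returns an array shaped
        (1, H*W*num_anchors, num_attrs), ready for box decoding.
        """
        n, c, h, w = raw.shape
        # The Transpose node that was cut from the graph: NCHW -> NHWC
        nhwc = raw.transpose(0, 2, 3, 1)
        # The Reshape node that was cut: one row per predicted box
        return nhwc.reshape(n, h * w * num_anchors, num_attrs)

    out = postprocess_head(np.zeros((1, 255, 13, 13), dtype=np.float32))
    print(out.shape)  # (1, 507, 85)
    ```

    The decode/NMS code that follows in the existing post-process script would then consume this array unchanged.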

  • Hi Haresh,

    Our model team checked your model, and we recommend trying to cut the Reshape and Concat operators and move them to post-processing, even though they are supported by the KL720. If that doesn't work, please cut the Transpose operator as well.
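    If Concat is cut as well, the device returns one tensor per detection head and the joining happens on the host. A NumPy sketch of that step; the three per-scale shapes below are assumptions (typical for a 416×416 three-scale YOLO), not taken from the attached model:

    ```python
    import numpy as np

    def merge_heads(head_outputs):
        """Replicate the cut Concat node: join per-head predictions.

        head_outputs: list of arrays shaped (1, boxes_i, attrs), one per scale,
        already transposed/reshaped on the host.
        """
        return np.concatenate(head_outputs, axis=1)

    heads = [
        np.zeros((1, 507, 85)),   # 13x13 grid * 3 anchors
        np.zeros((1, 2028, 85)),  # 26x26 grid * 3 anchors
        np.zeros((1, 8112, 85)),  # 52x52 grid * 3 anchors
    ]
    print(merge_heads(heads).shape)  # (1, 10647, 85)
    ```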


  • Hi,

    1. PFA the simplified ONNX: <kl720_onnx model/simplify.onnx>. I used this web link to simplify the ONNX: <https://convertmodel.com/#input=onnx&output=onnx>

    2. I then used <kl720_onnx model/onnx2onnx.py> to convert the ONNX. PFA the onnx2onnx model: <kl720_onnx model/onnx2onnx.onnx>

    3. After this I tried to convert the resulting ONNX to a NEF file but got the same error; the error screenshot is available at <kl720_onnx model/Error screenshot.png>

    Currently I want to run this ONNX model as-is (it does not have any unsupported nodes). Please let me know where I am going wrong.

    I will take up the optimization (cutting the graph and writing the post-processing) in my next step.

    Thanks,


  • Hi Haresh,

    Thank you for explaining how you processed your model. We've checked the onnx2onnx.onnx file, and the operators mentioned above are still there. As we recommended earlier, even though Reshape, Concat, and Transpose are supported by the KL720, it is better to cut them and write the post-process separately to avoid the "Quantization model generation failed" error, or to replace them with other supported operators.
