error when running fpAnalyserCompilerIpevaluator_720.py
Hi, I was trying to run the script on my own ONNX model, which was optimized via onnx2onnx.py (but not pytorch2onnx.py, due to some errors). The following error shows up:
```
input = /workspace/.tmp/updater.json
Traceback (most recent call last):
  File "/workspace/scripts/fpAnalyserCompilerIpevaluator_720.py", line 43, in <module>
    bie_file = run_knerex(model_config, threads, 720)
  File "/workspace/scripts/utils/run_knerex.py", line 49, in run_knerex
    subprocess.run([LIBS_FOLDER + '/fpAnalyser/updater/run_updater', '-i', TMP_FOLDER + '/updater.json'], check=True)
  File "/workspace/miniconda/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/workspace/libs/fpAnalyser/updater/run_updater', '-i', '/workspace/.tmp/updater.json']' died with <Signals.SIGSEGV: 11>.
```
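For context, here is a minimal sketch (not the Kneron toolchain itself) of how this error class arises: `subprocess.run(..., check=True)` raises `CalledProcessError` when the child process is killed by a signal, and the signal shows up as a negative return code, which is exactly what `died with <Signals.SIGSEGV: 11>` reports above.

```python
import subprocess
import sys

# A child process that kills itself with SIGSEGV, standing in for the
# crashing run_updater binary. This is only an illustration of the
# error-reporting mechanism, not of the toolchain.
try:
    subprocess.run(
        [sys.executable, "-c",
         "import os, signal; os.kill(os.getpid(), signal.SIGSEGV)"],
        check=True,
    )
except subprocess.CalledProcessError as exc:
    # Death by signal is reported as a negative returncode: -11 for SIGSEGV.
    print("child died with signal", -exc.returncode)
```

So the segfault happens inside the `run_updater` binary itself; the Python traceback only relays it.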
please let me know how to solve this, thanks.
The discussion has been closed due to inactivity. To continue with the topic, please feel free to post a new discussion.
Comments
Hi Max,
Could you please share the generated onnx with us? We just need to check the onnx structure, so you can set the weights to dummy values. If this is a PyTorch model, please also share the pth file; dummy weights are fine.
Hi
sure,
thanks again for the reply
Hi Max,
Is this the onnx model from before onnx2onnx.py? I think we need to check the pytorch2onnx.py issue first, as that is a necessary step.
OK, thanks for the reply. I will try to solve the pytorch2onnx issue first.
Hi Max,
Would you share the model before pytorch2onnx.py? I could help you locate and solve the issue.
The model you shared is from after onnx2onnx.py, and it already contains the wrongly generated dummy BatchNormalization node.
Thank you.
@Jiyuan Liu Hi, I also have the same issue. Could you help me check my model before I run onnx2onnx.py?
I think Max's issue is solved in this thread: https://www.kneron.com/forum/discussion/41/runtimeerror-when-converting-onnx-model-using-pytorch2onnx-py#latest
This was caused by a bug in the converter, which we have since updated. You can get the new converter by running `git pull` under /workspace/libs/ONNX_Converter.
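The update step amounts to running `git pull` inside the converter checkout. A minimal Python sketch of the same thing, in case you want to script it (the directory path is the one from this thread; adjust it to your workspace):

```python
import subprocess

def update_repo(repo_dir: str) -> str:
    """Run `git pull` in the given checkout and return git's output."""
    result = subprocess.run(
        ["git", "-C", repo_dir, "pull"],
        capture_output=True, text=True,
    )
    return result.stdout + result.stderr

# In the toolchain image the converter lives here (per this thread):
# update_repo("/workspace/libs/ONNX_Converter")
```

Running `git pull` directly in that directory is equivalent.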
I tried to do this, but it didn't work.
My model is an ONNX model generated from a weights file.
I then ran onnx2onnx.py to convert the model.
This is my original model, as generated from the weights file.
This is my output after running fpAnalyserCompilerIpevaluator_520.py.
Okay, I'll check it.