AttributeError when converting a model to ONNX using pytorch2onnx.py

I have already tried PyTorch 1.2 / 1.5.1 / 1.8; it still errors.

The following are my command and model file:

python3 pytorch2onnx.py resnet18_baseline_att_224x224_A_epoch_249.pth output.onnx --input-size 3 224 224

https://drive.google.com/file/d/1siRBuMcFC5Fq_lDvaYi0z5Y_tGKF11N9/view?usp=sharing

WARNING:root:Converting from pth to onnx is not recommended.

Traceback (most recent call last):

 File "pytorch2onnx.py", line 62, in <module>

  torch.onnx.export(model, dummy_input, args.out_file)

 File "/home/minggatsby/文件/anaconda3/envs/kl720/lib/python3.6/site-packages/torch/onnx/__init__.py", line 132, in export

  strip_doc_string, dynamic_axes)

 File "/home/minggatsby/文件/anaconda3/envs/kl720/lib/python3.6/site-packages/torch/onnx/utils.py", line 64, in export

  example_outputs=example_outputs, strip_doc_string=strip_doc_string, dynamic_axes=dynamic_axes)

 File "/home/minggatsby/文件/anaconda3/envs/kl720/lib/python3.6/site-packages/torch/onnx/utils.py", line 329, in _export

  _retain_param_name, do_constant_folding)

 File "/home/minggatsby/文件/anaconda3/envs/kl720/lib/python3.6/site-packages/torch/onnx/utils.py", line 213, in _model_to_graph

  graph, torch_out = _trace_and_get_graph_from_model(model, args, training)

 File "/home/minggatsby/文件/anaconda3/envs/kl720/lib/python3.6/site-packages/torch/onnx/utils.py", line 163, in _trace_and_get_graph_from_model

  orig_state_dict_keys = _unique_state_dict(model).keys()

 File "/home/minggatsby/文件/anaconda3/envs/kl720/lib/python3.6/site-packages/torch/jit/__init__.py", line 263, in _unique_state_dict

  state_dict = module.state_dict(keep_vars=True)

AttributeError: 'collections.OrderedDict' object has no attribute 'state_dict'


Comments

  • edited March 2021

    I think the issue here is that your pth contains only the weights, without the network structure. Please double-check how you obtained the pth file. If possible, try using `torch.onnx` to export the ONNX yourself instead of using this pth (a minimal sketch follows). Using a pth file is not recommended, since there are lots of compatibility issues.
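
    For reference, a minimal sketch of that export pattern. `build_model()` is a placeholder for however the original architecture is constructed (it is not part of this toolchain); the point is that the weights must be loaded into a real module before export:

    import torch

    # Placeholder: recreate the exact architecture that produced the checkpoint.
    model = build_model()

    # This .pth is an OrderedDict of weights only, hence the AttributeError
    # when the script treats it as a full model object.
    state_dict = torch.load("resnet18_baseline_att_224x224_A_epoch_249.pth",
                            map_location="cpu")
    model.load_state_dict(state_dict)
    model.eval()

    dummy_input = torch.randn(1, 3, 224, 224)
    torch.onnx.export(model, dummy_input, "output.onnx")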

  • edited March 2021

    I used the ONNX model for the conversion, and this problem appeared.

    pytorch        1.7.1

    onnx           1.4.1

    onnxruntime    1.6.0



    Traceback (most recent call last):

     File "onnx2onnx.py", line 55, in <module>

      m = combo.preprocess(m, args.disable_fuse_bn)

     File "/home/minggatsby/src/ONNX_Convertor-master/optimizer_scripts/tools/combo.py", line 54, in preprocess

      m = onnx.utils.polish_model(model_proto)

     File "/home/minggatsby/文件/anaconda3/envs/kl720/lib/python3.6/site-packages/onnx/utils.py", line 18, in polish_model

      onnx.checker.check_model(model)

     File "/home/minggatsby/文件/anaconda3/envs/kl720/lib/python3.6/site-packages/onnx/checker.py", line 86, in check_model

      C.check_model(model.SerializeToString())

    onnx.onnx_cpp2py_export.checker.ValidationError: Your model ir_version is higher than the checker's.

  • The error message means the model's ONNX IR version is too new for the toolchain flow to handle.

    Please use Netron to open model.onnx and click the input layer to make sure the format is "ONNX v4".
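
    If Netron is not handy, the IR version can also be checked with the onnx package; a minimal sketch:

    import onnx

    m = onnx.load("model.onnx")
    # Netron's "ONNX v4" corresponds to ir_version 4 in the protobuf.
    print("ir_version:", m.ir_version)
    print("opsets:", [(op.domain, op.version) for op in m.opset_import])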


  • The following is my model format.

    I used toolchain version 0.14.0.

    The following is my command and output.


    (base) root@06a381b7a23f:/workspace# python /workspace/scripts/convert_model.py onnx resnet18_baseline_att_224x224_A_epoch_249.onnx output.onnx

    /workspace/miniconda/lib/python3.7/site-packages/numpy/__init__.py:156: UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service

     from . import _distributor_init

    (base) root@06a381b7a23f:/workspace# python /workspace/scripts/convert_model.py onnx resnet18_baseline_att_224x224_A_epoch_249.onnx output.onnx --no-bn-fusion

    /workspace/miniconda/lib/python3.7/site-packages/numpy/__init__.py:156: UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service

     from . import _distributor_init

  • edited May 2021

    Is the above message complete? There should be more output indicating whether you encountered an error or not.

    Or perhaps you could provide your model.onnx to make checking the issue easier.

  • Your forum keeps eating my messages. This has happened many times.

    ================================================================================

    python /workspace/scripts/hardware_validate_720.py


    So does that mean it succeeded?

    How do I write the inference? Can I use ordinary Python files for inference?

    Should I use onnxruntime for inference?

  • edited May 2021

    I referred to the python-api document for inference.

    Is that the correct method?

    If inference works, should I continue on to solve the firmware and hardware problems?

    The following is my problem:

    After inference, how do I get the result?



  • Yes, the python-api is a way to simulate the inference result on KL520/KL720.

    You can simply print the inference result to your screen, as in the following sketch, or save it to a specific file.
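
    A minimal sketch of printing and saving the result, assuming `img_data` is your preprocessed input and `compile_result` is the compiled NEF from the earlier steps in this thread:

    import numpy as np
    import ktc

    # `img_data` and `compile_result` come from the earlier toolchain steps.
    inf_res = ktc.kneron_inference(img_data, nef_file=compile_result,
                                   radix=ktc.get_radix([img_data]))

    # Print each output node to the screen, or save it for later inspection.
    for i, out in enumerate(inf_res):
        arr = np.asarray(out)
        print("node", i, "shape:", arr.shape)
        np.save("node_%d.npy" % i, arr)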


  • Hello, the following is my local pose-estimation code.

    How do I fix the kneron_inference call?

    This is my code, but it errors.



  • Because kneron_inference returns only one output, you would get an error if you use two variables to unpack it.

    The output of kneron_inference is the results of the model's output nodes.

    For example, the output nodes of the model "output.onnx" you provided earlier are as follows. In this case, the output would be 3 nodes: 266 (18x56x56), 268 (18x56x56) and 269 (42x56x56).

    For the final results (class, score, coordinates, etc.), you should feed these 3 nodes' info into a post-processing function. I think that post-processing belongs in the parse_objects() function you posted.

  • Sorry, I don't understand what you mean.

    I can't get the three nodes' output.

    How can I pass variables to parse_objects(cmap, paf)?

  • Right now your code flow is:

    inf_res, paf = ktc.kneron_inference(img_data, nef_file=compile_result, radix=ktc.get_radix([img_data]))

    Please change it to:

    inf_res = ktc.kneron_inference(img_data, nef_file=compile_result, radix=ktc.get_radix([img_data]))

    Then inf_res will hold all 3 nodes' output.

    parse_objects() is a function of your own model; no one but you can know its details. You should study parse_objects() and process inf_res into "cmap" and "paf", as sketched below.
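
    A hedged sketch of that flow; which list index maps to which node (266, 268, 269) is an assumption here, so confirm the order by printing the shapes first:

    import numpy as np
    import ktc

    # Single return value: a list with one entry per output node.
    inf_res = ktc.kneron_inference(img_data, nef_file=compile_result,
                                   radix=ktc.get_radix([img_data]))

    # Confirm which index is which node before relying on the order.
    for i, out in enumerate(inf_res):
        print(i, np.asarray(out).shape)

    cmap = inf_res[0]   # assumed: an 18x56x56 heatmap node
    paf = inf_res[2]    # assumed: the 42x56x56 part-affinity node
    parse_objects(cmap, paf)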


  • Hi, I tried 2 of the 3 nodes' output, but it came out like this.

    Maybe it is a normalization problem? I already tried image / 255 and image / 127.5 - 1; it still fails.

    Or is it a model-conversion failure?

  • edited May 2021

    Hi, I tried that again. I am sure I am already using the correct 2 nodes, but it still fails.

    Maybe it is a normalization problem? Or a model-conversion failure? These are the two normalizations I tried (see the sketch below):

    image / 127.5 - 1

    image / 255
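
    In code, the two attempts look like this (a sketch, assuming `img` is the loaded uint8 image):

    import numpy as np

    img = np.asarray(img, dtype=np.float32)
    norm_a = img / 127.5 - 1.0   # maps pixel values to [-1, 1]
    norm_b = img / 255.0         # maps pixel values to [0, 1]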

    The following is my python file.

    My idea comes from this article:

    https://spyjetson.blogspot.com/2020/08/xavier-nx-nvidia-ai-iot-human-pose.html

  • edited May 2021

    Hello 林韋銘:

    To learn the python api usage, you can check the python api document:

    http://doc.kneron.com/docs/#toolchain/python_api/

    chapter 3. Inferencer

    and the real-world case (YOLO) tutorial:

    http://doc.kneron.com/docs/#toolchain/yolo_example/

    Python API: chapter 3, Inference


    As you can see in the YOLO case, there are 3 inference types:

    1. onnx 2. bie 3. nef

    It seems that you are trying nef inference; could you also check the others?

    If the onnx inference result is correct but the bie inference gives a bad result, you might have a quantization issue.


    Also, you can use "onnxruntime" (an open-source Python library) to get the inference result directly from your onnx, as sketched below.

    You can use that result to check whether your parse_objects() function matches the onnx output.
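
    A minimal onnxruntime sketch for that cross-check, assuming the "output.onnx" and 224x224 input from earlier in this thread; replace the random tensor with your real preprocessed image:

    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("output.onnx")
    input_name = sess.get_inputs()[0].name

    # Stand-in input; substitute your preprocessed 1x3x224x224 image.
    img_data = np.random.rand(1, 3, 224, 224).astype(np.float32)

    # Returns the results of all output nodes, in order.
    outputs = sess.run(None, {input_name: img_data})
    for out in outputs:
        print(out.shape)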

The discussion has been closed due to inactivity. To continue with the topic, please feel free to post a new discussion.