Error when attempting model editing (removing the softmax)

Hello,

I have been following the example steps of "Build new model binary based on MobileNet v2"; however, I get the following error when trying to perform model editing:

root@7a5c0ba11c6e:/workspace# python scripts/onnx2onnx.py /workspace/data1/MobileNetV2.h5.onnx -o /workspace/data1/MobileNetV2-nosoftmax.h5.onnx

Traceback (most recent call last):
  File "scripts/onnx2onnx.py", line 43, in <module>
    m = combo.preprocess(m)
  File "/workspace/scripts/tools/combo.py", line 54, in preprocess
    m = optimizer.optimize(m, passes)
  File "/usr/local/lib/python3.5/site-packages/onnx/optimizer.py", line 52, in optimize
    optimized_model_str = C.optimize(model_str, passes)
RuntimeError: /onnx/onnx/optimizer/optimize.h:65: optimize: Assertion `it != passes.end()` failed: pass eliminate_nop_dropout is unknown.
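The assertion means the installed onnx build does not recognize the optimization pass `eliminate_nop_dropout` (it only exists in newer onnx releases). As a sketch of a workaround (not the toolchain's own fix; `filter_passes` is a hypothetical helper), unknown passes can be dropped before calling `optimizer.optimize`:

```python
def filter_passes(requested, available):
    """Keep only the optimizer passes the installed onnx build knows about."""
    unknown = sorted(set(requested) - set(available))
    if unknown:
        print("skipping unknown passes:", unknown)
    return [p for p in requested if p in available]

# `available` would normally come from onnx.optimizer.get_available_passes();
# a stub set is used here so the example is self-contained.
available = {"eliminate_nop_pad", "fuse_bn_into_conv"}
safe = filter_passes(["eliminate_nop_dropout", "fuse_bn_into_conv"], available)
print(safe)  # -> ['fuse_bn_into_conv']
```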


Any help would be appreciated.

Comments

  • Hi,

    If you want to remove the Softmax op, you can refer to section 3.1.7 Model Editor in the Toolchain Manual (http://doc.kneron.com/docs/#toolchain/manual/). After cutting the Softmax layer, run onnx2onnx again to check your model.

  • Hi Ethon,

    I have cut the Softmax layer from the MobileNet model. However, when I try to apply the onnx2onnx.py script to the updated model, I still see the error that I mentioned in the original post:

    root@7a5c0ba11c6e:/workspace/data1# python /workspace/scripts/onnx2onnx.py -o /workspace/data1/MobileNetV2.h5_nosoftmax_opt.onnx /workspace/data1/MobileNetV2.h5_nosoftmax.onnx 

    Traceback (most recent call last):
      File "/workspace/scripts/onnx2onnx.py", line 43, in <module>
        m = combo.preprocess(m)
      File "/workspace/scripts/tools/combo.py", line 54, in preprocess
        m = optimizer.optimize(m, passes)
      File "/usr/local/lib/python3.5/site-packages/onnx/optimizer.py", line 52, in optimize
        optimized_model_str = C.optimize(model_str, passes)
    RuntimeError: /onnx/onnx/optimizer/optimize.h:65: optimize: Assertion `it != passes.end()` failed: pass eliminate_nop_dropout is unknown.


    Any ideas how to fix this? Thanks!

  • Hi Tim,

    Would you please provide the onnx or h5 file for debugging?

  • Hi Ethon,

    Please find both in the tarball attached. Thanks!


  • Hi @Tim Gilmanov ,


    What toolchain version are you using?

    The converted onnx you provided does not seem to match our latest toolchain (v0.14):

    yours: [screenshot]

    expected: [screenshot]

    Could you try the latest toolchain version? The latest toolchain (v0.14) can successfully convert this model without error.

    Here are my steps:

    1. python /workspace/libs/ONNX_Convertor/keras-onnx/generate_onnx.py MobileNetV2.h5 -o mv2.onnx
    2. python /workspace/libs/ONNX_Convertor/optimizer_scripts/onnx2onnx.py mv2.onnx -t


  • Hi Eric and Ethon,

    Thank you for looking into the issue and figuring out the problem.

    I was able to get to the point where the model is optimized and the last Softmax layer is cut.

    I am now experiencing issues finishing the part of the tutorial on compiling the model for the KL520 (6.3. Model Compile Flow (compile to .nef file)). See the details below.

    Details about the docker container version:

    ===

    (base) root@88aa0c3181b3:/workspace# more version.txt 

    kneron/toolchain:v0.14.1

    ===


    The tutorial suggests the following step: Copy the /workspace/examples/batch_compile_input_params.json into /data1 and modify it before batch-compiling MobileNetV2.

    However, this file is not available anywhere under the /workspace directory (the find command returns no results):

    ===

    (base) root@88aa0c3181b3:/workspace# find /workspace -name batch_compile_input_params.json

    ===


    I have copied the batch_compile_input_params.json from an older container version and tried to modify it according to the instructions:

    ===

    (base) root@24546c406c46:/workspace/scripts# more /data1/batch_input_params.json 

    {
      "input_image_folder": ["/data1/images"],
      "img_channel": ["RGB"],
      "model_input_width": [224],
      "model_input_height": [224],
      "img_preprocess_method": ["tensorflow"],
      "input_onnx_file": ["/data1/MobileNetV2_opt.h5.onnx"],
      "keep_aspect_ratio": ["False"],
      "command_addr": "0x30000000",
      "weight_addr": "0x40000000",
      "sram_addr": "0x50000000",
      "dram_addr": "0x60000000",
      "whether_encryption": "No",
      "encryption_key": "0x12345678",
      "model_id_list": [1000],
      "model_version_list": [1],
      "add_norm_list": ["False"],
      "dedicated_output_buffer": "True"
    }

    ===


    I then run the Python script fpAnalyserBatchCompile_520.py, which yields the error below:

    ===

    python fpAnalyserBatchCompile_520.py 

    /workspace/miniconda/lib/python3.7/site-packages/numpy/__init__.py:156: UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service
      from . import _distributor_init
    Traceback (most recent call last):
      File "/workspace/scripts/utils/load_config.py", line 141, in __init__
        for raw_config in self.config["models"]:
    KeyError: 'models'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "fpAnalyserBatchCompile_520.py", line 43, in <module>
        batch_config = BatchConfig(args.config)
      File "/workspace/scripts/utils/load_config.py", line 146, in __init__
        raise LoadConfigException(filepath, e.args[0])
    utils.load_config.LoadConfigException: Error while loading /data1/batch_input_params.json: models is required but not found

    ===


    Questions:

    1. It looks like the tutorial is a little outdated, as the latest toolchain docker image is missing both the batch_input_params.json file and the fpAnalyserBatchCompiler.sh.x shell script. Am I using the python script correctly in place of the shell script?
    2. If the answer is yes, can you help me address the "models is required but not found" error? If not, could you please advise how to proceed in order to compile the binary for the KL520?

    Thanks!

  • Hi Tim,

    1. The fpAnalyserBatchCompiler.sh.x shell script has not been available since v0.10.0, and I don't think you can find it in the current manual. You can find this information in the list of major changes in section 0.
    2. The config you copied seems too old (it appears to predate v0.9.0), so it is not compatible with the current docker.
    3. Per the latest document, you just need to copy the code in section 3.6.1 to create your own batch_input_params.json. I'm sorry for the incorrect description about copying the configuration from a file; I'll update that part.
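    Judging from the traceback above (load_config.py line 141 iterates self.config["models"]), the newer loader expects the per-model settings to be wrapped in a top-level "models" list. The following is a rough illustration only; the exact per-model field names must be taken from section 3.6.1 of the manual, not from this sketch:

```python
import json

# A couple of old-style flat settings, as in the config quoted earlier
# in the thread (shown only to illustrate the wrapping).
per_model = {
    "input_onnx_file": ["/data1/MobileNetV2_opt.h5.onnx"],
    "model_id_list": [1000],
}

# The newer loader iterates config["models"], so the per-model settings
# must sit inside a top-level "models" list; the keys of each entry are
# defined in section 3.6.1 of the manual.
config = {"models": [per_model]}
print(json.dumps(config, indent=2))
```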
The discussion has been closed due to inactivity. To continue with the topic, please feel free to post a new discussion.