Error when attempting model editing (removing the softmax)
Hello,
I have been following the example steps of "Build new model binary based on MobileNet v2"; however, I get the following error when trying to perform model editing:
===
root@7a5c0ba11c6e:/workspace# python scripts/onnx2onnx.py /workspace/data1/MobileNetV2.h5.onnx -o /workspace/data1/MobileNetV2-nosoftmax.h5.onnx
Traceback (most recent call last):
  File "scripts/onnx2onnx.py", line 43, in <module>
    m = combo.preprocess(m)
  File "/workspace/scripts/tools/combo.py", line 54, in preprocess
    m = optimizer.optimize(m, passes)
  File "/usr/local/lib/python3.5/site-packages/onnx/optimizer.py", line 52, in optimize
    optimized_model_str = C.optimize(model_str, passes)
RuntimeError: /onnx/onnx/optimizer/optimize.h:65: optimize: Assertion `it != passes.end()` failed: pass eliminate_nop_dropout is unknown.
===
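If I read the assertion correctly, combo.preprocess is requesting an optimizer pass ("eliminate_nop_dropout") that the onnx build installed in the container does not know about. A minimal sketch of the mismatch, with illustrative stand-in pass names (the real set would come from something like onnx.optimizer.get_available_passes()):

```python
# Illustrative sketch: the toolchain script requests a fixed pass list, but
# the installed onnx build only supports a subset. Filtering the requested
# passes against the supported set would avoid the assertion.
requested = ["eliminate_nop_dropout", "fuse_bn_into_conv", "eliminate_identity"]
available = {"fuse_bn_into_conv", "eliminate_identity"}  # stand-in for get_available_passes()

safe_passes = [p for p in requested if p in available]
print(safe_passes)  # the unknown pass is dropped
```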
Any help would be appreciated.
Comments
Hi,
If you want to remove the Softmax op, you can refer to section 3.1.7 Model Editor in the Toolchain Manual: http://doc.kneron.com/docs/#toolchain/manual/. Then run onnx2onnx again to check your model after cutting the Softmax layer.
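Conceptually, cutting a trailing Softmax just removes the node and promotes its input to the graph output. A toy sketch of that rewiring on a dict-based graph (illustrative only, not the Model Editor's actual API):

```python
# Toy graph: Gemm -> Softmax, with the Softmax output as the graph output.
graph = {
    "nodes": [
        {"op": "Gemm", "name": "fc", "inputs": ["x"], "outputs": ["logits"]},
        {"op": "Softmax", "name": "prob", "inputs": ["logits"], "outputs": ["y"]},
    ],
    "outputs": ["y"],
}

def cut_trailing_softmax(g):
    """Remove a final Softmax node and expose its input as the graph output."""
    last = g["nodes"][-1]
    if last["op"] == "Softmax" and last["outputs"] == g["outputs"]:
        g["nodes"].pop()
        g["outputs"] = list(last["inputs"])  # "logits" becomes the new output
    return g

cut_trailing_softmax(graph)
print(graph["outputs"])  # -> ['logits']
```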
Hi Ethon,
I have cut the Softmax layer from the MobileNet model. However, when I try to apply the onnx2onnx.py script to the updated model, I still see the error that I had mentioned in the original post:
===
root@7a5c0ba11c6e:/workspace/data1# python /workspace/scripts/onnx2onnx.py -o /workspace/data1/MobileNetV2.h5_nosoftmax_opt.onnx /workspace/data1/MobileNetV2.h5_nosoftmax.onnx
Traceback (most recent call last):
  File "/workspace/scripts/onnx2onnx.py", line 43, in <module>
    m = combo.preprocess(m)
  File "/workspace/scripts/tools/combo.py", line 54, in preprocess
    m = optimizer.optimize(m, passes)
  File "/usr/local/lib/python3.5/site-packages/onnx/optimizer.py", line 52, in optimize
    optimized_model_str = C.optimize(model_str, passes)
RuntimeError: /onnx/onnx/optimizer/optimize.h:65: optimize: Assertion `it != passes.end()` failed: pass eliminate_nop_dropout is unknown.
===
Any ideas how to fix this? Thanks!
Hi Tim,
Could you please provide the ONNX or h5 file for debugging?
Hi Ethon,
Please find both in the tarball attached. Thanks!
Hi @Tim Gilmanov,
Which toolchain version are you using?
The converted ONNX you provided does not seem to match our latest toolchain (v0.14):
yours:
expected:
Could you try the latest toolchain version?
The latest toolchain (v0.14) can successfully convert this model without error.
Here are my steps:
Hi Eric and Ethon,
Thank you for looking into the issue and figuring out the problem.
I was able to get to the point where the model is optimized and the last Softmax layer is cut.
However, I am experiencing issues finishing the tutorial step of compiling the model for the KL520 (6.3. Model Compile Flow (compile to .nef file)). See the details below.
Details about the docker container version:
===
(base) root@88aa0c3181b3:/workspace# more version.txt
kneron/toolchain:v0.14.1
===
The tutorial suggests the following step: Copy the /workspace/examples/batch_compile_input_params.json into /data1 and modify it before batch-compiling MobileNetV2.
However, this file is not available anywhere under the /workspace directory (the find command returns no results):
===
(base) root@88aa0c3181b3:/workspace# find /workspace -name batch_compile_input_params.json
===
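Since the file name appears to differ between toolchain releases (batch_compile_input_params.json vs. batch_input_params.json), a broader search matching both spellings is one way to rule out a simple rename; a small sketch:

```python
# Sketch: walk a directory tree and match any "batch*input_params.json"
# variant, which covers both the old and new spellings of the config name.
import fnmatch
import os

def find_configs(root):
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if fnmatch.fnmatch(name, "batch*input_params.json"):
                hits.append(os.path.join(dirpath, name))
    return hits
```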
I have copied the batch_compile_input_params.json from an older container version and tried to modify it according to the instructions:
===
(base) root@24546c406c46:/workspace/scripts# more /data1/batch_input_params.json
{
"input_image_folder": ["/data1/images"],
"img_channel": ["RGB"],
"model_input_width": [224],
"model_input_height": [224],
"img_preprocess_method": ["tensorflow"],
"input_onnx_file": ["/data1/MobileNetV2_opt.h5.onnx"],
"keep_aspect_ratio": ["False"],
"command_addr": "0x30000000",
"weight_addr": "0x40000000",
"sram_addr": "0x50000000",
"dram_addr": "0x60000000",
"whether_encryption": "No",
"encryption_key": "0x12345678",
"model_id_list": [1000],
"model_version_list": [1],
"add_norm_list": ["False"],
"dedicated_output_buffer": "True"
}
===
I then run the Python script fpAnalyserBatchCompile_520.py, which yields the error below:
===
python fpAnalyserBatchCompile_520.py
/workspace/miniconda/lib/python3.7/site-packages/numpy/__init__.py:156: UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service
from . import _distributor_init
Traceback (most recent call last):
  File "/workspace/scripts/utils/load_config.py", line 141, in __init__
    for raw_config in self.config["models"]:
KeyError: 'models'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "fpAnalyserBatchCompile_520.py", line 43, in <module>
    batch_config = BatchConfig(args.config)
  File "/workspace/scripts/utils/load_config.py", line 146, in __init__
    raise LoadConfigException(filepath, e.args[0])
utils.load_config.LoadConfigException: Error while loading /data1/batch_input_params.json: models is required but not found
===
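The traceback points at the shape of the config rather than its values: the v0.14 loader iterates over a top-level "models" list before anything else, which the flat old-style file does not have. A minimal reproduction of that check (the required-key logic is inferred from the error message, not taken from the actual load_config.py):

```python
import json

# Abbreviated old-style flat config, as copied from the older container.
old_style = json.loads("""
{
  "input_onnx_file": ["/data1/MobileNetV2_opt.h5.onnx"],
  "model_id_list": [1000]
}
""")

# The newer loader evidently reads config["models"] first, so any flat
# config fails immediately with this message.
if "models" not in old_style:
    print("models is required but not found")
```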
Questions:
Thanks!
Hi Tim,
batch_input_params.json
I'm sorry for the wrong description of copying the configuration from a file. I'll update that part.