Warnings when running `fpAnalyserCompilerIpevaluator_520.py`

We have a MobileNetV1 model (converted from a TensorFlow frozen graph to TFLite, then to ONNX). When running `fpAnalyserCompilerIpevaluator_520.py` in the toolchain, it prints many `We find some weight which are not reasonable!!!` warnings for different layers; an excerpt follows:

# python /workspace/scripts/fpAnalyserCompilerIpevaluator_520.py -t 8
input = /workspace/.tmp/updater.json
We find some weight which are not reasonable!!!
MobilenetV1/Logits/Conv2d_1c_1x1/BiasAdd_bias
We find some weight which are not reasonable!!!
MobilenetV1/MobilenetV1/Conv2d_13_pointwise/Relu6_weight
We find some weight which are not reasonable!!!
MobilenetV1/MobilenetV1/Conv2d_13_depthwise/Relu6_weight
We find some weight which are not reasonable!!!
MobilenetV1/MobilenetV1/Conv2d_12_pointwise/Relu6_bias
We find some weight which are not reasonable!!!
MobilenetV1/MobilenetV1/Conv2d_12_pointwise/Relu6_weight
...
...

But in the end, it reports success:

Done!
[piano][warning][graph_gen.cc:92][GenerateGraph] Model [/data1/fpAnalyser/char_mobilenet_1.0_128_20180515_opt.quan.wqbi.bie] is BIE, skip optimization config

Is there anything I should worry about for the warnings?

Comments

  • This warning typically appears when a weight's size does not match what is expected from the ONNX node. For example, the conv weight tensor is smaller than what the conv layer needs. Could you please attach the TensorFlow file and the ONNX file (dummy weights are okay; we just need to check the structure)?
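    For concreteness, a standard conv layer expects out_channels × (in_channels / groups) × kH × kW weight elements (OIHW layout); a stored tensor smaller than that would trigger this kind of check. A tiny illustration (my own sketch, not toolchain code):

```python
def expected_conv_weight_count(out_ch, in_ch, kh, kw, groups=1):
    """Number of weight elements a Conv layer expects (OIHW layout)."""
    return out_ch * (in_ch // groups) * kh * kw

# First MobileNetV1 conv: 3x3 kernel, 3 -> 32 channels
print(expected_conv_weight_count(32, 3, 3, 3))               # 864
# Depthwise 3x3 conv on 32 channels (groups == channels)
print(expected_conv_weight_count(32, 32, 3, 3, groups=32))   # 288
```

    If the initializer attached to the conv node holds fewer values than this, the layer cannot be quantized consistently.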

  • The zip file contains:

    output_graph.pb : the original TensorFlow frozen graph

    test_mobilenet_1.0_128.tflite : the TFLite version, converted from the frozen graph with the command:

    tflite_convert \
      --graph_def_file=char_mobilenet_1.0_128_20180515/output_graph.pb \
      --output_format=TFLITE \
      --output_file=char_mobilenet_1.0_128_20180515.tflite \
      --inference_type=QUANTIZED_UINT8 \
      --input_arrays=input \
      --output_arrays=final_result \
      --input_shapes=1,128,128,3 \
      --mean_values=128 --std_dev_values=127 \
      --default_ranges_min=0 --default_ranges_max=255

    test_mobilenet_1.0_128.onnx : the ONNX file, converted with the Kneron toolchain using the command:

    python /workspace/libs/ONNX_Convertor/tflite-onnx/onnx_tflite/tflite2onnx.py \
      -tflite path_of_input_tflite_model \
      -save_path path_of_output_onnx_file \
      -release_mode True

  • edited January 2021

    Hi Phidias,

    Here is what you need to do to resolve the issue.

    1. Follow the instructions in http://doc.kneron.com/docs/#manual_520/#316-onnx-to-onnx-onnx-optimization to run the onnx2onnx optimization after tflite2onnx.py and before fpAnalyserCompilerIpevaluator_520.py. This onnx2onnx pass sets the weights to the right size for fpAnalyserCompilerIpevaluator_520.
    2. We noticed that the TFLite model you provided is a quantized model. Please do not quantize the TensorFlow model during the TensorFlow-to-TFLite conversion; our fpAnalyserCompilerIpevaluator can process the floating-point model directly.

    Please give it a try and let us know whether you get good results or need more support.
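    Put together, the suggested pipeline looks roughly like this. Paths are assumed from the toolchain layout shown earlier in the thread, and the onnx2onnx.py location in particular is an assumption — check your toolchain's ONNX_Convertor folder:

```shell
# 1. TensorFlow -> TFLite, WITHOUT quantization (keep the float model)
tflite_convert --graph_def_file=output_graph.pb \
               --output_format=TFLITE \
               --output_file=model_float.tflite \
               --input_arrays=input --output_arrays=final_result \
               --input_shapes=1,128,128,3

# 2. TFLite -> ONNX (path as used earlier in this thread)
python /workspace/libs/ONNX_Convertor/tflite-onnx/onnx_tflite/tflite2onnx.py \
       -tflite model_float.tflite -save_path model.onnx -release_mode True

# 3. ONNX -> ONNX optimization, per the manual linked above
#    (script path is an assumption; verify against your install)
python /workspace/libs/ONNX_Convertor/optimizer_scripts/onnx2onnx.py model.onnx

# 4. Quantize / compile / evaluate
python /workspace/scripts/fpAnalyserCompilerIpevaluator_520.py -t 8
```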

  • Thanks for the help!

    This time I used the command:

    tflite_convert --graph_def_file=output_graph.pb \
                   --output_format=TFLITE \
                   --output_file=char_mobilenet_1.0_128_20180515_unquant.tflite \
                   --input_arrays=input --output_arrays=final_result \
                   --input_shapes=1,128,128,3
    

    to convert the file, and it compiles without any warnings.

    However, the hardware verification step failed with this message:

    encrypt = true
    output = /data1/simulator/
    Done!
    current node is an NPU INPUT NODE
    current node is a output NODE
    Info: output buffer [60000000, 600000b0) is contained in working buffer.
    CSIM Version: 1edd321
    ---------- start npu ----------
    ---------- dump output node ----------
    done
    radix: -1
    [[-1, 1.8303827047348022]]
    /data1/c_sim/node_0000_final_output.txt
    Traceback (most recent call last):
      File "/workspace/scripts/hardware_validate_520.py", line 25, in <module>
        hardware_validate_520(dynasty_result, csim_result)
      File "/workspace/scripts/utils/hardware_validate_520.py", line 61, in hardware_validate_520
        assert match, "[Error] hardware validating fails!"
    AssertionError: [Error] hardware validating fails!
    

    /data1/c_sim/node_0000_final_output.txt :

    5
    -25
    45
    -26
    -41
    -74
    -40
    40
    11
    12
    92
    

    /data1/c_sim/node_0000_final_output_float.txt :

    5.463338335820248
    -27.316691679101243
    49.17004502238224
    -28.409359346265294
    -44.79937435372604
    -80.85740737013968
    -43.706706686561986
    43.706706686561986
    12.019344338804547
    13.112012005968596
    100.52542537909258
    

    /data1/c_sim/node_0000_final_output_matrix.txt :

    ============= channel#, i=1 =============
    Rectangle[1x1] =
    5,
    
    ============= channel#, i=2 =============
    Rectangle[1x1] =
    -25,
    
    ============= channel#, i=3 =============
    Rectangle[1x1] =
    45,
    
    ============= channel#, i=4 =============
    Rectangle[1x1] =
    -26,
    
    ============= channel#, i=5 =============
    Rectangle[1x1] =
    -41,
    
    ============= channel#, i=6 =============
    Rectangle[1x1] =
    -74,
    
    ============= channel#, i=7 =============
    Rectangle[1x1] =
    -40,
    
    ============= channel#, i=8 =============
    Rectangle[1x1] =
    40,
    
    ============= channel#, i=9 =============
    Rectangle[1x1] =
    11,
    
    ============= channel#, i=10 =============
    Rectangle[1x1] =
    12,
    
    ============= channel#, i=11 =============
    Rectangle[1x1] =
    92,
    
  • Hi Phidias,

    Could you please find "simulator" and "csim" folder, pack them and attach here? We can take a look.

    Thanks,

    Bike

  • This result was generated using the test model uploaded earlier; please have a look. Thank you in advance!

  • Hi Phidias,

    We found and fixed a simulation bug; the fix will be included in the next toolchain release. For now, you can ignore the CSIM output, port to hardware directly, and check performance there.

  • edited January 2021

    Good to hear that, thank you for the support!

  • Thanks for supporting us and helping us improve our toolchain.

The discussion has been closed due to inactivity. To continue with the topic, please feel free to post a new discussion.