I would like to check the results and the process of normalization for my model.

Specifically, I would like to verify the normalization of a non-YOLO (i.e., non-detection) model.

What can I do?

I'm not sure whether the model is normalized to the extent I want.

Should I simply prepare an input whose values increase sequentially from 0 to 255, and then convert the model and inspect the result?

Also, I can't use Kneron PLUS, so please let me know how to verify this using only the toolchain dockers and the E2E simulator.
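One way to probe this end to end is the sequential-input idea above: build a synthetic ramp image whose pixel values sweep 0 to 255, run it through the same preprocessing the quantization script uses, and check that the resulting range is what the chosen normalization predicts. This is only a sketch; the `ktc.kneron_inference` call mentioned in the comment is an assumption about the toolchain docker's Python API and should be checked against the toolchain docs.

```python
import numpy as np

# Build a synthetic test image whose pixel values sweep 0..255:
# each row repeats a horizontal ramp, so every 8-bit input value
# appears somewhere in the image.
H, W = 640, 640
ramp = np.linspace(0, 255, W)                    # 0 .. 255 across one row
img = np.tile(ramp, (H, 1)).astype(np.uint8)     # (H, W) grayscale ramp
img_rgb = np.stack([img, img, img], axis=-1)     # (H, W, 3) RGB

# Apply the same preprocessing your quantization script uses
# (here: the Kneron-style RGB/256 - 0.5 normalization).
img_data = img_rgb.astype(np.float32) / 256.0 - 0.5

# If normalization behaves as expected, the range is predictable:
print(img_data.min(), img_data.max())   # -0.5 0.49609375

# The preprocessed array could then be fed to the E2E simulator,
# e.g. via a call like (hypothetical usage, check the toolchain docs):
# ktc.kneron_inference([img_data], nef_file="model.nef", ...)
```

If the printed range does not match what the normalization formula predicts, the preprocessing step is the first place to look.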

Comments

  • Hi Hyun,

    When you quantize the onnx model, you would go through a script where you do some preprocessing on the image data:

    In the image above, it says

    img_data = np.array(image.resize((640, 640), Image.BILINEAR)) / 256 - 0.5
    

    And that is where you do the normalization.

    According to the list above, the preprocessing goes through Kneron normalization, because it computes RGB/256 - 0.5.

    So you could check your script to see what normalization you're doing.
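    For comparison, the common normalization variants map an 8-bit pixel to different float ranges. This is a sketch: the formula shown for the TensorFlow-style mode is an assumption and should be confirmed against the Kneron PLUS `kp_normalize_mode_t` documentation.

    ```python
    import numpy as np

    x = np.arange(256, dtype=np.float32)   # all possible 8-bit pixel values

    # Kneron-style normalization (matches the script above): range [-0.5, 0.5)
    kneron = x / 256.0 - 0.5

    # TensorFlow-style normalization (assumed formula for
    # KP_NORMALIZE_TENSOR_FLOW): range [-1, 1]
    tensorflow = x / 127.5 - 1.0

    print(kneron.min(), kneron.max())          # -0.5 0.49609375
    print(tensorflow.min(), tensorflow.max())  # -1.0 1.0
    ```

    Checking which formula your preprocessing script applies tells you which range the model expects.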

  • Hello,

    If I use TensorFlow's normalization, what should the data range be after conversion?

    If my model learned the normalization of TensorFlow and compiled into the nef model, is the data range of the nef model also the same as the normalization data range of TensorFlow?

    If that's correct, I think the normalization range is wrong when I compile to the nef model, and I want to check the normalization of the nef model.

  • Hi Hyun,

    When you say "use TensorFlow's normalization," did you mean the deep learning framework, or KP_NORMALIZE_TENSOR_FLOW in the image in the previous reply? The data range depends on your normalization, and for batch normalization in TensorFlow, there isn't an exact data range, since it depends on its parameters. Usually, the data range after compilation is the data range you chose, such as [0, 1] or [-1, 1].

    By "learned the normalization of TensorFlow and compiled into the nef model," could you elaborate on it? In our toolchain, we don't compile the normalization into nef, so we'd like to know what you mean by that. Did you mean you're training your model on TensorFlow and you're using our toolchain to convert the .onnx file into a .nef model?

    How come you think the normalization is wrong?

  • Hello.

    Yes, KP_NORMALIZE_TENSOR_FLOW is used for compilation, and the data range after compilation is not [0, 1] or [-1, 1].

    If I dump the pre-processed input data, its min/max values are [-8, 3].

    I think that's wrong.

    Q. Did you mean you're training your model on TensorFlow and using the toolchain to convert the .onnx file to the .nef model?

    => Yes.

    Q. Why do you think the normalization is wrong?

    => Please refer to the results from my inference server (onnx) and the toolchain (nef) below.


    This image is a flat wall.

    (input rgb)

    (server's inference result)


    (nef model's inference result)


    The nef model's inference result looks like a staircase.


    So I think the normalization or quantization is wrong.

  • Hi Hyun,

    Replying to your previous question:

    If my model learned the normalization of TensorFlow and compiled into the nef model, is the data range of the nef model also the same as the normalization data range of TensorFlow?

    This depends on your normalization used on your preprocessing. You could change your normalization according to your trained model and application.

    Additional information: for the Kneron NPU structure, it's better to use a normalization that fits the input data into a [-2^x, 2^x]-type range.


    Since nef models go through quantization and a float -> int conversion, the results won't be exactly the same. If you got the [-8, 3] range from an NPU dump, then that is the input after quantization, which would no longer be [-1, 1].
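    As a rough illustration of why the dumped integer values are no longer in [-1, 1]: a fixed-point quantizer scales the float data by a power of two before rounding to integers. This is a generic sketch, not the exact NPU scheme, and the radix value is hypothetical.

    ```python
    import numpy as np

    x = np.linspace(-1.0, 1.0, 9, dtype=np.float32)  # normalized float input

    radix = 2                       # hypothetical fixed-point radix
    q = np.round(x * 2.0**radix)    # float -> integer representation
    print(q.min(), q.max())         # -4.0 4.0: no longer in [-1, 1]

    # Dequantizing recovers (approximately) the original range:
    print((q / 2.0**radix).min(), (q / 2.0**radix).max())   # -1.0 1.0
    ```

    So an integer dump with a range like [-8, 3] can still correspond to a well-normalized float input; to judge the normalization you have to account for the quantization scale.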

    If you'd like to check the normalization, you could take a look at the data in the numpy array here:
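    As a sketch of how one might check the range of such a dumped array, assuming it is saved as a .npy file (the file name and the dummy data below are hypothetical stand-ins):

    ```python
    import numpy as np

    # Hypothetical stand-in for an NPU input dump; in practice, point
    # np.load at the array the toolchain actually dumped for you.
    np.save("input_dump.npy",
            np.random.randint(-8, 4, size=(1, 3, 640, 640)).astype(np.int8))

    dump = np.load("input_dump.npy")
    print("shape:", dump.shape)
    print("min/max:", dump.min(), dump.max())   # e.g. the [-8, 3] observed above
    ```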

The discussion has been closed due to inactivity. To continue with the topic, please feel free to post a new discussion.