Porting YOLOv5

Hello, I'm trying to port YOLOv5s (see attached) and I get the error below when I run python /workspace/scripts/fpAnalyserBatchCompile_520.py -c /data1/batch_input_params.json.

Is YOLOv5 supported by the newest version of the ONNX_converter? If so, how can I port it to the Kneron hardware? Thanks!


Traceback (most recent call last):
  File "/workspace/scripts/fpAnalyserBatchCompile_520.py", line 50, in <module>
    bie_model = run_knerex(batch_model.model_config, threads, 520)
  File "/workspace/scripts/utils/run_knerex.py", line 60, in run_knerex
    subprocess.run(commands, check=True)
  File "/workspace/miniconda/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/workspace/libs/fpAnalyser/updater/run_updater', '-i', '/workspace/.tmp/updater.json']' died with <Signals.SIGABRT: 6>.



Comments

  • Hi Raef Youssef,


    There are several operators in your YOLOv5 that are not supported by the KL520;

    you can check the document (chapter 2.3, supported operators):

    http://doc.kneron.com/docs/#toolchain/manual/


    If you want to port this YOLOv5 to the KL520, you have to deal with the following operators:

    Slice, Sigmoid, and Mul (two inputs).

    The KL520 can't support these operators even if the model passes onnx2onnx.
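
    A quick way to see which of these operators an exported model contains is to list the node types with the ONNX Python API (a minimal sketch; the model path is a placeholder):

        import onnx

        # List the operator types used in the exported graph so unsupported
        # ops (Slice, Sigmoid, two-input Mul) are easy to spot before compiling.
        model = onnx.load("yolov5s.onnx")  # placeholder path
        print(sorted({node.op_type for node in model.graph.node}))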

  • Hey Eric,

    I'm considering using ReLU6 to replace the SiLU used in YOLOv5. Is ReLU6 supported on the KL520? The link here says yes: http://doc.kneron.com/docs/#toolchain/converters/ but http://doc.kneron.com/docs/#toolchain/manual/ doesn't mention ReLU6.

  • Hey Raef Youssef,


    Yes, the KL520 supports ReLU6, via Clip.


    Some operators in different deep learning frameworks have different names even though they have the same behavior.

    ONNX doesn't have a ReLU6 op, so we use Clip(min=0, max=6) to represent ReLU6.

    That's why the first link (http://doc.kneron.com/docs/#toolchain/converters/) uses Clip to fill the ONNX column,

    while the second link (http://doc.kneron.com/docs/#toolchain/manual/) only provides the NPU spec using the ONNX representation.
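
    For example, a PyTorch nn.ReLU6 should export to ONNX as exactly this Clip node (a minimal sketch; the file name and layer sizes are placeholders):

        import torch
        import torch.nn as nn

        # nn.ReLU6 is exported by torch.onnx as Clip(min=0, max=6),
        # which is the representation the KL520 toolchain accepts.
        model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU6())
        torch.onnx.export(model, torch.randn(1, 3, 160, 160),
                          "relu6_demo.onnx", opset_version=11)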

  • Thank you! So I can use ReLU6, but I can't multiply it by x, for example, to get a new activation function: x * ReLU6(x)?

    Also, I noticed that YOLOv5 has preprocessing layers to downsample (see below). How do I use the editor script to remove those layers and the concat layer and simply feed a resized input to the first conv layer?



  • Hi Raef Youssef,


    Yes, you can use ReLU6, but you can't multiply it by a variable x, because the KL520 doesn't support the "Mul" operator.


    As for removing the preprocessing layers:

    the editor script currently may not be able to do that,

    so I recommend using the official ONNX Python API utility function to cut those nodes (onnx.utils.extract_model):

    https://github.com/onnx/onnx/blob/master/docs/PythonAPIOverview.md#extracting-sub-model-with-inputs-outputs-tensor-names
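
    A minimal sketch of that call (the tensor names here are placeholders; inspect your graph, e.g. with Netron, to find the real ones):

        import onnx

        # Extract the sub-model that starts right after the Focus slicing,
        # so the Slice/Concat preprocessing nodes are dropped.
        onnx.utils.extract_model(
            "yolov5s.onnx",                    # original model (placeholder)
            "yolov5s_cut.onnx",                # cut model to compile
            input_names=["focus_concat_out"],  # tensor feeding the first conv
            output_names=["output"],           # final output tensor name(s)
        )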

  • Thanks Eric, I cut the model as described. I now do the downsampling as a preprocessing step. It essentially amounts to performing the following:

    torch.cat([x[...,  ::2,  ::2],
               x[..., 1::2,  ::2],
               x[...,  ::2, 1::2],
               x[..., 1::2, 1::2]], 1)
    

    However, I am faced with the following error when using fpAnalyserBatchCompile_520.py:

    Input file dimension is not correct: expected: 307200, but got: 76800 lines. /workspace/.tmp/167/ILSVRC2012_val_00000665 resized.txt

    How do I include the pre-processing step in the batch compile?


  • Hi Raef,

    I think the issue is caused by the input dimension 1x12x160x160. Kneron's toolchain doesn't support 12-channel input and runs the image preprocessing with 3 channels by default. That's why the error message says it got 76800 lines (3x160x160) but expected 307200 (12x160x160).


  • Thank you Ethon, that makes sense. Is there any way to port YOLOv5s' "Focus" layer? Check out my earlier posts for a description of what it is. I believe you ported YOLOv5 before, so I'm interested to see what steps you took.

  • Hi Raef Youssef,


    the "model_training" document is for the script in "/workspace/ai_training" (folder path in toolchain:v0.14.2).


    And,

    in order to port focus layer to kl520, we use a "conv" instead of those "slice".


    You can check the following classes in "/workspace/ai_training/detection/yolov5/exporting/yolov5/common.py":

    1. class Focus_op9(nn.Module)
    2. class Focus(nn.Module)


    Hope this helps.


    Note: this script is still under development, so please ignore any naming mistakes.
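
    For illustration, here is a minimal sketch of the Conv-for-Slice idea (an illustrative reimplementation, not the exact code in common.py): a 2x2, stride-2 convolution with fixed 0/1 weights reproduces the four-way slice-and-concat.

        import torch
        import torch.nn as nn

        class FocusConv(nn.Module):
            # Space-to-depth as a fixed-weight Conv2d: each output channel
            # copies one of the four 2x2 phase positions of one input channel.
            # (Sketch only; the classes in common.py may differ in detail.)
            def __init__(self, in_channels=3):
                super().__init__()
                c = in_channels
                self.conv = nn.Conv2d(c, 4 * c, kernel_size=2, stride=2, bias=False)
                w = torch.zeros(4 * c, c, 2, 2)
                # (row, col) kernel offsets for the four slice groups, in concat order.
                for g, (kh, kw) in enumerate([(0, 0), (1, 0), (0, 1), (1, 1)]):
                    for ch in range(c):
                        w[g * c + ch, ch, kh, kw] = 1.0
                self.conv.weight = nn.Parameter(w, requires_grad=False)

            def forward(self, x):
                return self.conv(x)

        # Sanity check against the slice-and-concat version of Focus:
        x = torch.randn(1, 3, 160, 160)
        ref = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                         x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1)
        assert torch.allclose(FocusConv(3)(x), ref)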


  • Eric, the convolution layer substitute is a great idea, thank you! I've forwarded it to my colleagues. Meanwhile, I've been tinkering with your ai_training repo, and it seems that you have two configuration setups: yolov5s and yolov5s_noupsample. Both still have the Focus layer. Was that intentional on your end?

  • Hi Raef Youssef,


    Yes, we train yolov5s and yolov5s_noupsample with the Focus layer (slice version).

    We only do the convolution-layer substitution when exporting to ONNX for the KL520.
