Porting Tiny Yolov3 Example

Hello, I have tried to export a Tiny Yolov3 to the KL520, but I have issues in the conversion process. Is it possible to provide an example of converting a Yolov3 architecture? I know you have a Tiny Yolo example, so it must be doable. If possible, I would like to know which model you started with and which steps you followed (node removal, etc.) to port the model to the KL520.

Thanks!

Comments


  • I think this is a bug in "tensorflow2onnx" and "pytorch2onnx".

    When I convert the weights to Keras format (*.h5) first, before converting to ONNX, everything works fine.

    Maybe you can try doing the same.

    # Keras to ONNX
    python /workspace/scripts/convert_model.py keras /docker_mount/yolov3_tiny.h5 /docker_mount/Offical_PBModel_Beta.onnx
    
  • edited February 2021

    You can use my final conversion results as a reference.

    I didn't do any node removal, and all conversion steps succeeded.

    Remark: no matter which converter I use (e.g. pytorch2onnx, tensorflow2onnx), only "keras2onnx" is successful.


  • Thank you for your help.

    I recently had success using pytorch2onnx. I used the model from this repo: https://github.com/jkjung-avt/tensorrt_demos/tree/master/yolo. My understanding is that Kneron uses ONNX 1.4.1 and opset 9. Both are older than most of the models out there, but this repo is compatible. The end result is similar to yours, but the outputs are different sizes because I use a 416x416 input. Have you had any luck compiling the model to run on the KL520? How did you set up your input_params.json?

    Thanks!

  • Sure, I will show you my input_params.json content.

    I'm using 224x224 models because I could not compile successfully at 416 or 608.

    {
        "model_info": {
            "input_onnx_file": "/data1/Offical_Model/Offical_PBModel_Beta.onnx",
            "model_inputs": [
                {
                    "model_input_name": "input_1_o0",
                    "input_image_folder": "/data1/Offical_Model/imgs"
                }
            ]
        },
        "preprocess": {
            "img_preprocess_method": "kneron",
            "img_channel": "RGB",
            "radix": 8,
            "keep_aspect_ratio": true,
            "pad_mode": 1,
            "p_crop": {
                "crop_x": 0,
                "crop_y": 0,
                "crop_w": 0,
                "crop_h": 0
            }
        },
        "simulator_img_files": [
            {
                "model_input_name": "input_1_o0",
                "input_image": "/data1/Offical_Model/imgs/000000000034.jpg"
            }
        ]
    }
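    A quick stdlib-only sanity check for a file like the one above can catch mismatched input names before running the toolchain. This is just a sketch based on the fields shown in this thread, not the toolchain's own validator; the set of required keys is an assumption:

```python
import json

def check_input_params(text):
    """Minimal sanity check for an input_params.json-style config.
    Required keys are taken from the example in this thread; the real
    toolchain may require more fields."""
    cfg = json.loads(text)
    inputs = {i["model_input_name"] for i in cfg["model_info"]["model_inputs"]}
    assert inputs, "model_inputs must not be empty"
    assert "img_preprocess_method" in cfg["preprocess"]
    # every simulator image must refer to a declared model input
    for sim in cfg.get("simulator_img_files", []):
        assert sim["model_input_name"] in inputs, sim["model_input_name"]
    return sorted(inputs)

example = """
{
  "model_info": {
    "input_onnx_file": "model.onnx",
    "model_inputs": [{"model_input_name": "input_1_o0",
                      "input_image_folder": "imgs"}]
  },
  "preprocess": {"img_preprocess_method": "kneron", "radix": 8},
  "simulator_img_files": [{"model_input_name": "input_1_o0",
                           "input_image": "imgs/a.jpg"}]
}
"""
print(check_input_params(example))  # ['input_1_o0']
```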
    
  • Ok, I do have something similar. The only difference is that I specify 'yolo' for preprocessing and therefore use a radix of 7. I'll try your file. BTW, I attached a 416x416 Tiny YOLOv3 model in case you would like to try it; I verified that it converts successfully.
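    For anyone puzzled by radix 7 vs. radix 8: my understanding (an assumption, not from Kneron documentation) is that the radix is the number of fractional bits in the int8 fixed-point input, so 'kneron' preprocessing (pixels scaled to roughly [-0.5, 0.5)) fits radix 8, while 'yolo' preprocessing (pixels in [0, 1]) needs the wider radix-7 range:

```python
# Sketch of int8 fixed-point ranges for a given radix (number of
# fractional bits). The mapping to Kneron's preprocessing modes is
# my own reading of the thread, not an official reference.
def int8_range(radix):
    step = 2.0 ** -radix
    return (-128 * step, 127 * step)   # representable [min, max]

def quantize(x, radix):
    q = round(x * (1 << radix))
    return max(-128, min(127, q))      # clamp to int8

print(int8_range(8))      # (-0.5, 0.49609375): fits 'kneron' preprocessing
print(int8_range(7))      # (-1.0, 0.9921875): fits [0, 1] 'yolo' inputs
print(quantize(0.25, 8))  # 64
```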



  • I still have another problem that needs solving.

    I trained on something like "COCO dataset + my own dataset".

    Although it compiles and runs detection successfully, the results are very strange.

    For example, my dataset is fire and smoke.

    When I input an airplane image, it finds an airplane, fire, and smoke.

  • I am running into an issue when I execute fpAnalyserCompilerIpevaluator_520.py; I get the error below. Attached are the optimized model and the input_params.json. @kidd @Jiyuan Liu Any ideas?


    input = /workspace/.tmp/updater.json

    Traceback (most recent call last):
      File "/workspace/scripts/fpAnalyserCompilerIpevaluator_520.py", line 43, in <module>
        bie_file = run_knerex(model_config, threads, 520)
      File "/workspace/scripts/utils/run_knerex.py", line 59, in run_knerex
        subprocess.run(commands, check=True)
      File "/workspace/miniconda/lib/python3.7/subprocess.py", line 512, in run
        output=stdout, stderr=stderr)
    subprocess.CalledProcessError: Command '['/workspace/libs/fpAnalyser/updater/run_updater', '-i', '/workspace/.tmp/updater.json']' died with <Signals.SIGSEGV: 11>.
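    For what it's worth, "died with <Signals.SIGSEGV: 11>" means the native run_updater binary crashed with a segmentation fault, not that the Python script failed: subprocess reports a child killed by a signal as a negated signal number. A small stdlib demo of the same reporting (POSIX only, unrelated to the toolchain itself):

```python
import signal
import subprocess
import sys

# Spawn a child that kills itself with SIGSEGV, then inspect how
# subprocess reports the death. With check=True this would raise the
# same CalledProcessError shown in the log above.
proc = subprocess.run(
    [sys.executable, "-c",
     "import os, signal; os.kill(os.getpid(), signal.SIGSEGV)"]
)
print(proc.returncode)  # negated signal number, i.e. -11 on Linux
```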


  • Could you give the original model and some images? (the *.weights file)

  • Yeah, sure. Here's the original trained model and a few images that I use for quantization.


  • Hi, I'm quite sure this model converted from PyTorch does not work for me.

    Could you re-train it to generate the weights file?

    I will then convert it to Keras format and do the next steps.

    For more information, please refer to this page: https://github.com/AlexeyAB/darknet

  • I did not train this model myself. I simply used the repo shared earlier to convert pre-trained weights to the ONNX format.

    May I ask, what do you mean by "it does not work for you"?

  • Hi, @Raef Youssef

    I tried multiple scripts to convert the ONNX file, but they always fail to compile after running "batchcompilerfpAnalysis.py".

    So I followed my own successful steps to help you check the model.

    If your weights file came from another website, please share it with me.

  • Yes, sure.

    The .weights and .cfg files were too big to attach, so I attached the script that was used to download both.


  • Hi @Raef Youssef :


    The root cause of this issue is an auto_pad bug in ONNX 1.4.1.

    In onnx2onnx, we use ONNX's built-in shape_inference function to calculate the feature-map shapes between nodes, which the toolchain requires. If a shape is wrong, quantization and compilation go wrong.

    In the ONNX file you provided, we can see many auto_pad attributes in Conv and MaxPool nodes, and some wrong shape information.


    I tried modifying the project you referenced (https://github.com/jkjung-avt/tensorrt_demos/tree/master/yolo):

    the auto_pad in Conv here:

    https://github.com/jkjung-avt/tensorrt_demos/blob/master/yolo/yolo_to_onnx.py#L590

    and the auto_pad in MaxPool here:

    https://github.com/jkjung-avt/tensorrt_demos/blob/master/yolo/yolo_to_onnx.py#L836


    After an experimental modification (I just hard-coded the values), the model compiled successfully.

    The corrected ONNX:

    If you really want to use the model from this project, I recommend modifying its source code to set the correct padding values.
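    The explicit pad values that replace auto_pad can be computed from the SAME_UPPER rule in the ONNX operator spec. This is a stdlib-only sketch of that arithmetic (my reading of the spec, not Kneron's code), covering the layers Tiny YOLOv3 actually uses:

```python
import math

def same_pads(in_size, kernel, stride):
    """Explicit (begin, end) padding equivalent to ONNX auto_pad=SAME_UPPER
    for one spatial dimension: output size is ceil(in_size / stride)."""
    out = math.ceil(in_size / stride)
    total = max((out - 1) * stride + kernel - in_size, 0)
    begin = total // 2
    end = total - begin        # SAME_UPPER puts the extra pixel at the end
    return begin, end

# Tiny YOLOv3's 3x3 stride-1 convolutions and its 2x2 stride-1 MaxPool:
print(same_pads(416, 3, 1))  # (1, 1)
print(same_pads(13, 2, 1))   # (0, 1): the asymmetric pad is easy to get wrong
```

    Hard-coding values like these in yolo_to_onnx.py, as described above, is equivalent to removing the auto_pad attribute.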

The discussion has been closed due to inactivity. To continue with the topic, please feel free to post a new discussion.