fpAnalyserBatchCompile_520 raises an error

I have gone through the compilation process described in the documentation:

[1] pytorch2onnx

python /workspace/libs/ONNX_Convertor/optimizer_scripts/pytorch2onnx.py torch_model_efficientnet-b7.onnx torch_model_efficientnet-b7.pth.onnx


[2] onnx2onnx

python /workspace/libs/ONNX_Convertor/optimizer_scripts/onnx2onnx.py torch_model_efficientnet-b7.pth.onnx -o torch_model_efficientnet-b7.opt.onnx --add-bn -t

(I removed the --split-convtranspose argument because it raised: onnx2onnx.py: error: unrecognized arguments: --split-convtranspose)


[3] Remove softmax/transpose with editor.py

python /workspace/libs/ONNX_Convertor/optimizer_scripts/editor.py torch_model_efficientnet-b7.opt.onnx torch_model_efficientnet-b7_rmv.opt.onnx --cut-type Transpose


[4] Write input_params.json and batch_input_params.json

input_params.json:

{

   "model_info": {

       "input_onnx_file": "/data1/torch_model_efficientnet-b7_rmv.opt.onnx",

       "model_inputs": [

           {

               "model_input_name": "input.1",

               "input_image_folder": "/data1/ILSVRC2012_val"

           }

       ]

   },

   "preprocess": {

       "img_preprocess_method": "customized",

       "img_channel": "RGB",

       "radix": 8,

       "keep_aspect_ratio": false,

       "pad_mode": 1,

       "p_crop": {

           "crop_x": 0,

           "crop_y": 0,

           "crop_w": 0,

           "crop_h": 0

       }

   },

   "simulator_img_files": [

       {

           "model_input_name": "input.1",

           "input_image": "/data1/ILSVRC2012_val/ILSVRC2012_val_00000001.JPEG"

       }

   ]

}


batch_input_params.json:

{

   "encryption": {

       "whether_encryption": false,

       "encryption mode": 1,

       "encryption_key": "0x12345678",    

       "key_file": "",                    

       "encryption_efuse_key": "0x12345678"

   },

   "models": [

       {

           "id": 1000,

           "version": "1",

           "path": "/data1/torch_model_efficientnet-b7_rmv.opt.onnx",      

           "input_params": "/data1/input_params.json"

       }

   ]

}


[5] python /workspace/scripts/fpAnalyserBatchCompile_520.py -t 8

I get this error message:

[2021-05-17 06:25:46] [error] [Thread: 21631] [/projects_src/kneron_piano/dynasty/release/include/ONNXVectorIO.h:93]

Input file dimension is not correct: expected: 2408448, but got: 150528 lines. /workspace/.tmp/input.1/ILSVRC2012_val_00000006.txt

Traceback (most recent call last):

 File "/workspace/scripts/fpAnalyserBatchCompile_520.py", line 50, in <module>

   bie_model = run_knerex(batch_model.model_config, threads, 520)

 File "/workspace/scripts/utils/run_knerex.py", line 60, in run_knerex

   subprocess.run(commands, check=True)

 File "/workspace/miniconda/lib/python3.7/subprocess.py", line 512, in run

   output=stdout, stderr=stderr)

subprocess.CalledProcessError: Command '['/workspace/libs/fpAnalyser/updater/run_updater', '-i', '/workspace/.tmp/updater.json']' returned non-zero exit status 1.

I couldn't figure out why this error is raised. Has anyone met a similar problem, or does anyone have advice on what I missed or did wrong?

Comments

  • I think the error was caused by a preprocessing error. Did you revise the customized part in img_preprocess.py?

    "img_preprocess_method": "customized", <= this setting means you want to customize your own preprocessing.

    Please refer to item 8 in the FAQ, "How to use customized methods for image preprocess?"

    http://doc.kneron.com/docs/#toolchain/manual/#faq

  • @Ethon Lin, Thanks for your quick reply.

    That is, I wanted to try whether the customized option could change the input dimension. I used the 'pytorch' option before, but I still got the same error and still haven't found the reason.

  • edited May 2021

    This is my customized part

       if mode == 'customized':

           x /= 224.  # Changing this line didn't make any difference...

           mean = [0.485, 0.456, 0.406]

           std = [0.229, 0.224, 0.225]

           x[..., 0] -= mean[0]

           x[..., 1] -= mean[1]

           x[..., 2] -= mean[2]

           if std is not None:

               x[..., 0] /= std[0]

               x[..., 1] /= std[1]

               x[..., 2] /= std[2]

           return x


    Still get the same error:

    [2021-05-18 07:53:06] [error] [Thread: 21801] [/projects_src/kneron_piano/dynasty/release/include/ONNXVectorIO.h:93]

    Input file dimension is not correct: expected: 2408448, but got: 150528 lines. /workspace/.tmp/input.1/ILSVRC2012_val_00000003.txt

    Traceback (most recent call last):

     File "/workspace/scripts/fpAnalyserBatchCompile_520.py", line 50, in <module>

       bie_model = run_knerex(batch_model.model_config, threads, 520)

     File "/workspace/scripts/utils/run_knerex.py", line 60, in run_knerex

       subprocess.run(commands, check=True)

     File "/workspace/miniconda/lib/python3.7/subprocess.py", line 512, in run

       output=stdout, stderr=stderr)

    subprocess.CalledProcessError: Command '['/workspace/libs/fpAnalyser/updater/run_updater', '-i', '/workspace/.tmp/updater.json']' returned non-zero exit status 1.
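
  • As an aside on the customized block above: the usual ImageNet preprocessing scales pixel values by 255, not 224. A vectorized numpy sketch for comparison (this won't fix the dimension error, which is raised before preprocessing even runs):

    ```python
    import numpy as np

    # Standard ImageNet statistics (same constants as in the snippet above).
    IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

    def preprocess(x):
        """Normalize an HWC image with values in [0, 255] the standard ImageNet way."""
        x = np.asarray(x, dtype=np.float32) / 255.0  # scale by 255, not 224
        return (x - IMAGENET_MEAN) / IMAGENET_STD

    print(preprocess(np.full((2, 2, 3), 255.0))[0, 0])
    ```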

  • Have you checked the image you set in the json? It seems there is an issue with the model input.

    Input file dimension is not correct: expected: 2408448, but got: 150528 lines. /workspace/.tmp/input.1/ILSVRC2012_val_00000003.txt


  • I downloaded the image dataset from the ImageNet official site, and I have checked that the image directory is correct and the image dimensions are either 500*xxx or xxx*500. I actually want to resize the images to 3*224*224 in img_preprocess.py, but it didn't even reach the preprocessing part before raising the error.

    I also checked /workspace/scripts/fpAnalyserBatchCompile_520.py but couldn't find which part checks the input dimension. I'm wondering which piece of code generates /workspace/.tmp/input.1/ILSVRC2012_val_00000003.txt?
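
    I can't say which toolchain script writes those .txt files, but the expected element count comes from the ONNX model's declared input shape, which you can inspect before compiling. A sketch using the onnx package (here a tiny hypothetical Identity graph stands in for the model; with your file you would just call onnx.load("torch_model_efficientnet-b7_rmv.opt.onnx") instead of building one):

    ```python
    import onnx
    from onnx import TensorProto, helper

    # Tiny stand-in graph whose input is 16x3x224x224, like the exported model.
    inp = helper.make_tensor_value_info("input.1", TensorProto.FLOAT, [16, 3, 224, 224])
    out = helper.make_tensor_value_info("out", TensorProto.FLOAT, [16, 3, 224, 224])
    graph = helper.make_graph([helper.make_node("Identity", ["input.1"], ["out"])],
                              "shape_check", [inp], [out])
    model = helper.make_model(graph)

    # The declared input shape is what the compiler will expect.
    for i in model.graph.input:
        dims = [d.dim_value for d in i.type.tensor_type.shape.dim]
        total = 1
        for d in dims:
            total *= d
        print(i.name, dims, "elements:", total)  # matches "expected: 2408448" in the log
    ```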

  • The EfficientNet model seems too big to upload, and its structure looked quite weird after I removed the transpose parts, so I switched to MobileNetV2 and went through all the steps again, but I still get the same error.

    My attached zip folder contains the image dataset (just the first 10 pictures from ILSVRC2012 for the test), the MobileNetV2 ONNX file exported by torch.onnx, and the img_preprocess.py that I modified.


  • Hi Angie,

    I checked your model and found that the issue is caused by the input dimension 16x3x224x224.

    We treat the first dimension, 16, as the batch size, and the KL520 / KL720 only support models whose batch size is 1 (e.g. 1x3x224x224). That's why you encountered the error during preprocessing: the model input is invalid on our toolchain.
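
    The numbers in the error message line up with this diagnosis: the expected count is the full 16x3x224x224 input, while each dumped image file holds only one 3x224x224 image. A quick sanity check in plain Python:

    ```python
    # Element counts from the error message "expected: 2408448, but got: 150528".
    expected = 16 * 3 * 224 * 224   # the model's declared input, batch size 16
    got = 3 * 224 * 224             # one preprocessed image

    print(expected)         # 2408448
    print(got)              # 150528
    print(expected // got)  # 16 -> the mismatch is exactly the batch dimension
    ```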

  • @Ethon Lin, thank you for your reply.

    After changing the batch size to 1, I could compile the bin files successfully.

    Thank you so much!
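
  • For readers who hit the same error: the ONNX batch size is fixed at export time by the shape of the dummy input passed to torch.onnx.export. A minimal re-export sketch (assuming torchvision's mobilenet_v2 as a stand-in; substitute your own model and file name):

    ```python
    import torch
    import torchvision

    # Any nn.Module works here; mobilenet_v2 is just an example model.
    model = torchvision.models.mobilenet_v2(weights=None).eval()

    # The first dimension of the dummy input becomes the exported batch size;
    # KL520/KL720 only accept batch size 1, so use (1, 3, 224, 224), not (16, 3, 224, 224).
    dummy = torch.randn(1, 3, 224, 224)
    torch.onnx.export(model, dummy, "mobilenet_v2_b1.onnx", opset_version=11)
    ```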
