I think the problem is that you didn't specify the path of the model you want to run inference on.
Please run “ex_kdp2_generic_inference” with the arguments -p, -d, and -m to specify the path of your NEF.
Also, step 3 of your list serves another purpose; it's not necessary here.
If you got no response after executing “ex_kdp2_tiny_yolo_v3”, some files may be missing from your project. Please post the error message, or re-download the project and try again.
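For reference, the call might look like the sketch below. The flag roles and file names here are my assumptions from this thread, not confirmed against the KDP2 documentation, so check the example's --help output for the real meanings.

```shell
# Hypothetical invocation -- the flag roles (-p device/port id, -d input
# image, -m model path) and the file names are assumptions, not confirmed:
./ex_kdp2_generic_inference -p 0 -d input.bmp -m /path/to/models_520.nef
```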
The discussion has been closed due to inactivity. To continue with the topic, please feel free to post a new discussion.
Comments
It seems the firmware ran into a problem after loading your NEF model.
How many models are in your NEF file? Is there a model with model_id 19 in the NEF file?
And how large is your NEF file?
Hi,
I created the NEF file after converting an ONNX model by following the Toolchain Manual.
I tried changing model_id to the one I used in the Toolchain Manual, but it still doesn't work. Doesn't model_id 19 belong to tiny yolo v3?
The NEF file is 88 KB.
I also tried running ex_kdp2_tiny_yolo_v3 and ex_kdp2_update_firmware, but there was no response; I just waited and nothing happened.
Is there a way to reset the firmware, or another way to fix this?
Thanks for the reply.
Yes, model_id 19 is for Kneron's tiny yolo v3 example. But since the NEF is only 88 KB, I guess the model you generated is not tiny yolo v3.
Was the NEF file generated by the commands "fpAnalyserCompilerIpevaluator_520.py" and "batchCompile_520.py"?
Because there are two types of chips in the toolchain, users should confirm which device they are using.
Are you using a KL520 dongle? Please re-plug the dongle and the firmware will restart.
Here are some recommended steps for you:
Then you should get the output features of your model.
If you still get an error, feel free to provide the command and error message to us.
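As a quick sanity check on the size point, you can print your NEF's size in KB and compare it against the tiny yolo v3 NEF shipped with the toolchain examples. A minimal sketch (the `dummy.nef` created here is just a stand-in for a real NEF path):

```shell
# Print a NEF file's size in KB. 'dummy.nef' is a stand-in -- point this at
# your real NEF and at the example tiny yolo v3 NEF to compare the two.
dd if=/dev/zero of=dummy.nef bs=1024 count=88 2>/dev/null  # fake 88 KB file
SIZE_KB=$(( $(wc -c < dummy.nef) / 1024 ))
echo "${SIZE_KB} KB"
```

If your NEF is far smaller than the example one, it almost certainly contains a different model, which matches the reply above.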
Hi,
Yes, the NEF file was generated by the commands "fpAnalyserCompilerIpevaluator_520.py" and "batchCompile_520.py".
I'm using the KL520 dongle.
I re-plugged the dongle and executed "ex_kdp2_update_firmware", and it worked well.
After that, I tried the third step, but I still get the same answer as at the start: “error load model failed”.
I also tried executing “ex_kdp2_tiny_yolo_v3”, with no response.
To summarize:
Are the steps right, or is a step wrong or missing?
Thanks for your help!