Is it possible to use YOLOv11 or YOLOv8 on the KL730 board?

Hello, I am trying to convert a YOLOv11 model to NEF, following the instructions in the document center.

The optimized ONNX model of YOLOv11 works fine with ktc.kneron_inference.

The problem is that when I quantize and compile the model to NEF,

the resulting NEF model does not work properly (it outputs 0 probabilities for all classes, or no detections at all),

but so far there have been no errors.


I'm wondering if there is a way to compile the YOLOv11 model safely,

or whether it is unsupported for now.

Comments

  • Hello,

    Regarding the YOLOv11 and YOLOv8 models: if you can modify them to a structure consisting only of supported operators, they will be able to run on the KL730. Please refer to the table to check the supported operator list:

    https://doc.kneron.com/docs/#toolchain/appendix/operators/
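    One way to check a model against that table before compiling is to list the operator types in the optimized ONNX graph and diff them against the supported set. A minimal sketch of that check, assuming a small hypothetical `SUPPORTED_OPS` excerpt (fill it in from the table at the link above; reading the real operator list would use `onnx.load`, shown as a comment):

```python
# Sketch: find operator types in an ONNX graph that are missing from the
# supported-operator table. SUPPORTED_OPS below is a hypothetical excerpt;
# populate it from the table linked above before relying on the result.
SUPPORTED_OPS = {"Conv", "Relu", "Sigmoid", "Softmax", "Concat",
                 "MaxPool", "Add", "Mul", "Reshape", "Transpose"}

def unsupported_ops(op_types):
    """Return the sorted operator types not found in the supported table.

    In practice, op_types would come from the optimized model, e.g.:
        import onnx
        model = onnx.load("yolov11.opt.onnx")   # placeholder file name
        op_types = [node.op_type for node in model.graph.node]
    """
    return sorted(set(op_types) - SUPPORTED_OPS)

# Any operator printed here would need to be removed or replaced
# (for example by moving that step into host-side postprocessing).
ops_in_model = ["Conv", "Sigmoid", "Mul", "Concat", "Resize", "Softmax"]
print(unsupported_ops(ops_in_model))
```

    A common pattern for YOLO-family models is cutting unsupported tail nodes out of the graph and reimplementing that decoding step in the host postprocess function.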


    Regarding the accuracy of the ONNX and NEF models, please run inference on both model types using ktc.kneron_inference() under exactly the same conditions (e.g., input image, preprocess function, postprocess function). Any difference in the inference flow would cause a different output result.
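    As a concrete way to compare the two runs, feed the identical preprocessed tensor to both calls and diff the raw outputs before any postprocessing. A hedged sketch (numpy-only comparison; the actual ktc calls are shown as comments because they need the toolchain environment, and the file names, input name, and keyword arguments in them are placeholders to check against the toolchain docs):

```python
import numpy as np

def max_abs_diff(onnx_out, nef_out):
    """Largest element-wise gap between two output tensors of the same shape."""
    a = np.asarray(onnx_out, dtype=np.float32)
    b = np.asarray(nef_out, dtype=np.float32)
    return float(np.max(np.abs(a - b)))

# In the toolchain environment the two outputs would be produced like:
#   img = preprocess("test.jpg")              # same preprocess for both runs!
#   onnx_out = ktc.kneron_inference([img], onnx_file="yolov11.opt.onnx",
#                                   input_names=["images"])
#   nef_out  = ktc.kneron_inference([img], nef_file="models_730.nef",
#                                   input_names=["images"], platform=730)
#
# A large max_abs_diff on the raw tensors points at quantization loss;
# matching raw tensors but different detections point at the postprocess.
print(max_abs_diff([[0.25, 0.5]], [[0.25, 1.0]]))  # -> 0.5
```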


    To ensure the model's accuracy, it is recommended not to run inference on the board until all verification with the toolchain simulator ktc.kneron_inference() is complete.


  • Thanks for the reply.

    This is the output list of model operators after running optimize.py


    After comparing it with the resource you provided,

    I see that all the operators except 'Constant' are on the list and supported on the KL730. Have I got that right?

    Or are operators like Softmax still unsupported on the KL730?


    If model modification is needed, may I ask how I can modify it? Does Kneron provide APIs for modification,

    or should it be done using other tools?
