Need to run multiple NEF files in sequence.

I have three separate ONNX graphs that together make up the entire network (backbone.onnx, neck.onnx, head.onnx), and I have converted them into corresponding NEF files.
I want to run the NEFs in sequence (as shown for the ONNX case in the attached file above).
Please give me example inference code (in Python) for running the NEF files on a KL720.


Comments


    Hi Haresh,

    When there are multiple models, you could combine them into one NEF file by using ktc.combine_nef; see 5. Compilation - Document Center (kneron.com)
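
    A minimal sketch of that step, run inside the Kneron toolchain docker (the NEF paths are placeholders, and I'm assuming ktc.combine_nef takes a list of NEF file paths and returns the path of the combined file):

        import ktc

        # NEF files produced from backbone.onnx, neck.onnx and head.onnx
        nef_paths = ['/data1/backbone.nef', '/data1/neck.nef', '/data1/head.nef']

        # Combine them into a single NEF. Each model keeps its own model_id,
        # which is what you select at inference time on the KL720.
        combined_nef_path = ktc.combine_nef(nef_paths)
        print('Combined NEF written to:', combined_nef_path)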

    There is a demo app, "kl720_demo_customize_inf_multiple_models", for multiple models in Kneron PLUS, though it is written in C; you would need to create a customized function and edit the firmware.

    There is a faster and easier way: for Python code, you could refer to KL720DemoGenericImageInference.py and change the model_id inside the generic_inference_input_descriptor each time before you run inference.

    An example flow would look like this (a Python sketch based on it follows the list):

    ...

    -Set the model_id inside generic_inference_input_descriptor to the model you'd like to use

    -Run inference on the image using generic_image_inference_send

    -Receive the raw result with generic_image_inference_receive

    -Process the result so that it fits the input of the next model

    -Set the model_id inside generic_inference_input_descriptor to the next model and set the other parameters (for example, if your second model should run on a specific part of the image based on your first model's result, you could crop the original image and use the crop as the image for GenericInputNodeImage; please refer to KL720DemoGenericImageInferenceCrop)

    -Run inference again using generic_image_inference_send

    -Receive the raw result with generic_image_inference_receive

    ...
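
    Putting the flow together, here is a rough Python sketch based on the KL720DemoGenericImageInference.py example. It assumes the Kneron PLUS v2-style Python API (kp.GenericImageInferenceDescriptor, kp.GenericInputNodeImage, kp.inference.generic_inference_retrieve_float_node); the firmware/NEF/image paths, the model order inside the NEF, and the process_for_next_model helper are all placeholders you would replace with your own:

        import cv2
        import kp

        USB_PORT_ID = 0                       # placeholder: your KL720's USB port ID
        SCPU_FW_PATH = 'res/firmware/KL720/fw_scpu.bin'  # placeholder firmware paths
        NCPU_FW_PATH = 'res/firmware/KL720/fw_ncpu.bin'
        NEF_PATH = 'models/combined_720.nef'  # placeholder: output of ktc.combine_nef
        IMAGE_PATH = 'res/images/test.jpg'    # placeholder test image

        # Connect the KL720, load firmware, then load all models from the combined NEF.
        device_group = kp.core.connect_devices(usb_port_ids=[USB_PORT_ID])
        kp.core.load_firmware_from_file(device_group=device_group,
                                        scpu_fw_path=SCPU_FW_PATH,
                                        ncpu_fw_path=NCPU_FW_PATH)
        model_nef_descriptor = kp.core.load_model_from_file(device_group=device_group,
                                                            file_path=NEF_PATH)

        def run_image_model(model_id, img_bgr565):
            """Send one BGR565 image through the model with the given ID and
            return the raw inference result."""
            descriptor = kp.GenericImageInferenceDescriptor(
                model_id=model_id,
                inference_number=0,
                input_node_image_list=[kp.GenericInputNodeImage(
                    image=img_bgr565,
                    image_format=kp.ImageFormat.KP_IMAGE_FORMAT_RGB565,
                    resize_mode=kp.ResizeMode.KP_RESIZE_ENABLE,
                    padding_mode=kp.PaddingMode.KP_PADDING_CORNER,
                    normalize_mode=kp.NormalizeMode.KP_NORMALIZE_KNERON)])
            kp.inference.generic_image_inference_send(
                device_group=device_group,
                generic_inference_input_descriptor=descriptor)
            return kp.inference.generic_image_inference_receive(device_group=device_group)

        # Read the test image and convert it to BGR565, as the demo does.
        img = cv2.imread(IMAGE_PATH)
        img_bgr565 = cv2.cvtColor(src=img, code=cv2.COLOR_BGR2BGR565)

        # Model 1 (assumed to be the backbone; check model_nef_descriptor.models
        # for the real IDs and order inside your combined NEF).
        raw_result = run_image_model(model_nef_descriptor.models[0].id, img_bgr565)

        # Retrieve the output nodes as float arrays.
        outputs = [kp.inference.generic_inference_retrieve_float_node(
                       node_idx=i,
                       generic_raw_result=raw_result,
                       channels_ordering=kp.ChannelOrdering.KP_CHANNEL_ORDERING_CHW)
                   for i in range(raw_result.header.num_output_node)]

        # Process the result so it fits the next model's input (placeholder helper,
        # e.g. cropping the original image based on the backbone's output).
        next_image = process_for_next_model(outputs, img_bgr565)

        # Model 2, and so on for model 3. If the next model consumes a feature
        # tensor rather than an image, use the data-inference API sketched below.
        raw_result = run_image_model(model_nef_descriptor.models[1].id, next_image)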

    If you are not using an image, you could also take a look at KL720DemoGenericDataInference.
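
    For that case, here is a rough sketch of one stage, assuming the v2-style kp.GenericDataInferenceDescriptor / kp.GenericInputNodeData API from that demo (preparing input_buffer, i.e. quantizing and re-ordering the tensor into the layout the NPU expects, is left out; the demo shows how):

        import kp

        def run_data_model(device_group, model_id, input_buffer):
            """Send a raw input buffer (bytes) through the model with the given ID
            and return the raw inference result."""
            descriptor = kp.GenericDataInferenceDescriptor(
                model_id=model_id,
                inference_number=0,
                input_node_data_list=[kp.GenericInputNodeData(buffer=input_buffer)])
            kp.inference.generic_data_inference_send(
                device_group=device_group,
                generic_inference_input_descriptor=descriptor)
            return kp.inference.generic_data_inference_receive(device_group=device_group)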

    If inference works for your models and you'd like it to run faster, you could consider writing a customized inference flow in C code.

The discussion has been closed due to inactivity. To continue with the topic, please feel free to post a new discussion.