How to save the streamed camera image on the KL630 board?

As shown in the attached video, the inference keeps returning strange values, so I would like to check whether the input is being fed correctly.


The 3D CNN receives 128x128x3 image input, stacked into an array of 15 frames.

The deep learning model outputs one of four classification results: 0, 1, 2, or 3.
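The expected input/output contract can be sketched as follows (the shapes and the four-class mapping come from the description above; the helper names are hypothetical, not part of any KL630 API):

```python
# Sketch of the 3D CNN's assumed I/O contract: 15 RGB frames of 128x128 in,
# 4 class scores out. Names here are illustrative only.
NUM_FRAMES, CHANNELS, HEIGHT, WIDTH = 15, 3, 128, 128

def input_element_count():
    """Total number of values the model expects per inference."""
    return NUM_FRAMES * CHANNELS * HEIGHT * WIDTH

def predict_class(scores):
    """Map the model's 4 output scores to a class id (0, 1, 2, or 3)."""
    assert len(scores) == 4
    return max(range(len(scores)), key=lambda i: scores[i])
```

For example, `predict_class([0.1, 0.7, 0.1, 0.1])` returns class 1.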


Is there a way to save the camera input to a video file on the KL630 so that I can check it?


What I suspect is that 15 input images are being fed at once.

I would like to ask you to check whether the input to the AI model is correct.

Comments

  • I have also attached the log. Please help me.

  • Hi,

    It might be hard to figure out the issue from the capture log alone. Could you provide more files, such as the KL630 code, the nef model, and the original model before conversion to nef, so that we can check the model input? Thank you.

  • Hi,

    Thank you for providing the files.

    Your .nef model input does look okay, with a shape of (1, 3, 128, 128).

    As you suspected, num_image is set to 15, meaning that there are 15 images in your input. You might want to set it to 1.

    When you say "save to video," do you mean to export what the camera is showing into a video file? It should be doable.

    If you mean running inference on videos on the KL630, yes, it's doable, but you will need to infer the images one by one.

    Before using your model directly with host_stream, you might want to test whether your model can actually run in companion mode.

    When doing model porting, please follow the steps:

    1. Verify the E2E simulator results for NEF in Kneron toolchain
    2. Confirm the results for KL630 companion mode in Kneron PLUS (python code)
    3. Confirm the results for KL630 companion mode in Kneron PLUS (C code)
    4. Move the C code to KL630
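At each step, it helps to compare the new stage's output against the previous stage's. Because of quantization, the NEF results will not match the floating-point model bit-for-bit, so a tolerance-based comparison is more useful than exact equality. A minimal sketch (this helper is hypothetical, not part of Kneron PLUS):

```python
import math

def outputs_match(reference, candidate, rel_tol=1e-2, abs_tol=1e-3):
    """Compare two flat lists of output scores, e.g. E2E-simulator results
    vs. KL630 companion-mode results, within a tolerance chosen to absorb
    quantization error."""
    if len(reference) != len(candidate):
        return False
    return all(math.isclose(r, c, rel_tol=rel_tol, abs_tol=abs_tol)
               for r, c in zip(reference, candidate))
```

The tolerances here are placeholders; how much quantization error is acceptable depends on your model.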


  • Thank you, Maria.

    The model I am trying to run on the KL630 is meant to infer from video input consisting of 15 images of shape (1, 3, 128, 128).


    This model is similar to this one.

    https://www.tensorflow.org/tutorials/video/video_classification


    You commented:

    If you mean running inference on videos on the KL630, yes, it's doable, but you will need to infer the images one by one


    => I don’t think this part fits what I’m trying to do.


    If I change it to just

    inf_config.num_image = 1;

    I think this will turn it into a (1, 3, 128, 128) x 1 single-image classification model.


    I'm asking because I think the inference result will still be wrong if I change it that way.

    I am curious whether what you mean by inferring videos on the KL630 matches what I intend to do.
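One way to reconcile per-frame streaming with a 15-frame model is a sliding window on the host side: collect frames one at a time and run the model each time 15 frames are buffered. A minimal sketch, where `frame_source` and `run_model` are placeholders, not Kneron APIs:

```python
from collections import deque

NUM_FRAMES = 15  # the model expects 15 stacked frames

def stream_inference(frame_source, run_model):
    """Collect streamed frames into a sliding window and invoke the
    15-frame model whenever the window is full. frame_source yields one
    frame at a time; run_model stands in for the real inference call."""
    window = deque(maxlen=NUM_FRAMES)
    results = []
    for frame in frame_source:
        window.append(frame)
        if len(window) == NUM_FRAMES:
            results.append(run_model(list(window)))
    return results
```

Whether the KL630 firmware supports stacking 15 frames into one NPU input, or the stacking must happen on the host before sending, is exactly the question to confirm with Kneron.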

  • Also, please help me find where I should change


    inf_config.num_image = 1;


    Maybe in host_stream.ini? The Fps part?


    [nnm]
    ModelPath = "nef/models_630_3dcnn_LR_4_id35000.nef"
    ModelId = 35000        # for yolo only - switch model (if there are multiple yolo models in one nef file)
    JobId = 3002           # for job mapping in application_init.c
    InferenceStream = 1    # inference stream index
    Threshold = 0.5        # for yolo only (JobId = 11)
    Fps = 1                # image input fps for NPU inference
    GetImageBufMode = 0    # 0: block mode, 1: non-block mode
    RoiEnable = 0          # enable ROI for nnm detect
    RoiX = 0               # ROI start x
    RoiY = 0               # ROI start y
    DrawBoxEnable = 1      # draw object bounding box on stream0
    OnlyPerson = 0         # only draw person bounding box when DrawBoxEnable
                           # (so far for yolo only, JobId = 11)
    DrawOnResize = 0;      # if InferenceStream is 0, this setting needs to be enabled; the box will be drawn on all resize streams

  • I changed it in my_kl630_sin_example_inf.c:



      // Image buffer address should be just after the header
      //inf_config.num_image = 15; // For 15fps
      inf_config.num_image = 1; // For 15fps 241028
      inf_config.image_list[0].image_buf = (void *)((uint32_t)_input_header + sizeof(my_kl630_sin_example_header_t));
      inf_config.image_list[0].image_width = _input_header->width;
      inf_config.image_list[0].image_height = _input_header->height;
      inf_config.image_list[0].image_channel = 3;
      //inf_config.image_list[0].image_format = KP_IMAGE_FORMAT_RGB565;   // Assume RGB565
      inf_config.image_list[0].image_format = KP_IMAGE_FORMAT_YUV420;   // Assume yuv420
      inf_config.image_list[0].image_norm = KP_NORMALIZE_KNERON;         // This depends on model
      inf_config.image_list[0].image_resize = KP_RESIZE_ENABLE;          // Default: enable resize
      inf_config.image_list[0].image_padding = KP_PADDING_DISABLE;       // Default: disable padding



    and got the following error on the KL630, with error code 131:



    Inference Configuration: Image Width: 128, Height: 128
    Model ID: 35000, Output Buffer: 0xa56e0
    Output Buffer Address: 0xa56e0
    Check_ERROR 2
    About to execute inference with model ID: 35000
    [kmdw_ncpu_pre_proc_thread] error: exist cpu op !!!
    NPU Proc: Error occured on Pre Proc
    Post Proc: Error occurred on NPU Proc
    Check_ERROR 2.2
    After inference execution: Result: 131
    Inference failed with result code: 131
    Inference execution failed or result is invalid.
    Final Result: 102, Mapped Class: 0
    Class ID: -1498553451
    Class Score: -15636851853132277000000000.000000


    Please help me, Maria.

  • Hi,

    The error code 131 is KP_FW_ERROR_MODEL_EXIST_CPU_NODE_131, which indicates that your model has CPU nodes. The KL630 doesn't support CPU nodes, so please edit your onnx model to cut off the CPU nodes, then add those nodes back on your host side.
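If the cut-off node is a trailing post-processing op such as a Softmax, it can be recomputed on the host from the raw NPU outputs. A minimal sketch in plain Python (this assumes the CPU node is a Softmax; your model's actual CPU node may be a different op):

```python
import math

def softmax(logits):
    """Numerically stable softmax, run on the host to replace a Softmax
    node cut from the onnx model (the NPU then returns raw logits)."""
    m = max(logits)                           # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

Usage would be something like `probs = softmax(raw_scores)` followed by `probs.index(max(probs))` to get the class id.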

    Could you provide the following files/information? Thanks!

    - The original model and the optimized onnx model
    - The data files used in quantization (images or videos)
    - The Python script for converting the model into an NEF file
    - Your Kneron toolchain version, available via the command: cat /workspace/version.txt

    We recommend using the E2E simulator (ktc.kneron_inference) in the Kneron toolchain to verify your onnx model's and NEF model's inference results before deploying the model.

    Reference: 3. Floating-Point Model Preparation - Document Center (kneron.com)

    Thank you for explaining your model. Could you also specify what exactly you're trying to infer? Are you running inference on a video of a person to figure out what the person is doing, or something else?

  • Hi YOUNGJUN,

    Since the process of porting a model to the KL630 is quite complex, please follow the model porting steps provided by Maria and verify that the inference result at each step is correct. Only after confirming this should you proceed with porting the model to the KL630. This approach will also simplify debugging.

    You can find detailed instructions at the link below.

    Thanks.

The discussion has been closed due to inactivity. To continue with the topic, please feel free to post a new discussion.