KL520 AI audio application question
Hi there,
I'm using the KL520 for human recognition, and so far it is working well. I see from the datasheet that there is an I2S interface for audio applications, but I don't have much knowledge of audio technology.
My question is: is it possible to run an audio AI model on the KL520, connected to a microphone and a speaker, so that the KL520 can recognize audio events? For example, if there were a loud sound like a gunshot in my area (e.g., Philadelphia), I could receive an alert from the AI module via the KL520. Additionally, is it possible to implement two AI models on the KL520? For instance, can I run both a ResNet model and an audio AI model simultaneously and use them in practice?
Thanks
FK
Comments
Greetings,
It's possible to run an audio AI model on the KL520 if the model consists of operators supported by the KL520. Here is the information on supported operators:
https://doc.kneron.com/docs/#toolchain/appendix/operators/
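As a rough sketch of that compatibility check (the operator names below are examples only, not the authoritative list from the link above), verifying support is simple set logic over the operators your model uses:

```python
# Hypothetical sketch: check whether every operator in a model is supported.
# The SUPPORTED_OPS names here are examples only -- see the Kneron
# documentation link above for the authoritative operator list.

SUPPORTED_OPS = {"Conv", "Relu", "MaxPool", "Add", "Concat", "GlobalAveragePool"}

def unsupported_ops(model_ops):
    """Return the operators (sorted) that are not in the supported set."""
    return sorted(set(model_ops) - SUPPORTED_OPS)

# Example: a model using an operator outside the supported set.
model_ops = ["Conv", "Relu", "MaxPool", "Softmax"]
missing = unsupported_ops(model_ops)
if missing:
    print("Unsupported operators:", missing)
```

If the result is non-empty, the model would need those layers replaced or moved to host-side post-processing before it could run on the KL520.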
Additionally, it is possible to run 2 models in sequence rather than simultaneously. Please refer to the instructions in the following link:
https://doc.kneron.com/docs/#plus_c/feature_guide/customized_api/create_kl520_multiple_models_example/
Regarding the I2S and connection with the microphone, there is no I2S example in our SDK and no I2S hardware pinout on the KL520 development kit. Therefore, you will need to find another way to collect the audio source and transfer it to the KL520 for model inference.
Thanks for your explanation.
I have another question regarding the KL520 dongle.
I was trying to set up the environment for the KL520 example. However, I get some errors, as shown below.
I'm not sure whether the errors are related to my Ubuntu version (22.04) or are just a syntax error. I was following instructions from the Internet and got stuck on the kdp-host-api part...
Thanks!
Hi FK,
It appears that you are using an outdated version (host_lib) that is no longer supported. Please uninstall the previous version and install the latest one.
The new development tool is PLUS. You can download it from the following link:
https://www.kneron.com/en/support/developers/
You can find the corresponding files in the directory: /kneron_plus/python/package/.
For more details about PLUS, please visit our documentation page:
https://doc.kneron.com/docs/#plus_python/getting_start/
Hi Ethon,
This is the result from ScanDevices.py.
I was trying to run the sample code from the Document Center. However, I got "upload model failed". Did I miss anything in the model setup?
My kneron version is from kneron_plus_v3.0.0.zip.
Or should I upgrade the firmware? And if so, how? (This is regarding the "warning".)
Thanks!
Hi,
I thought it might be a missing model upload, but I rechecked the model uploading process and got this message from my code:
My code:
and it seems to get stuck on "kp.core.load_model_from_file()"; see below.
I'm not sure whether the "warning" needs to be addressed.
Thanks
Hi FK,
If you are using KL520 dongle on Kneron PLUS, please make sure that:
- These dependencies are installed: Install Dependency - Document Center (kneron.com)
- Regarding the warning, we recommend using the same version for the firmware in your KL520 dongle and for your Kneron PLUS (e.g., both version 1.7, or both version 2.2)
For now, the log says that the KL520 dongle is using flash boot mode (firmware 1.7.0). You could update your dongle to USB boot and try running it again.
To update your dongle, you could either download Kneron DFUT (Developers | Kneron – Full Stack Edge AI), or, if your Ubuntu isn't Ubuntu 18.04 (x86_64 64-bit), you could use the DFUT console provided inside Kneron PLUS and run the update commands.
Information on Kneron DFUT: Upgrade AI Device To KDP2 - Document Center (kneron.com)
Build DFUT console in Kneron PLUS: Build Kneron PLUS - Document Center
I followed the instructions for building the DFUT console at https://doc.kneron.com/docs/#plus_c/introduction/build_plus/#23-build-with-dfutconsole
This is my code, and I tried it again.
However, I got the same result... (I set a timeout this time so it wouldn't hang forever.)
The suggestion from the post was:
In regards to the warning, we would recommend you to use the same version for your firmware in KL520 dongle and for your Kneron PLUS (e.g. both use version 1.7, or both are using version 2.2)
How can I upgrade my dongle?
Hi FK,
The log shows that you are still using flash boot mode, so you would need to either:
- Update the dongle to USB boot, so the Python program loads firmware from SCPU_FW_PATH and NCPU_FW_PATH when you run it, or
- Update the dongle to flash boot with the firmware inside the kneron_plus/res/firmware/KL520 folder.
You can refer to the section "Upgrade AI Device To KDP2" inside the document center: Upgrade AI Device To KDP2 - Document Center (kneron.com)
To update your KL520 dongle to USB boot, go inside kneron_plus/build/bin, run ./DFUT_console with --kl520-usb-boot, and specify your --port, as shown in the image above.
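Those steps amount to just two commands. A sketch, assuming Kneron PLUS has already been built with the DFUT console, and with an illustrative port ID (substitute the port ID reported for your own dongle):

```shell
# Sketch of the USB-boot update described above.
# "133" is an illustrative port ID; use the one reported for your dongle.
cd kneron_plus/build/bin
./DFUT_console --kl520-usb-boot --port 133
```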
Hi Maria,
Thanks for the help. I've updated my KL520 dongle to USB boot. Thanks a lot.
The code "KL520DemoGenericImageInference.py" runs totally fine. However, when I try to run the Python code "KL520DemoGenericImageInferencePostYolo.py", I get some errors, as shown below.
I found that some other examples fail with the same "no attribute channel" error. I think the problem is in the post-processing?
Another attribute error comes from "KL520DemoGenericDataInference.py".
Did I miss any steps, like an installation? I wanted to first make sure the overall code and environment are fine, and then dig into further applications.
Please give me some clues. Appreciate it!
Hi FK,
Have you installed the Python package? Please go to kneron_plus/python/package/[your environment], and pip install the .whl and try running the python examples again.
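A sketch of that install (the environment folder name below is hypothetical, and the wheel filename varies by release; use whatever .whl ships in your own package directory):

```shell
# Install the Kneron PLUS Python package from the bundled wheel.
cd kneron_plus/python/package/linux_x86_64   # hypothetical environment folder name
pip install *.whl
```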
Dependencies for Python: Install Dependency - Document Center (kneron.com)
Hi Maria,
Thanks for your assistance. I've run the dongle successfully.
Another question: is it possible to integrate my IP camera with the dongle, analyze the video using the KL520 AI engine, and then output the video stream?
Hi FK,
As long as the IP camera can connect and send data to the environment with the KL520 device, it should be possible.
You could refer to the Python example KL520DemoCamGenericImageInferenceDropFrame.py, since it uses a camera to take in images; the KL520 then runs inference on the images, and the results are shown on the computer screen. You could also connect a USB camera and do the same thing.
Hi Maria,
[Connect Device]
Error: connect device fail, port ID = '0', error msg: [Error raised in function: connect_devices. Error code: 27. Description: ApiReturnCode.KP_ERROR_CONFIGURE_DEVICE_27]
How can I find out exactly what the error code means? Where can I look it up? Thanks
Hi FK,
The error seems to be a failed USB configuration error. Please check if your OS fits these requirements:
And if you have these USB permissions configured:
Reference: Install Dependency - Document Center (kneron.com)
In your previous reply, it said that your Ubuntu version is 22.04, so the Python examples might not work. You could run the C examples instead.
As for how to look up the error code: If your editor can search inside your file contents, you could look up "KP_ERROR_CONFIGURE_DEVICE_27" and see where the error came from.
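That lookup can also be scripted. Below is a minimal sketch that walks a source tree and reports every line containing a given error-code name (the "kneron_plus" directory path is an assumption; point it at wherever your copy lives):

```python
import os

def find_in_tree(root, needle, exts=(".py", ".h", ".c")):
    """Walk `root` and yield (path, line_number, line) for lines containing `needle`."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if needle in line:
                            yield path, lineno, line.strip()
            except OSError:
                continue  # skip unreadable files

# Example: look up where the error code is defined or raised
# ("kneron_plus" is an assumed path to your Kneron PLUS checkout).
for hit in find_in_tree("kneron_plus", "KP_ERROR_CONFIGURE_DEVICE_27"):
    print(hit)
```

An editor's project-wide search does the same job; the script is just handy when working over SSH.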
Hi Maria,
Currently, everything runs fine on my Ubuntu 22.04 machine. One question: can I run it on my Raspberry Pi 3B instead of a 4B?
Also, is it possible to convert YOLOv5 or another .pt model into a file that the KL520 can run, like a .nef file?
Thanks
FK
Hi FK,
You might not be able to run Python examples on Kneron PLUS with Raspberry Pi 3B, because the supported OS is listed below:
However, as long as your Raspberry Pi 3B fits the requirements below, you should be able to run C examples on Kneron PLUS.
Yes, it is possible to convert YOLOv5 or another .pt model into a .nef file for the KL520 to load and run inference on. You would need to convert the .pt model into an .onnx file, then follow the instructions in: 1. Toolchain Manual Overview - Document Center (kneron.com)
Hi Maria,
Thanks for the messages.
Besides, how much capacity can the KL520 use for an AI model?
Thanks
FK
Hi FK,
The available memory depends on how you upload the model.
Please note that the capacity is not only for the model size but also includes memory for input data, output features, and so on.
For more details, please refer to our documentation page at https://doc.kneron.com/docs/#plus_c/introduction/write_model_to_flash/#1-introduction
Hi Ethon,
This might be a silly question.
1. Currently, when I run a model on the KL520, I upload the firmware and the model before the application. Is it possible to skip these steps? It seems that I shouldn't need to upload the firmware and model to the KL520 again, since I've already done so in a previous run.
2. Pre-processing can be performed either on the KL520 or on the host device. I want to confirm that the example code "KL520DemoCamGenericImageInference.py" uses the host device for pre-processing. I noticed that the code converts the image to BGR565 using cv2, which is why I guessed so. (Kneron PLUS supports the BGR565, BGRA8888, RAW8 (grayscale), and YCbCr422 (YUYV422) data formats for inference.) However, I also saw code suggesting that the KL520 does some "resizing, padding, normalizing". Does that mean the pre-processing is done by the KL520?
3. Another thing I'm curious about: how do I use the "reset" button on the KL520 dongle? Is there any explanation of it, and how can I tell that I've reset it successfully?
Thanks
FK
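On question 2, the BGR565 conversion itself is easy to see in isolation: each pixel is packed into 16 bits. A minimal sketch, assuming the common 5-6-5 layout with blue in the high bits (verify the exact bit and byte order against the Kneron PLUS documentation):

```python
# Sketch: pack one 8-bit-per-channel BGR pixel into 16-bit "565" form,
# as host-side pre-processing might before sending a frame for inference.
# Assumption: blue in the high 5 bits, then 6 bits green, 5 bits red --
# check the Kneron docs / cv2.COLOR_BGR2BGR565 for the exact layout.

def pack_bgr565(b, g, r):
    return ((b >> 3) << 11) | ((g >> 2) << 5) | (r >> 3)

print(hex(pack_bgr565(255, 0, 0)))   # pure blue -> 0xf800 under this layout
```

This pixel-format conversion is separate from the resizing, padding, and normalization steps, which can happen on either side depending on the configuration.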
Hi FK,
Hi there,
Quick question:
Is the model "tiny_yolo_v3.nef" converted from "YOLOv3-tiny.weights" (https://pjreddie.com/darknet/yolo/#google_vignette) using 1. Toolchain Manual Overview - Document Center (kneron.com)?
I found that "YOLOv3-tiny.weights" is around 35.4MB, while "tiny_yolo_v3.nef" is only around 9.9MB. Does the conversion (compression) affect performance, or is it just an optimization to fit the model on the KL520?
Besides, what is the difference between "example_model_zoo" and "example" in the python folder?
Thanks
Hi FK,
To run model inference on Kneron's NPU, you need to perform quantization and compilation in Kneron's toolchain beforehand. The quantization process will reduce the model's file size. Although there might be a slight loss of precision during quantization, we have techniques in place to minimize this loss.
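The file sizes in your example are roughly what 8-bit quantization predicts: float32 weights shrink about 4x when stored as int8. A back-of-the-envelope check (the .nef also contains compiled metadata and other sections, so it won't match exactly):

```python
# float32 weights use 4 bytes per value; int8 quantized weights use 1 byte.
weights_mb_fp32 = 35.4                     # YOLOv3-tiny .weights file size
predicted_int8_mb = weights_mb_fp32 / 4    # expected ~4x reduction
print(round(predicted_int8_mb, 2))         # in the same ballpark as the 9.9 MB .nef
```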
Regarding the "example" section in PLUS, all the examples inside are designed to demonstrate the usage of different functions. The "model_zoo" section collects packages with end-to-end workflows from training to inference. You can refer to our documentation for more details about this section. https://doc.kneron.com/docs/#plus_python/modelzoo/
Hi Ethon,
I have a quick question. I understand that the usable model format is ".nef". Therefore, regardless of whether I generate the ".nef" file using Python or C, I should be able to use the model on the KL520 device. Is that correct? For example, if I use Python code to convert the model from ".weights" to ".nef", I can then use C for development and add the ".nef" file to my KL520 device. This way, I can isolate the model generation process to Python only.
Hi FK,
Actually, both Python and C methods mentioned above are for running inference on the KL520 and are not related to model conversion.
To convert a model from the original framework (e.g., Keras, PyTorch, etc.), you should use the corresponding Python API in our toolchain. Here is the instruction for the toolchain: https://doc.kneron.com/docs/#toolchain/manual_1_overview/