
Maria Chen
About
- Username
- Maria Chen
- Joined
- Visits
- 1,106
- Last Active
- Roles
- Member, ModeratorsKL520, ModeratorsKL720, ModeratorsAIModel
Comments
-
Hello, this is because the inputs of host_stream and companion_user_ex (Kneron PLUS) are different; host_stream's input comes from the sensor. In host_stream, the default format of the streamed-in image is yuv420p, so it needs to be set to KP_IMAGE_FORMAT_YUV420. Setting KP_IMAGE_FORMAT_RAW8 runs only because the hardware takes just the gray…
-
Hello, no, the image_format setting is used to tell the hardware the color space of the input image; it does not convert the image. 1. Could you test this KL630DemoGenericImageInferenceTesting.py, toggling the true/false flag, and provide the results along with the generated RGB/gray binary files? https://www.kneron.com/f…
-
Hello, thank you for providing more information. If even KL630DemoGenericImageInference.py has issues, we can solve that problem first and then apply the fix to the KL630 solution_host_stream. The kp.GenericInputNodeImage in KL630DemoGenericImageInference.py lets you set the normalization; the default is K…
-
Hello, the Int16 model error occurs because 16-bit models currently do not support hardware image pre-processing, so you need to do the pre-processing yourself (please refer to kl630_demo_generic_data_inference.c). Regarding the issue where the prediction requires converting to grayscale first before using KP_IMAGE_FORMAT_RGB565, could you please provide the NEF model, the normalization settings, …
-
Hi Zhihao, Yes, you're using the right conda environment for opset 11. The Resize operator isn't supported by the KL520, so it should be removed and added back in the postprocess function. Unfortunately, we could usually cut off operators that are not weighed…
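If it helps as a starting point, below is a minimal sketch of re-applying a removed Resize on the host side during postprocess, assuming a plain bilinear upscale; the function name, scale factor, and tensor layout are placeholder assumptions, not the KL520 example code.

```python
# Minimal sketch: reproduce a Resize that was cut from the model as a host-side
# postprocess step. Scale factor and HxWxC layout are placeholder assumptions.
import cv2
import numpy as np

def postprocess_with_resize(feature_map: np.ndarray, scale: int = 2) -> np.ndarray:
    h, w = feature_map.shape[:2]
    # cv2.resize expects (width, height) for the destination size.
    return cv2.resize(feature_map, (w * scale, h * scale), interpolation=cv2.INTER_LINEAR)
```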
-
Hi Zhihao, What is your onnx model's opset? If it is opset 13 or higher, we'd recommend using the onnx1.13 environment inside the toolchain. Also, please check the following: -Your onnx model has gone through onnx2onnx optimization -Your onnx …
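As a quick way to check the opset mentioned above, here is a minimal sketch using the onnx Python package inside the toolchain; the model path is a placeholder.

```python
# Minimal sketch: print an ONNX model's opset version(s). The path is a placeholder.
import onnx

model = onnx.load("model.onnx")
for opset in model.opset_import:
    # The default ONNX domain shows up as an empty string.
    print(opset.domain or "ai.onnx", opset.version)
```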
-
Hi, Thank you for providing the log. For the code, did you mean the one below? https://www.kneron.com/forum/uploads/666/MJZF4ULWRPE2.png If so, the Yolov3 example in our documentation is outdated; it was written for toolchain v0.22.0, and our latest…
-
Hi Dongkyun, What is showing up on your UART log? Please make sure that your KL730 firmware is the one used for Kneron PLUS. The file location is: SDK\04_Firmware\ubifs\fw_plus_nand_4k.ubifs As for flashing firmware, please refer to this documentat…
-
Hi Sathish, Sorry for the late reply. For questions on KL730, please contact the sales representative you purchased the KL730 from, as they would provide you with the contact person for technical assistance. Thank you!
-
Hi Sathish, Could you provide your models and code so we could take a better look at this issue? For questions on KL730, you could also contact the sales representative you purchased the KL730 from. They should be able to provide you with the conta…
-
Hi Sathish, Unfortunately, we can't run models in parallel on the NPU, but if you were in companion mode and used 2 dongles, the performance could come close to running them in parallel.
-
Hi, I was talking about flash_image_solution_host_mipi.bin earlier, but for the flash_image_solution_hico_mipi.bin you were trying, you will need access to Kneron PLUS enterprise; then you could build and run [kneron_plus]/examples_enterpris…
-
Hi, No problem, I'm glad to hear that! By the way, if you have the latest KL520 version (v2.2.0), you could also flash flash_image_solution_host_mipi.bin in KL520_SDK/firmware/utils/bin_gen and it would work as well.
-
Hi, It seems like the flash_image.bin isn't the correct file, because kdp2_flash_image.bin seems to include KDP2 FW, which is used for Kneron PLUS. Have you tried generating the firmware by using the workspace inside tiny_yolo_v3_host? https://www.k…
-
Hi Olivier, The memory size restriction for KL720 is 70-75MB. Reference: Write Model To Flash - Document Center https://www.kneron.com/forum/uploads/887/80S281ZRN3EO.png Our sales representative's email is: brian.lin@kneron.us You may also use our w…
-
Hi Olivier, As long as the onnx model's operators are supported by KL720, it should be able to run. Supported operators list: Hardware Supported Operators - Document Center For a reference design for the chip, please contact our sales team. They wou…
-
Hi Jyoti, Usually, those who port their models would write their own postprocess, as they are the most familiar with their model. There are example tiny yolov3 and yolov5 postprocess functions in kneron_plus/python/example/utils, and for further tec…
-
Hello, that document uses an older version of the toolchain. If you modify the model with the latest toolchain version: when the opset is 11 or 12, you can use the ONNX editor in the base environment; when it is opset 13 to 18, you can use onnx.utils.extract_model in the onnx1.13 environment (https://onnx.ai/onnx/api/utils.html).
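For the opset 13 to 18 case above, here is a minimal sketch of the onnx.utils.extract_model call; the file paths and tensor names are placeholders for your own model.

```python
# Minimal sketch: extract a sub-graph from an ONNX model with onnx.utils.extract_model.
# File paths and tensor names below are placeholders.
import onnx.utils

onnx.utils.extract_model(
    "model.onnx",                  # source model
    "model_cut.onnx",              # destination model with unwanted nodes removed
    input_names=["input"],         # graph inputs to keep
    output_names=["conv_output"],  # tensors that become the new graph outputs
)
```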
-
Hello, https://www.kneron.com/forum/discussion/comment/2259#Comment_2259 Using the onnx_convertor tool to convert PyTorch (and similar) models into onnx should be feasible: 3. Floating-Point Model Preparation - Document Center If it fails, you could try, inside the Kneron Toolchain (docker), …
-
Hello, https://www.kneron.com/forum/discussion/comment/2258#Comment_2258 Regarding these questions: 1. The issue arises because the toolchain conversion process automatically sets the input format to 1W16C8B, so pytorch2onnx_kneron.py does not need to be modified. 2. For now, because the firmware's preprocess has an issue, pre_proc_…
-
Hello, sorry about that, we found the cause! Because the Kneron Toolchain automatically sets the input format to 1W16C8B when converting the model, your model's input format is 1W16C8B: https://www.kneron.com/forum/uploads/498/X4BBLC6WKTNV.png The usual input format is 4W4C8B; 1W16C8B is also supported, but…
-
Hello, to answer your latest questions: although the nef model looks fine, KL730 generic inference, which should be able to run, still fails. YoloX was an example for the KL720 back then, and the documentation dates from the Kneron PLUS 1.x era; no one is maintaining it now, and the KL730 did not exist at that time. Current PyTorch versions also keep updating, which causes the model archi…
-
Hello, I also converted your onnx model. Although it converts to a nef model successfully, running KL730 generic inference still fails (this generic inference should succeed with any nef model). I think it may be due to what you mentioned earlier, "the compiled YOLOX model does not match the image format of the KL730 single-model creation example", because…
-
Hi Jyoti, If you were talking about how the numpy arrays get truncated when you print out the results in Kneron PLUS Python, you could try using: import sys import numpy numpy.set_printoptions(threshold=sys.maxsize) This would help print out all th…
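The snippet quoted in the comment above, laid out as a runnable block for readability:

```python
# Print full numpy arrays instead of the truncated "..." summary.
import sys
import numpy

numpy.set_printoptions(threshold=sys.maxsize)
```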
-
Hello, the kneron_inference results are numpy arrays, so you will need to write the model's postprocess to obtain the coordinates before you can use the bounding-box display function. The function's details are in /workspace/E2E_Simulator/python_flow/utils/utils.py: https://www.kneron.com/forum/uploads/124/9W…
-
Hello, sorry for the late reply. Since error code -7 can stem from many causes, we would need to first confirm the model is fine, then reproduce the error and check the code. Could you please provide the following information? Thank you! -NEF model -Onnx model (to confirm whether there are unsupported operators) -firmware.tar -Kneron PLUS code If you are using your own…
-
Hello, since the KL720 does not support those operators, you may need to use a different model. If these operators sit at the very end of the model, you could also cut them off and then write your own postprocess to add the removed operators back, but we would recommend using a different model. As for the toolchain, please use the latest v0.28.0 to convert the model. You can use this…
-
Hello, as long as the KL720 supports the operators in the ViT-architecture onnx model, it should in principle be able to run. Please refer to the list in the documentation: Hardware Supported Operators - Document Center
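As a quick way to gather the operators to compare against that list, here is a minimal sketch that prints the unique operator types in an onnx model; the model path is a placeholder.

```python
# Minimal sketch: list the unique operator types used by an ONNX model so they can
# be checked against the "Hardware Supported Operators" document. Path is a placeholder.
import onnx

model = onnx.load("vit_model.onnx")
print(sorted({node.op_type for node in model.graph.node}))
```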
-
Hi Franklin, I'm sorry for their lack of communication. Could you contact Jeffrey (jeffrey-yc.chen@kneron.us) instead and cc Brian just in case? Thank you!
-
Hello, sorry, this document is no longer supported, so there will not be any new documentation for it. As long as all of the trained model's operators are supported by the KL730, the model can be converted into a .nef model with the Kneron Toolchain. Please refer to: Hardware Supported Operators - Document Center The converted nef model can then run inference on the KL730.