Maria Chen

About

Username: Maria Chen
Joined
Visits: 1,195
Last Active
Roles: Member, ModeratorsKL520, ModeratorsKL720, ModeratorsAIModel

Comments

  • Hi, No problem, I'm glad to hear that! By the way, if you have the latest KL520 version (v2.2.0), you could also flash flash_image_solution_host_mipi.bin in KL520_SDK/firmware/utils/bin_gen and it would work as well.
  • Hi, It seems like the flash_image.bin isn't the correct file, because kdp2_flash_image.bin seems to include KDP2 FW, which is used for Kneron PLUS. Have you tried generating the firmware by using the workspace inside tiny_yolo_v3_host? https://www.k…
  • Hi Olivier, The memory size restriction for KL720 is 70-75MB. Reference: Write Model To Flash - Document Center https://www.kneron.com/forum/uploads/887/80S281ZRN3EO.png Our sales representative's email is: brian.lin@kneron.us You may also use our w…
  • Hi Olivier, As long as the onnx model's operators are supported by KL720, it should be able to run. Supported operators list: Hardware Supported Operators - Document Center For a reference design for the chip, please contact our sales team. They wou…
  • Hi Jyoti, Usually, those who port their models would write their own postprocess, as they are the most familiar with their model. There are example tiny yolov3 and yolov5 postprocess functions in kneron_plus/python/example/utils, and for further tec…
  • Hello, the toolchain used in that document is an older version. If you modify the model with the latest toolchain version and the opset is 11 or 12, you can use the ONNX editor in the base environment. If the opset is 13 to 18, you can use onnx.utils.extract_model in the onnx1.13 environment (https://onnx.ai/onnx/api/utils.html). (A minimal usage sketch is included after this comment list.)
  • Hello, https://www.kneron.com/forum/discussion/comment/2259#Comment_2259 Converting PyTorch (and similar) models to onnx with the onnx_convertor tool should work: 3. Floating-Point Model Preparation - Document Center If it fails, you could try, inside the Kneron Toolchain (docker)…
  • Hello, https://www.kneron.com/forum/discussion/comment/2258#Comment_2258 Regarding these questions: 1. The issue is that the toolchain conversion process automatically sets the input format to 1W16C8B, so pytorch2onnx_kneron.py does not need to be modified. 2. Currently, because there is an issue with the firmware's preprocess, pre_proc_…
  • Hello, sorry, we found the cause! Because the Kneron Toolchain automatically sets the input format to 1W16C8B when converting the model, your model's input format is 1W16C8B: https://www.kneron.com/forum/uploads/498/X4BBLC6WKTNV.png The usual input format is 4W4C8B, and 1W16C8B is also supported, but…
  • Hello, to answer your latest question: although the nef model looks fine, KL730 generic inference, which should be able to run it, still fails. YoloX was an example written for KL720 back in the Kneron PLUS 1.x documentation; it is no longer maintained, and KL730 did not exist at that time. PyTorch versions also keep updating, which causes the model arch…
  • Hello, I also converted your onnx model. Although it converts to a nef model successfully, KL730 generic inference still fails (this generic inference should succeed with any nef model). I think it may be because of what you mentioned earlier, that "the compiled YOLOX model does not match the image format of the KL730 single-model example", because…
  • Hi Jyoti, If you were talking about how, in Kneron PLUS Python, the numpy arrays get truncated when you print out the results, you could try using: import sys import numpy numpy.set_printoptions(threshold=sys.maxsize) This would help print out all th… (A short runnable sketch is included after this comment list.)
  • Hello, the results of kneron_inference are numpy arrays, so you will need to write your model's postprocess to obtain the coordinates before you can use the bounding-box display function. The details of that function are in /workspace/E2E_Simulator/python_flow/utils/utils.py: https://www.kneron.com/forum/uploads/124/9W…
  • Hello, sorry for the late reply. Since error code -7 can come from various causes, we would need to first confirm that the model is fine, then reproduce the error and check the code. Could you please provide the following information? Thank you! -NEF model -Onnx model (to check for unsupported operators) -firmware.tar -Kneron PLUS code If you are using…
  • Hello, because KL720 does not support those operators, you may need to use a different model. If the operators are at the very end of the model, you could also cut them off and write your own postprocess to add them back, but we would recommend using a different model. For the Toolchain, please use the latest v0.28.0 to convert the model. You can use this…
  • Hello, as long as KL720 supports the operators in the ViT-architecture onnx model, it should in principle be able to run. Please refer to the list in the documentation: Hardware Supported Operators - Document Center
  • Hi Franklin, I'm sorry for their lack of communication. Could you contact Jeffrey (jeffrey-yc.chen@kneron.us) instead and cc Brian just in case? Thank you!
  • Hello, sorry, this document is no longer supported, so there will not be a new version of it. As long as all the operators of your trained model are supported by KL730, it can be converted into a .nef model with the Kneron Toolchain. Please refer to: Hardware Supported Operators - Document Center The converted nef model can then run inference on KL730.
  • Hello, this document is no longer supported, so we would ask you to use an earlier mmcv version that matches your CUDA version. The download command is: pip install mmcv-full==1.5.0 -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html As in the image below, but using mmcv-full, with mmcv version 1.…
  • Hi Franklin, Yes, you should reach out to Brian (brian.lin@kneron.us), our sales representative. Thank you for your interest! You could also fill out the form on our website: Contact us | Kneron – Full Stack Edge AI
  • Hi Nhleem, Edit: We've answered your question in another email.
  • Hello, whichever model you use, we would recommend the latest Toolchain (v0.28.0) and SDK (v2.2.0) versions. The KL720 SDK can be downloaded from the official website, and the Kneron Toolchain can be updated with the "docker pull kneron/toolchain:latest" command.
  • Hello, sorry for the late reply, and thank you for sharing! Also, our toolchain has been updated and now supports Opset 18: Kneronnxopt - Document Center https://www.kneron.com/forum/uploads/313/1K6MGYK85OYU.png You can also refer to our supported operators/nodes: Hardware Supported Operators - D…
  • Hello, sorry for the late reply. 1. In principle, yes; before using it, please confirm that KL730 supports all of your model's operators: Hardware Supported Operators - Document Center 2. We recommend cutting off the operators below Concat in the onnx model: https://www.kneron.com/forum/uploads/227/YPDF2CSNW4O9.png Cut…
  • Hi again Franklin, For KL520 hardware information, you could refer to these documents available on our Developer Center: Developers | Kneron – Full Stack Edge AI https://www.kneron.com/forum/uploads/525/ZYX6GRBR8MVC.png
  • Hello, yes, whether you upload over USB or flash the firmware, you will need to combine the two nef models with NEF Combine first, and then proceed with flashing. NEF Combine instructions: 5. Compilation - Document Center Example of running multiple models on KL720: Create Multiple Model Example for KL720 - Document Center
  • Hello, we do not particularly recommend any specific architecture; as long as all of the model's operators are supported by KL720, it should in principle work. Hardware supported operators: Hardware Supported Operators - Document Center You can also refer to the performance of Kneron hardware: Hardware Performance - Document Center Using the Kneron Too…
  • Hello, yes, you can. If you upload the model over USB, the total size limit is 75MB, while if you flash the model in firmware, the total limit is 70MB. https://www.kneron.com/forum/uploads/635/CQL637KGXO9U.png Reference document: Write Model To Flash - Document Center
  • Hi Franklin, These are the platforms you can build and run Kneron PLUS on: https://www.kneron.com/forum/uploads/135/4K7XP5J1COMK.png If you are not using the above platforms, to flash the KL520 firmware, you could download Kneron PLUS and try buildi…
  • Hi SeonGyun, In that case, I don't think it's possible.
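
A minimal sketch of the onnx.utils.extract_model tip from the opset 13 to 18 comment above. The file paths and tensor names are placeholders, not taken from any specific model; it assumes the onnx package is available (e.g. in the toolchain's onnx1.13 environment).

    # Sketch: extract a sub-model from an ONNX file with onnx.utils.extract_model.
    # Paths and tensor names are hypothetical; replace them with your own
    # (tensor names can be found by inspecting the model, e.g. in Netron).
    import onnx
    from onnx import checker
    from onnx.utils import extract_model

    INPUT_MODEL = "model.onnx"        # original model (placeholder path)
    OUTPUT_MODEL = "model_cut.onnx"   # extracted sub-model (placeholder path)

    input_names = ["images"]          # tensors where the kept sub-graph starts
    output_names = ["backbone_out"]   # tensors where the kept sub-graph ends

    extract_model(INPUT_MODEL, OUTPUT_MODEL, input_names, output_names)

    # Sanity-check the result before passing it to the Kneron Toolchain.
    checker.check_model(onnx.load(OUTPUT_MODEL))
    print("Extracted sub-model saved to", OUTPUT_MODEL)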
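
As a follow-up to the numpy print-options comment above, a short runnable sketch, assuming the inference result is an ordinary numpy array (dummy data is used here instead of an actual Kneron PLUS result):

    # Sketch: print a large numpy array in full instead of the truncated "..." form.
    import sys
    import numpy as np

    np.set_printoptions(threshold=sys.maxsize)

    # 'result' stands in for the numpy array returned by an inference call;
    # here it is just dummy data for illustration.
    result = np.arange(10_000).reshape(100, 100)
    print(result)  # printed in full, no truncation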