
Ethon Lin
About
- Username
- Ethon Lin
- Joined
- Visits
- 1,159
- Last Active
- Roles
- Member, ModeratorsKL520, ModeratorsKL720, ModeratorsAIModel
Comments
-
Hi, if you want to remove the softmax op, you can refer to section 3.1.7 Model Editor in the Toolchain Manual http://doc.kneron.com/docs/#toolchain/manual/. Then run onnx2onnx again to check your model after cutting the softmax layer.
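For reference, a minimal sketch of what cutting a trailing softmax looks like with the plain onnx Python API; the file and node names are placeholders, and the toolchain's Model Editor (section 3.1.7) remains the supported way to do this:

```python
# Sketch: remove a trailing Softmax node from an ONNX graph.
# "model.onnx" and the output names are placeholders; use the toolchain's
# Model Editor (section 3.1.7) for the supported workflow.
import onnx

model = onnx.load("model.onnx")
graph = model.graph

# Find the Softmax node in the graph.
softmax = next(n for n in graph.node if n.op_type == "Softmax")

# Rewire the graph output to the Softmax input, then drop the node.
for out in graph.output:
    if out.name == softmax.output[0]:
        out.name = softmax.input[0]
graph.node.remove(softmax)

onnx.checker.check_model(model)
onnx.save(model, "model_no_softmax.onnx")
```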
-
I think the error was caused by a preprocessing error. Did you revise the customized part in img_preprocess.py? The setting "img_preprocess_method": "customized" means you want to customize your own preprocessing. Ple…
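A rough sketch of what a customized preprocess function typically returns; the 224x224 size, the normalization, and the NCHW layout are assumptions that must match how your own model was trained, not the actual content of img_preprocess.py:

```python
# Rough sketch of a customized preprocess function. The resize target,
# the (x/255 - 0.5) normalization, and the NCHW layout are assumptions
# that must match your own model's training pipeline.
import numpy as np
from PIL import Image

def my_preprocess(image_path):
    img = Image.open(image_path).convert("RGB").resize((224, 224))
    data = np.asarray(img, dtype=np.float32)
    data = data / 255.0 - 0.5              # example normalization
    data = np.transpose(data, (2, 0, 1))   # HWC -> CHW
    return np.expand_dims(data, axis=0)    # add batch dimension
```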
-
I think the problem is that you didn't specify the path of the model you want to run inference with. Please run "ex_kdp2_generic_inference" with the arguments -p, -d, and -m to specify the path of your NEF. And step 3 of your list is for another purpose, it's …
-
Regarding batch compile, there are now only two approaches left. One is 3.5 Batch-Compile in the link http://doc.kneron.com/docs/#toolchain/manual/#3-toolchain-scripts-usage. This step requires first running 3.2 FpAnalyser, Compiler and IpEvaluator to generate the *.bie, and then filling in input_batch_compile.json with the id, version…
-
Right now your code flow is: inf_res, paf = ktc.kneron_inference(img_data, nef_file=compile_result, radix=ktc.get_radix([img_data])) Please change it to: inf_res = ktc.kneron_inference(img_data, nef_file=compile_result, radix=ktc.get_radix([i…
-
Because kneron_inference has only one output, it will raise an error if you use two variables to receive its output. The output of kneron_inference is the list of results of the model's output nodes. For example, the output node of the model "output.onnx"…
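In other words, kneron_inference returns a single list with one array per model output node, so a model with two output nodes would be unpacked after the call; the variable names below are illustrative and img_data / compile_result are assumed to come from the earlier steps:

```python
# kneron_inference returns one list: one array per model output node.
# Unpack after the call instead of assigning the call to two variables.
# img_data and compile_result are assumed to come from the earlier steps.
import ktc

inf_res = ktc.kneron_inference(img_data,
                               nef_file=compile_result,
                               radix=ktc.get_radix([img_data]))

heatmap = inf_res[0]   # first output node (illustrative name)
paf = inf_res[1]       # second output node, if the model has one
```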
-
Regarding NEF generation, the steps in Toolchain v0.14.1 are the new approach, while section 6.3 of http://doc.kneron.com/docs/#520_1.4.0.0/getting_start/#520_1.4.0.0/getting_start/ is the old approach, which requires correctly filling in two JSON settings before the NEF can be generated. Both approaches produce the same result, so either one will do. As for fw_scpu.bin and fw_ncp…
-
Yes, model_id 19 is for Kneron's tiny yolo v3 example. But the NEF is just 88 KB, so I suspect the model you generated is not tiny yolo v3. Was the NEF file generated by the commands "fpAnalyserCompilerIpevaluator_520.py" and "batchCompil…
-
It seems the firmware system ran into something after loading your NEF model. How many models are in your NEF file? Is there any model with model_id 19 in the NEF file? And how large is your NEF file?
-
The KL520 can be used with the RK3399, but the USB commands of the KL520 are transferred through libusb. https://github.com/libusb/libusb/releases/tag/v1.0.23 Users should make sure libusb works in their own environment.
-
Hi Ellen, sorry, the resolution of the screenshot you provided is too low for us to clearly read the information in it. Would it be convenient for you to provide your epxxx.h5 directly so we can check it?
-
Yes, the python-api is a way to simulate the inference result of the KL520/KL720. You can simply print the inference result to your screen as follows, or save it to a specific file. https://www.kneron.com/forum/uploads/820/9IRTTKERXGPR.jpg
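A small sketch of both options, assuming inf_results is the list returned by ktc.kneron_inference and that numpy arrays are acceptable for saving:

```python
# Print the simulated inference result, or save it for later comparison.
# "inf_results" is assumed to be the list returned by ktc.kneron_inference.
import numpy as np

for i, out in enumerate(inf_results):
    arr = np.asarray(out)
    print(f"output node {i}: shape={arr.shape}")
    print(arr)                                  # print to screen
    np.save(f"inference_output_{i}.npy", arr)   # or save to a file
```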
-
No, the KL520 is controlled through the Cortex-M4, which is a 32-bit architecture.
-
Is the above message complete? There should be more output, whether you encountered an error or not. Or maybe you could provide your model.onnx to make checking the issue easier.
-
It looks like something has been damaged; understanding the exact fault may require sending the unit back to the factory for analysis. We will help you with the replacement process. Please write to Kneo.marketplace@kneron.us with your contact information, the purchase date and place, and a brief description of the failure, and a dedicated staff member will assist you with the procedure. Thank you.
-
Hi, the latest version of the toolchain has just updated the opset from 9 to 11. If you want to run in the latest version, please use the Python script "/workspace/libs/ONNX_Convertor/optimizer_scripts/onnx1_4to1_6.py" to upgrade the opset version.…
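For reference, the opset upgrade that script performs can be illustrated with onnx's own version converter; this is only an illustration with placeholder file names, not the toolchain script itself:

```python
# Illustration only: upgrading a model from opset 9 to opset 11 with the
# standard onnx version converter. The toolchain's onnx1_4to1_6.py script
# is the supported way to do this; file names here are placeholders.
import onnx
from onnx import version_converter

model = onnx.load("model_opset9.onnx")
upgraded = version_converter.convert_version(model, 11)
onnx.checker.check_model(upgraded)
onnx.save(upgraded, "model_opset11.onnx")
```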
-
A quick question: after running it, the complete message should include both SCPU and NCPU, as follows: update SCPU firmware from file ../../app_binaries/KL520/kdp2//fw_scpu.bin update SCPU firmware OK update NCPU firmware from file ../../app_binaries/KL520/kdp2//fw_ncpu.bin update NCPU firmware OK …
-
The error message means the ONNX version is too new to complete the toolchain flow. Please use Netron to open the model.onnx and click the input layer to make sure the format is "ONNX v4". https://www.kneron.com/forum/uploads/473/OBO2U36TH…
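If Netron is not at hand, the same check can be done from Python; a sketch with a placeholder file name, where "ONNX v4" in Netron corresponds to ir_version 4:

```python
# Check the ONNX IR version of a model from Python instead of Netron.
# "ONNX v4" in Netron corresponds to ir_version 4; the file name is a placeholder.
import onnx

model = onnx.load("model.onnx")
print("ir_version:", model.ir_version)
for opset in model.opset_import:
    print("opset domain:", opset.domain or "ai.onnx", "version:", opset.version)
```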
-
I think the issue could be caused by a bug in the old version; please update to the latest version and try again. Please refer to the Toolchain Manual http://doc.kneron.com/docs/#toolchain/manual/ to pull the latest version of the toolchain. Download the lat…
-
Besides opset = 9, there are still some version requirements in the toolchain. The toolchain uses PyTorch v1.2 and ONNX version 1.4. If you want to use a higher PyTorch version to export the ONNX file, please make sure the ir_version is 4. We will support hi…
-
Actually, the radix with your setting is 8. Each layer has a different radix, and the "radix: 2" in the log means the radix of the last layer, so please disregard that number. In your case, you set the parameter "img_preprocess_method": "k…
-
Hi Jerome, the latest version of the toolchain is v0.12.1; you can convert and compile the yolov3 model on it. Please refer to the instruction manual (http://doc.kneron.com/docs/#toolchain/manual_520/). The toolchain only supports opset = 9 now, opset…
-
https://www.kneron.com/forum/discussion/comment/144#Comment_144 Hi WenTzu, what kind of image format are you using? Common formats like jpg and png are usable. Or could you post your json settings and onnx to clarify the issue?
-
Hi Jerome, there are some version requirements in the toolchain. Take PyTorch as an example: please check that the versions are "pytorch = 1.4" and "onnx = 1.4". When exporting the model to ONNX from PyTorch, make sure the opset version equals 9. Y…
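For example, an export call that pins the opset to 9 could look like the sketch below; the tiny placeholder network, the input shape, and the output file name are assumptions to be replaced with your own model:

```python
# Sketch of exporting a PyTorch model to ONNX with opset 9.
# The placeholder network, dummy input shape, and file name should be
# replaced with your own model and expected input size.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()
dummy_input = torch.randn(1, 3, 224, 224)

# opset_version=9 is the part that matters for the KL520 toolchain.
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=9)
```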
-
Hi Alfred, after reviewing your model, there are a few things the KL520 does not support, so you will need to modify the model. Please refer to the steps below. https://www.kneron.com/forum/uploads/051/1L1ZIOX1K1LU.png The opset version used by the model is v8 (as marked by the red circle on the right of the image above), but the KL520 currently only supports v9, so please select v9 when converting to ONNX. The KL520 does not directly sup…
-
Yes, the model must first be converted into the KL520-specific *.nef (called *.bin in the older examples) before it can run on the KL520. The conversion procedure is described in the documentation center http://doc.kneron.com/docs/#toolchain/manual_520/. After successfully converting the model to .onnx, please continue to follow the instructions in 3.2 FpAnalyser, Compiler and IpEvaluator to…
-
Yes, that's what I'm getting at. If you want to get the same model result, please compute the softmax layer on the inf_res from kdp_dme_retrieve_res(); the prediction scores will be normalized after applying softmax.
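A minimal sketch of applying softmax on the host side, assuming inf_res is a 1-D array of class scores retrieved with kdp_dme_retrieve_res():

```python
# Apply softmax on the host to the raw scores returned by the device,
# since the Softmax layer was cut from the model before compiling.
# "inf_res" is assumed to be a 1-D array of class scores.
import numpy as np

def softmax(x):
    x = np.asarray(x, dtype=np.float32)
    e = np.exp(x - np.max(x))   # subtract max for numerical stability
    return e / np.sum(e)

probs = softmax(inf_res)
print("predicted class:", int(np.argmax(probs)), "score:", float(np.max(probs)))
```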
-
The score range of the final prediction should be the same as in your original model. There is usually a "softmax" layer at the end of MobileNetV1, but the KL520 doesn't support "softmax", so I believe that "softmax" was r…
-
The hardware uses RGBA8888 for 4 channels, but if you use the input format RGB565, you will get better accuracy with a model trained on RGB565 images. For better accuracy, there are two ways you can try. Modify the parameter NPU_…
-
Here is the format of RGBA8888; please check whether your data format is correct. https://www.kneron.com/forum/uploads/518/C2O5P2QAWFAH.png And may I ask what your "img_preprocess_method" setting is in input_params.json in the toolchain?
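A quick sketch of packing an RGB image into a 4-byte-per-pixel RGBA8888 buffer; the channel order and the padding value for the unused byte are assumptions that should be confirmed against the linked figure:

```python
# Sketch: pack an HxWx3 RGB image into a 4-byte-per-pixel RGBA8888 buffer.
# The R,G,B,A byte order and the 0 padding for the unused fourth byte are
# assumptions; confirm them against the format figure linked above.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("input.jpg").convert("RGB"), dtype=np.uint8)
h, w, _ = img.shape

rgba = np.zeros((h, w, 4), dtype=np.uint8)
rgba[..., :3] = img          # copy R, G, B
rgba[..., 3] = 0             # unused fourth byte

buffer = rgba.tobytes()      # 4 bytes per pixel, row-major
print(len(buffer), "bytes for", w, "x", h, "pixels")
```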