Ethon Lin
About
- Username: Ethon Lin
- Joined
- Visits: 1,232
- Last Active
- Roles: Member, ModeratorsKL520, ModeratorsKL720, ModeratorsAIModel
 
Comments
- 
                About your question: 1. Did you provide any API to get the AI Dongle's run-time hardware usage information? What kind of information do you want to get? You can find all the corresponding APIs at the following link. http://doc.kneron.com/docs/#host_lib_1.0.…
 - 
                Hello, As you mentioned, the result of RGB565 was correct but RGBA8888 (where you bypassed preprocessing and implemented it yourself) was wrong. I think the issue could be related to the image format. Here are the formats of RGB565, RGBA8888, and bypass preprocessing: https:/…
 - 
                The *.bie is a quantized model; you can run it on the End-to-End Simulator. http://doc.kneron.com/docs/#toolchain/python_app/app_flow_manual/ The *.nef is a model format for the Kneron dongle (KL520/KL720), which is compiled from the *.bie. To run inferen…
 - 
                Once a run fails with out-of-memory, the system may still be left in a bad memory state. Please exit the Docker environment, log in again, and retry with a single image.
 - 
                Sorry, but I don't understand your question. Could you share more details about your issue, including the command you used, the full error message, and so on? It would also be helpful if you could provide your ONNX file. Did you successfully get the convert…
 - 
                Hi Raef, I think the issue was caused by the input dimension 1x12x160x160. Kneron's toolchain doesn't support 12-channel input and runs image preprocessing with 3 channels as the default setting. That's why the error message said it got 76800 lines ( 3x160 …
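The line-count arithmetic in that answer can be sketched in plain Python. The helper name `expected_lines` is illustrative, not part of any Kneron API; it just shows why the default 3-channel preprocessing expects 76800 values for a 160x160 input rather than the 12-channel count.

```python
# Hypothetical helper illustrating the line-count check described above.
# The toolchain's default preprocessing assumes 3 channels, so for a
# 160x160 input it expects 3 * 160 * 160 values in the dumped input file.

def expected_lines(channels: int, height: int, width: int) -> int:
    """Number of values the dumped input text file should contain."""
    return channels * height * width

# Default 3-channel preprocessing for a 160x160 image:
print(expected_lines(3, 160, 160))   # 76800, matching the error message
# A 12-channel tensor of the same spatial size would instead need:
print(expected_lines(12, 160, 160))  # 307200
```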
 - 
                Hi Angie, I checked your model and found that the issue is caused by the dimension 16x3x224x224. We treat the first dimension, "16", as the batch_size, and the KL520 / KL720 only support models whose batch_size is 1. (e.g. 1x3x224x224) That's why …
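A minimal sketch of that batch-size constraint, using only plain Python (the function `force_batch_one` is a hypothetical name, not a toolchain API): before compiling for KL520/KL720, the leading dimension of the input shape must be 1.

```python
# Hedged sketch of the constraint described above: KL520/KL720 only accept
# batch_size 1, so the first entry of the input shape must be 1.

def force_batch_one(shape: tuple) -> tuple:
    """Return the shape with its batch dimension (the first entry) set to 1."""
    if shape[0] != 1:
        return (1,) + tuple(shape[1:])
    return tuple(shape)

print(force_batch_one((16, 3, 224, 224)))  # (1, 3, 224, 224)
print(force_batch_one((1, 3, 224, 224)))   # already valid, unchanged
```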
 - 
                Hi Tim, Could you please provide the ONNX or H5 file for debugging?
 - 
                Have you checked the image you set in the JSON? It seems there are some issues with the model input. Input file dimension is not correct: expected: 2408448, but got: 150528 lines. /workspace/.tmp/input.1/ILSVRC2012_val_00000003.txt
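For reference, the two counts in that error message factor cleanly, which hints at the kind of mismatch involved. This is plain arithmetic, not a claim about the poster's actual model.

```python
# 150528 values is exactly one 3x224x224 image, while 2408448 is 16 of
# them, which would be consistent with a model input that carries a batch
# dimension the provided input file does not.
got = 3 * 224 * 224        # values in one RGB 224x224 image
expected = 16 * got        # what the model input appears to require
print(got, expected, expected // got)  # 150528 2408448 16
```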
 - 
                Hi, If you want to remove the softmax op, you can refer to section 3.1.7 Model Editor in the Toolchain Manual http://doc.kneron.com/docs/#toolchain/manual/. Then run onnx2onnx again to check your model after cutting the softmax layer.
 - 
                I think the error was caused by a preprocessing error. Did you revise the customized part in img_preprocess.py? "img_preprocess_method": "customized" <= this setting means you want to customize your own preprocessing. Ple…
 - 
                I think the problem is that you didn't specify the path of the model you want to run inference on. Please run "ex_kdp2_generic_inference" with the arguments -p, -d, and -m to specify the path of your NEF. And step 3 of your list is for another purpose; it's …
 - 
                There are now only two ways to do batch compile. See section 3.5 Batch-Compile at http://doc.kneron.com/docs/#toolchain/manual/#3-toolchain-scripts-usage. This step requires first running 3.2 FpAnalyser, Compiler and IpEvaluator to generate the *.bie, and then filling in the id, version…  in input_batch_compile.json
 - 
                Your current code flow is: inf_res, paf = ktc.kneron_inference(img_data, nef_file=compile_result, radix=ktc.get_radix([img_data])) Please change it to: inf_res = ktc.kneron_inference(img_data, nef_file=compile_result, radix=ktc.get_radix([i…
 - 
                Because kneron_inference has only one output, you would get an error if you use two variables to receive its output. The output of kneron_inference is the results of the model's output nodes. For example, the output node of the model "output.onnx"…
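The unpacking failure described there is ordinary Python behavior, not anything Kneron-specific. A minimal stand-in (the function `fake_inference` is hypothetical) reproduces it: assigning a single-element return value to two variables raises ValueError.

```python
# Plain-Python illustration of the unpacking error described above.

def fake_inference(data):
    """Stand-in for an inference call that returns a single output list."""
    return [data]  # one element: the results of the model's output node

try:
    inf_res, paf = fake_inference([1, 2, 3])  # two targets, one element
except ValueError as e:
    print("unpack failed:", e)

inf_res = fake_inference([1, 2, 3])           # correct: one target
print(inf_res)  # [[1, 2, 3]]
```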
 - 
                Regarding NEF generation: the steps in Toolchain v0.14.1 are the new method, while section 6.3 of http://doc.kneron.com/docs/#520_1.4.0.0/getting_start/#520_1.4.0.0/getting_start/ is the old method, which requires correctly filling in two JSON configurations before the NEF can be generated. Both methods give the same result, so either one is fine. As for fw_scpu.bin and fw_ncp…
 - 
                Yes, model_id 19 is for Kneron's tiny YOLO v3 example. But the NEF is just 88 KB, so I guess the model you generated is not tiny YOLO v3. Was the NEF file generated by the commands "fpAnalyserCompilerIpevaluator_520.py" and "batchCompil…
 - 
                It seems the firmware system hit a problem after loading your NEF model. How many models are in your NEF file? Is there any model with model_id 19 in the NEF file? And how large is your NEF file?
 - 
                The KL520 can be used with the RK3399, but the KL520's USB commands are transferred via libusb. https://github.com/libusb/libusb/releases/tag/v1.0.23 Users should make sure libusb works in their own environment.
 - 
                Hi Ellen, sorry, the resolution of the screenshot you provided is too low to read the information clearly. Would it be convenient for you to provide your epxxx.h5 directly so we can check?
 - 
                Yes, the python-api is a way to simulate the inference result of the KL520/KL720. You can print the inference result to your screen as follows, or save it to a specific file. https://www.kneron.com/forum/uploads/820/9IRTTKERXGPR.jpg
 - 
                No, the KL520 is controlled through the Cortex-M4, which is a 32-bit architecture.
 - 
                Is the above message complete? There should be more output indicating whether you encountered an error or not. Alternatively, you could provide your model.onnx to make checking the issue easier.
 - 
                It looks like something is damaged; a detailed factory analysis would be needed to identify the exact fault. We will help you with the replacement process. Please write to Kneo.marketplace@kneron.us with your contact information, the time and place of purchase, and a brief description of the fault, and a specialist will assist you with the procedure. Thank you.
 - 
                Hi, The latest version of the toolchain updated the opset from 9 to 11. If you want to run on the latest version, please use the python script "/workspace/libs/ONNX_Convertor/optimizer_scripts/onnx1_4to1_6.py" to upgrade the opset version.…
 - 
                A quick question: after execution, the complete message should include both SCPU and NCPU, as follows: update SCPU firmware from file ../../app_binaries/KL520/kdp2//fw_scpu.bin update SCPU firmware OK update NCPU firmware from file ../../app_binaries/KL520/kdp2//fw_ncpu.bin update NCPU firmware OK …
 - 
                The error message means the ONNX version is too new to complete the toolchain flow. Please use Netron to open model.onnx and click the input layer to make sure the format is "ONNX v4". https://www.kneron.com/forum/uploads/473/OBO2U36TH…
 - 
                I think the issue could be caused by a bug in the old version; please update to the latest version and try again. Please refer to the Toolchain Manual http://doc.kneron.com/docs/#toolchain/manual/ for how to pull the latest version of the toolchain. Download the lat…
 - 
                Besides opset 9, there are other version constraints in the toolchain. The toolchain uses PyTorch v1.2 and ONNX version 1.4. If you want to use a higher PyTorch version to export the ONNX file, please make sure the ir_version is 4. We will support hi…
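The version pairing mentioned there (ONNX 1.4, ir_version 4, opset 9) can be checked before handing a model to the toolchain. The names below are illustrative, not part of any Kneron API; they just encode the constraints the comment states.

```python
# Hedged sketch of the version constraints mentioned above: the toolchain
# version discussed in this comment targets ONNX 1.4 models, which use
# ir_version 4 and opset 9.

SUPPORTED_IR_VERSION = 4
SUPPORTED_OPSET = 9

def is_supported(ir_version: int, opset: int) -> bool:
    """Check an exported model's versions against the toolchain's targets."""
    return ir_version == SUPPORTED_IR_VERSION and opset == SUPPORTED_OPSET

print(is_supported(4, 9))   # True
print(is_supported(6, 11))  # False: re-export or downgrade first
```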
 - 
                Actually, the radix with your setting is 8. Each layer has a different radix, and "radix: 2" in the log is the radix of the last layer; please disregard that number. In your case, you set the parameter "img_preprocess_method": "k…