
kidd

About

Username
kidd
Joined
Visits
68
Last Active
Roles
Member, Moderators, AIModel

Comments

  • Thanks for the question. I see that your input image size is 100x100. What is your model input size? If you select IMAGE_FORMAT_BYPASS_PRE, it won't do any resize or padding; you have to provide the exact RGBA with the input size of th…
  • If you want to convert a Keras model, you can save it as hdf5 instead of h5 and JSON; then the script should work fine. Here is the example from the toolchain manual in the document center: python /workspace/scripts/convert_model.py keras /docker_mount…
  • It looks like your optimizer doesn't convert the split correctly; it creates an extra branch of slices. What was your intention for the split? To get the first half of the channels? By the way, the KL520 doesn't support slice either. https://www.kneron.com/foru…
  • Which version of the toolchain are you using? And which device is this for, KL520 or KL720? I think there are two issues with the ONNX. First, your converted ONNX doesn't have shape information; this might be because you didn't run onnx2onnx.py. Second, there is a split laye…
  • Could you let us know which model you are running? And what command do you run on the host? Could you post it here? The timing breakdown is in the UART log; if you don't connect the UART, it is hard to see the inference time. We could run 100 inf…
  • Do you have a UART connected to your development board? What command do you pass to the KL720? Do you run it in parallel or non-parallel mode?
  • How do you set up the radix? The original model's input range is -2 to 1.98 (radix 6), but the new model's input range is -128 to 127 (radix 0). If you don't set the correct radix, it will cause errors in quantization.
  • Kneron quantization is different from PyTorch's method, and the quantization is tied to the hardware implementation. So to run a model on a Kneron device, users cannot use models that were already quantized on another platform. You should alw…
  • We will post a step-by-step tutorial here soon.
  • I think tensorflow2onnx might have some issues handling the input size. What I recommend is to convert the pb file into a TensorFlow Lite file first, then use the tflite2onnx script to convert to ONNX instead.
  • Hi Bob, if you take a look at the op support list in our document center, you will find that there are some ops in YOLOv4 that the KL520 doesn't support, especially the activation layers. You can replace those activations with ones that we support, su…
  • We support Darknet, and our tool supports it by moving the upsample layer to a CPU node. It should be correct, since we have verified it many times. The difference between the two images should be due to the threshold value, since the floating point and f…
  • I just tried on my end; our example (app/app1) is working as expected. Have you modified any files? Thanks.
  • Hi, I see a couple of issues when editing the ONNX graph. The first two nodes should be removed: I guess this multiply and transpose are normalization, which we can do in preprocessing because the NPU doesn't support the Mul node. I see that you might remov…
  • Could you send the binaries that have this problem? Thanks.
  • Maybe you didn't fix your input size? To use the NPU, you have to specify the exact input size for the model, and it cannot be a random size. You might have to modify your PyTorch script.
  • For your question: supposedly, it will then be divided by 2^7 to make it [0,1] on the device. But where do you set this 2^7 value? When you set input_params.json, you already tell the hardware the radix is 7, which means the decimal point is set to before …
  • I think because you selected yolo [0-1] as your input preprocess, you have to modify the DME configuration. You should modify the image_format to the following: image_format = (constants.IMAGE_FORMAT_RIGHT_SHIFT_ONE_BIT | constants…
  • The 80 classes should be the same across all examples; they are the COCO 80 classes.
  • To deploy your own model, you have to download Docker. Please follow the toolchain Docker guide for Windows. Here is the tutorial: http://doc.kneron.com/docs/#manual_520/
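The bypass-preprocess comment above (IMAGE_FORMAT_BYPASS_PRE) can be illustrated with a quick size check. This is only a sketch: the helper name is hypothetical, not part of the Kneron SDK; it just shows why a 100x100 RGBA frame only fits a model whose input is exactly 100x100.

```python
def rgba_buffer_bytes(width: int, height: int) -> int:
    """Bytes required for a raw RGBA frame: 4 bytes (R, G, B, A) per pixel."""
    return width * height * 4

# With bypass preprocessing the firmware does no resize or padding, so the
# buffer you send must match the model input size exactly.
print(rgba_buffer_bytes(100, 100))  # 40000 bytes for a 100x100 RGBA image
```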
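The radix comment above pairs two worked examples: input range -2 to 1.98 gives radix 6, and -128 to 127 gives radix 0. Assuming the usual signed 8-bit fixed-point convention (real value = integer / 2^radix), the radix is the largest shift that still fits the range into [-128, 127]. A minimal sketch under that assumption (the function name is hypothetical, not a toolchain API):

```python
def radix_for_range(lo: float, hi: float, bits: int = 8) -> int:
    """Largest radix r such that [lo, hi] fits a signed `bits`-bit
    fixed-point format, where real value = integer / 2**r."""
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    r = 0
    # Try increasing the radix while the scaled range still fits int8.
    while r < 30 and lo * 2 ** (r + 1) >= qmin and hi * 2 ** (r + 1) <= qmax:
        r += 1
    return r

print(radix_for_range(-2.0, 1.98))     # 6: -2 * 64 = -128, 1.98 * 64 ≈ 127
print(radix_for_range(-128.0, 127.0))  # 0: the range already spans int8
```

This is why a wrong radix breaks quantization: with radix 0, an input of 1.98 rounds to the integer 2, destroying almost all of the signal's precision.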
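The radix-7 comment and the IMAGE_FORMAT_RIGHT_SHIFT_ONE_BIT comment above describe the same mechanism from two sides: shifting an 8-bit pixel right by one bit makes it fit 7 fraction bits, and interpreting that value with radix 7 (dividing by 2^7) lands it in roughly [0, 1]. A sketch of that arithmetic, assuming this conventional fixed-point reading (the helper name is illustrative only):

```python
def to_radix7(pixel: int) -> float:
    """Shift an 8-bit pixel (0..255) right by one bit, then interpret the
    result as fixed point with radix 7 (real value = integer / 2**7)."""
    shifted = pixel >> 1       # 0..127, fits in a signed 8-bit register
    return shifted / 2 ** 7    # decimal point sits before the 7 fraction bits

print(to_radix7(0))    # 0.0
print(to_radix7(255))  # 0.9921875, approximately the yolo [0, 1] input range
```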