How do I run inference with the ONNX model?
Hi,
I am quite new to Kneron. I have a model developed with Keras 2.4 (TensorFlow backend). I have not connected the chip yet; for now I am learning the Docker image, and my mini goal is to build a simple ONNX model that can make predictions inside the Docker image. Here are the steps I have taken so far:
1. Convert the model into TFLite.
2. Convert the TFLite model into ONNX using the following functions (see the sketch after this list):
   - ktc.onnx_optimizer.tflite2onnx_flow
   - ktc.onnx_optimizer.onnx2onnx_flow
   - onnx.save
3. Evaluate the model using ktc.ModelConfig and km.evaluate().
4. Finally, predict an outcome using the following line:
   inf_results = ktc.kneron_inference(input_data, onnx_file=_onnx_path, input_names=["data_out"])
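In case it helps, here is roughly what my script looks like. The paths, the model ID/version/platform passed to ktc.ModelConfig, the dummy input shape, and the input name are placeholders from my setup, not values taken from the docs:

```python
import numpy as np
import onnx
import ktc

# Step 1 was done earlier with the standard TF2 converter, roughly:
#   converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)

# Step 2: TFLite -> ONNX (paths are placeholders)
onnx_model = ktc.onnx_optimizer.tflite2onnx_flow("/data1/my_model.tflite")
onnx_model = ktc.onnx_optimizer.onnx2onnx_flow(onnx_model)
onnx.save(onnx_model, "/data1/my_model.onnx")

# Step 3: evaluate (model ID, version, and platform are example values)
km = ktc.ModelConfig(32769, "0001", "720", onnx_model=onnx_model)
eval_result = km.evaluate()

# Step 4: inference with a dummy input matching my model's input shape
input_data = [np.zeros((1, 32, 32, 3), dtype=np.float32)]
inf_results = ktc.kneron_inference(
    input_data,
    onnx_file="/data1/my_model.onnx",
    input_names=["data_out"],  # this is the part I am unsure about
)
```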
My problem is that I can't properly make a prediction yet (it throws errors). I was not sure what "data_out" meant in that last call (the docs weren't clear about it, so I assumed it is the input layer name I assigned when creating the Keras model). When I run the LittleNet example, I can see the prediction properly, but strangely, LittleNet has some JSON files that my model does not. Note that I am quite confident about the input preprocessing, so that does not seem to be the issue. I am happy to elaborate and keep the conversation going; can someone please verify whether my initial steps are correct?
Thanks
Kaveen
Comments
Hello,
Yes, input_names=["data_out"] refers to the input layer name. You can check it at https://netron.app/.
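If you prefer to check it in code rather than in Netron, something like this should print the actual input names of your ONNX graph (the path is a placeholder):

```python
import onnx

model = onnx.load("/data1/my_model.onnx")  # placeholder path

# On older opsets, initializers (weights) can also appear in graph.input,
# so exclude them to keep only the true network inputs.
initializers = {init.name for init in model.graph.initializer}
input_names = [inp.name for inp in model.graph.input if inp.name not in initializers]
print(input_names)  # pass these to ktc.kneron_inference(..., input_names=...)
```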
Could you share the error message you encountered so we can help debug?
And here is the list of supported operators; you can check whether all the operators in your model are supported:
2.3 Supported operators http://doc.kneron.com/docs/#toolchain/manual/#2-toolchain-docker-overview
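A quick way to collect the operators your model actually uses, so you can compare them against that table (the path is a placeholder):

```python
import onnx

model = onnx.load("/data1/my_model.onnx")  # placeholder path

# Gather every operator type used in the graph, then compare this set
# against the supported-operators table in the toolchain manual.
op_types = sorted({node.op_type for node in model.graph.node})
print(op_types)
```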