I want to write inference code for the /workspace/examples/LittleNet/LittleNet.onnx model present in the toolchain, but I am confused because I see both ISI and DME modes. Which mode should I use, and which KDP wrapper APIs are needed to write the inference code?
Comments
The Python examples are a good starting point for learning the APIs for configuration, inference, and retrieving results.
What's the difference between ISI and DME mode?
On KL520, ISI means the model is loaded from flash, while DME means the model is loaded from the host over USB.
However, KL720 removed the DME API; instead, use kdp_isi_config to set the model location:
param: CONFIG_USE_FLASH_MODEL = bit(0)  # 1: use flash model, 0: use model in memory
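To make the bit-flag parameter above concrete, here is a minimal Python sketch of composing the flags word for the ISI configuration call. Only CONFIG_USE_FLASH_MODEL = bit(0) comes from the answer above; the helper names (bit, build_isi_config_flags) are illustrative, not part of the official KDP wrapper API.

```python
def bit(n: int) -> int:
    """Return an integer with only bit n set."""
    return 1 << n

# Bit 0 selects the model source on KL720 (per the answer above):
#   1 -> run the model stored in flash
#   0 -> run a model already loaded into device memory
CONFIG_USE_FLASH_MODEL = bit(0)

def build_isi_config_flags(use_flash_model: bool) -> int:
    """Compose the flags word to pass to the ISI configuration call
    (e.g. kdp_isi_config on KL720). Name is hypothetical."""
    flags = 0
    if use_flash_model:
        flags |= CONFIG_USE_FLASH_MODEL
    return flags

# Example: flash model enabled -> flags word 0x1
print(hex(build_isi_config_flags(True)))   # 0x1
print(hex(build_isi_config_flags(False)))  # 0x0
```

Other configuration bits, if any, would be OR-ed into the same flags word before calling the configuration API.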