KL730 simulator: inference-time spikes with cascaded model

We are facing an inference issue with a cascaded model: inference times spike intermittently. For example, model 3 jumps from an 8 ms inference time to 300 ms.
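A minimal sketch of how such spikes can be logged for a bug report: time each inference call and record runs that exceed a threshold. Here `run_inference` is a hypothetical stand-in for the actual cascaded-model call, not a Kneron SDK function.

```python
import time

def find_latency_spikes(run_inference, n_runs=100, spike_threshold_ms=100.0):
    """Time repeated inference calls and collect runs above a spike threshold.

    run_inference: hypothetical callable performing one inference pass.
    Returns a list of (run_index, elapsed_ms) tuples for the spiking runs.
    """
    spikes = []
    for i in range(n_runs):
        start = time.perf_counter()
        run_inference()
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if elapsed_ms > spike_threshold_ms:
            spikes.append((i, elapsed_ms))
    return spikes
```

Attaching output like this (which runs spike, and by how much) makes the report easier to reproduce.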


Comments

  • edited June 2

    Hi Sathish,

    Could you provide your models and code so we could take a better look at this issue? For questions on KL730, you could also contact the sales representative you purchased the KL730 from. They should be able to provide you with the contact person for technical assistance. Thank you!

    Edit: Also, your model IDs need to be set above 32767, since IDs at or below 32767 are reserved for Kneron internal usage.
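    The reserved-ID rule above can be expressed as a small check. This is a hypothetical helper, not part of the Kneron SDK; only the 32767 boundary comes from the comment above.

    ```python
    # IDs at or below this value are reserved for Kneron internal usage.
    KNERON_RESERVED_MAX_ID = 32767

    def is_valid_user_model_id(model_id: int) -> bool:
        """Return True if model_id is safe to assign to a user model."""
        return model_id > KNERON_RESERVED_MAX_ID

    print(is_valid_user_model_id(32767))  # reserved -> False
    print(is_valid_user_model_id(40000))  # user range -> True
    ```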
