WHAT IS ON-DEVICE EDGE AI?

On-device edge AI means that AI computation and inference happen on the device itself rather than on cloud servers. Since edge devices have power and storage limitations, on-device AI inference requires striking a difficult but vital balance of performance, power, and size. We are obsessed with innovating on-device edge AI solutions that work seamlessly with cloud-based AI.
WHEN IS IT NEEDED?
When inference must happen where the data is generated and response times must be immediate, on-device edge AI is the solution. When it augments cloud-based AI, we accelerate AI everywhere.
WHY IS IT NEEDED?
Working together with cloud-based AI, on-device edge AI ensures greater privacy, delivers faster computation with low latency, and lowers the total cost of integrating AI into everyday devices.
WHO IS LEADING IT?
Our customers and partners are using our solutions as you read this sentence. We are leading the on-device edge AI movement because our solutions are not just concepts still in development.

Reconfigurable NPU

As a leader in the field of reconfigurable NPUs, Kneron has innovated in the industry by proposing dynamic memory (DMA) to improve memory-access efficiency and to dynamically support different data-precision requirements within the same neural network, enabling high-performance NPU SoC ASICs without sacrificing programmability for data-intensive algorithms. For this unique, innovative architecture and its outstanding performance, the Kneron team received the IEEE CAS 2021 Darlington Best Paper Award. Kneron's 4th-generation reconfigurable NPU can run CNN and Transformer networks simultaneously, handling both machine vision and semantic analysis with excellent compute efficiency, and providing end users with higher-performance, lower-power, and lower-cost solutions for AI applications across a wide range of end devices.
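
To make the idea of per-layer data precision concrete, here is a minimal, purely illustrative Python sketch of the kind of per-layer precision plan a compiler for a reconfigurable NPU might consume. The layer names, fields, and footprint estimate are assumptions for illustration only, not Kneron's actual toolchain or API.

  from dataclasses import dataclass

  @dataclass
  class LayerPlan:
      name: str          # layer identifier (hypothetical)
      kind: str          # "conv" (CNN) or "attention" (Transformer)
      weight_bits: int   # weight precision chosen for this layer
      act_bits: int      # activation precision chosen for this layer

  # A mixed CNN + Transformer graph with per-layer precision, reflecting the
  # idea of one NPU reconfiguring its data path layer by layer.
  plan = [
      LayerPlan("stem_conv",  "conv",      8, 8),
      LayerPlan("block1",     "conv",      4, 8),   # weights quantized harder where accuracy allows
      LayerPlan("attn_0",     "attention", 8, 16),  # attention kept at wider activations
      LayerPlan("classifier", "conv",      8, 8),
  ]

  def weight_footprint_bytes(plan, params_per_layer):
      """Rough weight-memory estimate under the chosen per-layer bit widths."""
      return sum(params_per_layer[p.name] * p.weight_bits // 8 for p in plan)

  params = {"stem_conv": 9_408, "block1": 147_456, "attn_0": 262_144, "classifier": 1_281_000}
  print(f"Estimated weight footprint: {weight_footprint_bytes(plan, params) / 1e6:.2f} MB")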
Most AI models are limited to specific applications and frameworks. Kneron's Reconfigurable Artificial Neural Network (RANN) technology adapts in real time to audio, 2D, or 3D recognition applications while remaining compatible with mainstream AI frameworks and convolutional neural network (CNN) models (see the export sketch after the lists below).
RANN Technology can:
  • Compute audio and images including 2D/3D visual recognition
  • Support AI frameworks: ONNX, TensorFlow, Keras, Caffe, PyTorch
  • Support CNN models: ResNet, GoogleNet, VGG16, LeNet, MobileNet, DenseNet, YOLO, Tiny YOLO, and more

RANN Technology helps partners:
  • Lower costs by up to 20%
  • Create commercial applications 
  • Customize edge AI to fit unique use cases
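
As one concrete example of the framework compatibility above, a partner working in PyTorch would typically export a trained CNN to ONNX before handing it to an NPU toolchain. The sketch below uses only standard PyTorch, torchvision, and ONNX export APIs; the model choice and file name are assumptions, and any Kneron-specific compilation step that would follow is not shown.

  import torch
  import torchvision

  # Load a standard CNN (MobileNetV2 here, purely as an example) in eval mode.
  model = torchvision.models.mobilenet_v2(weights=None).eval()

  # ONNX export traces the model with a dummy input of the expected shape.
  dummy_input = torch.randn(1, 3, 224, 224)
  torch.onnx.export(
      model,
      dummy_input,
      "mobilenet_v2.onnx",       # output path is an assumption for illustration
      input_names=["input"],
      output_names=["logits"],
      opset_version=13,
  )
  print("Exported mobilenet_v2.onnx")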

TOTAL SYSTEM SOLUTIONS

Kneron customizes integrated, total-system hardware + software solutions that are ideal for hardware makers and industry partners looking to accelerate on-device edge AI computing quickly and affordably.
Kneron's expertise in both hardware and software sets us apart in the edge AI industry and consistently saves our partners time, energy, and money. Kneron's total system solutions are embodied by the KL520 AI SoC, which integrates Kneron's neural network algorithms to maximize power efficiency and performance for segments such as the following (a host-side inference sketch follows the list):

  • Smart home 
  • Mobile
  • IoT
  • Security
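
To illustrate the on-device inference pattern behind these solutions, the sketch below runs the exported model locally with ONNX Runtime as a stand-in for an edge deployment, so only compact results rather than raw data need to reach the cloud. Deploying to the KL520 itself would go through Kneron's own toolchain and SDK, which are not shown here; the file and input names are assumptions carried over from the export example.

  import numpy as np
  import onnxruntime as ort

  # Load the ONNX model exported earlier (file name is an assumption).
  session = ort.InferenceSession("mobilenet_v2.onnx")
  input_name = session.get_inputs()[0].name

  # In a real edge pipeline this frame would come from a local camera or sensor.
  frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

  # Inference happens locally; only compact results (e.g., the top class id)
  # need to be sent upstream to cloud-based AI.
  logits = session.run(None, {input_name: frame})[0]
  print("Top class id:", int(np.argmax(logits)))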