The Ingenic AIE architecture balances computational efficiency and flexibility at the architectural level. Building on a foundation of high computing power, it extends programmability to keep pace with flexible, rapidly evolving neural network structures. Low-bit quantization technology further strengthens Ingenic AIE's low-power, low-bandwidth AI computing capabilities.
To match the computational characteristics of mainstream deep neural networks, the Ingenic AIE Core provides instructions of varying computational dimensions and intensities. It effectively accelerates compute-intensive operations such as convolution and pooling while retaining sufficient flexibility.
The Ingenic AIE Core supports low-bit quantization, offering different levels of computing power at different precisions to suit a range of workloads and scenarios.
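To make the precision/throughput trade-off concrete, the minimal sketch below quantizes weights and activations to signed 8-bit and 4-bit integers and carries out the multiply-accumulate entirely in integer arithmetic, only rescaling to floating point at the end. This is a generic illustration, not Ingenic's quantization toolchain or the AIE instruction set; the helper `quantize_symmetric` is hypothetical.

```python
import numpy as np

def quantize_symmetric(x, num_bits):
    """Symmetric linear quantization to signed num_bits integers (illustrative)."""
    qmax = 2 ** (num_bits - 1) - 1                        # 127 for 8-bit, 7 for 4-bit
    scale = max(float(np.max(np.abs(x))) / qmax, 1e-12)   # avoid division by zero
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int32)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal(64).astype(np.float32)   # stand-in for one filter's weights
a = rng.standard_normal(64).astype(np.float32)   # stand-in for an activation patch

for bits in (8, 4):
    qw, sw = quantize_symmetric(w, bits)
    qa, sa = quantize_symmetric(a, bits)
    acc = int(np.dot(qw, qa))          # integer-only multiply-accumulate
    approx = acc * sw * sa             # rescale once at the end
    print(f"{bits}-bit result: {approx: .4f}   float reference: {np.dot(w, a): .4f}")
```

Halving the operand width halves the memory traffic per multiply-accumulate, which is the kind of saving that precision-dependent, tiered computing-power figures (as in the table below) typically reflect.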
| Name | Specifications | Related Chips |
| --- | --- | --- |
| AIE 1 | Runs at the same frequency as the main CPU<br>0.5T~8T @ 1 GHz tiered computing power<br>Supports low-bit quantization<br>Supports hybrid quantization<br>NN core with the highest energy-efficiency ratio<br>Built-in CV acceleration unit<br>Built-in shared memory, up to 1 MB available | T40 |
| AIE 2 | Runs at the same frequency as the main CPU<br>1T~16T @ 1 GHz tiered computing power<br>Supports low-bit quantization<br>Supports hybrid quantization<br>NN core with the highest energy-efficiency ratio<br>Built-in CV acceleration unit<br>Built-in shared memory, up to 1 MB available | |
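As a rough, back-of-the-envelope reading of the computing-power figures above, the peak operations-per-second numbers can be related to an equivalent multiply-accumulate (MAC) count at a given clock. The sketch below assumes the common convention of counting one MAC as two operations; it is a generic calculation, not a statement about the actual AIE datapath.

```python
def equivalent_macs(peak_tops, clock_ghz=1.0, ops_per_mac=2):
    """Equivalent MAC-unit count needed to reach peak_tops at clock_ghz (illustrative)."""
    return (peak_tops * 1e12) / (clock_ghz * 1e9 * ops_per_mac)

for tops in (0.5, 8.0, 16.0):
    print(f"{tops:>4} TOPS @ 1 GHz  ~  {equivalent_macs(tops):,.0f} equivalent MAC units")
```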