Magik Description
  • The Magik platform supports both quantization-aware training and post-training quantization, focusing on full-stack development of on-device AI. It integrates quantized model training, layer-fusion optimization, and efficient on-device deployment. Since version 1.0 in 2020, the Magik quantization platform has kept pace with model quantization and deployment practice: through in-depth optimization iterations it now provides one-click quantization; its ModelZoo repository has grown to cover speech, vision, Transformer, and other domains; and its AI compiler further improves operator efficiency while reducing bandwidth requirements and memory footprint.
  • Highlights

    1. Quantization bit widths of 16, 12, 8, and 4 bits, supporting both post-training quantization and quantization-aware training.

    2. The network structure interface is simple and easy to use, enabling one-click quantization without modifying the client's network structure.

    3. A rich ModelZoo repository covering the entire process of model training, conversion, and on-board deployment.

    4. The AI compiler fully explores the potential of operators, improving operator performance and reducing bandwidth requirements.

    5. Support for quantization sparsity, reducing model size.


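The bit-width and sparsity features above can be illustrated with a minimal, generic sketch. This is plain NumPy and not Magik's actual API; the function names and the symmetric per-tensor scheme are illustrative assumptions. It shows what quantizing weights to a chosen bit width and applying magnitude-based unstructured sparsity look like in principle:

```python
import numpy as np

def quantize_symmetric(weights, bits=8):
    # Symmetric per-tensor post-training quantization (illustrative sketch).
    # For 8-bit, integers span [-128, 127]; scale maps the largest-magnitude
    # weight onto qmax.
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from integers and the stored scale.
    return q * scale

def prune_by_magnitude(weights, sparsity=0.5):
    # Unstructured sparsity sketch: zero out the smallest-magnitude weights.
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.sort(np.abs(weights.ravel()))[k - 1]
    return weights * (np.abs(weights) > threshold)

w = np.array([0.9, -0.42, 0.07, -1.27], dtype=np.float32)
q, s = quantize_symmetric(w, bits=8)   # integers plus one float scale
w_hat = dequantize(q, s)               # reconstruction error bounded by scale
sparse_w = prune_by_magnitude(w, sparsity=0.5)
```

Lower bit widths (4-bit) shrink storage further at the cost of a coarser scale, which is why quantization-aware training is typically preferred there over post-training quantization.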