Beyond offering highly competitive NPU IP, AsicAI provides a one-stop AI solution platform, AsicAI™ Link, which spans AI model design, IP development, and SoC integration. The platform lets customers integrate AI functionality into their chips rapidly, significantly improving deployment efficiency and accelerating the widespread, practical adoption of AI technology.
At the core of this one-stop service is a comprehensive suite of software tools and application programming interfaces (APIs) that simplifies AI application development from concept to realization through a clear, efficient, and standardized workflow. Key features and advantages include:
- AI Model Hardware-Friendliness Verification: automatically analyzes the operators in AI models slated for deployment and confirms that they make full use of AsicAI's NPU IP; a hypothetical operator-level check of this kind is sketched after this list.
- Support for Mainstream Model Frameworks: supports popular frameworks such as PyTorch, TensorFlow, and Caffe, eliminating lengthy model conversion and enabling easy migration to the hardware platform; see the framework hand-off example after this list.
- Ultimate Performance and Resource Optimization: provides model lightweighting and performance optimization, including model quantization, which reduces latency and shrinks model size while maintaining inference accuracy. By converting floating-point operations to more efficient integer operations, our empirical data shows that model size can be reduced by 2-4x on average and inference speed boosted by 4-16x, significantly lowering power consumption and memory usage and yielding faster, more energy-efficient AI applications; a generic quantization sketch follows this list.
- Customized Model Compilation: offers two compilation methods that balance performance and versatility for different application scenarios. General Compilation requires no pre-compilation for specific hardware, so models can be compiled and executed in real time, which suits general applications. Advanced Compilation targets end-user applications with stricter latency or memory constraints, where AsicAI performs advanced, customized compilation so that models achieve superior performance and efficiency at inference time; an illustrative compile-mode sketch follows this list.
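The sketch below shows, under stated assumptions, what an operator-level hardware-friendliness check can look like: it walks an ONNX graph and flags operators outside a supported set. The SUPPORTED_OPS list, the check_hardware_friendliness function, and the model.onnx path are hypothetical placeholders, not AsicAI Link's actual operator coverage or API.

```python
# Illustrative sketch only: SUPPORTED_OPS and this checker are hypothetical
# stand-ins for the kind of analysis AsicAI Link performs; they are not the
# platform's real operator list or API.
import onnx

# Hypothetical subset of operators assumed to map directly onto the NPU.
SUPPORTED_OPS = {"Conv", "Relu", "MaxPool", "Add", "Gemm", "Softmax",
                 "BatchNormalization", "GlobalAveragePool", "Concat"}

def check_hardware_friendliness(model_path: str) -> list[str]:
    """Return operators in the model that would not run on the NPU."""
    model = onnx.load(model_path)
    unsupported = []
    for node in model.graph.node:
        if node.op_type not in SUPPORTED_OPS:
            unsupported.append(f"{node.name or '<unnamed>'}: {node.op_type}")
    return unsupported

if __name__ == "__main__":
    issues = check_hardware_friendliness("model.onnx")  # example path
    if issues:
        print("Operators needing attention before deployment:")
        print("\n".join(issues))
    else:
        print("All operators map onto the NPU's supported set.")
```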
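As one illustration of framework interoperability, the snippet below exports a standard PyTorch model to ONNX, a common interchange format; the choice of model (mobilenet_v2) and file names are arbitrary examples, and the exact path by which AsicAI Link ingests PyTorch, TensorFlow, or Caffe models is not shown here.

```python
# Generic framework hand-off sketch: export a PyTorch model to ONNX so that
# downstream tooling can consume a framework-neutral graph.
import torch
import torchvision.models as models

model = models.mobilenet_v2(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # input shape expected by the network

torch.onnx.export(
    model,
    dummy_input,
    "mobilenet_v2.onnx",
    input_names=["input"],
    output_names=["logits"],
    opset_version=13,
)
print("Exported mobilenet_v2.onnx")
```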
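To make the float-to-integer conversion concrete, the following is a minimal, generic PyTorch post-training dynamic quantization sketch that stores Linear weights as int8 and compares on-disk size; it stands in for, and does not reproduce, AsicAI Link's own quantizer or its measured 2-4x size and 4-16x speed figures.

```python
# Generic post-training quantization sketch, not AsicAI Link's quantizer.
import os
import torch
import torch.nn as nn

# Small example network standing in for a model slated for deployment.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

# Dynamic quantization: Linear weights are stored as int8 instead of fp32.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_on_disk(m: nn.Module, path: str) -> float:
    """Serialize the state dict and report its size in MB."""
    torch.save(m.state_dict(), path)
    size_mb = os.path.getsize(path) / 1e6
    os.remove(path)
    return size_mb

print(f"fp32 model: {size_on_disk(model, 'fp32.pt'):.2f} MB")
print(f"int8 model: {size_on_disk(quantized, 'int8.pt'):.2f} MB")
```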
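The pseudo-SDK below only sketches how the two compilation paths might be exposed to a developer; the CompileMode enum, CompileConfig dataclass, and compile_model stub are invented for illustration and are not AsicAI's published interface.

```python
# Hypothetical interface sketch, NOT AsicAI's published API: it only contrasts
# the general (runtime) and advanced (ahead-of-time, constraint-driven) paths.
from dataclasses import dataclass
from enum import Enum, auto

class CompileMode(Enum):
    GENERAL = auto()   # no hardware-specific pre-compilation; compiled at load time
    ADVANCED = auto()  # ahead-of-time, tuned to latency/memory budgets

@dataclass
class CompileConfig:
    mode: CompileMode
    max_latency_ms: float | None = None   # only meaningful for ADVANCED
    memory_budget_mb: int | None = None   # only meaningful for ADVANCED

def compile_model(model_path: str, config: CompileConfig) -> str:
    """Stub that returns the kind of artifact a real toolchain might produce."""
    if config.mode is CompileMode.GENERAL:
        # Deployed as-is; the runtime compiles and executes it on the fly.
        return f"{model_path}.portable"
    # Advanced path: offline optimization against explicit constraints.
    return (f"{model_path}.npu_bin "
            f"(latency<={config.max_latency_ms}ms, mem<={config.memory_budget_mb}MB)")

# General compilation for a typical application:
print(compile_model("detector.onnx", CompileConfig(CompileMode.GENERAL)))
# Advanced compilation for a latency- and memory-constrained end product:
print(compile_model("detector.onnx",
                    CompileConfig(CompileMode.ADVANCED,
                                  max_latency_ms=5.0, memory_budget_mb=32)))
```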
This service has attracted several leading listed IC design companies for technical collaboration and solution adoption, and two of these clients have already moved our solutions into mass production. Notably, an AI chip using our technology for Microsoft's Human Presence Detection application has been adopted by a leading international laptop manufacturer, with products expected to launch by the end of 2025. We have also secured a cutting-edge 7nm-process AI chip design service project, providing a complete turnkey solution spanning NPU IP design and integration through full System-on-Chip (SoC) design.