DeepMentor Inc.
DeepTransformCore
Candidate for: EE Awards - IP/Processor
DeepMentor has launched a Transformer accelerator optimized for language and vision tasks. This solution leverages our patented miniaturization technology to dramatically reduce memory-bandwidth requirements. It integrates an innovative encoder–decoder multimodal IP architecture that flexibly supports standard DRAM configurations and multiple FPGA platforms, significantly accelerating verification. The core supports low-latency (under 2 seconds) speech-to-speech multimodal LLM applications. DeepTransformCore provides exceptional integration flexibility to meet diverse application needs, helping products move from prototype to deployment faster.

Leveraging a mature 22 nm process node for proven yield and lower mask costs, we integrate two or four DDR4-2666 dies to ensure ample memory bandwidth, implement SPI for MCU command and control, and embed an I²S interface for high-quality audio streaming. This co-design makes the AI SoC the market's most cost-effective accelerator; as a companion chip, it seamlessly upgrades existing MCUs with AI capabilities.
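To illustrate the SPI command-and-control path described above, the sketch below shows how a host MCU might pack a command frame for the accelerator. The frame layout, opcode value, and XOR checksum are purely hypothetical assumptions for illustration; the actual DeepTransformCore protocol is vendor-defined and not specified here.

```python
# Hypothetical MCU-side SPI command frame for an AI accelerator.
# Layout and opcode are illustrative assumptions, not the real protocol:
#   [opcode (1B)][payload length (1B)][payload][XOR checksum (1B)]

OPCODE_START_INFERENCE = 0x01  # hypothetical opcode

def build_spi_frame(opcode: int, payload: bytes) -> bytes:
    """Pack a command frame with a 1-byte XOR checksum over all prior bytes."""
    if len(payload) > 255:
        raise ValueError("payload too long for 1-byte length field")
    body = bytes([opcode, len(payload)]) + payload
    checksum = 0
    for b in body:
        checksum ^= b  # XOR checksum over opcode + length + payload
    return body + bytes([checksum])

# Example: start inference on a hypothetical model slot 0x0010.
frame = build_spi_frame(OPCODE_START_INFERENCE, b"\x00\x10")
```

The resulting byte string would then be handed to the MCU's SPI driver (e.g. a HAL transmit call) for transfer to the accelerator.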

DeepTransformCore is ideal for edge AI scenarios such as voice-controlled home appliances, interactive toys that understand and speak, natural-language ordering kiosks, in-car voice assistants in smart cockpits, and conversational educational robots. Our on-device IP eliminates cloud dependency and latency, delivers accurate speech recognition and synthesis, and preserves user privacy—all with minimal power and cost.

Malicious vote manipulation is expressly forbidden in this voting event. The organizers reserve the right to evaluate the fairness and accuracy of the voting results. AspenCore retains the authority to interpret the rules of this event.