Edge Time Series Processor
Efficient hardware for the revolutionary LMU
Process time series including speech, language, audio, biosignals, RF signals, network traffic, and more at the edge. Our TSP provides extremely low power usage, low latency, and low cost.
The Time Series Processor chip (TSP) is based on a revolutionary new algorithm for AI signal processing: the provably optimal Legendre Memory Unit (LMU) architecture.
Currently, product managers for electronics, cars, appliances, and IoT sensors have three options when dealing with time series data in their devices:
- Process large volumes of data in the cloud, incurring cloud compute and latency costs.
- Process large volumes of data on a CPU/GPU, paying $50-$200 for hardware that draws significant power.
- Process small amounts of data at the edge with small chips, restricting the product to feature-limited, small models.
The TSP is a groundbreaking addition to the PM’s arsenal, allowing for low-cost, power-efficient processing using large AI models for time series data.
Now, all devices can have complex, full-featured AI. For example, full-featured natural speech and language interfaces can be affordably designed into everyday devices without an expensive or inconsistent internet connection.
The TSP saves power and increases response speed, accuracy, and privacy while lowering costs to device makers and consumers.
- Design-ins underway in automotive, healthcare, and IoT.
- Production chips expected Q4 2023.
- Production chip cost is expected to be under $4 USD, compared to over $50 USD for a full CPU/GPU with memory capable of real-time speech recognition.
- 10x to 25x cost advantage over CPU/GPU.
- 10x to 100x power advantage over existing algorithms computed on CPU/GPU.
- Full speech and natural language processing at under 10 mW.
- Full software stack including AI model design and deployment. Custom and pre-trained networks available.
First production chip quantities are being allocated now. Design-in and applications development services are available now.
Contact [email protected] to start leading your market with revolutionary edge AI features for your device!
Reduce cloud costs
Reduce CPU and GPU requirements
No network required
Give your customers privacy
Legendre Memory Units
Time series problems include processing of language, audio, biosignals, RF, network traffic, and more: anything where the order of the data matters. We have invented Legendre Memory Units (LMUs), which are provably optimal at compressing time series, resulting in increased efficiency and accuracy compared to LSTMs and Transformers.
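To make the idea concrete, here is a minimal sketch of the LMU's memory component as described in the published LMU literature (Voelker et al., 2019), not the TSP's actual hardware implementation. The state vector holds the coefficients of a Legendre-polynomial approximation of the input's recent history over a sliding window of length theta, which is what allows the LMU to compress a time series into a small, fixed-size state. All function names and parameter values below are illustrative assumptions.

```python
import numpy as np

def lmu_matrices(order):
    """Build the continuous-time (A, B) state-space matrices from the
    published LMU formulation. The state x evolves as
    theta * dx/dt = A x + B u, and holds `order` Legendre coefficients
    approximating the input u over a sliding window of length theta."""
    Q = np.arange(order)
    R = (2 * Q + 1)[:, None]
    i, j = np.meshgrid(Q, Q, indexing="ij")
    # a_ij = (2i+1) * (-1 if i < j else (-1)^(i-j+1)); diagonal is negative
    A = R * np.where(i < j, -1.0, (-1.0) ** (i - j + 1))
    # b_i = (2i+1) * (-1)^i
    B = R * ((-1.0) ** Q[:, None])
    return A, B

def lmu_memory(signal, order=6, theta=1.0, dt=0.01):
    """Euler-discretized LMU memory: compress a 1-D signal into
    `order` Legendre coefficients per timestep (an assumed toy
    discretization; real implementations typically use zero-order hold)."""
    A, B = lmu_matrices(order)
    x = np.zeros(order)
    states = []
    for u in signal:
        x = x + (dt / theta) * (A @ x + B.flatten() * u)
        states.append(x.copy())
    return np.array(states)

# Compress 100 samples of a sine wave into 6 coefficients per step
t = np.arange(0, 1, 0.01)
states = lmu_memory(np.sin(2 * np.pi * t), order=6, theta=0.5, dt=0.01)
print(states.shape)  # (100, 6)
```

Because the (A, B) matrices are fixed rather than learned, the memory update is a small, dense matrix-vector product per timestep; this fixed, regular structure is the kind of computation that lends itself to efficient dedicated hardware.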
Train high-accuracy, low-power audio processing models in an easy-to-use cloud platform. Reduce product development time and cost by taking advantage of the edge device expertise we have built into the training and deployment process.