Optimal time series AI

Solve time series problems more accurately and efficiently with the LMU

Time series problems include the processing of language, audio, biosignals, RF signals, network traffic, and more – anything where the order of the data matters. Most researchers know of two main architectures used to solve time series problems: LSTMs and Transformers. We have invented a third: Legendre Memory Units (LMUs). LMUs are provably optimal at compressing time series.

Long short-term memory (LSTM) networks have been called the most commercially valuable AI algorithm ever invented (Bloomberg), having been deployed in many speech, vision, and text analysis systems. But LSTMs fall apart when tasked with learning temporal dependencies spanning thousands of timesteps, limiting their usefulness.

Transformers are the workhorse of NLP and the basis for the largest AI models available, including GPT-3 and Google Brain’s Switch Transformer. But Transformers are incredibly expensive: for a sequence of length N, they require O(N²) computation and O(N²) memory.

The Legendre Memory Unit (LMU) is a recurrent network that, unlike LSTMs, can learn temporal dependencies spanning millions of timesteps. LMUs can scale to the size of Transformers, yet scale as O(N) in compute and O(1) in memory. In addition, LMUs require 10x less training (and data) to match a Transformer’s accuracy.

LMUs maintain efficient, scale-invariant representations of recent inputs and learn how to solve real-world problems using those representations. LMUs can be implemented with traditional deep learning techniques on hardware you already have, or can be deployed on neuromorphic hardware or neural accelerators for massive power savings. Using LMUs, we and others have obtained the best-known results on a variety of benchmarks.
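At the core of the LMU is a fixed linear memory that compresses a sliding window of its input onto the Legendre polynomial basis. As a rough illustration of that idea (not ABR's implementation), here is a minimal NumPy sketch of the continuous-time (A, B) matrices from the published LMU derivation and a simple Euler-discretized update; the function names and the Euler step size are illustrative choices:

```python
import numpy as np

def lmu_matrices(order, theta):
    """Continuous-time (A, B) of the LMU's Legendre delay approximation.

    order: number of Legendre coefficients (memory dimension)
    theta: length of the sliding window, in the same units as dt
    """
    q = np.arange(order)
    scale = (2 * q + 1)[:, None] / theta          # row factor (2i+1)/theta
    i, j = np.meshgrid(q, q, indexing="ij")
    # a_ij = (2i+1)/theta * (-1 if i < j else (-1)^(i-j+1))
    A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * scale
    # b_i = (2i+1)/theta * (-1)^i
    B = ((-1.0) ** q)[:, None] * scale
    return A, B

def lmu_step(m, u, A, B, dt=1.0):
    """One Euler step of the memory: m_t = m_{t-1} + dt*(A m_{t-1} + B u_t)."""
    return m + dt * (A @ m + B * u)
```

Because A and B are fixed (not learned), the memory update is a single matrix-vector product per timestep, which is where the O(N) compute and O(1) memory scaling come from; in practice a zero-order-hold discretization is typically used instead of the Euler step shown here.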

Do you have a time series problem you’re currently solving with an LSTM or Transformer?

Do you have a time series problem that you haven’t been able to solve yet?

Do you need to solve a time series problem in an edge or IoT device?

Take a look at our published and easily reproducible results. ABR’s patent-pending LMU is free to use for academic research and personal non-commercial uses and is available at a yearly per-processor rate commercially through our store.

10x more data efficient than Transformers
O(n) compute scaling
O(1) memory scaling
Provably optimal
Deployable on microcontrollers
Purchase a license

Getting started

The KerasLMU package is a simple and user-friendly LMU implementation for TensorFlow.

Community

Talk to the people who know LMUs best at the Nengo forum.

Commercial licenses

Commercial LMU licenses are available at a per-year, per-processor rate.

Services

Custom AI solutions with LMUs

LMU apps

Let us build an AI solution for your time series problem using LMUs.

More about LMU apps >