Toronto, Ontario, Canada – December 10, 2019 – Applied Brain Research (ABR) announces a new algorithm that enables advances in ultra-low-power AI speech, vision and signal-processing systems for always-on and edge-AI applications, extending battery life while improving accuracy.
ABR’s announcement demonstrates the potential to realize ultra-low-power instantiations of a large class of algorithms that learn patterns in data spanning extraordinarily long intervals of time.
Current algorithms, such as the Long Short-Term Memory (LSTM), can learn and predict sequences of data over long periods of time, making it possible for neural networks to process data like speech, video and control signals.
Present in most smart speakers and voice recognition systems, LSTMs are said to be the most financially valuable AI algorithm ever invented (Bloomberg).
However, LSTMs fail when tasked with learning temporal dependencies in signals that span 1,000 time-steps or more, making them very difficult to scale and limiting their commercial application.
This new algorithm – the Legendre Memory Unit (LMU) – is a neuromorphic algorithm for continuous-time memory that can learn temporal dependencies over millions of time-steps or more. The algorithm is a new recurrent neural network (RNN) architecture that enables networks of artificial neurons to classify and predict temporal patterns far more efficiently than LSTMs.
Unlike the LSTM, the LMU can be implemented using spiking neurons – an algorithmic advance that is anticipated to deliver leaps in efficiency when solving dynamical time-series problems on low-power neuromorphic devices.
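At the core of the LMU is a small linear memory cell whose (A, B) matrices have a closed form given in Voelker et al. (2019): the state optimally compresses a sliding window of the input onto the first d Legendre polynomials. The following NumPy sketch is our own illustration of that memory cell, not ABR's implementation; the delay demo, the forward-Euler discretization, and all parameter choices (d, theta, dt, the test signal) are assumptions made for demonstration only.

```python
import numpy as np

def lmu_matrices(d):
    """Continuous-time (A, B) of the LMU memory cell, per Voelker,
    Kajic & Eliasmith (2019). The d-dimensional state represents the
    sliding window u(t - theta .. t) in a shifted Legendre basis."""
    A = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
    idx = np.arange(d)
    B = ((2 * idx + 1) * (-1.0) ** idx).reshape(d, 1)
    return A, B

# Illustrative demo: delay a sine wave by theta seconds using only a
# 12-dimensional state. Summing the state (an all-ones readout)
# decodes the far edge of the window, i.e. an estimate of u(t - theta).
d, theta, dt = 12, 0.5, 0.001
A, B = lmu_matrices(d)
Ad = np.eye(d) + (dt / theta) * A   # forward-Euler step of theta*x' = A x + B u
Bd = (dt / theta) * B

t = np.arange(0.0, 4.0, dt)
u = np.sin(np.pi * t)               # test input with a 2-second period
x = np.zeros((d, 1))
recon = np.zeros_like(t)
for k in range(len(t)):
    x = Ad @ x + Bd * u[k]          # update the compressed window memory
    recon[k] = x.sum()              # decoded estimate of u(t - theta)

target = np.sin(np.pi * (t - theta))
# Maximum decoding error after the window has filled and transients decayed
err = np.max(np.abs(recon[t >= 1.5] - target[t >= 1.5]))
```

With an all-ones readout the cell's transfer function is the Padé approximant of a pure delay, which is why a 12-dimensional state suffices to reproduce the half-second-old input almost exactly; LSTMs have no comparable closed-form window memory.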
Voelker et al. (2019) found that ABR’s LMU required fewer resources and less computation, while providing superior memory and demonstrating state-of-the-art performance of 97.15% on a challenging RNN benchmark (permuted sequential MNIST), compared to 89.86% using LSTMs.
The core building block of the LMU has been implemented on spiking neuromorphic hardware including Braindrop (Neckar et al., 2019) and Loihi (Voelker, 2019).
The LMU outperforms both spiking and non-spiking reservoir computers (i.e., liquid state machines and echo state networks) in efficiency and memory capacity when tasked with representing temporal windows of information (Voelker, 2019).
North America
Peter Suma, co-CEO
peter.suma@appliedbrainresearch.com
+1-416-505-8973
EMEA
Oliver Morgan
oliver.morgan@tailoredbrands.co.uk
+44 7957 489928