Toronto, Ontario, Canada – December 10, 2019 – Applied Brain Research (ABR) announces a new Legendre Memory Unit (LMU) that surpasses the best known Recurrent Neural Network (RNN) results from LeCun et al. (93.7–94.5%) and Bengio et al. (95.4–95.9%) by a full percentage point.
ABR’s LMU enables advances in ultra-low-power AI speech, vision and signal processing systems for always-on and edge-AI applications, extending battery life while making them more accurate.
ABR’s announcement demonstrates the potential to realize ultra-low-power instantiations of a large class of algorithms that learn patterns in data spanning extraordinarily long intervals of time.
Voelker et al. (2019) found that ABR’s LMU required fewer resources and less computation while providing superior memory, demonstrating state-of-the-art performance of 97.15% on a challenging RNN benchmark, compared to 89.86% using LSTMs.
The core building block of the LMU has been implemented on low-power spiking neuromorphic hardware including Braindrop (Neckar et al., 2019) and Loihi (Voelker, 2019).
The LMU outperforms both spiking and non-spiking reservoir computers (i.e., liquid state machines and echo state networks) in efficiency and memory capacity when tasked with representing temporal windows of information (Voelker, 2019).
Long Short-Term Memory (LSTM) is a form of Recurrent Neural Network (RNN) that can learn to predict sequences of data over longer periods of time than standard neural networks.
ABR’s new Legendre Memory Unit (LMU) is a neuromorphic algorithm for continuous-time memory that can learn temporal dependencies spanning millions of time-steps or more. It is a new RNN architecture that enables networks of artificial neurons to classify and predict temporal patterns far more efficiently than LSTMs.
The LMU is mathematically derived to implement the continuous-time dynamical system that optimally maintains a scale-invariant representation of time.
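To make that derivation concrete, the following minimal NumPy sketch (not ABR’s production code; the function name lmu_state_space and the example order, window length and time-step are illustrative assumptions) constructs the linear time-invariant system at the heart of the LMU memory, using the (A, B) matrices reported by Voelker et al. (2019):

import numpy as np
from scipy.signal import cont2discrete

def lmu_state_space(order, theta):
    """Continuous-time (A, B) matrices of the LMU memory cell.

    The memory vector m(t) with `order` components obeys
        dm/dt = A @ m(t) + B * u(t),
    where the 1/theta factor is folded into A and B, so that m(t)
    compresses a sliding window of length theta of the input u(t)
    onto the first `order` Legendre polynomials.
    """
    Q = np.arange(order, dtype=np.float64)
    R = (2 * Q + 1)[:, None] / theta              # per-row scaling (2i + 1) / theta
    i, j = np.meshgrid(Q, Q, indexing="ij")
    A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * R
    B = ((-1.0) ** Q)[:, None] * R
    return A, B

# Discretize with zero-order hold so the memory can be stepped once
# per input sample when unrolled inside a recurrent network.
order, theta = 6, 100.0
A, B = lmu_state_space(order, theta)
Ad, Bd, *_ = cont2discrete((A, B, np.eye(order), np.zeros((order, 1))), dt=1.0, method="zoh")

In Voelker et al.’s formulation these memory matrices can be held fixed while the rest of the layer is trained, which is one reason the LMU can span very long windows with comparatively few learned parameters.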
The ABR LMU obtains the best-known results on permuted sequential MNIST, a difficult RNN benchmark, and has been shown to scale to input sequences spanning hundreds of millions of time-steps.
The resulting patterns in spiking activity have also been linked to neural “time cells” observed in the striatum and medial prefrontal cortex in mammalian brains.
Unlike the LSTM, the LMU can be implemented using spiking neurons, an algorithmic advance that is anticipated to provide leaps in efficiency for solving dynamical time-series problems on low-power neuromorphic devices.
Aaron R. Voelker. Dynamical Systems in Spiking Neuromorphic Hardware. PhD thesis, University of Waterloo, 2019. URL http://hdl.handle.net/10012/14625.
Alexander Neckar, Sam Fok, Ben V Benjamin, Terrence C Stewart, Nick N Oza, Aaron R Voelker, Chris Eliasmith, Rajit Manohar, and Kwabena Boahen. Braindrop: A mixed-signal neuromorphic architecture with a dynamical systems-based programming model. Proceedings of the IEEE, 107(1):144–164, 2019.
Yanzhang He, Tara N. Sainath, Rohit Prabhavalkar, Ian McGraw, Raziel Alvarez, Ding Zhao, David Rybach, et al. Streaming end-to-end speech recognition for mobile devices. In ICASSP 2019 – 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6381–6385. IEEE, 2019.
Li Jing, Yichen Shen, Tena Dubcek, John Peurifoy, Scott Skirlo, Yann LeCun, Max Tegmark, and Marin Soljačić. Tunable efficient unitary neural networks (EUNN) and their application to RNNs. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pp. 1733–1741. JMLR.org, 2017.
David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, and Chris Pal. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
Sarath Chandar, Chinnadhurai Sankar, Eugene Vorontsov, Samira Ebrahimi Kahou, and Yoshua Bengio. Towards non-saturating recurrent units for modelling long-term dependencies. arXiv preprint arXiv:1902.06704, 2019.
North America
Peter Suma, co-CEO
peter.suma@appliedbrainresearch.com
+1-416-505-8973
EMEA
Oliver Morgan
oliver.morgan@tailoredbrands.co.uk
+44 7957 489928