Automatic Conversion from Keras to Nengo with NengoDL

April 22, 2020
Dr. Daniel Rasmussen

ABR has recently released the NengoDL Converter, a tool designed to automate the process of translating models from the popular Keras deep learning framework into Nengo. The process is as easy as:

import nengo_dl

# my_keras_model can be any tf.keras.Model
converter = nengo_dl.Converter(my_keras_model)

and the Converter takes care of the rest! The result is a Nengo Network that produces the same behaviour as the original Keras model (both for training and inference). Whenever possible this is done using native Nengo objects, but the Converter can also use NengoDL's TensorNodes to build hybrid Nengo/TensorFlow networks.

In addition to reproducing the behaviour of a Keras model, the Converter can also be used to enhance it. In particular, it is designed to assist in translating a non-spiking Keras model into a spiking Nengo network. More details on this process can be found in this example [1], which walks through the entire process of translating a Keras model into spikes and then optimizing the performance of the spiking model to match the original, non-spiking network.

Once a Keras model has been converted to a native Nengo network it enjoys all of the benefits of the Nengo ecosystem. For example, these networks will be able to run on any Nengo-supported hardware platform. This can be seen in action in this example [2], where a Keras model is converted to a Nengo network using the NengoDL Converter and then deployed onto neuromorphic hardware using Nengo Loihi.

For more information, or if you have any questions about using the Converter, check out the NengoDL documentation [3], drop by the forums, or contact us directly!


Dr. Daniel Rasmussen, Co-Founder and Software Team Lead at Applied Brain Research, specializes in adaptive computational systems, integrating deep learning research and practical software applications. A University of Waterloo Ph.D. alumnus, he has made significant contributions to hierarchical reinforcement learning models and is actively advancing AI applications at ABR.
