Dr. Chris Eliasmith, Co-founder

Chris is the co-inventor of the Neural Engineering Framework (NEF), the Neural Engineering Objects (Nengo) software environment, and the Semantic Pointer Architecture, all of which are dedicated to understanding how the brain works. His team developed the Semantic Pointer Architecture Unified Network (Spaun), which was, as of its publication in November 2012, the most realistic functional brain simulation yet developed. He won the prestigious [2015 NSERC Polanyi Award](http://www.nserc-crsng.gc.ca/Prizes-Prix/Polanyi-Polanyi/Profiles-Profils/Eliasmith-Eliasmith_eng.asp) for his research.

Chris is the Canada Research Chair in Theoretical Neuroscience. At the University of Waterloo, he is jointly appointed to the Philosophy and Systems Design Engineering departments and cross-appointed to the Computer Science department. Chris has supervised students in each of these departments as well as in Biology and Psychology.

Chris is the director of the Centre for Theoretical Neuroscience (CTN) at the University of Waterloo. The Centre brings together researchers from faculties as diverse as Mathematics, Engineering, Arts, and Science who are interested in computational and theoretical models of neural systems.

The Computational Neuroscience Research Group (CNRG) is Chris’ research lab, which is associated with the CTN. The CNRG site contains the most up-to-date information on his team’s research.

Chris has published two books: How to Build a Brain (Oxford University Press) and his seminal Neural Engineering (MIT Press).

When he is not pushing the boundaries of theoretical neuroscience or hanging out with his team at the CNRG, Chris spends time with his family and occasionally straps on the blades for a game of hockey near his home in Waterloo. He has a Bacon–Erdős number of 8.


Peter Suma, Co-founder

Peter’s love of understanding brains began at the University of Toronto while doing his undergraduate degree in computer science in the early 1980s. After scanning the world of AI researchers for someone to do a degree under, Peter found Dr. Eliasmith right here at the University of Waterloo in Southern Ontario. Peter believes that the work of Chris and his lab team is groundbreaking and represents the world’s first scalable architecture for building true cognition.

Peter has been a director on over sixteen technology company boards, a venture capitalist, a CEO, and a technology company founder in the software and robotics sectors. He was previously President, Co-Founder & Director, and later CEO, at PharmaTrust, where he grew the company with capital and sweat equity from April 2006 until his resignation in June 2011; during that time the team built great technology, changed laws, and all in all did the impossible together during the company’s first five years. Prior to that, Peter was V.P. Investments at Growthworks Capital Inc., and before that a principal at Start Seed Capital. Peter started his first company, the award-winning SRG Software, while a student in computer science at the University of Toronto.

Peter is completing his MASc in Theoretical and Computational Neuroscience at the University of Waterloo, supervised by Chris. Peter holds an Honours B.Sc. from the University of Toronto, an M.B.A. from the University of Chicago, a P.D.A.M. with a major in Financial Engineering from the Schulich School of Business, an ICD.D from the Institute of Corporate Directors at the Rotman School of Management, and an LL.M. in Securities Law from Osgoode Hall Law School in Toronto.

Peter spends his non-work and non-study time with his wife and their two young children, skiing and playing together as much as they can, as well as coaching his son’s winning hockey team.


Dr. Terry Stewart, Co-founder

Terry’s initial training was as an engineer (B.A.Sc. in Systems Design Engineering, University of Waterloo, 1999); his master’s involved applying experimental psychology to simulated robots (M.Phil. in Computer Science and Artificial Intelligence, University of Sussex, 2000), and his Ph.D. was on cognitive modelling (Ph.D. in Cognitive Science, Carleton University, 2007).

He believes that making progress in understanding any phenomenon as complex as cognition requires the construction of computational models. The primary role of these models is to test theories: as theories become more complex, computational models are needed to determine their quantitative predictions. These models should also be mechanistic. That is, they should be process models in which the behaviour of the overall system is caused by the interaction of internal components over time, and those components should correspond to the components of the real system. To achieve this, he develops modelling tools to support large-scale cognitive models. This has involved both high-level cognitive architectures (such as ACT-R) and detailed neural models (such as the Neural Engineering Framework and Nengo). Of particular interest to him are models involving cognitive reasoning, experience-based learning, and reinforcement learning. In concert with this, Terry has worked on statistical tools for comparing modelling results with empirical results, and he has clarified how such results should be interpreted. In particular, instead of the standard approach of finding best-fit parameter settings by minimizing the mean squared error, he advocates finding a range of parameter settings for which the model and reality are not statistically significantly different.
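As a rough illustration of that last idea (a hypothetical sketch with made-up data and a stand-in model, not code from Terry’s work), one can sweep a parameter, simulate the model at each setting, and report every setting for which a statistical test fails to distinguish the model’s output from the empirical data:

```python
# Hypothetical sketch: report the range of parameter settings for which
# model output and empirical data are not statistically significantly
# different, rather than a single best-fit (minimum-MSE) value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Placeholder empirical data (e.g., a measurement from 30 participants).
empirical = rng.normal(loc=0.5, scale=0.1, size=30)

def run_model(param, n_trials=30):
    """Stand-in for a cognitive model; returns simulated outcomes."""
    return rng.normal(loc=param, scale=0.1, size=n_trials)

accepted = []
for param in np.linspace(0.0, 1.0, 101):
    simulated = run_model(param)
    # Two-sample t-test; failing to reject the null hypothesis means the
    # model and the data are not significantly different at alpha = 0.05.
    _, p_value = stats.ttest_ind(empirical, simulated)
    if p_value > 0.05:
        accepted.append(param)

if accepted:
    print(f"Parameter settings compatible with the data: "
          f"{min(accepted):.2f} to {max(accepted):.2f}")
else:
    print("No parameter setting is compatible with the data.")
```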


Dr. Daniel Rasmussen, Co-founder

Daniel’s research is motivated by a general interest in building more flexible, adaptive computational systems. This has led to work in a range of different areas, including computational neuroscience, (hierarchical) reinforcement learning, and deep learning. Daniel works to build hybrid systems that combine these (and other) approaches, in order to best leverage the strengths of each method.

In 2014 Daniel completed his PhD at the University of Waterloo, where his thesis involved developing the first neural model capable of performing hierarchical reinforcement learning. From 2014 to 2016 he worked as a postdoc at Princeton University, where he developed a Hessian-free optimization package for deep learning, with the aim of applying these methods to reinforcement learning. Now at ABR he works to expand these and other projects from their academic origins into practical, state-of-the-art applications.


Dr. Travis DeWolf, Co-founder

Travis’ research focuses on studying the brain’s motor control system. Using modern control-theoretic methods, such as operational space control, nonlinear adaptive control, and dynamic movement primitives, he has worked to develop biologically plausible spiking neural networks that model the brain’s motor system and are capable of generating the same diversity of behavioural phenomena and the robust adaptation and learning seen in primates.

He received his undergraduate degree in computer science at Acadia University, with a thesis discussing the algebraic properties of template-guided DNA recombination. His master’s degree was in computer science at the University of Waterloo and focused on the development of the Neural Optimal Control Hierarchy (NOCH), a biologically plausible framework for large-scale models of the motor control system. His Ph.D. was in systems design engineering at the University of Waterloo, where he presented the Recurrent Error-driven Adaptive Control Hierarchy (REACH) model, a large-scale, fully spiking neural model of the motor cortices and cerebellum that accounts for data from 19 studies, from the behavioural level down to single spiking neurons.


Dr. Trevor Bekolay, Co-founder

Trevor’s primary research interest is in learning and memory. In his Master’s degree, he explored how to do supervised, unsupervised, and reinforcement learning in networks of biologically plausible spiking neurons. In his PhD, he applied this knowledge to the domain of speech to explore how sounds coming into the ear become high-level linguistic representations, and how those representations become sequences of vocal tract movements that produce speech.

Trevor is also passionate about reproducible science, particularly when complex software pipelines are involved. In 2013, he started a development effort to reimplement the Nengo neural simulator from scratch in Python, which has since grown into a project with over 20 contributors around the world.
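For readers unfamiliar with Nengo, the sketch below shows roughly what a minimal model looks like in the Python implementation: it represents a sine-wave input with spiking neurons and decodes its square. The specific network and parameters are illustrative assumptions, not an example taken from this page.

```python
# Minimal Nengo sketch (assumes the `nengo` package is installed):
# represent a sine wave with spiking neurons and decode its square.
import numpy as np
import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # input signal
    a = nengo.Ensemble(n_neurons=100, dimensions=1)     # spiking population
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)   # decode x squared
    probe = nengo.Probe(b, synapse=0.01)                # filtered output

with nengo.Simulator(model) as sim:
    sim.run(1.0)                                        # simulate 1 second

print(sim.data[probe].shape)  # (timesteps, 1)
```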


Xuan Choo, Co-founder

Xuan has a wide range of research interests. First, he is interested in working and long-term memory: how the brain remembers things, and the types of representations it uses to do so. Of related interest are the mechanisms the brain uses to integrate different sensory inputs within working memory, as well as the underlying structure of working memory and the control systems necessary to ensure that information arrives at the right place at the right time. Xuan is also interested in general cognition and large-scale model design. He studies and models how the brain performs tasks like concept generalization and problem solving. He is also interested in the problem of integration: how does one build a system that seamlessly melds sensory input, decision making, problem solving, and motor output in order to perform highly complex tasks?

Xuan received his undergraduate degree in Computer Engineering from the University of British Columbia, where he focused on the integration of computer hardware and software systems, as well as FPGA and VLSI chip design. For his Master’s degree, he worked under the supervision of Dr. Chris Eliasmith at the University of Waterloo and developed a spiking neural model of serial-order working memory. Xuan took a year between his Master’s and PhD degrees to develop Spaun, a proof-of-concept, fully self-enclosed spiking neural model capable of performing eight basic cognitive tasks, ranging from digit recognition to list memorization, counting, and even simple induction. He is currently pursuing his PhD, focusing on further developing and improving Spaun with the goal of making its underlying architecture more general. This will allow external instructions to guide Spaun’s actions, expanding its repertoire beyond the original eight tasks.