Dr. Terry Stewart

Terry’s initial training was as an engineer (B.A.Sc. in Systems Design Engineering, University of Waterloo, 1999); his master’s work involved applying experimental psychology to simulated robots (M.Phil. in Computer Science and Artificial Intelligence, University of Sussex, 2000), and his Ph.D. was in cognitive modelling (Ph.D. in Cognitive Science, Carleton University, 2007).

He believes that making progress in understanding any phenomenon as complex as cognition requires the construction of computational models. The primary role of these models is to test theories: as theories become more complex, we need computational models to determine their quantitative predictions. These models should also be mechanistic; that is, they should be process models in which the behaviour of the overall system arises from the interaction of internal components over time, and those components should correspond to the components of the real system.

To achieve this, he develops modelling tools to support large-scale cognitive models. This work has involved both high-level cognitive architectures (such as ACT-R) and detailed neural models (such as the Neural Engineering Framework and Nengo). Of particular interest to him are models involving cognitive reasoning, experience-based learning, and reinforcement learning.

In concert with this, Terry has worked on statistical tools for comparing modelling results with empirical results, and he has clarified how such results should be interpreted. In particular, instead of the standard approach of finding a single best-fit parameter setting by minimizing the mean squared error, he advocates finding the range of parameter settings for which the model and the empirical data are not statistically significantly different.
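The sketch below is only a minimal illustration of that last idea, not Terry’s actual tooling: the model, its parameter, and the data are all hypothetical stand-ins. It sweeps a parameter, compares simulated output against empirical data, and reports both the single MSE-minimizing value and the range of values whose output is not statistically significantly different from the data.

```python
import numpy as np
from scipy import stats

# Hypothetical empirical data: reaction times (seconds) from 30 participants.
rng = np.random.default_rng(0)
empirical = rng.normal(loc=0.55, scale=0.08, size=30)

def run_model(learning_rate, n_runs=30):
    """Stand-in for a cognitive model run at one parameter setting.

    A real model would be an ACT-R or Nengo simulation; here we simply
    generate reaction times whose mean depends on the hypothetical parameter.
    """
    return rng.normal(loc=0.4 + learning_rate, scale=0.08, size=n_runs)

candidates = np.linspace(0.0, 0.3, 31)

# Standard approach: the single parameter value minimizing squared error
# between the model's mean prediction and the empirical mean.
mse = [(run_model(lr).mean() - empirical.mean()) ** 2 for lr in candidates]
best_fit = candidates[int(np.argmin(mse))]

# Alternative: every parameter value whose output is *not* significantly
# different from the empirical data (two-sample t-test, alpha = 0.05).
acceptable = [lr for lr in candidates
              if stats.ttest_ind(run_model(lr), empirical).pvalue > 0.05]

print(f"best-fit parameter: {best_fit:.2f}")
print(f"acceptable range:   {min(acceptable):.2f} to {max(acceptable):.2f}")
```

The point of the second approach is that it reports a region of parameter space that is consistent with the data, rather than a single point estimate whose apparent precision the data may not support.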