About me

I completed my PhD in May 2020, on methods of ensembling spiking neural networks that offer guaranteed improvements in performance. I am actively searching for a research position in industry that offers the potential for attending conferences and publishing, so if you know of any such roles do get in touch!

Some machine learning areas and topics I have experience in are: spiking neural networks, deep learning, ensemble learning, representation learning, transfer learning, multitask learning, and information theory. For further details see the sections below.

Summary of my PhD research

Spiking neural networks are machine learning models developed to replicate the cognitive capabilities of the brain. Training these models to match the brain's competence on the same tasks, however, remains an open problem. In contrast, deep learning models (or classical neural networks) have recently achieved human-level (or better) performance on certain cognitive tasks. We investigate ensemble learning as a means of improving the performance of spiking neural networks. Ensemble learning has been used successfully in the past to improve the performance of classical neural networks; this success stems from the fact that, when the ensemble is constructed in a certain way, the ensemble prediction is guaranteed to be on average better than any of the individual predictions. We study how the ensemble learning framework can be applied to spiking neural networks so that the same guarantees can be achieved. We do this by investigating how spike train predictions should be interpreted and represented (so that predictions are unaffected by stochasticity or other design choices), and how the central model framework (see here for more details) can be extended to combining spiking neural network predictions.
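To give a flavour of the kind of guarantee meant here (this is the classical squared-loss result for real-valued predictions, not the exact statement from the thesis): for a weighted-average ensemble $\bar{f}(x) = \sum_i w_i f_i(x)$ with $w_i \ge 0$ and $\sum_i w_i = 1$, the ambiguity decomposition of Krogh and Vedelsby gives

$$\big(\bar{f}(x) - y\big)^2 = \sum_i w_i \big(f_i(x) - y\big)^2 \;-\; \sum_i w_i \big(f_i(x) - \bar{f}(x)\big)^2,$$

so the ensemble's error never exceeds the weighted average of its members' errors, and the gap grows with how much the members disagree.

As a concrete (and purely illustrative, not taken from the thesis) sketch of combining spike train predictions, the snippet below reduces each member's output spike trains to firing rates, so that the prediction is insensitive to exact spike timing, and then averages the members' class scores:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 spiking classifiers, each emitting a binary spike
# train of shape (num_classes, num_timesteps) for a single input, with the
# predicted class read out from the output layer's firing rates.
num_members, num_classes, num_steps = 5, 10, 100
spike_trains = rng.random((num_members, num_classes, num_steps)) < 0.05

# Rate decoding: collapse each spike train to a firing-rate vector, so the
# prediction no longer depends on precise (stochastic) spike timing.
rates = spike_trains.mean(axis=2)                      # (members, classes)

# Normalise each member's rates into probability-like scores, then average
# across members to form the ensemble prediction.
scores = rates / (rates.sum(axis=1, keepdims=True) + 1e-12)
ensemble_scores = scores.mean(axis=0)                  # (classes,)
print("ensemble predicts class", ensemble_scores.argmax())
```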

Research outside the PhD

Aside from these topics, I have an interest in transfer learning and representation learning, developed during my internship at IBM Research. During my time with the IBM Daresbury team I investigated the problem of learning chemical representations using deep learning, through an application of transfer learning and multitask learning. The first part of my project looked at gaining a better understanding of an existing task similarity measure (link to paper), while the second part implemented and studied a more scalable method for learning representations and measuring their similarity (a common similarity measure of this kind is sketched below).
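The measures from the internship are described in the linked paper rather than here; for a flavour of what measuring representation similarity can look like, here is a minimal sketch (my illustration, not IBM's method) of linear centred kernel alignment (CKA), a widely used measure, applied to made-up activation matrices:

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two activation matrices of shape (samples, features)."""
    # Centre each representation across the sample dimension.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # HSIC-based similarity, normalised so the result lies in [0, 1].
    numerator = np.linalg.norm(Y.T @ X, "fro") ** 2
    denominator = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(numerator / denominator)

# Toy usage: compare two random "chemical" representations of 100 molecules.
rng = np.random.default_rng(0)
reps_a = rng.normal(size=(100, 64))
reps_b = rng.normal(size=(100, 32))
print(f"CKA similarity: {linear_cka(reps_a, reps_b):.3f}")
```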