Research
The Unit's core strengths are in computationally and probabilistically oriented theoretical neuroscience and in statistical machine learning. In neuroscience, we have particular interests in plasticity, population coding and neural dynamics, applied to audition, control/action selection, and vision. In machine learning, we work on parametric and non-parametric Bayesian methods, graphical models, and sampling-based and deterministic approximate inference and learning methods, applied to neuroscience problems as well as to other areas.
Find out more from our research publications.
1) Theoretical Neuroscience
2) Machine Learning
1) Theoretical Neuroscience
Dynamics

Biological neural networks exhibit rich dynamical behaviours, whose importance for computation is under constant debate. We study the role of oscillatory excitatory-inhibitory systems in areas such as preventing spontaneous symmetry breaking in neural activity, perceptual learning, neural plasticity, associative memory, the representation of interval time, and the oscillatory coordination between the hippocampus and neocortex. We also study the dynamical properties of active membrane processes associated with spiking.

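As a toy illustration of the kind of coupled excitatory-inhibitory rate dynamics mentioned above, the sketch below integrates a Wilson-Cowan-style pair of equations. The sigmoid parameters and coupling weights are illustrative choices for an oscillatory regime, not values from any particular study.

```python
import math

def f(x, a, theta):
    """Sigmoidal population response function."""
    return 1.0 / (1.0 + math.exp(-a * (x - theta)))

def simulate(steps=5000, dt=0.01):
    # Wilson-Cowan-style excitatory (E) / inhibitory (I) population rates.
    # Coupling weights chosen (illustratively) in an oscillatory regime.
    wEE, wEI, wIE, wII = 16.0, 12.0, 15.0, 3.0
    P = 1.25                       # constant external drive to E
    E, I = 0.1, 0.05
    trace = []
    for _ in range(steps):
        dE = -E + f(wEE * E - wEI * I + P, a=1.3, theta=4.0)
        dI = -I + f(wIE * E - wII * I, a=2.0, theta=3.7)
        E += dt * dE
        I += dt * dI
        trace.append(E)
    return trace

trace = simulate()
```

Plotting `trace` against time shows the excitatory rate cycling rather than settling, the hallmark of an E-I oscillator.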
Neural coding

Understanding the relationship between stimuli and neural spiking activity is one of the most fundamental questions in neuroscience. We approach the question in many ways: on the one hand working with empirical data to understand, process and formalise the information available in them, and on the other looking at theoretical issues associated with sophisticated versions of population codes. We also study how principles of early sensory coding may be derived from the efficient-coding principles of information theory.

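A standard textbook construction conveys the flavour of population coding: a stimulus is encoded by an array of neurons with overlapping tuning curves, and decoded by maximum likelihood under a Poisson spiking model. The tuning-curve shapes and parameters below are illustrative assumptions; for simplicity the decoder here is run on the expected (noise-free) spike counts rather than a sampled trial.

```python
import math

def tuning(s, center, r_max=20.0, width=0.5):
    """Gaussian tuning curve: expected spike count at stimulus s (plus a small baseline)."""
    return r_max * math.exp(-(s - center) ** 2 / (2 * width ** 2)) + 0.1

centers = [i * 0.2 for i in range(-10, 11)]      # preferred stimuli of 21 neurons
s_true = 0.37
counts = [tuning(s_true, c) for c in centers]    # expected counts at the true stimulus

def log_likelihood(s, counts):
    # Poisson log-likelihood up to a constant: sum_i n_i log f_i(s) - f_i(s)
    return sum(n * math.log(tuning(s, c)) - tuning(s, c)
               for n, c in zip(counts, centers))

# Maximum-likelihood decoding by grid search over candidate stimuli.
grid = [i * 0.01 for i in range(-100, 101)]
s_hat = max(grid, key=lambda s: log_likelihood(s, counts))
```

The expected Poisson log-likelihood is maximised at the true stimulus, so the decoder recovers `s_true` to within the grid resolution.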
Plasticity

A remarkable feature of the brain is its ability to adapt to, and learn from, experience. This learning has measurable physiological correlates in terms of changes at individual synapses, as well as in resulting modifications of the stimulus-response properties of individual neurons. We study the theoretical significance of these changes at a number of levels, including the interpretation of spike-timing update rules for synaptic strength, the interaction of reinforcement and neuromodulation with receptive field plasticity, and the consequences of plastic changes on perceptual learning.

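The simplest spike-timing update rule referred to above is pairwise exponential STDP: a presynaptic spike shortly before a postsynaptic one strengthens the synapse, the reverse ordering weakens it. The amplitudes and time constant below are illustrative, not fitted values.

```python
import math

def stdp_dw(t_pre, t_post, A_plus=0.01, A_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair under pairwise exponential STDP.

    Times are in ms.  Pre-before-post (dt > 0) potentiates; post-before-pre
    depresses.  Amplitudes and time constant are illustrative assumptions.
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_plus * math.exp(-dt / tau)
    return -A_minus * math.exp(dt / tau)

dw_ltp = stdp_dw(t_pre=10.0, t_post=15.0)   # pre leads post by 5 ms: potentiation
dw_ltd = stdp_dw(t_pre=15.0, t_post=10.0)   # post leads pre by 5 ms: depression
```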
Vision

We study the organisational and computational principles that lie behind physiological, anatomical, and psychophysical observations in biological vision. Using both theoretical models and psychophysical experiments, we focus on coding principles that can help elucidate the information-processing function of receptive fields in the retina and cortex, on the mechanisms of visual grouping, adaptation, and segmentation in early visual cortex, and on visual inference and attentional mechanisms.

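A classic example of the receptive fields mentioned above is the centre-surround organisation of retinal ganglion cells, often modelled as a difference of Gaussians. The one-dimensional sketch below (with illustrative widths) shows the defining property: the filter ignores uniform illumination but responds to a contrast edge.

```python
import math

def dog_kernel(radius=6, sigma_c=1.0, sigma_s=2.0):
    """1-D difference-of-Gaussians (centre-surround) kernel, balanced to sum to zero."""
    g = lambda x, s: math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
    k = [g(x, sigma_c) - g(x, sigma_s) for x in range(-radius, radius + 1)]
    mean = sum(k) / len(k)
    return [v - mean for v in k]            # exact zero total mass

def respond(signal, kernel, pos):
    """Filter response at one position (no boundary handling; pos must fit)."""
    r = len(kernel) // 2
    return sum(kernel[i + r] * signal[pos + i] for i in range(-r, r + 1))

kernel = dog_kernel()
uniform = [1.0] * 40
edge = [0.0] * 20 + [1.0] * 20              # step edge at position 20

r_uniform = respond(uniform, kernel, 20)    # ~0: no response to constant input
r_edge = respond(edge, kernel, 20)          # substantial response at the edge
```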
Audition

Starting with only a one- or two-dimensional time series (the sound wave at one or two ears), the auditory system extracts a rich portrait of the auditory environment, accurately segmenting and locating auditory objects in the presence of noise, distortion, echoes and other signal imperfections. We study the question of how this is done, applying both algorithmic and neuroscientific tools.

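One concrete instance of localisation from two time series is recovering the interaural time difference by cross-correlating the two ear signals; the lag at which they best align indicates the source direction. The sketch below uses a synthetic noise signal and a hand-picked delay purely for illustration.

```python
import random

rng = random.Random(0)
n, delay = 400, 5                            # true interaural delay, in samples

left = [rng.gauss(0.0, 1.0) for _ in range(n)]
right = [0.0] * delay + left[:n - delay]     # right ear hears the sound 5 samples later

def xcorr(a, b, lag):
    """Correlation of a[t] with b[t + lag] over the valid overlap."""
    return sum(a[t] * b[t + lag] for t in range(len(a) - lag))

# The lag maximising the cross-correlation estimates the interaural delay.
best_lag = max(range(0, 20), key=lambda lag: xcorr(left, right, lag))
```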
2) Machine Learning
Bayesian statistics

Bayesian statistics is a framework for performing inference by combining prior knowledge with data, and as such has been influential in the understanding of intelligent learning systems. We work on many areas of Bayesian statistics, including variational methods for efficient inference in complex domains, model selection and non-parametric modelling, novel Markov chain Monte Carlo methods, semi-supervised learning, and the modelling of temporal sequences.

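The combination of prior knowledge and data is easiest to see in a conjugate example: a Beta prior over a coin's bias updated by binomial observations. The prior pseudo-counts and data below are arbitrary illustrative numbers.

```python
# Conjugate Beta-Binomial update: a Beta(a, b) prior over a coin's bias,
# combined with k heads in n flips, yields a Beta(a + k, b + n - k) posterior.
a, b = 2.0, 2.0          # prior pseudo-counts (an illustrative choice)
k, n = 7, 10             # observed data: 7 heads in 10 flips

post_a, post_b = a + k, b + (n - k)
posterior_mean = post_a / (post_a + post_b)   # 9/14, shrunk toward the prior mean 0.5
mle = k / n                                   # 0.7, the data-only estimate
```

The posterior mean sits between the maximum-likelihood estimate and the prior mean, quantifying how the prior regularises a small sample.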
Graphical models

Realistic models often require representing the dependencies between many random variables. Graphical models provide an elegant formalism for representing these dependencies and for doing efficient probabilistic inference and decision making. We study novel algorithms for approximate inference and methods for learning both parameters and the structure of graphical models from data.

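The efficiency gain from exploiting graph structure can be shown on the smallest non-trivial case: a chain of three binary variables, where sum-product message passing recovers a marginal without enumerating the joint. The potential tables below are arbitrary positive numbers chosen for illustration.

```python
import itertools

# Pairwise potentials on a chain x1 - x2 - x3 of binary variables.
psi12 = {(0, 0): 1.0, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 2.0}
psi23 = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 0.3, (1, 1): 1.5}

def brute_marginal_x2():
    """Marginal of x2 by enumerating all 2**3 joint configurations."""
    p = {0: 0.0, 1: 0.0}
    for x1, x2, x3 in itertools.product((0, 1), repeat=3):
        p[x2] += psi12[(x1, x2)] * psi23[(x2, x3)]
    z = p[0] + p[1]
    return [p[0] / z, p[1] / z]

def sumproduct_marginal_x2():
    """Marginal of x2 from messages passed inwards from both ends of the chain."""
    m1 = [sum(psi12[(x1, x2)] for x1 in (0, 1)) for x2 in (0, 1)]  # from x1
    m3 = [sum(psi23[(x2, x3)] for x3 in (0, 1)) for x2 in (0, 1)]  # from x3
    belief = [m1[x2] * m3[x2] for x2 in (0, 1)]
    z = sum(belief)
    return [v / z for v in belief]
```

On a chain of length N the message-passing version costs O(N) table operations rather than the exponential cost of enumeration, which is the point of the formalism.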
Kernel methods

Difficult real-world pattern recognition and function learning problems require that the learning system be highly flexible. Kernel methods such as Gaussian processes and support vector machines are one way of defining highly flexible non-parametric models based on similarities between data points. Gaussian processes, which correspond to neural networks with infinitely many hidden neurons, have proved powerful at avoiding some of the common pitfalls of learning, such as overfitting. We focus on how to make kernel methods even more flexible and efficient, how to learn the kernel from data, and how to use them in a variety of applications.

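The "similarities between data points" view becomes concrete in Gaussian-process regression: predictions are kernel-weighted combinations of the training targets. The sketch below uses a squared-exponential kernel with unit hyperparameters and a tiny hand-rolled linear solver, all illustrative choices.

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def k(x, y):
    """Squared-exponential kernel (unit amplitude and length-scale, illustrative)."""
    return math.exp(-0.5 * (x - y) ** 2)

X = [-1.0, 0.0, 1.0]                # training inputs
y = [math.sin(x) for x in X]        # noisy-free targets from an unknown function
noise = 1e-6

K = [[k(xi, xj) + (noise if i == j else 0.0) for j, xj in enumerate(X)]
     for i, xi in enumerate(X)]
alpha = solve(K, y)                 # alpha = (K + noise*I)^{-1} y

def gp_mean(x_star):
    """Posterior mean: a kernel-weighted combination of training targets."""
    return sum(alpha[i] * k(x_star, X[i]) for i in range(len(X)))
```

With near-zero observation noise the posterior mean interpolates the training data, e.g. `gp_mean(1.0)` is close to `math.sin(1.0)`.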
Reinforcement learning

Reinforcement learning studies how systems can actively learn about the transition and reward structure of their environments and come to choose appropriate actions. Apart from the links with conditioning and neuromodulation, we have studied various aspects of the trade-off between exploration and exploitation, the effects of approximation, and the discovery of hierarchical structure.

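The exploration-exploitation trade-off can be seen in miniature in Q-learning with an epsilon-greedy policy: the agent mostly exploits its current value estimates but occasionally explores at random. The chain MDP and all parameter values below are illustrative.

```python
import random

# A tiny deterministic chain MDP: states 0, 1, 2; actions 0 (left) / 1 (right);
# reaching state 2 yields reward 1 and ends the episode.
N_STATES, GOAL = 3, 2
rng = random.Random(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2     # learning rate, discount, exploration rate

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    reward = 1.0 if s2 == GOAL else 0.0
    return s2, reward

for _ in range(500):                  # episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy: explore with probability eps, otherwise exploit.
        if rng.random() < eps:
            a = rng.randrange(2)
        else:
            a = max((0, 1), key=lambda a_: Q[s][a_])
        s2, r = step(s, a)
        target = r + (0.0 if s2 == GOAL else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2
```

After training, the greedy policy moves right from every state, and the Q-values approach the discounted optimal returns (1 one step from the goal, gamma two steps away).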
Neural data analysis

The brain is perhaps the most complex subject of empirical investigation in scientific history. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine to an entire cortical area. Slowly, we are beginning to acquire the experimental tools that can gather the massive amounts of data needed to characterise this system. However, understanding and interpreting these data will also require substantial strides in inferential and statistical techniques. In collaboration with experimental laboratories, we have adapted machine learning techniques to characterise data from multiple extracellular electrodes, from identified single cells, and from local-field and magnetoencephalographic recordings. These studies have the potential to introduce powerful new theoretically motivated ways of looking at neural data.

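One of the most basic characterisations of spiking data of this kind is the peri-stimulus time histogram (PSTH): spike times are binned and averaged across trials to estimate a time-varying firing rate. The spike times below are synthetic, purely for illustration.

```python
# Peri-stimulus time histogram: average per-trial spike counts in small time bins.
trials = [
    [12.0, 55.3, 61.2, 63.8],
    [58.9, 60.4, 66.1, 90.2],
    [59.7, 62.5, 64.9],
]                                        # spike times in ms, one list per trial
bin_ms, t_max = 10.0, 100.0
n_bins = int(t_max / bin_ms)

psth = [0.0] * n_bins
for spikes in trials:
    for t in spikes:
        psth[int(t // bin_ms)] += 1.0
psth = [c / len(trials) for c in psth]   # mean spikes per trial per 10 ms bin

peak_bin = max(range(n_bins), key=lambda i: psth[i])
```

Here the spikes cluster around 60 ms across trials, so the PSTH peaks in the 60-70 ms bin, exposing a stimulus-locked response that no single trial shows cleanly.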