Keynotes

    • Modelling The Neural Mechanisms of Navigation In Insects

      Insect navigation has been a focus of behavioural study for many years, and provides a striking example of cognitive complexity in a miniature brain. We have used computational modelling to bridge the gap from behaviour to neural mechanisms by relating the computational requirements of navigational tasks to the types of computation offered by invertebrate brain circuits.

      We have shown that visual memories of multiple views could be acquired by associative learning in the mushroom body neuropil, allowing insects to recapitulate long routes. We have also proposed a circuit in the central complex neuropil that integrates sky compass and optic flow information on an outbound path and can thus steer the animal directly home; moreover, this circuit can be used for additional vector calculations, such as finding novel shortcuts. The models are strongly constrained by neuroanatomy, and are tested in realistic agent and robot simulations.
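
      The home-vector idea above can be sketched in a few lines: a path integrator accumulates displacement from heading and speed samples, and the stored home vector is simply the negated sum. This is an idealised, noise-free sketch (variable names and units are hypothetical), not the constrained central-complex circuit itself:

```python
import math

def integrate_outbound(steps):
    """Accumulate displacement from (heading, speed) samples,
    as a compass + optic-flow path integrator would."""
    x = y = 0.0
    for heading, speed in steps:   # heading in radians, speed in arbitrary units
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return x, y

def home_vector(steps):
    """The home vector is the negated accumulated displacement."""
    x, y = integrate_outbound(steps)
    distance = math.hypot(x, y)
    bearing = math.atan2(-y, -x)   # direction pointing back to the start
    return distance, bearing

# Outbound path: east for 3 units, then north for 4 units
outbound = [(0.0, 3.0), (math.pi / 2, 4.0)]
dist, bearing = home_vector(outbound)
# dist is 5.0; bearing points back toward the nest
```

      The same accumulated state supports the novel-shortcut idea: subtracting two stored vectors yields the vector between the two goals.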

    • Bayesian inference, reinforcement learning, and the cortico-basal ganglia circuit

      Bayesian inference is a standard way of handling uncertainty in sensory perception, and reinforcement learning is a common way of acting in unknown environments. While the two are often used in combination for perception and action in uncertain environments, the similarity of their computations has also been formulated as the duality of inference and control, or control as inference.

      In this talk, I will review these theoretical frameworks and discuss their implications in understanding the common circuit architectures of the sensory and motor cortices, and possible roles of the basal ganglia in motor and sensory processing.
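
      A minimal sketch of the control-as-inference duality mentioned above: treating the exponentiated reward as the likelihood of an "optimality" variable turns soft value iteration into backward message passing, and the soft-optimal policy into a posterior over actions. The two-state MDP below is a made-up toy, not an example from the talk:

```python
import math

# Toy deterministic MDP (states, actions, rewards are illustrative)
states = [0, 1]
actions = [0, 1]

def next_state(s, a):          # action a moves deterministically to state a
    return a

def reward(s, a):
    return 1.0 if a == 1 else 0.0

def soft_value_iteration(horizon):
    """log-sum-exp backup = backward message passing with exp(reward)
    as the likelihood of the optimality variable."""
    V = {s: 0.0 for s in states}
    for _ in range(horizon):
        V = {s: math.log(sum(math.exp(reward(s, a) + V[next_state(s, a)])
                             for a in actions))
             for s in states}
    return V

V = soft_value_iteration(horizon=5)
# The soft-optimal policy is pi(a|s) proportional to exp(r(s,a) + V(s')),
# i.e. a posterior over actions given optimality: inference, not search.
```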

    • Neuroinformatics, Neural networks and Neurocomputers for Brain-inspired AI: Challenges and Opportunities

      The talk briefly discusses current challenges in AI, including: efficient learning from data (interactive, adaptive, life-long, transfer); interpretability and explainability; personalised predictive modelling and profiling; multiple modalities of data (e.g. genetic, clinical, behavioural, cognitive, static, temporal, longitudinal); computational complexity; energy consumption; and human-machine interaction.

      Opportunities to address these challenges are presented through advances in Neuroinformatics, Neural networks and Neurocomputers (the 3N). Neuroinformatics offers a tremendous amount of data and knowledge about how the human brain and the nervous system work. Many brain information processing principles can now be implemented in novel Neural network computational models, such as: sparseness of computation, leading to much lower computational complexity and significant energy savings; diversity in the NN architecture, in terms of types of neurons and compartmentalisation of computations, which can improve results; cognitive computation, where bottom-up sensory information and top-down prior knowledge are combined to speed up the learning process; life-long and transfer learning; interactive and reinforcement learning (rather than batch mode); self-organisation (rather than a pre-defined number of layers and neurons); evolving spatio-temporal knowledge; and many more. Some of these principles have already been used in neural network models, such as SOM (Kohonen), ART (Grossberg), ECOS [1,2] and spiking neural networks (SNN) (Maass) [3]. The latter have inspired the development of neuromorphic hardware chips and Neurocomputers, characterised by much lower power consumption, massive parallelism and fast processing.
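
      As a concrete illustration of the sparse, event-driven computation attributed to SNNs above, here is a minimal leaky integrate-and-fire neuron; parameter values are illustrative, not drawn from any of the cited models:

```python
def lif_neuron(input_current, tau=10.0, threshold=1.0, dt=1.0):
    """Return the spike train (0/1 per time step) for a current trace.
    Output is sparse: the neuron emits an event only when its membrane
    potential crosses threshold, then resets."""
    v = 0.0
    spikes = []
    for i in input_current:
        v += dt * (-v / tau + i)    # leaky integration of the input
        if v >= threshold:          # threshold crossing -> spike event
            spikes.append(1)
            v = 0.0                 # reset after the spike
        else:
            spikes.append(0)
    return spikes

spikes = lif_neuron([0.3] * 20)     # constant drive yields a sparse spike train
```

      Downstream units only need to compute when a spike arrives, which is where the energy savings of neuromorphic hardware come from.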

    • Multi-objective Ensemble Learning and Its Applications

      Most, if not all, machine learning problems are defined by a single loss function. Yet the vast majority of these loss functions have two or more terms summed together through hyper-parameters. A closer examination reveals that there are in essence two or more conflicting objectives that a loss function tries to minimise, e.g., accuracy and regularisation. This talk formulates machine learning as a multi-objective problem, instead of combining different objectives into a single loss function through a weighted sum. While the weighted-sum approach is simpler, it requires additional time and effort to tune the hyper-parameters (weights). The talk starts with ensemble learning. It then describes a simple idea of multi-objective learning and its natural fit to ensembles. Existing multi-objective evolutionary algorithms can be used as multi-objective learning algorithms without requiring the objective functions to be differentiable or even continuous. Selected examples of multi-objective learning in class imbalance learning, software effort estimation and fair machine learning will be presented to illustrate the flexibility and generality of multi-objective learning. It is argued that multi-objective learning can be an effective approach to achieving different trade-offs in various practical learning scenarios.
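
      The contrast with the weighted sum can be made concrete: instead of a single weighted optimum, multi-objective learning keeps the whole set of non-dominated trade-offs (the Pareto front). A minimal sketch over hypothetical (error, complexity) pairs, both to be minimised:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the non-dominated points (the Pareto front)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical ensemble members scored as (error, complexity)
models = [(0.10, 9), (0.12, 4), (0.30, 2), (0.12, 6), (0.35, 2)]
front = pareto_front(models)
# front keeps (0.10, 9), (0.12, 4) and (0.30, 2): three distinct trade-offs,
# any of which a single weighted sum would have collapsed to one point
```

      Evolutionary algorithms such as NSGA-II search for this front directly, which is why differentiability of the objectives is never needed.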

    • Neural Spectrospatial Filter: On Beamforming in the Deep Learning Era

      Perception and Neurodynamics Laboratory
      Ohio State University

      As the most widely used spatial filtering approach for multi-channel signal separation, beamforming extracts the target signal arriving from a specific direction. We present an emerging approach based on multi-channel complex spectral mapping, which trains a deep neural network (DNN) to directly estimate the real and imaginary spectrograms of the target signal from those of the multi-channel noisy mixture. In this all-neural approach, the trained DNN itself becomes a nonlinear, time-varying spectrospatial filter. How does this conceptually simple approach perform relative to commonly used beamforming techniques on different array configurations and in different acoustic environments? We examine this question systematically on speech dereverberation, speech enhancement, and speaker separation tasks. Comprehensive evaluations show that multi-channel complex spectral mapping achieves speech separation performance comparable to or better than beamforming for different array geometries, and reduces to monaural complex spectral mapping in single-channel conditions, demonstrating the versatility of this new approach for multi-channel and single-channel speech separation. In addition, such an approach is computationally more efficient than popular mask-based beamforming. We conclude that this neural spectrospatial filter is capable of superseding traditional and mask-based beamforming.
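
      For readers unfamiliar with the conventional baseline, a time-domain delay-and-sum beamformer can be sketched in a few lines: each microphone signal is advanced by its steering delay and the aligned signals are averaged, reinforcing arrivals from the target direction. The two-microphone example and delays below are illustrative, not from the evaluations in the talk:

```python
def delay_and_sum(channels, delays):
    """channels: equal-length lists of samples, one per microphone;
    delays: integer steering delays (in samples) to advance each channel.
    Returns the averaged, time-aligned output."""
    n = len(channels[0])
    out = []
    for t in range(n):
        acc = 0.0
        for ch, d in zip(channels, delays):
            idx = t + d                       # advance by the steering delay
            acc += ch[idx] if 0 <= idx < n else 0.0
        out.append(acc / len(channels))
    return out

# Two-mic example: the same pulse arrives one sample later at mic 2
mic1 = [0.0, 1.0, 0.0, 0.0]
mic2 = [0.0, 0.0, 1.0, 0.0]
aligned = delay_and_sum([mic1, mic2], delays=[0, 1])
# aligned peaks at t = 1: the delayed copies add coherently
```

      This filter is linear and time-invariant once the delays are fixed; the point of the neural spectrospatial filter is precisely that the learned mapping is nonlinear and time-varying.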

    • Learning with No Data Collections

      By and large, the spectacular results of machine learning rely on the appropriate organization of huge data collections, which has strongly pushed the development of top-level solutions by big companies. In this talk we propose an orthogonal research direction, in which we expect that perceptual cognitive skills (e.g. in language, vision, and control) can emerge simply from environmental interactions, without needing to store and properly organize big data collections. The proposed approach moves the framework of statistical machine learning to that of learning over time, by solving optimization problems similar to those that are at the basis of laws in Physics. We show that any classic learning process arises from the forward solution of classic variational problems, and we provide preliminary experimental evidence of the effectiveness of the theory.
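
      One way to picture "learning over time" without stored data collections is gradient flow integrated forward in time over a data stream. This is only a hedged illustration of the general idea, not the variational formulation of the talk; the model, loss and stream are toy choices:

```python
def stream():
    """Environmental interaction: an endless stream of (x, y) pairs.
    Nothing is ever stored or re-organised into a dataset."""
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
    while True:
        yield from data

def learn_over_time(steps=1000, dt=0.01):
    """Weights evolve as a dynamical system driven by the stream:
    forward (Euler) integration of the instantaneous-loss gradient flow."""
    w = 0.0                         # one-parameter model y ~ w * x
    src = stream()
    for _ in range(steps):
        x, y = next(src)
        grad = 2 * (w * x - y) * x  # gradient of the instantaneous squared loss
        w -= dt * grad              # one forward step in time
    return w

w = learn_over_time()
# w drifts toward 2.0, learned purely from the passing stream
```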