2nd UK NI Node Congress - Titles and Abstracts

Invited Speakers

Gaute Einevoll - Norwegian University of Life Sciences, Norway (1400-1500 26/3/12)

Title: Modeling what you can measure with modern multielectrodes

Abstract: While extracellular electrical recordings have been the workhorse in electrophysiology, the interpretation of such recordings is not trivial. The recorded extracellular potentials in general stem from a complicated sum of contributions from all transmembrane currents of the neurons in the vicinity of the electrode contact. The duration of spikes, the extracellular signatures of neuronal action potentials, is so short that the high-frequency part of the recorded signal, the multi-unit activity (MUA), can often be sorted into spiking contributions from the individual neurons surrounding the electrode. However, no such simplifying feature aids us in the interpretation of the low-frequency part, the local field potential (LFP). To take full advantage of the new generation of silicon-based multielectrodes recording from tens or hundreds of positions simultaneously, we thus need to develop new data analysis methods.
From volume conduction theory it follows that the extracellular potentials can be calculated by adding contributions from the transmembrane currents around the electrode contact [1], and with morphologically reconstructed neurons a straightforward computational scheme can be used to calculate the extracellular potential from a single neuron at any point in space [2-4]. Due to the linearity of the electrostatic equations, the scheme directly generalizes to extracellular potentials generated by populations of neurons [5,6]. In this two-step computational scheme, morphologically reconstructed neurons are first simulated with compartmental modeling using a simulation program such as NEURON to provide transmembrane currents, and next the extracellular potentials are calculated based on these [2-5].
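For illustration, the second, volume-conductor step of this scheme can be written in a few lines: the potential at an electrode position is the sum of point-source contributions from each compartment's transmembrane current, assuming a homogeneous, isotropic extracellular medium. The sketch below (in Python) uses illustrative names and a typical conductivity value; it is not the simulation code used in the cited studies.

```python
import numpy as np

def extracellular_potential(currents, positions, electrode, sigma=0.3):
    """Point-source approximation of the extracellular potential.

    currents  : array (n_compartments, n_timesteps) of transmembrane currents
    positions : array (n_compartments, 3) of compartment midpoint coordinates
    electrode : array (3,) electrode contact position
    sigma     : extracellular conductivity (0.3 S/m is a commonly assumed value)

    Returns the potential time series at the electrode position; consistent
    units are assumed and unit handling is omitted in this sketch.
    """
    distances = np.linalg.norm(positions - electrode, axis=1)  # (n_compartments,)
    weights = 1.0 / (4.0 * np.pi * sigma * distances)          # volume-conductor weights
    return weights @ currents                                  # sum over compartments
```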
In the talk, I will briefly discuss some results from our group where this scheme has been used to illuminate (A) frequency filtering and size variation of extracellular signatures of action potentials [3], (B) the frequency spectra and spatial range of the local field potential (LFP) [4-6], and (C) the relationship of the LFP and multi-unit activity (MUA) to the underlying neural activity in an activated columnar population of pyramidal neurons [5]. Next, examples of newly developed analysis methods for multielectrode data, such as iCSD [7], laminar population analysis (LPA) [8] and population firing-rate model extraction [9], will be briefly presented. Finally, example results from a project involving the generation of test data to stimulate and aid the development and testing of automated spike-sorting algorithms for multielectrode data ([10], www.g-node.org/spike) will be shown.
[1] C Nicholson and JA Freeman, J Neurophysiol 38:356 (1975)
[2] G Holt, C Koch, J Comp Neurosci 6:169 (1999)
[3] KH Pettersen, GT Einevoll, Biophys J 94:784 (2008)
[4] H Linden et al, J Comp Neurosci 29: 423 (2010)
[5] KH Pettersen et al, J Comp Neurosci 24:291 (2008)
[6] H Linden et al, Neuron 72: 859 (2011)
[7] KH Pettersen et al, J Neurosci Meth 154:116 (2006)
[8] GT Einevoll et al, J Neurophysiol 97:2174 (2007)
[9] P Blomquist et al, PLoS Comp Biol 5:e1000328 (2009)
[10] GT Einevoll et al, Curr Opin Neurobiol 22 (2011), online first

 

Sean Hill - INCF, Sweden (1545-1630, 26/3/12)

Title: Toward an international infrastructure for collaborative neuroscience

Abstract: The International Neuroinformatics Coordinating Facility (INCF) was launched in 2005, following a proposal by the Global Science Forum of the Organisation for Economic Co-operation and Development (OECD) to create an organization that would coordinate an open international infrastructure, integrate heterogeneous neuroscience data and knowledge bases, and enable new insights from analysis, modeling and simulation. Here we present the INCF multi-phase strategy to deploy such an infrastructure, with specific capabilities and milestones. The first phase would establish a globally federated dataspace with searchable metadata. The second phase would develop an object-based data integration layer employing web services to ensure the unique identification of all data through ontologies and spatial coordinates, while using data models to access diverse data formats through standard interfaces. The third phase would establish standard workflow management for analysis, visualization, modeling and simulation, built on top of the data integration layer. The development of portal interfaces will be critical for providing interactive user access to data, analyses and simulation results. This infrastructure should facilitate international sharing, publication and integration of neuroscience data across multiple levels and scales.

 

Simon Laughlin - Cambridge University, UK (0900-1000 27/3/12)

Title: Energy efficiency shapes the design of neurons, circuits and codes

Abstract: An understanding of design principles helps us to reverse engineer brains. I will briefly review the evidence for design principles that improve energy efficiency. Energy is generally required to transmit, process and store information, and brains are no exception. Energy budgets, biophysical and theoretical models and experimental measurements have suggested Ten Principles that help brains operate effectively within a limited energy budget.
1) Wherever possible, avoid electricity and use chemistry – it is cheaper.
2) Organise mechanisms to avoid futile cycles.
3) Process directly at every stage, using basic physical and chemical interactions.
4) Adapt to changing operating conditions.
5) Minimise speed, accuracy and rate.
6) Transmit only what is needed.
7) Distribute information and use sparse codes.
8) Save on wire.
9) Below all, nanofy and
10) Above all, complexify.
Many of these principles have been discerned in peripheral sensory systems, where we have gained a sufficient understanding of the relationships between mechanism and function. However, evolutionary biology suggests that nervous systems are unlikely to have abandoned all of these principles at “higher levels” because they have lowly origins. Evolution will have added to this list, and it is only when we understand what is old and basic that we are truly shocked by the new.

 

Simon Schultz - Imperial College London, UK (0900-1000 28/3/12)

Title: The Mnemic code: reading information traces from neural ensembles

Abstract: Much attention has been given in the recent neural coding literature to developing tools for quantifying and reading out the information available from ensembles of simultaneously recorded neurons. Indeed, the development of experimental technology (multielectrode array recording and two-photon imaging) for recording the activity of large populations of neurons has made the improvement of such algorithms critical. In this talk, I will argue that such tools can be applied not only to study how a sensory stimulus "now" is encoded, but also to probe the dynamics of the system for traces of previously evoked activity - the "Mnemic" neural code.
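As an illustration of the kind of ensemble read-out referred to here, a population decoder can be as simple as a cross-validated classifier trained on spike-count vectors from simultaneously recorded neurons. The sketch below uses synthetic Poisson data and a logistic-regression decoder; the neuron counts, tuning and classifier choice are illustrative assumptions, not the speaker's methods.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic ensemble data: 200 trials x 50 neurons of Poisson spike counts,
# with half of the neurons weakly tuned to a binary stimulus.
rng = np.random.default_rng(0)
stimulus = rng.integers(0, 2, size=200)                    # two stimulus classes
tuning = np.concatenate([np.full(25, 2.0), np.zeros(25)])  # only 25 neurons are tuned
counts = rng.poisson(5.0 + np.outer(stimulus, tuning))     # (trials, neurons)

# Decode stimulus identity from the population response; cross-validated
# accuracy above chance (0.5) indicates readable stimulus information.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, counts, stimulus, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")
```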

 

Read Montague - Virginia Tech Carilion Research Institute and Department of Physics, Virginia Tech, USA and Wellcome Trust Centre for Neuroimaging, UCL, UK (1000-1100 28/3/12)

Unfortunately, Read was unable to attend our Congress due to personal reasons; his talk was replaced with one by Thomas Wolbers.

Title: Computational and neural phenotyping using economic games

Abstract: Social exchange occurs in species ranging from insects to humans and breaks down to varying degrees in a range of mental disorders. We and others have been developing models of the computational components of simple social exchange and using these to identify cognitive and neural phenotypes associated with healthy and diseased cognition. This talk will review some early results in applying this approach to Autism Spectrum Disorder and Borderline Personality Disorder. We view this as a first step toward identifying computations associated with mental problems resulting from a range of diseases, injury, or developmental disorders.

 

Thomas Wolbers - Centre for Cognitive and Neural Systems, University of Edinburgh (1000-1100 28/3/12)

Title: Dynamic and multimodal representations of space in the human brain



Invited Session - DBS Workshop (1415-1545 27/3/12)

Peter Brown - University of Oxford, UK

Title: Excessive synchrony in Parkinson's disease and its implications for therapeutic brain stimulation

Abstract: There is growing interest in how synchronised activity across populations of neurons might underlie impairment in some diseases, and there is now a general consensus that exaggerated oscillatory synchrony occurs within and between the basal ganglia and cerebral cortex in patients with Parkinson's disease. In particular, activity in the beta frequency band (13-30 Hz) is prominent, and can be attenuated by treatment with dopaminergic drugs. The latter suggests that elevated beta activity may be a key disturbance in Parkinson's disease (PD). Here, I will consider both correlative evidence and tests of causality that support a mechanistic link between pathologically exaggerated beta activity and stiffness and slowness of movement in PD. Questions, however, remain with regard to the quantitative importance of beta and its failure to account for tremor. Regardless, new findings with respect to beta activity can help optimise surgical targeting and provide the basis for more sophisticated forms of therapeutic stimulation for Parkinson's disease, such as those using closed-loop feedback control and specific temporal patterning.
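As a simple illustration of how beta-band activity is typically quantified from such recordings (this is not the analysis pipeline of the studies discussed), the sketch below estimates spectral power in the 13-30 Hz band from an LFP trace; the window length and the synthetic test signal are illustrative.

```python
import numpy as np
from scipy.signal import welch

def beta_band_power(lfp, fs, band=(13.0, 30.0)):
    """Integrated spectral power of an LFP signal in the beta band (13-30 Hz)."""
    freqs, psd = welch(lfp, fs=fs, nperseg=int(2 * fs))  # 2 s windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

# Illustrative use on a synthetic signal containing a 20 Hz component plus noise.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
lfp = np.sin(2 * np.pi * 20 * t) + np.random.randn(t.size)
print(beta_band_power(lfp, fs))
```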

 

Madeleine Lowery - University College Dublin, Ireland

Title: Modelling Suppression of Neural Oscillations by Deep Brain Stimulation

Abstract: Despite the application of Deep Brain Stimulation (DBS) as an effective treatment for the symptoms of Parkinson’s disease, the underlying mechanisms by which DBS works remain unclear. As a result, the setting of parameters remains largely a trial-and-error process. Given the vast array of combinations available, optimisation of stimulus parameters on a subject-specific basis is challenging. Recent evidence suggests that DBS, similar to levodopa, can result in the suppression of low-frequency pathological oscillations within the basal ganglia. Whether such oscillations are epiphenomenal or causal in nature remains to be determined. By providing a framework for understanding the processes involved and allowing the effects of variables that are not accessible experimentally to be examined, computational modelling can help shed light on some of these issues. Appropriate models can enable experimental hypotheses to be tested and cause-and-effect relationships to be established. Computational models may be divided into phenomenological models, which capture essential features of the behaviour of the systems involved, and physiologically or physically based models, which aim to describe the details of the underlying processes. In this paper, two complementary models of DBS which utilise these distinct approaches are presented. The models are used to explore the generation, and suppression by DBS, of low-frequency neural oscillations within the cortico-basal ganglia network. The first model applies theory from non-linear control systems to describe the generation of low-frequency pathological oscillations within a closed-loop network of synchronised neurons and the suppression of these oscillations through DBS. The concept of dither is used to explain how high-frequency stimulation can be used to suppress unwanted low-frequency oscillations by reducing the effective gain around the network. The second model describes the activity of individual neurons within a model of the closed-loop cortico-basal ganglia network. Possible mechanisms for the generation of oscillations within the network are explored. The model is then used to predict the influence of DBS parameters on oscillations in the tremor and beta frequency ranges. Concepts from both models are combined to yield an integrated model of volume conduction effects and beta-band activity in the cortico-basal ganglia network. The model is used to test the clinical hypothesis that closed-loop control of DBS amplitude may be possible, based on the average rectified value of beta-band local field potential oscillations.
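As an illustration of the closed-loop hypothesis mentioned at the end, the sketch below computes the average rectified value (ARV) of beta-band-filtered LFP segments and adjusts the stimulation amplitude with a simple proportional rule; the filter order, gain, target and amplitude limits are illustrative assumptions rather than parameters of the models described above.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def beta_arv(lfp, fs, band=(13.0, 30.0)):
    """Average rectified value (ARV) of the beta-band component of an LFP segment."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    return np.mean(np.abs(sosfiltfilt(sos, lfp)))

def update_dbs_amplitude(amplitude, arv, target_arv, gain=0.5,
                         amp_min=0.0, amp_max=3.0):
    """Proportional controller: raise stimulation when beta ARV exceeds the target."""
    amplitude += gain * (arv - target_arv)
    return float(np.clip(amplitude, amp_min, amp_max))

# Illustrative loop over consecutive 1 s LFP segments (synthetic data).
fs, amplitude, target = 1000.0, 1.0, 0.05
for _ in range(5):
    segment = np.random.randn(int(fs))   # stand-in for a recorded LFP segment
    amplitude = update_dbs_amplitude(amplitude, beta_arv(segment, fs), target)
    print(f"new DBS amplitude: {amplitude:.2f} mA")
```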

 

Christian Hauptmann and Peter Tass - Forschungszentrum Jülich, Germany

Title: Restoration of segregated, physiological neuronal connectivity by desynchronizing stimulation

Abstract: The loss of segregation of neuronal signal processing pathways is an important hypothesis for explaining the origin of functional deficits associated with Parkinson’s disease. A modeling approach is utilized to study the influence of deep brain stimulation on the restoration of segregated activity in the target structures. Besides the spontaneous activity of the target network, the model considers weak sensory input mimicking signal processing tasks, electrical deep brain stimulation delivered through a standard DBS electrode, and synaptic plasticity. We demonstrate that the sensory input is capable of inducing a modification of the network structure which results in segregated microcircuits if the network is initialized in the healthy, desynchronized state. Depending on its strength and coverage, the sensory input is capable of restoring the functional sub-circuits even if the network is initialized in the synchronized, pathological state. Weak coordinated reset stimulation, applied to a network featuring a loss of segregation caused by global synchronization, is able to restore the segregated activity and to resolve the pathological, synchronized activity.
Based on these results, we discuss novel technical concepts for the stimulation of neuronal structures with the aim of establishing therapeutic deep brain stimulation.
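For readers unfamiliar with coordinated reset stimulation, the sketch below illustrates its basic timing pattern as commonly described in the literature: stimuli are delivered through several electrode contacts, with onsets staggered across contacts within each period of the pathological rhythm, so that one synchronised population is split into phase-shifted sub-populations. The contact count, rhythm frequency and number of cycles are illustrative and are not taken from the cited models.

```python
def coordinated_reset_onsets(n_contacts=4, rhythm_freq=20.0, n_cycles=3):
    """Stimulus onset times (s) per contact under a coordinated reset pattern.

    Within each period of the pathological rhythm, the contacts are stimulated
    one after another, each delayed by 1/n_contacts of the period.
    """
    period = 1.0 / rhythm_freq
    return {contact: [cycle * period + contact * period / n_contacts
                      for cycle in range(n_cycles)]
            for contact in range(n_contacts)}

for contact, onsets in coordinated_reset_onsets().items():
    print(f"contact {contact}: onsets at {[round(t, 4) for t in onsets]} s")
```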
References:
• Tass PA 1999 Phase resetting in medicine and biology: Stochastic modelling and data analysis, Springer Verlag, Berlin
• Tass PA 2003 Biol. Cybern. 89 81-88
• Tass PA, Majtanik M 2006 Biol. Cybern. 94 58-66
• Hauptmann C and Tass PA 2007 BioSystems 89 173-181
• Hauptmann C and Tass PA 2009 J. Neural Eng. 6 016004
• Tass PA and Hauptmann C 2009 Restorative Neurology and Neurosci. 27 589
• Hauptmann C et al. 2009 J. Neural Eng. 6 066003
• Tass PA et al. 2009 Phys. Rev. E 80 011902

 

Contributed Talks

1. Bruce Graham - University of Stirling, UK (1700-1730, 26/3/12)

Title: A computational study of the influence of synaptic cooperativity on synaptic plasticity

Abstract: A “grand challenge” for neuroscience is to produce a coherent picture of long-term synaptic plasticity and its role in learning in the intact, behaving animal. A bewildering range of stimulus protocols have been shown to induce long-term synaptic plasticity in brain tissue slices and cultures. It is difficult to relate these results to the stimulus patterns a neuron in an intact animal is likely to experience, in the precise neural conditions in which it experiences them, including spatial and temporal patterns of inhibition and neuromodulation. Computational modelling can help to bridge this gap, because with a detailed model of a neuron we can study the synaptic signals that might underpin plasticity at spatially distributed synapses, which can be given precise and independently timed patterns of spiking input formulated to approximate in vivo activity.
In this study we use a detailed computational model of a hippocampal CA1 pyramidal cell (PC) to explore the effects of different stimulus patterns on synaptic plasticity at the distinct excitatory pathways from entorhinal cortex (EC) and hippocampal region CA3, and attempt to relate this to the likely activity patterns and neural environment experienced by the PC during theta/gamma activity in the awake, behaving rat. To this end we assume that stimulus patterns likely to induce plasticity are near-synchronous spiking activity in groups of afferent neurons, corresponding to sets of coactive neurons on a gamma cycle during the theta rhythm. Such cells may fire single spikes or high-frequency bursts. A key determinant of long-term synaptic plasticity is the calcium influx at the synapse, the major source for which is the synaptic NMDAR-mediated current (INMDA), with possible contributions also from voltage-gated calcium channels (VGCCs) and intracellular calcium stores. Depolarisation of the spine head during the time course of the INMDA causes greater calcium influx. Such depolarisation can result from high-frequency stimuli to the synapse itself, activity at nearby synapses causing depolarisation of the dendritic branch and possibly a sodium- or calcium-mediated dendritic spike, or from back-propagating action potentials (bAPs) originating at the cell body. Consequently, we measure the peak calcium level (pCa) reached in each spine head following stimulation and use it as an indicator of the likely plasticity outcome, which may be LTP, LTD or no change, for the different stimulus patterns.
Firstly, we consider the effects of different stimulus patterns applied to spatially diffuse or clustered synapses within a single dendritic layer (stratum radiatum, SR, or stratum lacunosum-moleculare, SLM), each of which contains synapses innervated from a particular source (CA3 or EC, respectively). We then consider possible interactions between layers. Due to the voltage dependence of NMDA channels and VGCCs, as dendritic depolarisation increases when more synapses are stimulated synchronously, pCa in each spine head rises as well. This rise is rather linear in the number of stimulated synapses if NMDA channels are the only source of spine head calcium, until pCa saturates. The rise becomes sigmoidal with the addition of VGCCs and the resultant ability to generate dendritic calcium spikes. Small additional nonlinearities in the rise are also seen in SR when the synaptic stimulation becomes strong enough to cause somatic spikes which backpropagate into the proximal dendrites.
The gain of the synapse-number-to-pCa function (the slope of the rise in pCa) is much greater in electrically compact cells, in which the density of the A-type potassium channels (KA), which rises with distance from the soma in the apical dendrites, is reduced. It is also much greater if all synapses are located on a single dendritic branch, or if synapses receive a burst input rather than a single presynaptic spike. A burst of stimuli to even a small number of synapses is far more effective than single stimuli to large numbers of distributed synapses in raising pCa levels. Moderate simultaneous inputs (tens of coactive inputs) to SR and SLM may combine nonlinearly to give a much higher pCa in their synapses than if the same input is provided to SR or SLM alone. However, this only occurs for a low density of KA and if the individual pathway inputs are just subthreshold for regenerative NMDA and VGCC activity. The pathways remain more sensitive to changes in the number of active synapses within the pathway than to activity in the adjacent pathway. So while cooperative synaptic plasticity between distinct input pathways is possible, it requires the cell state and the input patterns to be tightly constrained. This may make such cooperativity unlikely to be the usual driver of plasticity in vivo.
These results show that peak calcium in a spine head is a good measure of the amount of synaptic activity arriving in a dendritic layer. In other words, it provides a local (at each spine) measure of global postsynaptic activity, corresponding to the “activation function” that is the basis of artificial neural network (ANN) learning rules. This “activation function” is specific to a particular input pathway and does not depend on somatic spiking activity in the cell (though it may be influenced by such activity). Without VGCCs, pCa is a reasonably linear function of the number of active synapses, up until a saturation point. This nicely corresponds to the linear “activation function”, the weighted sum of the inputs, used in many ANN learning rules, including Hebbian rules and the BCM rule. Addition of VGCCs results in a nonlinear, sigmoidal “activation function”, which is the basis of backpropagation learning in multilayer perceptrons. However, this “activation function” is subject to noise. Otherwise identical spine heads may experience different pCa levels in response to the same stimulation, purely due to their position in the dendrites and the consequent variation in local voltage. This is amplified by the voltage-dependent nature of calcium influx into spine heads, leading to a wide variation in pCa across the synaptic population. This is rather unsatisfactory for graded synaptic learning. In SR, once synaptic activity is sufficient to cause somatic spiking, proximal synapses see higher pCa levels due to bAPs. This bias may be flattened or even tipped into reverse by coincident SLM activity, so that distal SR synapses may see higher pCa levels. This could be a mechanism for generating the so-called dendritic democracy, in which more distal synapses become stronger and all synapses have a similar influence on somatic depolarisation. The effectiveness of burst stimulation in raising pCa suggests that when a cell receives patterns of activity that involve both single presynaptic spikes and bursts of spikes, only the bursting synapses are likely to undergo LTP.
Synapses receiving single spikes will contribute to the postsynaptic activity level and possibly generate cell output in the form of action potentials, but will not undergo plasticity. The experimental evidence that LTD results from prolonged low-frequency stimulation suggests that such synapses may exhibit LTD if they continually receive single spikes with no intervening bursts.
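To make the "activation function" analogy concrete, the toy sketch below maps the number of synchronously active synapses to a peak spine calcium value that rises roughly linearly to saturation without VGCCs and sigmoidally with them. The functional forms and constants are illustrative only and are not taken from the compartmental model used in this study.

```python
import numpy as np

def peak_spine_calcium(n_active, vgcc=False, saturation=100, half_activation=40,
                       steepness=8.0):
    """Toy 'activation function': peak spine calcium vs number of co-active synapses.

    Without VGCCs: roughly linear rise that saturates (NMDAR-only influx).
    With VGCCs: sigmoidal rise reflecting regenerative dendritic calcium events.
    Units and parameter values are arbitrary; only the shapes are meaningful.
    """
    n = np.asarray(n_active, dtype=float)
    if vgcc:
        return 1.0 / (1.0 + np.exp(-(n - half_activation) / steepness))
    return np.minimum(n, saturation) / saturation

n_synapses = np.arange(0, 121, 10)
print("NMDAR only :", np.round(peak_spine_calcium(n_synapses), 2))
print("with VGCCs :", np.round(peak_spine_calcium(n_synapses, vgcc=True), 2))
```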

 

2.  María Valdés Hernández - University of Edinburgh, UK (1200-1230 28/3/12)

Title: The human brain imaged in ageing and disease: models, standards and techniques

Abstract: The ageing brain exhibits features not found in healthy young adults. The thickening of the skull and meningeal layers, the accumulation of mineral deposits, and the manifestations of global brain atrophy, small vessel disease and ischaemic and haemorrhagic stroke lesions are common structural changes that start to emerge, to different degrees, in middle-aged healthy adults. Skull thickening has been largely overlooked, the identification and characterisation of mineral deposits from radiological images is described inconsistently in the literature, white matter lesions and ischaemic stroke lesions have been reported as part of the same volumetric measurement due to their similar appearance on structural MRI, and there is a lack of age-relevant normal brain registration templates. We intend to discuss recent work in modelling anatomical brain structures, finding standards to characterise the “normal” ageing brain, and developing techniques to quantitatively assess lesions and tissues in ageing and disease, acknowledging the diversity and the challenges it poses.

 

3. Shabnam Nargis Kadir - Rutgers University / Imperial College London, UK (1130-1200, 28/3/12)

Title: Spike sorting for large dense arrays
Authors: Shabnam Kadir, Dan Goodman, John Schulman, Gyorgy Buzsaki, Kenneth D. Harris

Abstract: Spike sorting, the classification of which action potentials belong to which neuron, is essential to the study of how neuronal populations process information. Recent developments in electrode manufacture allow simultaneous recording from hundreds of channels, in principle permitting thousands of neurons to be monitored at once. Existing automatic spike sorting algorithms, however, do not scale to such high channel-count data. In particular, current algorithms such as KlustaKwik cannot deal with cases where multiple spikes occur at the same time, even on spatially distant channels. For high-count probes, this is the rule, not the exception.
Here we describe a new approach for sorting high channel-count data. In the first step, spikes are detected as areas of spatiotemporally continuous threshold crossings, allowing for robust detection of temporally overlapping but spatially separated spikes. In the second step, we introduce a "distributional" mixture of Gaussians EM algorithm, in which the features on subthreshold channels are replaced with a fixed probabilistic model. This ensures that noise from the large number of subthreshold channels does not swamp the signal from the few suprathreshold channels.
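A minimal sketch of the first, detection step is given below: threshold crossings that are contiguous in time and across neighbouring channels are grouped into candidate spikes by connected-component labelling. The negative-going threshold convention and the assumption that adjacent rows of the data array are adjacent channels on the probe are illustrative simplifications, and the distributional EM step is not shown.

```python
import numpy as np
from scipy.ndimage import label

def detect_spikes(data, threshold):
    """Group spatiotemporally contiguous threshold crossings into candidate spikes.

    data      : array (n_channels, n_samples), channels ordered by probe geometry
    threshold : detection threshold (same units as data); crossings are negative-going
    Returns a list of (channel_indices, sample_indices) pairs, one per candidate spike.
    """
    crossings = data < -threshold                       # boolean (channels, samples)
    # 8-connectivity: crossings on adjacent channels/samples merge into one event,
    # so two simultaneous spikes on distant channels remain separate events.
    labels, n_events = label(crossings, structure=np.ones((3, 3)))
    return [np.nonzero(labels == k) for k in range(1, n_events + 1)]
```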
To test the efficacy of our algorithm, we create a data set where ground truth is available through the addition of a set of spikes from one recording to a second recording made with the same probe. Performance of the new algorithm is comparable to that achieved by supervised learning based on ground truth, suggesting that performance is close to optimal.

 

4. Liam McDaid - University of Ulster, N.Ireland (1630-1700 25/3/12)

Title: The functional significance of Bidirectional Coupling between Astrocytes and Neurons

Abstract: This talk presents a model of neuron-astrocyte interaction which shows that astrocytes have a role to play in LTP/LTD, where neuron-astrocyte interactions at a synaptic site can cause plasticity at other, remote sites via slow inward currents (SICs). The model also demonstrates how an astrocyte-induced signal can cause dynamic synchronization between neurons. Much evidence indicates that cognitive and behavioral functions rely on flexible coordination among distributed neural activities within and between cortical areas. However, although several mechanisms have been proposed for synchronization, its physical basis remains obscure. Our model shows how dynamic coordination in the brain may emerge from bidirectional communication between neurons and astrocytes. Results will also be presented which demonstrate how direct and indirect signalling of retrograde messengers (2-AG) can give rise to a form of self-repair. In particular, it will be demonstrated that indirect signalling via an astrocyte results in an increase in the probability of release (PR) at synaptic sites, and that this signalling pathway is the catalyst for self-repair of damaged or low-probability synapses.
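As a toy illustration of the indirect-signalling idea (not the published model), the sketch below applies a simple update rule in which an astrocyte-derived signal nudges the release probability of weak synapses towards a healthy target while leaving stronger synapses unchanged; the constants and the target value are arbitrary assumptions.

```python
def repair_step(release_probs, astro_signal=0.05, target=0.6):
    """One update of a toy self-repair rule: an astrocyte-derived signal raises
    the release probability (PR) of low-PR synapses towards a healthy target."""
    return [min(pr + astro_signal, target) if pr < target else pr
            for pr in release_probs]

# A damaged synapse (PR = 0.1) recovers over repeated updates; healthy ones are untouched.
prs = [0.1, 0.6, 0.7]
for _ in range(12):
    prs = repair_step(prs)
print([round(pr, 2) for pr in prs])
```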

 

5. Thomas Nowotny - University of Sussex (1230-1300 28/3/12)

Title: GPU enhanced Neuronal Networks (GeNN): Why use code generation for GPUs?

Abstract: In this presentation I will discuss the GeNN (GPU enhanced Neuronal Networks) framework, which is based on automatic code generation for the NVIDIA CUDA application programming interface. Unlike precompiled simulator systems for GPUs, the generation of custom code for individual neuronal network simulations and for each specific detected GPU hardware architecture has many decisive advantages: the system can provide for a large number of potential neuron and synapse models, it can optimise for specific network structures and GPU hardware properties, and it can be extended more easily. The present beta version shows encouraging, competitive computing performance and is available under the GPL v2 license at http://genn.sourceforge.net
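To illustrate the code-generation idea in general terms (this is not GeNN's actual API), the sketch below fills a CUDA kernel template from a neuron-model description, so that the update equations and model name are baked into the generated source rather than interpreted at run time; the template and the integrate-and-fire update string are purely illustrative.

```python
KERNEL_TEMPLATE = """
__global__ void update_{name}(float *v, const float *i_syn, int n) {{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) {{
        float V = v[idx];
        {update_code}
        v[idx] = V;
    }}
}}
"""

def generate_kernel(name, update_code):
    """Return CUDA C source for a per-neuron state-update kernel."""
    return KERNEL_TEMPLATE.format(name=name, update_code=update_code)

# A leaky integrate-and-fire update written as C code and compiled into the kernel.
lif_update = "V += 0.1f * (-V + i_syn[idx]); if (V > 1.0f) V = 0.0f;"
print(generate_kernel("lif", lif_update))
```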

6. KongFatt Wong-Lin - University of Ulster, N.Ireland (1730-1800 26/3/12)

Title: Neuronal circuit model and dynamics of the serotonergic system

Abstract: Serotonin is one of the oldest neurotransmitters in evolutionary terms, and the serotonergic system is complex and multifaceted. Neurons that release serotonin originate mainly in the dorsal and median raphe nuclei of the brainstem. These small nuclei supply serotonin to various parts of the brain, modulate the cellular excitability and network properties of targeted brain areas, and regulate mood, cognition and various behaviours. Recent studies have shown that dorsal raphe nucleus (DRN) neuronal activities can encode rewarding (e.g. appetitive) and unrewarding (e.g. aversive) behaviours. Other experimental work has also shown that DRN neurons can exhibit heterogeneous spiking behaviours. Dysfunctions of the serotonergic system are implicated in mood, depressive and other psychiatric disorders. Although this important system has been extensively studied, an integrative account of its underlying neurobiological computation has yet to be established. In this work, we build and study a basic spiking neuronal network model of the DRN constrained by neuronal properties observed in in vitro and in vivo experiments. Specifically, we use an efficient adaptive quadratic integrate-and-fire neuronal model to capture the slow afterhyperpolarization current and occasional bursting behaviour of serotonergic neurons, and the fast spiking activity of the non-serotonergic inhibitory neurons. Provided that our noisy and heterogeneous spiking neuronal network model adopts a feedforward inhibitory network architecture, it is able to replicate the main features of DRN neuronal activities recorded in monkeys performing a reward-based memory-guided saccade task. Interestingly, our network model exhibits theta-band oscillation, which has been observed in other experimental studies. The theta-band oscillation is especially pronounced among the non-serotonergic inhibitory neurons during the rewarding outcome of a simulated trial, thus forming a model prediction. By varying the inhibitory synaptic strengths and the afferent inputs, we find that the network model can oscillate over a range of relatively low frequencies, allow the co-existence of multiple stable frequencies, and let spike synchrony spread from a local neural subgroup to the whole network. Our model suggests a plausible network architecture, provides interesting model predictions that can be experimentally tested, and offers a sufficiently realistic multi-scale model for simulations of serotonergic neuromodulation. If time permits, I will also briefly discuss other related modelling efforts. This is joint work with Mr. Alok Joshi, Prof. T. Martin McGinnity and Prof. Girijesh Prasad.
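For illustration, an adaptive quadratic integrate-and-fire neuron of the kind mentioned here can be written in a few lines (the Izhikevich formulation, with the slow recovery variable standing in for an afterhyperpolarisation/adaptation current). The parameter values below give regular spiking and are illustrative; they are not those of the DRN model described in the abstract.

```python
import numpy as np

def adaptive_qif(i_ext, dt=0.1, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Adaptive quadratic integrate-and-fire neuron (Izhikevich form).

    dv/dt = 0.04 v^2 + 5 v + 140 - u + I    (membrane potential, mV)
    du/dt = a (b v - u)                     (slow adaptation / AHP-like variable)
    On spike (v >= 30 mV): v <- c, u <- u + d.
    Returns spike times (ms) for an external input current trace i_ext.
    """
    v, u, spikes = c, b * c, []
    for step, I in enumerate(i_ext):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

# Regular spiking in response to a constant input for 500 ms (dt = 0.1 ms).
print(adaptive_qif(np.full(5000, 10.0))[:5])
```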

 

SIG Sessions (1000-1300, 27/3/12)

SIG1
Thomas Nielsen (University of Leicester) - Spatiotemporal and probabilistic neuroinformatics
Riccardo Storchi (University of Manchester) - First spike latency coding for the direction of whisker deflection in the subcortical somatosensory pathway

SIG2
Talk by Albert Burger (Heriot-Watt University and MRC Human Genetics Unit) - Summary from the SIG2-Workshop: Image-based Neuroinformatics

SIG3
Piotr Dudek (University of Manchester) - Neurally-inspired engineering: Overview of SIG3 Activities
Jannetta Steyn (Newcastle University) - Towards integrating silicon and biological neurons in the crab stomatogastric ganglion
Liam McDaid (University of Ulster) - Neural Computing Platform using Memristor Synapses

SIG4
Talk by Padraig Gleeson (University College London) - Current state of the art in tool development for computational neuroscience modelling: a summary of the latest developments in software for computational neuroscience from the NeuroML/BrainScaleS CodeJam workshop.

SIG5
Talk by Marc de Kamps (University of Leeds) - Update on SIG5.