Schedule for: 15w5158 - Connecting Network Architecture and Network Computation

Arriving in Banff, Alberta on Sunday, December 6 and departing Friday December 11, 2015
Sunday, December 6
16:00 - 17:30 Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk - Professional Development Centre)
17:30 - 19:30 Dinner
A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
20:00 - 22:00 Informal gathering
Beverages and a small assortment of snacks are available on a cash honor system
(Corbett Hall Lounge (CH 2110))
Monday, December 7
07:00 - 08:45 Breakfast
Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
08:45 - 09:00 Introduction and Welcome by BIRS Station Manager (TCPL 201)
09:00 - 09:35 Markus Diesmann: Towards multi-layered multi-area models of cortical networks
Theoretical research on the local cortical network has mainly been concerned with the study of random networks composed of one excitatory and one inhibitory population of neurons. This led to fundamental insights on the correlation structure of activity. The present contribution discusses next steps towards a more realistic representation of the cortical microcircuit and the brain-scale architecture. The talk first introduces the draft of a full-scale model of the microcircuit at cellular and synaptic resolution [1] comprising about 100,000 neurons and one billion local synapses connecting them. The emerging network activity exhibits fundamental properties of in vivo activity: asynchronous irregular activity, layer-specific spike rates, higher spike rates of inhibitory neurons as compared to excitatory neurons, and a characteristic response to transient input. As the formal executable specification is publicly available, the model can serve as a testbed for theoretical approaches and can iteratively be refined. A key element in the mean-field theory of systems of heterogeneous populations is the transfer function of the individual elements. Recent progress [2] enables insights into the anatomical origin of oscillations in the multi-layered circuitry [3]. Despite these successes, the explanatory power of local models is limited as half of the synapses of each excitatory nerve cell have non-local origins. The second part of the talk therefore argues for the need for brain-scale models to arrive at self-consistent descriptions and addresses the arising technological and theoretical questions: Are simulations of the required size feasible [4]? Are full-scale simulations required as opposed to downscaled representatives [5]? How can anatomical and physiological constraints with their respective uncertainty margins be integrated to arrive at a multi-area model with a realistic activity state [6]?
www.nest-initiative.org www.csn.fz-juelich.de www.opensourcebrain.org
[1] Potjans TC, Diesmann M, Cerebral Cortex 24(3):785-806 (2014)
[2] Schuecker J, Diesmann M, Helias M, Phys Rev E 92:052119 (2015)
[3] Bos H, Diesmann M, Helias M, arXiv:1510.00642 [q-bio.NC] (2015)
[4] Kunkel S, Schmidt M, Eppler JM, Plesser HE, Masumoto G, Igarashi J, Ishii S, Fukai T, Morrison A, Diesmann M, Helias M, Front Neuroinform 8:78 (2014)
[5] van Albada S, Helias M, Diesmann M, PLoS Comput Biol 11(9):e1004490 (2015)
[6] Schuecker J, Schmidt M, van Albada S, Diesmann M, Helias M, arXiv:1509.03162 [q-bio.NC] (2015)
(TCPL 201)
09:35 - 10:00 Robert Rosenbaum: Correlations and dynamics in spatially extended balanced networks
Balanced networks offer an appealing theoretical framework for studying neural variability since they produce intrinsically noisy dynamics with some statistical features similar to those observed in cortical recordings. However, previous balanced network models face two critical shortcomings. First, they produce extremely weak spike train correlations, whereas cortical circuits exhibit both moderate and weak correlations depending on cortical area, layer and state. Second, balanced networks exhibit simple mean-field dynamics in which firing rates linearly track feedforward input. Cortical networks implement non-linear functions and produce non-trivial dynamics, for example, to produce motor responses. We propose that these shortcomings of balanced networks are overcome by accounting for the distance dependence of connection probabilities observed in cortex. We generalize the mean-field theory of firing rates, correlations and dynamics in balanced networks to account for distance-dependent connection probabilities. We show that, under this extension, balanced networks can exhibit either weak or moderate spike train correlations, depending on the spatial profile of connections. Networks that produce moderate correlation magnitudes also produce a signature spatial correlation structure. A careful analysis of in vivo primate data reveals this same correlation structure. Finally, we show that spatiotemporal firing rate dynamics can emerge spontaneously in spatially extended balanced networks. Principal component analysis reveals that these dynamics are fundamentally high-dimensional and reliable, suggesting a realistic spiking model for the rich dynamics underlying non-trivial neural computations. Taken together, our results show that spatially extended balanced networks offer a parsimonious model of cortical circuits.
(TCPL 201)
10:10 - 10:40 Coffee Break (TCPL Foyer)
10:40 - 11:15 Daniel Marti: Structured connectivity as a source of slow dynamics in randomly connected networks
Cortical networks exhibit dynamics on a range of timescales. Slow dynamics at the timescale of hundreds of milliseconds to seconds carry information about the recent history of the stimulus, and can therefore act as a substrate for short-term memory. How networks composed of fast units, like neurons, can generate such slow dynamics is still an open question. One possible mechanism is based on positive feedback: in randomly connected networks, the collective timescale can be set arbitrarily long by balancing the intrinsic decay rate of individual neurons with recurrent input. This type of mechanism relies however on fine-tuning the synaptic coupling. Another possibility is that slow dynamics are induced by structured connectivity between neurons. In fact, the connectivity of cortical networks is not fully random. The simplest and most prominent deviation from randomness found in experimental data is the overrepresentation of bidirectional connections among pyramidal cells. Here we argue that symmetry in the connectivity can act as a robust mechanism for the generation of slow dynamics in networks of fast units. Using numerical and analytical methods, we investigate the dynamics of networks with partially symmetric structure. We consider the two dynamical regimes exhibited by random neural networks: the weak-coupling regime, where the firing activity decays to a single fixed point unless the network is stimulated, and the strong-coupling or chaotic regime, characterized by internally generated fluctuating firing rates. We determine how symmetry modulates the timescale of the noise filtered by the network in the weak-coupling regime, as well as the timescale of the intrinsic rate fluctuations in the chaotic regime. In both cases symmetry increases the characteristic asymptotic decay time of the autocorrelation function. 
Furthermore, for sufficiently symmetric connections, a network operating in the chaotic regime exhibits aging effects, whereby the timescale of the rate fluctuations slowly grows as time evolves. Such history-dependent dynamics might constitute a new mechanism for short-term memory storage in random networks.
(TCPL 201)
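A minimal numerical sketch of the symmetry mechanism described in the abstract above (not part of the talk; all parameters are illustrative): for linearized rate dynamics dx/dt = -x + Jx, the slowest relaxation time is 1/(1 - max Re λ(J)), and making the random coupling J more symmetric pushes its leading eigenvalue toward the stability line, lengthening that timescale.

```python
import numpy as np

def coupling_matrix(n, g, eta, rng):
    """Random coupling with symmetry parameter eta in [0, 1]:
    eta=0 is fully asymmetric, eta=1 fully symmetric; entries are
    normalized so each has variance g**2 / n."""
    a = rng.standard_normal((n, n))
    j = a + eta * a.T
    return g * j / np.sqrt(n * (1.0 + eta ** 2))

def relaxation_time(j):
    """Slowest decay time of dx/dt = -x + J x, i.e. 1 / (1 - max Re eig),
    valid while the network is stable (all Re eig < 1)."""
    return 1.0 / (1.0 - np.linalg.eigvals(j).real.max())

rng = np.random.default_rng(0)
n, g = 400, 0.4   # weak-coupling regime: the spectrum stays left of 1
tau_asym = relaxation_time(coupling_matrix(n, g, 0.0, rng))
tau_sym = relaxation_time(coupling_matrix(n, g, 1.0, rng))
print(tau_asym, tau_sym)
```

For g = 0.4 the asymmetric spectrum (circular law, radius g) gives a relaxation time near 1/(1-g), while the symmetric spectrum (semicircle, radius 2g) gives roughly 1/(1-2g), illustrating the slowdown without any fine-tuning of the coupling strength.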
11:15 - 11:50 Ruben Moreno-Bote: Causal Inference in Spiking Networks
While the brain uses spiking neurons for communication, theoretical research on brain computations has mostly focused on non-spiking networks. The nature of spike-based algorithms that achieve complex computations, such as object probabilistic inference, is largely unknown. Here we demonstrate that a family of high-dimensional quadratic optimization problems with non-negativity constraints can be solved exactly and efficiently by a network of spiking neurons. The network infers the set of most likely causes from an observation using explaining away, which is dynamically implemented by spike-based, tuned inhibition.
(TCPL 201)
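As a point of comparison for the abstract above, the optimization the network is said to solve, inferring non-negative causes by explaining away, can be written as a non-negative least-squares program; the following is a conventional (non-spiking) solver sketch, with a made-up dictionary and observation for illustration only:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
# Hypothetical generative model: observation = D @ causes, with causes >= 0.
D = np.abs(rng.standard_normal((30, 10)))   # dictionary of candidate causes
true_causes = np.zeros(10)
true_causes[[2, 7]] = [1.5, 0.8]            # two active causes
obs = D @ true_causes

# "Explaining away": jointly infer the most likely non-negative causes,
# min ||D c - obs||^2 s.t. c >= 0, rather than scoring each cause alone.
causes, residual = nnls(D, obs)
print(np.round(causes, 3), residual)
```

On noiseless data the solver recovers exactly the two active causes; the talk's point is that tuned, spike-based inhibition can implement the same competitive explaining-away dynamics.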
11:50 - 13:00 Lunch (Vistas Dining Room)
13:00 - 14:00 Guided Tour of The Banff Centre
Meet in the Corbett Hall 2nd floor Lounge for a guided tour of The Banff Centre campus.
(Corbett Hall Lounge (CH 2110))
14:00 - 14:20 Group Photo
Meet in foyer of TCPL to participate in the BIRS group photo. Please don't be late, or you will not be in the official group photo! The photograph will be taken outdoors so a jacket might be required.
(TCPL Foyer)
14:20 - 14:40 Simon Stolarczyk: Optimal decision making in social networks
Humans and other animals integrate information across modalities and across time to perform simple tasks nearly optimally. However, it is unclear whether humans can optimally integrate information in the presence of redundancies. For instance, different modalities, or different agents in a social network, can transmit information received from the same or related sources. What computations need to be performed to combine all incoming information while taking into account such redundancies? Moreover, if information propagates through a larger network, does locally optimal inference at each node permit optimal inference of all available information downstream? To address these questions we study a simple Bayesian network model for optimal inference. We first investigate feedforward networks where nodes (agents) in the first layer estimate a single parameter drawn from a Gaussian distribution. The agents pass their beliefs about these estimates on to nodes in the next layer where they are optimally integrated, accounting for redundancies. The information is then propagated analogously across other layers until it reaches a final observer. We give a simple criterion for when the final estimate is nonoptimal, showing that redundancies can significantly impact performance even when information is integrated locally optimally by every agent. This gives us a benchmark to compare to the case when observers do not account for such correlations. We also show that when connections between layers are random, the probability that the final observer can perform optimal inference approaches 1 if intervening layers contain more nodes than the first. We also examine other factors in the network structure that lead to globally suboptimal inference, and show how the process compares to the case of parameters that follow non-Gaussian distributions, and how information propagates through recurrent networks.
This work has the potential to account for how optimal individual performance can be detrimental for group intelligence.
(TCPL 201)
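A small sketch of the kind of redundancy the abstract above describes (illustrative numbers, not from the talk): when two agents' reports share a common upstream source, a downstream observer must combine them by precision weighting, not simple averaging, to stay optimal.

```python
import numpy as np

def combine(sigma):
    """Minimum-variance unbiased linear combination of unbiased estimates
    x_i = s + n_i with noise covariance sigma: weights w proportional to
    Sigma^{-1} 1, estimator variance 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(sigma.shape[0])
    prec = np.linalg.solve(sigma, ones)   # Sigma^{-1} 1
    w = prec / prec.sum()
    var = 1.0 / (ones @ prec)
    return w, var

# Three agents; agents 0 and 1 share a common upstream source, so their
# noise is correlated (a redundancy the downstream observer must discount).
sigma = np.array([[1.0, 0.8, 0.0],
                  [0.8, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
w_opt, var_opt = combine(sigma)
var_naive = np.ones(3) @ sigma @ np.ones(3) / 9.0   # simple averaging
print(w_opt, var_opt, var_naive)
```

The optimal observer down-weights the redundant pair relative to the independent agent, and its error variance is strictly below that of naive averaging, the gap the abstract's benchmark quantifies.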
14:40 - 15:00 Aubrey Thompson: Relating spontaneous dynamics and stimulus coding in competitive networks
Understanding the relation between spontaneously active and stimulus-evoked cortical dynamics is a recent challenge in systems neuroscience. Recordings across several cortices show that spike trains are highly variable during spontaneous conditions, and that this variability is promptly reduced when a stimulus drives an evoked response. Networks of spiking neuron models with clustered excitatory architecture capture this key feature of cortical dynamics. In particular, clusters show stochastic transitions between periods of low and high firing rates, providing a mechanism for slow cortical variability that is operative in spontaneous states. We explore a simple Markov neural model with clustered architecture, where spontaneous and evoked stochastic dynamics can be examined more carefully. We model the activity of each cluster in the network as a birth-death Markov process, with positive self feedback and inhibitory cluster-cluster competition. Our Markov model allows a calculation of the expected transition times between low and high activity states, yielding an estimate of the invariant density of cluster activity. Using our theory, we explore how the strength of inhibitory connections between the clusters sets the maximum likelihood for the number of active clusters in the network during spontaneous conditions. We show that when the number of stimulated clusters matches the most-likely number of spontaneously active clusters then the mutual information between stimulus and response is maximized. This then gives a direct connection between the statistics of spontaneous activity and the coding capacity of evoked responses. Further, our work relates two disparate aspects of cortical computation: lateral inhibition and stimulus coding.
(TCPL 201)
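The single-cluster birth-death description above can be sketched as follows (the rate functions are assumed for illustration; the talk's model couples many clusters through inhibition). For a one-dimensional birth-death chain, detailed balance gives the invariant density in closed form, and with strong enough self-excitation the density is bimodal, with a quiescent and an active state.

```python
import numpy as np

K = 60                    # maximum number of active units in a cluster
def birth(k):             # assumed form: weak baseline drive plus
    return 0.3 + 40.0 * k**6 / (k**6 + 20.0**6)   # steep self-excitation
def death(k):             # each active unit switches off at unit rate
    return float(k)

# Detailed balance for a birth-death chain: pi[k+1]/pi[k] = birth(k)/death(k+1).
logpi = np.zeros(K + 1)
for k in range(K):
    logpi[k + 1] = logpi[k] + np.log(birth(k)) - np.log(death(k + 1))
pi = np.exp(logpi - logpi.max())
pi /= pi.sum()
# pi now has two modes (near k=0 and near k~35-40) separated by a trough;
# stochastic hopping between them sets the slow spontaneous timescale.
print(pi.argmax())
```

From the same two rate functions one can also compute mean first-passage times between the two wells, which is the quantity the abstract uses to link spontaneous statistics to evoked coding.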
15:00 - 15:30 Coffee Break (TCPL Foyer)
15:30 - 16:05 Artur Luczak: Neuronal activity packets as basic units of neuronal code
Neurons are active in a coordinated fashion; for example, an onset response to sensory stimuli usually evokes a ~50-200 ms long burst of population activity. Recently it has been shown that such 'packets' of neuronal activity are composed of stereotypical sequential spiking patterns. The exact timing and number of spikes within packets convey information about the stimuli. Here we present evidence that packets can be a good candidate for the basic building blocks, or 'words', of the neuronal code, and can explain the mechanisms underlying multiple recent observations about neuronal coding, such as multiplexing and LFP phase coding, as well as provide a possible connection between memory preplay and replay. This presentation will summarize and expand on the opinion paper: Luczak et al. (2015, Nature Rev. Neurosci.; doi:10.1038/nrn4026)
(TCPL 201)
16:05 - 16:40 Katherine Newhall: Variability in Network Dynamics
Mathematical models of neuronal network dynamics, such as randomly connected integrate-and-fire model neurons, typically create homogeneous dynamics in the sense that a single neuron in the network is representative of the ensemble behavior, and dynamics in time are statistically repeatable. I will discuss work in progress on experimental data in which neither is true, looking at a statistical method to answer biological questions, and pondering the existence of simple model network motifs capable of producing such variability.
(TCPL 201)
16:40 - 17:15 Joel Zylberberg: Correlated stochastic resonance
Even when repeatedly presented with the same stimulus, sensory neurons show high levels of inter-trial variability. Similarly high levels of variability are observed throughout the brain, leading us to wonder how variability affects the function of neural circuits. On the one hand, prior work on “stochastic resonance” (SR) has shown that random fluctuations can enhance information transmission by nonlinear circuit elements like neurons. Specifically, the thresholding inherent in spike generation means that much of the information contained within the membrane potential can fail to propagate downstream. Random membrane potential fluctuations “soften” spike thresholds, allowing more information to survive the spike-generation process. This phenomenon reflects a tradeoff between the positive effects of threshold-softening and the negative effects of corrupting signals by noise. While membrane potential fluctuations are often correlated between neurons in vivo, the role of this collective behavior in SR is largely unknown. Concurrently with the SR studies, other work investigated the impact of correlations on signal encoding by noisy non-spiking populations. For these non-spiking models, coding performance is highest when the noise is absent altogether: the noise is always a hindrance to the population codes. Consequently, those studies cannot reveal conditions under which collective variability enhances information coding. Despite these limitations, the prior studies of non-spiking models show that, depending on the patterns of inter-neural correlation, correlations can mitigate corruption of signals by noise. Combining ideas about correlations and about SR, my talk will show that correlated membrane potential fluctuations can soften neural spiking thresholds without substantially corrupting the underlying signals with noise, thereby significantly enhancing spiking neural information coding.
(TCPL 201)
17:15 - 19:30 Dinner (Vistas Dining Room)
Tuesday, December 8
07:00 - 09:00 Breakfast (Vistas Dining Room)
09:00 - 09:35 Janet Best: Homeostasis on Networks
Homeostasis is important for many aspects of brain function, and the possibility for homeostasis depends on network architecture.
(TCPL 201)
09:35 - 10:10 Jochen Triesch: Where’s the noise? Key features of spontaneous activity and neural variability arise through learning in a deterministic network
Even in the absence of sensory stimulation the brain is spontaneously active. This background “noise” seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network’s spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network’s behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. 
We conclude that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms.
(TCPL 201)
10:10 - 10:40 Coffee Break (TCPL Foyer)
10:40 - 11:40 Emre Aksay: Network architectures underlying persistent neural activity
Persistent neural activity is important for motor control, short-term memory, and decision making. It is unclear what network processing mechanisms and architectures support this brain dynamic. Here we present some of our recent efforts to address this question in the oculomotor integrator, a model system for studying persistent neural activity. We determine candidate architectures by fitting dynamical network models to population-wide recordings with additional constraints from experiments on cellular morphology, intrinsic excitability, and localized perturbations. We test candidate architectures by imaging activity in the dendritic arbor of integrator neurons during persistent firing. These efforts suggest architectures of higher rank than previously assumed. Such architectures may allow persistent activity networks to act as hubs that perform numerous input-output transformations.
(TCPL 201)
11:40 - 12:00 Aaron Voelker: Computing with temporal representations using recurrently connected populations of spiking neurons
The modeling of neural systems often involves representing the temporal structure of a dynamic stimulus. We extend the methods of the Neural Engineering Framework (NEF) to generate recurrently connected populations of spiking neurons that compute functions across the history of a time-varying signal, in a biologically plausible neural network. To demonstrate the method, we propose a novel construction to approximate a pure delay, and use that approximation to build a network that represents a finite history (sliding window) of its input. Specifically, we solve for the state-space representation of a pure time-delay filter using Padé approximants, and then map this system onto the dynamics of a recurrently connected population. The construction is robust to noisy inputs over a range of frequencies, and can be used with a variety of neuron models, including leaky integrate-and-fire, rectified linear, and Izhikevich neurons. Furthermore, we extend the approach to handle various models of the post-synaptic current (PSC), and characterize the effects of the PSC model on overall dynamics. Finally, we show that each delay may be modulated by an external input to scale the spacing of the sliding window on-the-fly. We demonstrate this by transforming the sliding window to compute filters that are linear (e.g., discrete Fourier transform) and nonlinear (e.g., mean squared power), with controllable frequency.
(TCPL 201)
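The first step of the construction above, a Padé approximant of the pure delay exp(-θs) converted to a state-space system, can be sketched with generic scipy tools (this is not the NEF implementation; θ and the order are illustrative):

```python
import math
import numpy as np
from scipy.interpolate import pade
from scipy.signal import tf2ss

theta = 0.1   # delay length in seconds
order = 3     # [3/3] Pade approximant

# Taylor coefficients (ascending) of the delay transfer function exp(-theta*s).
taylor = [(-theta) ** k / math.factorial(k) for k in range(2 * order + 1)]
p, q = pade(taylor, order)   # numerator and denominator polynomials

# The rational approximation p(s)/q(s) has a finite-dimensional state-space
# realization dx/dt = A x + B u, y = C x + D u, whose dynamics can then be
# mapped onto a recurrently connected spiking population.
A, B, C, D = tf2ss(p.coeffs, q.coeffs)
print(A.shape)
```

Near s = 0 the approximant matches the true delay closely; increasing the order widens the accurate frequency band, which is what makes a sliding-window representation of the input history possible.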
12:00 - 13:30 Lunch (Vistas Dining Room)
13:30 - 14:05 Nicolas Brunel: Statistics of connectivity optimizing information storage in recurrent networks
The rules of information storage in cortical circuits are the subject of ongoing debate. Two scenarios have been proposed by theorists: In the first scenario, specific patterns of activity representing external stimuli become fixed-point attractors of the dynamics of the network. In the second, the network stores sequences of patterns of network activity so that when the first pattern is presented the network retrieves the whole sequence. In both scenarios, the right dynamics are achieved thanks to appropriate changes in network connectivity. I will describe how methods from statistical physics can be used to investigate information storage capacity of such networks, and the statistical properties of network connectivity that optimize information storage (distribution of synaptic weights, probabilities of motifs, degree distributions, etc) in both scenarios. Finally, I will compare the theoretical results with available data.
(TCPL 201)
14:05 - 14:40 Kathryn Hedrick: Megamap: Flexible representation of a large space embedded with nonspatial information by a hippocampal attractor network
The problem of how the hippocampus encodes both spatial and nonspatial information at the cellular network level remains largely unresolved. Spatial memory is widely modeled through the theoretical framework of attractor networks, but standard computational models can only represent spaces that are much smaller than the natural habitat of an animal. We propose that hippocampal networks are built upon a basic unit called a megamap, or a cognitive attractor map in which place cells are flexibly recombined to represent a large space. Its inherent flexibility gives the megamap a huge representational capacity and enables the hippocampus to simultaneously represent multiple learned memories and naturally carry nonspatial information at no additional cost. On the other hand, the megamap is dynamically stable, as the underlying network of place cells robustly encodes any location in a large environment given a weak or incomplete input signal from the upstream entorhinal cortex. Our results suggest a general computational strategy by which a hippocampal network enjoys the stability of attractor dynamics without sacrificing the flexibility needed to represent a complex, changing world.
(TCPL 201)
14:40 - 15:10 Coffee Break (TCPL Foyer)
15:10 - 15:45 Tatyana Sharpee: Optimal cell type composition in recurrent and feedforward networks
I will give an update on how we extended the work described in Aljadeff, Stern, & Sharpee PRL 2015 and Kastner, Baccus, & Sharpee PNAS 2015.
(TCPL 201)
15:45 - 16:20 Julijana Gjorgjieva: Optimal sensory coding by neuronal populations
In many sensory systems the neural signal splits into multiple parallel pathways, suggesting an evolutionary fitness benefit of a very general nature. For example, in the mammalian retina, ~20 types of retinal ganglion cells transmit information about the visual scene to the brain. What drove the evolution of such an early and elaborate pathway split remains elusive. We test the hypothesis that pathway splitting enables more efficient encoding of sensory stimuli. We focus on a specific prominent instance of sensory splitting: the emergence of ON and OFF pathways that code for stimulus increments and decrements, respectively. We developed a theory of optimal coding for a population of sensory ON and OFF neurons and computed the coding efficiency for different mixtures of ON and OFF cells. The optimal ON-OFF ratio in the population can be related to the statistics of natural stimuli, resulting in a set of predictions for the optimal response properties of the neurons.
(TCPL 201)
16:20 - 17:00 Michael Metzen: The role of neural correlations in information coding
The role of correlated neural activity in neural coding remains controversial. Here we show that correlated neural activity can provide information about particular stimulus features independently of single neuron activity, using the weakly electric fish Apteronotus leptorhynchus as an animal model. These fish generate an electric organ discharge (EOD) surrounding their body, the amplitude of which is encoded in the discharge of electroreceptors (P-units) that synapse onto pyramidal neurons in the hindbrain electrosensory lateral line lobe (ELL), which in turn synapse onto neurons within the midbrain Torus semicircularis (TS). When two conspecifics come into close proximity, each fish experiences a sinusoidal amplitude modulation (i.e. beat) with a frequency that is equal to the difference between both EOD frequencies. The beat can be further modulated due to movements of the animals, thus creating an envelope. Furthermore, these fish can generate communication signals or chirps (i.e. electrosensory “objects”) that consist of transient increases in EOD frequency and always occur simultaneously with the beat under natural conditions. The pairwise correlation coefficient, but not single neuron spiking activity: 1) can reliably be used to predict the stimulus envelope and 2) allows for the emergence of a feature-invariant representation of natural communication stimuli that is actually exploited by the electrosensory system. Moreover, information carried by correlated neural activity at the periphery is decoded and further refined in downstream brain areas. Finally, this gives rise to similar behavioral responses to stimulus waveforms associated with a given electrosensory object. As such, correlated activity codes for stimulus attributes that are distinct from those coded by firing rate, and provides a novel role for neural variability. Furthermore, correlated neural activity is invariant to identity-preserving transformations of natural stimuli.
This reveals how a sensory system exploits this fact in order to implement the emergence and refinement of invariant neural representations of natural stimuli and how these mediate perception and behavior. The associated neural circuits are generic and thus likely to be found across systems and species.
(TCPL 201)
17:00 - 19:30 Dinner (Vistas Dining Room)
Wednesday, December 9
07:00 - 09:00 Breakfast (Vistas Dining Room)
09:00 - 17:30 Informal meetings and free time (various)
12:00 - 13:30 Lunch (Vistas Dining Room)
17:30 - 19:30 Dinner (Vistas Dining Room)
19:00 - 19:35 John Beggs: High-degree neurons feed cortical computations
Recent results have shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, new theoretical work makes it possible to quantify how neurons modify information from the connections they receive. These developments allow us to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). We used a high-density 512 electrode array to record spontaneous spiking activity from cortical slice cultures and transfer entropy to construct a network of information flow. We identified generic computations by the synergy produced wherever two information streams converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to which a neuron modifies incoming information streams depends on its topological location in the surrounding functional network. Co-authors: Nick Timme and Sunny Nigam
(TCPL 201)
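The transfer-entropy step described above can be sketched for binarized spike trains as follows (a plug-in estimator with a single time bin of history and synthetic data; the study used longer histories and significance testing against surrogates):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X -> Y) in bits, one bin of history:
    how much x[t] improves prediction of y[t+1] beyond y[t] alone."""
    n = len(x) - 1
    triple = Counter(zip(y[1:], y[:-1], x[:-1]))
    pair_yx = Counter(zip(y[:-1], x[:-1]))
    pair_yy = Counter(zip(y[1:], y[:-1]))
    single = Counter(y[:-1])
    te = 0.0
    for (y1, y0, x0), c in triple.items():
        p_full = c / pair_yx[(y0, x0)]            # p(y1 | y0, x0)
        p_self = pair_yy[(y1, y0)] / single[y0]   # p(y1 | y0)
        te += (c / n) * np.log2(p_full / p_self)
    return te

rng = np.random.default_rng(2)
x = (rng.random(20000) < 0.5).astype(int)       # driving spike train
flip = rng.random(20000) < 0.1
y = np.empty_like(x)
y[0] = 0
y[1:] = np.where(flip[1:], 1 - x[:-1], x[:-1])  # y follows x with 10% error
print(transfer_entropy(x, y), transfer_entropy(y, x))
```

Because y copies x with one bin of lag, TE(X→Y) is large (about 1 minus the binary entropy of the 10% error rate) while TE(Y→X) is near zero, so thresholding pairwise TE values yields a directed information-flow network of the kind used to define in- and out-degrees.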
19:35 - 20:10 Alex Reyes: Homeostatic control of neuronal firing rate and correlation: scaling of synaptic strength with network size
Features of sensory input are represented as the spatiotemporal activities of neuronal populations. These network dynamics depend on the balance of excitatory (E) and inhibitory (I) drives to individual neurons. Maintaining balance in the face of a continuously changing nervous system is vital for preserving the response properties of neurons and preventing neuropathologies. While homeostatic processes are in place to maintain excitation levels, the conditions for maintaining stable responses remain unclear. Here, we used a culture preparation to systematically vary the density of the network. Using optogenetic techniques to stimulate individual neurons in the network with high spatial and temporal resolution, we were able to systematically vary the number and correlation of external inputs. We found that the average firing rate and the correlation structure of synaptic inputs are invariant with network size. Finally, we used paired recordings to measure the synaptic strengths and connection probability between excitatory (E) and inhibitory (I) neurons. We confirmed experimentally a long-standing theoretical assumption that synaptic strength scales with the number of connections per neuron ($N$) closer to $N^{-1/2}$ than to $N^{-1}$.
(TCPL 201)
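Why the $N^{-1/2}$ scaling preserves network statistics can be illustrated with a back-of-the-envelope calculation (connection probability, weight constant, and rate variance are hypothetical): with K = pN inputs of weight J, independent rate fluctuations add in variance as K·J², so J ~ N^{-1/2} keeps input fluctuations O(1) while J ~ N^{-1} makes them vanish as the network grows.

```python
import numpy as np

def input_variance(N, scaling, p=0.1, j0=1.0, rate_var=1.0):
    """Variance of the summed synaptic input from K = p*N independent
    presynaptic sources, each with weight J(N) and rate variance rate_var."""
    K = p * N
    J = j0 / np.sqrt(N) if scaling == "sqrt" else j0 / N
    return K * J**2 * rate_var

for N in (100, 1000, 10000):
    print(N, input_variance(N, "sqrt"), input_variance(N, "lin"))
```

Under the square-root scaling the variance equals p·j0² for every N (here 0.1), consistent with the observed invariance of firing rates and input correlations across culture densities; under the 1/N scaling fluctuations decay as 1/N and the balanced, fluctuation-driven regime is lost.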
20:10 - 20:45 Woodrow Shew: Functional implications of phase transitions in the cerebral cortex
A long-standing hypothesis at the nexus of neuroscience, physics, and network science posits that a network of neurons may be tuned through a phase transition. Originally, this idea was motivated by intriguing analogies between the brain and physical systems that undergo phase transitions, including Ising and percolation models. More recently, this idea has graduated from an appealing analogy to an experimentally supported and biophysically important fact. Here I will review recent experiments and models, which have now established not only that phase transitions can occur in cerebral cortex, but also that neural information processing crucially depends on what phase the cortex is in. Cortical phase can be controlled by myriad biophysical mechanisms, including tuning the balance of excitation and inhibition and adaptation to sensory input. Importantly, multiple aspects of information processing, such as sensory dynamic range and discrimination, are optimized when the network operates nearby (but not exactly at) the critical point of a phase transition. These studies suggest that by operating in the vicinity of criticality the cerebral cortex may be tuned to accommodate changing information processing needs depending on behavioral context.
(TCPL 201)
Thursday, December 10
07:00 - 09:00 Breakfast (Vistas Dining Room)
09:00 - 09:35 Kathleen Cullen: Neural correlates of sensory prediction errors during voluntary self-motion: evidence for internal models in the cerebellum.
The computation of sensory prediction errors is an important theoretical concept in motor control. In this context, the cerebellum is generally considered the site of a forward model that predicts the expected sensory consequences of self-generated action. Changes in the motor apparatus and/or environment will cause a mismatch between the cerebellum's prediction and the actual resulting sensory stimulation. This mismatch, the 'sensory prediction error', is thought to be vital for updating both the forward model and the motor program during motor learning, ensuring that sensory-motor pathways remain calibrated. In addition, in our daily activities, the computation of sensory prediction errors is required to discriminate externally applied from self-generated inputs. However, direct proof for the existence of this comparison had been lacking. We took advantage of a relatively simple sensory-motor pathway with a well-described organization to gain insight into the computations that drive motor learning. The most medial of the deep cerebellar nuclei, the fastigial nucleus, constitutes a major output target of the cerebellar cortex and in turn sends strong descending projections that ensure accurate posture and balance. We carried out a trial-by-trial analysis of these cerebellar neurons during the execution and adaptation of voluntary head movements and found that neuronal sensitivities dynamically tracked the comparison of predictive and feedback signals. When the relationship between the motor command and the resultant movement was altered, neurons robustly responded to sensory input as if the movement were externally generated. Neuronal sensitivities then declined with the same time course as the concurrent behavioral learning. These findings demonstrate the output of an elegant computation in which rapid updating of an internal model enables the motor system to sense, and then learn to expect, unexpected sensory inputs.
In turn, this enables both (i) the rapid suppression of descending reflexive commands during voluntary movements and (ii) the rapid updating of motor programs in the face of changes to either the motor apparatus or the external environment.
(TCPL 201)
09:35 - 10:10 Stefan Mihalas: Cortical circuits implementing optimal cue integration
Neurons in the primary visual cortex (V1) predominantly respond to a patch of the visual input, their classical receptive field. These responses are modulated by the visual input in the surround. This reflects the fact that features in natural scenes do not occur in isolation: lines and surfaces are generally continuous, so a visual patch carries information about its surround. This information is assumed to be passed to a neuron in V1 by neighboring neurons via lateral connections. The relation between visually evoked responses and lateral connectivity has recently been measured in mouse V1. In this study we combine these three topics: natural scene statistics, mouse V1 neuron responses, and their connectivity. We are interested in the question: given a set of natural scene statistics, what lateral connections would optimally integrate the cues from the classical receptive field with those from the surround? First, we assumed a neural code: the firing rate of a neuron maps bijectively to the probability that the feature it represents is present in the image. We generated a database of such features by constructing a parameterized set of models from V1 electrophysiological responses. We used the Berkeley Segmentation Dataset to compute the probabilities of co-occurrence of these features, and computed the relation between feature co-occurrence probabilities and the synaptic weight that optimally integrates these features. The relation between evoked responses and connectivity that leads to optimal cue integration is qualitatively similar to the measured one, but several additional predictions are made. We hypothesize that this computation, optimal cue integration, is a general property of cortical circuits, and that the rules constructed for mouse V1 generalize to other areas and species.
(TCPL 201)
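Under one common formalization of the code described above (a log-probability rate code, assumed here rather than taken from the talk), the lateral weight that optimally combines a center feature with a surround feature reduces to the pointwise mutual information of their co-occurrence. A toy sketch with invented joint statistics:

```python
import numpy as np

# toy joint statistics of two binary features (center f1, surround f2)
p_joint = np.array([[0.55, 0.10],   # P(f1=0,f2=0), P(f1=0,f2=1)
                    [0.10, 0.25]])  # P(f1=1,f2=0), P(f1=1,f2=1)
p1 = p_joint.sum(axis=1)            # marginal of the center feature
p2 = p_joint.sum(axis=0)            # marginal of the surround feature

# candidate optimal lateral weight: pointwise mutual information of the
# two features both being present
w = np.log(p_joint[1, 1] / (p1[1] * p2[1]))
print(f"lateral weight: {w:.3f}")
```

Here the weight is positive because the features co-occur more often than chance, so surround evidence excites the center neuron; independent features would give a weight of zero and anti-correlated features a suppressive (negative) weight.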
10:10 - 10:40 Coffee Break (TCPL Foyer)
10:40 - 11:15 Valentin Dragoi: Optical stimulation of network activity in visual cortex changes perceptual performance
Sensory detection is a basic perceptual experience that critically relies on accurate stimulus encoding in primary sensory cortex. However, the responses of neurons in early cortical areas are known to be only poorly correlated with perceptual reports, and hence how neurons in these areas contribute to perceptual decisions remains unclear. Here we show that optogenetic stimulation of distinct populations of excitatory neurons in V1 of macaque monkey can enhance the detection of an oriented stimulus when the stimulated population is tuned to the stimulus orientation. Activating populations of neurons untuned to the stimulus, however, elicited a large increase in neuronal firing rates but did not impact behavioral performance. By examining how optical stimulation influences the information encoded in population activity, we found that the light-induced improvement in behavioral performance was accompanied by a reduction in noise correlations and an increase in the population signal-to-noise ratio. Our results demonstrate that causal manipulation of the responses of an informative population of excitatory neurons in V1 can bias the animal's behavioral choice.
(TCPL 201)
11:15 - 11:35 Kameron Harris: Role and limits of inhibition in an excitatory burst generator
The pre-Botzinger complex (preBot) is now recognized as the essential core of respiratory rhythm generation, where it generates the inspiratory phase. Rhythmogenesis occurs through network synchronization. Using a biophysical model of the entire preBot, we ask: What is the role of inhibitory cells in the preBot? How does changing the sparsity of connections and synaptic strengths affect the resulting rhythm? These modeling results are compared to in vitro slice experiments in which we progressively block inhibitory and excitatory synaptic transmission. We find that too much sparsity or inhibition disrupts rhythm generation, yet highly connected networks without inhibition also produce non-biological rhythms. Our slice experiments suggest that the real preBot lies within the partially synchronized region of network parameter space. As inhibitory neurons are added to the network, some cells fire out-of-phase with the main population rhythm, which offers an explanation for the out-of-phase cells observed in preBot. However, it is not possible to produce a two-phase population rhythm in our model without adding further structure to the network. The preBot and Botzinger complexes therefore require structured networks in order to produce alternating inspiratory and expiratory rhythms. Finally, we present preliminary stages of a spin model for oscillator phases which reproduces the qualitative features of the synchronization transition.
(TCPL 201)
11:35 - 11:55 Jeff Dunworth: Finite size effects and rare events in balanced cortical networks with plastic synapses
Cortical neuron spiking activity is broadly classified as temporally irregular and asynchronous. Model networks with a balance between large recurrent excitation and inhibition capture these two key features, and are a popular framework relating circuit structure and network dynamics. Balanced networks stabilize the asynchronous state through reciprocal tracking by the inhibitory and excitatory population activity, leading to a cancellation of the total current correlations driving cells within the network. While asynchronous network dynamics are often a good approximation of neural activity, many cortical datasets nevertheless contain brief epochs in which the network dynamics are transiently synchronized (Buzsáki and Mizuseki, 2014; Tan et al., 2014). We analyze paired whole-cell voltage-clamp recordings from spontaneously active neurons in mouse auditory cortex slices (Graupner and Reyes, 2013) showing a network where correlated excitation and inhibition effectively cancel, except for intermittent periods when the network shows a macroscopic synchronous event. These data suggest that while the core mechanics of balanced activity are important, we require new theories capturing these brief but powerful periods when balance fails. Traditional balanced networks with linear firing rate dynamics have a single attractor and fail to exhibit macroscopic synchronous events. Mongillo et al. (2012) showed that balanced networks with short-term synaptic plasticity can depart from strict linear dynamics through the emergence of multiple attractors. We extend this model by incorporating finite network size, introducing strong nonlinearities in the firing rate dynamics and allowing finite-size-induced noise to elicit large-scale, yet infrequent, synchronous events.
We carry out a principled finite size expansion of an associated Markovian birth-death process and identify core requirements for system size and network plasticity to capture the transient synchronous activity observed in our experimental data set. Our model properly mediates between the asynchrony of balanced activity and the tendency for strong recurrence to promote macroscopic population dynamics.
(TCPL 201)
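The finite-size ingredient of the abstract above can be caricatured in a one-population Langevin rate model whose noise amplitude scales as $1/\sqrt{N}$. This is a toy construction with invented parameters, not the authors' birth-death expansion; it shows only the scaling itself: shrinking $N$ inflates fluctuations around the low-activity fixed point, which is what permits rare noise-driven excursions when the gain is nonlinear.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid_gain(r):
    """Sigmoidal recurrent gain; together with leak and drive it admits
    a low-activity and a high-activity fixed point."""
    return 2.0 / (1.0 + np.exp(-8.0 * (r - 1.0)))

def simulate_rate(N, T=50_000, dt=0.01, I=0.3):
    """One-population rate model with finite-size (1/sqrt(N)) noise."""
    r = I
    out = np.empty(T)
    for t in range(T):
        drift = -r + I + sigmoid_gain(r)
        noise = np.sqrt(max(r, 1e-9) * dt / N) * rng.normal()
        r = max(r + drift * dt + noise, 0.0)
        out[t] = r
    return out

small_net = simulate_rate(N=20)     # large fluctuations around the fixed point
large_net = simulate_rate(N=2000)   # nearly deterministic
print(small_net.std(), large_net.std())
```

The fluctuation standard deviation shrinks roughly as $1/\sqrt{N}$, so in the large-$N$ limit the rare-event rate vanishes and only the mean-field attractor survives.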
11:55 - 13:30 Lunch (Vistas Dining Room)
12:31 - 13:05 Ashok Litwin-Kumar: Learning associations with both pure and randomly mixed representations
To make decisions and guide actions based on sensory information, neurons that mediate behavior must learn to respond appropriately to combinations of previously experienced stimuli and/or contexts. Many models of learning assume this is accomplished by a feedforward hierarchy of layers of neurons leading from input to desired output. However, sensory information is often relayed by multiple convergent and divergent pathways, each of which may have different representations of the input. We study the ability of output neurons that receive both pure stimulus information and randomly mixed stimulus/context information via an indirect pathway to perform associative learning. For realistic input-output mappings, the optimal pattern of connectivity is an intermediate one that includes input from both pure and mixed representations converging on the output layer. We also discuss the optimal level of mixing to maximize behavioral performance, finding, surprisingly, that sparse connectivity improves performance compared to the fully connected case. Our results shed light on the principles governing learning from random representations, a strategy employed in many areas of the brain.
(TCPL 201)
14:07 - 14:39 Zachary Kilpatrick: Learning the volatility of a dynamic environment
Humans and other animals make perceptual decisions based on noisy sensory input. Recent studies focus on ecologically realistic situations in which the correct choice or the informative features of the stimulus change dynamically. Importantly, optimal evidence accumulation in changing environments requires discounting prior evidence at a rate determined by environmental volatility. To explain these observations, we extend previous accumulator models of decision making to the case where the correct choice changes at an unknown rate. An ideal observer can optimally infer these transition rates and accumulate evidence to make the best decision. We also discuss a neural implementation for this inference process whereby Hebbian plasticity shapes connectivity between populations representing each choice. Coauthors: Adrian Radillo, Alan Veliz-Cuba, Kresimir Josic
(TCPL 201)
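The talk's ideal observer infers the transition rate itself; the sketch below makes the simplifying assumption that the hazard rate h is known, and applies the standard nonlinear discounting recursion for the log posterior odds in a two-state switching environment. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def discount(x, h):
    """Nonlinear discounting of the prior log odds x under hazard rate h."""
    return np.log((1 - h) * np.exp(x) + h) - np.log((1 - h) + h * np.exp(x))

def ideal_observer(obs, mu, sigma, h):
    """Log posterior odds for state +mu vs -mu from Gaussian observations,
    with the environment switching at a known rate h per step."""
    x, xs = 0.0, []
    for o in obs:
        llr = 2 * mu * o / sigma**2      # log-likelihood ratio of one sample
        x = llr + discount(x, h)         # accumulate, discounting old evidence
        xs.append(x)
    return np.array(xs)

# simulate a switching environment and decode it
h, mu, sigma, T = 0.02, 1.0, 1.5, 500
state, s = np.empty(T), 1.0
for t in range(T):
    if rng.random() < h:
        s = -s
    state[t] = s
obs = state * mu + sigma * rng.normal(size=T)
x = ideal_observer(obs, mu, sigma, h)
accuracy = np.mean(np.sign(x) == state)
print(f"decoding accuracy: {accuracy:.2f}")
```

The `discount` nonlinearity saturates the odds at about $\pm\log((1-h)/h)$, which is exactly the evidence-discounting set by environmental volatility that the abstract describes: a perfectly stable environment ($h \to 0$) recovers the classical unbounded accumulator.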
14:40 - 15:18 Stefano Fusi: Computational principles of synaptic plasticity
Memories are stored, retained, and recollected through complex, coupled processes operating on multiple timescales. To understand the computational principles behind these intricate networks of interactions, we construct a broad class of synaptic models that efficiently harnesses biological complexity to preserve numerous memories. The memory capacity scales almost linearly with the number of synapses, which is a substantial improvement over the square root scaling of previous models. This was achieved by combining multiple dynamical processes that initially store memories in fast variables and then progressively transfer them to slower variables. Importantly, the interactions between fast and slow variables are bidirectional. The proposed models are robust to parameter perturbations and can explain several properties of biological memory, including delayed expression of synaptic modifications, metaplasticity, and spacing effects.
(TCPL 201)
15:15 - 15:45 Coffee Break (TCPL Foyer)
15:46 - 16:22 Michael Buice: The Cortical Activity Map and the Neural Basis of Behavior
One of the fundamental questions in neuroscience is how sensory stimuli and behavior are represented in the neural activity of the cortex. Part of the mission of The Allen Institute for Brain Science is to provide resources to the scientific community to aid in answering such fundamental questions. We are preparing a large scale dataset called the Cortical Activity Map, a public scientific resource, which will provide neural responses in two-photon calcium imaging from large sets of simultaneously recorded cells to a diverse set of visual stimuli from awake, behaving mice in multiple cortical layers, cortical regions, and Cre-line defined cell types. This data set will allow for unprecedented access to population responses and provides a unique opportunity to explore the collective characteristics of neural dynamics. The visual stimuli for CAM include gratings, sparse noise, spatio-temporal noise, natural images, and natural movies. I will describe this project along with our plans to use these data to construct models and test theories of the mouse visual system. We feel that this dataset will be of particular importance for the computational neuroscience community as a tool for exploring many questions in neuroscience such as population coding, neural variability, and correlated activity.
(TCPL 201)
16:28 - 17:00 Cheng Ly: Firing Rate Statistics with Intrinsic and Network Heterogeneity
Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature of neural processing. Much remains unknown, however; in particular, how two sources of heterogeneity, network (synaptic) heterogeneity and intrinsic heterogeneity, interact to alter neural activity is unclear. In a recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of firing rates. The relationship between intrinsic and network heterogeneity can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. To characterize our observations analytically, we employ dimension reduction methods and asymptotic analysis to derive compact descriptions of the phenomena. These formulas show how the two forms of heterogeneity determine firing rate heterogeneity in various settings.
(TCPL 201)
17:01 - 17:20 Braden Brinkman: Crouching tiger, hidden neuron
A major obstacle to understanding population coding in the brain is that neural activity can only be monitored at limited spatial and temporal scales. Inferences about network properties important for coding, such as connectivity between neurons, are sensitive to "hidden units": unobserved neurons or other inputs that drive network activity. This problem matters not just for inference from data, but also for understanding how network properties shape spike train statistics as subsampled or pooled signals are transmitted through the brain. Recent computational efforts have been made to fit models to hidden units, but a fundamental theory of the effects of unobserved influences on the statistics of subsampled or pooled network activity remains elusive. Using methods from statistical physics, we have developed an analytical framework to begin answering questions about how "ground truth" properties of neuronal networks are distorted when an experimenter (or downstream neuron) can only observe coarsely resolved activity data. As a specific example, we study how the coupling filters of a generalized linear model fit to pooled spike train data change as a function of the fraction of spike trains pooled together.
(TCPL 201)
17:15 - 19:30 Dinner (Vistas Dining Room)
Friday, December 11
07:00 - 09:00 Breakfast (Vistas Dining Room)
09:00 - 11:30 Informal discussions (various)
11:30 - 12:00 Checkout by Noon
5-day workshop participants are welcome to use BIRS facilities (BIRS Coffee Lounge, TCPL and Reading Room) until 3 pm on Friday, although participants are still required to checkout of the guest rooms by 12 noon.
(Front Desk - Professional Development Centre)
12:00 - 13:30 Lunch from 11:30 to 13:30 (Vistas Dining Room)