Brain-scale simulations of cortical networks at cellular and synaptic resolution

By Bharath Chandra Talluri.

Computational neuroscientists attempt to understand the human brain by building computational models at various levels of abstraction. These models range from detailed low-level biophysical models to highly abstract system-level models. No single class of models can explain all the observed phenomena in neuroscience, and active debates often arise in the field about which class of models to use and whether different classes can be integrated. While detailed biophysical models attempt to capture morphology and structural connectivity, system-level models often ignore low-level biophysics and focus on explaining cognitive processes like decision-making, memory, etc.

This is the backdrop of a recent talk by Prof. Markus Diesmann from the Institute of Neuroscience and Medicine at the Research Center Jülich, given in the lecture series of the SFB 936 here in Hamburg. Prof. Diesmann stressed the importance of detailed biophysical simulations and the feasibility of whole-brain simulations with current computational resources. Specifically, he addressed the following questions: Are simulations at the scale of a whole human brain feasible using today's computational resources? Are full-scale simulations of the human brain really necessary? How can we build dynamical constraints into a high-dimensional spiking network model?

Prof. Diesmann considered a 1 mm³ cortical volume with 100,000 neurons and 10,000 synapses per neuron in a minimal model of a layered cortical network (Potjans & Diesmann, 2014), which is openly available. The network contains distinct populations of excitatory and inhibitory integrate-and-fire neurons with identical neural dynamics in every layer. The model ignores the spatial dependency of synaptic inputs to the neurons and assumes homogeneous lateral connectivity. The connectivity matrix, which gives the connection probabilities between pairs of neurons, can be obtained from in vivo anatomical and in vitro physiological data from animal studies. By taking into account the specific connections between layers and neurons, the network reproduces experimental observations such as the asynchronous-irregular firing of neurons across layers, the higher spike rates of inhibitory neurons, the distribution of spike rates across neurons, and the response of the network to transient inputs in different layers. This basic model was combined with known aspects of brain morphology to obtain a mesoscopic measure of the local field potential (LFP) observed in experimental data.
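To make the ingredients concrete, here is a toy NumPy sketch of the same model family: excitatory and inhibitory leaky integrate-and-fire populations with random, spatially homogeneous connectivity. All parameter values are illustrative assumptions; this is a vastly scaled-down caricature, not the actual Potjans-Diesmann microcircuit or its NEST implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy parameters, not those of the real microcircuit
N_E, N_I = 400, 100                      # excitatory / inhibitory neurons
N = N_E + N_I
p = 0.1                                  # homogeneous connection probability
J_E, J_I = 0.2, -1.0                     # synaptic weights (mV), inhibition stronger
tau_m, V_th, V_reset = 20.0, 20.0, 0.0   # membrane time constant (ms), threshold/reset (mV)
dt, T = 0.1, 200.0                       # time step and duration (ms)
I_ext = 22.0                             # constant external drive (mV), standing in
                                         # for input from outside the local circuit

# Random connectivity: W[i, j] = weight of the synapse from neuron j onto i,
# with the sign determined by the presynaptic neuron's type
conn = rng.random((N, N)) < p
weights = np.where(np.arange(N) < N_E, J_E, J_I)
W = conn * weights[None, :]

V = rng.uniform(0.0, V_th, N)            # random initial membrane potentials
spike_count = np.zeros(N)

for _ in range(int(T / dt)):
    spiking = V >= V_th                  # detect threshold crossings
    spike_count += spiking
    V[spiking] = V_reset                 # reset spiking neurons
    # Euler step of the LIF dynamics with instantaneous (delta) synapses
    V += dt / tau_m * (-V + I_ext) + W @ spiking

rate = spike_count / (T / 1000.0)        # spikes/s per neuron
print(f"mean E rate: {rate[:N_E].mean():.1f} Hz, mean I rate: {rate[N_E:].mean():.1f} Hz")
```

In the full model the connection probabilities differ per layer and population pair, which is exactly what lets it reproduce the layer-specific firing statistics described above.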

Despite the success of the model in simulating experimental data like LFPs, it is important to note that physiological neurons receive only part of their synaptic inputs from local networks while the rest of the input comes from neurons located in other columns and cortical areas. Since the model assumes an isolated cortical network, this spatially distant input was replaced with random inputs. As a result, the power spectrum from simulations of the model could not account for the slow oscillation components that are typically observed in mammalian cortex in vivo. Also, the model does not take into account the inhomogeneous connections with other brain areas. These shortcomings can be resolved by taking brain-scale connectivity into account, which leads me to the discussion of feasibility and necessity of simulating such a network.
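The missing slow oscillations follow from a basic property of the surrogate input: random, Poisson-like spikes have a flat power spectrum and therefore contribute no excess power at low frequencies. A small illustration (a toy calculation, not the model's actual spectrum):

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 0.001                     # 1 ms bins
rate = 10.0                    # spikes/s of the surrogate external drive
n_seg, seg_len = 200, 1024     # average periodograms over 200 ~1 s segments

# Bernoulli approximation of a Poisson spike train, one row per segment
spikes = rng.random((n_seg, seg_len)) < rate * dt
x = spikes - spikes.mean(axis=1, keepdims=True)           # remove the DC component
psd = (np.abs(np.fft.rfft(x, axis=1)) ** 2).mean(axis=0)  # averaged periodogram
freqs = np.fft.rfftfreq(seg_len, d=dt)

low = psd[(freqs > 0) & (freqs < 10)].mean()       # "slow oscillation" band
high = psd[(freqs >= 100) & (freqs < 300)].mean()  # reference high-frequency band
print(f"low/high power ratio: {low / high:.2f}")   # ~1: no excess slow power
```

A ratio near 1 means the spectrum is flat; structured long-range input from other cortical areas would instead push extra power into the low-frequency band.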

Simulating a brain-scale network with numbers of neurons and synapses comparable to those of the human brain becomes possible by making supercomputers accessible to neuroscience. This is discussed in detail in two recent papers from Prof. Diesmann and his colleagues (Helias et al., 2012; Kunkel et al., 2014). Petascale supercomputers like the JUGENE computer developed by IBM at Jülich and the K computer developed by Fujitsu at the RIKEN institute in Japan are being employed to simulate large-scale cortical networks. These machines can simulate randomly connected networks of up to 10⁹ neurons using ~660,000 cores, which means that even the most powerful supercomputers can currently simulate only about 1% of the whole human brain. However, a random network is the worst case for a large-scale simulation because it lacks modularity; brain networks, on the other hand, are far more organised. This feature of biological networks, coupled with growing computational power, gives hope that simulating a brain-scale network will be possible in the near future. Another issue concerning feasibility is the run-time of the simulated network: the random network in the above example took ~40 min of wall-clock time to simulate 1 s of biological activity. This clearly prevents supercomputers from simulating neural plasticity at the time-scale of actual cortical networks. The problem can be partly addressed by making greater use of multi-threading (a form of parallel processing) to simulate the network.
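The numbers in this paragraph can be put together in a short back-of-the-envelope calculation (the human-brain neuron count of ~10¹¹ is implied by the "1%" figure; the synapse count per neuron comes from the microcircuit model above):

```python
# Back-of-the-envelope scaling numbers from the talk
neurons_simulated = 1e9        # largest random network simulated to date
neurons_human_brain = 1e11     # implied by "1% of the whole human brain"
synapses_per_neuron = 1e4      # as in the 1 mm^3 microcircuit

fraction = neurons_simulated / neurons_human_brain
total_synapses = neurons_human_brain * synapses_per_neuron
slowdown = 40 * 60 / 1.0       # 40 min wall clock per 1 s of biological time

print(f"fraction of human brain simulated: {fraction:.0%}")           # 1%
print(f"synapses in a full-scale human model: {total_synapses:.0e}")  # 1e+15
print(f"real-time slowdown factor: {slowdown:.0f}x")                  # 2400x
```

The ~10¹⁵ synapses dominate the memory budget, and the slowdown factor of 2400 is what makes plasticity experiments, which unfold over hours of biological time, currently out of reach.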

This raises another important question that Prof. Diesmann and his colleagues address in their work: is it really necessary to simulate a cortical network at full scale? Simulating biological neural networks with downscaled networks that preserve key characteristics of neural connectivity, as is very often done in neuroscience, seriously limits the network's ability to replicate experimental findings. Such reduced small-scale networks can capture the first-order statistics of a biological network (like firing rates) but distort its second-order statistics (like the cross-correlations between the spike trains of different neurons). Mesoscopic measures of cortical activity like the LFP depend strongly on these cross-correlations, making it difficult for a downscaled network to provide accurate predictions for such measures. Cross-correlations depend on the effective connectivity as well as on the size of the simulated network. Taken together, this strengthens the argument that full-scale simulations are necessary to study large-scale brain networks (see van Albada et al., 2015, for more).
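The first-order/second-order distinction can be illustrated with a toy shared-input model (a hypothetical example, not the analysis of van Albada et al., 2015): two neurons can have exactly the same mean firing rate while their spike-count correlation, which is set by how much input they share, differs.

```python
import numpy as np

rng = np.random.default_rng(2)

def count_correlation(shared_fraction, rate=20.0, n_bins=100_000):
    """Spike counts of two neurons driven by shared + private Poisson input.

    The total mean count per bin is always `rate`, so the first-order
    statistic is fixed; only the shared fraction, and hence the
    pairwise correlation, changes.
    """
    s = rng.poisson(shared_fraction * rate, n_bins)            # common input
    a = s + rng.poisson((1 - shared_fraction) * rate, n_bins)  # neuron A
    b = s + rng.poisson((1 - shared_fraction) * rate, n_bins)  # neuron B
    return np.corrcoef(a, b)[0, 1]

c_full = count_correlation(0.5)  # densely shared input ("full scale")
c_down = count_correlation(0.1)  # less shared input after downscaling
print(f"corr (50% shared): {c_full:.2f}, corr (10% shared): {c_down:.2f}")
```

For Poisson inputs the correlation equals the shared-input fraction, so both networks match the experimental firing rate while only one matches the correlation structure, and hence the LFP-like mesoscopic signals built from it.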

To simulate a realistic model of cortical networks, one has to consider inter-areal (cortico-cortical) synapses and input from areas representing sensory stimulation. Prof. Diesmann used the example of a multi-area model of macaque visual cortex (Schmidt et al., 2016). The availability of rich anatomical connectivity datasets makes it feasible to simulate this network. The 32 areas in the network are structured in layers and together contain ~8 × 10⁸ neurons. Representing each area by a 1 mm² microcircuit, the network is downscaled to ~4 × 10⁶ neurons with ~4 × 10⁹ synapses. This downscaling allows researchers to simulate the network and study its dynamics on the supercomputers discussed above. In initial simulation efforts, the network settled into one of two stable attractor states, a 'low activity' state and a 'high activity' state, neither of which is directly comparable to experimental observations. Techniques from mean-field theory were used to stabilise the network and to incorporate dynamical constraints into its construction (Schuecker et al., 2016). The functional connectivity of the resulting simulated network corresponds well with functional connectivity measured in primate brains, so the model should greatly facilitate more fine-grained analyses of neural dynamics and functional connectivity.
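The bistability problem can be sketched with a minimal mean-field toy model (an illustration of the general idea, not the method of Schuecker et al., 2016): the population rate obeys a self-consistency equation r = φ(w·r + I), and for strong recurrence this equation has both a low-activity and a high-activity attractor.

```python
import numpy as np

def phi(x):
    """Sigmoidal transfer function mapping input to normalised rate."""
    return 1.0 / (1.0 + np.exp(-x))

def fixed_point(r0, w=10.0, i_ext=-5.0, n_iter=200):
    """Iterate the mean-field self-consistency equation r = phi(w*r + i_ext)
    from initial rate r0 until it settles into an attractor.
    Parameters are illustrative choices that make the system bistable."""
    r = r0
    for _ in range(n_iter):
        r = phi(w * r + i_ext)
    return r

r_low = fixed_point(0.1)   # converges to the low-activity state
r_high = fixed_point(0.9)  # converges to the high-activity state
print(f"low-activity attractor: {r_low:.3f}, high-activity attractor: {r_high:.3f}")
```

Mean-field theory makes the fixed-point structure explicit, so one can adjust connectivity within anatomical bounds until only a state with realistic rates remains stable, which is the spirit of the stabilisation procedure mentioned above.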

To summarise, using supercomputers in neuroscience will allow the simulation of large-scale brain networks which, if they are to provide a clear window onto neural dynamics, cannot be downscaled. Full-scale models explain various prominent features of network activity including, but not limited to, functional connectivity and network dynamics. Such networks can be used to study the function of various brain regions by simulating their interaction with the environment. The simulated networks, though detailed with respect to the connections between populations of neurons, do not incorporate the detailed biophysics of individual neurons or neuromorphology. As a result, they cannot be used to simulate low-level processes in the human brain.

As a systems neuroscientist, I am very impressed by the talk and the results presented by Prof. Diesmann. Simulating large-scale cortical networks will help us gain insight into the mechanistic processes that give rise to neural computation, and into how alterations of structure and dynamics result in various disorders. The talk focused mostly on the structural connectivity of cortical networks. Though structural connectivity can explain several experimental observations, it remains puzzling how these large-scale connectivity models can be integrated with the various models explaining development and cognitive processes in the human brain. Prof. Diesmann postulated that this integration may be possible by extending these networks without violating the underlying structural connectivity and dynamics observed in detailed structural models. I suppose the complex cognitive processes unique to humans and non-human primates, like language, communication, social behaviour, reasoning, and abstract thought, are the result of genetic variation over millions of years of evolution. Though one would hope that these cognitive phenomena can be captured by large-scale network models, current models place more emphasis on structure and dynamics at higher levels of abstraction, which in my opinion restricts their ability to explain such complex cognitive functions and the role of low-level neural processes in shaping these networks. Clearly, more research is required to integrate different classes of models and bridge the gap between Marr's three levels of analysis to solve the mystery of the human brain.


Additional reading:

  1. Potjans W., Morrison A. and Diesmann M. (2010). Enabling functional neural circuit simulations with distributed computing of neuromodulated plasticity. Front. Comput. Neurosci. 4:141.
  2. Potjans T. C. and Diesmann M. (2014). The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model. Cereb. Cortex 24:785–806.
  3. Kunkel S., Schmidt M., Eppler J. M., Plesser H. E., Masumoto G., Igarashi J., Ishii S., Fukai T., Morrison A., Diesmann M. and Helias M. (2014). Spiking network simulation code for petascale computers. Front. Neuroinform. 8:78.
  4. van Albada S. J., Helias M. and Diesmann M. (2015). Scalability of asynchronous networks is limited by one-to-one mapping between effective connectivity and correlations. PLoS Comput. Biol. 11(9):e1004490.
  5. Helias M., Kunkel S., Masumoto G., Igarashi J., Eppler J. M., Ishii S., Fukai T., Morrison A. and Diesmann M. (2012). Supercomputers ready for use as discovery machines for neuroscience. Front. Neuroinform. 6:26.
  6. Schmidt M., Bakker R., Shen K., Bezgin G., Diesmann M. and van Albada S. J. (2016). Full-density multi-scale account of structure and dynamics of macaque visual cortex. arXiv preprint arXiv:1511.09364.
  7. Schuecker J., Schmidt M., van Albada S. J., Diesmann M. and Helias M. (2016). Fundamental activity constraints lead to specific interpretations of the connectome. arXiv preprint arXiv:1509.03162.
