Attendees at the workshop were invited to share written questions for the speakers. My responses to the questions that remained at the end of my talk follow:
Q : What have we learned about the brain if we can predict every spike and every LFP waveform with a super complicated model?
Anton Arkhipov: This is an excellent question that warrants substantial discussion. In short, detailed models by themselves do not constitute understanding. They are a tool. Using this tool, one can test hypotheses and help move science forward. These models provide realistic constraints on what brain circuits might be doing, which is better than operating with a simple schematic or just a couple of equations. If they are powerful enough, such models can also be used in many practical applications where experiments may be too difficult. In the end, we want to make predictions and study mechanisms, and that’s what these models can be good for once they are powerful enough.
Q : Anton, given the simulation cost and duration, does the multilevel model address long-term changes (e.g., neuron replacement)?
Anton Arkhipov: Currently, no. There are many long-timescale mechanisms that are not included in these models, but it will be very interesting to start including those.
Q : Anton, once we have the corresponding data/figures for simulation vs. experiment, how do we quantify “how well” the model is doing?
Anton Arkhipov: Good question. There are many ways to do that, and not necessarily just one “correct” approach. In our paper we used a similarity score based on the Kolmogorov-Smirnov distance between distributions (e.g., experimental and simulated distributions of the direction selectivity index for each neuron in the population of interest). That gives you one number from 0 to 1 for a given population and a given metric, which tells you how similar the two distributions are. But there are many other ways to quantify this, and we are looking into several alternatives now in different projects.
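As a concrete illustration, here is a minimal sketch of a similarity score built from the two-sample Kolmogorov-Smirnov distance. The 1 - KS mapping and the placeholder data are assumptions for illustration; the exact definition used in the paper may differ.

```python
# Minimal sketch of a KS-based similarity score. Assumption: the score is
# taken as 1 minus the two-sample KS distance; the exact definition used
# in the paper may differ.
import numpy as np
from scipy.stats import ks_2samp

def ks_similarity(experimental, simulated):
    """Return a score in [0, 1]: 1 = identical empirical distributions,
    0 = maximally different."""
    res = ks_2samp(experimental, simulated)  # res.statistic is the KS distance
    return 1.0 - res.statistic

# Example with placeholder direction selectivity index (DSI) values:
rng = np.random.default_rng(0)
dsi_exp = rng.beta(2, 5, size=300)  # stand-in for experimental DSI values
dsi_sim = rng.beta(2, 4, size=300)  # stand-in for simulated DSI values
print(ks_similarity(dsi_exp, dsi_sim))
```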
Q : Please, can you share the reference for this Dai et al. bioRxiv paper?
Anton Arkhipov: https://www.biorxiv.org/content/10.1101/2020.05.08.084947v1
Q : Will a version for Python 3.6 and above be released?
Anton Arkhipov: If you are asking about BMTK, it has already been released with support for Python 3.6. Please contact us directly with questions.
Q : What model parameters do you fit? How do you fit these parameters?
Anton Arkhipov: We fit synaptic weights at the level of cell types. Our target is the spontaneous firing rate (mean for each cell type) and the peak firing rate (also mean for each cell type) in response to a single episode (0.5 s) of a single drifting grating — so, that’s a small training set, and it’s quite interesting that the model works so well on the test set (all other visual stimuli). Please see details in Billeh et al., Neuron, 2020.
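To make the flavor of such a fit concrete, here is a hypothetical sketch of tuning per-cell-type synaptic weight scales toward target firing rates. The update rule and all names are illustrative; this is not the actual procedure from Billeh et al.

```python
# Hypothetical sketch of cell-type-level weight tuning: every name and the
# update rule are illustrative; this is not the procedure from Billeh et al.
import numpy as np

def tune_weight_scales(simulate, target_rates, n_iter=20, lr=0.5):
    """simulate(scales) -> dict of mean firing rates (Hz) per cell type;
    target_rates: dict of target mean rates per cell type."""
    scales = {ct: 1.0 for ct in target_rates}  # per-type weight multipliers
    for _ in range(n_iter):
        rates = simulate(scales)
        for ct, target in target_rates.items():
            # If a cell type fires too little, scale up its synaptic
            # weights (and vice versa); clip to keep updates stable.
            ratio = np.clip(target / max(rates[ct], 1e-3), 0.5, 2.0)
            scales[ct] *= ratio ** lr
    return scales
```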
Q : Are there any benefits to incorporating Bayesian approaches in the models?
Anton Arkhipov: The main challenge is that in these models biological mechanisms are incorporated explicitly. So, including Bayesian principles in these models requires an understanding of, or a hypothesis about, how these principles are implemented biologically. But if that can be provided, models like these can be a great tool to test such hypotheses.
Q : Any particular reason why only V1 has been focused on?
Anton Arkhipov: Most of the data come from V1. This is probably the best characterized part of the cortex in the mouse currently.
Q : How do you see statistical models being integrated into this process of computational modeling? Are statistical models simply a steppingstone towards getting at better computational models? I am thinking about this at the scale of LFP/EEG/MEG.
Anton Arkhipov: I think the key is to connect statistical models with physical or biological mechanisms. Once you have a particular effect derived from a statistical model, develop a hypothesis of how that effect may be implemented in biology — then such a hypothesis can be tested using these large-scale bio-realistic models.
Q : Anton, broader question, given the objective for the multilevel models — doesn’t this lead to an inherently complex model? Can you comment on the complexity and effects on usability, interpretability, and cost?
Anton Arkhipov: This is an important point. These bio-realistic models do not replace theory, and they do not constitute “understanding” on their own. They are tools — ideally, they can become powerful enough to be used as platforms for predictive studies. If this works well, these models can be used to test hypotheses and theories, or to perform studies that cannot be done experimentally, and drive scientific progress that way. Calculus, likewise, is not an understanding in itself; it is a tool for answering questions about mathematical problems.
Q : How does AI relate these models to behavior?
Anton Arkhipov: I don’t think we know the answer to that. It would be very interesting to try and explore this question. Certainly many potential applications there.
Q : To Dr. Arkhipov, natural movie responses are more sparse and precise than responses to drifting gratings; does your model reproduce this phenomenon? If so, what do you think is the mechanistic reason behind it, given your model?
Anton Arkhipov: We see that difference in sparsity between gratings and natural movies in our model, but not to the extent seen experimentally — in experiment, the sparsity for movies is higher than what our model shows. So, the model does relatively well qualitatively on this observable, and that probably has to do with natural movies presenting a less “concentrated” set of stimuli at each moment in time (the movement and objects usually do not cover the whole screen). At the same time, the model is apparently missing some important mechanism here, since its sparsity is not as high as in the experiment. That’s an interesting topic for further investigation.
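For readers who want to reproduce this kind of comparison, here is a minimal sketch of one standard sparsity measure, lifetime sparseness (after Vinje & Gallant, 2000). This is an illustration; the exact metric used in the model-vs-experiment comparison may differ.

```python
# Minimal sketch of lifetime sparseness (after Vinje & Gallant, 2000);
# the metric used for the model-vs-experiment comparison may differ.
import numpy as np

def lifetime_sparseness(rates):
    """rates: 1-D array of a neuron's mean responses across time bins or
    stimuli (not all zero). Returns a value in [0, 1]; higher = sparser."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    mean_sq = (r.sum() / n) ** 2
    sq_mean = (r ** 2).sum() / n
    return (1.0 - mean_sq / sq_mean) / (1.0 - 1.0 / n)

print(lifetime_sparseness([0.0, 0.0, 0.0, 10.0]))  # ~1: sparse response
print(lifetime_sparseness([5.0, 5.0, 5.0, 5.0]))   # 0: dense response
```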
Q : When you update the model to fit some data better (e.g., in the case of CSD responses to full-field flashes), do you also check whether those updates fit other aspects of the data well (e.g., evoked responses to other visual stimuli)?
Anton Arkhipov: Yes, this is what we aim to do. This is work in progress, so full comparison has not been done yet. But yes, it is important to test the modified model on the previous metrics as well.
Q : Are feedback inputs from higher areas also accounted for in these models?
Anton Arkhipov: Currently, no. This is a great direction for future work.
Q : What about the inverse problem from EEG and LFP? Would it make more sense to model spikes, given this issue?
Anton Arkhipov: That’s a good question, and there are important issues to consider when trying to make predictions regarding the connection between EEG and LFP. However, LFP can be measured directly using probes like Neuropixels. And that information, together with spikes, is useful to constrain models and learn something about the mechanisms operating in these cortical circuits.
Q for Anton from Greg Handy: When you updated the parameters to capture the LFP/CSD features, how was this done to prevent changes to previous results? When new data come online, do you continue to constrain the model with all of the previous experimental results as well?
Anton Arkhipov: As I wrote in response to another question:
Yes, this is what we aim to do. This is work in progress, so full comparison has not been done yet. But yes, it is important to test the modified model on the previous metrics as well.
I will add that there can be different ways of using these models. One way is to have a “base” model and tweak it for specific questions. The other is to build a new version of the “base” model, which supersedes the old one. Testing on many metrics, both new and old, is particularly important in the latter case, so that we can be confident the new “base” model can become a standard in the field.
Q : @Anton Do you include feedback inputs from higher areas in the cortical models (biophysical and point)? Much of this is largely unknown and seems to play a huge role in modulating firing rates during contextual influences and adaptation!
Anton Arkhipov: Not yet, but that’s an important area for future work. I agree, this is a very interesting area to explore.
Q : [to Anton] With so many available parameters in your model, what strategy do you have for changing them when you find a discrepancy with data (like you showed for LFP/CSD)? Beyond eventually getting some help from an even more reduced version of the model (with populations of neurons), which could help in this case, how do you proceed? Don’t you run into problems like the flat landscape mentioned by Gaute?
Anton Arkhipov: Yes, there’s a problem of a potentially flat landscape. At this point, it is mostly intuition. We try perturbing the parameters along one or a few well-constrained dimensions. If it doesn’t work, it doesn’t mean there’s no solution… If it works, it doesn’t mean that solution is the only one possible. But usually this process leads to some useful insights and at least some predictions that can then be followed up experimentally.
Follow up comment to Anton’s answer: Thanks Anton, interesting!
However, it seems that you are in a situation not too different from that of simpler models (where only a few parameters are available), since you can explore only a few parameters among many.
In theoretical/computational neuroscience, almost everyone includes only the few ingredients they think are crucial to explain some experiments, and relating an approach like yours to this practice would be very interesting. On one side, your approach can suggest new ideas about what is important to consider (instead of deciding a priori), so extracting the essential ingredients that make the model work is quite important. On the other side, it would be very nice if your approach could go genuinely beyond what simpler models can do and explore more thoroughly the complexity that is usually not taken into account.
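The one-dimensional perturbations Anton describes above can be pictured as a simple parameter sweep. The sketch below is hypothetical; run_model, discrepancy, and the parameter name are made-up placeholders, not parts of the actual workflow.

```python
# Hypothetical illustration of a one-dimensional parameter perturbation:
# run_model, discrepancy, and the parameter name are made-up placeholders.
import numpy as np

def sweep_parameter(run_model, discrepancy, base_params, name, factors):
    """run_model(params) -> simulated observables;
    discrepancy(observables) -> scalar mismatch with experiment.
    Returns the mismatch for each multiplicative perturbation factor."""
    results = {}
    for f in factors:
        params = dict(base_params)
        params[name] *= f  # perturb one well-constrained dimension
        results[f] = discrepancy(run_model(params))
    return results

# e.g. sweep_parameter(run_model, discrepancy, base_params,
#                      'l4e_to_l4i_weight', np.linspace(0.5, 2.0, 7))
# A nearly flat results curve is precisely the "flat landscape" problem:
# the data barely constrain that direction in parameter space.
```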
Q : What about using a weighted sum of the currents arriving at excitatory neurons as the LFP signal for point-neuron models?
Espen Hagen: That won’t account for the effects of filtering along dendritic cables or for the neuron geometry relative to the electrode location. We proposed a so-called hybrid scheme for LFPs from point-neuron networks in [Hagen et al., Cerebral Cortex, 2016], which can account for these effects. Mazzoni et al. proposed a simpler method as well (more similar to your idea): [Mazzoni, Alberto et al. (2016), Data from: Computing the local field potential (LFP) from integrate-and-fire network models, Dryad, Dataset].
Anton Arkhipov: Thanks for answering this, Espen!
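For concreteness, here is a minimal sketch of the weighted-sum idea raised in the question, in the spirit of the proxies evaluated by Mazzoni et al. The alpha and delay_ms values are illustrative placeholders, not the fitted coefficients from that work, and, as Espen notes, such a proxy ignores dendritic filtering and electrode geometry.

```python
# Illustrative sketch of a weighted-sum LFP proxy for point-neuron models,
# in the spirit of Mazzoni et al.; alpha and delay_ms below are placeholder
# values, not the fitted coefficients from that work.
import numpy as np

def lfp_proxy(i_ampa, i_gaba, dt=0.1, alpha=1.0, delay_ms=0.0):
    """i_ampa, i_gaba: arrays (time,) of summed synaptic currents onto
    excitatory cells; dt and delay_ms in ms. Returns an LFP proxy trace."""
    shift = int(round(delay_ms / dt))
    i_ampa_delayed = np.roll(i_ampa, shift)  # crude delay of one pathway
    i_ampa_delayed[:shift] = 0.0
    return np.abs(i_ampa_delayed) + alpha * np.abs(i_gaba)
```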
Q : DNN modeling has shown that learning task-relevant representations is a critical aspect of predicting neural responses. Are you attempting to investigate learning in these circuit models, or are you simply trying to implement known biophysical details and fit the parameters to neural data?
Anton Arkhipov: At this point, it is “simply trying to implement known biophysical details and fit the parameters to neural data”, but I agree that adding learning to these models will be very interesting. Of course, more data on learning at the cellular level will be useful here.
Q : Specifically, which point-neuron model has been used in the network models that were described? The figure legends say “GLIF”, but this is a class of neuron models rather than a particular one.
Stefan Mihalas: I will answer this question, as the point-neuron network equivalent of the biophysical one was built primarily in my group. We used GLIF3 from https://celltypes.brain-map.org for the network model; that is, LIF plus multiple spike-induced current kernels. The neurons are mapped one-to-one to the biophysical ones in the connection graph, but the weights were tuned separately.
Anton Arkhipov: Thanks for answering this, Stefan!
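For readers unfamiliar with the GLIF3 formulation, here is a minimal sketch of its dynamics: leaky integrate-and-fire plus spike-induced after-spike currents, following the general form of the Allen Institute GLIF models. Every parameter value below is illustrative, not a fitted value from celltypes.brain-map.org.

```python
# Minimal sketch of GLIF3 dynamics: leaky integrate-and-fire plus two
# after-spike currents. The general form follows the Allen Institute GLIF
# models; every parameter value here is illustrative, not a fitted value
# from celltypes.brain-map.org.
import numpy as np

def simulate_glif3(i_ext, dt=0.1, C=0.1, R=100.0, E_L=-70.0, theta=-50.0,
                   v_reset=-70.0, k=(0.3, 0.03), amp=(-0.5, 0.1)):
    """i_ext: external current (nA) per time step; dt in ms, C in nF,
    R in MOhm, voltages in mV. Returns spike times (ms) and V trace."""
    v = E_L
    asc = np.zeros(len(k))  # after-spike currents I_j (nA)
    k = np.asarray(k)       # decay rates k_j (1/ms)
    amp = np.asarray(amp)   # increments A_j added at each spike (nA)
    spikes, v_trace = [], []
    for t, i_e in enumerate(i_ext):
        v += ((i_e + asc.sum()) / C - (v - E_L) / (R * C)) * dt
        asc -= k * asc * dt             # dI_j/dt = -k_j * I_j
        if v >= theta:                  # threshold crossing: spike and reset
            spikes.append(t * dt)
            v = v_reset
            asc = asc + amp             # I_j <- I_j + A_j
        v_trace.append(v)
    return np.array(spikes), np.array(v_trace)

# e.g. simulate_glif3(np.full(5000, 0.3))  # 500 ms of 0.3 nA step current
```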