Dear Community,
I am working with the following three datasets:
Visual Behavior Optical Physiology, Visual Coding Optical Physiology, and Visual Behavior Neuropixels.
I was wondering whether the neuron segmentation can be regarded as a segmentation of all visible neurons or only of all active neurons. The Neuropixels whitepaper, for example, states that the active neurons are segmented; however, I did not see any step that ensured this in my view.
Maybe you have a better idea of whether really only the active neurons are segmented, and if so, which step ensures this behavior.
Best,
Nicolas
Hi Nicolas,
For the optical physiology datasets (Visual Coding and Visual Behavior), the segmentation is of visible neurons. There is a caveat that, to be visible, a neuron needs some baseline activity - but the activity can be very minimal for the neuron to be detected and segmented.
I’m unclear about the segmentation in the Neuropixels dataset - I don’t see any mention of segmenting neurons in that white paper. There is, however, a spike sorting step, and this does require that the neurons be active. There is also a QC requirement that neurons are present (i.e. firing spikes) for the majority of the session. So this is a key difference between the ophys and ephys data modalities.
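If it helps to make the difference concrete, here is a rough sketch of how the two modalities surface in the AllenSDK. I’m showing the Visual Coding entry points here; the Visual Behavior project caches expose analogous objects, and the IDs below are just example placeholders.

```python
from allensdk.core.brain_observatory_cache import BrainObservatoryCache
from allensdk.brain_observatory.ecephys.ecephys_project_cache import EcephysProjectCache

# Ophys: the segmentation output is one 2D ROI mask per segmented (i.e. visible) cell
boc = BrainObservatoryCache(manifest_file="boc/manifest.json")
ophys = boc.get_ophys_experiment_data(501940850)   # example experiment ID - substitute your own
roi_masks = ophys.get_roi_mask_array()             # boolean array, shape (n_cells, height, width)
cell_ids = ophys.get_cell_specimen_ids()

# Ephys: there are no masks - spike sorting yields a table of units defined by their spikes,
# so every unit is by construction an active neuron
ecephys = EcephysProjectCache.from_warehouse(manifest="ecephys/manifest.json")
session = ecephys.get_session_data(715093703)      # example session ID - substitute your own
units = session.units                              # DataFrame of spike-sorted units
```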
Hope this helps,
Saskia
Hey Saskia,
Thanks for the quick reply!
I found the following in the Visual Behavior Whitepaper:
The Visual Behavior 2P project used the same segmentation procedure that was developed for the Visual Coding 2P dataset, published in de Vries et al., 2020. The active cell segmentation module was designed to locate active cells within a field-of-view (FOV) by isolating cellular objects using the spatial and temporal information from the entire movie. The goal of the active cell segmentation module is to achieve robust performance across experimental conditions with no or little adjustment, such as different mouse cell lines, fluorescent proteins (e.g., GCaMP6f or GCaMP6s), and FOV locations of visual areas and depths. The process begins with the full image sequence as input to apply both the spatial as well as temporal information to isolate an individual active cell of interest without data reduction, such as by PCA, and does not make assumptions about the number of independent components existing in the active cell movie. Also, in contrast to other methods, this approach separates the individual steps, including identifying and isolating each cellular object, computing confidence of each identified object (by object classification) and the step of resolving objects overlapping in x-y space (which lead to cross talk in traces), so that each can be improved upon if necessary.
Here is the URL:
It states that the same procedure was used for Visual Coding and Visual Behavior, which should include all the datasets I mentioned earlier.
The text, however, states that the segmentation result is a segmentation of the active neurons - that’s why I was a bit confused, as I did not find any methodological step that makes sure that only active neurons are segmented.
So you are positive that, in principle, the masks obtained from these datasets include all visible neurons, right?
Thanks again!
Best,
Nicolas
Hi Nicolas,
Yes, the masks obtained are based on the cells being visible - not based on activity. I apologize that the language in that white paper is confusing.
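If you want to check this for yourself, here is a minimal sketch for the Visual Behavior ophys data, assuming the standard S3-backed project cache (the experiment choice below is arbitrary):

```python
from allensdk.brain_observatory.behavior.behavior_project_cache import VisualBehaviorOphysProjectCache

cache = VisualBehaviorOphysProjectCache.from_s3_cache(cache_dir="vb_ophys_cache")

# Pick any ophys experiment from the experiment table
experiment_id = cache.get_ophys_experiment_table().index[0]
experiment = cache.get_behavior_ophys_experiment(experiment_id)

# One row per segmented cell; the 'roi_mask' column holds the 2D boolean mask for that cell
cell_table = experiment.cell_specimen_table
print(len(cell_table), "segmented cells in this field of view")
```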
Saskia
Thanks for the reply!
Okay, nice to have that clarified!
Best,
Nicolas