I have a quick question about the Neuropixels dataset. I was thinking of analyzing some layer-specific properties in the Neuropixels data (especially in layer 6b). The units are only labelled with a cortical region, not a layer, so to assign them a layer I’ve used the CCFv3 coordinates of the units where available, together with the annotation matrix with reference space key ‘annotation/ccf_2017’.
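In case it helps others, the lookup itself boils down to indexing the annotation volume at a unit's CCF coordinates divided by the voxel resolution. This is a minimal sketch of that idea, not my exact code: with the AllenSDK, the real volume comes from `ReferenceSpaceCache` with the 'annotation/ccf_2017' key; here a tiny synthetic array stands in for it so the example is self-contained.

```python
import numpy as np

# Sketch only: the CCFv3 annotation volume maps each voxel to a structure ID.
# Given a unit's CCF coordinates in micrometers (AP, DV, LR), integer division
# by the voxel resolution gives the index into the annotation array. In the
# AllenSDK the real volume comes from
#   ReferenceSpaceCache(resolution, 'annotation/ccf_2017', ...).get_annotation_volume()
# This toy array is a stand-in for illustration.

def structure_id_at(annotation, resolution_um, ap_um, dv_um, lr_um):
    """Return the annotation-volume structure ID at a CCF coordinate."""
    idx = (int(ap_um // resolution_um),
           int(dv_um // resolution_um),
           int(lr_um // resolution_um))
    return int(annotation[idx])

# Synthetic 2x2x2 "annotation volume" at 25 um resolution; 862 is a made-up ID.
toy_annotation = np.zeros((2, 2, 2), dtype=int)
toy_annotation[1, 0, 1] = 862

print(structure_id_at(toy_annotation, 25, 30, 10, 40))  # -> 862
```

The structure ID can then be resolved to an acronym (including the layer suffix) via the structure tree.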
This works fine and is very easy; however, I’ve noticed that some units then end up in a different (adjacent) cortical area than the one they were originally assigned to (i.e. their original ‘ecephys_structure_acronym’). This made me wonder: how trustworthy is this method? Has anyone tried it before (I haven’t found anything similar on the forum)? Is it possible to extract the layer information for these units?
I’m happy to post any code.
Glad you were able to figure out the layer assignments! This is the same method we used to assign layers in Siegle, Jia et al. (2021) Nature.
The reason for the discrepancy in the area labels is that the cortical area determined from our retinotopic mapping procedure takes precedence over the area extracted from the CCF coordinates. Because the precise cortical area boundaries can vary from mouse to mouse, we register the probe insertion image to each mouse’s individual cortical surface map in order to recover the actual area that was recorded. If the exact area of origin is ambiguous, the cells are labeled VIS. We chose to call the area label the ecephys_structure_acronym in order to indicate that it may differ from the expected structure acronym at a particular location in the CCF.
Hey, I’m also looking to do layer-specific analysis with Neuropixels data. Would you mind sharing more details on how you obtain the cortical layer of each unit? Thank you so much!
It will also allow you to calculate unit depth along cortical “streamlines” (paths normal to the cortical surface), which is more accurate than using distance along the probe.
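To make the streamline-depth idea concrete: given a streamline as an ordered set of 3-D points running from the pia down into white matter, a unit's cortical depth is the cumulative arc length along that path rather than the straight-line distance along the probe. This is a hypothetical sketch with made-up coordinates; the actual streamline coordinates would come from the CCF streamline data.

```python
import numpy as np

# Hypothetical sketch: a streamline is an ordered (N, 3) array of points from
# the pia (index 0) toward white matter. Depth of a unit projected onto point
# `unit_index` is the summed segment lengths up to that point. The polyline
# below is made up purely for illustration.

def depth_along_streamline(points, unit_index):
    """Arc length (same units as the coordinates) from points[0] to points[unit_index]."""
    seg = np.diff(points[:unit_index + 1], axis=0)
    return float(np.sqrt((seg ** 2).sum(axis=1)).sum())

streamline = np.array([[0., 0., 0.],
                       [0., 0., 100.],
                       [0., 50., 200.]])
print(depth_along_streamline(streamline, 2))  # 100 + sqrt(50**2 + 100**2)
```

For a curved streamline this arc length exceeds the straight-line (probe-axis) distance, which is exactly why it gives a better depth estimate in curved cortex.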
Please keep in mind that these layer assignments are only estimates, and not definitive labels. We chose not to include layer labels in the NWB files because this method is based on the boundaries of the average CCF template volume and may not be accurate for individual mice. To determine the area boundaries for individual mice, we recommend looking at the current source density plots that are available for each probe (see this notebook for info on how to retrieve these).
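For reference, the pre-computed CSD for a probe can be retrieved in the AllenSDK with `session.get_current_source_density(probe_id)`. Conceptually, the CSD is proportional to the negative second spatial derivative of the LFP across equally spaced channels; this minimal sketch (not the pipeline's actual implementation) shows that relationship on a toy LFP array.

```python
import numpy as np

# Minimal illustration (not the pipeline's implementation): CSD is
# proportional to the negative second spatial derivative of the LFP
# along the probe, estimated with a finite difference across channels.
def csd_second_derivative(lfp, spacing_um):
    """lfp: (channels, time) array; returns CSD for the interior channels."""
    return -(lfp[2:] - 2 * lfp[1:-1] + lfp[:-2]) / (spacing_um * 1e-6) ** 2

lfp = np.array([[0.0], [1.0], [0.0]])  # toy 3-channel, 1-sample LFP
print(csd_second_derivative(lfp, 40.0))
```

A local LFP peak flanked by lower values (as in the toy array) produces a positive second-derivative magnitude, i.e. a current source/sink signature at that channel.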
Hi Josh,
I have three follow-up questions about aligning the CSD with the estimated cortical layers.
session_839557629 doesn’t have any lfp.nwb files, and 8 sessions (e.g. the 48th to 54th sessions) are each missing one lfp.nwb file (they should have 6 instead of 5). Is this due to a loading issue in my program?
Some probes’ CSD (-0.1 to 0.25 s, 2500 Hz sample rate) have 875 time stamps, while others have 876. It seems that in the 875-time-stamp case the last stamp is missing and the window ends at 0.2496 s. Is this correct?
The pre-computed CSD should be 0.35 s long, but I found over 10 sessions (e.g. session_819701982) have 1.3 s of CSD, although their flash duration is still 0.25 s. Is this CSD computed using other stimuli instead of flashes?
The sessions you mentioned are missing LFP data due to excessive noise in the recording, either on specific probes, or across all probes. It’s not an issue with loading the data – we have not released these files.
That’s correct – it’s just due to a rounding error when computing the CSD window.
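A minimal sketch of how such an off-by-one can arise, assuming the window is built with floating-point arithmetic (the actual pipeline code may differ): `np.arange` over a float window can produce one sample more or fewer depending on rounding, whereas computing the sample count explicitly is deterministic.

```python
import numpy as np

fs = 2500.0
start, stop = -0.1, 0.25

# np.arange over a float window can yield 875 or 876 samples, depending on
# how (stop - start) / step rounds in floating point.
t_arange = np.arange(start, stop, 1.0 / fs)

# Computing the count explicitly is deterministic: 0.35 s * 2500 Hz = 875
# intervals, hence 876 time stamps when the endpoint is included.
n_intervals = round((stop - start) * fs)
t_fixed = np.linspace(start, stop, n_intervals + 1)
print(len(t_arange), len(t_fixed))  # len(t_fixed) is always 876
```

A trace that stops at 0.2496 s is exactly one sample period (0.4 ms) short of 0.25 s, consistent with this kind of exclusive-endpoint rounding.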
I’m not entirely sure what’s going on here. The CSD is always computed using flashes, but the time window must have been longer for those sessions. I think it’s safe to ignore the additional data in those CSD traces.
The reason that’s needed is because “layer 3” doesn’t exist in the CCF, only “layer 2/3.” That line of code prevents cells from hippocampal “CA3” from getting assigned to a cortical layer.
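For illustration, a hypothetical version of such a rule (the actual code isn't shown in this thread) could match only the layer suffixes that exist in the CCF — 1, 2/3, 4, 5, 6a, 6b — so that a bare trailing "3", as in "CA3", never matches.

```python
import re

# Hypothetical helper illustrating the idea: CCF cortical acronyms end in a
# layer token (1, 2/3, 4, 5, 6a, 6b). A bare "3" is deliberately absent from
# the pattern, because "layer 3" does not exist in the CCF -- matching it
# would wrongly assign hippocampal "CA3" to a cortical layer.
# Note: this assumes the function is only applied to isocortical acronyms
# (e.g. "VISp6b"); other structures ending in a digit would need filtering.
LAYER_PATTERN = re.compile(r"(1|2/3|4|5|6a|6b)$")

def extract_layer(acronym):
    match = LAYER_PATTERN.search(acronym)
    return match.group(1) if match else None

print(extract_layer("VISp6b"))   # -> 6b
print(extract_layer("VISp2/3"))  # -> 2/3
print(extract_layer("CA3"))      # -> None
```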