Using Visual Coding - Neuropixels data for image classification across sessions

To whom it may concern,

I am curious what the recommended way to pre-process the Neuropixels data is for the following application: image-classification decoding of natural scenes across sessions in a particular region, say VISp, using spike counts. Given that ‘unit_id’ points to specific units across all sessions, and those units may or may not refer to the same exact neural populations across sessions, would it be best simply to train on all spike counts from VISp units during natural-scene presentations?

I refer mostly to the image classification example in the quick start tutorial. If one wanted to scale this across Brain Observatory 1.1 sessions (given that the Functional Connectivity sessions didn’t include natural scenes), what would be the recommended approach?

Many thanks for any feedback ahead of time!

Best regards,
Alex McClanahan

Hi Alex, thanks for the post! Each unit in the database is recorded in exactly one session. Unlike in the two-photon imaging experiments, we can’t return to the same populations over multiple days; the Neuropixels probes are inserted into the brain once and then removed. So there’s no way to aggregate data across sessions when training an image classifier on a single neural population; each session’s units have to be treated as a distinct population.
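Given that, the natural workflow is to fit one decoder per session and compare accuracies across sessions, rather than pooling units. Here's a minimal, self-contained sketch of the within-session step, with synthetic Poisson spike counts standing in for the real presentationwise counts you'd pull from the SDK (the unit/image/repeat numbers are made up for illustration), and a simple nearest-centroid classifier in place of whatever decoder you prefer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for one session's presentationwise spike counts:
# n_images natural scenes, each shown n_repeats times, recorded from
# n_units VISp units. Real counts would come from the session object.
n_images, n_repeats, n_units = 10, 20, 30

# Each image drives each unit at a different mean rate (hypothetical tuning).
rates = rng.uniform(1.0, 10.0, size=(n_images, n_units))
X = rng.poisson(rates[:, None, :], size=(n_images, n_repeats, n_units))
y = np.repeat(np.arange(n_images), n_repeats)
X = X.reshape(-1, n_units).astype(float)

# Train/test split over presentations (never over units).
perm = rng.permutation(len(y))
split = len(y) // 2
train, test = perm[:split], perm[split:]

# Nearest-centroid decoder: mean spike-count vector per image on the
# training presentations, then classify test presentations by
# Euclidean distance to the nearest centroid.
centroids = np.stack([X[train][y[train] == k].mean(axis=0)
                      for k in range(n_images)])
dists = np.linalg.norm(X[test][:, None, :] - centroids[None], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y[test]).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / n_images:.2f})")
```

Running this per session gives you a distribution of decoding accuracies across the Brain Observatory 1.1 sessions, which you can then summarize; the populations differ between sessions, so the accuracies are the comparable quantity, not the fitted weights.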

Hey Josh,

I see. Thanks for clarifying!

Hey again @joshs,

One more thing I noticed: in the LFP analysis tutorial, near the bottom, the webpage has some formatting problems. Just thought I would let you know! Also, the code seems to be missing from that portion of the tutorial, starting with ‘Aligning LFP data to a stimulus.’


Thanks for pointing that out! It looks like there was an error when converting that notebook to HTML; we’ll look into getting it fixed. Until then, if you click the “download .ipynb” link and load the original .ipynb file in Jupyter, you’ll be able to see the rest of the notebook.
