To whom it may concern,
I am curious what the recommended way is to pre-process the Neuropixels data for the following application: decoding natural-scene image identity across sessions in a particular region, say VISp, using spike counts. Given that ‘unit_id’ points to specific units across all sessions, and those units may or may not correspond to the same neural populations from session to session, would it be best simply to learn from all spike counts of VISp units during natural-scene presentations?
I am referring mostly to the image classification example in the quick start tutorial. If one wanted to scale this across Brain Observatory 1.1 sessions (given that the Functional Connectivity sessions did not include natural scenes), what would be recommended?
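To make the question concrete, here is a minimal sketch of what I have in mind: decode each session with its own VISp units (since feature dimensions differ across sessions) and then aggregate accuracies. The spike counts below are synthetic Poisson stand-ins for what I would get from something like `presentationwise_spike_counts`, and the classifier choice (scikit-learn logistic regression) is just my assumption, not anything from the tutorial.

```python
# Hypothetical per-session decoding sketch. Synthetic spike counts stand in
# for real AllenSDK (presentations x units) count matrices from VISp.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def decode_session(n_units, n_scenes=10, reps=20):
    # Simulated scene-dependent firing rates: each scene gets its own
    # mean-rate vector over this session's units.
    labels = np.repeat(np.arange(n_scenes), reps)
    rates = rng.gamma(2.0, 2.0, size=(n_scenes, n_units))
    counts = rng.poisson(rates[labels])  # (presentations x units)
    clf = LogisticRegression(max_iter=1000)
    # Cross-validated decoding accuracy within this single session.
    return cross_val_score(clf, counts, labels, cv=5).mean()

# Two "sessions" with different unit counts, decoded independently,
# then summarized across sessions rather than pooled into one matrix.
accs = [decode_session(n) for n in (40, 55)]
print(accs)
```

Is this per-session-then-aggregate approach the right pattern here, or is there a recommended way to pool units across sessions directly?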
Many thanks in advance for any feedback!