Accessing frames for natural_movie_shuffled in Neuropixels data

Hello,
I am working with the Neuropixel data, and the “Functional Connectivity” experiment paradigm. In that, there is a block of 10 minutes, where the “natural_movie_shuffled” is presented. I was curious about

  1. How was the shuffled frame order obtained? Was the same shuffle presented in all experiments, or was a different shuffle generated each day?
  2. Since there were 20 repeats, were all 20 repeats the same “movie,” or was each presentation different?
    When I open the .nwb file (for example, session 778240327), the “frame” column inside “/intervals/natural_movie_one_shuffled_presentations” simply increases from 0 to 900, which I assume is just the frame number within the presentation and does not reflect frame IDs with respect to the original video. I want to know which visual cue/frame was on screen at each time point.
    I tried the approach from this link, but it did not work; I assume it was intended for the calcium imaging experiments, where the shuffled movie was not presented.
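In other words, I would like to be able to do a lookup like the following (a sketch with synthetic data; I am assuming the intervals table exposes a `start_time` column alongside the `frame` column):

```python
import numpy as np

# Synthetic stand-ins for the NWB intervals table: one row per presented
# frame, with a start time and a frame index (0..899 within each repeat).
n_frames, dt = 900, 1 / 30.0             # 900 frames at ~30 Hz (assumed rate)
start_times = np.arange(n_frames) * dt   # stand-in for the start_time column
frames = np.arange(n_frames)             # stand-in for the "frame" column

# Given an event time t, find the frame that was on screen.
t = 1.23                                  # query time (seconds)
idx = np.searchsorted(start_times, t, side="right") - 1
frame_on_screen = frames[idx]
print(frame_on_screen)  # → 36
```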
    Thanks in anticipation,
    Chinmay

Hi Chinmay,

The shuffled movie was created once by randomly permuting the frames in “Natural Movie One.” So the movie is identical for each repeat, and for each session.
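Schematically, once you have the permutation (a hypothetical `shuffle_order` array below; the real one comes with the movie file), mapping between shuffled and original frame indices is just array indexing:

```python
import numpy as np

# Hypothetical stand-in for the actual fixed permutation of 0..899:
rng = np.random.default_rng(0)
shuffle_order = rng.permutation(900)

# The shuffled movie is original_movie[shuffle_order], so a "frame" entry
# from the presentations table maps back to the original movie like this:
shuffled_frame = 5                        # index within the shuffled movie
original_frame = shuffle_order[shuffled_frame]

# Inverse mapping: where a given original frame appears in the shuffled movie.
inverse = np.argsort(shuffle_order)
assert inverse[original_frame] == shuffled_frame
```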

The original movie is not available through the AllenSDK, but I can send it to you if you’d like.

Josh

Hi Josh,
Thank you again for your prompt reply. Yes, it would be great if you could send me that video/frame stack. My email is chinmay.purandare@gmail.com .
Thanks again,
Chinmay

Hi, Josh,

I also need to access the shuffled movie for my research. Could you please send me the shuffled movie or frame data corresponding to the original one? My email is joshuayanginf@gmail.com.

Thanks,
Joshua

Just sent it!

Hi Josh,

Thanks for lending a hand on this. I would like the movie frames for the shuffled presentation as well. You can email me at awilliams@flatironinstitute.org

Thanks again,

– Alex

Sorry for missing this earlier, I just sent it!

Hi Josh, I recently encountered the same issue. Could you also send me (cyusi@ucsd.edu) the movie frames for the shuffled stimulus? Thanks in advance.

Yusi

Just sent it!

Hi Josh,

Could you also send me the movie frames for the shuffled natural movie? (L.Meyerolbersleben@campus.lmu.de)

Thank you very much, best wishes,
Lukas

Sent!

Hi Josh,

Could you please send me the movie frames for all three original natural movies, if possible?

Thank you,
Jacob

Sent!

Hi Josh,

I have the same problem, could you send me a copy? My email address is 202331061057@mail.bnu.edu.cn

Thank you,
Weiwei Wang

Hi Josh,

Another detail about the stimulus presentation I wanted to confirm: were the movie stimuli presented in the center of the screen at native resolution (304 × 608 pixels, height × width), with a gray fill around the edges?

Thanks in advance.

I just sent you the frame info!

A spherical warp was applied to all stimuli (including movies) so that they covered the whole screen, with no gray regions. Details of the warping procedure can be found in the “Stimulus monitor” section of this whitepaper.

Thank you very much, I have received the file. I noticed the spherical warp transformation described in the white paper. I understand that the stimulus generation process has two stages:

  1. Generate stimuli using specific parameters;
  2. Warp screen transformation before presentation.

I wonder whether I need to scale the movie frames so that they cover the full screen after warping. Specifically, I use ImageStim in PsychoPy to present the movie frames. Should I set the size parameter to (1920, 960), keep the original resolution (608, 304), or use other parameters to ensure that the stimulus position matches what was used during the experiment?

Would you like to replicate the same stimuli for your own experiments, or determine which pixels were on the screen for analysis purposes?

If it’s the former, the PsychoPy window/warp parameters that were used can be found here: openscope_loop/camstim/camstim/window.py at main · AllenInstitute/openscope_loop · GitHub

If it’s the latter, the make_stimulus_template function in the AllenSDK can be used to figure out which pixels were visible to the mouse.
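For the analysis case, the underlying idea looks roughly like this toy sketch (pure NumPy, with assumed monitor geometry of a 51 × 32.5 cm screen viewed from 15 cm, and a hypothetical angular extent; this illustrates the angular-mask concept only, not the exact AllenSDK implementation):

```python
import numpy as np

# Toy sketch of finding which display pixels fall inside the stimulus'
# angular extent as seen from the eye. Geometry values are assumptions;
# see make_stimulus_template in the AllenSDK for the real procedure.
mon_w_cm, mon_h_cm, dist_cm = 51.0, 32.5, 15.0
res_x, res_y = 1920, 1200

# Physical coordinates of each pixel, with the eye on the screen's normal.
xs = (np.arange(res_x) / res_x - 0.5) * mon_w_cm
ys = (np.arange(res_y) / res_y - 0.5) * mon_h_cm
X, Y = np.meshgrid(xs, ys)

# Angular position (degrees) of each pixel relative to the eye.
azim = np.degrees(np.arctan2(X, dist_cm))
elev = np.degrees(np.arctan2(Y, dist_cm))

# Keep pixels inside a hypothetical ±60° azimuth, ±40° elevation window.
mask = (np.abs(azim) <= 60) & (np.abs(elev) <= 40)
print(mask.shape, mask.mean())
```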

Thank you for your very helpful reply. I am trying to reproduce the stimulus presentation process for data analysis rather than to rerun the experiment. The openscope_loop repository looks very useful, and I will study it carefully. As I understand it, the spherical transformation keeps the stimulus invariant as projected onto the retina. My plan is to generate the stimulus directly and use the mask obtained from the make_stimulus_template function to determine the ROI, interpreting the stimulus as if it were projected directly onto the retina for subsequent analysis.

Hi Josh,

I have the same problem as others.

Could you send me movie frames for the shuffled natural movie?

My email address is yrkim224@snu.ac.kr

Thank you very much!

Yeerim Kim