Transformation of degrees of visual angle to pixels

Hi all, I have a question regarding the transformation of degrees of visual angle (e.g. from the pre-computed RF positions, which are given in degrees) to monitor pixels.

Using the module allensdk.brain_observatory.stimulus_info, specifically its BrainObservatoryMonitor object, the method visual_degrees_to_pixels gives me a px2deg ratio of 9.68. However, the whitepaper says the monitor size was 1920x1200 pixels, corresponding to 120° x 95° of visual angle, which would give a px2deg ratio of 1920/120 = 16, quite different. I don’t understand where this discrepancy comes from. Thank you very much in advance for any help!
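For reference, here’s roughly how I got the two numbers (a minimal sketch; I’m assuming BrainObservatoryMonitor can be constructed with its default geometry):

```python
from allensdk.brain_observatory.stimulus_info import BrainObservatoryMonitor

monitor = BrainObservatoryMonitor()  # default Brain Observatory geometry

# pixels spanned by 1 degree of visual angle, as computed by the SDK
sdk_px_per_deg = monitor.visual_degrees_to_pixels(1.0)  # ~9.68

# naive ratio from the whitepaper numbers (1920 px spanning 120 deg)
naive_px_per_deg = 1920 / 120  # = 16.0

print(sdk_px_per_deg, naive_px_per_deg)
```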

Sorry, just realized: does this discrepancy have to do with the stimulus warping? If so, does the value returned by visual_degrees_to_pixels then represent the approximate relationship between degrees and pixels from the mouse’s perspective after warping?

Hi! You’re on the right track!
The visual size of the monitor (120° x 95°) follows from the monitor’s physical size and its distance from the mouse’s eye. The px2deg value is correct at the center of the monitor, but because the stimulus is warped, that relationship is not uniform across the screen.
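To make that concrete, here is a back-of-the-envelope sketch (the 15 cm eye distance and ~52 cm panel width are my assumptions about the rig geometry, not values read out of the SDK): on a flat screen at distance d, a point at angle θ from the perpendicular sits at x = d·tan(θ), so the local pixels-per-degree scale grows as 1/cos²(θ) away from the center.

```python
import numpy as np

d_cm = 15.0          # assumed eye-to-screen distance
panel_w_cm = 52.0    # assumed physical panel width
px_per_cm = 1920 / panel_w_cm

def px_per_deg(theta_deg):
    """Local pixels-per-degree at angle theta from the screen center."""
    theta = np.radians(theta_deg)
    # dx/dtheta = d / cos(theta)^2, converted from per-radian to per-degree
    return d_cm / np.cos(theta) ** 2 * np.radians(1.0) * px_per_cm

print(px_per_deg(0.0))   # ~9.7 px/deg at the center (close to the SDK value)
print(px_per_deg(60.0))  # ~4x larger near the edge of the monitor
```

Averaged across the full 120°, this works out to the 1920/120 = 16 px/deg from the whitepaper, even though the scale at the center of the screen is only about 9.7 px/deg.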

Thank you very much! But if the warping perfectly corrects for the distortion caused by placing the monitor close to the mouse’s eye, can’t we then assume that the px2deg translation is constant from the mouse’s perspective? I’m asking because I’d like to extract local patches of the natural movies within the receptive field of a given unit, and therefore need to translate both the RF position and the RF area from degrees to pixels. (I’m using the method map_stimulus_coordinate_to_monitor_coordinate to relate template pixels to monitor pixels.)

The BrainObservatoryMonitor object has the methods “lsn_image_to_screen” and “natural_movie_image_to_screen”, which put the stimulus templates of each stimulus into the same screen coordinates, allowing you to compare them directly.
We actually made a function to do this as part of a course, which might help:
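Independent of that, here’s a minimal sketch of putting a template frame into screen coordinates with those methods (the default constructor and the template frame shape are my assumptions):

```python
import numpy as np
from allensdk.brain_observatory.stimulus_info import BrainObservatoryMonitor

monitor = BrainObservatoryMonitor()

# dummy frame with the shape of the natural-movie templates
movie_frame = np.zeros((304, 608), dtype=np.uint8)
screen_img = monitor.natural_movie_image_to_screen(movie_frame)

print(screen_img.shape)  # full monitor resolution, e.g. (1200, 1920)
```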

Dear Saskia,
Sorry about the long delay; I was focusing on other aspects of my data analysis in the meantime. Thank you for the link to the code. It pointed me towards some useful functions, but I don’t think it’s quite what I need. I’m struggling with the following issues:

  • I’m analyzing the Neuropixels dataset, which, as far as I’m aware, doesn’t include sparse noise but instead uses Gabor patches to map RFs. So far, I haven’t been able to find any function that maps the Gabor patch positions to the screen, especially since they are given in degrees of visual angle in the warped coordinate system (as per this answer: Gabor patch data - units).
  • The Gabor patch positions range from -40° to +40° in both azimuth and elevation, but the estimated RF azimuths range from +10° to +90° and the elevations from -30° to +50°. Could you clarify how these two coordinate systems relate to each other?

Hi! The Gabor patch locations are initially defined in degrees from the center of the screen, ranging from -40º to +40º along both axes. When we translate those to azimuth and elevation, they are expressed relative to the center of the mouse’s binocular visual field (0º azimuth = directly in front of the mouse, with positive values to the right). The center of the screen sits at roughly +50º azimuth and +10º elevation, hence the +10º to +90º azimuth and -30º to +50º elevation ranges.
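In code, that conversion is just an offset. A hedged sketch (the +50º/+10º screen-center offsets are inferred from the ranges above, not taken from SDK internals):

```python
# Screen center in mouse-centered coordinates, inferred from the ranges:
# azimuth (10 + 90) / 2 = 50 deg, elevation (-30 + 50) / 2 = 10 deg
SCREEN_CENTER_AZIMUTH = 50.0
SCREEN_CENTER_ELEVATION = 10.0

def gabor_to_retinotopic(x_deg, y_deg):
    """Map a screen-centered Gabor position (deg) to azimuth/elevation."""
    return x_deg + SCREEN_CENTER_AZIMUTH, y_deg + SCREEN_CENTER_ELEVATION

print(gabor_to_retinotopic(-40.0, -40.0))  # (10.0, -30.0)
print(gabor_to_retinotopic(40.0, 40.0))    # (90.0, 50.0)
```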

Once you know the angular location of the stimuli, I believe you can just use the visual_degrees_to_pixels method to translate them into pixel coordinates.
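For example, continuing the sketch above (a hypothetical helper; the screen-center offsets are the ones inferred earlier, and the pixel conversion is only approximate away from the center because of the warp):

```python
from allensdk.brain_observatory.stimulus_info import BrainObservatoryMonitor

monitor = BrainObservatoryMonitor()

def retinotopic_to_pixel_offset(azimuth_deg, elevation_deg):
    """Hypothetical helper: RF center (deg) -> pixel offset from screen center."""
    x_deg = azimuth_deg - 50.0    # screen center at ~ +50 deg azimuth
    y_deg = elevation_deg - 10.0  # screen center at ~ +10 deg elevation
    return (monitor.visual_degrees_to_pixels(x_deg),
            monitor.visual_degrees_to_pixels(y_deg))

print(retinotopic_to_pixel_offset(50.0, 10.0))  # (0.0, 0.0) at the screen center
```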

Many thanks, Josh and Saskia! It’s all clear and working for me now.