Follow-up on wrong rotation angles when using the image-to-reference API

Dear Allen Brain Atlas team,

We are trying to compute the 10um resolution expression grid for the E13.5 developing mouse. However, we observed what appears to be a wrong rotation angle problem. My student and I have spent almost a month on this and would really appreciate any suggestions or comments.

Our method overview:
Step 1. Download the ISH images of all SectionDataSets at E13.5.
Step 2. Use the image-to-reference API to synchronize all images to the reference space.
Step 3. Use the reference space to generate the 10um resolution data.
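For concreteness, here is a minimal sketch of steps 1-2 as we understand them, using the documented RMA and image-to-reference endpoints; the query criteria, response handling, and IDs are simplified placeholders:

```python
# Minimal sketch of steps 1-2, assuming the documented Allen Brain Atlas
# RMA and image-to-reference endpoints; query criteria and error handling
# are simplified, and the IDs below are placeholders.
import requests

API = "http://api.brain-map.org/api/v2"

def section_images(section_data_set_id):
    """List the SectionImage records belonging to one SectionDataSet."""
    url = (f"{API}/data/query.json?criteria=model::SectionImage,"
           f"rma::criteria,[data_set_id$eq{section_data_set_id}]")
    resp = requests.get(url)
    resp.raise_for_status()
    return resp.json()["msg"]

def image_to_reference(section_image_id, x, y):
    """Map one (x, y) pixel of a section image into reference space.

    Returns the raw "msg" payload; in our runs the reference-space
    coordinates appear under msg["image_to_reference"], but please check
    the response layout yourself.
    """
    url = f"{API}/image_to_reference/{section_image_id}.json?x={x}&y={y}"
    resp = requests.get(url)
    resp.raise_for_status()
    return resp.json()["msg"]

# Example (SECTION_DATA_SET_ID is a placeholder for one E13.5 data set):
# for img in section_images(SECTION_DATA_SET_ID):
#     print(image_to_reference(img["id"], x=2000, y=2000))
```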

However, we found that ISH images mapped to the reference space show very different rotations.

Below we show an example of the original images (i.e., before synchronizing to the reference space), the images after synchronization (we extract multiple slices along the z-axis of the 3D reference space and aggregate them using np.sum), and our reimplementation that uses the reverse-angle "tvs" matrix instead of "tsv". We can see that the images after synchronization have very different angles.
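For clarity, this is roughly how we produce the "after synchronization" panels; the array name and axis order are our own convention:

```python
import numpy as np

def z_band_projection(volume, z_start, z_stop):
    """Sum a band of z-slices of the resampled 3D reference-space grid.

    volume: 3D numpy array in reference space (axis order is our own
    convention). The returned 2D array is what we display as the
    "after synchronization" image.
    """
    return np.sum(volume[:, :, z_start:z_stop], axis=2)
```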

We can see that there is an angle between the downloaded section 48 and section 41 images in the original data. The angle between sections 41 and 48 becomes even larger after synchronization! This means the two sections are not aligned; the mouse heads point in different directions.

Since the angle grows even larger after alignment, we guessed that we could reduce it by reversing the rotation matrix.
We therefore reimplemented the synchronization by first obtaining the alignment2d matrix and then applying it with an np.dot matrix multiplication. To implement the reverse angle, we use the "tvs" affine matrix instead of "tsv" (because their rotations are inverses of each other); "tvs" can be found at the same link as "tsv".
Notably, when we use "tvs", the resulting (rx, ry, rz) differs from what the synchronization API (the image-to-reference API) returns for the same (sx, sy).
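Concretely, this is roughly how we fill and apply the 2D affine matrices. The assumption that *_00..*_03 form the 2x2 linear part and *_04, *_05 the translation is our reading of the Alignment2d fields, so please correct us if the convention differs:

```python
import numpy as np

def affine_2x3(params, prefix):
    """Build a 2x3 affine matrix from six Alignment2d parameters.

    Assumes prefix_00..prefix_03 are the 2x2 linear part (row major) and
    prefix_04, prefix_05 are the translation; this ordering is our guess.
    """
    return np.array([
        [params[f"{prefix}_00"], params[f"{prefix}_01"], params[f"{prefix}_04"]],
        [params[f"{prefix}_02"], params[f"{prefix}_03"], params[f"{prefix}_05"]],
    ])

def apply_affine(matrix, x, y):
    """Apply the 2x3 affine to one pixel coordinate via np.dot."""
    return np.dot(matrix, np.array([x, y, 1.0]))

# alignment2d: dict of Alignment2d fields for one SectionImage.
# tsv = affine_2x3(alignment2d, "tsv")   # section image -> volume (our reading)
# tvs = affine_2x3(alignment2d, "tvs")   # volume -> section image (our reading)
# vx, vy = apply_affine(tsv, sx, sy)
```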

Therefore, we suspect the correct rotation should come from "tvs", not "tsv", and that the image-to-reference API might be wrong. Could someone help us? Much appreciated!

Thank you,
Sheng Wang
Assistant professor of computer science
University of Washington

Could anyone help us with this? Thank you!

Thank you for your patience, Sheng Wang. We are looking to get you an accurate answer to your question, and hope to do so soon.

Best,
Tyler

Thank you very much! Feel free to email me at swang91@uw.edu.

I am happy to provide more code/data on my side to reproduce these figures.

Dear Allen Brain team,

Could you please help us with this? One possible solution would be to share the script that reproduces the 200um 3-D expression grid data. We could then follow that process to generate the higher-resolution 3D grids (e.g., 100um, 50um) we need.

Thank you,
Sheng

Hi @shengwanguw,

Thank you very much for your patience. The developers who initially implemented this back in ~2010 are no longer at the institute, so it has taken a while to get familiar with the project and its conventions. Would you be able to share some snippets of code showing how you are filling in the affine transform matrices from the "tvs" and "tsv" values, as well as how you are applying the transform matrices?

Best,

Nick


Hi Nick,
Thanks for your reply. Is there an email address I can use to send you my scripts and some figures? I am not able to upload multiple files here.
Thanks,
Sheng

Hi @shengwanguw,

Usually it’s preferred to post code in the forum directly so that other researchers/users can benefit, but if you feel strongly about keeping code closed source, I’ve sent an e-mail to swang91@uw.edu that you can reply to.

Best,

Nick