How does the Allen Institute create meshes for viewing from the volumetric data? And how do you annotate the structures?
I can speak to the first question - at least as far as the mouse CCF meshes that you see in the Allen Brain Explorer webapp. Briefly, we started with the 2017 CCF annotations and applied marching cubes to polygonalize each structure’s surface (working from an indicator mask for that specific structure).
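To make the indicator-mask step concrete, here is a minimal sketch of that idea using `skimage.measure.marching_cubes`. The annotation volume and structure ID below are fabricated for illustration; this is not the Allen pipeline itself, just the same basic operation.

```python
import numpy as np
from skimage import measure

# Hypothetical annotation volume: one integer structure ID per voxel.
# Here we fabricate a small volume containing a spherical "structure" (ID 42).
annotation = np.zeros((64, 64, 64), dtype=np.uint32)
zz, yy, xx = np.mgrid[:64, :64, :64]
annotation[(zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2] = 42

structure_id = 42
# Indicator mask for this one structure, as described above.
mask = (annotation == structure_id).astype(np.float32)

# Marching cubes at the 0.5 iso-level polygonalizes the mask's surface.
# `spacing` would carry the real voxel size in physical units.
verts, faces, normals, _ = measure.marching_cubes(mask, level=0.5,
                                                  spacing=(1.0, 1.0, 1.0))
print(verts.shape, faces.shape)
```

Running this per structure ID gives one surface mesh per structure, which is then ready for postprocessing.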
Since these meshes were meant for visualization in a web application, we applied a few postprocessing steps to the outputs of marching cubes. These include decimation (reducing the size of the meshes allows them to be served more quickly), smoothing, and slight shrinkage of fully contained structures (otherwise fully contained structures clip through their containing structures).
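Two of those postprocessing steps (smoothing and shrinkage) can be sketched in plain NumPy. These functions are stand-ins for whatever tools were actually used; decimation is omitted here because it is usually done with quadric-error tools (e.g. in VTK or MeshLab) rather than by hand.

```python
import numpy as np

def laplacian_smooth(verts, faces, iterations=10, lam=0.5):
    """Simple Laplacian smoothing: move each vertex a fraction `lam`
    toward the mean of its neighbors, repeated `iterations` times."""
    n = len(verts)
    # Build neighbor sets from triangle edges.
    neighbors = [set() for _ in range(n)]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    neighbors = [np.fromiter(s, dtype=np.int64) for s in neighbors]
    v = verts.astype(np.float64).copy()
    for _ in range(iterations):
        means = np.array([v[nb].mean(axis=0) if len(nb) else v[i]
                          for i, nb in enumerate(neighbors)])
        v += lam * (means - v)
    return v

def shrink(verts, factor=0.99):
    """Shrink a fully contained structure slightly toward its centroid
    so it does not clip through the mesh of its containing structure."""
    c = verts.mean(axis=0)
    return c + factor * (verts - c)

# Toy mesh: a tetrahedron.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
smoothed = laplacian_smooth(verts, faces, iterations=5)
shrunk = shrink(smoothed, factor=0.99)
```

The 0.99 shrink factor is an arbitrary example value; in practice the amount of shrinkage would be tuned to the rendering setup.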
Thanks for your response. I have a few follow-up questions.
When you say you started with the Allen Brain annotations, did you draw these annotations on all slices of the 3D histological/MRI volume, or only on regularly spaced ones?
Right now I'm annotating only regularly spaced slices (drawing in Illustrator, then extracting the contours and overlaying them on the histological image). I then interpolate to propagate the labels to the slices in between. However, this doesn't preserve the structure boundaries well, and there is some bleeding across boundaries, so I have to do a fair bit of manual editing in ITK-SNAP. How do you fix this problem? The MRI volume has over 784 slices in the sagittal plane, and that's a lot.
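For what it's worth, one common way to reduce that bleeding is shape-based interpolation: instead of interpolating the label images directly, blend signed distance maps of the annotated slices and re-threshold. This is a generic sketch (the slice geometry below is made up), not a claim about what either workflow actually uses:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: positive inside the structure, negative outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return inside - outside

def interpolate_slices(mask_a, mask_b, t):
    """Shape-based interpolation between two annotated slices:
    linearly blend the signed distance maps, then threshold at zero.
    This tends to respect structure boundaries better than
    interpolating the label images themselves."""
    sd = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return sd > 0

# Toy example: a disc whose position shifts between two annotated slices.
yy, xx = np.mgrid[:64, :64]
slice_a = (yy - 25) ** 2 + (xx - 25) ** 2 < 10 ** 2
slice_b = (yy - 35) ** 2 + (xx - 35) ** 2 < 10 ** 2
mid = interpolate_slices(slice_a, slice_b, 0.5)
```

Note that this approach works per structure (one binary mask at a time), and it assumes the structure overlaps between consecutive annotated slices; widely separated shapes can still interpolate poorly.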
After these steps, I load the segmentation mask in ITK-SNAP and export a mesh to get the structure mesh. Could you suggest some ways of improving this workflow, or some finer steps I'm missing that would make the process work much better?
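One scriptable alternative to the ITK-SNAP export step is to mesh each label directly in Python, lightly smoothing the binary mask before marching cubes to reduce the voxel "staircase" look. The label IDs, sigma, and OBJ writer below are illustrative assumptions, not a recommendation of specific parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

def label_to_mesh(labels, label_id, sigma=1.0, spacing=(1.0, 1.0, 1.0)):
    """Mesh one structure from a labeled volume. Smoothing the indicator
    mask slightly before marching cubes gives a less blocky surface than
    meshing the raw binary mask."""
    mask = (labels == label_id).astype(np.float32)
    smooth = gaussian_filter(mask, sigma=sigma)
    verts, faces, _, _ = measure.marching_cubes(smooth, level=0.5,
                                                spacing=spacing)
    return verts, faces

def save_obj(path, verts, faces):
    """Write a minimal Wavefront OBJ file (OBJ faces are 1-indexed)."""
    with open(path, "w") as f:
        for v in verts:
            f.write(f"v {v[0]} {v[1]} {v[2]}\n")
        for a, b, c in faces:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")

# Toy labeled volume with one spherical structure (ID 7).
labels = np.zeros((48, 48, 48), dtype=np.uint16)
zz, yy, xx = np.mgrid[:48, :48, :48]
labels[(zz - 24) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2 < 12 ** 2] = 7
verts, faces = label_to_mesh(labels, 7, sigma=1.5)
```

Looping `label_to_mesh` over all structure IDs batches the whole export, which matters when a volume has hundreds of slices and many structures.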
Would you mind providing a few more details so we can be sure we are hitting the mark in answering your question? When you ask about volumetric data, what specific data are you referring to?
Regarding structure annotation, I assume you are asking about our reference atlases. Is that correct?
On this page, you should be able to find documentation on the process of annotating each of the atlases. Historically, all of our atlases have been drawn as 2D plates. As you mentioned, this causes a lot of issues when you try to derive 3D objects from them: a combination of discontinuities and interpolation problems that are hard to overcome retrospectively. We did have to do a lot of manual annotation, just as you described, in the cases where we tried this.
This is why we changed to a fully 3D annotation workflow when the time came to do an updated atlas for the mouse brain. Here is a link to the recent publication.