What is the Allen Mouse Common Coordinate Framework?
The Allen Mouse Brain Common Coordinate Framework (CCFv3) is a 3D reference space created by averaging serial two-photon tomography images of 1,675 young adult C57BL/6J mouse brains into a single template at 10 um voxel resolution.
Using multimodal reference data, we parcellated the entire brain directly in 3D, labeling every voxel with a brain structure spanning 43 isocortical areas and their layers, 314 subcortical gray matter structures, 82 fiber tracts, and 8 ventricular structures.
Read this documentation to learn more about the creation of the CCF.
How do I view the Allen Mouse Common Coordinate Framework (CCF)?
How do I download the CCF?
There are two ways to download the average template volume and the annotation volume (both available at 10, 25, 50, and 100 um isotropic resolution):
- Using the AllenSDK
We recommend downloading atlas volumes with the AllenSDK, a Python package containing tools for accessing and using our data. The AllenSDK lets you easily download and organize the atlas and template volumes. Please see the example notebook for more information.
If you run into problems using the AllenSDK, let us know on the AllenSDK's GitHub page.
- Direct download from the server
We also make these volumes available through our download server. Read the overview and download instructions for more information.
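The AllenSDK route above can be sketched as follows. This is a minimal, hedged example: the `ReferenceSpaceCache` class and the `annotation/ccf_2017` reference-space key follow the AllenSDK documentation, but check the SDK docs for the current API, and note that the first call triggers a large download. The function name and manifest path are our own choices for illustration.

```python
VALID_RESOLUTIONS = (10, 25, 50, 100)  # um, matching the volumes listed above

def fetch_ccf_volumes(resolution_um=25, manifest_path="ccf_cache/manifest.json"):
    """Download (and locally cache) the CCF average template and annotation
    volumes at the given isotropic resolution.

    The AllenSDK import is kept inside the function so this sketch can be
    read (and the validation tested) without AllenSDK installed.
    """
    if resolution_um not in VALID_RESOLUTIONS:
        raise ValueError(f"resolution must be one of {VALID_RESOLUTIONS}")

    from allensdk.core.reference_space_cache import ReferenceSpaceCache

    cache = ReferenceSpaceCache(
        resolution=resolution_um,
        reference_space_key="annotation/ccf_2017",  # CCFv3 annotation release
        manifest=manifest_path,
    )
    template, _ = cache.get_template_volume()      # average template (ndarray)
    annotation, _ = cache.get_annotation_volume()  # per-voxel structure IDs
    return template, annotation
```

The cache writes downloaded volumes under the manifest directory, so repeated calls reuse the local copies rather than re-downloading.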
How do I download the labelled atlas images shown in the atlas image viewer?
An individual image can be downloaded directly from the atlas viewer via the menu on the top-right corner of the window.
See this post to learn how to access the images in bulk using the API or SDK.
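For the API route, single images are served by the `atlas_image_download` endpoint; a bulk workflow amounts to querying for the section-image IDs of an atlas and then building one download URL per image. A minimal sketch of the URL construction (the section-image ID below is a placeholder, not a real atlas image; see the linked post for how to obtain IDs):

```python
API_ROOT = "http://api.brain-map.org/api/v2"

def atlas_image_url(section_image_id, downsample=2, annotation=True):
    """Build a download URL for one atlas image, optionally with the
    structure-annotation overlay drawn on top."""
    params = f"downsample={downsample}&annotation={str(annotation).lower()}"
    return f"{API_ROOT}/atlas_image_download/{section_image_id}?{params}"

# Placeholder ID for illustration only.
print(atlas_image_url(100883869))
# -> http://api.brain-map.org/api/v2/atlas_image_download/100883869?downsample=2&annotation=true
```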
How do I download the structure ontology/tree?
Read this overview page for more information.
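The ontology is distributed as a flat list of structure records, each carrying an `id` and a `parent_structure_id`; the tree is recovered by grouping records on their parent ID. A minimal sketch with three illustrative records (these mimic the real format, but treat the specific IDs and acronyms here as assumptions and use the downloaded ontology for real work):

```python
from collections import defaultdict

# Illustrative subset of the structure ontology's flat record format.
structures = [
    {"id": 997,  "acronym": "root",         "parent_structure_id": None},
    {"id": 8,    "acronym": "grey",         "parent_structure_id": 997},
    {"id": 1009, "acronym": "fiber tracts", "parent_structure_id": 997},
]

# parent ID -> list of child IDs
children = defaultdict(list)
for s in structures:
    children[s["parent_structure_id"]].append(s["id"])

def descendants(structure_id):
    """All structure IDs below the given node, depth-first."""
    out = []
    for child in children[structure_id]:
        out.append(child)
        out.extend(descendants(child))
    return out

print(descendants(997))  # -> [8, 1009]
```

The same parent-pointer walk is how one aggregates voxel-level annotations up to coarser structures (e.g., all isocortical layers into their parent area).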
How do I register my images to the CCF?
Image registration tools for aligning experimental data to the CCF are an active research area. Different modalities will likely require targeted methods to solve the problem.
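Whatever the modality, registration pipelines typically begin with a global affine alignment before any deformable refinement (with tools such as ANTs or elastix). A minimal sketch of that first step, mapping experimental-image voxel coordinates into CCF voxel coordinates; the affine parameters here are made up for illustration, since in practice they are estimated from the data:

```python
import numpy as np

# Illustrative 3D affine: anisotropic scaling plus a translation taking
# experimental voxel indices into CCF voxel coordinates. Values are
# placeholders, not the output of any real registration.
A = np.diag([0.8, 0.8, 1.2])        # scaling per axis
t = np.array([12.0, -5.0, 30.0])    # translation, in CCF voxels

def to_ccf(points):
    """Map an (N, 3) array of experimental voxel coords into CCF space."""
    return points @ A.T + t

pts = np.array([[0.0, 0.0, 0.0],
                [10.0, 10.0, 10.0]])
print(to_ccf(pts))
# -> [[12. -5. 30.]
#     [20.  3. 42.]]
```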
How do I access the registration code used in your pipeline?
The code is currently under redevelopment; in its current form it is tightly coupled to the data modality, format, and operations of our pipeline. We can make the code available to collaborators on request, so that we can help them assess its suitability and how to adapt it to their needs.
How do I visualize CCF annotations on the original images?
Our typical workflow for analysis and display is to align image data from different specimens into the CCF space for integrated analyses. Using the computed deformation field, it is possible to visualize CCF annotations of interest on the original images available through the AllenSDK. This GitHub repository has code and an example notebook demonstrating the process step by step.
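The core lookup behind this process can be sketched in a few lines: if a deformation field stores, for each pixel of the original image, its corresponding position in the annotation volume, then labels are pulled back onto the original image by rounding to the nearest annotated voxel. The arrays below are tiny synthetic stand-ins, not real CCF data; see the repository's notebook for the actual workflow.

```python
import numpy as np

# Stand-in 2D annotation section: per-pixel structure IDs (illustrative).
annotation = np.array([[0,   0,   315],
                       [0,   315, 315],
                       [997, 997, 997]])

# deformation[i, j] = (row, col) in `annotation` for original pixel (i, j).
deformation = np.zeros((2, 2, 2))
deformation[..., 0] = [[0.1, 0.4], [1.2, 1.8]]   # target rows
deformation[..., 1] = [[1.9, 2.2], [0.3, 2.4]]   # target cols

# Nearest-neighbor lookup, clipped to the annotation bounds.
rows = np.clip(np.rint(deformation[..., 0]).astype(int), 0, annotation.shape[0] - 1)
cols = np.clip(np.rint(deformation[..., 1]).astype(int), 0, annotation.shape[1] - 1)
labels_on_original = annotation[rows, cols]
print(labels_on_original)
# -> [[315 315]
#     [  0 997]]
```

Nearest-neighbor lookup (rather than interpolation) is the right choice here because structure IDs are categorical labels: averaging two IDs would produce a meaningless third ID.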
How is the CCF being used by the neuroscience community?
We maintain a list of publications and tools that use the Allen Mouse CCF in this post. We invite others to contribute to the list.