API: Allen Brain Connectivity

ALLEN BRAIN ATLAS API

The primary data of the Allen Mouse Brain Connectivity Atlas consists of high-resolution images of axonal projections targeting different anatomic regions or various cell types using Cre-dependent specimens. Each data set is processed through an informatics data analysis pipeline to obtain spatially mapped quantified projection information.

From the API, you can:
  • Download Images
  • Download quantified projection values by structure
  • Download quantified projection values as 3-D grids
  • Query the source, target, spatial and correlative search services
  • Query the image synchronization service
  • Download atlas images, drawings and structure ontology

This document provides a brief overview of the data, database organization and example queries. API database object names are in camel case. See the main API documentation for more information on data models and query syntax.

Experimental Overview and Metadata

Experimental data from the Atlas is associated with the “Mouse Connectivity Projection” Product.

Each Specimen is injected with a viral tracer that labels axons by expressing a fluorescent protein. For each experiment, the injection site is analyzed and assigned a primary injection structure and, if applicable, a list of secondary injection structures.

Labeled axons are visualized using serial two-photon tomography. A typical SectionDataSet consists of 140 coronal images at 100 µm sampling density. Each image has 0.35 µm pixel resolution and raw data is in 16-bit per channel format. Background fluorescence in the red channel illustrates basic anatomy and structures of the brain, and the injection site and projections are shown in the green channel. No data was collected in the blue channel.

From the API, detailed information about SectionDataSets, SectionImages, Injections and TransgenicLines can be obtained using RMA queries.
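
For illustration, here is a minimal Python sketch (assuming the requests package) of such an RMA query for one connectivity SectionDataSet (id=126862385, the VISp experiment shown in the figure below). The include list follows the pattern used by the Allen SDK for connectivity data sets; adjust it to the associations you need.

import requests

url = "http://api.brain-map.org/api/v2/data/SectionDataSet/query.json"
params = {
    "criteria": "[id$eq126862385]",
    # one plausible include list for connectivity data sets; adjust as needed
    "include": "specimen(stereotaxic_injections(primary_injection_structure,structures))",
}
data_sets = requests.get(url, params=params).json()["msg"]
print(len(data_sets), "record(s) returned")
print(sorted(data_sets[0].keys()))   # inspect the available fields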


Figure: Projection dataset (id=126862385) with injection in the primary visual area (VISp) as visualized in the web application image viewer.

To provide a uniform look across all experiments, default window and level values were computed from intensity histograms. For each experiment, the upper threshold defaults to 2.33 x the 95th percentile value for the red channel and 6.33 x the 95th percentile value for the green channel. These default thresholds can be used to download images and/or image regions in 8-bit per channel format.
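
As a hedged sketch of that last point: the image download service accepts a downsample factor and a per-channel range (red min,max, green min,max, blue min,max in raw 16-bit units), which is one way to apply such thresholds when downloading an 8-bit image. The SectionImage id and threshold values below are placeholders.

import requests

section_image_id = 126862575            # placeholder SectionImage id
red_upper, green_upper = 2000, 6000     # placeholders, e.g. 2.33x / 6.33x the 95th percentiles

url = "http://api.brain-map.org/api/v2/section_image_download/%d" % section_image_id
params = {"downsample": 4, "range": "0,%d,0,%d,0,65535" % (red_upper, green_upper)}

with open("section_image.jpg", "wb") as f:
    f.write(requests.get(url, params=params).content)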

In the web application, images from the experiment are visualized in an experimental detail page. All displayed information, images and structural projection values are also available through the API.

Informatics Data Processing

The informatics data processing pipeline produces results that enable navigation, analysis and visualization of the data. The pipeline consists of the following components:

  • an annotated 3-D reference space,
  • an alignment module,
  • a projection detection module,
  • a projection gridding module, and
  • a structure unionizer module.

The output of the pipeline is quantified projection values at a grid voxel level and at a structure level according to the integrated reference atlas ontology. The grid level data are used downstream to provide a correlative search service and to support visualization of spatial relationships. See the informatics processing white paper for more details.

3-D Reference Models

The cornerstone of the automated pipeline is an annotated 3-D reference space. For this purpose, a next generation of the common coordinate framework (CCF v3) is being created based on a population average of 1675 specimens. See the Allen Mouse Common Coordinate Framework white paper for detailed construction information. In the current release, the framework consists of 207 newly drawn structures spanning approximately half the brain. To support whole-brain quantification, structures that have not yet been drawn are extracted and merged from the version 2 framework based on the Allen Reference Atlas. The interfaces between old and new structures were manually inspected and filled to create smooth transitions, yielding a complete brain map (~700 structures) for quantification.


Figure: The next generation Allen Mouse Common Coordinate Framework is based on a shape and intensity average of 1675 specimens from the Allen Mouse Brain Connectivity Atlas. At the time of the October 2016 release, 207 structures had been delineated on the anatomical template.

Structures in the common coordinate framework are arranged in a hierarchical organization. Each structure has one parent, and the parent-child link denotes a "part-of" relationship. Structures are assigned a color to visually emphasize their hierarchical positions in the brain.
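
The ontology itself can be retrieved through the API. Below is a short Python sketch (assuming the requests package) that downloads the adult mouse structure graph (graph_id=1) and builds a child-to-parent lookup; the color_hex_triplet field carries the display color mentioned above.

import requests

url = "http://api.brain-map.org/api/v2/data/Structure/query.json"
params = {"criteria": "[graph_id$eq1]", "num_rows": "all"}
structures = requests.get(url, params=params).json()["msg"]

by_id = {s["id"]: s for s in structures}
parent_of = {s["id"]: s["parent_structure_id"] for s in structures}
print(len(structures), "structures;", by_id[385]["name"], by_id[385]["color_hex_triplet"])   # 385 = VISp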

All SectionDataSets are registered to ReferenceSpace id = 9 in PIR orientation (+x = posterior, +y = inferior, +z = right).
Figure: The common reference space is in PIR orientation where x axis = Anterior-to-Posterior, y axis = Superior-to-Inferior and z axis = Left-to-Right.
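
Because the reference space is defined in microns in PIR order, converting between CCF coordinates and voxel indices of a gridded volume is a simple division or multiplication by the voxel size. A tiny Python sketch:

def ccf_to_voxel(xyz_um, resolution_um=25):
    # (AP, SI, LR) position in microns -> voxel index in a 25 um volume
    return tuple(int(round(c / resolution_um)) for c in xyz_um)

def voxel_to_ccf(ijk, resolution_um=25):
    # voxel index -> (AP, SI, LR) position in microns
    return tuple(i * resolution_um for i in ijk)

print(ccf_to_voxel((6900, 5050, 6450)))   # the seed point used in the spatial search example below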

3-D annotation volumes were updated in the October 2017 release to include newly drawn structures in the Allen Mouse Common Coordinate Framework (CCFv3).

Volumetric data files available for download:

  • average_template (USHORT, 16-bit): anatomical template of CCFv3, a shape and intensity average of 1675 specimen brains
  • ara_nissl (FLOAT, 32-bit): reconstructed Allen Reference Atlas Nissl volume deformably registered to the anatomical template of CCFv3
  • annotation/ccf_2017 (UINT, 32-bit): structure gray matter and fiber tract annotation of CCFv3 (October 2017)
  • annotation/ccf_2016 (UINT, 32-bit): structure gray matter and fiber tract annotation of CCFv3 (October 2016)
  • annotation/ccf_2015 (UINT, 32-bit): structure gray matter and fiber tract annotation of CCFv3 (May 2015)
  • annotation/mouse_2011 (UINT, 32-bit): structure annotation extracted from the coronal Allen Reference Atlas and deformably registered to CCFv3
  • annotation/devmouse_2012 (UINT, 32-bit): structure annotation extracted from the P56 Allen Developing Mouse Brain Reference Atlas and deformably registered to CCFv3

Each data type is available in multiple voxel resolutions:

Voxel Resolution     Volume Dimensions (AP, SI, LR)
10 µm isotropic      1320, 800, 1140
25 µm isotropic      528, 320, 456
50 µm isotropic      264, 160, 228
100 µm isotropic     132, 80, 114

All volumetric data is in compressed NRRD (Nearly Raw Raster Data) format. The raw numerical data is stored as a 1-D raster array.

Example Matlab code snippet to read in the 25µm template and annotation volumes:

% -------------------------------
%
% Download a NRRD reader
% For example:
% http://www.mathworks.com/matlabcentral/fileexchange/50830-nrrd-format-file-reader
%
% Requires: MATLAB 7.13 (R2011b)
%
% Download average_template_25.nrrd,
%          ara_nissl_25.nrrd,
%          ccf_2015/annotation_25.nrrd
%
% ---------------------------------
%
% Read image volume with NRRD reader
% Note that the reader swaps the order of the first two axes
%
% AVGT  = 3-D matrix of average_template
% NISSL = 3-D matrix of ara_nissl
% ANO   = 3-D matrix of ccf_2015/annotation
%
[AVGT, metaAVGT] = nrrdread('average_template_25.nrrd');
[NISSL, metaNISSL] = nrrdread('ara_nissl_25.nrrd');
[ANO, metaANO] = nrrdread('annotation_25.nrrd');

% Display one coronal section
figure; imagesc(squeeze(AVGT(:,264,:))); colormap(gray(256)); axis equal;
figure; imagesc(squeeze(NISSL(:,264,:))); colormap(gray(256)); axis equal;
figure; imagesc(squeeze(ANO(:,264,:)));
caxis([1,2000]); colormap(lines(256)); axis equal;

% Display one sagittal section
figure; imagesc(squeeze(AVGT(:,:,220))); colormap(gray(256)); axis equal;
figure; imagesc(squeeze(NISSL(:,:,220))); colormap(gray(256)); axis equal;
figure; imagesc(squeeze(ANO(:,:,220)));
caxis([1,2000]); colormap(lines(256)); axis equal;

Example Python code snippet to read in the 25µm template and annotation volumes:

# -------------------------------
#
# Install pynrrd: https://github.com/mhe/pynrrd
#
# Download average_template_25.nrrd,
#          ara_nissl_25.nrrd,
#          ccf_2015/annotation_25.nrrd
#
# ---------------------------------
import nrrd
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

#
# Read image volume with NRRD reader
# Note that the reader swaps the order of the first two axes
#
# AVGT  = 3-D matrix of average_template
# NISSL = 3-D matrix of ara_nissl
# ANO   = 3-D matrix of ccf_2015/annotation
#
AVGT, metaAVGT = nrrd.read('average_template_25.nrrd')
NISSL, metaNISSL = nrrd.read('ara_nissl_25.nrrd')
ANO, metaANO = nrrd.read('annotation_25.nrrd')

# Save one coronal section as PNG (assumes an 'output' directory exists)
im_slice = AVGT[264, :, :].astype(float)
im_slice /= np.max(im_slice)
im = Image.fromarray(np.uint8(plt.cm.gray(im_slice) * 255))
im.save('output/avgt_coronal.png')

im_slice = NISSL[264, :, :].astype(float)
im_slice /= np.max(im_slice)
im = Image.fromarray(np.uint8(plt.cm.gray(im_slice) * 255))
im.save('output/nissl_coronal.png')

im_slice = ANO[264, :, :].astype(float)
im_slice /= 2000
im = Image.fromarray(np.uint8(plt.cm.jet(im_slice) * 255))
im.save('output/ano_coronal.png')

# Save one sagittal section as PNG
im_slice = AVGT[:, :, 220].astype(float)
im_slice /= np.max(im_slice)
im = Image.fromarray(np.uint8(plt.cm.gray(im_slice) * 255))
im.save('output/avgt_sagittal.png')

im_slice = NISSL[:, :, 220].astype(float)
im_slice /= np.max(im_slice)
im = Image.fromarray(np.uint8(plt.cm.gray(im_slice) * 255))
im.save('output/nissl_sagittal.png')

im_slice = ANO[:, :, 220].astype(float)
im_slice /= 2000
im = Image.fromarray(np.uint8(plt.cm.jet(im_slice) * 255))
im.save('output/ano_sagittal.png')

Image Alignment

The aim of image alignment is to establish a mapping from each SectionImage to the 3-D reference space. The module reconstructs a 3-D Specimen volume from its constituent SectionImages and registers the volume to the 3-D reference model by maximizing mutual information between the red channel of the experimental data and the average template.

Once registration is achieved, information from the 3-D reference model can be transferred to the reconstructed Specimen and vice versa. The resulting transform information is stored in the database. Each SectionImage has an Alignment2d object that represents the 2-D affine transform between an image pixel position and a location in the Specimen volume. Each SectionDataSet has an Alignment3d object that represents the 3-D affine transform between a location in the Specimen volume and a point in the 3-D reference model. Spatial correspondence between any two SectionDataSets from different Specimens can be established by composing these transforms.

For convenience, a set of “Image Sync” API methods is available to find corresponding positions between SectionDataSets, the 3-D reference model and structures. Note that all locations on SectionImages are reported in pixel coordinates and all locations in 3-D ReferenceSpaces are reported in microns. These methods are used by the Web application to provide the image synchronization feature in the multiple image viewer (see Figure).
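
For illustration, a short Python sketch (assuming the requests package) of two of these sync services: image_to_reference maps a pixel position on a SectionImage to microns in the 3-D reference space, and image_to_image maps it to the closest position in another SectionDataSet. The SectionImage id and pixel coordinates below are placeholders.

import requests

host = "http://api.brain-map.org/api/v2"
image_id, x, y = 126862575, 7000, 4000     # placeholder SectionImage id and pixel position

ref = requests.get("%s/image_to_reference/%d.json" % (host, image_id),
                   params={"x": x, "y": y}).json()["msg"]
print(ref)    # location in the 3-D reference space, in microns

sync = requests.get("%s/image_to_image/%d.json" % (host, image_id),
                    params={"x": x, "y": y, "section_data_set_ids": 126862385}).json()["msg"]
print(sync)   # closest SectionImage and pixel position in the target data set
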
Figure: Point-based image synchronization. Multiple image-series in the Zoom-and-Pan (Zap) viewer can be synchronized to the same approximate location. Before and after synchronization screenshots show projection data with injection in the superior colliculus (SCs), primary visual area (VISp), anterolateral visual area (VISal), and the relevant coronal plates of the Allen Reference Atlas. All experiments show strong signal in the thalamus.

Projection Data Segmentation

For every Projection image, a grayscale mask is generated that identifies pixels corresponding to labeled axon trajectories. The segmentation algorithm is based on image edge/line detection and morphological filtering.

The segmentation mask image is the same size and pixel resolution as the primary projection image and can be downloaded through the image download service.


Figure: Signal detection for projection data with injection in the primary motor area. Screenshot of a segmentation mask showing detected signal in the ventral posterolateral nucleus of the thalamus (VPL), internal capsule (int), caudoputamen (CP) and supplemental somatosensory area (SSs). In the Web application, the mask is color-coded for display: green indicates a pixel is part of an edge-like object while yellow indicates pixels that are part of a more diffuse region.

Reference-aligned Image Channel Volumes

The red, green, and blue channels have been aligned to the 25 µm adult mouse brain reference space volume. These volumes are stored in the API WellKnownFile table with type name "ImagesResampledTo25MicronARA". To retrieve the download link for a specific data set, query for WellKnownFiles of the appropriate type with an "attachable_id" equal to the data set id:
http://api.brain-map.org/api/v2/data/WellKnownFile/query.xml?criteria=well_known_file_type[name$eq'ImagesResampledTo25MicronARA'][attachable_id$eq156198187]

Download the file by appending the value of the download-link field to the API host name (e.g. http://api.brain-map.org/api/v2/well_known_file_download/269830017). The download is a .zip file containing three images stored in the raw MetaImage format:

  • resampled_red.mhd/raw: red background fluorescence
  • resampled_green.mhd/raw: rAAV signal
  • resampled_blue.mhd/raw: blue background fluorescence

All volumes have the same dimensions as the 25 µm adult mouse reference space volume.
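
One way (assuming the SimpleITK package) to load these volumes after unzipping the download; MetaImage (.mhd/.raw) files are read natively:

import SimpleITK as sitk

green = sitk.ReadImage("resampled_green.mhd")    # rAAV signal channel
green_arr = sitk.GetArrayFromImage(green)        # numpy array, indexed (z, y, x)
print(green.GetSize(), green_arr.dtype)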

Projection Data Gridding

For each dataset, the gridding module creates a low resolution 3-D summary of the labeled axonal trajectories and resamples the data to the common coordinate space of the 3-D reference model. Casting all data into a canonical space allows for easy cross-comparison between datasets. The projection data grids can also be viewed directly as 3-D volumes or used for analysis (i.e. target, spatial and correlative searches).

Each image in a dataset is divided into a 10 x 10 µm grid. In each division, the sum of the number of detected pixels and the sum of detected pixel intensity are computed. A second set of these same summations is computed over the regions manually identified as belonging to the injection site, for injection site quantification. The resulting 3-D grid is then transformed into the standard reference space using linear interpolation to generate sub-grid values.

From the summations we obtained measures for:

  • projection density = sum of detected pixels / sum of all pixels in division
  • projection energy = sum of detected pixel intensity / sum of all pixels in division
  • injection_fraction = fraction of pixels belonging to manually annotated injection site
  • injection_density = density of detected pixels within the manually annotated injection site
  • injection_energy = energy of detected pixels within the manually annotated injection site
  • data_mask = binary mask indicating if a voxel contains valid data (0=invalid, 1=valid). Only valid voxels should be used for analysis

For each summation type, grid files can be downloaded at 10, 25, 50 and 100 µm isotropic voxel resolution.
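
As a short sketch combining these measures, the data_mask grid can be used to restrict analysis of a projection_density grid to valid voxels (files read with pynrrd as in the snippets below; the file names match the 50 µm example used later in this section):

import nrrd
import numpy as np

pdens, _ = nrrd.read('11_wks_coronal_287495026_50um_projection_density.nrrd')
dmask, _ = nrrd.read('11_wks_coronal_287495026_50um_data_mask.nrrd')

valid = dmask > 0
print('valid voxels:', int(valid.sum()))
print('mean projection density over valid voxels:', float(pdens[valid].mean()))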

3-D grids were updated in the May 2015 release to reflect the remapping to the new Allen Mouse Common Coordinate Framework (CCFv3), higher resolution computation and a new compressed data format. 3-D grids from the October 2014 release (mapped to CCFv2) can be accessed through our data download server (see instructions).

Grid data for each SectionDataSet can be downloaded using the 3-D Grid Data Service. The service returns data in compressed NRRD (Nearly Raw Raster Data) format as 32-bit FLOAT values. To download a particular grid file, specify the SectionDataSet ID, the type of grid and the resolution.

Examples:

  • Download projection_density for a VISal injection SectionDataSet (id=287495026) at 50 ÎĽm resolution

http://api.brain-map.org/grid_data/download_file/287495026?image=projection_density&resolution=50
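
The same file can be fetched programmatically; a minimal Python sketch (assuming the requests and pynrrd packages):

import nrrd
import requests

url = "http://api.brain-map.org/grid_data/download_file/287495026"
r = requests.get(url, params={"image": "projection_density", "resolution": 50})
r.raise_for_status()

fname = "287495026_projection_density_50um.nrrd"
with open(fname, "wb") as f:
    f.write(r.content)

pdens, header = nrrd.read(fname)
print(pdens.shape)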

Example Matlab code snippet to read in the 50 µm projection_density grid volume and average_template:

% -------------------------------
%
% Download a NRRD reader
% For example:
% http://www.mathworks.com/matlabcentral/fileexchange/50830-nrrd-format-file-reader
%
% Requires: MATLAB 7.13 (R2011b)
%
% Download average_template_50.nrrd
% Download projection_density at 50 micron for SectionDataSet id = 287495026
% Download data_mask at 50 micron for SectionDataSet id = 287495026
%
% ---------------------------------
%
% Read image volume with NRRD reader
% Note that the reader swaps the order of the first two axes
%
% AVGT  = 3-D matrix of average_template
% PDENS = 3-D matrix of projection_density
% DMASK = 3-D matrix of data_mask
%
[AVGT, metaAVGT] = nrrdread('average_template_50.nrrd');
[PDENS, metaPDENS] = nrrdread('11_wks_coronal_287495026_50um_projection_density.nrrd');
[DMASK, metaDMASK] = nrrdread('11_wks_coronal_287495026_50um_data_mask.nrrd');

% Display one coronal section
figure; imagesc(squeeze(AVGT(:,184,:))); colormap(gray(256)); axis equal;
figure; imagesc(squeeze(PDENS(:,184,:))); colormap(jet(256)); axis equal;
figure; imagesc(squeeze(DMASK(:,184,:))); colormap(gray(256)); axis equal;

Example Python code snippet to read in the 50 µm injection_density and injection_fraction and compute an injection centroid:

# -------------------------------
#
# Install pynrrd: https://github.com/mhe/pynrrd
#
# Download injection_density at 50 micron for SectionDataSet id = 287495026
# Download injection_fraction at 50 micron for SectionDataSet id = 287495026
#
# ---------------------------------
import nrrd
import numpy as np

#
# Read image volume with NRRD reader
# Note that the reader swaps the order of the first two axes
#
# INJDENS = 3-D matrix of injection_density
# INJFRAC = 3-D matrix of injection_fraction
#
INJDENS, metaINJDENS = nrrd.read('11_wks_coronal_287495026_50um_injection_density.nrrd')
INJFRAC, metaINJFRAC = nrrd.read('11_wks_coronal_287495026_50um_injection_fraction.nrrd')

# Find all voxels with injection_fraction >= 1
injection_voxels = np.where(INJFRAC >= 1)
injection_density = INJDENS[injection_voxels]
sum_density = np.sum(injection_density)

# Compute the injection centroid in CCF coordinates ((AP, SI, LR) in microns, 50 µm voxels)
centroid = [np.sum(injection_density * x) / sum_density * 50 for x in injection_voxels]
print(centroid)

Projection Structure Unionization

Projection signal statistics can be computed for each structure delineated in the reference atlas by combining or unionizing grid voxels with the same 3-D structural label. While the reference atlas is typically annotated at the lowest level of the ontology tree, statistics at upper level structures can be obtained by combining measurements of the hierarchical children to obtain statistics for the parent structure. The unionization process also separates out the left versus right hemisphere contributions as well as the injection versus non-injection components.

Projection statistics are encapsulated as a ProjectionStructureUnionize object associated with one Structure, one Hemisphere (left, right or both) and one SectionDataSet. ProjectionStructureUnionize records can be downloaded via RMA. ProjectionStructureUnionize data is used in the web application to display projection summary bar graphs.

Examples:

http://api.brain-map.org/api/v2/data/ProjectionStructureUnionize/query.xml?criteria=[section_data_set_id$eq126862385],[is_injection$eqfalse]&num_rows=5000&include=structure

http://api.brain-map.org/api/v2/data/ProjectionStructureUnionize/query.xml?criteria=[section_data_set_id$eq126862385],[is_injection$eqtrue]&num_rows=5000&include=structure
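
Building on the URL examples above, here is a hedged Python sketch (assuming the requests package) that pulls the same unionize records as JSON and lists the structures with the largest projection volume for one experiment. Field names such as projection_volume and hemisphere_id follow the ProjectionStructureUnionize model; inspect one record's keys if in doubt, and paginate with start_row/num_rows for large result sets.

import requests

url = "http://api.brain-map.org/api/v2/data/ProjectionStructureUnionize/query.json"
params = {
    "criteria": "[section_data_set_id$eq126862385],[is_injection$eqfalse]",
    "include": "structure",
    "num_rows": 5000,
}
records = requests.get(url, params=params).json()["msg"]

top = sorted(records, key=lambda r: r["projection_volume"], reverse=True)[:10]
for rec in top:
    print(rec["structure"]["acronym"], rec["hemisphere_id"], rec["projection_volume"])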

Projection Grid Search Service

A projection grid service has been implemented to allow users to instantly search over the whole dataset to find experiments with specific projection profiles.

  • The Source Search function retrieves experiments by anatomical location of the injection site.
  • The Target Search function returns a rank list of experiments by signal volume in the user specified target structure(s).
  • The Spatial Search function returns a rank list of experiments by density of signal in the user specified target voxel location.
  • The Injection Coordinate Search function returns a rank list of experiments by distance of their injection site to a user specified seed location.
  • The Correlation Search function enables the user to find experiments that have a similar spatial projection profile to a seed experiment when compared over a user-specified domain.

The projection grid search service is available through both the Web application and the API.

Source Search

To perform a Source Search, a user specifies a set of source structures. The service returns all experiments for which either the primary injection structure or one of the secondary injection structures corresponds to one of the specified source structures or their descendants in the ontology. The search results can also be filtered by a list of transgenic lines.

See the connected service page for definitions of service::mouse_connectivity_injection_structure parameters.

The output of the source search is an XML or JSON list of objects (depending on the query format). Each object represents one experiment and contains information about the experiment including its unique identifier, the primary injection structure, a list of any secondary injection structures, injection coordinates, injection volume and transgenic line name.

Examples:

  • Source search for experiments with injection in the isocortex

http://api.brain-map.org/api/v2/data/query.json?criteria=service::mouse_connectivity_injection_structure[injection_structures$eqIsocortex][primary_structure_only$eqtrue]

  • Source search for experiments performed on wild-type specimens and with injection in the isocortex

http://api.brain-map.org/api/v2/data/query.json?criteria=service::mouse_connectivity_injection_structure[injection_structures$eqIsocortex][transgenic_lines$eq0][primary_structure_only$eqtrue]

  • Source search for experiments performed on Syt6-Cre_KI148 cre-line specimens and with injection in the isocortex

http://api.brain-map.org/api/v2/data/query.json?criteria=service::mouse_connectivity_injection_structure[injection_structures$eqIsocortex][transgenic_lines$eq'Syt6-Cre_KI148'][primary_structure_only$eqtrue]
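
The same queries can be issued from Python; a brief sketch (assuming the requests package) that mirrors the wild-type isocortex example above and reports how many experiments were returned:

import requests

url = "http://api.brain-map.org/api/v2/data/query.json"
criteria = ("service::mouse_connectivity_injection_structure"
            "[injection_structures$eqIsocortex][transgenic_lines$eq0]"
            "[primary_structure_only$eqtrue]")
experiments = requests.get(url, params={"criteria": criteria}).json()["msg"]
print(len(experiments), "experiments; first id:", experiments[0]["id"])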

Figure: Screenshot of source search results in the web application for experiments with injection in the isocortex. The injection location of each experiment is shown as a sphere on the 3D injection map.

Target Search

To perform a Target Search, the user specifies a set of target structures. The service returns a rank list of experiments by signal volume in the target structures which are above a minimum threshold. The target structure specification can be further refined by hemisphere. The search results can also be filtered by a list of source structures and/or list of transgenic lines.

See the connected service page for definitions of service::mouse_connectivity_injection_structure parameters.

The output of the target search is an XML or JSON list of objects. Each object represents one experiment and contains information about the experiment including its unique identifier, the primary injection structure, a list of any secondary injection structures, injection coordinates, injection volume and transgenic line name. Additionally, the total signal volume and the number of voxels spanned by the target structure(s) are also reported.

Example:

  • Target search for experiments with projection signal in the target structure LGd (dorsal part of the lateral geniculate complex) and injection in the isocortex

http://api.brain-map.org/api/v2/data/query.json?criteria=service::mouse_connectivity_injection_structure[injection_structures$eqIsocortex][primary_structure_only$eqtrue][target_domain$eqLGd]

Figure: Screenshot of target search results in the web application for experiments with projection in target structure LGd (dorsal part of the lateral geniculate complex) and injection in the isocortex. The injection location of each experiment is shown as a sphere on the 3D injection map.

Spatial Search

To perform a Spatial Search, a user selects a target location within the 3D reference space. The service returns a rank list of experiments by signal density in the target location and with density greater than 0.1.

See the connected service page for definitions of service::mouse_connectivity_target_spatial parameters.

The output of the spatial search is an XML or JSON list of objects. Each object represents one experiment and contains information about the experiment including its unique identifier, the primary injection structure, a list of any secondary injection structures, injection coordinates, injection volume and transgenic line name. Additionally, the path from the target location to the injection site is listed along with the signal density at each node.

Example:

  • Spatial search for experiments with projection signal in a target location in VM (ventral medial nucleus of the thalamus)

http://api.brain-map.org/api/v2/data/query.xml?criteria=service::mouse_connectivity_target_spatial[seed_point$eq6900,5050,6450]

Figure: Screenshot of spatial search results in the web application for experiments with projection in a target location within VM (ventral medial nucleus of the thalamus). Each line in the 3D map is the computationally generated path from the target location to the injection site of one experiment.

Injection Coordinate Search

To perform an Injection Coordinate Search, a user specifies a seed location within the 3D reference space. The service returns a rank list of experiments ordered by the distance of their injection sites from the specified seed location.

See the connected service page for definitions of service::mouse_connectivity_injection_coordinate parameters.

The output of the injection coordinate search is an XML or JSON list of objects. Each object represents one experiment and contains information about the experiment including its unique identifier, the primary injection structure, a list of any secondary injection structures, injection coordinates, injection volume and transgenic line name. Additionally, the distance between the injection site and the seed location is also reported.

Example: Injection coordinate search for experiments with a seed location in VM (ventral medial nucleus of the thalamus)

http://api.brain-map.org/api/v2/data/query.xml?criteria=service::mouse_connectivity_injection_coordinate[seed_point$eq6900,5050,6450]

Correlation Search

To perform a Correlation Search, the user selects a seed experiment and a domain over which the similarity comparison is to be made. All voxels belonging to any of the domain structures form the domain voxel set. Pearson’s correlation coefficient is computed between the domain voxel set from the seed experiment and every other experiment in the product. The return list is sorted by descending correlation coefficient.
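
The same kind of comparison can be reproduced offline from downloaded grid files. A hedged Python sketch: Pearson's correlation between two experiments' projection_density grids restricted to a domain mask built from the annotation volume (the grid and annotation volumes must share a resolution; the projection file names and the structure id set are placeholders, e.g. the thalamus and its descendants taken from the ontology).

import nrrd
import numpy as np

ano, _ = nrrd.read('annotation_25.nrrd')
seed, _ = nrrd.read('seed_projection_density_25.nrrd')      # placeholder file names
other, _ = nrrd.read('other_projection_density_25.nrrd')

domain_ids = [549]                                           # 549 = TH; extend with descendant ids
domain = np.isin(ano, domain_ids)

r = np.corrcoef(seed[domain], other[domain])[0, 1]
print('Pearson r over domain:', r)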

See the connected service page for definitions of service::mouse_connectivity_correlation parameters.

The output of the correlation search is an XML or JSON list of objects. Each object represents one experiment and contains information about the experiment including its unique identifier, the primary injection structure, a list of any secondary injection structures, injection coordinates, injection volume and transgenic line name. Additionally, the Pearson's correlation coefficient between the experiment and the seed is reported.

Example:

  • Correlation search for experiments with a projection profile in the thalamus similar to that of seed experiment 112670853 (injection in the primary motor area of the cortex)

http://api.brain-map.org/api/v2/data/query.xml?criteria=service::mouse_connectivity_correlation[row$eq112670853][structures$eqTH]


Figure: Screenshot of the top returns of a correlation search for experiments with projection patterns similar to a MOp injection experiment (top-left) within the thalamus.


Is there a way to bulk-download the post-processing connectivity scores, i.e. the normalized projection volume and projection density displayed on the Experiment Details pages, for all injection sites?

Is there a way to automatically retrieve information about all the brain nuclei projecting to a specific target? I'd like to create a network of connections, but downloading a CSV table for each area is very time-consuming, and I didn't find any explanation of how to retrieve this information automatically.