I have repackaged this data from the server into a new format…a “precomputed” skeleton format that is compatible with neuroglancer and for which there is a python library that is available for reading.
here is a neuroglancer link: Neuroglancer
[click login at bottom of screen to unshorten link]
the precomputed source location from the link is: precomputed://gs://allen_neuroglancer_ccf/allen_mesoscale
I would use the python library cloud-volume to access this bucket and pull streamline skeletons for each of the experiments: https://github.com/seung-lab/cloud-volume (read and write Neuroglancer datasets programmatically).
here is an example snippet…
import cloudvolume

# open the precomputed source over https
cv = cloudvolume.CloudVolume('precomputed://gs://allen_neuroglancer_ccf/allen_mesoscale', use_https=True)
# fetch the skeleton for one experiment ID
streamline = cv.skeleton.get(479983421)
The streamline skeleton has vertices in nanometers and edges as pairs of indices into the vertex array; if you perform a connected-components analysis on the graph, each connected component is a distinct streamline.
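A minimal sketch of that connected-components step, assuming `vertices` is an N×3 float array (nanometers) and `edges` is an M×2 integer array, as on cloud-volume's skeleton objects; the `split_streamlines` helper name is mine, not part of any library:

```python
# Sketch: split a skeleton into individual streamlines by finding the
# connected components of its vertex/edge graph.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def split_streamlines(vertices, edges):
    # build a sparse adjacency matrix from the edge index pairs
    n = len(vertices)
    adj = coo_matrix((np.ones(len(edges)), (edges[:, 0], edges[:, 1])),
                     shape=(n, n))
    # label each vertex with its (undirected) component id
    n_comp, labels = connected_components(adj, directed=False)
    # one vertex array per component = one array per streamline
    return [vertices[labels == c] for c in range(n_comp)]

# toy example: two disconnected 2-vertex paths -> two streamlines
verts = np.array([[0, 0, 0], [1, 0, 0], [5, 5, 5], [6, 5, 5]], dtype=float)
edges = np.array([[0, 1], [2, 3]])
parts = split_streamlines(verts, edges)
```

With the real data you would pass `streamline.vertices` and `streamline.edges` instead of the toy arrays.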
if you don't want to use python you can get the streamline at the following https address pattern:
https://storage.googleapis.com/allen_neuroglancer_ccf/allen_mesoscale/skeleton/[EXPERIMENT_ID]
The resulting binary needs to be interpreted according to the precomputed skeleton format.
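For reference, a hand-rolled decoder sketch based on my reading of the neuroglancer precomputed skeleton spec (a header of two little-endian uint32 counts, then float32 vertex positions, then uint32 edge index pairs; per-vertex attributes, if any, follow and are not decoded here). The `decode_skeleton` helper is illustrative, not a library function:

```python
# Sketch: decode a precomputed skeleton binary (as downloaded from the
# skeleton/[EXPERIMENT_ID] URL) into vertex and edge arrays.
import struct
import numpy as np

def decode_skeleton(buf):
    # header: num_vertices, num_edges as little-endian uint32
    num_vertices, num_edges = struct.unpack_from('<II', buf, 0)
    offset = 8
    # vertex positions: num_vertices * 3 float32, in nanometers
    vertices = np.frombuffer(buf, dtype='<f4',
                             count=num_vertices * 3, offset=offset)
    vertices = vertices.reshape(num_vertices, 3)
    offset += num_vertices * 3 * 4
    # edges: num_edges * 2 uint32 indices into the vertex array
    edges = np.frombuffer(buf, dtype='<u4',
                          count=num_edges * 2, offset=offset)
    edges = edges.reshape(num_edges, 2)
    return vertices, edges

# round-trip check on a toy 3-vertex, 2-edge skeleton
toy = struct.pack('<II', 3, 2)
toy += np.array([[0, 0, 0], [1000, 0, 0], [2000, 0, 0]], dtype='<f4').tobytes()
toy += np.array([[0, 1], [1, 2]], dtype='<u4').tobytes()
verts, toy_edges = decode_skeleton(toy)
```

In practice cloud-volume does this decoding for you; this is only useful if you are fetching the raw bytes yourself.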
the reference info file for this dataset can be found here:
https://storage.googleapis.com/allen_neuroglancer_ccf/allen_mesoscale/skeleton/info
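The info file is plain JSON, so it is easy to inspect directly. A small sketch, with the caveat that the field names shown in the offline example ("@type", "vertex_attributes") come from the precomputed skeleton spec in general, and this dataset's actual info file may contain more:

```python
# Sketch: fetch and inspect the skeleton info file for this dataset.
import json
from urllib.request import urlopen

INFO_URL = ('https://storage.googleapis.com/allen_neuroglancer_ccf/'
            'allen_mesoscale/skeleton/info')

def fetch_info(url=INFO_URL):
    # download and parse the JSON info document
    with urlopen(url) as resp:
        return json.load(resp)

# offline illustration of the general shape such a document takes
example = json.loads('{"@type": "neuroglancer_skeletons", '
                     '"vertex_attributes": []}')
```

Calling `fetch_info()` requires network access to the bucket; the parsed dict tells you, among other things, which per-vertex attributes follow the vertices and edges in each skeleton binary.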