Hanging downloads from neuropixel

I am trying to download all spike data from the Neuropixels dataset. I used the code from the "Downloading the complete dataset" section of the 'Accessing Neuropixels Visual Coding Data' tutorial. Three times now it has begun on a file and then hung indefinitely (the NWB file's byte count stops changing over minutes, after more than a gigabyte has downloaded). Do you have advice or timeout code snippets to address this problem? The code I use is below:

import os
import shutil

import numpy as np
import pandas as pd

from allensdk.brain_observatory.ecephys.ecephys_project_cache import EcephysProjectCache



data_directory = '/loc6tb/data/responses/np_allen/' # must be updated to a valid directory in your filesystem

manifest_path = os.path.join(data_directory, "manifest.json")

cache = EcephysProjectCache.from_warehouse(manifest=manifest_path)


sessions = cache.get_session_table()

for session_id, row in sessions.iterrows():

    truncated_file = True
    directory = os.path.join(data_directory, 'session_' + str(session_id))

    while truncated_file:
        try:
            # get_session_data itself can raise OSError on a truncated
            # NWB file, so it belongs inside the try block too
            session = cache.get_session_data(session_id)
            print(session.specimen_name)
            truncated_file = False
        except OSError:
            shutil.rmtree(directory)
            print(" Truncated spikes file, re-downloading")

Hanging downloads are an issue we’re aware of, and we’re working on a fix. In the meantime, can you try scrolling to the end of that notebook and using the api.brain-map.org links to download the NWB files through your browser? That seems to work more reliably than requesting them through the SDK (at least for now).
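If you'd rather keep the scripted loop going, one stopgap is to run each download under a watchdog so a hang raises an exception instead of blocking forever. This isn't part of the AllenSDK — `call_with_timeout` below is a hypothetical helper built only from the standard library, and the timeout value is an arbitrary example:

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout


def call_with_timeout(fn, timeout_s, *args, **kwargs):
    """Run fn(*args, **kwargs) in a worker thread; raise FutureTimeout
    if it has not finished within timeout_s seconds.

    Note: Python cannot kill the worker thread, so this only *detects*
    a hang (letting you delete the partial file and retry); if you need
    the stuck connection torn down as well, run the download in a
    subprocess instead.
    """
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, *args, **kwargs)
    try:
        return future.result(timeout=timeout_s)
    finally:
        pool.shutdown(wait=False)  # don't block on a hung worker
```

In your loop you would then call something like `session = call_with_timeout(cache.get_session_data, 600, session_id)` and handle `FutureTimeout` the same way as the truncated-file `OSError`: remove the partial session directory and retry.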

Hi deanp3,

Welcome to the forum!

We’re tracking this issue here: https://github.com/AllenInstitute/AllenSDK/issues/1214 and hope to have it resolved soon.