Failed to get session data

Hi everyone,
I ran the Jupyter notebook code to access the Neuropixels Visual Coding data.
After running the following commands to get session data,

import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from allensdk.brain_observatory.ecephys.ecephys_project_cache import EcephysProjectCache

manifest_path = os.path.join('/path/to/data', 'manifest.json')  # local cache location
cache = EcephysProjectCache.from_warehouse(manifest=manifest_path)
sessions = cache.get_session_table()
filtered_sessions = sessions[( == 'M') &
                             (sessions.full_genotype.str.find('Sst') > -1) &
                             (sessions.session_type == 'brain_observatory_1.1') &
                             (['VISl' in acronyms for acronyms in
                               sessions.ecephys_structure_acronyms])]
analysis_metrics1 = cache.get_unit_analysis_metrics_by_session_type('brain_observatory_1.1')
analysis_metrics2 = cache.get_unit_analysis_metrics_by_session_type('functional_connectivity')
all_metrics = pd.concat([analysis_metrics1, analysis_metrics2], sort=False)
session = cache.get_session_data(filtered_sessions.index.values[0])

I ran into an error on my Mac (8 GB 1600 MHz DDR3 RAM): "concurrent.futures._base.TimeoutError" (raised via "raise asyncio.TimeoutError from None").

I assumed this was because my Mac did not have enough disk space to store the session data locally, so I moved to a PC with 350 GB free and 16 GB of RAM and ran the same code in Jupyter Notebook. However, I ran into a different error: "RuntimeError: This event loop is already running".

Does anyone know what issue I am facing and how I can fix it?

Best Regards


I have been able to reproduce the error. We will look into this.


I am having the same problem (Windows 10), always when calling cache.get_session_data(session_id).

I've tried this on session IDs 756029989 and 791319847 using the exact code posted in the example notebooks, and I get the same error each time: "RuntimeError: This event loop is already running".

In the local cache directory, session folders are created but are empty.

Happy to provide more info / do additional testing if that would be helpful.
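One more data point that may help with debugging: "This event loop is already running" is the error asyncio itself raises when run_until_complete() is called on a loop that is already running, which is exactly the situation inside a Jupyter kernel (the kernel keeps its own event loop running). A minimal standard-library-only reproduction, not specific to AllenSDK:

```python
import asyncio

async def demo():
    # Inside a running loop (as in a Jupyter kernel), any library code that
    # calls loop.run_until_complete() triggers this exact RuntimeError.
    loop = asyncio.get_running_loop()
    coro = asyncio.sleep(0)
    try:
        loop.run_until_complete(coro)
    except RuntimeError as err:
        coro.close()  # avoid a "coroutine was never awaited" warning
        return str(err)

print(asyncio.run(demo()))  # prints: This event loop is already running
```

So the message is likely a symptom of how the download code interacts with Jupyter's loop rather than the root cause.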



Looks like the problem here is most likely that these data files are very large, and at “typical” Internet download speeds, the time that it takes to get the files is greater than an internal default timeout limit. In the long-term, we will increase the default timeout value and make a change to AllenSDK to make it easier to pass in your own timeout value. In the short-term, here is a code snippet that will show how you can set your own timeout value so that the files will have a chance to download. In this example, I have set the value to 50 minutes; you can change that value as needed for your system.

import os
from allensdk.brain_observatory.ecephys.ecephys_project_cache import EcephysProjectCache
from allensdk.brain_observatory.ecephys.ecephys_project_api import EcephysProjectWarehouseApi
from allensdk.brain_observatory.ecephys.ecephys_project_api.rma_engine import RmaEngine
data_directory = 'C:/allensdk_data'
manifest_path = os.path.join(data_directory, "manifest.json")
session_id = 721123822
cache = EcephysProjectCache(
    manifest=manifest_path,
    fetch_api=EcephysProjectWarehouseApi(RmaEngine(
        scheme="http",
        host="",
        timeout=50 * 60  # set timeout to 50 minutes
    ))
)
session = cache.get_session_data(session_id)

Thanks! This worked for me.

Thank you! This worked for me too.