Hello
I have been exploring the AllenSDK for accessing large-scale cell feature datasets, and while the API works well for smaller queries, I’m running into challenges when attempting to pull down and process larger slices of data.
For example, trying to extract detailed electrophysiology features across thousands of cells often times out or requires significant memory handling on my local machine.
I’m wondering if there are recommended best practices for structuring such queries more efficiently.
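For concreteness, my current all-at-once approach looks roughly like this (a minimal sketch using the standard CellTypesCache entry points; the manifest path is just the conventional one from the docs):

```python
# A minimal sketch of my current all-at-once approach, using the
# standard CellTypesCache entry points from the Allen SDK.
import pandas as pd
from allensdk.core.cell_types_cache import CellTypesCache

ctc = CellTypesCache(manifest_file='cell_types/manifest.json')

# Metadata and precomputed ephys features each come back as a list of
# dicts, which I flatten into DataFrames and join on specimen id.
cells = pd.DataFrame(ctc.get_cells())
features = pd.DataFrame(ctc.get_ephys_features())
merged = cells.merge(features, left_on='id', right_on='specimen_id')
print(merged.shape)
```

This is manageable for the precomputed feature table, but once I start touching the per-cell NWB files the timeout and memory issues show up.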
Some documentation points toward batching requests, but I haven’t found a clear end-to-end example of how to manage this workflow without either overwhelming memory or missing out on important metadata.
It would be really helpful to have a reference or a working code snippet that demonstrates how to query, batch, and store large datasets in a reproducible way.
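Something along these lines is what I’ve been experimenting with (a sketch only; BATCH_SIZE and the per-cell summary row are placeholders I made up, and real feature extraction would replace them):

```python
# A sketch of the batched workflow I'm after, assuming the standard
# CellTypesCache API; BATCH_SIZE and the per-cell summary row are
# placeholders, not anything from the Allen docs.
import pandas as pd
from allensdk.core.cell_types_cache import CellTypesCache

ctc = CellTypesCache(manifest_file='cell_types/manifest.json')
specimen_ids = [cell['id'] for cell in ctc.get_cells()]

BATCH_SIZE = 50  # a guess; would tune to available memory and bandwidth

for start in range(0, len(specimen_ids), BATCH_SIZE):
    batch = specimen_ids[start:start + BATCH_SIZE]
    rows = []
    for specimen_id in batch:
        # get_ephys_data downloads (and caches) the per-cell NWB file
        data_set = ctc.get_ephys_data(specimen_id)
        sweep_numbers = data_set.get_sweep_numbers()
        rows.append({'specimen_id': specimen_id,
                     'n_sweeps': len(sweep_numbers)})
    # Write each batch out immediately so memory stays bounded
    pd.DataFrame(rows).to_csv(f'ephys_batch_{start // BATCH_SIZE:04d}.csv',
                              index=False)
```

Is this the right general shape, or is there a more idiomatic way to checkpoint batches and keep the associated metadata together?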
I also checked the Cell Types guide in the Allen SDK dev documentation and found it quite informative, though it doesn’t walk through this kind of large-scale workflow.
In a related context, I was reading about Microsoft SQL Server, and it struck me that similar database optimization techniques, like indexing or partitioning, might be applied here to speed up data retrieval.
Does the AllenSDK already support optimizations like this, or should we be handling that layer entirely on the client side?
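On the client side, the best I could come up with is caching the precomputed feature table into a local SQLite file and indexing it there (purely illustrative; the database, table, and index names are mine):

```python
# Purely illustrative: cache the precomputed feature table into a local
# SQLite file and index it client-side; db/table/index names are mine.
import sqlite3

import pandas as pd
from allensdk.core.cell_types_cache import CellTypesCache

ctc = CellTypesCache(manifest_file='cell_types/manifest.json')
features = pd.DataFrame(ctc.get_ephys_features())

with sqlite3.connect('allen_features.db') as conn:
    features.to_sql('ephys_features', conn, if_exists='replace', index=False)
    # Index specimen_id so per-cell lookups don't scan the whole table
    conn.execute('CREATE INDEX IF NOT EXISTS idx_specimen '
                 'ON ephys_features (specimen_id)')
```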
Thank you!