Running the allensdk analyses on new data - time traces

Hi all,

I’m trying to run the “drifting gratings” and “natural movies” analyses on new/updated time traces. Basically, I have data in exactly the same form as the data downloaded via the brain observatory package in the SDK:

boc = BrainObservatoryCache(manifest_file='boc/manifest.json')
data_set = boc.get_ophys_experiment_data(exp_id)
dg = DriftingGratings(data_set)

so rather than calling dg.get_response() to get the precomputed values, I’d like to update the time traces in data_set and recompute the analyses from scratch.

Is this possible and how can I do it?

Thanks!

-Adam

Hi Adam,

Welcome to the forum!

When you say “exactly in the same form” do you mean:

  1. you have NWB files?
  2. you have modified some of the data for existing experiments? e.g. calculated your own dff traces.
  3. you have whole-cloth new data for entire experiments?
  4. something else?

Thanks,
Nile

Thanks Nile!

To answer your question: I mean 2. I have downloaded the dFF traces and modified them based on a new algorithm. For all intents and purposes, they can be treated as a Python array of the same size and type as the dFF trace matrix I loaded. I want to re-incorporate those new dFF traces and redo the analysis to see whether the tuning curves and other distributions are preserved.

Thanks!

-Adam

Gotcha.

StimulusAnalysis objects (the base class of DriftingGratings & NaturalMovie) are constructed with a BrainObservatoryNwbDataSet, which they use to access data. To access dff, they call the get_dff_traces method on their BrainObservatoryNwbDataSet. What you need is an object that behaves almost exactly like a BrainObservatoryNwbDataSet, but whose get_dff_traces method returns your traces instead of the ones in the NWB file.

Here is an example implementation.
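
In rough outline, the patching object can look like the sketch below. This is a minimal illustration of the delegation pattern, not actual allensdk code; the class name PatchedDataSet and its parameters are illustrative:

```python
import numpy as np

class PatchedDataSet:
    """Wrap a data set and override get_dff_traces to return user-supplied
    traces. Everything else is delegated to the wrapped object, so the
    StimulusAnalysis classes can use it in place of a
    BrainObservatoryNwbDataSet."""

    def __init__(self, wrapped, dff_traces, dff_timestamps=None):
        self._wrapped = wrapped
        self.dff_traces = dff_traces
        self.dff_timestamps = dff_timestamps

    def get_dff_traces(self, cell_specimen_ids=None):
        if self.dff_timestamps is None:
            # fall back to the timestamps stored in the original data set
            self.dff_timestamps, _ = self._wrapped.get_dff_traces()
        return self.dff_timestamps, self.dff_traces

    def __getattr__(self, name):
        # delegate every other attribute/method to the wrapped data set
        return getattr(self._wrapped, name)
```

Because `__getattr__` is only consulted for attributes the wrapper doesn't define, only `get_dff_traces` behaves differently; every other call passes straight through to the real data set.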

One caveat is that sweepwise stimuli (those with discrete presentations, like gratings) use the average fluorescence over the 1 second preceding stimulus onset as f_0, rather than using a running window. The running window df/f is used when a continuous signal is desired, such as when analyzing the response to natural movies, or when analyzing running speed data. The upshot is that altering the way this running window df/f is calculated will not impact analysis metrics calculated from sweepwise stimuli. For more information, please see the white paper.
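
For concreteness, that sweepwise baseline amounts to something like the following. This is a sketch under assumed array shapes; sweep_dff is a made-up name, not an allensdk function:

```python
import numpy as np

def sweep_dff(fluorescence, timestamps, sweep_start, window=1.0):
    """Sweepwise df/f: f_0 is the mean fluorescence over the `window`
    seconds preceding stimulus onset, not a running-window estimate.
    fluorescence: (n_cells, n_timepoints); timestamps: (n_timepoints,)."""
    pre = (timestamps >= sweep_start - window) & (timestamps < sweep_start)
    f0 = fluorescence[:, pre].mean(axis=1, keepdims=True)
    return (fluorescence - f0) / f0
```

Because f_0 is taken directly from the fluorescence around each sweep, swapping in a different running-window df/f leaves these sweepwise metrics untouched.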

Good luck! Please let us know if you have further questions.
Nile

Thanks Nile,

I’ve been trying to get this working, and I’m running into some errors. First, it looks like there’s a bug on line 18 of the example: shouldn’t that be "if self.dff_timestamps is None:" rather than "if self.dff_timestamps is not None:"? Second, I’m trying to extract a specific neuron, and I’m hitting the following issues:

  1. If I try

cache = DffPatchingCache()
exp_id = 503109347
regular_data_set = cache.get_ophys_experiments(ids=[exp_id])
timestamps, dff = regular_data_set.get_dff_traces()

I get the error:
---> 15 timestamps, dff = regular_data_set.get_dff_traces()
AttributeError: 'list' object has no attribute 'get_dff_traces'

  2. If I try

cache = DffPatchingCache()
exp_id = 503109347
regular_data_set = cache.get_ophys_experiments(exp_id)
timestamps, dff = regular_data_set.get_dff_traces()

I get the error:
---> 11 regular_data_set = cache.get_ophys_experiments(exp_id)
TypeError: expected str, bytes or os.PathLike object, not int

  3. If I try

cache = DffPatchingCache()
exp_id = 503109347
regular_data_set = cache.get_ophys_experiments(exp_id, dff_traces=None)
timestamps, dff = regular_data_set.get_dff_traces()

I get the error:
---> 12 regular_data_set = cache.get_ophys_experiments(exp_id, dff_traces=None)
TypeError: get_ophys_experiments() got an unexpected keyword argument 'dff_traces'

Interestingly, if I replace the dff traces, it seems to work OK:

patched_data_set = cache.get_ophys_experiment_data(exp_id, dff_traces=new_dff)

Any idea what the issue is?

Thanks!

-Adam

OK, update: somehow I confused "get_ophys_experiments" with "get_ophys_experiment_data". I can create data objects now. I’m running into a second issue, though: the actual responses are not being recomputed. For example, using the above code I try:

exp_id = 503109347
regular_data_set = boc.get_ophys_experiment_data(exp_id)

# make a random array of the right shape

timestamps, dff = regular_data_set.get_dff_traces()
rand_dff = np.random.rand(*dff.shape)

# … and patch it onto the dataset

rand_data_set = cache.get_ophys_experiment_data(exp_id, dff_traces=rand_dff)

regular_dg_analysis = DriftingGratings(regular_data_set)
rand_dg_analysis = DriftingGratings(rand_data_set)

dgreg = regular_dg_analysis.get_response()
dgrand = rand_dg_analysis.get_response()

np.sum(np.sum((dgreg[:,1:,:,0] - dgrand[:,1:,:,0])**2,1),0)

This returns all zeros, meaning the responses are identical and the random time traces don’t seem to be affecting the calculations! Any idea how to fix that?

Thanks in advance!