(DEPRECATED) The psychofit toolbox contains tools to fit two-alternative psychometric data. The fitting is done using maximum likelihood estimation: one assumes that the responses of the subject are given by a binomial distribution whose mean is given by the psychometric function.

The data can be expressed in fraction correct (from 0.5 to 1) or in fraction of one specific choice (from 0 to 1). To fit them you can use these functions:

- weibull50: Weibull function from 0.5 to 1, with lapse rate
- weibull: Weibull function from 0 to 1, with lapse rate
- erf_psycho: erf function from 0 to 1, with lapse rate
- erf_psycho_2gammas: erf function from 0 to 1, with two lapse rates

Functions in the toolbox are:

- mle_fit_psycho: Maximum likelihood fit of psychometric function
- neg_likelihood: Negative likelihood of a psychometric function

For more info, see the examples of use of the psychofit toolbox.

Matteo Carandini, 2000-2015.
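As a concrete illustration of the shape of such a function, here is a minimal sketch of an erf-based psychometric curve with two lapse rates, in the spirit of erf_psycho_2gammas. The parameter order (bias, threshold, gamma1, gamma2) and the exact functional form are assumptions for illustration, not the psychofit implementation.

```python
import math

def erf_psycho_2gammas(params, x):
    """Sketch: probability of one choice as a function of signed stimulus x.

    params: (bias, threshold, gamma1, gamma2), where gamma1 is the lower
    lapse rate (floor) and gamma2 the upper lapse rate (1 - ceiling).
    """
    bias, threshold, gamma1, gamma2 = params
    # Scaled and shifted erf, squeezed between gamma1 and 1 - gamma2
    return gamma1 + (1 - gamma1 - gamma2) * (math.erf((x - bias) / threshold) + 1) / 2
```

With zero lapse rates the curve runs from 0 to 1 and crosses 0.5 at the bias; non-zero lapse rates compress it into [gamma1, 1 - gamma2].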
# Get the next training status
>>> next(member for member in sorted(TrainingStatus) if member > TrainingStatus[status.upper()])
<TrainingStatus.READY4RECORDING: 128>
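The next-status idiom above can be reproduced with a hypothetical stand-in enum; the member names and integer values below are illustrative only, not the real TrainingStatus definition.

```python
from enum import IntEnum

# Hypothetical stand-in for the TrainingStatus enum used in the doctest above
class TrainingStatus(IntEnum):
    IN_TRAINING = 1
    TRAINED_1A = 2
    TRAINED_1B = 4
    READY4EPHYSRIG = 8
    READY4RECORDING = 128

status = 'trained_1b'
# The first member whose value exceeds the current status is the next status
next_status = next(m for m in sorted(TrainingStatus) if m > TrainingStatus[status.upper()])
```

Because IntEnum members sort by value, `sorted(TrainingStatus)` orders the training stages, and `next(...)` picks the first stage strictly above the current one.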
Computes the training status of all alive and water restricted subjects in a specified lab.

The results are printed to stdout.

Parameters:

lab (str) – Lab name (must match the name registered on Alyx).

date (str) – The ISO date from which to compute training status. If not specified, will compute from the latest date with available data. Format should be ‘YYYY-MM-DD’.

details (bool) – Whether to display all information about training status computation, e.g. performance, number of trials, psychometric fit parameters.

one – An instantiation of the ONE class.
Computes the training status of a specified subject and prints the results to stdout.

Parameters:

subj (str) – Subject nickname (must match the name registered on Alyx).

date (str) – The ISO date from which to compute training status. If not specified, will compute from the latest date with available data. Format should be ‘YYYY-MM-DD’.

details (bool) – Whether to display all information about training status computation, e.g. performance, number of trials, psychometric fit parameters.

one – An instantiation of the ONE class.
Parameters:

subj (str) – Subject nickname (must match the name registered on Alyx).

date (str) – The ISO date from which to compute training status. If not specified, will compute from the latest date with available data. Format should be ‘YYYY-MM-DD’.
Compute training status of a subject from consecutive training datasets.

For IBL, training status is calculated using trials from the last three consecutive sessions.

Parameters:

trials (dict of str) – Dictionary of trials objects where each key is the ISO session date string.

task_protocol (list of str) – Task protocol used for each training session in trials; can be ‘training’, ‘biased’ or ‘ephys’.

ephys_sess_dates (list of str) – List of ISO date strings where training was conducted on an ephys rig. Empty list if all sessions were on a training rig.

n_delay (int) – Number of sessions on an ephys rig that had a delay of more than 15 minutes before the session start. 0 if no such sessions were detected.

Returns:

str – Training status of the subject.

iblutil.util.Bunch – Bunch containing performance metrics that decide training status, i.e. performance on easy trials, number of trials, psychometric fit parameters, reaction time.
subj (str) – Subject nickname (must match the name registered on Alyx).

sess_dates (list of str) – ISO date strings of training sessions used to determine training status.

status (str) – Training status of subject.

perf_easy (numpy.array) – Proportion of correct high contrast trials for each training session.

n_trials (numpy.array) – Total number of trials for each training session.

psych (numpy.array) – Psychometric parameters fit to data from all training sessions: bias, threshold, lapse high, lapse low.

psych_20 (numpy.array) – The fit psychometric parameters for the blocks where the probability of a left stimulus is 0.2.

psych_80 (numpy.array) – The fit psychometric parameters for the blocks where the probability of a left stimulus is 0.8.

rt (float) – The median response time for zero contrast trials across all training sessions. NaN indicates no zero contrast stimuli in the training sessions.
Compute all relevant performance metrics for when the subject is on trainingChoiceWorld.

Parameters:

trials (dict of str) – Dictionary of trials objects where each key is the ISO session date string.

trials_all (one.alf.io.AlfBunch) – Trials object with data concatenated over three training sessions.

Returns:

numpy.array – Proportion of correct high contrast trials for each session.

numpy.array – Total number of trials for each training session.

numpy.array – Array of psychometric parameters fit to all_trials: bias, threshold, lapse high, lapse low.

float – The median response time for all zero-contrast trials across all sessions. Returns NaN if there are no zero-contrast trials.
Compute psychometric fit parameters for trials object.
Parameters:

trials (one.alf.io.AlfBunch) – An ALF trials object containing the keys {‘probabilityLeft’, ‘contrastLeft’, ‘contrastRight’, ‘feedbackType’, ‘choice’, ‘response_times’, ‘stimOn_times’}.

signed_contrast (numpy.array) – An array of signed contrasts in percent, the length of trials, where left contrasts are -ve. If None, these are computed from the trials object.

block (float) – The block type to compute. If None, all trials are included, otherwise only trials where probabilityLeft matches this value are included. For biasedChoiceWorld, the probabilityLeft set is {0.5, 0.2, 0.8}.

plotting (bool) – Which set of psychofit model parameters to use (see notes).

compute_ci (bool) – If true, computes and returns the confidence intervals for the response at each contrast.

alpha (float, default=0.032) – Significance level for the confidence interval. Must be in (0, 1). If compute_ci is false, this value is ignored.

Returns:

numpy.array – Array of psychometric fit parameters: bias, threshold, lapse high, lapse low.

(tuple of numpy.array) – If compute_ci is true, a tuple of confidence intervals for the response at each contrast.

See also

statsmodels.stats.proportion.proportion_confint

psychofit.mle_fit_psycho

Notes

The psychofit starting parameters and model constraints used for the fit when computing the training status (e.g. trained_1a, etc.) are sub-optimal and can produce a poor fit. To keep the precise criteria the same for all subjects, these parameters have not been changed. To produce a better fit for plotting purposes, or to calculate the training status in a manner inconsistent with the IBL training pipeline, use plotting=True.
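Before any psychometric fit, the trials data must be summarised as, per signed contrast, the number of trials and the proportion of one choice. A minimal sketch of that preparation step, on a synthetic trials dict (the choice sign convention of -1 for a rightward choice is an assumption for illustration, not pulled from the IBL codebase):

```python
import numpy as np

# Synthetic trials dict standing in for a one.alf.io.AlfBunch; on each trial
# exactly one of contrastLeft/contrastRight is set, the other is NaN
trials = {
    'contrastLeft':  np.array([0.25, np.nan, 1.0, np.nan]),
    'contrastRight': np.array([np.nan, 0.25, np.nan, 1.0]),
    'choice':        np.array([1, -1, 1, -1]),  # assumed: -1 = rightward choice
}

# Signed contrast in percent: left contrasts are negative
signed_contrast = 100 * (np.nan_to_num(trials['contrastRight'])
                         - np.nan_to_num(trials['contrastLeft']))

# Per unique signed contrast: trial counts and proportion of rightward choices
contrasts, counts = np.unique(signed_contrast, return_counts=True)
prop_right = np.array([np.mean(trials['choice'][signed_contrast == c] == -1)
                       for c in contrasts])
```

The resulting (contrast, count, proportion) triples are the data format a maximum likelihood psychometric fit consumes.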
Compute the median response time on zero contrast trials from the trials object.

Parameters:

trials (one.alf.io.AlfBunch) – An ALF trials object containing the keys {‘probabilityLeft’, ‘contrastLeft’, ‘contrastRight’, ‘feedbackType’, ‘choice’, ‘response_times’, ‘stimOn_times’}.

stim_on_type (str, default='stimOn_times') – The trials key to use when calculating the response times. The difference between this and ‘feedback_times’ is used (see notes).

contrast (float) – If None, the median response time is calculated for all trials, regardless of contrast; otherwise only trials where the matching signed percent contrast was presented are used.

signed_contrast (numpy.array) – An array of signed contrasts in percent, the length of trials, where left contrasts are -ve. If None, these are computed from the trials object.

Returns:

The median response time for trials with the given contrast (returns NaN if no trials match the contrast in the trials object).

Return type:

float

Notes

The stim_on_type is ‘stimOn_times’ by default; however, for IBL rig data the photodiode is sometimes not calibrated properly, which can lead to inaccurate (or absent, i.e. NaN) stim on times. Therefore, it is sometimes more accurate to use the ‘stimOnTrigger_times’ (the time of the stimulus onset command), if available, or the ‘goCue_times’ (the time of the soundcard output TTL when the audio go cue is played) or the ‘goCueTrigger_times’ (the time of the audio go cue command).

The response/reaction time here is defined as the time between stim on and feedback, i.e. the entire open-loop trial duration.
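The computation described above can be sketched in a few lines on a synthetic trials dict (a real call would operate on a one.alf.io.AlfBunch):

```python
import numpy as np

# Synthetic trials: all three trials have zero contrast
trials = {
    'stimOn_times':   np.array([1.0, 5.0, 9.0]),
    'feedback_times': np.array([1.8, 5.4, 11.0]),
    'contrastLeft':   np.array([0.0, np.nan, 0.0]),
    'contrastRight':  np.array([np.nan, 0.0, np.nan]),
}

# Signed contrast in percent; NaN means the stimulus was on the other side
signed_contrast = 100 * (np.nan_to_num(trials['contrastRight'])
                         - np.nan_to_num(trials['contrastLeft']))
zero = signed_contrast == 0

# Response time: stim on to feedback, i.e. the open-loop trial duration
rt = np.median(trials['feedback_times'][zero] - trials['stimOn_times'][zero])
```

If `zero` selected no trials, `np.median` of the empty array would yield NaN, matching the documented return behaviour.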
Parameters:

trials (one.alf.io.AlfBunch) – An ALF trials object containing the keys {‘probabilityLeft’, ‘contrastLeft’, ‘contrastRight’, ‘feedbackType’, ‘choice’, ‘response_times’, ‘stimOn_times’}.

stim_on_type (str, default='stimOn_times') – The trials key to use when calculating the response times. The difference between this and stim_off_type is used (see notes).

stim_off_type (str, default='response_times') – The trials key to use when calculating the response times. The difference between this and stim_on_type is used (see notes).

signed_contrast (numpy.array) – An array of signed contrasts in percent, the length of trials, where left contrasts are -ve. If None, these are computed from the trials object.

block (float) – The block type to compute. If None, all trials are included, otherwise only trials where probabilityLeft matches this value are included. For biasedChoiceWorld, the probabilityLeft set is {0.5, 0.2, 0.8}.

compute_ci (bool) – If true, computes and returns the confidence intervals for the response time at each contrast.

alpha (float, default=0.32) – Significance level for the confidence interval. Must be in (0, 1). If compute_ci is false, this value is ignored.

Returns:

numpy.array – The median response times for each unique signed contrast.

numpy.array – The set of unique signed contrasts.

numpy.array – The number of trials for each unique signed contrast.

(numpy.array) – If compute_ci is true, an array of confidence intervals is returned in the shape (n_trials, 2).

Notes

The response/reaction time by default is the time between stim on and response, i.e. the entire open-loop trial duration. One could use ‘stimOn_times’ and ‘firstMovement_times’ to get the true reaction time, or ‘firstMovement_times’ and ‘response_times’ to get the true response times, or calculate the last movement onset times to get the true movement times. See the module examples for how to calculate this.
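The per-contrast summary this function returns can be sketched directly with numpy on synthetic arrays (a real call would pull these columns from the trials object using the stim_on_type/stim_off_type keys):

```python
import numpy as np

# Synthetic per-trial columns
stim_on = np.array([0.0, 2.0, 4.0, 6.0])
response = np.array([0.5, 2.3, 4.4, 6.9])
signed_contrast = np.array([-25.0, -25.0, 25.0, 25.0])

# Unique signed contrasts and the number of trials at each
contrasts, n_contrasts = np.unique(signed_contrast, return_counts=True)

# Median response time (stim on to response) per unique signed contrast
median_rt = np.array([np.median((response - stim_on)[signed_contrast == c])
                      for c in contrasts])
```

This yields the three arrays listed under Returns: median response times, the unique signed contrasts, and trial counts.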
The total number of trials is greater than 200 for each session.

Performance on easy contrasts > 80% for all sessions.

Parameters:

psych (numpy.array) – The fit psychometric parameters for three consecutive sessions. Parameters are bias, threshold, lapse high, lapse low.

n_trials (numpy.array of int) – The number of trials for each session.

perf_easy (numpy.array of float) – The proportion of correct high contrast trials for each session.

Returns:

True if the criteria are met for ‘trained_1a’.

Return type:

bool

Notes

The parameter thresholds chosen here were originally determined by averaging the parameter fits for a number of sessions determined to be of ‘good’ performance by an experimenter.
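The check described above can be sketched as a single predicate. The trial count (> 200) and easy-trial performance (> 80%) come from the text; the psychometric parameter thresholds below are illustrative assumptions, not the exact IBL criteria.

```python
import numpy as np

def criterion_1a_sketch(psych, n_trials, perf_easy):
    """Sketch of the 'trained_1a' criteria check (thresholds illustrative)."""
    bias, threshold, lapse_high, lapse_low = psych
    return bool(
        np.all(n_trials > 200)          # every session has > 200 trials
        and np.all(perf_easy > 0.8)     # > 80% correct on easy contrasts
        and abs(bias) < 16              # illustrative psychometric thresholds
        and threshold < 19
        and lapse_high < 0.2
        and lapse_low < 0.2
    )
```

A single session falling below either the trial count or the easy-trial performance fails the criterion, since all three sessions are tested with `np.all`.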
The total number of trials is greater than 400 for each session.

Performance on easy contrasts > 90% for all sessions.

The median response time across all zero contrast trials is less than 2 seconds.

Parameters:

psych (numpy.array) – The fit psychometric parameters for three consecutive sessions. Parameters are bias, threshold, lapse high, lapse low.

n_trials (numpy.array of int) – The number of trials for each session.

perf_easy (numpy.array of float) – The proportion of correct high contrast trials for each session.

rt (float) – The median response time for zero contrast trials.

Returns:

True if the criteria are met for ‘trained_1b’.

Return type:

bool

Notes

The parameter thresholds chosen here were originally chosen to be slightly stricter than 1a; however, it was decided to use round numbers so that readers would not assume a level of precision that isn’t there (remember, these parameters were not chosen with any rigor). This regrettably means that the maximum threshold fit for 1b is greater than for 1a, meaning the slope of the psychometric curve may be slightly less steep than 1a.
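A companion sketch for ‘trained_1b’, adding the response time condition. As with 1a, the trial count (> 400), easy-trial performance (> 90%) and rt < 2 s come from the text, while the psychometric parameter thresholds are illustrative assumptions.

```python
import numpy as np

def criterion_1b_sketch(psych, n_trials, perf_easy, rt):
    """Sketch of the 'trained_1b' criteria check (thresholds illustrative)."""
    bias, threshold, lapse_high, lapse_low = psych
    return bool(
        np.all(n_trials > 400)          # every session has > 400 trials
        and np.all(perf_easy > 0.9)     # > 90% correct on easy contrasts
        and abs(bias) < 10              # illustrative psychometric thresholds
        and threshold < 20
        and lapse_high < 0.1
        and lapse_low < 0.1
        and rt < 2                      # median zero-contrast RT under 2 s
    )
```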
Returns bool indicating whether the criteria for ready4ephysrig or ready4recording are met.

NB: The difference between these two is whether the sessions were acquired on a recording rig with a delay before the first trial. Neither of these two things is tested here.
Function to plot reaction time against trial number, a la the DataJoint webpage.

Parameters:

trials (one.alf.io.AlfBunch) – An ALF trials object containing the keys {‘probabilityLeft’, ‘contrastLeft’, ‘contrastRight’, ‘feedbackType’, ‘choice’, ‘response_times’, ‘stimOn_times’}.

stim_on_type (str, default='stimOn_times') – The trials key to use when calculating the response times. The difference between this and ‘feedback_times’ is used (see notes for compute_median_reaction_time).

ax (matplotlib.pyplot.Axes) – An axis object to plot onto.

title (str) – An optional plot title.

**kwargs – If ax is None, these arguments are passed to matplotlib.pyplot.subplots.

Returns:

matplotlib.pyplot.Figure – The figure handle containing the plot.

matplotlib.pyplot.Axes – The plotted axes.
dataset_types (list of str) – Optional additional spikes/clusters objects to add to the standard default list.

spike_sorter (str) – Name of the spike sorting you want to load (None for the default, which is pykilosort if available, otherwise the default MATLAB Kilosort).

brain_atlas (ibllib.atlas.BrainAtlas) – Brain atlas object (default: Allen atlas).
Find the nearest volume image index to a given x-axis coordinate.

Parameters:

x (float, numpy.array) – One or more x-axis coordinates, relative to the origin, x0.

round (bool) – If true, round to the nearest index, replacing NaN values with 0.

mode ({'raise', 'clip', 'wrap'}, default='raise') – How to behave if the coordinate lies outside of the volume: ‘raise’ (default) will raise a ValueError; ‘clip’ will replace the index with the closest index inside the volume; ‘wrap’ will return the index as is.

Returns:

The nearest indices of the image volume along the first dimension.

Return type:

numpy.array

Raises:

ValueError – At least one x value lies outside of the atlas volume. Change the ‘mode’ input to ‘wrap’ to keep these values unchanged, or ‘clip’ to return the nearest valid indices.
Find the nearest volume image index to a given y-axis coordinate.

Parameters:

y (float, numpy.array) – One or more y-axis coordinates, relative to the origin, y0.

round (bool) – If true, round to the nearest index, replacing NaN values with 0.

mode ({'raise', 'clip', 'wrap'}) – How to behave if the coordinate lies outside of the volume: ‘raise’ (default) will raise a ValueError; ‘clip’ will replace the index with the closest index inside the volume; ‘wrap’ will return the index as is.

Returns:

The nearest indices of the image volume along the second dimension.

Return type:

numpy.array

Raises:

ValueError – At least one y value lies outside of the atlas volume. Change the ‘mode’ input to ‘wrap’ to keep these values unchanged, or ‘clip’ to return the nearest valid indices.
Find the nearest volume image index to a given z-axis coordinate.

Parameters:

z (float, numpy.array) – One or more z-axis coordinates, relative to the origin, z0.

round (bool) – If true, round to the nearest index, replacing NaN values with 0.

mode ({'raise', 'clip', 'wrap'}) – How to behave if the coordinate lies outside of the volume: ‘raise’ (default) will raise a ValueError; ‘clip’ will replace the index with the closest index inside the volume; ‘wrap’ will return the index as is.

Returns:

The nearest indices of the image volume along the third dimension.

Return type:

numpy.array

Raises:

ValueError – At least one z value lies outside of the atlas volume. Change the ‘mode’ input to ‘wrap’ to keep these values unchanged, or ‘clip’ to return the nearest valid indices.
Find the nearest volume image indices to the given Cartesian coordinates.

Parameters:

xyz (array_like) – One or more Cartesian coordinates, relative to the origin, xyz0.

round (bool) – If true, round to the nearest index, replacing NaN values with 0.

mode ({'raise', 'clip', 'wrap'}) – How to behave if any coordinate lies outside of the volume: ‘raise’ (default) will raise a ValueError; ‘clip’ will replace the index with the closest index inside the volume; ‘wrap’ will return the index as is.

Returns:

The nearest indices of the image volume.

Return type:

numpy.array

Raises:

ValueError – At least one coordinate lies outside of the atlas volume. Change the ‘mode’ input to ‘wrap’ to keep these values unchanged, or ‘clip’ to return the nearest valid indices.
Object that holds image, labels and coordinate transforms for a brain atlas. Currently this is designed for the Allen CCF at several resolutions, yet this class can be used as other atlases arise.
Get the volume top, bottom, left and right surfaces, and from these the outer surface of the image volume. This is needed to compute probe insertion intersections.

NOTE: Where the top or bottom surface touches the top or bottom of the atlas volume, the surface will be set to np.nan. If you encounter issues working with these surfaces, check whether this might be the cause.
Parameters:

axis (int) – xyz convention: 0 for ml, 1 for ap, 2 for dv
- 0: sagittal slice (along ml axis)
- 1: coronal slice (along ap axis)
- 2: horizontal slice (along dv axis)

volume (str) –
- ‘image’: Allen image volume
- ‘annotation’: Allen annotation volume
- ‘surface’: outer surface of mesh
- ‘boundary’: outline of boundaries between all regions
- ‘volume’: custom volume; must pass a volume of shape ba.image.shape as the region_values argument
- ‘value’: custom value per Allen region; must pass an array of shape ba.regions.id as the region_values argument

mode (str) – Error mode for out-of-bounds coordinates
- ‘raise’: raise an error
- ‘clip’: get the first or last index

region_values – Custom values to plot
- if volume=‘volume’, region_values must have shape ba.image.shape
- if volume=‘value’, region_values must have shape ba.regions.id

mapping (str) – Mapping to use. Options can be found using ba.regions.mappings.keys().
Plot a coronal slice through the atlas at a given ap_coordinate.

Parameters:

ap_coordinate (float) – AP coordinate (m).

volume (str) –
- ‘image’: Allen image volume
- ‘annotation’: Allen annotation volume
- ‘surface’: outer surface of mesh
- ‘boundary’: outline of boundaries between all regions
- ‘volume’: custom volume; must pass a volume of shape ba.image.shape as the region_values argument
- ‘value’: custom value per Allen region; must pass an array of shape ba.regions.id as the region_values argument

region_values – Custom values to plot
- if volume=‘volume’, region_values must have shape ba.image.shape
- if volume=‘value’, region_values must have shape ba.regions.id

mapping (str) – Mapping to use. Options can be found using ba.regions.mappings.keys().
Plot a horizontal slice through the atlas at a given dv_coordinate.

Parameters:

dv_coordinate (float) – DV coordinate (m).

volume (str) –
- ‘image’: Allen image volume
- ‘annotation’: Allen annotation volume
- ‘surface’: outer surface of mesh
- ‘boundary’: outline of boundaries between all regions
- ‘volume’: custom volume; must pass a volume of shape ba.image.shape as the region_values argument
- ‘value’: custom value per Allen region; must pass an array of shape ba.regions.id as the region_values argument

region_values – Custom values to plot
- if volume=‘volume’, region_values must have shape ba.image.shape
- if volume=‘value’, region_values must have shape ba.regions.id

mapping (str) – Mapping to use. Options can be found using ba.regions.mappings.keys().
Plot a sagittal slice through the atlas at a given ml_coordinate.

Parameters:

ml_coordinate (float) – ML coordinate (m).

volume (str) –
- ‘image’: Allen image volume
- ‘annotation’: Allen annotation volume
- ‘surface’: outer surface of mesh
- ‘boundary’: outline of boundaries between all regions
- ‘volume’: custom volume; must pass a volume of shape ba.image.shape as the region_values argument
- ‘value’: custom value per Allen region; must pass an array of shape ba.regions.id as the region_values argument

region_values – Custom values to plot
- if volume=‘volume’, region_values must have shape ba.image.shape
- if volume=‘value’, region_values must have shape ba.regions.id

mapping (str) – Mapping to use. Options can be found using ba.regions.mappings.keys().
Computes the minimum distance to the trajectory line for one or a set of points. If bounds are provided, computes the minimum distance to the segment instead of an infinite line.

Parameters:

xyz (numpy.array) – Array of points of shape […, 3].

bounds (numpy.array) – Segment boundaries of shape [2, 3]; if None (default), an infinite line is assumed.
Given a Trajectory and a BrainAtlas object, computes the brain exit coordinate as the intersection of the trajectory and the brain surface (brain_atlas.surface).

Given a Trajectory and a BrainAtlas object, computes the brain entry coordinate as the intersection of the trajectory and the brain surface (brain_atlas.surface).
Converts anatomical coordinates to CCF coordinates.

Anatomical coordinates are in meters, relative to bregma, while CCF coordinates are assumed to be the volume indices multiplied by the spacing in micrometres.

Parameters:

xyz (numpy.array) – An N by 3 array of anatomical coordinates in meters, relative to bregma.

ccf_order ({'mlapdv', 'apdvml'}, default='mlapdv') – The order of the CCF coordinates returned. For IBL (the default) this is (ML, AP, DV); for Allen MCC vertices, this is (AP, DV, ML).

mode ({'raise', 'clip', 'wrap'}, default='raise') – How to behave if the coordinate lies outside of the volume: ‘raise’ (default) will raise a ValueError; ‘clip’ will replace the index with the closest index inside the volume; ‘wrap’ will return the index as is.

Returns:

numpy.array – Coordinates in CCF space (um; origin is the front left top corner of the data volume).
Convert anatomical coordinates from CCF coordinates.

Anatomical coordinates are in meters, relative to bregma, while CCF coordinates are assumed to be the volume indices multiplied by the spacing in micrometres.

Parameters:

ccf (numpy.array) – An N by 3 array of coordinates in CCF space (atlas volume indices * um resolution). The origin is the front left top corner of the data volume.

ccf_order ({'mlapdv', 'apdvml'}, default='mlapdv') – The order of the CCF coordinates given. For IBL (the default) this is (ML, AP, DV); for Allen MCC vertices, this is (AP, DV, ML).

Returns:

The MLAPDV coordinates in meters, relative to bregma.
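The meter-to-micrometre round trip between these two coordinate frames can be sketched as below. The bregma offset is a placeholder value, and the real implementation also handles per-axis ordering (ccf_order) and sign conventions, which this sketch ignores.

```python
import numpy as np

# Hypothetical offset of bregma from the CCF corner origin, in micrometres
bregma_ccf_um = np.array([5400.0, 5700.0, 330.0])

def xyz2ccf_sketch(xyz):
    # metres relative to bregma -> micrometres from the volume corner
    return bregma_ccf_um + np.asarray(xyz) * 1e6

def ccf2xyz_sketch(ccf):
    # micrometres from the volume corner -> metres relative to bregma
    return (np.asarray(ccf) - bregma_ccf_um) / 1e6
```

The two functions are inverses, so converting a bregma-relative point to CCF space and back recovers the original coordinates.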
Instantiates an atlas.BrainAtlas corresponding to the Allen CCF at the given resolution, using the IBL bregma and coordinate system. The Needles atlas defines a stretch along the AP axis and a squeeze along the DV axis.

Parameters:

res_um ({10, 25, 50} int) – The atlas resolution in micrometres; one of 10, 25 or 50 um.

**kwargs – See AllenAtlas.

Returns:

An Allen atlas object with MRI atlas scaling applied.

Notes

The scaling was determined by manually transforming the DSURQE atlas [1]_ onto the Allen CCF. The DSURQE atlas is an MRI atlas acquired from 40 C57BL/6J mice post-mortem, with 40 um isometric resolution. The alignment was performed by Mayo Faulkner. The atlas data can be found here. More information on the dataset and segmentation can be found here.
Instantiates an atlas.BrainAtlas corresponding to the Allen CCF at the given resolution, using the IBL bregma and coordinate system. The MRI Toronto atlas defines a stretch along the AP axis, a squeeze along the DV axis and a squeeze along the ML axis. These are based on the average of 12 p65 mouse MRIs [1]_.

Parameters:

res_um ({10, 25, 50} int) – The atlas resolution in micrometres; one of 10, 25 or 50 um.

**kwargs – See AllenAtlas.

Returns:

An Allen atlas object with MRI atlas scaling applied.
Compute the region of interest mask for a given camera. This corresponds to a box in the video that we will use to compute the wheel motion energy.

Finds frames in the video that have artefacts such as the mouse’s paw or a human hand. To determine frames with contamination, Otsu thresholding is applied to each frame to detect the artefact from the background image.

Parameters:

video_frames (numpy.array) – Array of video frames (nframes, nwidth, nheight).

threshold (float) – Threshold to differentiate the artefact from the background.

normalise (bool) – Whether to normalise the threshold values for each frame to the baseline.

Returns:

Mask of frames that are contaminated.
compute_motion_energy(first, last, wg, iw)[source]

Computes the video motion energy for frame indices between first and last. This function is written to be run in a parallel fashion using joblib.parallel.

Parameters:

first – First frame index of the frame interval to consider.

last – Last frame index of the frame interval to consider.

wg – WindowGenerator.

iw – Iteration of the WindowGenerator.
compute_shifts(times, me, first, last, iw, wg)[source]

Compute the cross-correlation between the video motion energy and the wheel velocity to find the mismatch between the camera TTLs and the video frames. This function is written to run in a parallel manner using joblib.parallel.

Parameters:

times – The times of the video frames across the whole session (TTLs).

me – The video motion energy computed across the whole session.
Removes artefacts from the computed shifts across time. We assume that the shifts should never increase over time and that the jump between consecutive shifts shouldn’t be greater than 1.
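The two cleaning rules stated above can be sketched as a simple forward pass; replacing a violating value with the previous one is an assumed repair strategy for illustration, not necessarily what the pipeline does.

```python
import numpy as np

def clean_shifts_sketch(shifts):
    """Sketch: enforce non-increasing shifts with consecutive jumps <= 1,
    replacing violations with the previous accepted value."""
    out = np.array(shifts, dtype=float)
    for i in range(1, len(out)):
        increased = out[i] > out[i - 1]          # shifts should never increase
        big_jump = abs(out[i] - out[i - 1]) > 1  # consecutive jump > 1 frame
        if increased or big_jump:
            out[i] = out[i - 1]
    return out
```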
Compute QC values for the wheel alignment. We consider four things:

1. The number of camera TTL values that are missing (when we have fewer TTLs than video frames)
2. The number of shifts that have NaN values, which indicates a problem in the video motion energy computation
3. The number of large jumps (>10) between the computed shifts
4. The number of jumps (>1) between the shifts after they have been cleaned

Parameters:

shifts (numpy.array) – The shifts over the session.

shifts_filt (numpy.array) – The shifts after being cleaned over the session.
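The four counts listed above can be sketched as follows; the function name, the returned dict keys, and the n_ttl/n_frames arguments are assumptions for illustration.

```python
import numpy as np

def alignment_qc_sketch(shifts, shifts_filt, n_ttl, n_frames):
    """Sketch of the four wheel-alignment QC counts described above."""
    return {
        'missing_ttls': max(n_frames - n_ttl, 0),                       # 1.
        'nan_shifts': int(np.isnan(shifts).sum()),                      # 2.
        'large_jumps': int((np.abs(np.diff(shifts)) > 10).sum()),       # 3.
        'residual_jumps': int((np.abs(np.diff(shifts_filt)) > 1).sum()) # 4.
    }
```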
Main function used to apply the video motion wheel alignment to the camera times. This function does the following:

1. Computes the video motion energy across the whole session (computed in windows and parallelised).
2. Computes the shift that should be applied to the camera times across the whole session, by computing the cross-correlation between the video motion energy and the wheel speed (computed in overlapping windows and parallelised).
3. Removes artefacts from the computed shifts.
4. Computes the QC for the wheel alignment.
5. Extracts the new camera times using the shifts computed from the video wheel alignment.
6. If upload is True, creates a summary plot of the alignment and uploads the figure to the relevant session.
Task pipeline creation from an acquisition description.
+
The principal function here is make_pipeline which reads an _ibl_experiment.description.yaml
+file and determines the set of tasks required to preprocess the session.
This module concerns the data extraction and preprocessing for IBL data. The lab servers routinely
+call local_server.job_creator to search for new sessions to extract. The job creator registers
+the new session to Alyx (i.e. creates a new session record on the database), if required, then
+deduces a set of tasks (a.k.a. the pipeline [*]) from the ‘experiment.description’ file at the
+root of the session (see dynamic_pipeline.make_pipeline). If no file exists one is created,
+inferring the acquisition hardware from the task protocol. The new session’s pipeline tasks are
+then registered for another process (or server) to query.
+
Another process calls local_server.task_queue to get a list of queued tasks from Alyx, then
+local_server.tasks_runner to loop through tasks. Each task is run by calling
+tasks.run_alyx_task with a dictionary of task information, including the Task class and its
+parameters.
+
+
Notes
+
All new tasks are subclasses of the base_tasks.DynamicTask class. All others are defunct and shall
+be removed in the future.
Functions
@@ -193,13 +218,13 @@
Standard task protocol extractor dynamic pipeline tasks.
Purge data from RIG - Find all files by rglob - Find all sessions of the found files - Check Alyx if corresponding datasetTypes have been registered as existing sessions and files on Flatiron - Delete local raw file if found on Flatiron
This is the module called by the job services on the lab servers. See
+iblscripts/deploy/serverpc/crons for the service scripts that employ this module.
dry (bool, default=False) – If true, simply prints the full session paths and task names without running the tasks.
+
count (int, default=5) – The maximum number of tasks to run from the tasks_dict list.
+
time_out (float, optional) – The time in seconds to run tasks before exiting. If set this will run tasks until the
+timeout has elapsed. NB: Only checks between tasks and will not interrupt a running task.
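The dry/count/time_out behaviour described above can be sketched with a simple loop. `run_tasks` and the `run_task` callback are hypothetical stand-ins for the real runner and tasks.run_alyx_task:

```python
import time

def run_tasks(tasks_dict, run_task, dry=False, count=5, time_out=None):
    """Hypothetical sketch of the task-running loop described above."""
    results = []
    start = time.time()
    for jdict in tasks_dict[:count]:  # run at most `count` tasks
        # NB: the timeout is only checked between tasks; a running task is never interrupted
        if time_out is not None and time.time() - start > time_out:
            break
        if dry:
            # Simply report the task rather than running it
            results.append(('dry', jdict.get('name')))
            continue
        results.append(run_task(jdict))
    return results
```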
Adds the key ‘surface_normal_unit_vector’ to the most recent surgery JSON, containing the
+provided three element vector. The recorded craniotomy center must match the coordinates
+in the provided meta file.
+
+
Parameters:
+
+
meta (dict) – The imaging meta data file containing the ‘centerMM’ key.
+
normal_vector (array_like) – A three element unit vector normal to the surface of the craniotomy center.
+
+
+
Returns:
+
The updated surgery record, or None if no surgeries found.
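A minimal sketch of building and validating that JSON field. Only the 'surface_normal_unit_vector' and 'centerMM' keys come from the documentation above; the function name and the example coordinate keys are assumptions:

```python
import numpy as np

def surface_normal_entry(meta, normal_vector):
    """Validate the vector and build the JSON field described above (sketch only)."""
    v = np.asarray(normal_vector, dtype=float)
    if v.shape != (3,):
        raise ValueError('normal_vector must have three elements')
    if not np.isclose(np.linalg.norm(v), 1.):
        raise ValueError('normal_vector must be a unit vector')
    # The craniotomy centre from the imaging meta data; the caller is expected
    # to check it matches the coordinates recorded in the surgery JSON
    center = meta['centerMM']
    return {'surface_normal_unit_vector': v.tolist()}, center
```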
Purge data from RIG
-- Find all files by rglob
-- Find all sessions of the found files
-- Check Alyx if corresponding datasetTypes have been registered as existing
-sessions and files on Flatiron
-- Delete local raw file if found on Flatiron
+
Purge data from acquisition PC.
+
Steps:
+
+
Find all files by rglob
+
Find all sessions of the found files
+
Check Alyx if corresponding datasetTypes have been registered as existing
+sessions and files on Flatiron
+
Delete local raw file if found on Flatiron
job_deck (list of dict, optional) – A list of all tasks in the same pipeline. If None, queries Alyx to get this.
+
max_md5_size (int, optional) – An optional maximum file size in bytes. Files with sizes larger than this will not have
+their MD5 checksum calculated to save time.
+
machine (str, optional) – A string identifying the machine the task is run on.
+
clobber (bool, default=True) – If true any existing logs are overwritten on Alyx.
+
location ({'remote', 'server', 'sdsc', 'aws'}) – Where you are running the task, ‘server’ - local lab server, ‘remote’ - any
+compute node/ computer, ‘sdsc’ - Flatiron compute node, ‘aws’ - using data from AWS S3
+node.
+
mode ({'log', 'raise}, default='log') – Behaviour to adopt if an error occurred. If ‘raise’, it will raise the error at the very
+end of this function (i.e. after having labeled the tasks).
+
+
+
Returns:
+
+
Task – The instantiated task object that was run.
+
list of pathlib.Path – A list of registered datasets.
+
-
to check dependency status if the jdict has a parent field. If jdict has a parent and
-job_deck is not entered, will query the database
-:param max_md5_size: in bytes, if specified, will not compute the md5 checksum above a given
-filesize to save time
-:param machine: string identifying the machine the task is run on, optional
-:param clobber: bool, if True any existing logs are overwritten, default is True
-:param location: where you are running the task, ‘server’ - local lab server, ‘remote’ - any
-compute node/ computer, ‘SDSC’ - flatiron compute node, ‘AWS’ - using data from aws s3
-:param mode: str (‘log’ or ‘raise’) behaviour to adopt if an error occured. If ‘raise’, it
-will Raise the error at the very end of this function (ie. after having labeled the tasks)
-:return:
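The 'log' vs 'raise' modes documented above describe deferred error handling; a self-contained sketch (the function name is hypothetical, standing in for the behaviour of tasks.run_alyx_task):

```python
def run_with_mode(task_fn, mode='log'):
    """Sketch of the mode={'log', 'raise'} behaviour described above."""
    error, result = None, None
    try:
        result = task_fn()
    except Exception as exc:
        # In 'log' mode the error is recorded (e.g. in the task log) and we carry on
        error = exc
    if mode == 'raise' and error is not None:
        # In 'raise' mode the error is raised at the very end, after the task is labelled
        raise error
    return result, error
```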
Task to sync camera timestamps to main DAQ timestamps N.B Signatures only reflect new daq naming convention, non-compatible with ephys when not running on server
Task to sync camera timestamps to main DAQ timestamps N.B Signatures only reflect new daq naming convention, non-compatible with ephys when not running on server
Task to sync camera timestamps to main DAQ timestamps N.B Signatures only reflect new daq naming convention, non-compatible with ephys when not running on server
Task to sync camera timestamps to main DAQ timestamps N.B Signatures only reflect new daq naming convention, non-compatible with ephys when not running on server
Task to sync camera timestamps to main DAQ timestamps
-N.B Signatures only reflect new daq naming convention, non compatible with ephys when not running on server
+N.B Signatures only reflect new daq naming convention, non-compatible with ephys when not running on server
Task to sync camera timestamps to main DAQ timestamps
-N.B Signatures only reflect new daq naming convention, non compatible with ephys when not running on server
+N.B Signatures only reflect new daq naming convention, non-compatible with ephys when not running on server
Main function to call to input a reason for marking an insertion as
-CRITICAL from the alignment GUI. It will:
-- create note text, after deleting any similar notes existing already
Main function to call to input a reason for marking a session/insertion
-as CRITICAL programmatically. It will:
+main(uuid, one=None, alyx=None)[source]
+
Main function to call to input a reason for marking a session/insertion as CRITICAL programmatically.
+
It will:
- ask reasons for selection of critical status
- check if ‘other’ reason has been selected, inquire why (free text)
- create note text, checking whether similar notes exist already
-- upload note to Alyx if none exist previously or if overwrite is chosen
-Q&A are prompted via the Python terminal.
-
Example:
-# Retrieve Alyx note to test
-one = ONE(base_url=’https://dev.alyx.internationalbrainlab.org’)
-uuid = ‘2ffd3ed5-477e-4153-9af7-7fdad3c6946b’
-main(uuid=uuid, one=one)
-
# Get notes with pattern
-notes = one.alyx.rest(‘notes’, ‘list’,
-
-
-
django=f’text__icontains,{STR_NOTES_STATIC},’
f’object_id,{uuid}’)
-
-
-
-
test_json_read = json.loads(notes[0][‘text’])
+- upload note to Alyx if none exist previously or if overwrite is chosen
+
+Q&A are prompted via the Python terminal.
Parameters:
-
uuid – session/insertion uuid
-
one – default: None -> ONE()
+
uuid (uuid.UUID, str) – An experiment UUID or an insertion UUID.
+
one (one.api.OneAlyx) – (DEPRECATED) An instance of ONE. NB: Pass in an instance of AlyxClient instead.
Infer the content_type from the uuid. Only checks to see if uuid is a session or insertion. If not recognised will raise
-an error and the content_type must be specified on note initialisation e.g Note(uuid, one, content_type=’subject’)
-:return:
+
Infer the content_type from the uuid. Only checks to see if uuid is a session or insertion.
+If not recognised will raise an error and the content_type must be specified on note
+initialisation e.g. Note(uuid, alyx, content_type=’subject’)
Upload note to alyx. If no values for nums and other_reason are specified, user will receive a prompt in command line
-asking them to choose from default list of reasons to add to note as well as option for free text. To upload without
-receiving prompt a value for either nums or other_reason must be given
+
Upload note to Alyx.
+
If no values for nums and other_reason are specified, user will receive a prompt in command
+line asking them to choose from default list of reasons to add to note as well as option
+for free text. To upload without receiving prompt a value for either nums or
+other_reason must be given.
Parameters:
-
nums – string of numbers matching those in default descrptions, e.g, ‘1,3’. Options can be see using note.describe()
-
other_reason – other comment or reasons to add to note (string)
-
kwargs –
+
nums (str) – string of numbers matching those in default descriptions, e.g, ‘1,3’. Options can be
+seen using note.describe().
+
other_reason (str) – Other comment or reason(s) to add to note.
Class for uploading a critical note to an insertion.
-
Example
-
note = CriticalInsertionNote(pid, one)
-# print list of default reasons
-note.describe()
-# to receive a command line prompt to fill in note
-note.upload_note()
-# to upload note automatically without prompt
-note.upload_note(nums=’1,4’, other_reason=’lots of bad channels’)
note = CriticalInsertionNote(uuid, one)
-# print list of default reasons
-note.describe()
-# to receive a command line prompt to fill in note
-note.upload_note()
-# to upload note automatically without prompt
-note.upload_note(nums=’1,4’, other_reason=’session with no ephys recording’)
+
>>> note = CriticalInsertionNote(uuid, AlyxClient)
+
+
+
Print list of default reasons
+
>>> note.describe()
+
+
+
To receive a command line prompt to fill in note
+
>>> note.upload_note()
+
+
+
To upload note automatically without prompt
+
>>> note.upload_note(nums='1,4', other_reason='session with no ephys recording')
+
Class for signing off a session and optionally adding a related explanation note.
Do not use directly but use classes that inherit from this class e.g. TaskSignOffNote, RawEphysSignOffNote.
Upload note to alyx. If no values for nums and other_reason are specified, user will receive a prompt in command line
-asking them to choose from default list of reasons to add to note as well as option for free text. To upload without
-receiving prompt a value for either nums or other_reason must be given
+
Upload note to Alyx.
+
If no values for nums and other_reason are specified, user will receive a prompt in command
+line asking them to choose from default list of reasons to add to note as well as option
+for free text. To upload without receiving prompt a value for either nums or
+other_reason must be given.
Parameters:
-
nums – string of numbers matching those in default descrptions, e.g, ‘1,3’. Options can be see using note.describe()
-
other_reason – other comment or reasons to add to note (string)
-
kwargs –
+
nums (str) – string of numbers matching those in default descriptions, e.g, ‘1,3’. Options can be
+seen using note.describe().
+
other_reason (str) – Other comment or reason(s) to add to note.
Class for signing off a task part of a session and optionally adding a related explanation note.
-
Example
-
note = TaskSignOffNote(uuid, one, ‘_ephysChoiceWorld_00’)
-# to sign off session without any note
-note.sign_off()
-# print list of default reasons
-note.describe()
-# to upload note and sign off with prompt
-note.upload_note()
-# to upload note automatically without prompt
-note.upload_note(nums=’1,4’, other_reason=’session with no ephys recording’)
Class for signing off a passive part of a session and optionally adding a related explanation note.
-
Example
-
note = PassiveSignOffNote(uuid, one, ‘_passiveChoiceWorld_00’)
-# to sign off session without any note
-note.sign_off()
-# print list of default reasons
-note.describe()
-# to upload note and sign off with prompt
-note.upload_note()
-# to upload note automatically without prompt
-note.upload_note(nums=’1,4’, other_reason=’session with no ephys recording’)
Class for signing off a video part of a session and optionally adding a related explanation note.
-
Example
-
note = VideoSignOffNote(uuid, one, ‘_camera_left’)
-# to sign off session without any note
-note.sign_off()
-# print list of default reasons
-note.describe()
-# to upload note and sign off with prompt
-note.upload_note()
-# to upload note automatically without prompt
-note.upload_note(nums=’1,4’, other_reason=’session with no ephys recording’)
Class for signing off a raw ephys part of a session and optionally adding a related explanation note.
-
Example
-
note = RawEphysSignOffNote(uuid, one, ‘_neuropixel_raw_probe00’)
-# to sign off session without any note
-note.sign_off()
-# print list of default reasons
-note.describe()
-# to upload note and sign off with prompt
-note.upload_note()
-# to upload note automatically without prompt
-note.upload_note(nums=’1,4’, other_reason=’session with no ephys recording’)
Class for signing off a spikesorting part of a session and optionally adding a related explanation note.
-
Example
-
note = SpikeSortingSignOffNote(uuid, one, ‘_neuropixel_spike_sorting_probe00’)
-# to sign off session without any note
-note.sign_off()
-# print list of default reasons
-note.describe()
-# to upload note and sign off with prompt
-note.upload_note()
-# to upload note automatically without prompt
-note.upload_note(nums=’1,4’, other_reason=’session with no ephys recording’)
+
Class for signing off a spike sorting part of a session and optionally adding a related explanation note.
Class for signing off an alignment part of a session and optionally adding a related explanation note.
-
Example
-
note = AlignmentSignOffNote(uuid, one, ‘_neuropixel_alignment_probe00’)
-# to sign off session without any note
-note.sign_off()
-# print list of default reasons
-note.describe()
-# to upload note and sign off with prompt
-note.upload_note()
-# to upload note automatically without prompt
-note.upload_note(nums=’1,4’, other_reason=’session with no ephys recording’)
Check that the time difference between the go cue sound being triggered and
effectively played is smaller than 1.5 ms.
Metric: M = goCue_times - goCueTrigger_times
-Criterion: 0 < M <= 0.001 s
+Criterion: 0 < M <= 0.0015 s
Units: seconds [s]
Parameters:
@@ -701,7 +702,7 @@
Check that the time difference between the error sound being triggered and
effectively played is smaller than 1.5 ms.
Metric: M = errorCue_times - errorCueTrigger_times
-Criterion: 0 < M <= 0.001 s
+Criterion: 0 < M <= 0.0015 s
Units: seconds [s]
Parameters:
@@ -717,7 +718,7 @@
Check that the time difference between the visual stimulus onset-command being triggered
and the stimulus effectively appearing on the screen is smaller than 150 ms.
Metric: M = stimOn_times - stimOnTrigger_times
-Criterion: 0 < M < 0.150 s
+Criterion: 0 < M < 0.15 s
Units: seconds [s]
Parameters:
@@ -734,7 +735,7 @@
being triggered and the visual stimulus effectively turning off on the screen
is smaller than 150 ms.
Metric: M = stimOff_times - stimOffTrigger_times
-Criterion: 0 < M < 0.150 s
+Criterion: 0 < M < 0.15 s
Units: seconds [s]
Parameters:
@@ -751,7 +752,7 @@
being triggered and the visual stimulus effectively freezing on the screen
is smaller than 150 ms.
Metric: M = stimFreeze_times - stimFreezeTrigger_times
-Criterion: 0 < M < 0.150 s
+Criterion: 0 < M < 0.15 s
Units: seconds [s]
The ONE function is cached and therefore the One object persists beyond this test.
+Here we return the mode back to the default after testing behaviour in offline mode.
-import logging
+"""Computing and testing IBL training status criteria.
+
+For an in-depth description of each training status, see `Appendix 2`_ of the IBL Protocol For Mice
+Training.
+
+.. _Appendix 2: https://figshare.com/articles/preprint/A_standardized_and_reproducible_method_to_\
+measure_decision-making_in_mice_Appendix_2_IBL_protocol_for_mice_training/11634729
+
+Examples
+--------
+Plot the psychometric curve for a given session.
+
+>>> trials = ONE().load_object(eid, 'trials')
+>>> fig, ax = plot_psychometric(trials)
+
+Compute 'response times', defined as the duration of open-loop for each contrast.
+
+>>> reaction_time, contrasts, n_contrasts = compute_reaction_time(trials)
+
+Compute 'reaction times', defined as the time between go cue and first detected movement.
+NB: These may be negative!
+
+>>> reaction_time, contrasts, n_contrasts = compute_reaction_time(
+... trials, stim_on_type='goCue_times', stim_off_type='firstMovement_times')
+
+Compute 'response times', defined as the time between first detected movement and response.
+
+>>> reaction_time, contrasts, n_contrasts = compute_reaction_time(
+... trials, stim_on_type='firstMovement_times', stim_off_type='response_times')
+
+Compute 'movement times', defined as the time between last detected movement and response threshold.
+
+>>> import brainbox.behavior.wheel as wh
+>>> wheel_moves = ONE().load_object(eid, 'wheelMoves')
+>>> trials['lastMovement_times'] = wh.get_movement_onset(wheel_moves.intervals, trials['response_times'])
+>>> reaction_time, contrasts, n_contrasts = compute_reaction_time(
+... trials, stim_on_type='lastMovement_times', stim_off_type='response_times')
+
+"""
+import logging
+import datetime
+import re
+from enum import IntFlag, auto, unique
@@ -119,6 +158,7 @@
'choice', 'response_times', 'stimOn_times']
+"""list of str: The required keys in the trials object for computing training status."""
@@ -152,7 +193,8 @@
Source code for brainbox.behavior.training
... assert TrainingStatus[status.upper()] in ~TrainingStatus.FAILED, 'Subject untrained'
... assert TrainingStatus[status.upper()] in TrainingStatus.TRAINED ^ TrainingStatus.READY
- # Get the next training status
+ Get the next training status
+
+>>> next(member for member in sorted(TrainingStatus) if member > TrainingStatus[status.upper()])
+<TrainingStatus.READY4RECORDING: 128>
@@ -160,7 +202,7 @@
-----
- ~TrainingStatus.TRAINED means any status but trained 1a or trained 1b.
- A subject may achieve both TRAINED_1A and TRAINED_1B within a single session, therefore it
- is possible to have skipped the TRAINED_1A session status.
+    is possible to have skipped the TRAINED_1A session status.
+    """
+
+    UNTRAINABLE = auto()
+    UNBIASABLE = auto()
@@ -181,17 +223,23 @@
[docs]
def get_lab_training_status(lab, date=None, details=True, one=None):
    """
- Computes the training status of all alive and water restricted subjects in a specified lab
-
- :param lab: lab name (must match the name registered on Alyx)
- :type lab: string
- :param date: the date from which to compute training status from. If not specified will compute
- from the latest date with available data
- :type date: string of format 'YYYY-MM-DD'
- :param details: whether to display all information about training status computation e.g
- performance, number of trials, psychometric fit parameters
- :type details: bool
- :param one: instantiation of ONE class
+ Computes the training status of all alive and water restricted subjects in a specified lab.
+
+ The response are printed to std out.
+
+ Parameters
+ ----------
+ lab : str
+ Lab name (must match the name registered on Alyx).
+ date : str
+ The ISO date from which to compute training status. If not specified will compute from the
+ latest date with available data. Format should be 'YYYY-MM-DD'.
+ details : bool
+ Whether to display all information about training status computation e.g. performance,
+ number of trials, psychometric fit parameters.
+ one : one.api.OneAlyx
+ An instance of ONE.
+
    """
    one = one or ONE()
    subj_lab = one.alyx.rest('subjects', 'list', lab=lab, alive=True, water_restricted=True)
@@ -205,17 +253,20 @@
[docs]
def get_subject_training_status(subj, date=None, details=True, one=None):
    """
- Computes the training status of specified subject
-
- :param subj: subject nickname (must match the name registered on Alyx)
- :type subj: string
- :param date: the date from which to compute training status from. If not specified will compute
- from the latest date with available data
- :type date: string of format 'YYYY-MM-DD'
- :param details: whether to display all information about training status computation e.g
- performance, number of trials, psychometric fit parameters
- :type details: bool
- :param one: instantiation of ONE class
+ Computes the training status of specified subject and prints results to std out.
+
+ Parameters
+ ----------
+ subj : str
+ Subject nickname (must match the name registered on Alyx).
+ date : str
+ The ISO date from which to compute training status. If not specified will compute from the
+ latest date with available data. Format should be 'YYYY-MM-DD'.
+ details : bool
+ Whether to display all information about training status computation e.g. performance,
+ number of trials, psychometric fit parameters.
+ one : one.api.OneAlyx
+        An instance of ONE.
+    """
    one = one or ONE()
@@ -246,19 +297,28 @@
data from the three (or as many as are available) previous sessions up to the specified date. If no date is specified it will load data from the last three training sessions that have data available.
- :param subj: subject nickname (must match the name registered on Alyx)
- :type subj: string
- :param date: the date from which to compute training status from. If not specified will compute
- from the latest date with available data
- :type date: string of format 'YYYY-MM-DD'
- :param one: instantiation of ONE class
- :returns:
- - trials - dict of trials objects where each key is the session date
- - task_protocol - list of the task protocol used for each of the sessions
- - ephys_sess_data - list of dates where training was conducted on ephys rig. Empty list if
- all sessions on training rig
- - n_delay - number of sessions on ephys rig that had delay prior to starting session
- > 15min. Returns 0 is no sessions detected
+ Parameters
+ ----------
+ subj : str
+ Subject nickname (must match the name registered on Alyx).
+ date : str
+ The ISO date from which to compute training status. If not specified will compute from the
+ latest date with available data. Format should be 'YYYY-MM-DD'.
+ one : one.api.OneAlyx
+ An instance of ONE.
+
+ Returns
+ -------
+ iblutil.util.Bunch
+ Dictionary of trials objects where each key is the ISO session date string.
+ list of str
+ List of the task protocol used for each of the sessions.
+ list of str
+ List of ISO date strings where training was conducted on ephys rig. Empty list if all
+ sessions on training rig.
+ n_delay : int
+ Number of sessions on ephys rig that had delay prior to starting session > 15min.
+        Returns 0 if no sessions detected.
+    """
    one = one or ONE()
@@ -347,21 +407,31 @@
[docs]
def get_training_status(trials, task_protocol, ephys_sess_dates, n_delay):
    """
- Compute training status of a subject from three consecutive training datasets
+ Compute training status of a subject from consecutive training datasets.
- :param trials: dict containing trials objects from three consecutive training sessions
- :type trials: Bunch
- :param task_protocol: task protocol used for the three training session, can be 'training',
- 'biased' or 'ephys'
- :type task_protocol: list of strings
- :param ephys_sess_dates: dates of sessions conducted on ephys rig
- :type ephys_sess_dates: list of strings
- :param n_delay: number of sessions on ephys rig with delay before start > 15 min
- :type n_delay: int
- :returns:
- - status - training status of subject
- - info - Bunch containing performance metrics that decide training status e.g performance
- on easy trials, number of trials, psychometric fit parameters, reaction time
+ For IBL, training status is calculated using trials from the last three consecutive sessions.
+
+ Parameters
+ ----------
+ trials : dict of str
+ Dictionary of trials objects where each key is the ISO session date string.
+ task_protocol : list of str
+ Task protocol used for each training session in `trials`, can be 'training', 'biased' or
+ 'ephys'.
+ ephys_sess_dates : list of str
+ List of ISO date strings where training was conducted on ephys rig. Empty list if all
+ sessions on training rig.
+ n_delay : int
+ Number of sessions on ephys rig that had delay prior to starting session > 15min.
+ Returns 0 if no sessions detected.
+
+ Returns
+ -------
+ str
+ Training status of the subject.
+ iblutil.util.Bunch
+ Bunch containing performance metrics that decide training status i.e. performance on easy
+        trials, number of trials, psychometric fit parameters, reaction time.
+    """
    info = Bunch()
@@ -438,28 +508,31 @@
def display_status(subj, sess_dates, status, perf_easy=None, n_trials=None, psych=None,
                   psych_20=None, psych_80=None, rt=None):
    """
- Display training status of subject to terminal
-
- :param subj: subject nickname
- :type subj: string
- :param sess_dates: training session dates used to determine training status
- :type sess_dates: list of strings
- :param status: training status of subject
- :type status: string
- :param perf_easy: performance on easy trials for each training sessions
- :type perf_easy: np.array
- :param n_trials: number of trials for each training sessions
- :type n_trials: np.array
- :param psych: parameters of psychometric curve fit to data from all training sessions
- :type psych: np.array - bias, threshold, lapse high, lapse low
- :param psych_20: parameters of psychometric curve fit to data in 20 (probability left) block
- from all training sessions
- :type psych_20: np.array - bias, threshold, lapse high, lapse low
- :param psych_80: parameters of psychometric curve fit to data in 80 (probability left) block
- from all training sessions
- :type psych_80: np.array - bias, threshold, lapse high, lapse low
- :param rt: median reaction time on zero contrast trials across all training sessions (if nan
- indicates no zero contrast stimuli in training sessions)
+ Display training status of subject to terminal.
+
+ Parameters
+ ----------
+ subj : str
+ Subject nickname (must match the name registered on Alyx).
+ sess_dates : list of str
+ ISO date strings of training sessions used to determine training status.
+ status : str
+ Training status of subject.
+ perf_easy : numpy.array
+ Proportion of correct high contrast trials for each training session.
+ n_trials : numpy.array
+ Total number of trials for each training session.
+ psych : numpy.array
+ Psychometric parameters fit to data from all training sessions - bias, threshold, lapse
+ high, lapse low.
+ psych_20 : numpy.array
+ The fit psychometric parameters for the blocks where probability of a left stimulus is 0.2.
+ psych_80 : numpy.array
+ The fit psychometric parameters for the blocks where probability of a left stimulus is 0.8.
+ rt : float
+ The median response time for zero contrast trials across all training sessions. NaN
+ indicates no zero contrast stimuli in training sessions.
+
    """
    if perf_easy is None:
@@ -494,15 +567,19 @@
[docs]
def concatenate_trials(trials):
    """
- Concatenate trials from different training sessions
+ Concatenate trials from different training sessions.
- :param trials: dict containing trials objects from three consecutive training sessions,
- keys are session dates
- :type trials: Bunch
- :return: trials object with data concatenated over three training sessions
- :rtype: dict
+ Parameters
+ ----------
+ trials : dict of str
+ Dictionary of trials objects where each key is the ISO session date string.
+
+ Returns
+ -------
+ one.alf.io.AlfBunch
+        Trials object with data concatenated over three training sessions.
+    """
-    trials_all = Bunch()
+    trials_all = AlfBunch()
    for k in TRIALS_KEYS:
        trials_all[k] = np.concatenate(list(trials[kk][k] for kk in trials.keys()))
@@ -514,18 +591,27 @@
[docs]
def compute_training_info(trials, trials_all):
    """
- Compute all relevant performance metrics for when subject is on trainingChoiceWorld
+ Compute all relevant performance metrics for when subject is on trainingChoiceWorld.
- :param trials: dict containing trials objects from three consecutive training sessions,
- keys are session dates
- :type trials: Bunch
- :param trials_all: trials object with data concatenated over three training sessions
- :type trials_all: Bunch
- :returns:
- - perf_easy - performance of easy trials for each session
- - n_trials - number of trials in each session
- - psych - parameters for psychometric curve fit to all sessions
- - rt - median reaction time for zero contrast stimuli over all sessions
+ Parameters
+ ----------
+ trials : dict of str
+ Dictionary of trials objects where each key is the ISO session date string.
+ trials_all : one.alf.io.AlfBunch
+ Trials object with data concatenated over three training sessions.
+
+ Returns
+ -------
+ numpy.array
+ Proportion of correct high contrast trials for each session.
+ numpy.array
+ Total number of trials for each training session.
+ numpy.array
+ Array of psychometric parameters fit to `all_trials` - bias, threshold, lapse high,
+ lapse low.
+ float
+        The median response time for all zero-contrast trials across all sessions. Returns NaN if
+        there are no zero-contrast trials.
+    """
    signed_contrast = get_signed_contrast(trials_all)
@@ -653,17 +739,50 @@
[docs]
-def compute_psychometric(trials, signed_contrast=None, block=None, plotting=False, compute_ci=False, alpha=0.32):
+def compute_psychometric(trials, signed_contrast=None, block=None, plotting=False, compute_ci=False, alpha=.032):
    """
- Compute psychometric fit parameters for trials object
+ Compute psychometric fit parameters for trials object.
- :param trials: trials object that must contain contrastLeft, contrastRight and probabilityLeft
- :type trials: dict
- :param signed_contrast: array of signed contrasts in percent, where -ve values are on the left
- :type signed_contrast: np.array
- :param block: biased block can be either 0.2 or 0.8
- :type block: float
- :return: array of psychometric fit parameters - bias, threshold, lapse high, lapse low
+ Parameters
+ ----------
+ trials : one.alf.io.AlfBunch
+ An ALF trials object containing the keys {'probabilityLeft', 'contrastLeft',
+ 'contrastRight', 'feedbackType', 'choice', 'response_times', 'stimOn_times'}.
+ signed_contrast : numpy.array
+ An array of signed contrasts in percent the length of trials, where left contrasts are -ve.
+ If None, these are computed from the trials object.
+ block : float
+ The block type to compute. If None, all trials are included, otherwise only trials where
+ probabilityLeft matches this value are included. For biasedChoiceWorld, the
+ probabilityLeft set is {0.5, 0.2, 0.8}.
+ plotting : bool
+ Which set of psychofit model parameters to use (see notes).
+ compute_ci : bool
+ If true, computes and returns the confidence intervals for response at each contrast.
+ alpha : float, default=0.032
+ Significance level for confidence interval. Must be in (0, 1). If `compute_ci` is false,
+ this value is ignored.
+
+ Returns
+ -------
+ numpy.array
+ Array of psychometric fit parameters - bias, threshold, lapse high, lapse low.
+    (tuple of numpy.array)
+        If `compute_ci` is true, a tuple of lower and upper confidence intervals for the
+        proportion of rightward choices at each contrast is also returned.
+
+ See Also
+ --------
+ statsmodels.stats.proportion.proportion_confint - The function used to compute confidence
+ interval.
+ psychofit.mle_fit_psycho - The function used to fit the psychometric parameters.
+
+ Notes
+ -----
+ The psychofit starting parameters and model constraints used for the fit when computing the
+ training status (e.g. trained_1a, etc.) are sub-optimal and can produce a poor fit. To keep
+ the precise criteria the same for all subjects, these parameters have not changed. To produce a
+ better fit for plotting purposes, or to calculate the training status in a manner inconsistent
+ with the IBL training pipeline, use plotting=True.
+ """
 if signed_contrast is None:
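The fit above uses psychofit's `erf_psycho_2gammas` model: an erf function with a bias, a slope (threshold), and two lapse rates. A minimal sketch of that model (assuming psychofit's parameter order of bias, slope, then the two lapse rates) can be evaluated directly:

```python
import math
import numpy as np

def erf_psycho_2gammas(params, x):
    # Sketch of the two-lapse erf psychometric function: maps signed contrast
    # to probability of a rightward choice. Parameter order is assumed.
    bias, slope, gamma1, gamma2 = params
    erf = np.vectorize(math.erf)
    return gamma1 + (1 - gamma1 - gamma2) * (erf((x - bias) / slope) + 1) / 2

contrasts = np.array([-100., -50., -25., 0., 25., 50., 100.])
p_right = erf_psycho_2gammas([0., 20., 0.05, 0.05], contrasts)
```

With zero bias and symmetric lapses the curve passes through 0.5 at zero contrast and asymptotes at the lapse rates.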
@@ -677,10 +796,12 @@
Source code for brainbox.behavior.training
 if not np.any(block_idx):
     return np.nan * np.zeros(4)
- prob_choose_right, contrasts, n_contrasts = compute_performance(trials, signed_contrast=signed_contrast, block=block,
-                                                                 prob_right=True)
+ prob_choose_right, contrasts, n_contrasts = compute_performance(
+     trials, signed_contrast=signed_contrast, block=block, prob_right=True)
 if plotting:
+     # These starting parameters and constraints tend to produce a better fit, and are therefore
+     # used for plotting.
     psych, _ = psy.mle_fit_psycho(np.vstack([contrasts, n_contrasts, prob_choose_right]),
                                   P_model='erf_psycho_2gammas',
@@ -689,7 +810,8 @@
                                   parmax=np.array([50., 50., 0.2, 0.2]), nfits=10)
 else:
-
+     # These starting parameters and constraints are not ideal but are still used for computing
+     # the training status for consistency.
     psych, _ = psy.mle_fit_psycho(np.vstack([contrasts, n_contrasts, prob_choose_right]),
                                   P_model='erf_psycho_2gammas',
@@ -701,7 +823,7 @@
 import statsmodels.stats.proportion as smp  # noqa
 # choice == -1 means contrast on right hand side
 n_right = np.vectorize(lambda x: np.sum(trials['choice'][(x == signed_contrast) & block_idx] == -1))(contrasts)
- ci = smp.proportion_confint(n_right, n_contrasts, alpha=alpha / 10, method='normal') - prob_choose_right
+ ci = smp.proportion_confint(n_right, n_contrasts, alpha=alpha, method='normal') - prob_choose_right
 return psych, ci
 else:
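The corrected call passes `alpha` straight through to `statsmodels.stats.proportion.proportion_confint` with `method='normal'`. A dependency-light sketch of that normal-approximation (Wald) interval, using only the standard library and numpy:

```python
import numpy as np
from statistics import NormalDist

def proportion_confint_normal(count, nobs, alpha=0.05):
    # Normal-approximation (Wald) binomial confidence interval; equivalent in
    # spirit to statsmodels' proportion_confint(..., method='normal').
    p = count / nobs
    z = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    half = z * np.sqrt(p * (1 - p) / nobs)
    return p - half, p + half

# e.g. 8 rightward choices out of 10 trials at one contrast
lo, hi = proportion_confint_normal(np.array([8]), np.array([10]), alpha=0.05)
```

Note the Wald interval can exceed [0, 1] near the boundaries, as it does here; statsmodels clips or offers better methods ('wilson', 'beta') for small n.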
@@ -713,17 +835,39 @@
[docs]
def compute_median_reaction_time(trials, stim_on_type='stimOn_times', contrast=None, signed_contrast=None):
    """
- Compute median reaction time on zero contrast trials from trials object
+ Compute median response time on zero contrast trials from trials object
- :param trials: trials object that must contain response_times and stimOn_times
- :type trials: dict
- :param stim_on_type: feedback from which to compute the reaction time. Default is stimOn_times
- i.e when stimulus is presented
- :type stim_on_type: string (must be a valid key in trials object)
- :param signed_contrast: array of signed contrasts in percent, where -ve values are on the left
- :type signed_contrast: np.array
- :return: float of median reaction time at zero contrast (returns nan if no zero contrast
- trials in trials object)
+ Parameters
+ ----------
+ trials : one.alf.io.AlfBunch
+ An ALF trials object containing the keys {'probabilityLeft', 'contrastLeft',
+ 'contrastRight', 'feedbackType', 'choice', 'response_times', 'stimOn_times'}.
+ stim_on_type : str, default='stimOn_times'
+ The trials key to use when calculating the response times. The difference between this and
+ 'feedback_times' is used (see notes).
+ contrast : float
+ If None, the median response time is calculated for all trials, regardless of contrast,
+ otherwise only trials where the matching signed percent contrast was presented are used.
+ signed_contrast : numpy.array
+ An array of signed contrasts in percent the length of trials, where left contrasts are -ve.
+ If None, these are computed from the trials object.
+
+ Returns
+ -------
+ float
+ The median response time for trials with `contrast` (returns NaN if no trials matching
+ `contrast` in trials object).
+
+ Notes
+ -----
+ - The `stim_on_type` is 'stimOn_times' by default, however for IBL rig data, the photodiode is
+ sometimes not calibrated properly which can lead to inaccurate (or absent, i.e. NaN) stim on
+ times. Therefore, it is sometimes more accurate to use the 'stimOnTrigger_times' (the time of
+ the stimulus onset command), if available, or the 'goCue_times' (the time of the soundcard
+ output TTL when the audio go cue is played) or the 'goCueTrigger_times' (the time of the
+ audio go cue command).
+ - The response/reaction time here is defined as the time between stim on and feedback, i.e. the
+ entire open-loop trial duration.
+ """
 if signed_contrast is None:
     signed_contrast = get_signed_contrast(trials)
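As a rough illustration of the computation described above, using a hypothetical minimal trials dict (the real implementation uses `get_signed_contrast`, which also scales contrasts to percent):

```python
import numpy as np

# hypothetical trials dict with two zero-contrast trials (indices 0 and 1)
trials = {
    'stimOn_times': np.array([1.0, 5.0, 9.0, 13.0]),
    'response_times': np.array([1.8, 5.4, 10.1, 13.9]),
    'contrastLeft': np.array([0., np.nan, 0.25, np.nan]),
    'contrastRight': np.array([np.nan, 0., np.nan, 1.]),
}
# signed contrast: right positive, left negative; NaN means no stimulus on that side
signed_contrast = np.nan_to_num(trials['contrastRight']) - np.nan_to_num(trials['contrastLeft'])
zero = signed_contrast == 0
# median time between stim on and response on zero-contrast trials
median_rt = np.median(trials['response_times'][zero] - trials['stimOn_times'][zero])
```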
@@ -748,13 +892,55 @@
def compute_reaction_time(trials, stim_on_type='stimOn_times', stim_off_type='response_times',
                          signed_contrast=None, block=None, compute_ci=False, alpha=0.32):
    """
- Compute median reaction time for all contrasts
- :param trials: trials object that must contain response_times and stimOn_times
- :param stim_on_type:
- :param stim_off_type:
- :param signed_contrast:
- :param block:
- :return:
+ Compute median response time for all contrasts.
+
+ Parameters
+ ----------
+ trials : one.alf.io.AlfBunch
+ An ALF trials object containing the keys {'probabilityLeft', 'contrastLeft',
+ 'contrastRight', 'feedbackType', 'choice', 'response_times', 'stimOn_times'}.
+ stim_on_type : str, default='stimOn_times'
+ The trials key to use when calculating the response times. The difference between this and
+ `stim_off_type` is used (see notes).
+ stim_off_type : str, default='response_times'
+ The trials key to use when calculating the response times. The difference between this and
+ `stim_on_type` is used (see notes).
+ signed_contrast : numpy.array
+ An array of signed contrasts in percent the length of trials, where left contrasts are -ve.
+ If None, these are computed from the trials object.
+ block : float
+ The block type to compute. If None, all trials are included, otherwise only trials where
+ probabilityLeft matches this value are included. For biasedChoiceWorld, the
+ probabilityLeft set is {0.5, 0.2, 0.8}.
+ compute_ci : bool
+ If true, computes and returns the confidence intervals for response time at each contrast.
+ alpha : float, default=0.32
+ Significance level for confidence interval. Must be in (0, 1). If `compute_ci` is false,
+ this value is ignored.
+
+ Returns
+ -------
+ numpy.array
+ The median response times for each unique signed contrast.
+ numpy.array
+ The set of unique signed contrasts.
+ numpy.array
+ The number of trials for each unique signed contrast.
+ (numpy.array)
+ If `compute_ci` is true, an array of confidence intervals is returned in the shape
+ (n_trials, 2).
+
+ Notes
+ -----
+ - The response/reaction time by default is the time between stim on and response, i.e. the
+ entire open-loop trial duration. One could use 'stimOn_times' and 'firstMovement_times' to
+ get the true reaction time, or 'firstMovement_times' and 'response_times' to get the true
+ response times, or calculate the last movement onset times and calculate the true movement
+ times. See module examples for how to calculate this.
+
+ See Also
+ --------
+ scipy.stats.bootstrap - The function used to compute the confidence interval.
+ """
 if signed_contrast is None:
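The per-contrast aggregation described above can be sketched with plain numpy (hypothetical response times; the real function also handles blocks and confidence intervals):

```python
import numpy as np

# hypothetical signed contrasts (percent) and stim-on-to-response times (s)
signed_contrast = np.array([-25., 0., 0., 25., 25., -25.])
rts = np.array([0.5, 1.2, 0.8, 0.4, 0.6, 0.7])

# group trials by unique signed contrast and take the median per group
contrasts, idx = np.unique(signed_contrast, return_inverse=True)
median_rt = np.array([np.median(rts[idx == i]) for i in range(contrasts.size)])
n_trials = np.bincount(idx)  # trial count per unique contrast
```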
@@ -786,9 +972,35 @@
[docs]
def criterion_1a(psych, n_trials, perf_easy):
    """
- Returns bool indicating whether criterion for trained_1a is met. All criteria documented here
- (https://figshare.com/articles/preprint/A_standardized_and_reproducible_method_to_measure_
- decision-making_in_mice_Appendix_2_IBL_protocol_for_mice_training/11634729)
+ Returns bool indicating whether criteria for status 'trained_1a' are met.
+
+ Criteria
+ --------
+ - Bias is less than 16
+ - Threshold is less than 19
+ - Lapse rate on both sides is less than 0.2
+ - The total number of trials is greater than 200 for each session
+ - Performance on easy contrasts > 80% for all sessions
+
+ Parameters
+ ----------
+ psych : numpy.array
+ The fit psychometric parameters for three consecutive sessions. Parameters are bias,
+ threshold, lapse high, lapse low.
+ n_trials : numpy.array of int
+ The number of trials for each session.
+ perf_easy : numpy.array of float
+ The proportion of correct high contrast trials for each session.
+
+ Returns
+ -------
+ bool
+ True if the criteria are met for 'trained_1a'.
+
+ Notes
+ -----
+ The parameter thresholds chosen here were originally determined by averaging the parameter fits
+ for a number of sessions determined to be of 'good' performance by an experimenter.
+ """
 criterion = (abs(psych[0]) < 16 and psych[1] < 19 and psych[2] < 0.2 and psych[3] < 0.2 and
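The criteria listed above map directly onto a single boolean expression; a sketch mirroring the source's check:

```python
import numpy as np

def meets_trained_1a(psych, n_trials, perf_easy):
    # psych: bias, threshold, lapse high, lapse low (one fit over the sessions)
    # n_trials / perf_easy: per-session trial counts and easy-contrast performance
    return bool(abs(psych[0]) < 16 and psych[1] < 19 and
                psych[2] < 0.2 and psych[3] < 0.2 and
                np.all(n_trials > 200) and np.all(perf_easy > 0.8))

ok = meets_trained_1a(np.array([5., 15., 0.1, 0.05]),
                      np.array([350, 400, 420]),
                      np.array([0.9, 0.92, 0.88]))
```

All thresholds must hold simultaneously; a single session with too few trials fails the subject.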
@@ -801,7 +1013,41 @@
[docs]
def criterion_1b(psych, n_trials, perf_easy, rt):
    """
- Returns bool indicating whether criterion for trained_1b is met.
+ Returns bool indicating whether criteria for trained_1b are met.
+
+ Criteria
+ --------
+ - Bias is less than 10
+ - Threshold is less than 20 (see notes)
+ - Lapse rate on both sides is less than 0.1
+ - The total number of trials is greater than 400 for each session
+ - Performance on easy contrasts > 90% for all sessions
+ - The median response time across all zero contrast trials is less than 2 seconds
+
+ Parameters
+ ----------
+ psych : numpy.array
+ The fit psychometric parameters for three consecutive sessions. Parameters are bias,
+ threshold, lapse high, lapse low.
+ n_trials : numpy.array of int
+ The number of trials for each session.
+ perf_easy : numpy.array of float
+ The proportion of correct high contrast trials for each session.
+ rt : float
+ The median response time for zero contrast trials.
+
+ Returns
+ -------
+ bool
+ True if the criteria are met for 'trained_1b'.
+
+ Notes
+ -----
+ The parameter thresholds chosen here were originally chosen to be slightly stricter than 1a,
+ however it was decided to use round numbers so that readers would not assume a level of
+ precision that isn't there (remember, these parameters were not chosen with any rigor). This
+ regrettably means that the maximum threshold fit for 1b is greater than for 1a, meaning the
+ slope of the psychometric curve may be slightly less steep than 1a.
+ """
 criterion = (abs(psych[0]) < 10 and psych[1] < 20 and psych[2] < 0.1 and psych[3] < 0.1 and
              np.all(n_trials > 400) and np.all(perf_easy > 0.9) and rt < 2)
@@ -813,11 +1059,44 @@
[docs]
def criterion_ephys(psych_20, psych_80, n_trials, perf_easy, rt):
    """
- Returns bool indicating whether criterion for ready4ephysrig or ready4recording is met.
+ Returns bool indicating whether criteria for ready4ephysrig or ready4recording are met.
+
+ NB: The difference between these two is whether the sessions were acquired on a recording rig
+ with a delay before the first trial. Neither of these two things is tested here.
+
+ Criteria
+ --------
+ - Lapse on both sides < 0.1 for both bias blocks
+ - Bias shift between blocks > 5
+ - Total number of trials > 400 for all sessions
+ - Performance on easy contrasts > 90% for all sessions
+ - Median response time for zero contrast stimuli < 2 seconds
+
+ Parameters
+ ----------
+ psych_20 : numpy.array
+ The fit psychometric parameters for the blocks where probability of a left stimulus is 0.2.
+ Parameters are bias, threshold, lapse high, lapse low.
+ psych_80 : numpy.array
+ The fit psychometric parameters for the blocks where probability of a left stimulus is 0.8.
+ Parameters are bias, threshold, lapse high, lapse low.
+ n_trials : numpy.array
+ The number of trials for each session (typically three consecutive sessions).
+ perf_easy : numpy.array
+ The proportion of correct high contrast trials for each session (typically three
+ consecutive sessions).
+ rt : float
+ The median response time for zero contrast trials.
+
+ Returns
+ -------
+ bool
+ True if subject passes the ready4ephysrig or ready4recording criteria.
+ """
- criterion = (psych_20[2] < 0.1 and psych_20[3] < 0.1 and psych_80[2] < 0.1 and psych_80[3] and
-              psych_80[0] - psych_20[0] > 5 and np.all(n_trials > 400) and
-              np.all(perf_easy > 0.9) and rt < 2)
+ criterion = (np.all(np.r_[psych_20[2:4], psych_80[2:4]] < 0.1) and  # lapse
+              psych_80[0] - psych_20[0] > 5 and np.all(n_trials > 400) and  # bias shift and n trials
+              np.all(perf_easy > 0.9) and rt < 2)  # overall performance and response times
 return criterion
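The rewritten check gathers the four lapse rates with `np.r_` and compares them in one vectorised expression, which also fixes the old code's missing `< 0.1` comparison on `psych_80[3]`. With hypothetical fit parameters:

```python
import numpy as np

# hypothetical fit parameters: bias, threshold, lapse high, lapse low
psych_20 = np.array([-3., 15., 0.05, 0.04])  # 20% left-probability blocks
psych_80 = np.array([4., 16., 0.03, 0.06])   # 80% left-probability blocks

# all four lapse rates checked at once
lapses_ok = np.all(np.r_[psych_20[2:4], psych_80[2:4]] < 0.1)
# bias shift between the two block types must exceed 5
bias_shift = psych_80[0] - psych_20[0]
```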
@@ -826,7 +1105,25 @@
[docs]
def criterion_delay(n_trials, perf_easy):
    """
- Returns bool indicating whether criterion for ready4delay is met.
+ Returns bool indicating whether criteria for 'ready4delay' is met.
+
+ Criteria
+ --------
+ - Total number of trials for any of the sessions is greater than 400
+ - Performance on easy contrasts is greater than 90% for any of the sessions
+
+ Parameters
+ ----------
+ n_trials : numpy.array of int
+ The number of trials for each session (typically three consecutive sessions).
+ perf_easy : numpy.array
+ The proportion of correct high contrast trials for each session (typically three
+ consecutive sessions).
+
+ Returns
+ -------
+ bool
+ True if subject passes the 'ready4delay' criteria.
+ """
 criterion = np.any(n_trials > 400) and np.any(perf_easy > 0.9)
 return criterion
@@ -835,11 +1132,41 @@
[docs]
-def plot_psychometric(trials, ax=None, title=None, plot_ci=False, ci_aplha=0.32, **kwargs):
+def plot_psychometric(trials, ax=None, title=None, plot_ci=False, ci_alpha=0.032, **kwargs):
+    """
- Function to plot psychometric curve plots a la datajoint webpage
- :param trials:
- :return:
+ Function to plot psychometric curve plots a la datajoint webpage.
+
+ Parameters
+ ----------
+ trials : one.alf.io.AlfBunch
+ An ALF trials object containing the keys {'probabilityLeft', 'contrastLeft',
+ 'contrastRight', 'feedbackType', 'choice', 'response_times', 'stimOn_times'}.
+ ax : matplotlib.pyplot.Axes
+ An axis object to plot onto.
+ title : str
+ An optional plot title.
+ plot_ci : bool
+ If true, computes and plots the confidence intervals for response at each contrast.
+ ci_alpha : float, default=0.032
+ Significance level for confidence interval. Must be in (0, 1). If `plot_ci` is false,
+ this value is ignored.
+ **kwargs
+ If `ax` is None, these arguments are passed to matplotlib.pyplot.subplots.
+
+ Returns
+ -------
+ matplotlib.pyplot.Figure
+ The figure handle containing the plot.
+ matplotlib.pyplot.Axes
+ The plotted axes.
+
+ See Also
+ --------
+ statsmodels.stats.proportion.proportion_confint - The function used to compute confidence
+ interval.
+ psychofit.mle_fit_psycho - The function used to fit the psychometric parameters.
+ psychofit.erf_psycho_2gammas - The function used to transform contrast to response probability
+ using the fit parameters.
+ """
 signed_contrast = get_signed_contrast(trials)
@@ -847,23 +1174,23 @@
[docs]
def plot_reaction_time(trials, ax=None, title=None, plot_ci=False, ci_alpha=0.32, **kwargs):
    """
- Function to plot reaction time against contrast a la datajoint webpage (inverted for some reason??)
- :param trials:
- :return:
+ Function to plot reaction time against contrast a la datajoint webpage.
+
+ The reaction times are plotted individually for the following three blocks: {0.5, 0.2, 0.8}.
+
+ Parameters
+ ----------
+ trials : one.alf.io.AlfBunch
+ An ALF trials object containing the keys {'probabilityLeft', 'contrastLeft',
+ 'contrastRight', 'feedbackType', 'choice', 'response_times', 'stimOn_times'}.
+ ax : matplotlib.pyplot.Axes
+ An axis object to plot onto.
+ title : str
+ An optional plot title.
+ plot_ci : bool
+ If true, computes and plots the confidence intervals for response at each contrast.
+ ci_alpha : float, default=0.32
+ Significance level for confidence interval. Must be in (0, 1). If `plot_ci` is false,
+ this value is ignored.
+ **kwargs
+ If `ax` is None, these arguments are passed to matplotlib.pyplot.subplots.
+
+ Returns
+ -------
+ matplotlib.pyplot.Figure
+ The figure handle containing the plot.
+ matplotlib.pyplot.Axes
+ The plotted axes.
+
+ See Also
+ --------
+ scipy.stats.bootstrap - The function used to compute the confidence interval.
+ """
 signed_contrast = get_signed_contrast(trials)
@@ -913,7 +1268,7 @@
[docs]
def plot_reaction_time_over_trials(trials, stim_on_type='stimOn_times', ax=None, title=None, **kwargs):
    """
- Function to plot reaction time with trial number a la datajoint webpage
-
- :param trials:
- :param stim_on_type:
- :param ax:
- :param title:
- :param kwargs:
- :return:
+ Function to plot reaction time with trial number a la datajoint webpage.
+
+ Parameters
+ ----------
+ trials : one.alf.io.AlfBunch
+ An ALF trials object containing the keys {'probabilityLeft', 'contrastLeft',
+ 'contrastRight', 'feedbackType', 'choice', 'response_times', 'stimOn_times'}.
+ stim_on_type : str, default='stimOn_times'
+ The trials key to use when calculating the response times. The difference between this and
+ 'feedback_times' is used (see notes for `compute_median_reaction_time`).
+ ax : matplotlib.pyplot.Axes
+ An axis object to plot onto.
+ title : str
+ An optional plot title.
+ **kwargs
+ If `ax` is None, these arguments are passed to matplotlib.pyplot.subplots.
+
+ Returns
+ -------
+ matplotlib.pyplot.Figure
+ The figure handle containing the plot.
+ matplotlib.pyplot.Axes
+ The plotted axes. """reaction_time=pd.DataFrame()
diff --git a/_modules/brainbox/ephys_plots.html b/_modules/brainbox/ephys_plots.html
index 767cfc05..1b715484 100644
--- a/_modules/brainbox/ephys_plots.html
+++ b/_modules/brainbox/ephys_plots.html
@@ -113,7 +113,7 @@
 'x', 'y', 'z', 'acronym', 'axial_um' those are the guide for the interpolation
 :param channels: Bunch or dictionary of aligned channels containing at least keys 'localCoordinates'
- :param brain_regions: None (default) or ibllib.atlas.BrainRegions object
+ :param brain_regions: None (default) or iblatlas.regions.BrainRegions object
 if None will return a dict with keys 'localCoordinates', 'mlapdv', 'brainLocationIds_ccf_2017'
 if a brain region object is provided, outputs a dict with keys 'x', 'y', 'z', 'acronym', 'atlas_id', 'axial_um', 'lateral_um'
@@ -488,7 +489,7 @@
Source code for brainbox.io.one
 An instance of ONE (shouldn't be in 'local' mode)
 aligned : bool
     Whether to get the latest user aligned channel when not resolved or use histology track
- brain_atlas : ibllib.atlas.BrainAtlas
+ brain_atlas : iblatlas.BrainAtlas
 Brain atlas object (default: Allen atlas)
 Returns
 -------
@@ -536,7 +537,7 @@
 :param dataset_types: additional spikes/clusters objects to add to the standard default list
 :param spike_sorter: name of the spike sorting you want to load (None for default)
 :param collection: name of the spike sorting collection to load - exclusive with spike sorter name ex: "alf/probe00"
- :param brain_regions: ibllib.atlas.regions.BrainRegions object - will label acronyms if provided
+ :param brain_regions: iblatlas.regions.BrainRegions object - will label acronyms if provided
 :param nested: if a single probe is required, do not output a dictionary with the probe name as key
 :param return_collection: (False) if True, will return the collection used to load
 :return: spikes, clusters, channels (dict of bunch, 1 bunch per probe)
@@ -580,7 +581,7 @@
 :param probe: name of probe to load in, if not given all probes for session will be loaded
 :param dataset_types: additional spikes/clusters objects to add to the standard default list
 :param spike_sorter: name of the spike sorting you want to load (None for default)
- :param brain_regions: ibllib.atlas.regions.BrainRegions object - will label acronyms if provided
+ :param brain_regions: iblatlas.regions.BrainRegions object - will label acronyms if provided
 :param return_collection: (bool - False) if True, returns the collection for loading the data
 :return: spikes, clusters (dict of bunch, 1 bunch per probe)
 """
@@ -622,7 +623,7 @@
 spike_sorter : str
     Name of the spike sorting you want to load (None for default which is pykilosort if
     it's available otherwise the default MATLAB kilosort)
- brain_atlas : ibllib.atlas.BrainAtlas
+ brain_atlas : iblatlas.atlas.BrainAtlas
 Brain atlas object (default: Allen atlas)
 return_collection: bool
     Returns an extra argument with the collection chosen
@@ -1228,10 +1229,16 @@
[docs]
- def raster(self, spikes, channels, save_dir=None, br=None, label='raster', time_series=None):
+ def raster(self, spikes, channels, save_dir=None, br=None, label='raster', time_series=None, **kwargs):
+     """
- :param spikes: spikes dictionary
- :param save_dir: optional if specified
+ :param spikes: spikes dictionary or Bunch
+ :param channels: channels dictionary or Bunch.
+ :param save_dir: if specified save to this directory as "{pid}_{probe}_{label}.png".
+ Otherwise, plot.
+ :param br: brain regions object (optional)
+ :param label: label for saved image (optional, default="raster")
+ :param time_series: timeseries dictionary for behavioral event times (optional)
+ :param **kwargs: kwargs passed to `driftmap()` (optional)
 :return:
 """
 br = br or BrainRegions()
@@ -1240,7 +1247,10 @@
+"""
+Classes for manipulating brain atlases, insertions, and coordinates.
+"""
+from pathlib import Path, PurePosixPath
+from dataclasses import dataclass
+import logging
+
+import matplotlib.pyplot as plt
+import numpy as np
+import nrrd
+
+from one.webclient import http_download_file
+import one.params
+import one.remote.aws as aws
+from iblutil.numerical import ismember
+from iblatlas.regions import BrainRegions, FranklinPaxinosRegions
+
+ALLEN_CCF_LANDMARKS_MLAPDV_UM = {'bregma': np.array([5739, 5400, 332])}
+"""dict: The ML AP DV voxel coordinates of brain landmarks in the Allen atlas."""
+
+PAXINOS_CCF_LANDMARKS_MLAPDV_UM = {'bregma': np.array([5700, 4300 + 160, 330])}
+"""dict: The ML AP DV voxel coordinates of brain landmarks in the Franklin & Paxinos atlas."""
+
+S3_BUCKET_IBL = 'ibl-brain-wide-map-public'
+"""str: The name of the public IBL S3 bucket containing atlas data."""
+
+_logger = logging.getLogger(__name__)
+
+
+
+[docs]
+def cart2sph(x, y, z):
+"""
+ Converts cartesian to spherical coordinates.
+
+ Returns spherical coordinates (r, theta, phi).
+
+ Parameters
+ ----------
+ x : numpy.array
+ A 1D array of x-axis coordinates.
+ y : numpy.array
+ A 1D array of y-axis coordinates.
+ z : numpy.array
+ A 1D array of z-axis coordinates.
+
+ Returns
+ -------
+ numpy.array
+ The radial distance of each point.
+ numpy.array
+ The polar angle.
+ numpy.array
+ The azimuthal angle.
+
+ See Also
+ --------
+ sph2cart
+ """
+    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
+    phi = np.arctan2(y, x) * 180 / np.pi
+    theta = np.zeros_like(r)
+    iok = r != 0
+    theta[iok] = np.arccos(z[iok] / r[iok]) * 180 / np.pi
+    if theta.size == 1:
+        theta = float(theta)
+    return r, theta, phi
+
+
+
+
+[docs]
+def sph2cart(r, theta, phi):
+"""
+ Converts Spherical to Cartesian coordinates.
+
+ Returns Cartesian coordinates (x, y, z).
+
+ Parameters
+ ----------
+ r : numpy.array
+ A 1D array of radial distances.
+ theta : numpy.array
+ A 1D array of polar angles.
+ phi : numpy.array
+ A 1D array of azimuthal angles.
+
+ Returns
+ -------
+ x : numpy.array
+ A 1D array of x-axis coordinates.
+ y : numpy.array
+ A 1D array of y-axis coordinates.
+ z : numpy.array
+ A 1D array of z-axis coordinates.
+
+ See Also
+ --------
+ cart2sph
+ """
+    x = r * np.cos(phi / 180 * np.pi) * np.sin(theta / 180 * np.pi)
+    y = r * np.sin(phi / 180 * np.pi) * np.sin(theta / 180 * np.pi)
+    z = r * np.cos(theta / 180 * np.pi)
+    return x, y, z
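The two helpers above are inverses of each other (angles in degrees); reproducing them verbatim shows the round trip:

```python
import numpy as np

def cart2sph(x, y, z):
    # radial distance, polar angle theta and azimuth phi (degrees), as in the source
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    phi = np.arctan2(y, x) * 180 / np.pi
    theta = np.zeros_like(r)
    iok = r != 0
    theta[iok] = np.arccos(z[iok] / r[iok]) * 180 / np.pi
    return r, theta, phi

def sph2cart(r, theta, phi):
    # inverse transform back to Cartesian coordinates
    x = r * np.cos(phi / 180 * np.pi) * np.sin(theta / 180 * np.pi)
    y = r * np.sin(phi / 180 * np.pi) * np.sin(theta / 180 * np.pi)
    z = r * np.cos(theta / 180 * np.pi)
    return x, y, z

r, theta, phi = cart2sph(np.array([1.0]), np.array([2.0]), np.array([3.0]))
x, y, z = sph2cart(r, theta, phi)
```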
+
+
+
+
+[docs]
+class BrainCoordinates:
+"""
+ Class for mapping and indexing a 3D array to real-world coordinates.
+
+ * x = ml, right positive
+ * y = ap, anterior positive
+ * z = dv, dorsal positive
+
+ The layout of the Atlas dimension is done according to the most used sections so they lay
+ contiguous on disk assuming C-ordering: V[iap, iml, idv]
+
+ Parameters
+ ----------
+ nxyz : array_like
+ Number of elements along each Cartesian axis (nx, ny, nz) = (nml, nap, ndv).
+ xyz0 : array_like
+ Coordinates of the element volume[0, 0, 0] in the coordinate space.
+ dxyz : array_like, float
+ Spatial interval of the volume along the 3 dimensions.
+
+ Attributes
+ ----------
+ xyz0 : numpy.array
+ The Cartesian coordinates of the element volume[0, 0, 0], i.e. the origin.
+ x0 : int
+ The x-axis origin coordinate of the element volume.
+ y0 : int
+ The y-axis origin coordinate of the element volume.
+ z0 : int
+ The z-axis origin coordinate of the element volume.
+ """
+
+    def __init__(self, nxyz, xyz0=(0, 0, 0), dxyz=(1, 1, 1)):
+        if np.isscalar(dxyz):
+            dxyz = [dxyz] * 3
+        self.x0, self.y0, self.z0 = list(xyz0)
+        self.dx, self.dy, self.dz = list(dxyz)
+        self.nx, self.ny, self.nz = list(nxyz)
+
+    @property
+    def dxyz(self):
+        """numpy.array: Spatial interval of the volume along the 3 dimensions."""
+        return np.array([self.dx, self.dy, self.dz])
+
+    @property
+    def nxyz(self):
+        """numpy.array: Number of elements along each of the 3 dimensions."""
+        return np.array([self.nx, self.ny, self.nz])
+
+"""Methods ratios to indices"""
+
+
+
+"""Methods distance to indices"""
+    @staticmethod
+    def _round(i, round=True):
+"""
+ Round an input value to the nearest integer, replacing NaN values with 0.
+
+ Parameters
+ ----------
+ i : int, float, numpy.nan, numpy.array
+ A value or array of values to round.
+ round : bool
+ If false this function is identity.
+
+ Returns
+ -------
+ int, float, numpy.nan, numpy.array
+ If round is true, returns the nearest integer, replacing NaN values with 0, otherwise
+ returns the input unaffected.
+ """
+        nanval = 0
+        if round:
+            ii = np.array(np.round(i)).astype(int)
+            ii[np.isnan(i)] = nanval
+            return ii
+        else:
+            return i
+
+
+[docs]
+    def x2i(self, x, round=True, mode='raise'):
+"""
+ Find the nearest volume image index to a given x-axis coordinate.
+
+ Parameters
+ ----------
+ x : float, numpy.array
+ One or more x-axis coordinates, relative to the origin, x0.
+ round : bool
+ If true, round to the nearest index, replacing NaN values with 0.
+ mode : {'raise', 'clip', 'wrap'}, default='raise'
+ How to behave if the coordinate lies outside of the volume: raise (default) will raise
+ a ValueError; 'clip' will replace the index with the closest index inside the volume;
+ 'wrap' will return the index as is.
+
+ Returns
+ -------
+ numpy.array
+ The nearest indices of the image volume along the first dimension.
+
+ Raises
+ ------
+ ValueError
+ At least one x value lies outside of the atlas volume. Change 'mode' input to 'wrap' to
+ keep these values unchanged, or 'clip' to return the nearest valid indices.
+ """
+        i = np.asarray(self._round((x - self.x0) / self.dx, round=round))
+        if np.any(i < 0) or np.any(i >= self.nx):
+            if mode == 'clip':
+                i[i < 0] = 0
+                i[i >= self.nx] = self.nx - 1
+            elif mode == 'raise':
+                raise ValueError("At least one x value lies outside of the atlas volume.")
+            elif mode == 'wrap':  # This is only here for legacy reasons
+                pass
+        return i
+
+
+
+[docs]
+    def y2i(self, y, round=True, mode='raise'):
+"""
+ Find the nearest volume image index to a given y-axis coordinate.
+
+ Parameters
+ ----------
+ y : float, numpy.array
+ One or more y-axis coordinates, relative to the origin, y0.
+ round : bool
+ If true, round to the nearest index, replacing NaN values with 0.
+ mode : {'raise', 'clip', 'wrap'}
+ How to behave if the coordinate lies outside of the volume: raise (default) will raise
+ a ValueError; 'clip' will replace the index with the closest index inside the volume;
+ 'wrap' will return the index as is.
+
+ Returns
+ -------
+ numpy.array
+ The nearest indices of the image volume along the second dimension.
+
+ Raises
+ ------
+ ValueError
+ At least one y value lies outside of the atlas volume. Change 'mode' input to 'wrap' to
+ keep these values unchanged, or 'clip' to return the nearest valid indices.
+ """
+        i = np.asarray(self._round((y - self.y0) / self.dy, round=round))
+        if np.any(i < 0) or np.any(i >= self.ny):
+            if mode == 'clip':
+                i[i < 0] = 0
+                i[i >= self.ny] = self.ny - 1
+            elif mode == 'raise':
+                raise ValueError("At least one y value lies outside of the atlas volume.")
+            elif mode == 'wrap':  # This is only here for legacy reasons
+                pass
+        return i
+
+
+
+[docs]
+    def z2i(self, z, round=True, mode='raise'):
+"""
+ Find the nearest volume image index to a given z-axis coordinate.
+
+ Parameters
+ ----------
+ z : float, numpy.array
+ One or more z-axis coordinates, relative to the origin, z0.
+ round : bool
+ If true, round to the nearest index, replacing NaN values with 0.
+ mode : {'raise', 'clip', 'wrap'}
+ How to behave if the coordinate lies outside of the volume: raise (default) will raise
+ a ValueError; 'clip' will replace the index with the closest index inside the volume;
+ 'wrap' will return the index as is.
+
+ Returns
+ -------
+ numpy.array
+ The nearest indices of the image volume along the third dimension.
+
+ Raises
+ ------
+ ValueError
+ At least one z value lies outside of the atlas volume. Change 'mode' input to 'wrap' to
+ keep these values unchanged, or 'clip' to return the nearest valid indices.
+ """
+        i = np.asarray(self._round((z - self.z0) / self.dz, round=round))
+        if np.any(i < 0) or np.any(i >= self.nz):
+            if mode == 'clip':
+                i[i < 0] = 0
+                i[i >= self.nz] = self.nz - 1
+            elif mode == 'raise':
+                raise ValueError("At least one z value lies outside of the atlas volume.")
+            elif mode == 'wrap':  # This is only here for legacy reasons
+                pass
+        return i
+
+
+
+[docs]
+    def xyz2i(self, xyz, round=True, mode='raise'):
+"""
+ Find the nearest volume image indices to the given Cartesian coordinates.
+
+ Parameters
+ ----------
+ xyz : array_like
+ One or more Cartesian coordinates, relative to the origin, xyz0.
+ round : bool
+ If true, round to the nearest index, replacing NaN values with 0.
+ mode : {'raise', 'clip', 'wrap'}
+ How to behave if any coordinate lies outside of the volume: raise (default) will raise
+ a ValueError; 'clip' will replace the index with the closest index inside the volume;
+ 'wrap' will return the index as is.
+
+ Returns
+ -------
+ numpy.array
+ The nearest indices of the image volume.
+
+ Raises
+ ------
+ ValueError
+ At least one coordinate lies outside of the atlas volume. Change 'mode' input to 'wrap'
+ to keep these values unchanged, or 'clip' to return the nearest valid indices.
+ """
+        xyz = np.array(xyz)
+        dt = int if round else float
+        out = np.zeros_like(xyz, dtype=dt)
+        out[..., 0] = self.x2i(xyz[..., 0], round=round, mode=mode)
+        out[..., 1] = self.y2i(xyz[..., 1], round=round, mode=mode)
+        out[..., 2] = self.z2i(xyz[..., 2], round=round, mode=mode)
+        return out
+
+
+"""Methods indices to distance"""
+
+[docs]
+    def i2x(self, ind):
+"""
+ Return the x-axis coordinate of a given index.
+
+ Parameters
+ ----------
+ ind : int, numpy.array
+ One or more indices along the first dimension of the image volume.
+
+ Returns
+ -------
+ float, numpy.array
+ The corresponding x-axis coordinate(s), relative to the origin, x0.
+ """
+        return ind * self.dx + self.x0
+
+
+
+[docs]
+    def i2y(self, ind):
+"""
+ Return the y-axis coordinate of a given index.
+
+ Parameters
+ ----------
+ ind : int, numpy.array
+ One or more indices along the second dimension of the image volume.
+
+ Returns
+ -------
+ float, numpy.array
+ The corresponding y-axis coordinate(s), relative to the origin, y0.
+ """
+        return ind * self.dy + self.y0
+
+
+
+[docs]
+    def i2z(self, ind):
+"""
+ Return the z-axis coordinate of a given index.
+
+ Parameters
+ ----------
+ ind : int, numpy.array
+ One or more indices along the third dimension of the image volume.
+
+ Returns
+ -------
+ float, numpy.array
+ The corresponding z-axis coordinate(s), relative to the origin, z0.
+ """
+        return ind * self.dz + self.z0
+
+
+
+[docs]
+    def i2xyz(self, iii):
+"""
+ Return the Cartesian coordinates of a given index.
+
+ Parameters
+ ----------
+ iii : array_like
+ One or more image volume indices.
+
+ Returns
+ -------
+ numpy.array
+ The corresponding xyz coordinates, relative to the origin, xyz0.
+ """
+
+        iii = np.array(iii, dtype=float)
+        out = np.zeros_like(iii)
+        out[..., 0] = self.i2x(iii[..., 0])
+        out[..., 1] = self.i2y(iii[..., 1])
+        out[..., 2] = self.i2z(iii[..., 2])
+        return out
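The `x2i`/`i2x` pair implements an affine mapping between world coordinates and voxel indices. A standalone sketch with hypothetical origin and spacing values shows that a round trip recovers the coordinate to within half a voxel:

```python
import numpy as np

# hypothetical origin and voxel spacing (metres), loosely modelled on a 25 um atlas
x0, dx, nx = -5739e-6, 25e-6, 456

def x2i(x):
    # world coordinate -> nearest voxel index along the first dimension
    return np.round((x - x0) / dx).astype(int)

def i2x(i):
    # voxel index -> world coordinate of that voxel
    return i * dx + x0

i = x2i(np.array([0.0]))          # index nearest the origin (e.g. bregma)
err = abs(i2x(i)[0] - 0.0)        # round-trip error, bounded by dx / 2
```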
+[docs]
+class BrainAtlas:
+    """
+    Object that holds the image, labels and coordinate transforms for a brain atlas.
+    Currently this is designed for the Allen CCF at several resolutions,
+    yet this class can also be used for other atlases.
+    """
+
+"""numpy.array: An image volume."""
+ image=None
+"""numpy.array: An annotation label volume."""
+ label=None
+
+    def __init__(self, image, label, dxyz, regions, iorigin=[0, 0, 0],
+                 dims2xyz=[0, 1, 2], xyz2dims=[0, 1, 2]):
+        """
+        self.image: image volume (ap, ml, dv)
+        self.label: label volume (ap, ml, dv)
+        self.bc: atlas.BrainCoordinate object
+        self.regions: atlas.BrainRegions object
+        self.top: 2d np array (ap, ml) containing the z-coordinate (m) of the surface of the brain
+        self.dims2xyz and self.xyz2dims: map image axis order to xyz coordinates order
+        """
+
+        self.image = image
+        self.label = label
+        self.regions = regions
+        self.dims2xyz = dims2xyz
+        self.xyz2dims = xyz2dims
+        assert np.all(self.dims2xyz[self.xyz2dims] == np.array([0, 1, 2]))
+        assert np.all(self.xyz2dims[self.dims2xyz] == np.array([0, 1, 2]))
+        # create the coordinate transform object that maps volume indices to real world coordinates
+        nxyz = np.array(self.image.shape)[self.dims2xyz]
+        bc = BrainCoordinates(nxyz=nxyz, xyz0=(0, 0, 0), dxyz=dxyz)
+        self.bc = BrainCoordinates(nxyz=nxyz, xyz0=-bc.i2xyz(iorigin), dxyz=dxyz)
+
+ self.surface = None
+ self.boundary = None
+
+ @staticmethod
+ def _get_cache_dir():
+ par = one.params.get(silent=True)
+ path_atlas = Path(par.CACHE_DIR).joinpath('histology', 'ATLAS', 'Needles', 'Allen', 'flatmaps')
+ return path_atlas
+
+
+[docs]
+ def compute_surface(self):
+"""
+ Get the volume top, bottom, left and right surfaces, and from these the outer surface of
+ the image volume. This is needed to compute probe insertions intersections.
+
+ NOTE: In places where the top or bottom surface touch the top or bottom of the atlas volume, the surface
+ will be set to np.nan. If you encounter issues working with these surfaces check if this might be the cause.
+ """
+ if self.surface is None:  # only compute if it hasn't already been computed
+ axz = self.xyz2dims[2]  # this is the dv axis
+ _surface = (self.label == 0).astype(np.int8) * 2
+ l0 = np.diff(_surface, axis=axz, append=2)
+ _top = np.argmax(l0 == -2, axis=axz).astype(float)
+ _top[_top == 0] = np.nan
+ _bottom = self.bc.nz - np.argmax(np.flip(l0, axis=axz) == 2, axis=axz).astype(float)
+ _bottom[_bottom == self.bc.nz] = np.nan
+ self.top = self.bc.i2z(_top + 1)
+ self.bottom = self.bc.i2z(_bottom - 1)
+ self.surface = np.diff(_surface, axis=self.xyz2dims[0], append=2) + l0
+ idx_srf = np.where(self.surface != 0)
+ self.surface[idx_srf] = 1
+ self.srf_xyz = self.bc.i2xyz(np.c_[idx_srf[self.xyz2dims[0]], idx_srf[self.xyz2dims[1]],
+ idx_srf[self.xyz2dims[2]]].astype(float))
+
+
+ def _lookup_inds(self, ixyz, mode='raise'):
+"""
+ Performs a 3D lookup from volume indices ixyz to the image volume
+ :param ixyz: [n, 3] array of indices in the mlapdv order
+ :return: n array of flat indices
+ """
+ idims = np.split(ixyz[..., self.xyz2dims], [1, 2], axis=-1)
+ inds = np.ravel_multi_index(idims, self.bc.nxyz[self.xyz2dims], mode=mode)
+ return inds.squeeze()
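The lookup above relies on numpy.ravel_multi_index to turn per-axis indices into flat indices into the volume. A minimal illustration of that building block on its own; the volume shape and index triplets here are arbitrary:

```python
import numpy as np

# Flatten (i, j, k) triplets into flat indices for a volume of a given shape,
# as numpy.ravel_multi_index does inside _lookup_inds. Shape is arbitrary.
shape = (4, 5, 6)
ijk = np.array([[0, 0, 0], [1, 2, 3], [3, 4, 5]])
flat = np.ravel_multi_index((ijk[:, 0], ijk[:, 1], ijk[:, 2]), shape)

# The flat index addresses the same element as 3D indexing:
vol = np.arange(np.prod(shape)).reshape(shape)
same = np.array_equal(vol.flat[flat], vol[ijk[:, 0], ijk[:, 1], ijk[:, 2]])
```

For C-ordered arrays the flat index is simply `i * (5 * 6) + j * 6 + k`, which is why flat lookups into `label.flat` are cheap.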
+
+ def _lookup(self, xyz, mode='raise'):
+"""
+ Performs a 3D lookup from real world coordinates to the flat indices in the volume,
+ defined in the BrainCoordinates object.
+
+ Parameters
+ ----------
+ xyz : numpy.array
+ An (n, 3) array of Cartesian coordinates.
+ mode : {'raise', 'clip', 'wrap'}
+ How to behave if any coordinate lies outside of the volume: raise (default) will raise
+ a ValueError; 'clip' will replace the index with the closest index inside the volume;
+ 'wrap' will return the index as is.
+
+ Returns
+ -------
+ numpy.array
+ A 1D array of flat indices.
+ """
+ return self._lookup_inds(self.bc.xyz2i(xyz, mode=mode), mode=mode)
+
+
+[docs]
+ def get_labels(self, xyz, mapping=None, radius_um=None, mode='raise'):
+"""
+ Performs a 3D lookup from real world coordinates to the volume labels
+ and return the regions ids according to the mapping
+ :param xyz: [n, 3] array of coordinates
+ :param mapping: brain region mapping (defaults to original Allen mapping)
+ :param radius_um: if not null, returns a regions ids array and an array of proportion
+ of regions in a sphere of size radius around the coordinates.
+ :param mode: {'raise', 'clip'} determines what to do when the determined index lies outside the atlas volume
+ 'raise' will raise a ValueError (default)
+ 'clip' will replace the index with the closest index inside the volume
+ :return: n array of region ids
+ """
+ mapping = mapping or self.regions.default_mapping
+
+ if radius_um:
+ nrx = int(np.ceil(radius_um / abs(self.bc.dx) / 1e6))
+ nry = int(np.ceil(radius_um / abs(self.bc.dy) / 1e6))
+ nrz = int(np.ceil(radius_um / abs(self.bc.dz) / 1e6))
+ nr = [nrx, nry, nrz]
+ iii = self.bc.xyz2i(xyz, mode=mode)
+ # computing the cube radius and indices is more complicated as volume indices are not
+ # necessarily in ml, ap, dv order so the indices order is dynamic
+ rcube = np.meshgrid(*tuple((np.arange(
+ -nr[i], nr[i] + 1) * self.bc.dxyz[i]) ** 2 for i in self.xyz2dims))
+ rcube = np.sqrt(rcube[0] + rcube[1] + rcube[2]) * 1e6
+ icube = tuple(slice(-nr[i] + iii[i], nr[i] + iii[i] + 1) for i in self.xyz2dims)
+ cube = self.regions.mappings[mapping][self.label[icube]]
+ ilabs, counts = np.unique(cube[rcube <= radius_um], return_counts=True)
+ return self.regions.id[ilabs], counts / np.sum(counts)
+ else:
+ regions_indices = self._get_mapping(mapping=mapping)[self.label.flat[self._lookup(xyz, mode=mode)]]
+ return self.regions.id[regions_indices]
+
+
+ def _get_mapping(self, mapping=None):
+"""
+ Safe way to get mappings if nothing defined in regions.
+ A mapping transforms from the full Allen brain atlas ids to the remapped ids
+ new_ids = ids[mapping]
+ """
+ mapping = mapping or self.regions.default_mapping
+ if hasattr(self.regions, 'mappings'):
+ return self.regions.mappings[mapping]
+ else:
+ return np.arange(self.regions.id.size)
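The remapping described in the docstring is a plain integer-array lookup: a mapping array, indexed by the original region index, holds the remapped region index. A tiny self-contained sketch; the ids and mapping below are made up, not real Allen region ids:

```python
import numpy as np

# A mapping is an integer lookup table: mapping[i] gives the remapped region
# index for original region index i. All arrays here are illustrative only.
ids = np.array([0, 997, 8, 567, 688])        # region ids, indexed by region index
mapping = np.array([0, 1, 1, 3, 3])          # e.g. collapse index 2 -> 1 and 4 -> 3

label_indices = np.array([2, 4, 0])          # region indices taken from a label volume
remapped = mapping[label_indices]
remapped_ids = ids[remapped]
```

Because the lookup is vectorised, remapping an entire label volume is a single fancy-indexing operation, which is exactly how `self.regions.mappings[mapping][self.label[...]]` is used above.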
+
+ def _label2rgb(self, imlabel):
+"""
+ Converts a slice from the label volume to its RGB equivalent for display
+ :param imlabel: 2D np-array containing label ids (slice of the label volume)
+ :return: 3D np-array of the slice uint8 rgb values
+ """
+ if getattr(self.regions, 'rgb', None) is None:
+ return self.regions.id[imlabel]
+ else:  # if the regions exist and have the rgb attribute, do the rgb lookup
+ return self.regions.rgb[imlabel]
+
+
+[docs]
+ def tilted_slice(self, xyz, axis, volume='image'):
+"""
+ From line coordinates, extracts the tilted plane containing the line from the 3D volume
+ :param xyz: np.array: points defining a probe trajectory in 3D space (xyz triplets)
+ if more than 2 points are provided will take the best fit
+ :param axis:
+ 0: along ml = sagittal-slice
+ 1: along ap = coronal-slice
+ 2: along dv = horizontal-slice
+ :param volume: 'image' or 'annotation'
+ :return: np.array, abscissa extent (width), ordinate extent (height),
+ squeezed axis extent (depth)
+ """
+ if axis == 0:  # sagittal slice (squeeze/take along ml-axis)
+ wdim, hdim, ddim = (1, 2, 0)
+ elif axis == 1:  # coronal slice (squeeze/take along ap-axis)
+ wdim, hdim, ddim = (0, 2, 1)
+ elif axis == 2:  # horizontal slice (squeeze/take along dv-axis)
+ wdim, hdim, ddim = (0, 1, 2)
+ # get the best fit and find exit points of the volume along squeezed axis
+ trj = Trajectory.fit(xyz)
+ sub_volume = trj._eval(self.bc.lim(axis=hdim), axis=hdim)
+ sub_volume[:, wdim] = self.bc.lim(axis=wdim)
+ sub_volume_i = self.bc.xyz2i(sub_volume)
+ tile_shape = np.array([np.diff(sub_volume_i[:, hdim])[0] + 1, self.bc.nxyz[wdim]])
+ # get indices along each dimension
+ indx = np.arange(tile_shape[1])
+ indy = np.arange(tile_shape[0])
+ inds = np.linspace(*sub_volume_i[:, ddim], tile_shape[0])
+ # compute the slice indices and output the slice
+ _, INDS = np.meshgrid(indx, np.int64(np.around(inds)))
+ INDX, INDY = np.meshgrid(indx, indy)
+ indsl = [[INDX, INDY, INDS][i] for i in np.argsort([wdim, hdim, ddim])[self.xyz2dims]]
+ if isinstance(volume, np.ndarray):
+ tslice = volume[indsl[0], indsl[1], indsl[2]]
+ elif volume.lower() == 'annotation':
+ tslice = self._label2rgb(self.label[indsl[0], indsl[1], indsl[2]])
+ elif volume.lower() == 'image':
+ tslice = self.image[indsl[0], indsl[1], indsl[2]]
+ elif volume.lower() == 'surface':
+ tslice = self.surface[indsl[0], indsl[1], indsl[2]]
+
+ # get extents with correct convention NB: matplotlib flips the y-axis on imshow !
+ width = np.sort(sub_volume[:, wdim])[np.argsort(self.bc.lim(axis=wdim))]
+ height = np.flipud(np.sort(sub_volume[:, hdim])[np.argsort(self.bc.lim(axis=hdim))])
+ depth = np.flipud(np.sort(sub_volume[:, ddim])[np.argsort(self.bc.lim(axis=ddim))])
+ return tslice, width, height, depth
+
+
+
+[docs]
+ def plot_tilted_slice(self, xyz, axis, volume='image', cmap=None, ax=None, return_sec=False, **kwargs):
+"""
+ From line coordinates, extracts the tilted plane containing the line from the 3D volume
+ :param xyz: np.array: points defining a probe trajectory in 3D space (xyz triplets)
+ if more than 2 points are provided will take the best fit
+ :param axis:
+ 0: along ml = sagittal-slice
+ 1: along ap = coronal-slice
+ 2: along dv = horizontal-slice
+ :param volume: 'image' or 'annotation'
+ :return: matplotlib axis
+ """
+ if axis == 0:
+ axis_labels = np.array(['ap (um)', 'dv (um)', 'ml (um)'])
+ elif axis == 1:
+ axis_labels = np.array(['ml (um)', 'dv (um)', 'ap (um)'])
+ elif axis == 2:
+ axis_labels = np.array(['ml (um)', 'ap (um)', 'dv (um)'])
+
+ tslice, width, height, depth = self.tilted_slice(xyz, axis, volume=volume)
+ width = width * 1e6
+ height = height * 1e6
+ depth = depth * 1e6
+ if not ax:
+ plt.figure()
+ ax = plt.gca()
+ ax.axis('equal')
+ if not cmap:
+ cmap = plt.get_cmap('bone')
+ # get the transfer function from y-axis to squeezed axis for the secondary axis
+ ab = np.linalg.solve(np.c_[height, height * 0 + 1], depth)
+ ax.imshow(tslice, extent=np.r_[width, height], cmap=cmap, **kwargs)
+ sec_ax = ax.secondary_yaxis('right', functions=(
+ lambda x: x * ab[0] + ab[1],
+ lambda y: (y - ab[1]) / ab[0]))
+ ax.set_xlabel(axis_labels[0])
+ ax.set_ylabel(axis_labels[1])
+ sec_ax.set_ylabel(axis_labels[2])
+ if return_sec:
+ return ax, sec_ax
+ else:
+ return ax
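The forward/inverse pair passed to the secondary y-axis above comes from solving a 2x2 linear system for the affine map from height to depth. A standalone check of that step; the two height/depth value pairs below are made up:

```python
import numpy as np

# Solve for the affine map y -> a*y + b that sends two height values to two
# depth values, as done for the secondary axis above. Values are illustrative.
height = np.array([0.0, 100.0])
depth = np.array([10.0, 30.0])
ab = np.linalg.solve(np.c_[height, np.ones_like(height)], depth)  # [a, b]

forward = lambda y: y * ab[0] + ab[1]
inverse = lambda d: (d - ab[1]) / ab[0]
```

Matplotlib's `secondary_yaxis` takes exactly such a (forward, inverse) pair of functions so the right-hand axis can show the squeezed coordinate.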
+
+
+ @staticmethod
+ def _plot_slice(im, extent, ax=None, cmap=None, volume=None, **kwargs):
+"""
+ Plot an atlas slice.
+
+ Parameters
+ ----------
+ im : numpy.array
+ A 2D image slice to plot.
+ extent : array_like
+ The bounding box in data coordinates that the image will fill specified as (left,
+ right, bottom, top) in data coordinates.
+ ax : matplotlib.pyplot.Axes
+ An optional Axes object to plot to.
+ cmap : str, matplotlib.colors.Colormap
+ The Colormap instance or registered colormap name used to map scalar data to colors.
+ Defaults to 'bone'.
+ volume : str
+ If 'boundary', the image is assumed to be a binary outline of boundaries between
+ regions and is rendered as black pixels on a transparent background.
+ **kwargs
+ See matplotlib.pyplot.imshow.
+
+ Returns
+ -------
+ matplotlib.pyplot.Axes
+ The image axes.
+ """
+ if not ax:
+ ax = plt.gca()
+ ax.axis('equal')
+ if not cmap:
+ cmap = plt.get_cmap('bone')
+
+ if volume == 'boundary':
+ imb = np.zeros((*im.shape[:2], 4), dtype=np.uint8)
+ imb[im == 1] = np.array([0, 0, 0, 255])
+ im = imb
+
+ ax.imshow(im, extent=extent, cmap=cmap, **kwargs)
+ return ax
+
+
+[docs]
+ def extent(self, axis):
+"""
+ :param axis: direction along which the volume is stacked:
+ (2 = z for horizontal slice)
+ (1 = y for coronal slice)
+ (0 = x for sagittal slice)
+ :return: np.array of the slice extent (left, right, bottom, top) in um, for use with imshow
+ """
+
+ if axis == 0:
+ extent = np.r_[self.bc.ylim, np.flip(self.bc.zlim)] * 1e6
+ elif axis == 1:
+ extent = np.r_[self.bc.xlim, np.flip(self.bc.zlim)] * 1e6
+ elif axis == 2:
+ extent = np.r_[self.bc.xlim, np.flip(self.bc.ylim)] * 1e6
+ return extent
+
+
+
+[docs]
+ def slice(self, coordinate, axis, volume='image', mode='raise', region_values=None,
+ mapping=None, bc=None):
+"""
+ Get slice through atlas
+
+ :param coordinate: coordinate to slice in metres, float
+ :param axis: xyz convention: 0 for ml, 1 for ap, 2 for dv
+ - 0: sagittal slice (along ml axis)
+ - 1: coronal slice (along ap axis)
+ - 2: horizontal slice (along dv axis)
+ :param volume:
+ - 'image' - allen image volume
+ - 'annotation' - allen annotation volume
+ - 'surface' - outer surface of mesh
+ - 'boundary' - outline of boundaries between all regions
+ - 'volume' - custom volume, must pass in volume of shape ba.image.shape as regions_value argument
+ - 'value' - custom value per allen region, must pass in array of shape ba.regions.id as regions_value argument
+ :param mode: error mode for out of bounds coordinates
+ - 'raise' raise an error
+ - 'clip' gets the first or last index
+ :param region_values: custom values to plot
+ - if volume='volume', region_values must have shape ba.image.shape
+ - if volume='value', region_values must have shape ba.regions.id
+ :param mapping: mapping to use. Options can be found using ba.regions.mappings.keys()
+ :return: 2d array or 3d RGB numpy int8 array
+ """
+ if axis == 0:
+ index = self.bc.x2i(np.array(coordinate), mode=mode)
+ elif axis == 1:
+ index = self.bc.y2i(np.array(coordinate), mode=mode)
+ elif axis == 2:
+ index = self.bc.z2i(np.array(coordinate), mode=mode)
+
+ # np.take is 50 thousand times slower than straight slicing !
+ def _take(vol, ind, axis):
+ if mode == 'clip':
+ ind = np.minimum(np.maximum(ind, 0), vol.shape[axis] - 1)
+ if axis == 0:
+ return vol[ind, :, :]
+ elif axis == 1:
+ return vol[:, ind, :]
+ elif axis == 2:
+ return vol[:, :, ind]
+
+ def _take_remap(vol, ind, axis, mapping):
+ # For the labels, remap the regions indices according to the mapping
+ return self._get_mapping(mapping=mapping)[_take(vol, ind, axis)]
+
+ if isinstance(volume, np.ndarray):
+ return _take(volume, index, axis=self.xyz2dims[axis])
+ elif volume in 'annotation':
+ iregion = _take_remap(self.label, index, self.xyz2dims[axis], mapping)
+ return self._label2rgb(iregion)
+ elif volume == 'image':
+ return _take(self.image, index, axis=self.xyz2dims[axis])
+ elif volume == 'value':
+ return region_values[_take_remap(self.label, index, self.xyz2dims[axis], mapping)]
+ elif volume in ['surface', 'edges']:
+ self.compute_surface()
+ return _take(self.surface, index, axis=self.xyz2dims[axis])
+ elif volume == 'boundary':
+ iregion = _take_remap(self.label, index, self.xyz2dims[axis], mapping)
+ return self.compute_boundaries(iregion)
+
+ elif volume == 'volume':
+ if bc is not None:
+ index = bc.xyz2i(np.array([coordinate] * 3))[axis]
+ return _take(region_values, index, axis=self.xyz2dims[axis])
+
+
+
+[docs]
+ def compute_boundaries(self, values):
+"""
+ Compute the boundaries between regions on a slice
+ :param values: 2D array of region values (e.g. remapped region indices)
+ :return: 2D binary array with 1 at region boundaries and 0 elsewhere
+ """
+ boundary = np.abs(np.diff(values, axis=0, prepend=0))
+ boundary = boundary + np.abs(np.diff(values, axis=1, prepend=0))
+ boundary = boundary + np.abs(np.diff(values, axis=1, append=0))
+ boundary = boundary + np.abs(np.diff(values, axis=0, append=0))
+
+ boundary[boundary != 0] = 1
+
+ return boundary
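compute_boundaries marks any pixel whose value differs from a neighbour along either axis. A self-contained check of the same differencing trick on a toy label image (the labels are made up; 0 plays the role of the outside-brain background, as in the atlas label volume):

```python
import numpy as np

# Mark pixels where the label value changes along rows or columns, mirroring
# the np.diff-based boundary computation above. Toy labels, 0 = background.
values = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 2, 0],
    [0, 0, 0, 0],
])
boundary = np.abs(np.diff(values, axis=0, prepend=0))
boundary = boundary + np.abs(np.diff(values, axis=1, prepend=0))
boundary = boundary + np.abs(np.diff(values, axis=1, append=0))
boundary = boundary + np.abs(np.diff(values, axis=0, append=0))
boundary[boundary != 0] = 1
```

Every labelled pixel touching a different label (including the 0 background) gets marked, while pixels deep inside a uniform background region stay 0.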
+
+
+
+[docs]
+ def plot_slices(self, xyz, *args, **kwargs):
+"""
+ From a single coordinate, plots the 3 slices that intersect at this point in a single
+ matplotlib figure
+ :param xyz: mlapdv coordinate in m
+ :param args: arguments to be forwarded to plot slices
+ :param kwargs: keyword arguments to be forwarded to plot slices
+ :return: 2 by 2 array of axes
+ """
+ fig, axs = plt.subplots(2, 2)
+ self.plot_cslice(xyz[1], *args, ax=axs[0, 0], **kwargs)
+ self.plot_sslice(xyz[0], *args, ax=axs[0, 1], **kwargs)
+ self.plot_hslice(xyz[2], *args, ax=axs[1, 0], **kwargs)
+ xyz_um = xyz * 1e6
+ axs[0, 0].plot(xyz_um[0], xyz_um[2], 'g*')
+ axs[0, 1].plot(xyz_um[1], xyz_um[2], 'g*')
+ axs[1, 0].plot(xyz_um[0], xyz_um[1], 'g*')
+ return axs
+
+
+
+[docs]
+ def plot_cslice(self, ap_coordinate, volume='image', mapping=None, region_values=None, **kwargs):
+"""
+ Plot coronal slice through atlas at given ap_coordinate
+
+ :param ap_coordinate: ap coordinate (m)
+ :param volume:
+ - 'image' - allen image volume
+ - 'annotation' - allen annotation volume
+ - 'surface' - outer surface of mesh
+ - 'boundary' - outline of boundaries between all regions
+ - 'volume' - custom volume, must pass in volume of shape ba.image.shape as regions_value argument
+ - 'value' - custom value per allen region, must pass in array of shape ba.regions.id as regions_value argument
+ :param mapping: mapping to use. Options can be found using ba.regions.mappings.keys()
+ :param region_values: custom values to plot
+ - if volume='volume', region_values must have shape ba.image.shape
+ - if volume='value', region_values must have shape ba.regions.id
+ :param **kwargs: matplotlib.pyplot.imshow kwarg arguments
+ :return: matplotlib ax object
+ """
+
+ cslice = self.slice(ap_coordinate, axis=1, volume=volume, mapping=mapping, region_values=region_values)
+ return self._plot_slice(np.moveaxis(cslice, 0, 1), extent=self.extent(axis=1), volume=volume, **kwargs)
+
+
+
+[docs]
+ def plot_hslice(self, dv_coordinate, volume='image', mapping=None, region_values=None, **kwargs):
+"""
+ Plot horizontal slice through atlas at given dv_coordinate
+
+ :param dv_coordinate: dv coordinate (m)
+ :param volume:
+ - 'image' - allen image volume
+ - 'annotation' - allen annotation volume
+ - 'surface' - outer surface of mesh
+ - 'boundary' - outline of boundaries between all regions
+ - 'volume' - custom volume, must pass in volume of shape ba.image.shape as regions_value argument
+ - 'value' - custom value per allen region, must pass in array of shape ba.regions.id as regions_value argument
+ :param mapping: mapping to use. Options can be found using ba.regions.mappings.keys()
+ :param region_values: custom values to plot
+ - if volume='volume', region_values must have shape ba.image.shape
+ - if volume='value', region_values must have shape ba.regions.id
+ :param **kwargs: matplotlib.pyplot.imshow kwarg arguments
+ :return: matplotlib ax object
+ """
+
+ hslice = self.slice(dv_coordinate, axis=2, volume=volume, mapping=mapping, region_values=region_values)
+ return self._plot_slice(hslice, extent=self.extent(axis=2), volume=volume, **kwargs)
+
+
+
+[docs]
+ def plot_sslice(self, ml_coordinate, volume='image', mapping=None, region_values=None, **kwargs):
+"""
+ Plot sagittal slice through atlas at given ml_coordinate
+
+ :param ml_coordinate: ml coordinate (m)
+ :param volume:
+ - 'image' - allen image volume
+ - 'annotation' - allen annotation volume
+ - 'surface' - outer surface of mesh
+ - 'boundary' - outline of boundaries between all regions
+ - 'volume' - custom volume, must pass in volume of shape ba.image.shape as regions_value argument
+ - 'value' - custom value per allen region, must pass in array of shape ba.regions.id as regions_value argument
+ :param mapping: mapping to use. Options can be found using ba.regions.mappings.keys()
+ :param region_values: custom values to plot
+ - if volume='volume', region_values must have shape ba.image.shape
+ - if volume='value', region_values must have shape ba.regions.id
+ :param **kwargs: matplotlib.pyplot.imshow kwarg arguments
+ :return: matplotlib ax object
+ """
+
+ sslice = self.slice(ml_coordinate, axis=0, volume=volume, mapping=mapping, region_values=region_values)
+ return self._plot_slice(np.swapaxes(sslice, 0, 1), extent=self.extent(axis=0), volume=volume, **kwargs)
+
+
+
+[docs]
+ def plot_top(self, volume='annotation', mapping=None, region_values=None, ax=None, **kwargs):
+"""
+ Plot top view of atlas
+ :param volume:
+ - 'image' - allen image volume
+ - 'annotation' - allen annotation volume
+ - 'boundary' - outline of boundaries between all regions
+ - 'volume' - custom volume, must pass in volume of shape ba.image.shape as regions_value argument
+ - 'value' - custom value per allen region, must pass in array of shape ba.regions.id as regions_value argument
+
+ :param mapping: mapping to use. Options can be found using ba.regions.mappings.keys()
+ :param region_values: custom values to plot (see the volume parameter for shape requirements)
+ :param ax: optional matplotlib axis to plot into
+ :param kwargs: matplotlib.pyplot.imshow keyword arguments
+ :return: matplotlib ax object
+ """
+
+ self.compute_surface()
+ ix, iy = np.meshgrid(np.arange(self.bc.nx), np.arange(self.bc.ny))
+ iz = self.bc.z2i(self.top)
+ inds = self._lookup_inds(np.stack((ix, iy, iz), axis=-1))
+
+ regions = self._get_mapping(mapping=mapping)[self.label.flat[inds]]
+
+ if volume == 'annotation':
+ im = self._label2rgb(regions)
+ elif volume == 'image':
+ im = self.top
+ elif volume == 'value':
+ im = region_values[regions]
+ elif volume == 'volume':
+ im = np.zeros((iz.shape))
+ for x in range(im.shape[0]):
+ for y in range(im.shape[1]):
+ im[x, y] = region_values[x, y, iz[x, y]]
+ elif volume == 'boundary':
+ im = self.compute_boundaries(regions)
+
+ return self._plot_slice(im, self.extent(axis=2), ax=ax, volume=volume, **kwargs)
+
+
+
+
+
+[docs]
+@dataclass
+class Trajectory:
+"""
+ 3D Trajectory (usually for a linear probe), minimally defined by a vector and a point.
+
+ Examples
+ --------
+ Instantiate from a best fit from an n by 3 array containing xyz coordinates:
+
+ >>> trj = Trajectory.fit(xyz)
+ """
+ vector: np.ndarray
+ point: np.ndarray
+
+
+[docs]
+ @staticmethod
+ def fit(xyz):
+"""
+ Fits a line to a 3D cloud of points.
+
+ Parameters
+ ----------
+ xyz : numpy.array
+ An n by 3 array containing a cloud of points to fit a line to.
+
+ Returns
+ -------
+ Trajectory
+ A new trajectory object.
+ """
+ xyz_mean = np.mean(xyz, axis=0)
+ return Trajectory(vector=np.linalg.svd(xyz - xyz_mean)[2][0], point=xyz_mean)
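Trajectory.fit takes the first right singular vector of the centred point cloud as the line direction. A self-contained check of that idea on synthetic points generated along a known direction (all values below are made up for the test):

```python
import numpy as np

# Fit a direction to a noiseless 3D point cloud via SVD, as Trajectory.fit does.
# The points are synthetic: t * direction + offset, with a known unit direction.
direction = np.array([1.0, 2.0, 2.0]) / 3.0          # unit vector
offset = np.array([0.1, -0.2, 0.3])
t = np.linspace(0, 1, 20)[:, np.newaxis]
xyz = t * direction + offset

xyz_mean = xyz.mean(axis=0)
vector = np.linalg.svd(xyz - xyz_mean)[2][0]         # first right singular vector
```

Because singular vectors are defined only up to sign, the recovered `vector` may be parallel or anti-parallel to the generating direction; downstream code (e.g. picking the tip as the deepest point) must not assume a particular orientation.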
+
+
+
+[docs]
+ def eval_x(self, x):
+"""
+ Given an array of x coordinates, returns the xyz array of coordinates along the insertion
+ :param x: scalar or numpy array containing x-coordinates
+ :return: n by 3 numpy array containing xyz-coordinates
+ """
+ return self._eval(x, axis=0)
+
+
+
+[docs]
+ def eval_y(self, y):
+"""
+ Given an array of y coordinates, returns the xyz array of coordinates along the insertion
+ :param y: scalar or numpy array containing y-coordinates
+ :return: n by 3 numpy array containing xyz-coordinates
+ """
+ return self._eval(y, axis=1)
+
+
+
+[docs]
+ def eval_z(self, z):
+"""
+ Given an array of z coordinates, returns the xyz array of coordinates along the insertion
+ :param z: scalar or numpy array containing z-coordinates
+ :return: n by 3 numpy array containing xyz-coordinates
+ """
+ return self._eval(z, axis=2)
+
+
+
+[docs]
+ def project(self, point):
+"""
+ Projects a point onto the trajectory line
+ :param point: np.array(x, y, z) coordinates
+ :return: np.array(x, y, z) coordinates of the projection onto the line
+ """
+ # https://mathworld.wolfram.com/Point-LineDistance3-Dimensional.html
+ if point.ndim == 1:
+ return self.project(point[np.newaxis])[0]
+ return (self.point + np.dot(point[:, np.newaxis] - self.point, self.vector) /
+ np.dot(self.vector, self.vector) * self.vector)
+
+
+
+[docs]
+ def mindist(self, xyz, bounds=None):
+"""
+ Computes the minimum distance to the trajectory line for one or a set of points.
+ If bounds are provided, computes the minimum distance to the segment instead of an
+ infinite line.
+ :param xyz: [..., 3] array of coordinates
+ :param bounds: defaults to None. np.array [2, 3]: segment boundaries, infinite line if None
+ :return: minimum distance [...]
+ """
+ proj = self.project(xyz)
+ d = np.sqrt(np.sum((proj - xyz) ** 2, axis=-1))
+ if bounds is not None:
+ # project the boundaries and the points along the trajectory
+ b = np.dot(bounds, self.vector)
+ ob = np.argsort(b)
+ p = np.dot(xyz[:, np.newaxis], self.vector).squeeze()
+ # for points below and above the boundaries, compute cartesian distance to the boundary
+ imin = p < np.min(b)
+ d[imin] = np.sqrt(np.sum((xyz[imin, :] - bounds[ob[0], :]) ** 2, axis=-1))
+ imax = p > np.max(b)
+ d[imax] = np.sqrt(np.sum((xyz[imax, :] - bounds[ob[1], :]) ** 2, axis=-1))
+ return d
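The projection/distance pair above is the standard point-to-line distance construction. A small standalone check using a line along the x-axis, where the distances are known by inspection (the points are made up):

```python
import numpy as np

# Distance from points to the line through `point` with direction `vector`,
# using the same projection formula as Trajectory.project / mindist.
vector = np.array([1.0, 0.0, 0.0])   # line along the x-axis
point = np.array([0.0, 0.0, 0.0])

xyz = np.array([[5.0, 3.0, 4.0],     # a 3-4-5 triangle away from the x-axis
                [2.0, 0.0, 0.0]])    # exactly on the line

proj = point + np.dot(xyz - point, vector)[:, np.newaxis] / np.dot(vector, vector) * vector
d = np.sqrt(np.sum((proj - xyz) ** 2, axis=-1))
```

The first point projects to (5, 0, 0) so its distance is sqrt(3² + 4²) = 5; the second point lies on the line so its distance is 0.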
+
+
+ def _eval(self, c, axis):
+ # uses the symmetric form of the 3d line equation to get xyz coordinates given one coordinate
+ if not isinstance(c, np.ndarray):
+ c = np.array(c)
+ while c.ndim < 2:
+ c = c[..., np.newaxis]
+ # there are cases where it's impossible to project if the line is parallel to the axis
+ if self.vector[axis] == 0:
+ return np.nan * np.zeros((c.shape[0], 3))
+ else:
+ return (c - self.point[axis]) * self.vector / self.vector[axis] + self.point
+
+
+[docs]
+ def exit_points(self, bc):
+"""
+ Given a Trajectory and a BrainCoordinates object, computes the intersection of the
+ trajectory with the brain coordinates bounding box
+ :param bc: BrainCoordinates object
+ :return: np.ndarray 2 by 3 corresponding to exit points xyz coordinates
+ """
+ bounds = np.c_[bc.xlim, bc.ylim, bc.zlim]
+ epoints = np.r_[self.eval_x(bc.xlim), self.eval_y(bc.ylim), self.eval_z(bc.zlim)]
+ epoints = epoints[~np.all(np.isnan(epoints), axis=1)]
+ ind = np.all(np.bitwise_and(bounds[0, :] <= epoints, epoints <= bounds[1, :]), axis=1)
+ return epoints[ind, :]
+
+
+
+
+
+[docs]
+@dataclass
+class Insertion:
+"""
+ Defines an ephys probe insertion in 3D coordinate. IBL conventions.
+
+ To instantiate, use the static methods: `Insertion.from_track` and `Insertion.from_dict`.
+ """
+ x: float
+ y: float
+ z: float
+ phi: float
+ theta: float
+ depth: float
+ label: str = ''
+ beta: float = 0
+
+
+[docs]
+ @staticmethod
+ def from_track(xyzs, brain_atlas=None):
+"""
+ Define an insertion from a track, i.e. an array of trajectory xyz coordinates.
+
+ Parameters
+ ----------
+ xyzs : numpy.array
+ An n by 3 array xyz coordinates representing an insertion trajectory.
+ brain_atlas : BrainAtlas
+ A brain atlas instance, used to attain the point of entry.
+
+ Returns
+ -------
+ Insertion
+ """
+ assert brain_atlas, 'Input argument brain_atlas must be defined'
+ traj = Trajectory.fit(xyzs)
+ # project the deepest point onto the vector to get the tip coordinate
+ tip = traj.project(xyzs[np.argmin(xyzs[:, 2]), :])
+ # get intersection with the brain surface as an entry point
+ entry = Insertion.get_brain_entry(traj, brain_atlas)
+ # convert to spherical system to store the insertion
+ depth, theta, phi = cart2sph(*(entry - tip))
+ insertion_dict = {
+ 'x': entry[0], 'y': entry[1], 'z': entry[2], 'phi': phi, 'theta': theta, 'depth': depth
+ }
+ return Insertion(**insertion_dict)
+
+
+
+[docs]
+ @staticmethod
+ def from_dict(d, brain_atlas=None):
+"""
+ Constructs an Insertion object from the json information stored in probes.description file.
+
+ Parameters
+ ----------
+ d : dict
+ A dictionary containing at least the following keys {'x', 'y', 'z', 'phi', 'theta',
+ 'depth'}. The depth and xyz coordinates must be in um.
+ brain_atlas : BrainAtlas
+ A brain atlas instance (required). The z coordinate is disregarded and the
+ insertion point is locked to the z of the brain surface.
+
+ Returns
+ -------
+ Insertion
+
+ Examples
+ --------
+ >>> tri = {'x': 544.0, 'y': 1285.0, 'z': 0.0, 'phi': 0.0, 'theta': 5.0, 'depth': 4501.0}
+ >>> ins = Insertion.from_dict(tri)
+ """
+ assert brain_atlas, 'Input argument brain_atlas must be defined'
+ z = d['z'] / 1e6
+ if not hasattr(brain_atlas, 'top'):
+ brain_atlas.compute_surface()
+ iy = brain_atlas.bc.y2i(d['y'] / 1e6)
+ ix = brain_atlas.bc.x2i(d['x'] / 1e6)
+ # Only use the brain surface value as z if it isn't NaN (this happens when the surface
+ # touches the edges of the atlas volume)
+ if not np.isnan(brain_atlas.top[iy, ix]):
+ z = brain_atlas.top[iy, ix]
+ return Insertion(x=d['x'] / 1e6, y=d['y'] / 1e6, z=z,
+ phi=d['phi'], theta=d['theta'], depth=d['depth'] / 1e6,
+ beta=d.get('beta', 0), label=d.get('label', ''))
+
+
+ @property
+ def trajectory(self):
+"""
+ Gets the trajectory object matching insertion coordinates
+ :return: atlas.Trajectory
+ """
+ return Trajectory.fit(self.xyz)
+
+ @property
+ def xyz(self):
+ return np.c_[self.entry, self.tip].transpose()
+
+ @property
+ def entry(self):
+ return np.array((self.x, self.y, self.z))
+
+ @property
+ def tip(self):
+ return sph2cart(-self.depth, self.theta, self.phi) + np.array((self.x, self.y, self.z))
+
+ @staticmethod
+ def _get_surface_intersection(traj, brain_atlas, surface='top'):
+"""
+ Find the intersection of a trajectory with the top or bottom surface of the brain.
+
+ Parameters
+ ----------
+ traj : Trajectory
+ The trajectory to intersect with the surface.
+ brain_atlas : BrainAtlas
+ A brain atlas instance; its surface is computed if not already available.
+ surface : {'top', 'bottom'}
+ Which surface of the brain to intersect with.
+
+ Returns
+ -------
+ numpy.array
+ The xyz coordinates of the intersection point.
+ """
+ brain_atlas.compute_surface()
+
+ distance = traj.mindist(brain_atlas.srf_xyz)
+ dist_sort = np.argsort(distance)
+ # In some cases the nearest two intersection points are not the top and bottom of brain
+ # So we find all intersection points that fall within one voxel and take the one with
+ # highest dV to be entry and lowest dV to be exit
+ idx_lim = np.sum(distance[dist_sort] * 1e6 < np.max(brain_atlas.res_um))
+ dist_lim = dist_sort[0:idx_lim]
+ z_val = brain_atlas.srf_xyz[dist_lim, 2]
+ if surface == 'top':
+ ma = np.argmax(z_val)
+ _xyz = brain_atlas.srf_xyz[dist_lim[ma], :]
+ _ixyz = brain_atlas.bc.xyz2i(_xyz)
+ _ixyz[brain_atlas.xyz2dims[2]] += 1
+ elif surface == 'bottom':
+ ma = np.argmin(z_val)
+ _xyz = brain_atlas.srf_xyz[dist_lim[ma], :]
+ _ixyz = brain_atlas.bc.xyz2i(_xyz)
+
+ xyz = brain_atlas.bc.i2xyz(_ixyz.astype(float))
+
+ return xyz
+
+
+[docs]
+ @staticmethod
+ def get_brain_exit(traj, brain_atlas):
+"""
+ Given a Trajectory and a BrainAtlas object, computes the brain exit coordinate as the
+ intersection of the trajectory and the brain surface (brain_atlas.surface)
+ :param traj: atlas.Trajectory object
+ :param brain_atlas: atlas.BrainAtlas object
+ :return: 3 element array x,y,z
+ """
+ # Find point where trajectory intersects with bottom of brain
+ return Insertion._get_surface_intersection(traj, brain_atlas, surface='bottom')
+
+
+
+[docs]
+ @staticmethod
+ def get_brain_entry(traj, brain_atlas):
+"""
+ Given a Trajectory and a BrainAtlas object, computes the brain entry coordinate as the
+ intersection of the trajectory and the brain surface (brain_atlas.surface)
+ :param traj: atlas.Trajectory object
+ :param brain_atlas: atlas.BrainAtlas object
+ :return: 3 element array x,y,z
+ """
+ # Find point where trajectory intersects with top of brain
+ return Insertion._get_surface_intersection(traj, brain_atlas, surface='top')
+
+
+
+
+
+[docs]
+class AllenAtlas(BrainAtlas):
+"""
+ The Allen Common Coordinate Framework (CCF) brain atlas.
+
+ Instantiates an atlas.BrainAtlas corresponding to the Allen CCF at the given resolution
+ using the IBL Bregma and coordinate system.
+ """
+
+"""pathlib.PurePosixPath: The default relative path of the Allen atlas file."""
+ atlas_rel_path = PurePosixPath('histology', 'ATLAS', 'Needles', 'Allen')
+
+"""numpy.array: A diffusion weighted imaging (DWI) image volume.
+
+ The Allen atlas DWI average template volume has the shape (ap, ml, dv) and contains uint16
+ values. FIXME What do the values represent?
+ """
+ image = None
+
+"""numpy.array: An annotation label volume.
+
+ The Allen atlas label volume has the shape (ap, ml, dv) and contains uint16 indices
+ of the Allen CCF brain regions to which each voxel belongs.
+ """
+ label = None
+
+ def __init__(self, res_um=25, scaling=(1, 1, 1), mock=False, hist_path=None):
+"""
+ Instantiates an atlas.BrainAtlas corresponding to the Allen CCF at the given resolution
+ using the IBL Bregma and coordinate system.
+
+ Parameters
+ ----------
+ res_um : {10, 25, 50} int
+ The Atlas resolution in micrometres; one of 10, 25 or 50um.
+ scaling : float, numpy.array
+ Scale factor along ml, ap, dv for squeeze and stretch (default: [1, 1, 1]).
+ mock : bool
+ For testing purposes, return atlas object with image comprising zeros.
+ hist_path : str, pathlib.Path
+ The location of the image volume. May be a full file path or a directory.
+
+ Examples
+ --------
+ Instantiate Atlas from a non-default location, in this case the cache_dir of an ONE instance.
+ >>> target_dir = one.cache_dir / AllenAtlas.atlas_rel_path
+ >>> ba = AllenAtlas(hist_path=target_dir)
+ """
+ LUT_VERSION = 'v01'  # version 01 is the lateralized version
+ regions = BrainRegions()
+ xyz2dims = np.array([1, 0, 2])  # this is the c-contiguous ordering
+ dims2xyz = np.array([1, 0, 2])
+ # we use Bregma as the origin
+ self.res_um = res_um
+ ibregma = (ALLEN_CCF_LANDMARKS_MLAPDV_UM['bregma'] / self.res_um)
+ dxyz = self.res_um * 1e-6 * np.array([1, -1, -1]) * scaling
+ if mock:
+ image, label = [np.zeros((528, 456, 320), dtype=np.int16) for _ in range(2)]
+ label[:, :, 100:105] = 1327  # lookup index for retina, id 304325711 (no id 1327)
+ else:
+ # Hist path may be a full path to an existing image file, or a path to a directory
+ cache_dir = Path(one.params.get(silent=True).CACHE_DIR)
+ hist_path = Path(hist_path or cache_dir.joinpath(self.atlas_rel_path))
+ if not hist_path.suffix:  # check if folder
+ hist_path /= f'average_template_{res_um}.nrrd'
+ # get the image volume
+ if not hist_path.exists():
+ hist_path = _download_atlas_allen(hist_path)
+ # get the remapped label volume
+ file_label = hist_path.with_name(f'annotation_{res_um}.nrrd')
+ if not file_label.exists():
+ file_label = _download_atlas_allen(file_label)
+ file_label_remap = hist_path.with_name(f'annotation_{res_um}_lut_{LUT_VERSION}.npz')
+ if not file_label_remap.exists():
+ label = self._read_volume(file_label).astype(dtype=np.int32)
+ _logger.info("Computing brain atlas annotations lookup table")
+ # lateralize atlas: for this the regions of the left hemisphere have primary
+ # keys opposite to the normal ones
+ lateral = np.zeros(label.shape[xyz2dims[0]])
+ lateral[int(np.floor(ibregma[0]))] = 1
+ lateral = np.sign(np.cumsum(lateral)[np.newaxis, :, np.newaxis] - 0.5)
+ label = label * lateral.astype(np.int32)
+ # the 10 um atlas is too big to fit in memory so work by chunks instead
+ if res_um == 10:
+ first, ncols = (0, 10)
+ while True:
+ last = np.minimum(first + ncols, label.shape[-1])
+ _logger.info(f"Computing... {last} on {label.shape[-1]}")
+ _, im = ismember(label[:, :, first:last], regions.id)
+ label[:, :, first:last] = np.reshape(im, label[:, :, first:last].shape)
+ if last == label.shape[-1]:
+ break
+ first += ncols
+ label = label.astype(dtype=np.uint16)
+ _logger.info("Saving npz, this can take a long time")
+ else:
+ _, im = ismember(label, regions.id)
+ label = np.reshape(im.astype(np.uint16), label.shape)
+ np.savez_compressed(file_label_remap, label)
+ _logger.info(f"Cached remapping file {file_label_remap} ...")
+ # loads the files
+ label = self._read_volume(file_label_remap)
+ image = self._read_volume(hist_path)
+
+ super().__init__(image, label, dxyz, regions, ibregma, dims2xyz=dims2xyz, xyz2dims=xyz2dims)
+
+    @staticmethod
+    def _read_volume(file_volume):
+        if file_volume.suffix == '.nrrd':
+            volume, _ = nrrd.read(file_volume, index_order='C')  # ml, dv, ap
+            # we want the coronal slice to be the most contiguous
+            volume = np.transpose(volume, (2, 0, 1))  # image[iap, iml, idv]
+        elif file_volume.suffix == '.npz':
+            volume = np.load(file_volume)['arr_0']
+        return volume
+
+
+    def xyz2ccf(self, xyz, ccf_order='mlapdv', mode='raise'):
+        """
+        Converts anatomical coordinates to CCF coordinates.
+
+        Anatomical coordinates are in meters, relative to bregma, while CCF coordinates are
+        assumed to be the volume indices multiplied by the spacing in micrometers.
+
+        Parameters
+        ----------
+        xyz : numpy.array
+            An N by 3 array of anatomical coordinates in meters, relative to bregma.
+        ccf_order : {'mlapdv', 'apdvml'}, default='mlapdv'
+            The order of the CCF coordinates returned. For IBL (the default) this is (ML, AP, DV),
+            for Allen MCC vertices, this is (AP, DV, ML).
+        mode : {'raise', 'clip', 'wrap'}, default='raise'
+            How to behave if the coordinate lies outside of the volume: 'raise' (default) will raise
+            a ValueError; 'clip' will replace the index with the closest index inside the volume;
+            'wrap' will return the index as is.
+
+        Returns
+        -------
+        numpy.array
+            Coordinates in CCF space (um, origin is the front left top corner of the data
+            volume, order determined by ccf_order).
+        """
+        ordre = self._ccf_order(ccf_order)
+        ccf = self.bc.xyz2i(xyz, round=False, mode=mode) * float(self.res_um)
+        return ccf[..., ordre]
+
+
+
+    def ccf2xyz(self, ccf, ccf_order='mlapdv'):
+        """
+        Convert anatomical coordinates from CCF coordinates.
+
+        Anatomical coordinates are in meters, relative to bregma, while CCF coordinates are
+        assumed to be the volume indices multiplied by the spacing in micrometers.
+
+        Parameters
+        ----------
+        ccf : numpy.array
+            An N by 3 array of coordinates in CCF space (atlas volume indices * um resolution). The
+            origin is the front left top corner of the data volume.
+        ccf_order : {'mlapdv', 'apdvml'}, default='mlapdv'
+            The order of the CCF coordinates given. For IBL (the default) this is (ML, AP, DV),
+            for Allen MCC vertices, this is (AP, DV, ML).
+
+        Returns
+        -------
+        numpy.array
+            The MLAPDV coordinates in meters, relative to bregma.
+        """
+        ordre = self._ccf_order(ccf_order, reverse=True)
+        return self.bc.i2xyz((ccf[..., ordre] / float(self.res_um)))
+
+
+    @staticmethod
+    def _ccf_order(ccf_order, reverse=False):
+        """
+        Returns the mapping to go from CCF coordinates order to the brain atlas xyz
+        :param ccf_order: 'mlapdv' or 'apdvml'
+        :param reverse: defaults to False.
+            If False, returns from CCF to brain atlas
+            If True, returns from brain atlas to CCF
+        :return:
+        """
+        if ccf_order == 'mlapdv':
+            return [0, 1, 2]
+        elif ccf_order == 'apdvml':
+            if reverse:
+                return [2, 0, 1]
+            else:
+                return [1, 2, 0]
+        else:
+            raise ValueError("ccf_order needs to be either 'mlapdv' or 'apdvml'")
+
+
+    def compute_regions_volume(self, cumsum=False):
+        """
+        Sums the number of voxels in the labels volume for each region.
+        Then computes the volumes for all of the levels of hierarchy in cubic mm.
+        :param cumsum: computes the cumulative sum of the volume as per the hierarchy (defaults to False)
+        :return:
+        """
+        nr = self.regions.id.shape[0]
+        count = np.bincount(self.label.flatten(), minlength=nr)
+        if not cumsum:
+            self.regions.volume = count * (self.res_um / 1e3) ** 3
+        else:
+            self.regions.compute_hierarchy()
+            self.regions.volume = np.zeros_like(count)
+            for i in np.arange(nr):
+                if count[i] == 0:
+                    continue
+                self.regions.volume[np.unique(self.regions.hierarchy[:, i])] += count[i]
+            self.regions.volume = self.regions.volume * (self.res_um / 1e3) ** 3
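The axis permutations returned by `_ccf_order` are the whole trick behind `xyz2ccf` and `ccf2xyz`; a minimal standalone sketch (made-up coordinates, no atlas volume required) shows that the forward and reverse index lists round-trip:

```python
import numpy as np

# Made-up (ML, AP, DV) point in micrometers; the permutation lists mirror
# _ccf_order('apdvml') with reverse=False and reverse=True.
mlapdv = np.array([[500.0, 1200.0, 300.0]])
to_apdvml = [1, 2, 0]   # brain-atlas xyz -> CCF (AP, DV, ML)
to_mlapdv = [2, 0, 1]   # CCF (AP, DV, ML) -> brain-atlas xyz

apdvml = mlapdv[..., to_apdvml]
roundtrip = apdvml[..., to_mlapdv]
print(apdvml.tolist())                  # [[1200.0, 300.0, 500.0]]
print(bool(np.allclose(roundtrip, mlapdv)))  # True
```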
+
+
+
+
+
+def NeedlesAtlas(*args, **kwargs):
+"""
+ Instantiates an atlas.BrainAtlas corresponding to the Allen CCF at the given resolution
+    using the IBL Bregma and coordinate system. The Needles atlas defines a stretch along the AP
+    axis and a squeeze along the DV axis.
+
+ Parameters
+ ----------
+ res_um : {10, 25, 50} int
+ The Atlas resolution in micrometres; one of 10, 25 or 50um.
+ **kwargs
+ See AllenAtlas.
+
+ Returns
+ -------
+ AllenAtlas
+ An Allen atlas object with MRI atlas scaling applied.
+
+ Notes
+ -----
+ The scaling was determined by manually transforming the DSURQE atlas [1]_ onto the Allen CCF.
+ The DSURQE atlas is an MRI atlas acquired from 40 C57BL/6J mice post-mortem, with 40um
+ isometric resolution. The alignment was performed by Mayo Faulkner.
+ The atlas data can be found `here <http://repo.mouseimaging.ca/repo/DSURQE_40micron_nifti/>`__.
+ More information on the dataset and segmentation can be found
+ `here <http://repo.mouseimaging.ca/repo/DSURQE_40micron/notes_on_DSURQE_atlas>`__.
+
+ References
+ ----------
+ .. [1] Dorr AE, Lerch JP, Spring S, Kabani N, Henkelman RM (2008). High resolution
+ three-dimensional brain atlas using an average magnetic resonance image of 40 adult C57Bl/6J
+ mice. Neuroimage 42(1):60-9. [doi 10.1016/j.neuroimage.2008.03.037]
+ """
+    DV_SCALE = 0.952  # multiplicative factor on DV dimension, determined from MRI->CCF transform
+    AP_SCALE = 1.087  # multiplicative factor on AP dimension
+    kwargs['scaling'] = np.array([1, AP_SCALE, DV_SCALE])
+    return AllenAtlas(*args, **kwargs)
+
+
+
+
+def MRITorontoAtlas(*args, **kwargs):
+"""
+ The MRI Toronto brain atlas.
+
+ Instantiates an atlas.BrainAtlas corresponding to the Allen CCF at the given resolution
+    using the IBL Bregma and coordinate system. The MRI Toronto atlas defines a stretch along the AP
+    axis, a squeeze along DV *and* a squeeze along ML. These are based on 12 p65 mice MRIs averaged [1]_.
+
+ Parameters
+ ----------
+ res_um : {10, 25, 50} int
+ The Atlas resolution in micrometres; one of 10, 25 or 50um.
+ **kwargs
+ See AllenAtlas.
+
+ Returns
+ -------
+ AllenAtlas
+ An Allen atlas object with MRI atlas scaling applied.
+
+ References
+ ----------
+ .. [1] Qiu, LR, Fernandes, DJ, Szulc-Lerch, KU et al. (2018) Mouse MRI shows brain areas
+ relatively larger in males emerge before those larger in females. Nat Commun 9, 2615.
+ [doi 10.1038/s41467-018-04921-2]
+ """
+    ML_SCALE = 0.952
+    DV_SCALE = 0.885  # multiplicative factor on DV dimension, determined from MRI->CCF transform
+    AP_SCALE = 1.031  # multiplicative factor on AP dimension
+    kwargs['scaling'] = np.array([ML_SCALE, AP_SCALE, DV_SCALE])
+    return AllenAtlas(*args, **kwargs)
+class FranklinPaxinosAtlas(BrainAtlas):
+
+    """pathlib.PurePosixPath: The default relative path of the atlas file."""
+    atlas_rel_path = PurePosixPath('histology', 'ATLAS', 'Needles', 'FranklinPaxinos')
+
+    def __init__(self, res_um=(10, 100, 10), scaling=(1, 1, 1), mock=False, hist_path=None):
+        """The Franklin & Paxinos brain atlas.
+
+ Instantiates an atlas.BrainAtlas corresponding to the Franklin & Paxinos atlas [1]_ at the
+ given resolution, matched to the Allen coordinate Framework [2]_ and using the IBL Bregma
+        and coordinate system. The Franklin & Paxinos volume has a resolution of 10um in the ML and DV
+        axes and 100um in the AP direction.
+
+ Parameters
+ ----------
+ res_um : list, numpy.array
+            The Atlas resolution in micrometres in each dimension.
+ scaling : float, numpy.array
+ Scale factor along ml, ap, dv for squeeze and stretch (default: [1, 1, 1]).
+ mock : bool
+ For testing purposes, return atlas object with image comprising zeros.
+ hist_path : str, pathlib.Path
+ The location of the image volume. May be a full file path or a directory.
+
+ Examples
+ --------
+ Instantiate Atlas from a non-default location, in this case the cache_dir of an ONE instance.
+ >>> target_dir = one.cache_dir / AllenAtlas.atlas_rel_path
+        >>> ba = FranklinPaxinosAtlas(hist_path=target_dir)
+
+ References
+ ----------
+ .. [1] Paxinos G, and Franklin KBJ (2012) The Mouse Brain in Stereotaxic Coordinates, 4th
+ edition (Elsevier Academic Press)
+ .. [2] Chon U et al (2019) Enhanced and unified anatomical labeling for a common mouse
+ brain atlas [doi 10.1038/s41467-019-13057-w]
+ """
+        # TODO interpolate?
+        LUT_VERSION = 'v01'  # version 01 is the lateralized version
+        regions = FranklinPaxinosRegions()
+        xyz2dims = np.array([1, 0, 2])  # this is the c-contiguous ordering
+        dims2xyz = np.array([1, 0, 2])
+        # we use Bregma as the origin
+        self.res_um = np.asarray(res_um)
+        ibregma = (PAXINOS_CCF_LANDMARKS_MLAPDV_UM['bregma'] / self.res_um)
+        dxyz = self.res_um * 1e-6 * np.array([1, -1, -1]) * scaling
+        if mock:
+            image, label = [np.zeros((528, 456, 320), dtype=np.int16) for _ in range(2)]
+            label[:, :, 100:105] = 1327  # lookup index for retina, id 304325711 (no id 1327)
+        else:
+            # Hist path may be a full path to an existing image file, or a path to a directory
+            cache_dir = Path(one.params.get(silent=True).CACHE_DIR)
+            hist_path = Path(hist_path or cache_dir.joinpath(self.atlas_rel_path))
+            if not hist_path.suffix:  # check if folder
+                hist_path /= f'average_template_{res_um[0]}_{res_um[1]}_{res_um[2]}.npz'
+
+            # get the image volume
+            if not hist_path.exists():
+                hist_path.parent.mkdir(exist_ok=True, parents=True)
+                aws.s3_download_file(f'atlas/FranklinPaxinos/{hist_path.name}', str(hist_path))
+            # get the remapped label volume
+            file_label = hist_path.with_name(f'annotation_{res_um[0]}_{res_um[1]}_{res_um[2]}.npz')
+            if not file_label.exists():
+                file_label.parent.mkdir(exist_ok=True, parents=True)
+                aws.s3_download_file(f'atlas/FranklinPaxinos/{file_label.name}', str(file_label))
+
+            file_label_remap = hist_path.with_name(f'annotation_{res_um[0]}_{res_um[1]}_{res_um[2]}_lut_{LUT_VERSION}.npz')
+
+            if not file_label_remap.exists():
+                label = self._read_volume(file_label).astype(dtype=np.int32)
+                _logger.info("computing brain atlas annotations lookup table")
+                # lateralize atlas: for this the regions of the left hemisphere have primary
+                # keys opposite to the normal ones
+                lateral = np.zeros(label.shape[xyz2dims[0]])
+                lateral[int(np.floor(ibregma[0]))] = 1
+                lateral = np.sign(np.cumsum(lateral)[np.newaxis, :, np.newaxis] - 0.5)
+                label = label * lateral.astype(np.int32)
+                _, im = ismember(label, regions.id)
+                label = np.reshape(im.astype(np.uint16), label.shape)
+                np.savez_compressed(file_label_remap, label)
+                _logger.info(f"Cached remapping file {file_label_remap} ...")
+            # loads the files
+            label = self._read_volume(file_label_remap)
+            image = self._read_volume(hist_path)
+
+        super().__init__(image, label, dxyz, regions, ibregma, dims2xyz=dims2xyz, xyz2dims=xyz2dims)
+
+    @staticmethod
+    def _read_volume(file_volume):
+        """
+ Loads an atlas image volume given a file path.
+
+ Parameters
+ ----------
+ file_volume : pathlib.Path
+ The file path of an image volume. Currently supports .nrrd and .npz files.
+
+ Returns
+ -------
+ numpy.array
+ The loaded image volume with dimensions (ap, ml, dv).
+
+ Raises
+ ------
+ ValueError
+ Unknown file extension, expects either '.nrrd' or '.npz'.
+ """
+        if file_volume.suffix == '.nrrd':
+            volume, _ = nrrd.read(file_volume, index_order='C')  # ml, dv, ap
+            # we want the coronal slice to be the most contiguous
+            volume = np.transpose(volume, (2, 0, 1))  # image[iap, iml, idv]
+        elif file_volume.suffix == '.npz':
+            volume = np.load(file_volume)['arr_0']
+        else:
+            raise ValueError(
+                f'"{file_volume.suffix}" files not supported, must be either ".nrrd" or ".npz"')
+        return volume
+class FlatMap(AllenAtlas):
+    """The Allen Atlas flatmap.
+
+    FIXME Document! How are these flatmaps determined? Are they related to the Swanson atlas or is
+    that something else?
+ """
+
+    def __init__(self, flatmap='dorsal_cortex', res_um=25):
+        """
+        Available flatmaps are currently 'dorsal_cortex', 'circles' and 'pyramid'
+        :param flatmap:
+        :param res_um:
+        """
+        super().__init__(res_um=res_um)
+        self.name = flatmap
+        if flatmap == 'dorsal_cortex':
+            self._get_flatmap_from_file()
+        elif flatmap == 'circles':
+            if res_um != 25:
+                raise NotImplementedError('Circles not implemented for resolution other than 25um')
+            self.flatmap, self.ml_scale, self.ap_scale = circles(N=5, atlas=self, display='flat')
+        elif flatmap == 'pyramid':
+            if res_um != 25:
+                raise NotImplementedError('Pyramid not implemented for resolution other than 25um')
+            self.flatmap, self.ml_scale, self.ap_scale = circles(N=5, atlas=self, display='pyramid')
+
+    def _get_flatmap_from_file(self):
+        # gets the file in the ONE cache for the flatmap name in the property, downloads it if needed
+        file_flatmap = self._get_cache_dir().joinpath(f'{self.name}_{self.res_um}.nrrd')
+        if not file_flatmap.exists():
+            file_flatmap.parent.mkdir(exist_ok=True, parents=True)
+            aws.s3_download_file(f'atlas/{file_flatmap.name}', file_flatmap)
+        self.flatmap, _ = nrrd.read(file_flatmap)
+
+
+    def plot_flatmap(self, depth=0, volume='annotation', mapping='Allen', region_values=None, ax=None, **kwargs):
+        """
+ Displays the 2D image corresponding to the flatmap.
+
+ If there are several depths, by default it will display the first one.
+
+ Parameters
+ ----------
+ depth : int
+ Index of the depth to display in the flatmap volume (the last dimension).
+ volume : {'image', 'annotation', 'boundary', 'value'}
+ - 'image' - Allen image volume.
+ - 'annotation' - Allen annotation volume.
+ - 'boundary' - outline of boundaries between all regions.
+            - 'value' - custom volume, must pass in volume of shape BrainAtlas.image.shape as
+              region_values argument.
+ mapping : str, default='Allen'
+ The brain region mapping to use.
+ region_values : numpy.array
+            An array the shape of the brain atlas image containing custom region values. Used when
+            `volume` is 'value'.
+ ax : matplotlib.pyplot.Axes, optional
+ A set of axes to plot to.
+ **kwargs
+ See matplotlib.pyplot.imshow.
+
+ Returns
+ -------
+ matplotlib.pyplot.Axes
+ The plotted image axes.
+ """
+        if self.flatmap.ndim == 3:
+            inds = np.int32(self.flatmap[:, :, depth])
+        else:
+            inds = np.int32(self.flatmap[:, :])
+        regions = self._get_mapping(mapping=mapping)[self.label.flat[inds]]
+        if volume == 'annotation':
+            im = self._label2rgb(regions)
+        elif volume == 'value':
+            im = region_values[regions]
+        elif volume == 'boundary':
+            im = self.compute_boundaries(regions)
+        elif volume == 'image':
+            im = self.image.flat[inds]
+        else:
+            raise ValueError(f'Volume type "{volume}" not supported')
+        if not ax:
+            ax = plt.gca()
+
+        return self._plot_slice(im, self.extent_flmap(), ax=ax, volume=volume, **kwargs)
+
+
+
+    def extent_flmap(self):
+        """
+        Returns the boundary coordinates of the flat map.
+
+        Returns
+        -------
+        numpy.array
+            The bounding coordinates of the flat map image, specified as (left, right, bottom, top).
+        """
+        extent = np.r_[0, self.flatmap.shape[1], 0, self.flatmap.shape[0]]
+        return extent
+def swanson(filename="swanson2allen.npz"):
+    """
+    FIXME Document! Which publication to reference? Are these specifically for flat maps?
+    Shouldn't this be made into an Atlas class with a mapping or scaling applied?
+
+    Parameters
+    ----------
+    filename : str
+        The name of the Swanson flatmap npz file to load.
+
+    Returns
+    -------
+    numpy.array
+        The Swanson flatmap image, where each element contains a region index.
+    """
+    # filename could be "swanson2allen_original.npz", or "swanson2allen.npz" for remapped indices to match
+    # existing labels in the brain atlas
+    OLD_MD5 = [
+        'bb0554ecc704dd4b540151ab57f73822',  # version 2022-05-02 (remapped)
+        '7722c1307cf9a6f291ad7632e5dcc88b',  # version 2022-05-09 (removed wolf pixels and 2 artefact regions)
+    ]
+    npz_file = AllenAtlas._get_cache_dir().joinpath(filename)
+    if not npz_file.exists() or md5(npz_file) in OLD_MD5:
+        npz_file.parent.mkdir(exist_ok=True, parents=True)
+        _logger.info(f'downloading swanson image from {aws.S3_BUCKET_IBL} s3 bucket...')
+        aws.s3_download_file(f'atlas/{npz_file.name}', npz_file)
+    s2a = np.load(npz_file)['swanson2allen']  # inds contains regions ids
+    return s2a
+
+
+
+
+def swanson_json(filename="swansonpaths.json", remap=True):
+    """
+    Vectorized version of the swanson bitmap file. The vectorized version was generated from swanson() using matlab
+    contour to find the paths for each region. The paths for each region were then simplified using the
+    Ramer Douglas Peucker algorithm https://rdp.readthedocs.io/en/latest/
+
+    Parameters
+    ----------
+    filename : str
+        The name of the Swanson paths json file to load.
+    remap : bool
+        Whether to remap regions to the parent regions contained in the annotation volume.
+
+    Returns
+    -------
+    list of dict
+        The vectorised paths for each region.
+    """
+    OLD_MD5 = ['97ccca2b675b28ba9b15ca8af5ba4111',  # errored map with FOTU and CUL4, 5 mixed up
+               '56daa7022b5e03080d8623814cda6f38',  # old md5 of swanson json without CENT and PTLp
+                                                    # and CUL4 split (on s3 called swansonpaths_56daa.json)
+               'f848783954883c606ca390ceda9e37d2']
+
+    json_file = AllenAtlas._get_cache_dir().joinpath(filename)
+    if not json_file.exists() or md5(json_file) in OLD_MD5:
+        json_file.parent.mkdir(exist_ok=True, parents=True)
+        _logger.info(f'downloading swanson paths from {aws.S3_BUCKET_IBL} s3 bucket...')
+        aws.s3_download_file(f'atlas/{json_file.name}', json_file, overwrite=True)
+
+    with open(json_file) as f:
+        sw_json = json.load(f)
+
+    # The swanson contains regions that are children of regions contained within the Allen
+    # annotation volume. Here we remap these regions to the parent that is contained within the
+    # annotation volume
+    if remap:
+        id_map = {391: [392, 393, 394, 395, 396],
+                  474: [483, 487],
+                  536: [537, 541],
+                  601: [602, 603, 604, 608],
+                  622: [624, 625, 626, 627, 628, 629, 630, 631, 632, 634, 635, 636, 637, 638],
+                  686: [687, 688, 689],
+                  708: [709, 710],
+                  721: [723, 724, 726, 727, 729, 730, 731],
+                  740: [741, 742, 743],
+                  758: [759, 760, 761, 762],
+                  771: [772, 773],
+                  777: [778, 779, 780],
+                  788: [789, 790, 791, 792],
+                  835: [836, 837, 838],
+                  891: [894, 895, 896, 897, 898, 900, 901, 902],
+                  926: [927, 928],
+                  949: [950, 951, 952, 953, 954],
+                  957: [958, 959, 960, 961, 962],
+                  999: [1000, 1001],
+                  578: [579, 580]}
+
+        rev_map = {}
+        for k, vals in id_map.items():
+            for v in vals:
+                rev_map[v] = k
+
+        for sw in sw_json:
+            sw['thisID'] = rev_map.get(sw['thisID'], sw['thisID'])
+
+    return sw_json
+
+
+
+@lru_cache(maxsize=None)
+def _swanson_labels_positions(thres=20000):
+    """
+    Computes label positions to overlay on the Swanson flatmap.
+
+    Parameters
+    ----------
+    thres : int, default=20000
+        The number of pixels above which a region is labeled.
+
+    Returns
+    -------
+    dict of str
+        A map of brain acronym to a tuple of x y coordinates.
+    """
+    s2a = swanson()
+    iw, ih = np.meshgrid(np.arange(s2a.shape[1]), np.arange(s2a.shape[0]))
+    # compute the center of mass of all regions (fast enough to do on the fly)
+    bc = np.maximum(1, np.bincount(s2a.flatten()))
+    cmw = np.bincount(s2a.flatten(), weights=iw.flatten()) / bc
+    cmh = np.bincount(s2a.flatten(), weights=ih.flatten()) / bc
+    bc[0] = 1
+
+    NWH, NWW = (200, 600)
+    h, w = s2a.shape
+    labels = {}
+    for ilabel in np.where(bc > thres)[0]:
+        x, y = (cmw[ilabel], cmh[ilabel])
+        # the polygon is convex and the label is outside. Dammit !!!
+        if s2a[int(y), int(x)] != ilabel:
+            # find the nearest point to the center of mass
+            ih, iw = np.where(s2a == ilabel)
+            iimin = np.argmin(np.abs((x - iw) + 1j * (y - ih)))
+            # get the center of mass of a window around this point
+            sh = np.arange(np.maximum(0, ih[iimin] - NWH), np.minimum(ih[iimin] + NWH, h))
+            sw = np.arange(np.maximum(0, iw[iimin] - NWW), np.minimum(iw[iimin] + NWW, w))
+            roi = s2a[sh][:, sw] == ilabel
+            roi = roi / np.sum(roi)
+            # ax.plot(x, y, 'k+')
+            # ax.plot(iw[iimin], ih[iimin], '*k')
+            x = sw[np.searchsorted(np.cumsum(np.sum(roi, axis=0)), .5) - 1]
+            y = sh[np.searchsorted(np.cumsum(np.sum(roi, axis=1)), .5) - 1]
+            # ax.plot(x, y, 'r+')
+        labels[ilabel] = (x, y)
+    return labels
+
+def allen_gene_expression(filename='gene-expression.pqt', folder_cache=None):
+    """
+    Reads in the Allen gene expression experiments binary data.
+    :param filename:
+    :param folder_cache:
+    :return: a dataframe of experiments, where each record corresponds to a single gene expression,
+        and a memmap of all experiments volumes, size (4345, 58, 41, 67) corresponding to
+        (nexperiments, ml, dv, ap). The spacing between slices is 200 um.
+    """
+    OLD_MD5 = []
+    DIM_EXP = (4345, 58, 41, 67)
+    folder_cache = folder_cache or AllenAtlas._get_cache_dir().joinpath(filename)
+    file_parquet = Path(folder_cache).joinpath('gene-expression.pqt')
+    file_bin = file_parquet.with_suffix(".bin")
+
+    if not file_parquet.exists() or md5(file_parquet) in OLD_MD5:
+        file_parquet.parent.mkdir(exist_ok=True, parents=True)
+        _logger.info(f'downloading gene expression data from {aws.S3_BUCKET_IBL} s3 bucket...')
+        aws.s3_download_file(f'atlas/{file_parquet.name}', file_parquet)
+        aws.s3_download_file(f'atlas/{file_bin.name}', file_bin)
+    df_genes = pd.read_parquet(file_parquet)
+    gexp_all = np.memmap(file_bin, dtype=np.float16, mode='r', offset=0, shape=DIM_EXP)
+    return df_genes, gexp_all
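The binary volume above is read lazily with `np.memmap` rather than loaded into RAM; a self-contained sketch of the same pattern, with a tiny made-up shape standing in for `(4345, 58, 41, 67)`:

```python
import os
import tempfile

import numpy as np

# Write a small float16 volume to disk, then map it back read-only,
# as allen_gene_expression does for the full gene-expression binary.
shape = (2, 3, 4, 5)  # made-up stand-in for DIM_EXP
data = np.arange(np.prod(shape), dtype=np.float16).reshape(shape)
path = os.path.join(tempfile.mkdtemp(), 'gene-expression.bin')
data.tofile(path)

gexp = np.memmap(path, dtype=np.float16, mode='r', offset=0, shape=shape)
print(gexp.shape)               # (2, 3, 4, 5)
print(float(gexp[1, 2, 3, 4]))  # 119.0
```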
+def plot_polygon(ax, xy, color, reg_id, edgecolor='k', linewidth=0.3, alpha=1):
+    """
+    Function to plot a matplotlib polygon on an axis.
+
+    Parameters
+    ----------
+    ax : matplotlib.pyplot.Axes
+        An axis object to plot onto.
+    xy : numpy.array
+        2D array of x and y coordinates of the vertices of the polygon.
+    color : str, tuple of int
+        The color to fill the polygon.
+    reg_id : str, int
+        An id to assign to the polygon.
+    edgecolor : str, tuple of int
+        The color of the edge of the polygon.
+    linewidth : int
+        The width of the edges of the polygon.
+    alpha : float between 0 and 1
+        The opacity of the polygon.
+    """
+    p = Polygon(xy, facecolor=color, edgecolor=edgecolor, linewidth=linewidth, alpha=alpha, gid=f'region_{reg_id}')
+    ax.add_patch(p)
+
+
+
+
+def plot_polygon_with_hole(ax, vertices, codes, color, reg_id, edgecolor='k', linewidth=0.3, alpha=1):
+    """
+    Function to plot a matplotlib polygon that contains a hole on an axis.
+
+    Parameters
+    ----------
+    ax : matplotlib.pyplot.Axes
+        An axis object to plot onto.
+    vertices : numpy.array
+        2D array of x and y coordinates of the vertices of the polygon.
+    codes : numpy.array
+        1D array of path codes used to link the vertices
+        (https://matplotlib.org/stable/tutorials/advanced/path_tutorial.html).
+    color : str, tuple of int
+        The color to fill the polygon.
+    reg_id : str, int
+        An id to assign to the polygon.
+    edgecolor : str, tuple of int
+        The color of the edge of the polygon.
+    linewidth : int
+        The width of the edges of the polygon.
+    alpha : float between 0 and 1
+        The opacity of the polygon.
+    """
+    path = mpath.Path(vertices, codes)
+    patch = PathPatch(path, facecolor=color, edgecolor=edgecolor, linewidth=linewidth, alpha=alpha, gid=f'region_{reg_id}')
+    ax.add_patch(patch)
+
+
+
+
+def coords_for_poly_hole(coords):
+    """
+    Function to convert a polygon with holes into the vertices and path codes needed for plotting.
+
+    Parameters
+    ----------
+    coords : dict
+        Dictionary containing keys x, y and invert. x and y contain numpy.array of x coordinates, y coordinates
+        for the vertices of the polygon. The invert key is either 1 or -1 and determines how to assign the paths.
+        The value of invert for each polygon was assigned manually after looking at the result.
+
+    Returns
+    -------
+    all_coords : numpy.array
+        2D array of x and y coordinates of the vertices of the polygon.
+    all_codes : numpy.array
+        1D array of path codes used to link the vertices
+        (https://matplotlib.org/stable/tutorials/advanced/path_tutorial.html).
+    """
+    for i, c in enumerate(coords):
+        xy = np.c_[c['x'], c['y']]
+        codes = np.ones(len(xy), dtype=mpath.Path.code_type) * mpath.Path.LINETO
+        codes[0] = mpath.Path.MOVETO
+        if i == 0:
+            val = c.get('invert', 1)
+            all_coords = xy[::val]
+            all_codes = codes
+        else:
+            codes[-1] = mpath.Path.CLOSEPOLY
+            val = c.get('invert', -1)
+            all_coords = np.concatenate((all_coords, xy[::val]))
+            all_codes = np.concatenate((all_codes, codes))
+
+    return all_coords, all_codes
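The MOVETO/LINETO/CLOSEPOLY bookkeeping above can be sketched standalone; this toy example uses made-up square rings and integer values matching matplotlib's `Path` code constants (MOVETO=1, LINETO=2, CLOSEPOLY=79):

```python
import numpy as np

MOVETO, LINETO, CLOSEPOLY = 1, 2, 79  # matplotlib Path code values

outer = np.array([[0, 0], [4, 0], [4, 4], [0, 4]], dtype=float)
hole = np.array([[1, 1], [1, 3], [3, 3], [3, 1]], dtype=float)

def ring_codes(n, close=False):
    # first vertex starts a subpath; later rings close their subpath
    codes = np.full(n, LINETO)
    codes[0] = MOVETO
    if close:
        codes[-1] = CLOSEPOLY
    return codes

verts = np.concatenate([outer, hole])
codes = np.concatenate([ring_codes(len(outer)), ring_codes(len(hole), close=True)])
print(codes.tolist())  # [1, 2, 2, 2, 1, 2, 2, 79]
```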
+
+
+
+
+def prepare_lr_data(acronyms_lh, values_lh, acronyms_rh, values_rh):
+    """
+    Prepare data in the format needed for plotting when providing different region values per hemisphere.
+
+    :param acronyms_lh: array of acronyms on the left hemisphere
+    :param values_lh: values for each acronym on the left hemisphere
+    :param acronyms_rh: array of acronyms on the right hemisphere
+    :param values_rh: values for each acronym on the right hemisphere
+    :return: combined acronyms and two column array of values
+    """
+
+    acronyms = np.unique(np.r_[acronyms_lh, acronyms_rh])
+    values = np.nan * np.ones((acronyms.shape[0], 2))
+    _, l_idx = ismember(acronyms_lh, acronyms)
+    _, r_idx = ismember(acronyms_rh, acronyms)
+    values[l_idx, 0] = values_lh
+    values[r_idx, 1] = values_rh
+
+    return acronyms, values
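`prepare_lr_data` relies on `ismember` from iblutil; since `np.unique` returns a sorted array, the same alignment can be sketched with `np.searchsorted` alone (toy acronyms and values, not real atlas data):

```python
import numpy as np

acronyms_lh = np.array(['CA1', 'VISp'])
values_lh = np.array([1.0, 2.0])
acronyms_rh = np.array(['CA1', 'MOs'])
values_rh = np.array([3.0, 4.0])

# sorted union of both hemispheres' acronyms
acronyms = np.unique(np.r_[acronyms_lh, acronyms_rh])
values = np.full((acronyms.size, 2), np.nan)
values[np.searchsorted(acronyms, acronyms_lh), 0] = values_lh
values[np.searchsorted(acronyms, acronyms_rh), 1] = values_rh
print(acronyms.tolist())  # ['CA1', 'MOs', 'VISp']
```

Regions present in only one hemisphere keep NaN in the other column, which is exactly what the plotting functions expect for unassigned regions.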
+
+
+
+
+def reorder_data(acronyms, values, brain_regions=None):
+"""
+ Reorder list of acronyms and values to match the Allen ordering.
+
+ TODO Document more
+
+ Parameters
+ ----------
+ acronyms : array_like of str
+ The acronyms to match the Allen ordering, whatever that means.
+ values : array_like
+ An array of some sort of values I guess...
+ brain_regions : iblatlas.regions.BrainRegions
+ A brain regions object.
+
+ Returns
+ -------
+ numpy.array of str
+ An ordered array of acronyms
+ numpy.array
+ An ordered array of values. I don't know what those values are, not IDs, so maybe indices?
+ """
+
+    br = brain_regions or BrainRegions()
+    atlas_id = br.acronym2id(acronyms, hemisphere='right')
+    all_ids = br.id[br.order][:br.n_lr + 1]
+    ordered_ids = np.zeros_like(all_ids) * np.nan
+    ordered_values = np.zeros_like(all_ids) * np.nan
+    _, idx = ismember(atlas_id, all_ids)
+    ordered_ids[idx] = atlas_id
+    ordered_values[idx] = values
+
+    ordered_ids = ordered_ids[~np.isnan(ordered_ids)]
+    ordered_values = ordered_values[~np.isnan(ordered_values)]
+    ordered_acronyms = br.id2acronym(ordered_ids)
+
+    return ordered_acronyms, ordered_values
+
+
+
+
+def load_slice_files(slice, mapping):
+"""
+ Function to load in set of vectorised atlas slices for a given atlas axis and mapping.
+
+    If the data does not exist locally, it will automatically download the files, which are stored in an
+    AWS S3 bucket.
+
+ Parameters
+ ----------
+ slice : {'coronal', 'sagittal', 'horizontal', 'top'}
+ The axis of the atlas to load.
+ mapping : {'Allen', 'Beryl', 'Cosmos'}
+ The mapping to load.
+
+ Returns
+ -------
+ slice_data : numpy.array
+ A json containing the vertices to draw each region for each slice in the Allen annotation volume.
+
+ """
+    OLD_MD5 = {
+        'coronal': [],
+        'sagittal': [],
+        'horizontal': [],
+        'top': []
+    }
+
+    slice_file = AllenAtlas._get_cache_dir().parent.joinpath('svg', f'{slice}_{mapping}_paths.npy')
+    if not slice_file.exists() or md5(slice_file) in OLD_MD5[slice]:
+        slice_file.parent.mkdir(exist_ok=True, parents=True)
+        _logger.info(f'downloading {slice} slice files from {aws.S3_BUCKET_IBL} s3 bucket...')
+        aws.s3_download_file(f'atlas/{slice_file.name}', slice_file)
+
+    slice_data = np.load(slice_file, allow_pickle=True)
+
+    return slice_data
+
+
+
+def _plot_slice_vector(coords, slice, values, mapping, empty_color='silver', clevels=None, cmap='viridis', show_cbar=False,
+                       ba=None, ax=None, slice_json=None, **kwargs):
+    """
+    Function to plot a scalar value per Allen region on a vectorised version of a histology slice. Do not use
+    directly; use through the plot_scalar_on_slice function with vector=True.
+
+ Parameters
+ ----------
+ coords: float
+ Coordinate of slice in um (not needed when slice='top').
+ slice: {'coronal', 'sagittal', 'horizontal', 'top'}
+ The axis through the atlas volume to display.
+ values: numpy.array
+        Array of values for each of the lateralised Allen regions found using BrainRegions().acronym. If no
+        value is assigned to the acronym, the value corresponding to that index should be NaN.
+ mapping: {'Allen', 'Beryl', 'Cosmos'}
+ The mapping to use.
+ empty_color: str, tuple of int, default='silver'
+ The color used to fill the regions that do not have any values assigned (regions with NaN).
+ clevels: numpy.array, list or tuple
+ The min and max values to use for the colormap.
+ cmap: string
+ Colormap to use.
+ show_cbar: bool, default=False
+ Whether to display a colorbar.
+ ba : iblatlas.atlas.AllenAtlas
+ A brain atlas object.
+ ax : matplotlib.pyplot.Axes
+ An axis object to plot onto.
+ slice_json: numpy.array
+ The set of vectorised slices for this slice, obtained using load_slice_files(slice, mapping).
+ **kwargs
+ Set of kwargs passed into matplotlib.patches.Polygon.
+
+ Returns
+ -------
+ fig: matplotlib.figure.Figure
+ The plotted figure.
+ ax: matplotlib.pyplot.Axes
+ The plotted axes.
+ cbar: matplotlib.pyplot.colorbar, optional
+ matplotlib colorbar object, only returned if show_cbar=True
+
+ """
+    ba = ba or AllenAtlas()
+    mapping = mapping.split('-')[0].lower()
+    if clevels is None:
+        clevels = (np.nanmin(values), np.nanmax(values))
+
+    if ba.res_um == 10:
+        bc10 = ba.bc
+    else:
+        bc10 = get_bc_10()
+
+    if ax is None:
+        fig, ax = plt.subplots()
+        ax.set_axis_off()
+    else:
+        fig = ax.get_figure()
+
+    colormap = cm.get_cmap(cmap)
+    norm = colors.Normalize(vmin=clevels[0], vmax=clevels[1])
+    nan_vals = np.isnan(values)
+    rgba_color = np.full((values.size, 4), fill_value=np.nan)
+    rgba_color[~nan_vals] = colormap(norm(values[~nan_vals]), bytes=True)
+
+    if slice_json is None:
+        slice_json = load_slice_files(slice, mapping)
+
+    if slice == 'coronal':
+        idx = bc10.y2i(coords)
+        xlim = np.array([0, bc10.nx])
+        ylim = np.array([0, bc10.nz])
+    elif slice == 'sagittal':
+        idx = bc10.x2i(coords)
+        xlim = np.array([0, bc10.ny])
+        ylim = np.array([0, bc10.nz])
+    elif slice == 'horizontal':
+        idx = bc10.z2i(coords)
+        xlim = np.array([0, bc10.nx])
+        ylim = np.array([0, bc10.ny])
+    else:
+        # top case
+        xlim = np.array([0, bc10.nx])
+        ylim = np.array([0, bc10.ny])
+
+    if slice != 'top':
+        slice_json = slice_json.item().get(str(int(idx)))
+
+    for i, reg in enumerate(slice_json):
+        color = rgba_color[reg['thisID']]
+        reg_id = reg['thisID']
+        if any(np.isnan(color)):
+            color = empty_color
+        else:
+            color = color / 255
+        coords = reg['coordsReg']
+
+        if len(coords) == 0:
+            continue
+
+        if isinstance(coords, (list, tuple)):
+            vertices, codes = coords_for_poly_hole(coords)
+            plot_polygon_with_hole(ax, vertices, codes, color, reg_id, **kwargs)
+        else:
+            xy = np.c_[coords['x'], coords['y']]
+            plot_polygon(ax, xy, color, reg_id, **kwargs)
+
+    ax.set_xlim(xlim)
+    ax.set_ylim(ylim)
+    ax.invert_yaxis()
+
+    if show_cbar:
+        cbar = fig.colorbar(cm.ScalarMappable(norm=norm, cmap=cmap), ax=ax)
+        return fig, ax, cbar
+    else:
+        return fig, ax
+
+
+
+def plot_scalar_on_slice(regions, values, coord=-1000, slice='coronal', mapping=None, hemisphere='left',
+                         background='image', cmap='viridis', clevels=None, show_cbar=False, empty_color='silver',
+                         brain_atlas=None, ax=None, vector=False, slice_files=None, **kwargs):
+    """
+ Function to plot scalar value per region on histology slice.
+
+ Parameters
+ ----------
+ regions : array_like
+ An array of brain region acronyms.
+ values : numpy.array
+        An array of scalar values per acronym. If hemisphere is 'both' and different values are to be
+        shown on each hemisphere, values should contain 2 columns, the 1st column for LH values, the
+        2nd column for RH values.
+ coord : float
+ Coordinate of slice in um (not needed when slice='top').
+ slice : {'coronal', 'sagittal', 'horizontal', 'top'}, default='coronal'
+ Orientation of slice.
+ mapping : str, optional
+ Atlas mapping to use, options are depend on atlas used (see `iblatlas.regions.BrainRegions`).
+ If None, the atlas default mapping is used.
+ hemisphere : {'left', 'right', 'both'}, default='left'
+ The hemisphere to display.
+    background : {'image', 'boundary'}, default='image'
+        Background slice to overlay onto, options are 'image' or 'boundary'. If `vector` is true,
+        this argument is ignored.
+ cmap: str, default='viridis'
+ Colormap to use.
+ clevels : array_like
+ The min and max color levels to use.
+ show_cbar: bool, default=False
+ Whether to display a colorbar.
+ empty_color : str, default='silver'
+ Color to use for regions without any values (only used when `vector` is true).
+ brain_atlas : iblatlas.atlas.AllenAtlas
+ A brain atlas object.
+ ax : matplotlib.pyplot.Axes
+ An axis object to plot onto.
+ vector : bool, default=False
+ Whether to show as bitmap or vector graphic.
+ slice_files: numpy.array
+ The set of vectorised slices for this slice, obtained using `load_slice_files(slice, mapping)`.
+ **kwargs
+ Set of kwargs passed into matplotlib.patches.Polygon, e.g. linewidth=2, edgecolor='None'
+ (only used when vector = True).
+
+ Returns
+ -------
+ fig: matplotlib.figure.Figure
+ The plotted figure.
+ ax: matplotlib.pyplot.Axes
+ The plotted axes.
+ cbar: matplotlib.pyplot.colorbar, optional
+ matplotlib colorbar object, only returned if show_cbar=True.
+ """
+
+    ba = brain_atlas or AllenAtlas()
+    br = ba.regions
+    mapping = mapping or br.default_mapping
+
+    if clevels is None:
+        clevels = (np.nanmin(values), np.nanmax(values))
+
+    # Find the mapping to use
+    if '-lr' in mapping:
+        map = mapping
+    else:
+        map = mapping + '-lr'
+
+    region_values = np.zeros_like(br.id) * np.nan
+
+    if len(values.shape) == 2:
+        for r, vL, vR in zip(regions, values[:, 0], values[:, 1]):
+            idx = np.where(br.acronym[br.mappings[map]] == r)[0]
+            idx_lh = idx[idx > br.n_lr]
+            idx_rh = idx[idx <= br.n_lr]
+            region_values[idx_rh] = vR
+            region_values[idx_lh] = vL
+    else:
+        for r, v in zip(regions, values):
+            region_values[np.where(br.acronym[br.mappings[map]] == r)[0]] = v
+        if hemisphere == 'left':
+            region_values[0:(br.n_lr + 1)] = np.nan
+        elif hemisphere == 'right':
+            region_values[br.n_lr:] = np.nan
+            region_values[0] = np.nan
+
+    if show_cbar:
+        if vector:
+            fig, ax, cbar = _plot_slice_vector(coord / 1e6, slice, region_values, map, clevels=clevels, cmap=cmap, ba=ba,
+                                               ax=ax, empty_color=empty_color, show_cbar=show_cbar, slice_json=slice_files,
+                                               **kwargs)
+        else:
+            fig, ax, cbar = _plot_slice(coord / 1e6, slice, region_values, 'value', background=background, map=map,
+                                        clevels=clevels, cmap=cmap, ba=ba, ax=ax, show_cbar=show_cbar)
+        return fig, ax, cbar
+    else:
+        if vector:
+            fig, ax = _plot_slice_vector(coord / 1e6, slice, region_values, map, clevels=clevels, cmap=cmap, ba=ba,
+                                         ax=ax, empty_color=empty_color, show_cbar=show_cbar, slice_json=slice_files, **kwargs)
+        else:
+            fig, ax = _plot_slice(coord / 1e6, slice, region_values, 'value', background=background, map=map, clevels=clevels,
+                                  cmap=cmap, ba=ba, ax=ax, show_cbar=show_cbar)
+        return fig, ax
+
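+The per-region value assignment above can be sketched without the atlas itself. In this toy example, `mapped_acronyms` and `n_lr` are hypothetical stand-ins for `br.acronym[br.mappings[map]]` and `br.n_lr`; the scatter-then-mask logic mirrors the single-column branch with hemisphere='left':
+
+```python
+import numpy as np
+
+# Hypothetical lateralised acronym list: index 0 is void, indices 1..3 are the
+# right hemisphere, 4..6 the left (stand-in for br.acronym[br.mappings[map]]).
+mapped_acronyms = np.array(['void', 'MOp', 'SSp', 'VISp', 'MOp', 'SSp', 'VISp'])
+n_lr = 3  # number of regions per hemisphere in this toy tree
+
+regions = np.array(['MOp', 'VISp'])
+values = np.array([0.2, 0.8])
+
+# Scatter each region's value into every index sharing that acronym
+region_values = np.full(mapped_acronyms.size, np.nan)
+for r, v in zip(regions, values):
+    region_values[np.where(mapped_acronyms == r)[0]] = v
+
+# Mask the right hemisphere (indices 0..n_lr), as done for hemisphere='left'
+region_values[0:n_lr + 1] = np.nan
+```
+
+Regions without a supplied value (here 'SSp') stay nan and are later rendered in `empty_color`.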
+
+
+
+[docs]
+def plot_scalar_on_flatmap(regions, values, depth=0, flatmap='dorsal_cortex', mapping='Allen', hemisphere='left',
+                           background='boundary', cmap='viridis', clevels=None, show_cbar=False, flmap_atlas=None, ax=None):
+"""
+    Function to plot scalar value per Allen region on a flatmap slice.
+
+    :param regions: array of acronyms of Allen regions
+    :param values: array of scalar values per acronym. If hemisphere is 'both' and different values should be shown on
+        each hemisphere, values should contain 2 columns, 1st column for LH values, 2nd column for RH values
+    :param depth: depth in flatmap in um
+    :param flatmap: name of flatmap (currently the only option is 'dorsal_cortex')
+    :param mapping: atlas mapping to use, options are 'Allen', 'Beryl' or 'Cosmos'
+    :param hemisphere: hemisphere to display, options are 'left', 'right', 'both'
+    :param background: background slice to overlay onto, options are 'image' or 'boundary'
+    :param cmap: colormap to use
+    :param clevels: min max color levels [cmin, cmax]
+    :param show_cbar: whether to add colorbar to axis
+    :param flmap_atlas: FlatMap object
+    :param ax: optional axis object to plot on
+    :return: matplotlib figure and axes, plus the colorbar if show_cbar=True
+ """
+
+    if clevels is None:
+        clevels = (np.nanmin(values), np.nanmax(values))
+
+    ba = flmap_atlas or FlatMap(flatmap=flatmap)
+    br = ba.regions
+
+    # Find the mapping to use
+    if '-lr' in mapping:
+        map = mapping
+    else:
+        map = mapping + '-lr'
+
+    region_values = np.zeros_like(br.id) * np.nan
+
+    if len(values.shape) == 2:
+        for r, vL, vR in zip(regions, values[:, 0], values[:, 1]):
+            idx = np.where(br.acronym[br.mappings[map]] == r)[0]
+            idx_lh = idx[idx > br.n_lr]
+            idx_rh = idx[idx <= br.n_lr]
+            region_values[idx_rh] = vR
+            region_values[idx_lh] = vL
+    else:
+        for r, v in zip(regions, values):
+            region_values[np.where(br.acronym[br.mappings[map]] == r)[0]] = v
+        if hemisphere == 'left':
+            region_values[0:(br.n_lr + 1)] = np.nan
+        elif hemisphere == 'right':
+            region_values[br.n_lr:] = np.nan
+            region_values[0] = np.nan
+
+    d_idx = int(np.round(depth / ba.res_um))  # need to find nearest to 25
+
+    if background == 'boundary':
+        cmap_bound = cm.get_cmap("bone_r").copy()
+        cmap_bound.set_under([1, 1, 1], 0)
+
+    if ax:
+        fig = ax.get_figure()
+    else:
+        fig, ax = plt.subplots()
+
+    if background == 'image':
+        ba.plot_flatmap(d_idx, volume='image', mapping=map, ax=ax)
+        ba.plot_flatmap(d_idx, volume='value', region_values=region_values, mapping=map, cmap=cmap, vmin=clevels[0],
+                        vmax=clevels[1], ax=ax)
+    else:
+        ba.plot_flatmap(d_idx, volume='value', region_values=region_values, mapping=map, cmap=cmap, vmin=clevels[0],
+                        vmax=clevels[1], ax=ax)
+        ba.plot_flatmap(d_idx, volume='boundary', mapping=map, ax=ax, cmap=cmap_bound, vmin=0.01, vmax=0.8)
+
+    # For circle flatmap we don't want to cut the axis
+    if ba.name != 'circles':
+        if hemisphere == 'left':
+            ax.set_xlim(0, np.ceil(ba.flatmap.shape[1] / 2))
+        elif hemisphere == 'right':
+            ax.set_xlim(np.ceil(ba.flatmap.shape[1] / 2), ba.flatmap.shape[1])
+
+    if show_cbar:
+        norm = colors.Normalize(vmin=clevels[0], vmax=clevels[1], clip=False)
+        cbar = fig.colorbar(cm.ScalarMappable(norm=norm, cmap=cmap), ax=ax)
+        return fig, ax, cbar
+    else:
+        return fig, ax
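+The requested depth is snapped to the nearest flatmap voxel by dividing by the atlas resolution and rounding, which is what the `d_idx` line does. A minimal check, assuming the 25 um Allen resolution:
+
+```python
+import numpy as np
+
+res_um = 25   # assumed Allen atlas resolution in um
+depth = 110   # requested flatmap depth in um
+
+d_idx = int(np.round(depth / res_um))   # nearest voxel index
+snapped = d_idx * res_um                # depth actually displayed
+```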
+
+
+
+
+[docs]
+def plot_volume_on_slice(volume, coord=-1000, slice='coronal', mapping='Allen', background='boundary', cmap='Reds',
+                         clevels=None, show_cbar=False, brain_atlas=None, ax=None):
+"""
+ Plot slice through a volume
+
+ :param volume: 3D array of volume (must be same shape as brain_atlas object)
+ :param coord: coordinate of slice in um
+ :param slice: orientation of slice, options are 'coronal', 'sagittal', 'horizontal'
+ :param mapping: atlas mapping to use, options are 'Allen', 'Beryl' or 'Cosmos'
+ :param background: background slice to overlay onto, options are 'image' or 'boundary'
+ :param cmap: colormap to use
+ :param clevels: min max color levels [cmin, cmax]
+ :param show_cbar: whether to add colorbar to axis
+ :param brain_atlas: AllenAtlas object
+ :param ax: optional axis object to plot on
+    :return: matplotlib figure and axes, plus the colorbar if show_cbar=True
+ """
+
+    ba = brain_atlas or AllenAtlas()
+    assert volume.shape == ba.image.shape, 'Volume must have same shape as ba'
+
+    # Find the mapping to use
+    if '-lr' in mapping:
+        map = mapping
+    else:
+        map = mapping + '-lr'
+
+    if show_cbar:
+        fig, ax, cbar = _plot_slice(coord / 1e6, slice, volume, 'volume', background=background, map=map, clevels=clevels,
+                                    cmap=cmap, ba=ba, ax=ax, show_cbar=show_cbar)
+        return fig, ax, cbar
+    else:
+        fig, ax = _plot_slice(coord / 1e6, slice, volume, 'volume', background=background, map=map, clevels=clevels,
+                              cmap=cmap, ba=ba, ax=ax, show_cbar=show_cbar)
+        return fig, ax
+
+
+
+
+[docs]
+def plot_points_on_slice(xyz, values=None, coord=-1000, slice='coronal', mapping='Allen', background='boundary', cmap='Reds',
+                         clevels=None, show_cbar=False, aggr='mean', fwhm=100, brain_atlas=None, ax=None):
+"""
+ Plot xyz points on slice. Points that lie in the same voxel within slice are aggregated according to method specified.
+ A 3D Gaussian smoothing kernel with distance specified by fwhm is applied to images.
+
+ :param xyz: 3 column array of xyz coordinates of points in metres
+ :param values: array of values per xyz coordinates, if no values are given the sum of xyz points in each voxel is
+ returned
+ :param coord: coordinate of slice in um (not needed when slice='top')
+ :param slice: orientation of slice, options are 'coronal', 'sagittal', 'horizontal', 'top' (top view of brain)
+ :param mapping: atlas mapping to use, options are 'Allen', 'Beryl' or 'Cosmos'
+ :param background: background slice to overlay onto, options are 'image' or 'boundary'
+ :param cmap: colormap to use
+ :param clevels: min max color levels [cmin, cmax]
+ :param show_cbar: whether to add colorbar to axis
+ :param aggr: aggregation method. Options are sum, count, mean, std, median, min and max.
+ Can also give in custom function (https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binned_statistic.html)
+ :param fwhm: fwhm distance of gaussian kernel in um
+ :param brain_atlas: AllenAtlas object
+ :param ax: optional axis object to plot on
+
+    :return: matplotlib figure and axes, plus the colorbar if show_cbar=True
+ """
+
+    ba = brain_atlas or AllenAtlas()
+
+    # Find the mapping to use
+    if '-lr' in mapping:
+        map = mapping
+    else:
+        map = mapping + '-lr'
+
+    region_values = compute_volume_from_points(xyz, values, aggr=aggr, fwhm=fwhm, ba=ba)
+
+    if show_cbar:
+        fig, ax, cbar = _plot_slice(coord / 1e6, slice, region_values, 'volume', background=background, map=map, clevels=clevels,
+                                    cmap=cmap, ba=ba, ax=ax, show_cbar=show_cbar)
+        return fig, ax, cbar
+    else:
+        fig, ax = _plot_slice(coord / 1e6, slice, region_values, 'volume', background=background, map=map, clevels=clevels,
+                              cmap=cmap, ba=ba, ax=ax, show_cbar=show_cbar)
+        return fig, ax
+
+
+
+
+[docs]
+def compute_volume_from_points(xyz, values=None, aggr='sum', fwhm=100, ba=None):
+"""
+ Creates a 3D volume with xyz points placed in corresponding voxel in volume. Points that fall into the same voxel within the
+ volume are aggregated according to the method specified in aggr. Gaussian smoothing with a 3D kernel with distance specified
+ by fwhm (full width half max) argument is applied. If fwhm = 0, no gaussian smoothing is applied.
+
+ :param xyz: 3 column array of xyz coordinates of points in metres
+ :param values: 1 column array of values per xyz coordinates, if no values are given the sum of xyz points in each voxel is
+ returned
+ :param aggr: aggregation method. Options are sum, count, mean, std, median, min and max. Can also give in custom function
+ (https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binned_statistic.html)
+ :param fwhm: full width at half maximum of gaussian kernel in um
+ :param ba: AllenAtlas object
+    :return: 3D numpy array of the aggregated (and optionally smoothed) volume, same shape as ba.image
+ """
+
+    ba = ba or AllenAtlas()
+
+    idx = ba._lookup(xyz)
+    ba_shape = ba.image.shape[0] * ba.image.shape[1] * ba.image.shape[2]
+
+    if values is not None:
+        volume = binned_statistic(idx, values, range=[0, ba_shape], statistic=aggr, bins=ba_shape).statistic
+        volume[np.isnan(volume)] = 0
+    else:
+        volume = np.bincount(idx, minlength=ba_shape, weights=values)
+
+    volume = volume.reshape(ba.image.shape[0], ba.image.shape[1], ba.image.shape[2]).astype(np.float32)
+
+    if fwhm > 0:
+        # Compute sigma used for gaussian kernel
+        fwhm_over_sigma_ratio = np.sqrt(8 * np.log(2))
+        sigma = fwhm / (fwhm_over_sigma_ratio * ba.res_um)
+        # TODO to speed up only apply gaussian filter on slices within distance of chosen coordinate
+        volume = gaussian_filter(volume, sigma=sigma)
+
+    # Mask so that outside of the brain is set to nan
+    volume[ba.label == 0] = np.nan
+
+    return volume
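+Two pieces of this function can be checked in isolation. The kernel width comes from the standard Gaussian relation FWHM = sigma * sqrt(8 ln 2), converted to voxel units by dividing by the atlas resolution (25 um assumed below); and the values=None branch is a plain `np.bincount` over flattened voxel indices, since `weights=None` makes bincount count occurrences:
+
+```python
+import numpy as np
+
+res_um = 25   # assumed atlas resolution in um
+fwhm = 100    # kernel full width at half maximum in um
+
+fwhm_over_sigma_ratio = np.sqrt(8 * np.log(2))   # ~2.3548
+sigma = fwhm / (fwhm_over_sigma_ratio * res_um)  # sigma in voxel units
+
+# Counting points per voxel: two points fall in voxel 3, one each in 0 and 5
+idx = np.array([0, 3, 3, 5])        # hypothetical flat voxel indices
+counts = np.bincount(idx, minlength=8)
+```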
+
+
+
+def _plot_slice(coord, slice, region_values, vol_type, background='boundary', map='Allen', clevels=None, cmap='viridis',
+                show_cbar=False, ba=None, ax=None):
+"""
+ Function to plot scalar value per allen region on histology slice.
+
+ Do not use directly but use through plot_scalar_on_slice function.
+
+ Parameters
+ ----------
+ coord: float
+ coordinate of slice in um (not needed when slice='top').
+ slice: {'coronal', 'sagittal', 'horizontal', 'top'}
+ the axis through the atlas volume to display.
+    region_values: numpy.array
+        Array of values for each of the lateralised Allen regions found using BrainRegions().acronym. If no
+        value is assigned to an acronym, the value corresponding to that index should be nan.
+    vol_type: 'value'
+        The type of volume to be displayed; should always be 'value' when region values are displayed.
+ background: {'image', 'boundary'}
+ The background slice to overlay the values onto. When 'image' it uses the Allen dwi image, when
+ 'boundary' it displays the boundaries between regions.
+ map: {'Allen', 'Beryl', 'Cosmos'}
+ the mapping to use.
+ clevels: numpy.array, list or tuple
+ The min and max values to use for the colormap.
+ cmap: str, default='viridis'
+ Colormap to use.
+ show_cbar: bool, default=False
+ Whether to display a colorbar.
+ ba : iblatlas.atlas.AllenAtlas
+ A brain atlas object.
+ ax : matplotlib.pyplot.Axes
+ An axis object to plot onto.
+
+ Returns
+ -------
+ fig: matplotlib.figure.Figure
+ The plotted figure
+ ax: matplotlib.pyplot.Axes
+ The plotted axes.
+ cbar: matplotlib.pyplot.colorbar
+ matplotlib colorbar object, only returned if show_cbar=True.
+
+ """
+    ba = ba or AllenAtlas()
+
+    if clevels is None:
+        clevels = (np.nanmin(region_values), np.nanmax(region_values))
+
+    if ax:
+        fig = ax.get_figure()
+    else:
+        fig, ax = plt.subplots()
+
+    if slice == 'coronal':
+        if background == 'image':
+            ba.plot_cslice(coord, volume='image', mapping=map, ax=ax)
+            ba.plot_cslice(coord, volume=vol_type, region_values=region_values, mapping=map, cmap=cmap, vmin=clevels[0],
+                           vmax=clevels[1], ax=ax)
+        else:
+            ba.plot_cslice(coord, volume=vol_type, region_values=region_values, mapping=map, cmap=cmap, vmin=clevels[0],
+                           vmax=clevels[1], ax=ax)
+            ba.plot_cslice(coord, volume='boundary', mapping=map, ax=ax)
+
+    elif slice == 'sagittal':
+        if background == 'image':
+            ba.plot_sslice(coord, volume='image', mapping=map, ax=ax)
+            ba.plot_sslice(coord, volume=vol_type, region_values=region_values, mapping=map, cmap=cmap, vmin=clevels[0],
+                           vmax=clevels[1], ax=ax)
+        else:
+            ba.plot_sslice(coord, volume=vol_type, region_values=region_values, mapping=map, cmap=cmap, vmin=clevels[0],
+                           vmax=clevels[1], ax=ax)
+            ba.plot_sslice(coord, volume='boundary', mapping=map, ax=ax)
+
+    elif slice == 'horizontal':
+        if background == 'image':
+            ba.plot_hslice(coord, volume='image', mapping=map, ax=ax)
+            ba.plot_hslice(coord, volume=vol_type, region_values=region_values, mapping=map, cmap=cmap, vmin=clevels[0],
+                           vmax=clevels[1], ax=ax)
+        else:
+            ba.plot_hslice(coord, volume=vol_type, region_values=region_values, mapping=map, cmap=cmap, vmin=clevels[0],
+                           vmax=clevels[1], ax=ax)
+            ba.plot_hslice(coord, volume='boundary', mapping=map, ax=ax)
+
+    elif slice == 'top':
+        if background == 'image':
+            ba.plot_top(volume='image', mapping=map, ax=ax)
+            ba.plot_top(volume=vol_type, region_values=region_values, mapping=map, cmap=cmap, vmin=clevels[0],
+                        vmax=clevels[1], ax=ax)
+        else:
+            ba.plot_top(volume=vol_type, region_values=region_values, mapping=map, cmap=cmap, vmin=clevels[0],
+                        vmax=clevels[1], ax=ax)
+            ba.plot_top(volume='boundary', mapping=map, ax=ax)
+
+    if show_cbar:
+        norm = colors.Normalize(vmin=clevels[0], vmax=clevels[1], clip=False)
+        cbar = fig.colorbar(cm.ScalarMappable(norm=norm, cmap=cmap), ax=ax)
+        return fig, ax, cbar
+    else:
+        return fig, ax
+
+
+
+[docs]
+def plot_scalar_on_barplot(acronyms, values, errors=None, order=True, ax=None, brain_regions=None):
+"""
+    Function to plot scalar value per Allen region on a bar plot. If order=True, the acronyms and values are reordered
+    according to the order defined in the Allen structure tree.
+
+ Parameters
+ ----------
+ acronyms: numpy.array
+ A 1D array of acronyms
+ values: numpy.array
+ A 1D array of values corresponding to each acronym in the acronyms array
+ errors: numpy.array
+ A 1D array of error values corresponding to each acronym in the acronyms array
+ order: bool, default=True
+ Whether to order the acronyms according to the order defined by the Allen structure tree
+ ax : matplotlib.pyplot.Axes
+ An axis object to plot onto.
+ brain_regions : iblatlas.regions.BrainRegions
+ A brain regions object
+
+ Returns
+ -------
+ fig: matplotlib.figure.Figure
+ The plotted figure
+ ax: matplotlib.pyplot.Axes
+ The plotted axes.
+
+ """
+    br = brain_regions or BrainRegions()
+
+    if order:
+        acronyms, values = reorder_data(acronyms, values, brain_regions)
+
+    _, idx = ismember(acronyms, br.acronym)
+    colours = br.rgb[idx]
+
+    if ax:
+        fig = ax.get_figure()
+    else:
+        fig, ax = plt.subplots()
+
+    ax.bar(np.arange(acronyms.size), values, color=colours)
+
+    return fig, ax
+
+
+
+
+[docs]
+def plot_swanson_vector(acronyms=None, values=None, ax=None, hemisphere=None, br=None, orientation='landscape',
+                        empty_color='silver', vmin=None, vmax=None, cmap='viridis', annotate=False, annotate_n=10,
+                        annotate_order='top', annotate_list=None, mask=None, mask_color='w', fontsize=10, **kwargs):
+"""
+    Function to plot scalar value per Allen region on the Swanson projection. Plots on a vectorised version of the
+    Swanson projection.
+
+ Parameters
+ ----------
+ acronyms: numpy.array
+ A 1D array of acronyms or atlas ids
+ values: numpy.array
+ A 1D array of values corresponding to each acronym in the acronyms array
+ ax : matplotlib.pyplot.Axes
+ An axis object to plot onto.
+ hemisphere : {'left', 'right', 'both', 'mirror'}
+ The hemisphere to display.
+ br : iblatlas.regions.BrainRegions
+ A brain regions object.
+    orientation : {'landscape', 'portrait'}, default='landscape'
+ The plot orientation.
+ empty_color : str, tuple of int, default='silver'
+ The greyscale matplotlib color code or an RGBA int8 tuple defining the filling of brain
+ regions not provided.
+ vmin: float
+ Minimum value to restrict the colormap
+ vmax: float
+ Maximum value to restrict the colormap
+ cmap: string
+ matplotlib named colormap to use
+ annotate : bool, default=False
+ If true, labels the regions with acronyms.
+ annotate_n: int
+ The number of regions to annotate
+ annotate_order: {'top', 'bottom'}
+ If annotate_n is specified, whether to annotate the n regions with the highest (top) or lowest (bottom) values
+    annotate_list: numpy.array or list
+        List of regions to annotate; if this is provided, it overwrites annotate_n and annotate_order
+ mask: numpy.array or list
+ List of regions to apply a mask to (fill them with a specific color)
+ mask_color: string, tuple or list
+ Color for the mask
+ fontsize : int
+ The annotation font size in points.
+ **kwargs
+ See plot_polygon and plot_polygon_with_hole.
+
+ Returns
+ -------
+ matplotlib.pyplot.Axes
+ The plotted axes.
+
+ """
+    br = BrainRegions() if br is None else br
+    br.compute_hierarchy()
+    sw_shape = (2968, 6820)
+
+    if ax is None:
+        fig, ax = plt.subplots()
+        ax.set_axis_off()
+
+    if hemisphere != 'both' and acronyms is not None and not isinstance(acronyms[0], str):
+        # If negative atlas ids are passed in and we are not going to lateralise (e.g hemisphere='both')
+        # transfer them over to one hemisphere
+        acronyms = np.abs(acronyms)
+
+    if acronyms is not None:
+        ibr, vals = br.propagate_down(acronyms, values)
+        colormap = cm.get_cmap(cmap)
+        vmin = vmin or np.nanmin(vals)
+        vmax = vmax or np.nanmax(vals)
+        norm = colors.Normalize(vmin=vmin, vmax=vmax)
+        rgba_color = colormap(norm(vals), bytes=True)
+
+    if mask is not None:
+        imr, _ = br.propagate_down(mask, np.ones_like(mask))
+    else:
+        imr = []
+
+    sw_json = swanson_json()
+    if hemisphere == 'both':
+        sw_rev = copy.deepcopy(sw_json)
+        for sw in sw_rev:
+            sw['thisID'] = sw['thisID'] + br.n_lr
+        sw_json = sw_json + sw_rev
+
+    plot_idx = []
+    plot_val = []
+    for i, reg in enumerate(sw_json):
+
+        coords = reg['coordsReg']
+        reg_id = reg['thisID']
+
+        if acronyms is None:
+            color = br.rgba[br.mappings['Swanson'][reg['thisID']]] / 255
+            if hemisphere is None:
+                col_l = None
+                col_r = color
+            elif hemisphere == 'left':
+                col_l = empty_color if orientation == 'portrait' else color
+                col_r = color if orientation == 'portrait' else empty_color
+            elif hemisphere == 'right':
+                col_l = color if orientation == 'portrait' else empty_color
+                col_r = empty_color if orientation == 'portrait' else color
+            elif hemisphere in ['both', 'mirror']:
+                col_l = color
+                col_r = color
+        else:
+            idx = np.where(ibr == reg['thisID'])[0]
+            idxm = np.where(imr == reg['thisID'])[0]
+            if len(idx) > 0:
+                plot_idx.append(ibr[idx[0]])
+                plot_val.append(vals[idx[0]])
+                color = rgba_color[idx[0]] / 255
+            elif len(idxm) > 0:
+                color = mask_color
+            else:
+                color = empty_color
+
+            if hemisphere is None:
+                col_l = None
+                col_r = color
+            elif hemisphere == 'left':
+                col_l = empty_color if orientation == 'portrait' else color
+                col_r = color if orientation == 'portrait' else empty_color
+            elif hemisphere == 'right':
+                col_l = color if orientation == 'portrait' else empty_color
+                col_r = empty_color if orientation == 'portrait' else color
+            elif hemisphere == 'mirror':
+                col_l = color
+                col_r = color
+            elif hemisphere == 'both':
+                if reg_id <= br.n_lr:
+                    col_l = color if orientation == 'portrait' else None
+                    col_r = None if orientation == 'portrait' else color
+                else:
+                    col_l = None if orientation == 'portrait' else color
+                    col_r = color if orientation == 'portrait' else None
+
+        if reg['hole']:
+            vertices, codes = coords_for_poly_hole(coords)
+            if orientation == 'portrait':
+                vertices[:, [0, 1]] = vertices[:, [1, 0]]
+                if col_r is not None:
+                    plot_polygon_with_hole(ax, vertices, codes, col_r, reg_id, **kwargs)
+                if col_l is not None:
+                    vertices_inv = np.copy(vertices)
+                    vertices_inv[:, 0] = -1 * vertices_inv[:, 0] + (sw_shape[0] * 2)
+                    plot_polygon_with_hole(ax, vertices_inv, codes, col_l, reg_id, **kwargs)
+            else:
+                if col_r is not None:
+                    plot_polygon_with_hole(ax, vertices, codes, col_r, reg_id, **kwargs)
+                if col_l is not None:
+                    vertices_inv = np.copy(vertices)
+                    vertices_inv[:, 1] = -1 * vertices_inv[:, 1] + (sw_shape[0] * 2)
+                    plot_polygon_with_hole(ax, vertices_inv, codes, col_l, reg_id, **kwargs)
+        else:
+            coords = [coords] if isinstance(coords, dict) else coords
+            for c in coords:
+                if orientation == 'portrait':
+                    xy = np.c_[c['y'], c['x']]
+                    if col_r is not None:
+                        plot_polygon(ax, xy, col_r, reg_id, **kwargs)
+                    if col_l is not None:
+                        xy_inv = np.copy(xy)
+                        xy_inv[:, 0] = -1 * xy_inv[:, 0] + (sw_shape[0] * 2)
+                        plot_polygon(ax, xy_inv, col_l, reg_id, **kwargs)
+                else:
+                    xy = np.c_[c['x'], c['y']]
+                    if col_r is not None:
+                        plot_polygon(ax, xy, col_r, reg_id, **kwargs)
+                    if col_l is not None:
+                        xy_inv = np.copy(xy)
+                        xy_inv[:, 1] = -1 * xy_inv[:, 1] + (sw_shape[0] * 2)
+                        plot_polygon(ax, xy_inv, col_l, reg_id, **kwargs)
+
+    if orientation == 'portrait':
+        ax.set_ylim(0, sw_shape[1])
+        if hemisphere is None:
+            ax.set_xlim(0, sw_shape[0])
+        else:
+            ax.set_xlim(0, 2 * sw_shape[0])
+    else:
+        ax.set_xlim(0, sw_shape[1])
+        if hemisphere is None:
+            ax.set_ylim(0, sw_shape[0])
+        else:
+            ax.set_ylim(0, 2 * sw_shape[0])
+
+    if annotate:
+        if annotate_list is not None:
+            annotate_swanson(ax=ax, acronyms=annotate_list, orientation=orientation, br=br, thres=10, fontsize=fontsize)
+        elif acronyms is not None:
+            ids = br.index2id(np.array(plot_idx))
+            _, indices, _ = np.intersect1d(br.id, br.remap(ids, 'Swanson-lr'), return_indices=True)
+            a, b = ismember(ids, br.id[indices])
+            sorted_id = ids[a]
+            vals = np.array(plot_val)[a]
+            sort_vals = np.argsort(vals) if annotate_order == 'bottom' else np.argsort(vals)[::-1]
+            annotate_swanson(ax=ax, acronyms=sorted_id[sort_vals[:annotate_n]], orientation=orientation, br=br,
+                             thres=10, fontsize=fontsize)
+        else:
+            annotate_swanson(ax=ax, orientation=orientation, br=br, fontsize=fontsize)
+
+    def format_coord(x, y):
+        patch = next((p for p in ax.patches if p.contains_point(p.get_transform().transform(np.r_[x, y]))), None)
+        if patch is not None:
+            ind = int(patch.get_gid().split('_')[1])
+            ancestors = br.ancestors(br.id[ind])['acronym']
+            return f'sw-{ind}, {ancestors}, aid={br.id[ind]}-{br.acronym[ind]}\n{br.name[ind]}'
+        else:
+            return ''
+
+    ax.format_coord = format_coord
+
+    ax.invert_yaxis()
+    ax.set_aspect('equal')
+    return ax
+
+
+
+
+[docs]
+def plot_swanson(acronyms=None, values=None, ax=None, hemisphere=None, br=None,
+                 orientation='landscape', annotate=False, empty_color='silver', **kwargs):
+"""
+ Displays the 2D image corresponding to the swanson flatmap.
+
+    This case is different from the others in the sense that it is a pure region-to-region mapping:
+    there is no correspondence to the spatial 3D coordinates.
+
+ Parameters
+ ----------
+ acronyms: numpy.array
+ A 1D array of acronyms or atlas ids
+ values: numpy.array
+ A 1D array of values corresponding to each acronym in the acronyms array
+ ax : matplotlib.pyplot.Axes
+ An axis object to plot onto.
+ hemisphere : {'left', 'right', 'both', 'mirror'}
+ The hemisphere to display.
+ br : iblatlas.regions.BrainRegions
+ A brain regions object.
+    orientation : {'landscape', 'portrait'}, default='landscape'
+ The plot orientation.
+ empty_color : str, tuple of int, default='silver'
+ The greyscale matplotlib color code or an RGBA int8 tuple defining the filling of brain
+ regions not provided.
+ vmin: float
+ Minimum value to restrict the colormap
+ vmax: float
+ Maximum value to restrict the colormap
+ cmap: string
+ matplotlib named colormap to use
+ annotate : bool, default=False
+ If true, labels the regions with acronyms.
+ **kwargs
+ See matplotlib.pyplot.imshow.
+
+ Returns
+ -------
+ matplotlib.pyplot.Axes
+ The plotted axes.
+ """
+    mapping = 'Swanson'
+    br = BrainRegions() if br is None else br
+    br.compute_hierarchy()
+    s2a = swanson()
+    # both hemispheres
+    if hemisphere == 'both':
+        _s2a = s2a + np.sum(br.id > 0)
+        _s2a[s2a == 0] = 0
+        _s2a[s2a == 1] = 1
+        s2a = np.r_[s2a, np.flipud(_s2a)]
+        mapping = 'Swanson-lr'
+    elif hemisphere == 'mirror':
+        s2a = np.r_[s2a, np.flipud(s2a)]
+    if orientation == 'portrait':
+        s2a = np.transpose(s2a)
+    if acronyms is None:
+        regions = br.mappings[mapping][s2a]
+        im = br.rgba[regions]
+        iswan = None
+    else:
+        ibr, vals = br.propagate_down(acronyms, values)
+        # we now have the mapped regions and aggregated values, map values onto swanson map
+        iswan, iv = ismember(s2a, ibr)
+        im = np.zeros_like(s2a, dtype=np.float32)
+        im[iswan] = vals[iv]
+        im[~iswan] = np.nan
+    if not ax:
+        ax = plt.gca()
+        ax.set_axis_off()  # unless provided we don't need scales here
+    ax.imshow(im, **kwargs)
+    # overlay the boundaries if value plot
+    imb = np.zeros((*s2a.shape[:2], 4), dtype=np.uint8)
+    # fill in the empty regions with the blank regions colours if necessary
+    if iswan is not None:
+        imb[~iswan] = (np.array(colors.to_rgba(empty_color)) * 255).astype('uint8')
+    imb[s2a == 0] = 255
+    # imb[s2a == 1] = np.array([167, 169, 172, 255])
+    imb[s2a == 1] = np.array([0, 0, 0, 255])
+    ax.imshow(imb)
+    if annotate:
+        annotate_swanson(ax=ax, orientation=orientation, br=br)
+
+    # provides the means to see the region on the axis
+    def format_coord(x, y):
+        ind = s2a[int(y), int(x)]
+        ancestors = br.ancestors(br.id[ind])['acronym']
+        return f'sw-{ind}, {ancestors}, aid={br.id[ind]}-{br.acronym[ind]}\n{br.name[ind]}'
+
+    ax.format_coord = format_coord
+    return ax
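+The 'both' branch builds a double-height annotation image: region indices on the copy are offset into the other hemisphere's index range, the background (0) and void (1) labels are kept unshifted, and a vertically flipped copy is stacked underneath. A toy sketch of the index arithmetic, where `s2a` and `n_regions` are hypothetical stand-ins for the Swanson index image and `np.sum(br.id > 0)`:
+
+```python
+import numpy as np
+
+s2a = np.array([[0, 2, 3],
+                [1, 4, 0]])   # hypothetical Swanson index image
+n_regions = 5                 # stand-in for np.sum(br.id > 0)
+
+_s2a = s2a + n_regions        # shift the copy into the other hemisphere's index range
+_s2a[s2a == 0] = 0            # keep background...
+_s2a[s2a == 1] = 1            # ...and void indices unshifted
+both = np.r_[s2a, np.flipud(_s2a)]
+```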
+
+
+
+
+[docs]
+def annotate_swanson(ax, acronyms=None, orientation='landscape', br=None, thres=20000, **kwargs):
+"""
+ Display annotations on a Swanson flatmap.
+
+ Parameters
+ ----------
+ ax : matplotlib.pyplot.Axes
+ An axis object to plot onto.
+ acronyms : array_like
+ A list or numpy array of acronyms or Allen region IDs. If None plot all acronyms.
+    orientation : {'landscape', 'portrait'}, default='landscape'
+ The plot orientation.
+ br : iblatlas.regions.BrainRegions
+ A brain regions object.
+ thres : int, default=20000
+ The number of pixels above which a region is labelled.
+ **kwargs
+ See matplotlib.pyplot.Axes.annotate.
+
+ """
+    br = br or BrainRegions()
+    if acronyms is None:
+        indices = np.arange(br.id.size)
+    else:  # TODO we should in fact remap and compute labels for hierarchical regions
+        aids = br.parse_acronyms_argument(acronyms)
+        _, indices, _ = np.intersect1d(br.id, br.remap(aids, 'Swanson-lr'), return_indices=True)
+    labels = _swanson_labels_positions(thres=thres)
+    for ilabel in labels:
+        # do not display unwanted labels
+        if ilabel not in indices:
+            continue
+        # rotate the labels if the display is in portrait mode
+        xy = np.flip(labels[ilabel]) if orientation == 'portrait' else labels[ilabel]
+        ax.annotate(br.acronym[ilabel], xy=xy, ha='center', va='center', **kwargs)
+"""Brain region mappings.
+
+Four mappings are currently available within the IBL, these are:
+
+* Allen Atlas - total of 1328 annotation regions provided by Allen Atlas.
+* Beryl Atlas - total of 308 annotation regions determined by Nick Steinmetz for the brain wide map, mainly at the level of
+ major cortical areas, nuclei/ganglia. Thus annotations relating to layers and nuclear subregions are absent.
+* Cosmos Atlas - total of 10 annotation regions determined by Nick Steinmetz for coarse analysis. Annotations include the major
+ divisions of the brain only.
+* Swanson Atlas - total of 319 annotation regions provided by the Swanson atlas (FIXME which one?).
+
+Terminology
+-----------
+* **Name** - The full anatomical name of a brain region.
+* **Acronym** - A shortened version of a brain region name.
+* **Index** - The index of the brain region within the ordered list of brain regions.
+* **ID** - A unique numerical identifier of a brain region. These are typically integers that
+ therefore take up less space than storing the region names or acronyms.
+* **Mapping** - A function that maps one ordered list of brain region IDs to another, allowing one
+ to control annotation granularity and brain region hierarchy, or to translate brain region names
+ from one atlas to another. The default mapping is identity. See
+ [atlas package documentation](./ibllib.atlas.html#mappings) for other mappings.
+* **Order** - Each structure is assigned a consistent position within the flattened graph. This
+ value is known as the annotation index, i.e. the annotation volume contains the brain region
+ order at each point in the image.
+
+FIXME Document the two structure trees. Which Website did they come from, and which publication/edition?
+"""
+from dataclasses import dataclass
+import logging
+from pathlib import Path
+
+import numpy as np
+import pandas as pd
+from iblutil.util import Bunch
+from iblutil.numerical import ismember
+
+_logger = logging.getLogger(__name__)
+FILE_MAPPINGS = str(Path(__file__).parent.joinpath('mappings.pqt'))
+ALLEN_FILE_REGIONS = str(Path(__file__).parent.joinpath('allen_structure_tree.csv'))
+FRANKLIN_FILE_REGIONS = str(Path(__file__).parent.joinpath('franklin_paxinos_structure_tree.csv'))
+
+
+@dataclass
+class _BrainRegions:
+    """A struct of brain regions, their names, IDs, relationships and associated plot colours."""
+
+    """numpy.array: An integer array of unique brain region IDs."""
+    id: np.ndarray
+    """numpy.array: A str array of verbose brain region names."""
+    name: object
+    """numpy.array: A str array of brain region acronyms."""
+    acronym: object
+    """numpy.array: An (n, 3) uint8 array of brain region RGB colour values."""
+    rgb: np.uint8
+    """numpy.array: An unsigned integer array indicating the number of degrees removed from root."""
+    level: np.ndarray
+    """numpy.array: An integer array of parent brain region IDs."""
+    parent: np.ndarray
+    """numpy.array: The position within the flattened graph."""
+    order: np.uint16
+
+    def __post_init__(self):
+        self._compute_mappings()
+
+    def _compute_mappings(self):
+        """Compute the default mapping for the structure tree.
+
+        The default mapping is identity. This method is intended to be overloaded by subclasses.
+        """
+        self.default_mapping = None
+        self.mappings = dict(default_mapping=self.order)
+        # the number of lateralized regions (typically half the number of regions in a lateralized structure tree)
+        self.n_lr = 0
+
+    def to_df(self):
+        """
+        Return dataclass as a pandas DataFrame.
+
+        Returns
+        -------
+        pandas.DataFrame
+            The object as a pandas DataFrame with attributes as columns.
+        """
+        attrs = ['id', 'name', 'acronym', 'hexcolor', 'level', 'parent', 'order']
+        d = dict(zip(attrs, list(map(self.__getattribute__, attrs))))
+        return pd.DataFrame(d)
+
+    @property
+    def rgba(self):
+        """numpy.array: An (n, 4) uint8 array of RGBA values for all n brain regions."""
+        rgba = np.c_[self.rgb, self.rgb[:, 0] * 0 + 255]
+        rgba[0, :] = 0  # set the void to transparent
+        return rgba
+
+    @property
+    def hexcolor(self):
+        """numpy.array of str: The RGB colour values as hexadecimal triplet strings."""
+        return np.apply_along_axis(lambda x: "#{0:02x}{1:02x}{2:02x}".format(*x.astype(int)), 1, self.rgb)
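+The `hexcolor` formatting can be exercised in isolation: each RGB row is rendered as a lowercase two-digit hex triplet, applied row-wise over the colour table with `np.apply_along_axis`:
+
+```python
+import numpy as np
+
+rgb = np.array([[255, 0, 0],
+                [7, 128, 255]], dtype=np.uint8)
+
+# Row-wise "#rrggbb" formatting, as in the hexcolor property
+hexcolor = np.apply_along_axis(
+    lambda x: "#{0:02x}{1:02x}{2:02x}".format(*x.astype(int)), 1, rgb)
+# → array(['#ff0000', '#0780ff'], ...)
+```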
+
+    def get(self, ids) -> Bunch:
+        """
+        Return a map of id, name, acronym, etc. for the provided IDs.
+
+        Parameters
+        ----------
+        ids : int, tuple of ints, numpy.array
+            One or more brain region IDs to get information for.
+
+        Returns
+        -------
+        iblutil.util.Bunch[str, numpy.array]
+            A dict-like object containing the keys {'id', 'name', 'acronym', 'rgb', 'level',
+            'parent', 'order'} with arrays the length of `ids`.
+        """
+        uid, uind = np.unique(ids, return_inverse=True)
+        a, iself, _ = np.intersect1d(self.id, uid, assume_unique=False, return_indices=True)
+        b = Bunch()
+        for k in self.__dataclass_fields__.keys():
+            b[k] = self.__getattribute__(k)[iself[uind]]
+        return b
+
+    def _navigate_tree(self, ids, direction='down', return_indices=False):
+        """
+ Navigate the tree and get all related objects either up, down or along the branch.
+
+ By convention the provided id is returned in the list of regions.
+
+ Parameters
+ ----------
+ ids : int, array_like
+ One or more brain region IDs (int32).
+ direction : {'up', 'down'}
+ Whether to return ancestors ('up') or descendants ('down').
+ return_indices : bool, default=False
+ If true returns a second argument with indices mapping to the current brain region
+ object.
+
+ Returns
+ -------
+ iblutil.util.Bunch[str, numpy.array]
+ A dict-like object containing the keys {'id', 'name', 'acronym', 'rgb', 'level',
+ 'parent', 'order'} with arrays the length of `ids`.
+ """
+        indices = ismember(self.id, ids)[0]
+        count = np.sum(indices)
+        while True:
+            if direction == 'down':
+                indices |= ismember(self.parent, self.id[indices])[0]
+            elif direction == 'up':
+                indices |= ismember(self.id, self.parent[indices])[0]
+            else:
+                raise ValueError("direction should be either 'up' or 'down'")
+            if count == np.sum(indices):  # last iteration didn't find any match
+                break
+            else:
+                count = np.sum(indices)
+        if return_indices:
+            return self.get(self.id[indices]), np.where(indices)[0]
+        else:
+            return self.get(self.id[indices])
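The fixed-point loop in `_navigate_tree` can be sketched on a toy hierarchy. The arrays and the `descendants_mask` helper below are hypothetical (not the Allen ontology), and `np.isin` stands in for `ismember`:

```python
# A minimal sketch of the fixed-point tree walk, assuming a toy parent table.
import numpy as np

ids = np.array([1, 2, 3, 4, 5])        # region IDs (made up)
parent = np.array([0, 1, 1, 2, 2])     # parent[i] is the parent ID of ids[i]

def descendants_mask(seed_ids):
    """Boolean mask over `ids` of the seeds and all their descendants."""
    mask = np.isin(ids, seed_ids)
    count = mask.sum()
    while True:
        # a region joins the set once its parent is already in the set
        mask |= np.isin(parent, ids[mask])
        if mask.sum() == count:  # fixed point: last pass found nothing new
            break
        count = mask.sum()
    return mask

print(ids[descendants_mask([2])].tolist())  # [2, 4, 5]: region 2 plus its children
```

The loop terminates because the mask only ever grows and the region set is finite.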
+
+    def subtree(self, scalar_id, return_indices=False):
+"""
+ Given a node, returns the subtree containing the node along with ancestors.
+
+ Parameters
+ ----------
+ scalar_id : int
+ A brain region ID.
+ return_indices : bool, default=False
+ If true returns a second argument with indices mapping to the current brain region
+ object.
+
+ Returns
+ -------
+ iblutil.util.Bunch[str, numpy.array]
+ A dict-like object containing the keys {'id', 'name', 'acronym', 'rgb', 'level',
+ 'parent', 'order'} with arrays the length of one.
+ """
+        if not np.isscalar(scalar_id):
+            assert scalar_id.size == 1
+        _, idown = self._navigate_tree(scalar_id, direction='down', return_indices=True)
+        _, iup = self._navigate_tree(scalar_id, direction='up', return_indices=True)
+        indices = np.unique(np.r_[idown, iup])
+        if return_indices:
+            return self.get(self.id[indices]), np.where(indices)[0]
+        else:
+            return self.get(self.id[indices])
+
+    def descendants(self, ids, **kwargs):
+"""
+ Get descendants from one or more IDs.
+
+ Parameters
+ ----------
+ ids : int, array_like
+ One or more brain region IDs.
+ return_indices : bool, default=False
+ If true returns a second argument with indices mapping to the current brain region
+ object.
+
+ Returns
+ -------
+ iblutil.util.Bunch[str, numpy.array]
+ A dict-like object containing the keys {'id', 'name', 'acronym', 'rgb', 'level',
+ 'parent', 'order'} with arrays the length of `ids`.
+ """
+        return self._navigate_tree(ids, direction='down', **kwargs)
+
+    def ancestors(self, ids, **kwargs):
+"""
+ Get ancestors from one or more IDs.
+
+ Parameters
+ ----------
+ ids : int, array_like
+ One or more brain region IDs.
+ return_indices : bool, default=False
+ If true returns a second argument with indices mapping to the current brain region
+ object.
+
+ Returns
+ -------
+ iblutil.util.Bunch[str, numpy.array]
+ A dict-like object containing the keys {'id', 'name', 'acronym', 'rgb', 'level',
+ 'parent', 'order'} with arrays the length of `ids`.
+ """
+        return self._navigate_tree(ids, direction='up', **kwargs)
+
+    def leaves(self):
+"""
+ Get all regions that do not have children.
+
+ Returns
+ -------
+ iblutil.util.Bunch[str, numpy.array]
+ A dict-like object containing the keys {'id', 'name', 'acronym', 'rgb', 'level',
+ 'parent', 'order'} with arrays of matching length.
+ """
+        leaves = np.setxor1d(self.id, self.parent)
+        return self.get(np.int64(leaves[~np.isnan(leaves)]))
+
+    def _mapping_from_regions_list(self, new_map, lateralize=False):
+"""
+ From a vector of region IDs, creates a structure tree index mapping.
+
+ For example, given a subset of brain region IDs, this returns an array the length of the
+ total number of brain regions, where each element is the structure tree index for that
+ region. The IDs in `new_map` and their descendants are given that ID's index and any
+ missing IDs are given the root index.
+
+
+ Parameters
+ ----------
+ new_map : array_like of int
+ An array of atlas brain region IDs.
+ lateralize : bool
+ If true, lateralized indices are assigned to all IDs. If false, IDs are assigned to a
+ non-lateralized index regardless of their sign.
+
+ Returns
+ -------
+ numpy.array
+ A vector of brain region indices representing the structure tree order corresponding to
+ each input ID and its descendants.
+ """
+        I_ROOT = 1
+        I_VOID = 0
+        # to lateralize we make sure all regions are represented in + and -
+        new_map = np.unique(np.r_[-new_map, new_map])
+        assert np.all(np.isin(new_map, self.id)), \
+            "All mapping ids should be represented in the Allen ids"
+        # with the lateralization, self.id may have duplicate values so ismember is necessary
+        iid, inm = ismember(self.id, new_map)
+        iid = np.where(iid)[0]
+        mapind = np.zeros_like(self.id) + I_ROOT  # non assigned regions are root
+        # TODO should root be lateralised?
+        mapind[iid] = iid  # regions present in the list have the same index
+        # Starting by the higher up levels in the hierarchy, assign all descendants to the mapping
+        for i in np.argsort(self.level[iid]):
+            descendants = self.descendants(self.id[iid[i]]).id
+            _, idesc, _ = np.intersect1d(self.id, descendants, return_indices=True)
+            mapind[idesc] = iid[i]
+        mapind[0] = I_VOID  # void stays void
+        # to delateralize the regions, assign the positive index to all mapind elements
+        if lateralize is False:
+            _, iregion = ismember(np.abs(self.id), self.id)
+            mapind = mapind[iregion]
+        return mapind
+
+    def acronym2acronym(self, acronym, mapping=None):
+        """
+        Remap acronyms onto mapping.
+
+        :param acronym: list or array of acronyms
+        :param mapping: target map to remap acronyms
+        :return: array of remapped acronyms
+        """
+        mapping = mapping or self.default_mapping
+        inds = self._find_inds(acronym, self.acronym)
+        return self.acronym[self.mappings[mapping]][inds]
+
+    def acronym2id(self, acronym, mapping=None, hemisphere=None):
+        """
+        Convert acronyms to atlas IDs and remap.
+
+        :param acronym: list or array of acronyms
+        :param mapping: target map to remap atlas ids
+        :param hemisphere: which hemisphere to return atlas ids for, options 'left' or 'right'
+        :return: array of remapped atlas ids
+        """
+        mapping = mapping or self.default_mapping
+        inds = self._find_inds(acronym, self.acronym)
+        return self.id[self.mappings[mapping]][self._filter_lr(inds, mapping, hemisphere)]
+
+    def acronym2index(self, acronym, mapping=None, hemisphere=None):
+        """
+        Convert acronyms to indices and remap.
+
+        :param acronym: list or array of acronyms
+        :param mapping: target map to remap acronyms
+        :param hemisphere: which hemisphere to return indices for, options 'left' or 'right'
+        :return: array of remapped acronyms and list of indices for each acronym
+        """
+        mapping = mapping or self.default_mapping
+        acronym = self.acronym2acronym(acronym, mapping=mapping)
+        index = list()
+        for id in acronym:
+            inds = np.where(self.acronym[self.mappings[mapping]] == id)[0]
+            index.append(self._filter_lr_index(inds, hemisphere))
+
+        return acronym, index
+
+    def id2acronym(self, atlas_id, mapping=None):
+        """
+        Convert atlas IDs to acronyms and remap.
+
+        :param atlas_id: list or array of atlas ids
+        :param mapping: target map to remap acronyms
+        :return: array of remapped acronyms
+        """
+        mapping = mapping or self.default_mapping
+        inds = self._find_inds(atlas_id, self.id)
+        return self.acronym[self.mappings[mapping]][inds]
+
+    def id2id(self, atlas_id, mapping='Allen'):
+        """
+        Remap atlas IDs onto mapping.
+
+        :param atlas_id: list or array of atlas ids
+        :param mapping: target map to remap atlas ids
+        :return: array of remapped atlas ids
+        """
+        inds = self._find_inds(atlas_id, self.id)
+        return self.id[self.mappings[mapping]][inds]
+
+    def id2index(self, atlas_id, mapping='Allen'):
+        """
+        Convert atlas IDs to indices and remap.
+
+        :param atlas_id: list or array of atlas ids
+        :param mapping: mapping to use
+        :return: array of remapped atlas ids and list of indices for each atlas id
+        """
+        atlas_id = self.id2id(atlas_id, mapping=mapping)
+        index = list()
+        for id in atlas_id:
+            inds = np.where(self.id[self.mappings[mapping]] == id)[0]
+            index.append(inds)
+
+        return atlas_id, index
+
+    def index2acronym(self, index, mapping=None):
+        """
+        Convert indices to acronyms and remap.
+
+        :param index: list or array of indices into the structure tree
+        :param mapping: target map to remap acronyms
+        :return: array of remapped acronyms
+        """
+        mapping = mapping or self.default_mapping
+        return self.acronym[self.mappings[mapping]][index]
+
+    def index2id(self, index, mapping=None):
+        """
+        Convert indices to atlas IDs and remap.
+
+        :param index: list or array of indices into the structure tree
+        :param mapping: target map to remap atlas ids
+        :return: array of remapped atlas ids
+        """
+        mapping = mapping or self.default_mapping
+        return self.id[self.mappings[mapping]][index]
+
+    def _filter_lr(self, values, mapping, hemisphere):
+        """
+        Filter values by those on the left or right hemisphere.
+
+        :param values: array of index values
+        :param mapping: mapping to use
+        :param hemisphere: hemisphere, options 'left' or 'right'
+        :return: filtered array of index values
+        """
+        if 'lr' in mapping:
+            if hemisphere == 'left':
+                return values + self.n_lr
+            elif hemisphere == 'right':
+                return values
+            else:
+                return np.c_[values + self.n_lr, values]
+        else:
+            return values
+
+    def _filter_lr_index(self, values, hemisphere):
+        """
+        Filter index values by those on the left or right hemisphere.
+
+        :param values: array of index values
+        :param hemisphere: hemisphere, options 'left' or 'right'
+        :return: filtered array of index values
+        """
+        if hemisphere == 'left':
+            return values[values > self.n_lr]
+        elif hemisphere == 'right':
+            return values[values <= self.n_lr]
+        else:
+            return values
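The hemisphere filter relies on the layout of the lateralized table: per the comparisons above, indices at or below `n_lr` belong to one hemisphere and indices above it to the duplicated other hemisphere. A toy illustration, with a made-up `n_lr` and index values:

```python
# A minimal sketch of the left/right index split, assuming hypothetical values.
import numpy as np

n_lr = 3                         # hypothetical number of lateralized regions
values = np.array([1, 2, 4, 5])  # hypothetical structure tree indices

left = values[values > n_lr]     # indices in the duplicated (left) half
right = values[values <= n_lr]   # indices in the original (right) half

print(left.tolist(), right.tolist())  # [4, 5] [1, 2]
```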
+
+    def _find_inds(self, values, all_values):
+        if not isinstance(values, list) and not isinstance(values, np.ndarray):
+            values = np.array([values])
+        _, inds = ismember(np.array(values), all_values)
+
+        return inds
+
+    def parse_acronyms_argument(self, acronyms, mode='raise'):
+"""Parse input acronyms.
+
+ Returns a numpy array of region IDs regardless of the input: list of acronyms, array of
+ acronym strings or region IDs. To be used by functions to provide flexible input type.
+
+ Parameters
+ ----------
+ acronyms : array_like
+ An array of region acronyms to convert to IDs. An array of region IDs may also be
+ provided, in which case they are simply returned.
+ mode : str, optional
+ If 'raise', asserts that all acronyms exist in the structure tree.
+
+ Returns
+ -------
+ numpy.array of int
+ An array of brain regions corresponding to `acronyms`.
+ """
+        # first get the allen region ids regardless of the input type
+        acronyms = np.array(acronyms)
+        # if the user provides acronyms they're not signed by definition
+        if not np.issubdtype(acronyms.dtype, np.number):
+            user_aids = self.acronym2id(acronyms)
+            if mode == 'raise':
+                assert user_aids.size == acronyms.size, 'all acronyms must exist in the ontology'
+        else:
+            user_aids = acronyms
+        return user_aids
+
+
+
+class FranklinPaxinosRegions(_BrainRegions):
+    """Mouse Brain in Stereotaxic Coordinates (MBSC).
+
+    Paxinos G, and Franklin KBJ (2012). The Mouse Brain in Stereotaxic Coordinates, 4th edition (Elsevier Academic Press).
+    """
+    def __init__(self):
+        df_regions = pd.read_csv(FRANKLIN_FILE_REGIONS)
+        # get rid of nan values, there are rows that are in Allen but are not in the Franklin Paxinos atlas
+        df_regions = df_regions[~df_regions['Structural ID'].isna()]
+        # add in root
+        root = [{'Structural ID': int(997), 'Franklin-Paxinos Full name': 'root', 'Franklin-Paxinos abbreviation': 'root',
+                 'structure Order': 50, 'red': 255, 'green': 255, 'blue': 255, 'Allen Full name': 'root',
+                 'Allen abbreviation': 'root'}]
+        df_regions = pd.concat([pd.DataFrame(root), df_regions], ignore_index=True)
+
+        allen_regions = pd.read_csv(ALLEN_FILE_REGIONS)
+
+        # Find the level of acronyms that are the same as Allen
+        a, b = ismember(df_regions['Allen abbreviation'].values, allen_regions['acronym'].values)
+        level = allen_regions['depth'].values[b]
+        df_regions['level'] = np.full(len(df_regions), np.nan)
+        df_regions['allen level'] = np.full(len(df_regions), np.nan)
+        df_regions.loc[a, 'level'] = level
+        df_regions.loc[a, 'allen level'] = level
+
+        nan_idx = np.where(df_regions['Allen abbreviation'].isna())[0]
+        df_regions.loc[nan_idx, 'Allen abbreviation'] = df_regions['Franklin-Paxinos abbreviation'].values[nan_idx]
+        df_regions.loc[nan_idx, 'Allen Full name'] = df_regions['Franklin-Paxinos Full name'].values[nan_idx]
+
+        # Now fill in the nan values with one level up from their parents; we need to do this multiple times
+        while np.sum(np.isnan(df_regions['level'].values)) > 0:
+            nan_loc = np.isnan(df_regions['level'].values)
+            parent_level = df_regions['Parent ID'][nan_loc].values
+            a, b = ismember(parent_level, df_regions['Structural ID'].values)
+            assert len(a) == len(b) == np.sum(nan_loc)
+            level = df_regions['level'].values[b] + 1
+            df_regions.loc[nan_loc, 'level'] = level
+
+        # lateralize
+        df_regions_left = df_regions.iloc[np.array(df_regions['Structural ID'] > 0), :].copy()
+        df_regions_left['Structural ID'] = -df_regions_left['Structural ID']
+        df_regions_left['Parent ID'] = -df_regions_left['Parent ID']
+        df_regions_left['Allen Full name'] = \
+            df_regions_left['Allen Full name'].apply(lambda x: x + ' (left)')
+        df_regions = pd.concat((df_regions, df_regions_left), axis=0)
+
+        # insert void
+        void = [{'Structural ID': int(0), 'Franklin-Paxinos Full Name': 'void', 'Franklin-Paxinos abbreviation': 'void',
+                 'Parent ID': int(0), 'structure Order': 0, 'red': 0, 'green': 0, 'blue': 0, 'Allen Full name': 'void',
+                 'Allen abbreviation': 'void'}]
+        df_regions = pd.concat([pd.DataFrame(void), df_regions], ignore_index=True)
+
+        # converts colors to RGB uint8 array
+        c = np.c_[df_regions['red'], df_regions['green'], df_regions['blue']].astype(np.uint32)
+
+        super().__init__(id=df_regions['Structural ID'].to_numpy().astype(np.int64),
+                         name=df_regions['Allen Full name'].to_numpy(),
+                         acronym=df_regions['Allen abbreviation'].to_numpy(),
+                         rgb=c,
+                         level=df_regions['level'].to_numpy().astype(np.uint16),
+                         parent=df_regions['Parent ID'].to_numpy(),
+                         order=df_regions['structure Order'].to_numpy().astype(np.uint16))
+
+    def _compute_mappings(self):
+        """
+        Compute lateralized and non-lateralized mappings.
+
+        This method is called by __post_init__.
+        """
+        self.mappings = {
+            'FranklinPaxinos': self._mapping_from_regions_list(np.unique(np.abs(self.id)), lateralize=False),
+            'FranklinPaxinos-lr': np.arange(self.id.size),
+        }
+        self.default_mapping = 'FranklinPaxinos'
+        self.n_lr = int((len(self.id) - 1) / 2)  # the number of lateralized regions
+
+
+
+
+class BrainRegions(_BrainRegions):
+    """
+    A struct of Allen brain regions, their names, IDs, relationships and associated plot colours.
+
+    iblatlas.regions.BrainRegions(brainmap='Allen')
+
+    Notes
+    -----
+    The Allen atlas IDs are kept intact but lateralized as follows: labels are duplicated
+    and IDs multiplied by -1, with the understanding that left hemisphere regions have negative
+    IDs.
+    """
+    def __init__(self):
+        df_regions = pd.read_csv(ALLEN_FILE_REGIONS)
+        # lateralize
+        df_regions_left = df_regions.iloc[np.array(df_regions.id > 0), :].copy()
+        df_regions_left['id'] = -df_regions_left['id']
+        df_regions_left['parent_structure_id'] = -df_regions_left['parent_structure_id']
+        df_regions_left['name'] = df_regions_left['name'].apply(lambda x: x + ' (left)')
+        df_regions = pd.concat((df_regions, df_regions_left), axis=0)
+        # converts colors to RGB uint8 array
+        c = np.uint32(df_regions.color_hex_triplet.map(
+            lambda x: int(x, 16) if isinstance(x, str) else 256 ** 3 - 1))
+        c = np.flip(np.reshape(c.view(np.uint8), (df_regions.id.size, 4))[:, :3], 1)
+        c[0, :] = 0  # set the void region to black
+        # For void assign the depth and level to avoid warnings of nan being converted to int
+        df_regions.loc[0, 'depth'] = 0
+        df_regions.loc[0, 'graph_order'] = 0
+        # creates the BrainRegions instance
+        super().__init__(id=df_regions.id.to_numpy(),
+                         name=df_regions.name.to_numpy(),
+                         acronym=df_regions.acronym.to_numpy(),
+                         rgb=c,
+                         level=df_regions.depth.to_numpy().astype(np.uint16),
+                         parent=df_regions.parent_structure_id.to_numpy(),
+                         order=df_regions.graph_order.to_numpy().astype(np.uint16))
+
+    def _compute_mappings(self):
+        """
+        Recomputes the mapping indices for all mappings.
+
+        Attempts to load mappings from the FILE_MAPPINGS file, otherwise generates them from
+        arrays of brain IDs. In production, we use the MAPPING_FILES pqt to avoid recomputing at
+        each instantiation, as this takes a few seconds to execute.
+
+        Currently there are 8 available mappings: Allen, Beryl, Cosmos and Swanson, each
+        lateralized (with suffix -lr) and non-lateralized. Each row contains the correspondence
+        to the Allen CCF structure tree order (i.e. index) for each mapping.
+
+        This method is called by __post_init__.
+        """
+        # mappings are indices not ids: they range from 0 to n regions - 1
+        if Path(FILE_MAPPINGS).exists():
+            mappings = pd.read_parquet(FILE_MAPPINGS)
+            self.mappings = {k: mappings[k].to_numpy() for k in mappings}
+        else:
+            beryl = np.load(Path(__file__).parent.joinpath('beryl.npy'))
+            cosmos = np.load(Path(__file__).parent.joinpath('cosmos.npy'))
+            swanson = np.load(Path(__file__).parent.joinpath('swanson_regions.npy'))
+            self.mappings = {
+                'Allen': self._mapping_from_regions_list(np.unique(np.abs(self.id)), lateralize=False),
+                'Allen-lr': np.arange(self.id.size),
+                'Beryl': self._mapping_from_regions_list(beryl, lateralize=False),
+                'Beryl-lr': self._mapping_from_regions_list(beryl, lateralize=True),
+                'Cosmos': self._mapping_from_regions_list(cosmos, lateralize=False),
+                'Cosmos-lr': self._mapping_from_regions_list(cosmos, lateralize=True),
+                'Swanson': self._mapping_from_regions_list(swanson, lateralize=False),
+                'Swanson-lr': self._mapping_from_regions_list(swanson, lateralize=True),
+            }
+            pd.DataFrame(self.mappings).to_parquet(FILE_MAPPINGS)
+        self.default_mapping = 'Allen'
+        self.n_lr = int((len(self.id) - 1) / 2)  # the number of lateralized regions
+
+
+    def compute_hierarchy(self):
+        """
+        Creates a self.hierarchy attribute that is an n_levels by n_regions array
+        of indices. This is useful to perform fast vectorized computations of
+        ancestors and descendants.
+        """
+        if hasattr(self, 'hierarchy'):
+            return
+        n_levels = np.max(self.level)
+        n_regions = self.id.size
+        # creates the parent index. Void and root are omitted from intersection
+        # as they figure as NaN
+        pmask, i_p = ismember(self.parent, self.id)
+        self.iparent = np.arange(n_regions)
+        self.iparent[pmask] = i_p
+        # the last level of the hierarchy is the actual mapping, then going up level per level
+        # we assign the parent index
+        self.hierarchy = np.tile(np.arange(n_regions), (n_levels, 1))
+        _mask = np.zeros(n_regions, bool)
+        for lev in np.flipud(np.arange(n_levels)):
+            if lev < (n_levels - 1):
+                self.hierarchy[lev, _mask] = self.iparent[self.hierarchy[lev + 1, _mask]]
+            sel = self.level == (lev + 1)
+            self.hierarchy[lev, sel] = np.where(sel)[0]
+            _mask[sel] = True
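The bottom-up fill can be sketched on a toy tree. All arrays below are hypothetical stand-ins for `self.iparent` and `self.level`; the loop is the same masked gather used above:

```python
# A minimal sketch of the hierarchy table, assuming a toy 2-level tree.
import numpy as np

iparent = np.array([0, 0, 0, 1, 1, 2])   # iparent[i] is the index of region i's parent
level = np.array([0, 1, 1, 2, 2, 2])     # depth of each region in the tree

n_levels, n_regions = level.max(), level.size
hierarchy = np.tile(np.arange(n_regions), (n_levels, 1))
_mask = np.zeros(n_regions, bool)
for lev in range(n_levels - 1, -1, -1):
    if lev < n_levels - 1:
        # climb one level for the regions already assigned at the deeper row
        hierarchy[lev, _mask] = iparent[hierarchy[lev + 1, _mask]]
    sel = level == lev + 1
    hierarchy[lev, sel] = np.where(sel)[0]  # regions at this depth map to themselves
    _mask[sel] = True

print(hierarchy[0].tolist())  # [0, 1, 2, 1, 1, 2]: each region's level-1 ancestor (or self)
```

Once built, finding the ancestor of every region at a given depth is a single row lookup instead of a per-region tree walk.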
+
+
+
+    def propagate_down(self, acronyms, values):
+"""
+ This function remaps a set of user specified acronyms and values to the
+ swanson map, by filling down the child nodes when higher up values are
+ provided.
+ :param acronyms: list or array of allen ids or acronyms
+ :param values: list or array of associated values
+ :return:
+ # FIXME Why only the swanson map? Also, how is this actually related to the Swanson map?
+ """
+        user_aids = self.parse_acronyms_argument(acronyms)
+        _, user_indices = ismember(user_aids, self.id)
+        self.compute_hierarchy()
+        ia, ib = ismember(self.hierarchy, user_indices)
+        v = np.zeros_like(ia, dtype=np.float64) * np.NaN
+        v[ia] = values[ib]
+        all_values = np.nanmedian(v, axis=0)
+        indices = np.where(np.any(ia, axis=0))[0]
+        all_values = all_values[indices]
+        return indices, all_values
+
+
+
+    def remap(self, region_ids, source_map='Allen', target_map='Beryl'):
+"""
+ Remap atlas regions IDs from source map to target map.
+
+ Any NaNs in `region_ids` remain as NaN in the output array.
+
+ Parameters
+ ----------
+ region_ids : array_like of int
+ The region IDs to remap.
+ source_map : str
+ The source map name, in `self.mappings`.
+ target_map : str
+ The target map name, in `self.mappings`.
+
+ Returns
+ -------
+ numpy.array of int
+ The input IDs mapped to `target_map`.
+ """
+        isnan = np.isnan(region_ids)
+        if np.sum(isnan) > 0:
+            # In case the user provides nans
+            nan_loc = np.where(isnan)[0]
+            _, inds = ismember(region_ids[~isnan], self.id[self.mappings[source_map]])
+            mapped_ids = self.id[self.mappings[target_map][inds]].astype(float)
+            mapped_ids = np.insert(mapped_ids, nan_loc, np.full(nan_loc.shape, np.nan))
+        else:
+            _, inds = ismember(region_ids, self.id[self.mappings[source_map]])
+            mapped_ids = self.id[self.mappings[target_map][inds]]
+
+        return mapped_ids
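The two-step gather behind `remap` can be shown on toy arrays: locate each input ID's position in the source-mapped ID table, then read the ID at that position through the target mapping. The arrays below are hypothetical stand-ins for `self.id` and two entries of `self.mappings`, with a linear search in place of `ismember`:

```python
# A minimal sketch of the remap gather, assuming a made-up 4-region tree.
import numpy as np

ids = np.array([997, 8, 315, 500])   # structure tree IDs (made up)
source_map = np.array([0, 1, 2, 3])  # identity mapping over the tree
target_map = np.array([0, 1, 1, 3])  # coarser map: region at index 2 rolls up into index 1

region_ids = np.array([315, 500])
# position of each input ID within the source-mapped ID table
inds = np.array([np.flatnonzero(ids[source_map] == r)[0] for r in region_ids])
mapped = ids[target_map[inds]]

print(mapped.tolist())  # [8, 500]: 315 rolled up to 8, 500 unchanged
```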
+
+
+
+
+
+def regions_from_allen_csv():
+    """
+    (DEPRECATED) Reads the csv file containing the Allen Ontology and instantiates a BrainRegions object.
+
+    NB: Instantiate BrainRegions directly instead.
+
+    :return: BrainRegions object
+    """
+    _logger.warning("iblatlas.regions.regions_from_allen_csv() is deprecated. "
+                    "Use BrainRegions() instead")
+    return BrainRegions()
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_modules/ibllib/atlas/atlas.html b/_modules/ibllib/atlas/atlas.html
index e23e5b4d..9ce8fdc0 100644
--- a/_modules/ibllib/atlas/atlas.html
+++ b/_modules/ibllib/atlas/atlas.html
@@ -109,1070 +109,43 @@
Source code for ibllib.atlas.atlas
"""Classes for manipulating brain atlases, insertions, and coordinates."""
-from pathlib import Path, PurePosixPath
-from dataclasses import dataclass
-import logging
-import matplotlib.pyplot as plt
-import numpy as np
-import nrrd
+import warnings
+import iblatlas.atlas
-from one.webclient import http_download_file
-import one.params
-import one.remote.aws as aws
-from iblutil.numerical import ismember
-from ibllib.atlas.regions import BrainRegions, FranklinPaxinosRegions
-ALLEN_CCF_LANDMARKS_MLAPDV_UM = {'bregma': np.array([5739, 5400, 332])}
-"""dict: The ML AP DV voxel coordinates of brain landmarks in the Allen atlas."""
+
+def deprecated_decorator(function):
+    def deprecated_function(*args, **kwargs):
+        warning_text = f"{function.__module__}.{function.__name__} is deprecated. " \
+                       f"Use iblatlas.{function.__module__.split('.')[-1]}.{function.__name__} instead"
+        warnings.warn(warning_text, DeprecationWarning)
+        return function(*args, **kwargs)
-PAXINOS_CCF_LANDMARKS_MLAPDV_UM = {'bregma': np.array([5700, 4300 + 160, 330])}
-"""dict: The ML AP DV voxel coordinates of brain landmarks in the Franklin & Paxinos atlas."""
-
-S3_BUCKET_IBL = 'ibl-brain-wide-map-public'
-"""str: The name of the public IBL S3 bucket containing atlas data."""
-
-_logger = logging.getLogger(__name__)
-
-
-
-def cart2sph(x, y, z):
-"""
- Converts cartesian to spherical coordinates.
-
- Returns spherical coordinates (r, theta, phi).
-
- Parameters
- ----------
- x : numpy.array
- A 1D array of x-axis coordinates.
- y : numpy.array
- A 1D array of y-axis coordinates.
- z : numpy.array
- A 1D array of z-axis coordinates.
-
- Returns
- -------
- numpy.array
- The radial distance of each point.
- numpy.array
- The polar angle.
- numpy.array
- The azimuthal angle.
-
- See Also
- --------
- sph2cart
- """
-    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
-    phi = np.arctan2(y, x) * 180 / np.pi
-    theta = np.zeros_like(r)
-    iok = r != 0
-    theta[iok] = np.arccos(z[iok] / r[iok]) * 180 / np.pi
-    if theta.size == 1:
-        theta = float(theta)
-    return r, theta, phi
-
-
-
-
-def sph2cart(r, theta, phi):
-"""
- Converts Spherical to Cartesian coordinates.
-
- Returns Cartesian coordinates (x, y, z).
-
- Parameters
- ----------
- r : numpy.array
- A 1D array of radial distances.
- theta : numpy.array
- A 1D array of polar angles.
- phi : numpy.array
- A 1D array of azimuthal angles.
-
- Returns
- -------
- x : numpy.array
- A 1D array of x-axis coordinates.
- y : numpy.array
- A 1D array of y-axis coordinates.
- z : numpy.array
- A 1D array of z-axis coordinates.
-
- See Also
- --------
- cart2sph
- """
-    x = r * np.cos(phi / 180 * np.pi) * np.sin(theta / 180 * np.pi)
-    y = r * np.sin(phi / 180 * np.pi) * np.sin(theta / 180 * np.pi)
-    z = r * np.cos(theta / 180 * np.pi)
-    return x, y, z
+    return deprecated_function
-class BrainCoordinates:
-"""
- Class for mapping and indexing a 3D array to real-world coordinates.
-
- * x = ml, right positive
- * y = ap, anterior positive
- * z = dv, dorsal positive
-
- The layout of the Atlas dimension is done according to the most used sections so they lay
- contiguous on disk assuming C-ordering: V[iap, iml, idv]
-
- Parameters
- ----------
- nxyz : array_like
- Number of elements along each Cartesian axis (nx, ny, nz) = (nml, nap, ndv).
- xyz0 : array_like
- Coordinates of the element volume[0, 0, 0] in the coordinate space.
- dxyz : array_like, float
- Spatial interval of the volume along the 3 dimensions.
-
- Attributes
- ----------
- xyz0 : numpy.array
- The Cartesian coordinates of the element volume[0, 0, 0], i.e. the origin.
- x0 : int
- The x-axis origin coordinate of the element volume.
- y0 : int
- The y-axis origin coordinate of the element volume.
- z0 : int
- The z-axis origin coordinate of the element volume.
- """
-
-    def __init__(self, nxyz, xyz0=(0, 0, 0), dxyz=(1, 1, 1)):
-        if np.isscalar(dxyz):
-            dxyz = [dxyz] * 3
-        self.x0, self.y0, self.z0 = list(xyz0)
-        self.dx, self.dy, self.dz = list(dxyz)
-        self.nx, self.ny, self.nz = list(nxyz)
-
-    @property
-    def dxyz(self):
-        """numpy.array: Spatial interval of the volume along the 3 dimensions."""
-        return np.array([self.dx, self.dy, self.dz])
-
-    @property
-    def nxyz(self):
-        """numpy.array: Number of elements of the volume along the 3 dimensions."""
-        return np.array([self.nx, self.ny, self.nz])
-
-"""Methods ratios to indices"""
-
-
-
-"""Methods distance to indices"""
-    @staticmethod
-    def _round(i, round=True):
-"""
- Round an input value to the nearest integer, replacing NaN values with 0.
-
- Parameters
- ----------
- i : int, float, numpy.nan, numpy.array
- A value or array of values to round.
- round : bool
- If false this function is identity.
-
- Returns
- -------
- int, float, numpy.nan, numpy.array
- If round is true, returns the nearest integer, replacing NaN values with 0, otherwise
- returns the input unaffected.
- """
-        nanval = 0
-        if round:
-            ii = np.array(np.round(i)).astype(int)
-            ii[np.isnan(i)] = nanval
-            return ii
-        else:
-            return i
-
-
-    def x2i(self, x, round=True, mode='raise'):
-"""
- Find the nearest volume image index to a given x-axis coordinate.
-
- Parameters
- ----------
- x : float, numpy.array
- One or more x-axis coordinates, relative to the origin, x0.
- round : bool
- If true, round to the nearest index, replacing NaN values with 0.
- mode : {'raise', 'clip', 'wrap'}, default='raise'
- How to behave if the coordinate lies outside of the volume: raise (default) will raise
- a ValueError; 'clip' will replace the index with the closest index inside the volume;
- 'wrap' will return the index as is.
-
- Returns
- -------
- numpy.array
- The nearest indices of the image volume along the first dimension.
-
- Raises
- ------
- ValueError
- At least one x value lies outside of the atlas volume. Change 'mode' input to 'wrap' to
- keep these values unchanged, or 'clip' to return the nearest valid indices.
- """
-        i = np.asarray(self._round((x - self.x0) / self.dx, round=round))
-        if np.any(i < 0) or np.any(i >= self.nx):
-            if mode == 'clip':
-                i[i < 0] = 0
-                i[i >= self.nx] = self.nx - 1
-            elif mode == 'raise':
-                raise ValueError("At least one x value lies outside of the atlas volume.")
-            elif mode == 'wrap':  # This is only here for legacy reasons
-                pass
-        return i
-
-
-
-    def y2i(self, y, round=True, mode='raise'):
-"""
- Find the nearest volume image index to a given y-axis coordinate.
-
- Parameters
- ----------
- y : float, numpy.array
- One or more y-axis coordinates, relative to the origin, y0.
- round : bool
- If true, round to the nearest index, replacing NaN values with 0.
- mode : {'raise', 'clip', 'wrap'}
- How to behave if the coordinate lies outside of the volume: raise (default) will raise
- a ValueError; 'clip' will replace the index with the closest index inside the volume;
- 'wrap' will return the index as is.
-
- Returns
- -------
- numpy.array
- The nearest indices of the image volume along the second dimension.
-
- Raises
- ------
- ValueError
- At least one y value lies outside of the atlas volume. Change 'mode' input to 'wrap' to
- keep these values unchanged, or 'clip' to return the nearest valid indices.
- """
-        i = np.asarray(self._round((y - self.y0) / self.dy, round=round))
-        if np.any(i < 0) or np.any(i >= self.ny):
-            if mode == 'clip':
-                i[i < 0] = 0
-                i[i >= self.ny] = self.ny - 1
-            elif mode == 'raise':
-                raise ValueError("At least one y value lies outside of the atlas volume.")
-            elif mode == 'wrap':  # This is only here for legacy reasons
-                pass
-        return i
-
-
-
-    def z2i(self, z, round=True, mode='raise'):
-"""
- Find the nearest volume image index to a given z-axis coordinate.
-
- Parameters
- ----------
- z : float, numpy.array
- One or more z-axis coordinates, relative to the origin, z0.
- round : bool
- If true, round to the nearest index, replacing NaN values with 0.
- mode : {'raise', 'clip', 'wrap'}
- How to behave if the coordinate lies outside of the volume: raise (default) will raise
- a ValueError; 'clip' will replace the index with the closest index inside the volume;
- 'wrap' will return the index as is.
-
- Returns
- -------
- numpy.array
- The nearest indices of the image volume along the third dimension.
-
- Raises
- ------
- ValueError
- At least one z value lies outside of the atlas volume. Change 'mode' input to 'wrap' to
- keep these values unchanged, or 'clip' to return the nearest valid indices.
- """
-        i = np.asarray(self._round((z - self.z0) / self.dz, round=round))
-        if np.any(i < 0) or np.any(i >= self.nz):
-            if mode == 'clip':
-                i[i < 0] = 0
-                i[i >= self.nz] = self.nz - 1
-            elif mode == 'raise':
-                raise ValueError("At least one z value lies outside of the atlas volume.")
-            elif mode == 'wrap':  # This is only here for legacy reasons
-                pass
-        return i
-
-
-
-    def xyz2i(self, xyz, round=True, mode='raise'):
-"""
- Find the nearest volume image indices to the given Cartesian coordinates.
-
- Parameters
- ----------
- xyz : array_like
- One or more Cartesian coordinates, relative to the origin, xyz0.
- round : bool
- If true, round to the nearest index, replacing NaN values with 0.
- mode : {'raise', 'clip', 'wrap'}
- How to behave if any coordinate lies outside of the volume: raise (default) will raise
- a ValueError; 'clip' will replace the index with the closest index inside the volume;
- 'wrap' will return the index as is.
-
- Returns
- -------
- numpy.array
- The nearest indices of the image volume.
-
- Raises
- ------
- ValueError
- At least one coordinate lies outside of the atlas volume. Change 'mode' input to 'wrap'
- to keep these values unchanged, or 'clip' to return the nearest valid indices.
- """
-        xyz = np.array(xyz)
-        dt = int if round else float
-        out = np.zeros_like(xyz, dtype=dt)
-        out[..., 0] = self.x2i(xyz[..., 0], round=round, mode=mode)
-        out[..., 1] = self.y2i(xyz[..., 1], round=round, mode=mode)
-        out[..., 2] = self.z2i(xyz[..., 2], round=round, mode=mode)
-        return out
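The world-to-index conversion used by `x2i`/`y2i`/`z2i` is `index = round((coord - origin) / step)`, plus the out-of-range handling described in the docstrings. A standalone sketch for one axis, with hypothetical origin, step and size:

```python
# A minimal sketch of the coordinate-to-index conversion, assuming made-up axis
# parameters (x0 = origin, dx = voxel step, nx = number of voxels).
import numpy as np

x0, dx, nx = -5.0, 0.5, 21

def x2i(x, mode='raise'):
    i = np.round((np.asarray(x, dtype=float) - x0) / dx).astype(int)
    if np.any(i < 0) or np.any(i >= nx):
        if mode == 'clip':
            i = np.clip(i, 0, nx - 1)  # snap to the nearest valid index
        elif mode == 'raise':
            raise ValueError('value lies outside of the volume')
    return i

print(x2i([-5.0, 0.0, 5.0]).tolist())  # [0, 10, 20]
print(int(x2i(7.3, mode='clip')))      # 20: out of range, clipped to the last index
```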
-
-
-"""Methods indices to distance"""
-
-    def i2x(self, ind):
-"""
- Return the x-axis coordinate of a given index.
-
- Parameters
- ----------
- ind : int, numpy.array
- One or more indices along the first dimension of the image volume.
-
- Returns
- -------
- float, numpy.array
- The corresponding x-axis coordinate(s), relative to the origin, x0.
- """
-        return ind * self.dx + self.x0
-
-
-
-    def i2y(self, ind):
-"""
- Return the y-axis coordinate of a given index.
-
- Parameters
- ----------
- ind : int, numpy.array
- One or more indices along the second dimension of the image volume.
-
- Returns
- -------
- float, numpy.array
- The corresponding y-axis coordinate(s), relative to the origin, y0.
- """
-        return ind * self.dy + self.y0
-
-
-
-    def i2z(self, ind):
-"""
- Return the z-axis coordinate of a given index.
-
- Parameters
- ----------
- ind : int, numpy.array
- One or more indices along the third dimension of the image volume.
-
- Returns
- -------
- float, numpy.array
- The corresponding z-axis coordinate(s), relative to the origin, z0.
- """
-        return ind * self.dz + self.z0
-
-
-
-    def i2xyz(self, iii):
-"""
- Return the Cartesian coordinates of a given index.
-
- Parameters
- ----------
- iii : array_like
- One or more image volume indices.
-
- Returns
- -------
- numpy.array
- The corresponding xyz coordinates, relative to the origin, xyz0.
- """
-
-        iii = np.array(iii, dtype=float)
-        out = np.zeros_like(iii)
-        out[..., 0] = self.i2x(iii[..., 0])
-        out[..., 1] = self.i2y(iii[..., 1])
-        out[..., 2] = self.i2z(iii[..., 2])
-        return out
-class BrainAtlas:
-    """
-    Object that holds the image, labels and coordinate transforms for a brain atlas.
-    Currently this is designed for the Allen CCF at several resolutions, yet this
-    class can be used for other atlases.
-    """
-
-"""numpy.array: An image volume."""
- image=None
-"""numpy.array: An annotation label volume."""
- label=None
-
- def__init__(self,image,label,dxyz,regions,iorigin=[0,0,0],
- dims2xyz=[0,1,2],xyz2dims=[0,1,2]):
-"""
- self.image: image volume (ap, ml, dv)
- self.label: label volume (ap, ml, dv)
- self.bc: atlas.BrainCoordinate object
- self.regions: atlas.BrainRegions object
- self.top: 2d np array (ap, ml) containing the z-coordinate (m) of the surface of the brain
- self.dims2xyz and self.zyz2dims: map image axis order to xyz coordinates order
- """
-
- self.image=image
- self.label=label
- self.regions=regions
- self.dims2xyz=dims2xyz
- self.xyz2dims=xyz2dims
- assertnp.all(self.dims2xyz[self.xyz2dims]==np.array([0,1,2]))
- assertnp.all(self.xyz2dims[self.dims2xyz]==np.array([0,1,2]))
- # create the coordinate transform object that maps volume indices to real world coordinates
- nxyz=np.array(self.image.shape)[self.dims2xyz]
- bc=BrainCoordinates(nxyz=nxyz,xyz0=(0,0,0),dxyz=dxyz)
- self.bc=BrainCoordinates(nxyz=nxyz,xyz0=-bc.i2xyz(iorigin),dxyz=dxyz)
-
- self.surface=None
- self.boundary=None
-
- @staticmethod
- def_get_cache_dir():
- par=one.params.get(silent=True)
- path_atlas=Path(par.CACHE_DIR).joinpath('histology','ATLAS','Needles','Allen','flatmaps')
- returnpath_atlas
-
    def compute_surface(self):
        """
        Get the volume top, bottom, left and right surfaces, and from these the outer surface of
        the image volume. This is needed to compute probe insertion intersections.

        NOTE: In places where the top or bottom surface touch the top or bottom of the atlas volume, the surface
        will be set to np.nan. If you encounter issues working with these surfaces check if this might be the cause.
        """
        if self.surface is None:  # only compute if it hasn't already been computed
            axz = self.xyz2dims[2]  # this is the dv axis
            _surface = (self.label == 0).astype(np.int8) * 2
            l0 = np.diff(_surface, axis=axz, append=2)
            _top = np.argmax(l0 == -2, axis=axz).astype(float)
            _top[_top == 0] = np.nan
            _bottom = self.bc.nz - np.argmax(np.flip(l0, axis=axz) == 2, axis=axz).astype(float)
            _bottom[_bottom == self.bc.nz] = np.nan
            self.top = self.bc.i2z(_top + 1)
            self.bottom = self.bc.i2z(_bottom - 1)
            self.surface = np.diff(_surface, axis=self.xyz2dims[0], append=2) + l0
            idx_srf = np.where(self.surface != 0)
            self.surface[idx_srf] = 1
            self.srf_xyz = self.bc.i2xyz(np.c_[idx_srf[self.xyz2dims[0]], idx_srf[self.xyz2dims[1]],
                                               idx_srf[self.xyz2dims[2]]].astype(float))
    def _lookup_inds(self, ixyz, mode='raise'):
        """
        Performs a 3D lookup from volume indices ixyz to the image volume
        :param ixyz: [n, 3] array of indices in the mlapdv order
        :return: n array of flat indices
        """
        idims = np.split(ixyz[..., self.xyz2dims], [1, 2], axis=-1)
        inds = np.ravel_multi_index(idims, self.bc.nxyz[self.xyz2dims], mode=mode)
        return inds.squeeze()

    def _lookup(self, xyz, mode='raise'):
        """
        Performs a 3D lookup from real world coordinates to the flat indices in the volume,
        defined in the BrainCoordinates object.

        Parameters
        ----------
        xyz : numpy.array
            An (n, 3) array of Cartesian coordinates.
        mode : {'raise', 'clip', 'wrap'}
            How to behave if any coordinate lies outside of the volume: 'raise' (default) will
            raise a ValueError; 'clip' will replace the index with the closest index inside the
            volume; 'wrap' will return the index as is.

        Returns
        -------
        numpy.array
            A 1D array of flat indices.
        """
        return self._lookup_inds(self.bc.xyz2i(xyz, mode=mode), mode=mode)
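`_lookup_inds` reduces 3-D voxel indices to flat indices with `np.ravel_multi_index`, so a single 1-D gather (`label.flat[...]`) can replace three-axis fancy indexing. A standalone sketch of the same idea on a toy volume (shapes are illustrative):

```python
import numpy as np

vol = np.arange(2 * 3 * 4).reshape(2, 3, 4)  # toy image volume
ixyz = np.array([[0, 1, 2], [1, 2, 3]])      # two voxel indices (i, j, k)

# split the index columns and flatten them against the volume shape
idims = np.split(ixyz, [1, 2], axis=-1)
flat = np.ravel_multi_index(idims, vol.shape).squeeze()

# the flat indices address the same voxels as direct 3-D indexing
assert np.all(vol.flat[flat] == vol[ixyz[:, 0], ixyz[:, 1], ixyz[:, 2]])
```

`mode='clip'` or `mode='wrap'` in `np.ravel_multi_index` gives the out-of-bounds behaviour described in the `_lookup` docstring.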
    def get_labels(self, xyz, mapping=None, radius_um=None, mode='raise'):
        """
        Performs a 3D lookup from real world coordinates to the volume labels
        and returns the region ids according to the mapping
        :param xyz: [n, 3] array of coordinates
        :param mapping: brain region mapping (defaults to original Allen mapping)
        :param radius_um: if not null, returns a region ids array and an array of proportions
         of regions in a sphere of size radius around the coordinates.
        :param mode: {'raise', 'clip'} determines what to do when a computed index lies outside the atlas volume
         'raise' will raise a ValueError (default)
         'clip' will replace the index with the closest index inside the volume
        :return: n array of region ids
        """
        mapping = mapping or self.regions.default_mapping

        if radius_um:
            nrx = int(np.ceil(radius_um / abs(self.bc.dx) / 1e6))
            nry = int(np.ceil(radius_um / abs(self.bc.dy) / 1e6))
            nrz = int(np.ceil(radius_um / abs(self.bc.dz) / 1e6))
            nr = [nrx, nry, nrz]
            iii = self.bc.xyz2i(xyz, mode=mode)
            # computing the cube radius and indices is more complicated as volume indices are not
            # necessarily in ml, ap, dv order so the indices order is dynamic
            rcube = np.meshgrid(*tuple((np.arange(
                -nr[i], nr[i] + 1) * self.bc.dxyz[i]) ** 2 for i in self.xyz2dims))
            rcube = np.sqrt(rcube[0] + rcube[1] + rcube[2]) * 1e6
            icube = tuple(slice(-nr[i] + iii[i], nr[i] + iii[i] + 1) for i in self.xyz2dims)
            cube = self.regions.mappings[mapping][self.label[icube]]
            ilabs, counts = np.unique(cube[rcube <= radius_um], return_counts=True)
            return self.regions.id[ilabs], counts / np.sum(counts)
        else:
            regions_indices = self._get_mapping(mapping=mapping)[self.label.flat[self._lookup(xyz, mode=mode)]]
            return self.regions.id[regions_indices]
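When `radius_um` is given, `get_labels` tallies which labels occupy a sphere around the coordinate and returns their proportions via `np.unique(..., return_counts=True)`. A minimal sketch of that counting step, with hypothetical labels:

```python
import numpy as np

# hypothetical labels found inside a sphere around a coordinate
cube_labels = np.array([3, 3, 3, 7, 7, 12])

labels, counts = np.unique(cube_labels, return_counts=True)
proportions = counts / counts.sum()

assert np.all(labels == [3, 7, 12])
assert np.allclose(proportions, [1 / 2, 1 / 3, 1 / 6])
```

`np.unique` returns the labels sorted, so the proportions array is aligned with the sorted label ids.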
    def _get_mapping(self, mapping=None):
        """
        Safe way to get mappings if nothing defined in regions.
        A mapping transforms from the full Allen brain atlas ids to the remapped ids
        new_ids = ids[mapping]
        """
        mapping = mapping or self.regions.default_mapping
        if hasattr(self.regions, 'mappings'):
            return self.regions.mappings[mapping]
        else:
            return np.arange(self.regions.id.size)

    def _label2rgb(self, imlabel):
        """
        Converts a slice from the label volume to its RGB equivalent for display
        :param imlabel: 2D np-array containing label ids (slice of the label volume)
        :return: 3D np-array of the slice uint8 rgb values
        """
        if getattr(self.regions, 'rgb', None) is None:
            return self.regions.id[imlabel]
        else:  # if the regions exist and have the rgb attribute, do the rgb lookup
            return self.regions.rgb[imlabel]
    def tilted_slice(self, xyz, axis, volume='image'):
        """
        From line coordinates, extracts the tilted plane containing the line from the 3D volume
        :param xyz: np.array: points defining a probe trajectory in 3D space (xyz triplets)
         if more than 2 points are provided will take the best fit
        :param axis:
         0: along ml = sagittal-slice
         1: along ap = coronal-slice
         2: along dv = horizontal-slice
        :param volume: 'image' or 'annotation'
        :return: np.array, abscissa extent (width), ordinate extent (height),
         squeezed axis extent (depth)
        """
        if axis == 0:  # sagittal slice (squeeze/take along ml-axis)
            wdim, hdim, ddim = (1, 2, 0)
        elif axis == 1:  # coronal slice (squeeze/take along ap-axis)
            wdim, hdim, ddim = (0, 2, 1)
        elif axis == 2:  # horizontal slice (squeeze/take along dv-axis)
            wdim, hdim, ddim = (0, 1, 2)
        # get the best fit and find exit points of the volume along squeezed axis
        trj = Trajectory.fit(xyz)
        sub_volume = trj._eval(self.bc.lim(axis=hdim), axis=hdim)
        sub_volume[:, wdim] = self.bc.lim(axis=wdim)
        sub_volume_i = self.bc.xyz2i(sub_volume)
        tile_shape = np.array([np.diff(sub_volume_i[:, hdim])[0] + 1, self.bc.nxyz[wdim]])
        # get indices along each dimension
        indx = np.arange(tile_shape[1])
        indy = np.arange(tile_shape[0])
        inds = np.linspace(*sub_volume_i[:, ddim], tile_shape[0])
        # compute the slice indices and output the slice
        _, INDS = np.meshgrid(indx, np.int64(np.around(inds)))
        INDX, INDY = np.meshgrid(indx, indy)
        indsl = [[INDX, INDY, INDS][i] for i in np.argsort([wdim, hdim, ddim])[self.xyz2dims]]
        if isinstance(volume, np.ndarray):
            tslice = volume[indsl[0], indsl[1], indsl[2]]
        elif volume.lower() == 'annotation':
            tslice = self._label2rgb(self.label[indsl[0], indsl[1], indsl[2]])
        elif volume.lower() == 'image':
            tslice = self.image[indsl[0], indsl[1], indsl[2]]
        elif volume.lower() == 'surface':
            tslice = self.surface[indsl[0], indsl[1], indsl[2]]

        # get extents with correct convention NB: matplotlib flips the y-axis on imshow !
        width = np.sort(sub_volume[:, wdim])[np.argsort(self.bc.lim(axis=wdim))]
        height = np.flipud(np.sort(sub_volume[:, hdim])[np.argsort(self.bc.lim(axis=hdim))])
        depth = np.flipud(np.sort(sub_volume[:, ddim])[np.argsort(self.bc.lim(axis=ddim))])
        return tslice, width, height, depth
    def plot_tilted_slice(self, xyz, axis, volume='image', cmap=None, ax=None, return_sec=False, **kwargs):
        """
        From line coordinates, extracts the tilted plane containing the line from the 3D volume
        :param xyz: np.array: points defining a probe trajectory in 3D space (xyz triplets)
         if more than 2 points are provided will take the best fit
        :param axis:
         0: along ml = sagittal-slice
         1: along ap = coronal-slice
         2: along dv = horizontal-slice
        :param volume: 'image' or 'annotation'
        :return: matplotlib axis
        """
        if axis == 0:
            axis_labels = np.array(['ap (um)', 'dv (um)', 'ml (um)'])
        elif axis == 1:
            axis_labels = np.array(['ml (um)', 'dv (um)', 'ap (um)'])
        elif axis == 2:
            axis_labels = np.array(['ml (um)', 'ap (um)', 'dv (um)'])

        tslice, width, height, depth = self.tilted_slice(xyz, axis, volume=volume)
        width = width * 1e6
        height = height * 1e6
        depth = depth * 1e6
        if not ax:
            plt.figure()
            ax = plt.gca()
            ax.axis('equal')
        if not cmap:
            cmap = plt.get_cmap('bone')
        # get the transfer function from the y-axis to the squeezed axis for the secondary axes
        ab = np.linalg.solve(np.c_[height, height * 0 + 1], depth)
        ax.imshow(tslice, extent=np.r_[width, height], cmap=cmap, **kwargs)
        sec_ax = ax.secondary_yaxis('right', functions=(
            lambda x: x * ab[0] + ab[1],
            lambda y: (y - ab[1]) / ab[0]))
        ax.set_xlabel(axis_labels[0])
        ax.set_ylabel(axis_labels[1])
        sec_ax.set_ylabel(axis_labels[2])
        if return_sec:
            return ax, sec_ax
        else:
            return ax
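The secondary axis above relies on an affine map depth = a * height + b, recovered with `np.linalg.solve` from the two extent endpoints (two points determine the line exactly, so a 2x2 solve suffices). A standalone sketch with illustrative extents:

```python
import numpy as np

height = np.array([0.0, -4000.0])  # ordinate extent (um), illustrative
depth = np.array([100.0, 900.0])   # squeezed-axis extent (um), illustrative

# solve [h, 1] @ [a, b] = d for the affine map d = a * h + b
ab = np.linalg.solve(np.c_[height, np.ones_like(height)], depth)

forward = lambda y: y * ab[0] + ab[1]          # primary axis -> secondary axis
inverse = lambda d: (d - ab[1]) / ab[0]        # secondary axis -> primary axis

assert np.allclose(forward(height), depth)     # endpoints map exactly
assert np.allclose(inverse(forward(-1000.0)), -1000.0)
```

The `forward`/`inverse` pair is exactly what `matplotlib`'s `secondary_yaxis(functions=...)` expects.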
    @staticmethod
    def _plot_slice(im, extent, ax=None, cmap=None, volume=None, **kwargs):
        """
        Plot an atlas slice.

        Parameters
        ----------
        im : numpy.array
            A 2D image slice to plot.
        extent : array_like
            The bounding box that the image will fill, specified as (left, right, bottom, top)
            in data coordinates.
        ax : matplotlib.pyplot.Axes
            An optional Axes object to plot to.
        cmap : str, matplotlib.colors.Colormap
            The Colormap instance or registered colormap name used to map scalar data to colors.
            Defaults to 'bone'.
        volume : str
            If 'boundary', assumes image is an outline of boundaries between all regions.
            FIXME How does this affect the plot?
        **kwargs
            See matplotlib.pyplot.imshow.

        Returns
        -------
        matplotlib.pyplot.Axes
            The image axes.
        """
        if not ax:
            ax = plt.gca()
            ax.axis('equal')
        if not cmap:
            cmap = plt.get_cmap('bone')

        if volume == 'boundary':
            imb = np.zeros((*im.shape[:2], 4), dtype=np.uint8)
            imb[im == 1] = np.array([0, 0, 0, 255])
            im = imb

        ax.imshow(im, extent=extent, cmap=cmap, **kwargs)
        return ax
    def extent(self, axis):
        """
        :param axis: direction along which the volume is stacked:
         (2 = z for horizontal slice)
         (1 = y for coronal slice)
         (0 = x for sagittal slice)
        :return: extent of the slice in um, as expected by matplotlib imshow
        """
        if axis == 0:
            extent = np.r_[self.bc.ylim, np.flip(self.bc.zlim)] * 1e6
        elif axis == 1:
            extent = np.r_[self.bc.xlim, np.flip(self.bc.zlim)] * 1e6
        elif axis == 2:
            extent = np.r_[self.bc.xlim, np.flip(self.bc.ylim)] * 1e6
        return extent
    def slice(self, coordinate, axis, volume='image', mode='raise', region_values=None,
              mapping=None, bc=None):
        """
        Get slice through atlas

        :param coordinate: coordinate to slice in metres, float
        :param axis: xyz convention: 0 for ml, 1 for ap, 2 for dv
            - 0: sagittal slice (along ml axis)
            - 1: coronal slice (along ap axis)
            - 2: horizontal slice (along dv axis)
        :param volume:
            - 'image' - allen image volume
            - 'annotation' - allen annotation volume
            - 'surface' - outer surface of mesh
            - 'boundary' - outline of boundaries between all regions
            - 'volume' - custom volume, must pass in volume of shape ba.image.shape as region_values argument
            - 'value' - custom value per allen region, must pass in array of shape ba.regions.id as region_values argument
        :param mode: error mode for out of bounds coordinates
            - 'raise' raise an error
            - 'clip' gets the first or last index
        :param region_values: custom values to plot
            - if volume='volume', region_values must have shape ba.image.shape
            - if volume='value', region_values must have shape ba.regions.id
        :param mapping: mapping to use. Options can be found using ba.regions.mappings.keys()
        :return: 2d array or 3d RGB numpy int8 array
        """
        if axis == 0:
            index = self.bc.x2i(np.array(coordinate), mode=mode)
        elif axis == 1:
            index = self.bc.y2i(np.array(coordinate), mode=mode)
        elif axis == 2:
            index = self.bc.z2i(np.array(coordinate), mode=mode)

        # np.take is 50 thousand times slower than straight slicing !
        def _take(vol, ind, axis):
            if mode == 'clip':
                ind = np.minimum(np.maximum(ind, 0), vol.shape[axis] - 1)
            if axis == 0:
                return vol[ind, :, :]
            elif axis == 1:
                return vol[:, ind, :]
            elif axis == 2:
                return vol[:, :, ind]

        def _take_remap(vol, ind, axis, mapping):
            # for the labels, remap the regions indices according to the mapping
            return self._get_mapping(mapping=mapping)[_take(vol, ind, axis)]

        if isinstance(volume, np.ndarray):
            return _take(volume, index, axis=self.xyz2dims[axis])
        elif volume == 'annotation':
            iregion = _take_remap(self.label, index, self.xyz2dims[axis], mapping)
            return self._label2rgb(iregion)
        elif volume == 'image':
            return _take(self.image, index, axis=self.xyz2dims[axis])
        elif volume == 'value':
            return region_values[_take_remap(self.label, index, self.xyz2dims[axis], mapping)]
        elif volume in ['surface', 'edges']:
            self.compute_surface()
            return _take(self.surface, index, axis=self.xyz2dims[axis])
        elif volume == 'boundary':
            iregion = _take_remap(self.label, index, self.xyz2dims[axis], mapping)
            return self.compute_boundaries(iregion)
        elif volume == 'volume':
            if bc is not None:
                index = bc.xyz2i(np.array([coordinate] * 3))[axis]
            return _take(region_values, index, axis=self.xyz2dims[axis])
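The `_take` helper avoids `np.take` by slicing directly along a runtime-chosen axis, which is dramatically faster for large volumes. The same idea can be sketched generically with a tuple of `slice` objects (this generalisation is mine, not the function above, which hard-codes the three axes):

```python
import numpy as np

def take(vol, ind, axis, clip=False):
    # direct slicing along a runtime-chosen axis; much faster than np.take
    if clip:
        ind = np.minimum(np.maximum(ind, 0), vol.shape[axis] - 1)
    sl = [slice(None)] * vol.ndim
    sl[axis] = ind
    return vol[tuple(sl)]

vol = np.arange(24).reshape(2, 3, 4)
assert np.all(take(vol, 1, axis=1) == vol[:, 1, :])
assert np.all(take(vol, 99, axis=2, clip=True) == vol[:, :, 3])  # clipped to last index
```

Basic (non-fancy) slicing like this returns a view rather than a copy, which is where the speed advantage over `np.take` comes from.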
    def compute_boundaries(self, values):
        """
        Compute the boundaries between regions on a slice
        :param values: 2d array of region values (e.g. a slice of the label volume)
        :return: 2d array with 1 at the boundaries between regions and 0 elsewhere
        """
        boundary = np.abs(np.diff(values, axis=0, prepend=0))
        boundary = boundary + np.abs(np.diff(values, axis=1, prepend=0))
        boundary = boundary + np.abs(np.diff(values, axis=1, append=0))
        boundary = boundary + np.abs(np.diff(values, axis=0, append=0))

        boundary[boundary != 0] = 1

        return boundary
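`compute_boundaries` marks every pixel whose value differs from a neighbour, by summing absolute forward and backward differences along both axes and binarising. Reproduced standalone on a toy label image:

```python
import numpy as np

def boundaries(values):
    # sum absolute forward/backward differences along both axes, then binarise
    b = np.abs(np.diff(values, axis=0, prepend=0))
    b = b + np.abs(np.diff(values, axis=1, prepend=0))
    b = b + np.abs(np.diff(values, axis=1, append=0))
    b = b + np.abs(np.diff(values, axis=0, append=0))
    b[b != 0] = 1
    return b

values = np.zeros((5, 5), dtype=int)
values[2, 2] = 7  # one labelled pixel on a zero background
b = boundaries(values)

# the labelled pixel and its four direct neighbours are marked as boundary
assert b[2, 2] == 1 and b[1, 2] == 1 and b[2, 1] == 1
assert b.sum() == 5
```

Because `prepend=0`/`append=0` pad with the background value, a zero frame produces no spurious boundary at the image edge.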
    def plot_slices(self, xyz, *args, **kwargs):
        """
        From a single coordinate, plots the 3 slices that intersect at this point in a single
        matplotlib figure
        :param xyz: mlapdv coordinate in m
        :param args: arguments to be forwarded to plot slices
        :param kwargs: keyword arguments to be forwarded to plot slices
        :return: 2 by 2 array of axes
        """
        fig, axs = plt.subplots(2, 2)
        self.plot_cslice(xyz[1], *args, ax=axs[0, 0], **kwargs)
        self.plot_sslice(xyz[0], *args, ax=axs[0, 1], **kwargs)
        self.plot_hslice(xyz[2], *args, ax=axs[1, 0], **kwargs)
        xyz_um = xyz * 1e6
        axs[0, 0].plot(xyz_um[0], xyz_um[2], 'g*')
        axs[0, 1].plot(xyz_um[1], xyz_um[2], 'g*')
        axs[1, 0].plot(xyz_um[0], xyz_um[1], 'g*')
        return axs
    def plot_cslice(self, ap_coordinate, volume='image', mapping=None, region_values=None, **kwargs):
        """
        Plot coronal slice through atlas at given ap_coordinate

        :param ap_coordinate: ap coordinate (m)
        :param volume:
            - 'image' - allen image volume
            - 'annotation' - allen annotation volume
            - 'surface' - outer surface of mesh
            - 'boundary' - outline of boundaries between all regions
            - 'volume' - custom volume, must pass in volume of shape ba.image.shape as region_values argument
            - 'value' - custom value per allen region, must pass in array of shape ba.regions.id as region_values argument
        :param mapping: mapping to use. Options can be found using ba.regions.mappings.keys()
        :param region_values: custom values to plot
            - if volume='volume', region_values must have shape ba.image.shape
            - if volume='value', region_values must have shape ba.regions.id
        :param **kwargs: matplotlib.pyplot.imshow kwarg arguments
        :return: matplotlib ax object
        """
        cslice = self.slice(ap_coordinate, axis=1, volume=volume, mapping=mapping, region_values=region_values)
        return self._plot_slice(np.moveaxis(cslice, 0, 1), extent=self.extent(axis=1), volume=volume, **kwargs)
    def plot_hslice(self, dv_coordinate, volume='image', mapping=None, region_values=None, **kwargs):
        """
        Plot horizontal slice through atlas at given dv_coordinate

        :param dv_coordinate: dv coordinate (m)
        :param volume:
            - 'image' - allen image volume
            - 'annotation' - allen annotation volume
            - 'surface' - outer surface of mesh
            - 'boundary' - outline of boundaries between all regions
            - 'volume' - custom volume, must pass in volume of shape ba.image.shape as region_values argument
            - 'value' - custom value per allen region, must pass in array of shape ba.regions.id as region_values argument
        :param mapping: mapping to use. Options can be found using ba.regions.mappings.keys()
        :param region_values: custom values to plot
            - if volume='volume', region_values must have shape ba.image.shape
            - if volume='value', region_values must have shape ba.regions.id
        :param **kwargs: matplotlib.pyplot.imshow kwarg arguments
        :return: matplotlib ax object
        """
        hslice = self.slice(dv_coordinate, axis=2, volume=volume, mapping=mapping, region_values=region_values)
        return self._plot_slice(hslice, extent=self.extent(axis=2), volume=volume, **kwargs)
    def plot_sslice(self, ml_coordinate, volume='image', mapping=None, region_values=None, **kwargs):
        """
        Plot sagittal slice through atlas at given ml_coordinate

        :param ml_coordinate: ml coordinate (m)
        :param volume:
            - 'image' - allen image volume
            - 'annotation' - allen annotation volume
            - 'surface' - outer surface of mesh
            - 'boundary' - outline of boundaries between all regions
            - 'volume' - custom volume, must pass in volume of shape ba.image.shape as region_values argument
            - 'value' - custom value per allen region, must pass in array of shape ba.regions.id as region_values argument
        :param mapping: mapping to use. Options can be found using ba.regions.mappings.keys()
        :param region_values: custom values to plot
            - if volume='volume', region_values must have shape ba.image.shape
            - if volume='value', region_values must have shape ba.regions.id
        :param **kwargs: matplotlib.pyplot.imshow kwarg arguments
        :return: matplotlib ax object
        """
        sslice = self.slice(ml_coordinate, axis=0, volume=volume, mapping=mapping, region_values=region_values)
        return self._plot_slice(np.swapaxes(sslice, 0, 1), extent=self.extent(axis=0), volume=volume, **kwargs)
    def plot_top(self, volume='annotation', mapping=None, region_values=None, ax=None, **kwargs):
        """
        Plot top view of atlas
        :param volume:
            - 'image' - allen image volume
            - 'annotation' - allen annotation volume
            - 'boundary' - outline of boundaries between all regions
            - 'volume' - custom volume, must pass in volume of shape ba.image.shape as region_values argument
            - 'value' - custom value per allen region, must pass in array of shape ba.regions.id as region_values argument
        :param mapping: mapping to use. Options can be found using ba.regions.mappings.keys()
        :param region_values: custom values to plot (see volume)
        :param ax: matplotlib axis to plot to; a new figure is created if None
        :param kwargs: matplotlib.pyplot.imshow kwarg arguments
        :return: matplotlib ax object
        """
        self.compute_surface()
        ix, iy = np.meshgrid(np.arange(self.bc.nx), np.arange(self.bc.ny))
        iz = self.bc.z2i(self.top)
        inds = self._lookup_inds(np.stack((ix, iy, iz), axis=-1))

        regions = self._get_mapping(mapping=mapping)[self.label.flat[inds]]

        if volume == 'annotation':
            im = self._label2rgb(regions)
        elif volume == 'image':
            im = self.top
        elif volume == 'value':
            im = region_values[regions]
        elif volume == 'volume':
            im = np.zeros(iz.shape)
            for x in range(im.shape[0]):
                for y in range(im.shape[1]):
                    im[x, y] = region_values[x, y, iz[x, y]]
        elif volume == 'boundary':
            im = self.compute_boundaries(regions)

        return self._plot_slice(im, self.extent(axis=2), ax=ax, volume=volume, **kwargs)
@dataclass
class Trajectory:
    """
    3D Trajectory (usually for a linear probe), minimally defined by a vector and a point.

    Instantiate from a best fit from an n by 3 array containing xyz coordinates:

    >>> trj = Trajectory.fit(xyz)
    """
    vector: np.ndarray
    point: np.ndarray
    @staticmethod
    def fit(xyz):
        """
        Fits a line to a 3D cloud of points.

        Parameters
        ----------
        xyz : numpy.array
            An n by 3 array containing a cloud of points to fit a line to.

        Returns
        -------
        Trajectory
            A new trajectory object.
        """
        xyz_mean = np.mean(xyz, axis=0)
        return Trajectory(vector=np.linalg.svd(xyz - xyz_mean)[2][0], point=xyz_mean)
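`fit` takes the first right-singular vector of the centred point cloud as the line direction, which is the least-squares optimal direction. A standalone check on noiseless synthetic points lying exactly on a line:

```python
import numpy as np

rng = np.random.default_rng(0)
direction = np.array([1.0, 2.0, -1.0])
direction = direction / np.linalg.norm(direction)
t = rng.uniform(-1, 1, size=(50, 1))
xyz = np.array([0.5, -0.2, 0.1]) + t * direction  # points exactly on a 3-D line

xyz_mean = xyz.mean(axis=0)
vector = np.linalg.svd(xyz - xyz_mean)[2][0]  # principal direction of the cloud

# the fitted vector is parallel (up to sign) to the true direction
assert np.isclose(abs(np.dot(vector, direction)), 1.0)
```

The sign of the fitted vector is arbitrary, hence the `abs` in the parallelism check.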
    def eval_x(self, x):
        """
        Given an array of x coordinates, returns the xyz array of coordinates along the insertion
        :param x: n by 1 numpy array containing x-coordinates
        :return: n by 3 numpy array containing xyz-coordinates
        """
        return self._eval(x, axis=0)

    def eval_y(self, y):
        """
        Given an array of y coordinates, returns the xyz array of coordinates along the insertion
        :param y: n by 1 numpy array containing y-coordinates
        :return: n by 3 numpy array containing xyz-coordinates
        """
        return self._eval(y, axis=1)

    def eval_z(self, z):
        """
        Given an array of z coordinates, returns the xyz array of coordinates along the insertion
        :param z: n by 1 numpy array containing z-coordinates
        :return: n by 3 numpy array containing xyz-coordinates
        """
        return self._eval(z, axis=2)

    def project(self, point):
        """
        Projects a point onto the trajectory line
        :param point: np.array(x, y, z) coordinates
        :return: projected xyz coordinates
        """
        # https://mathworld.wolfram.com/Point-LineDistance3-Dimensional.html
        if point.ndim == 1:
            return self.project(point[np.newaxis])[0]
        return (self.point + np.dot(point[:, np.newaxis] - self.point, self.vector) /
                np.dot(self.vector, self.vector) * self.vector)
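`project` implements the standard orthogonal projection of a point onto a line. A standalone check that the projection keeps the along-line component and that the residual is perpendicular to the line:

```python
import numpy as np

point = np.array([0.0, 0.0, 0.0])   # a point on the line
vector = np.array([0.0, 0.0, 1.0])  # line direction (the z-axis)

def project(p):
    # orthogonal projection of p onto the line point + t * vector
    return point + np.dot(p - point, vector) / np.dot(vector, vector) * vector

p = np.array([3.0, 4.0, 7.0])
proj = project(p)

assert np.allclose(proj, [0, 0, 7])               # z-component is kept
assert np.isclose(np.dot(p - proj, vector), 0.0)  # residual is perpendicular to the line
```

The residual norm (here 5, a 3-4-5 triangle) is exactly the point-to-line distance used by `mindist`.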
    def mindist(self, xyz, bounds=None):
        """
        Computes the minimum distance to the trajectory line for one or a set of points.
        If bounds are provided, computes the minimum distance to the segment instead of an
        infinite line.
        :param xyz: [..., 3] array of coordinates
        :param bounds: defaults to None. np.array [2, 3]: segment boundaries, infinite line if None
        :return: minimum distance [...]
        """
        proj = self.project(xyz)
        d = np.sqrt(np.sum((proj - xyz) ** 2, axis=-1))
        if bounds is not None:
            # project the boundaries and the points along the trajectory
            b = np.dot(bounds, self.vector)
            ob = np.argsort(b)
            p = np.dot(xyz[:, np.newaxis], self.vector).squeeze()
            # for points below and above boundaries, compute cartesian distance to the boundary
            imin = p < np.min(b)
            d[imin] = np.sqrt(np.sum((xyz[imin, :] - bounds[ob[0], :]) ** 2, axis=-1))
            imax = p > np.max(b)
            d[imax] = np.sqrt(np.sum((xyz[imax, :] - bounds[ob[1], :]) ** 2, axis=-1))
        return d

    def _eval(self, c, axis):
        # uses symmetric form of 3d line equation to get xyz coordinates given one coordinate
        if not isinstance(c, np.ndarray):
            c = np.array(c)
        while c.ndim < 2:
            c = c[..., np.newaxis]
        # there are cases where it's impossible to project if a line is parallel to the axis
        if self.vector[axis] == 0:
            return np.nan * np.zeros((c.shape[0], 3))
        else:
            return (c - self.point[axis]) * self.vector / self.vector[axis] + self.point
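`_eval` uses the symmetric form of the 3-D line equation: given one known coordinate c along `axis`, the parameter t = (c - point[axis]) / vector[axis] recovers the full xyz triplet. Sketch with illustrative values:

```python
import numpy as np

point = np.array([1.0, 2.0, 3.0])
vector = np.array([2.0, 0.5, -1.0])  # must be non-zero along the queried axis

def eval_axis(c, axis):
    # given coordinate c along `axis`, return the xyz point on the line
    return (c - point[axis]) * vector / vector[axis] + point

xyz = eval_axis(5.0, axis=0)  # x = 5 -> t = (5 - 1) / 2 = 2
assert np.allclose(xyz, [5.0, 3.0, 1.0])
```

When `vector[axis] == 0` the line never reaches the requested coordinate, which is why `_eval` returns NaNs in that case.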
    def exit_points(self, bc):
        """
        Given a Trajectory and a BrainCoordinates object, computes the intersection of the
        trajectory with the brain coordinates bounding box
        :param bc: BrainCoordinates object
        :return: np.ndarray 2 by 3 corresponding to exit points xyz coordinates
        """
        bounds = np.c_[bc.xlim, bc.ylim, bc.zlim]
        epoints = np.r_[self.eval_x(bc.xlim), self.eval_y(bc.ylim), self.eval_z(bc.zlim)]
        epoints = epoints[~np.all(np.isnan(epoints), axis=1)]
        ind = np.all(np.bitwise_and(bounds[0, :] <= epoints, epoints <= bounds[1, :]), axis=1)
        return epoints[ind, :]
@dataclass
class Insertion:
    """
    Defines an ephys probe insertion in 3D coordinates. IBL conventions.

    To instantiate, use the static methods: `Insertion.from_track` and `Insertion.from_dict`.
    """
    x: float
    y: float
    z: float
    phi: float
    theta: float
    depth: float
    label: str = ''
    beta: float = 0
    @staticmethod
    def from_track(xyzs, brain_atlas=None):
        """
        Define an insertion from a trajectory of one or more points.

        Parameters
        ----------
        xyzs : numpy.array
            An n by 3 array of xyz coordinates representing an insertion trajectory.
        brain_atlas : BrainAtlas
            A brain atlas instance, used to attain the point of entry.

        Returns
        -------
        Insertion
        """
        assert brain_atlas, 'Input argument brain_atlas must be defined'
        traj = Trajectory.fit(xyzs)
        # project the deepest point onto the vector to get the tip coordinate
        tip = traj.project(xyzs[np.argmin(xyzs[:, 2]), :])
        # get intersection with the brain surface as an entry point
        entry = Insertion.get_brain_entry(traj, brain_atlas)
        # convert to spherical system to store the insertion
        depth, theta, phi = cart2sph(*(entry - tip))
        insertion_dict = {
            'x': entry[0], 'y': entry[1], 'z': entry[2], 'phi': phi, 'theta': theta, 'depth': depth
        }
        return Insertion(**insertion_dict)
    @staticmethod
    def from_dict(d, brain_atlas=None):
        """
        Constructs an Insertion object from the json information stored in the probes.description file.

        Parameters
        ----------
        d : dict
            A dictionary containing at least the following keys {'x', 'y', 'z', 'phi', 'theta',
            'depth'}. The depth and xyz coordinates must be in um.
        brain_atlas : BrainAtlas, default=None
            If provided, disregards the z coordinate and locks the insertion point to the z of the
            brain surface.

        Returns
        -------
        Insertion

        Examples
        --------
        >>> tri = {'x': 544.0, 'y': 1285.0, 'z': 0.0, 'phi': 0.0, 'theta': 5.0, 'depth': 4501.0}
        >>> ins = Insertion.from_dict(tri)
        """
        assert brain_atlas, 'Input argument brain_atlas must be defined'
        z = d['z'] / 1e6
        if not hasattr(brain_atlas, 'top'):
            brain_atlas.compute_surface()
        iy = brain_atlas.bc.y2i(d['y'] / 1e6)
        ix = brain_atlas.bc.x2i(d['x'] / 1e6)
        # only use the brain surface value as z if it isn't NaN (this happens when the surface
        # touches the edges of the atlas volume)
        if not np.isnan(brain_atlas.top[iy, ix]):
            z = brain_atlas.top[iy, ix]
        return Insertion(x=d['x'] / 1e6, y=d['y'] / 1e6, z=z,
                         phi=d['phi'], theta=d['theta'], depth=d['depth'] / 1e6,
                         beta=d.get('beta', 0), label=d.get('label', ''))
    @property
    def trajectory(self):
        """
        Gets the trajectory object matching insertion coordinates
        :return: atlas.Trajectory
        """
        return Trajectory.fit(self.xyz)

    @property
    def xyz(self):
        return np.c_[self.entry, self.tip].transpose()

    @property
    def entry(self):
        return np.array((self.x, self.y, self.z))

    @property
    def tip(self):
        return sph2cart(-self.depth, self.theta, self.phi) + np.array((self.x, self.y, self.z))

    @staticmethod
    def _get_surface_intersection(traj, brain_atlas, surface='top'):
        """
        Computes the intersection of a trajectory with the top or bottom surface of the brain.

        Parameters
        ----------
        traj : Trajectory
            The trajectory to intersect with the surface.
        brain_atlas : BrainAtlas
            The atlas whose surface is intersected.
        surface : {'top', 'bottom'}
            Which surface of the brain to intersect.

        Returns
        -------
        numpy.array
            The xyz coordinates of the intersection.
        """
        brain_atlas.compute_surface()

        distance = traj.mindist(brain_atlas.srf_xyz)
        dist_sort = np.argsort(distance)
        # in some cases the nearest two intersection points are not the top and bottom of brain,
        # so we find all intersection points that fall within one voxel and take the one with
        # highest dV to be entry and lowest dV to be exit
        idx_lim = np.sum(distance[dist_sort] * 1e6 < np.max(brain_atlas.res_um))
        dist_lim = dist_sort[0:idx_lim]
        z_val = brain_atlas.srf_xyz[dist_lim, 2]
        if surface == 'top':
            ma = np.argmax(z_val)
            _xyz = brain_atlas.srf_xyz[dist_lim[ma], :]
            _ixyz = brain_atlas.bc.xyz2i(_xyz)
            _ixyz[brain_atlas.xyz2dims[2]] += 1
        elif surface == 'bottom':
            ma = np.argmin(z_val)
            _xyz = brain_atlas.srf_xyz[dist_lim[ma], :]
            _ixyz = brain_atlas.bc.xyz2i(_xyz)

        xyz = brain_atlas.bc.i2xyz(_ixyz.astype(float))

        return xyz
    @staticmethod
    def get_brain_exit(traj, brain_atlas):
        """
        Given a Trajectory and a BrainAtlas object, computes the brain exit coordinate as the
        intersection of the trajectory and the brain surface (brain_atlas.surface)
        :param brain_atlas: BrainAtlas object
        :return: 3 element array x, y, z
        """
        # find the point where the trajectory intersects the bottom of the brain
        return Insertion._get_surface_intersection(traj, brain_atlas, surface='bottom')

    @staticmethod
    def get_brain_entry(traj, brain_atlas):
        """
        Given a Trajectory and a BrainAtlas object, computes the brain entry coordinate as the
        intersection of the trajectory and the brain surface (brain_atlas.surface)
        :param brain_atlas: BrainAtlas object
        :return: 3 element array x, y, z
        """
        # find the point where the trajectory intersects the top of the brain
        return Insertion._get_surface_intersection(traj, brain_atlas, surface='top')
class AllenAtlas(BrainAtlas):
    """
    The Allen Common Coordinate Framework (CCF) brain atlas.

    Instantiates an atlas.BrainAtlas corresponding to the Allen CCF at the given resolution
    using the IBL Bregma and coordinate system.
    """

    """pathlib.PurePosixPath: The default relative path of the Allen atlas file."""
    atlas_rel_path = PurePosixPath('histology', 'ATLAS', 'Needles', 'Allen')

    """numpy.array: A diffusion weighted imaging (DWI) image volume.

    The Allen atlas DWI average template volume has the shape (ap, ml, dv) and contains uint16
    values. FIXME What do the values represent?
    """
    image = None

    """numpy.array: An annotation label volume.

    The Allen atlas label volume has the shape (ap, ml, dv) and contains uint16 indices
    of the Allen CCF brain regions to which each voxel belongs.
    """
    label = None
    def __init__(self, res_um=25, scaling=(1, 1, 1), mock=False, hist_path=None):
        """
        Instantiates an atlas.BrainAtlas corresponding to the Allen CCF at the given resolution
        using the IBL Bregma and coordinate system.

        Parameters
        ----------
        res_um : {10, 25, 50} int
            The atlas resolution in micrometres; one of 10, 25 or 50um.
        scaling : float, numpy.array
            Scale factor along ml, ap, dv for squeeze and stretch (default: [1, 1, 1]).
        mock : bool
            For testing purposes, return atlas object with image comprising zeros.
        hist_path : str, pathlib.Path
            The location of the image volume. May be a full file path or a directory.

        Examples
        --------
        Instantiate Atlas from a non-default location, in this case the cache_dir of an ONE instance.
        >>> target_dir = one.cache_dir / AllenAtlas.atlas_rel_path
        ... ba = AllenAtlas(hist_path=target_dir)
        """
        LUT_VERSION = 'v01'  # version 01 is the lateralized version
        regions = BrainRegions()
        xyz2dims = np.array([1, 0, 2])  # this is the c-contiguous ordering
        dims2xyz = np.array([1, 0, 2])
        # we use Bregma as the origin
        self.res_um = res_um
        ibregma = (ALLEN_CCF_LANDMARKS_MLAPDV_UM['bregma'] / self.res_um)
        dxyz = self.res_um * 1e-6 * np.array([1, -1, -1]) * scaling
        if mock:
            image, label = [np.zeros((528, 456, 320), dtype=np.int16) for _ in range(2)]
            label[:, :, 100:105] = 1327  # lookup index for retina, id 304325711 (no id 1327)
        else:
            # hist_path may be a full path to an existing image file, or a path to a directory
            cache_dir = Path(one.params.get(silent=True).CACHE_DIR)
            hist_path = Path(hist_path or cache_dir.joinpath(self.atlas_rel_path))
            if not hist_path.suffix:  # check if folder
                hist_path /= f'average_template_{res_um}.nrrd'
            # get the image volume
            if not hist_path.exists():
                hist_path = _download_atlas_allen(hist_path)
            # get the remapped label volume
            file_label = hist_path.with_name(f'annotation_{res_um}.nrrd')
            if not file_label.exists():
                file_label = _download_atlas_allen(file_label)
            file_label_remap = hist_path.with_name(f'annotation_{res_um}_lut_{LUT_VERSION}.npz')
            if not file_label_remap.exists():
                label = self._read_volume(file_label).astype(dtype=np.int32)
                _logger.info("Computing brain atlas annotations lookup table")
                # lateralize atlas: for this the regions of the left hemisphere have primary
                # keys opposite to the normal ones
                lateral = np.zeros(label.shape[xyz2dims[0]])
                lateral[int(np.floor(ibregma[0]))] = 1
                lateral = np.sign(np.cumsum(lateral)[np.newaxis, :, np.newaxis] - 0.5)
                label = label * lateral.astype(np.int32)
                # the 10 um atlas is too big to fit in memory so work by chunks instead
                if res_um == 10:
                    first, ncols = (0, 10)
                    while True:
                        last = np.minimum(first + ncols, label.shape[-1])
                        _logger.info(f"Computing... {last} on {label.shape[-1]}")
                        _, im = ismember(label[:, :, first:last], regions.id)
                        label[:, :, first:last] = np.reshape(im, label[:, :, first:last].shape)
                        if last == label.shape[-1]:
                            break
                        first += ncols
                    label = label.astype(dtype=np.uint16)
                    _logger.info("Saving npz, this can take a long time")
                else:
                    _, im = ismember(label, regions.id)
                    label = np.reshape(im.astype(np.uint16), label.shape)
                np.savez_compressed(file_label_remap, label)
                _logger.info(f"Cached remapping file {file_label_remap} ...")
            # loads the files
            label = self._read_volume(file_label_remap)
            image = self._read_volume(hist_path)

        super().__init__(image, label, dxyz, regions, ibregma, dims2xyz=dims2xyz, xyz2dims=xyz2dims)
-    @staticmethod
-    def _read_volume(file_volume):
-        if file_volume.suffix == '.nrrd':
-            volume, _ = nrrd.read(file_volume, index_order='C')  # ml, dv, ap
-            # we want the coronal slice to be the most contiguous
-            volume = np.transpose(volume, (2, 0, 1))  # image[iap, iml, idv]
-        elif file_volume.suffix == '.npz':
-            volume = np.load(file_volume)['arr_0']
-        return volume
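The transpose in `_read_volume` reorders the nrrd axes so coronal (AP-first) slices come first; a small shape-only sketch with made-up array sizes:

```python
import numpy as np

# nrrd volumes arrive as (ml, dv, ap); moving AP to the front makes each
# coronal slice volume[iap] a contiguous 2D plane after a copy.
vol = np.zeros((456, 320, 528))                      # (ml, dv, ap), illustrative shape
coronal = np.ascontiguousarray(np.transpose(vol, (2, 0, 1)))
assert coronal.shape == (528, 456, 320)              # (ap, ml, dv)
assert coronal[0].flags['C_CONTIGUOUS']
```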
-
-
-[docs]
- defxyz2ccf(self,xyz,ccf_order='mlapdv',mode='raise'):
-"""
- Converts anatomical coordinates to CCF coordinates.
-
-    Anatomical coordinates are in meters, relative to bregma, while CCF coordinates are
-    assumed to be the volume indices multiplied by the spacing in micrometers.
-
- Parameters
- ----------
- xyz : numpy.array
- An N by 3 array of anatomical coordinates in meters, relative to bregma.
- ccf_order : {'mlapdv', 'apdvml'}, default='mlapdv'
- The order of the CCF coordinates returned. For IBL (the default) this is (ML, AP, DV),
- for Allen MCC vertices, this is (AP, DV, ML).
- mode : {'raise', 'clip', 'wrap'}, default='raise'
- How to behave if the coordinate lies outside of the volume: raise (default) will raise
- a ValueError; 'clip' will replace the index with the closest index inside the volume;
- 'wrap' will return the index as is.
-
- Returns
- -------
- numpy.array
-        Coordinates in CCF space (um; origin is the front left top corner of the data
-        volume, order determined by ccf_order).
- """
-        ordre = self._ccf_order(ccf_order)
-        ccf = self.bc.xyz2i(xyz, round=False, mode=mode) * float(self.res_um)
-        return ccf[..., ordre]
-
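A toy round-trip of the two conversions, with the axis reordering handled by an inverse permutation; `RES_UM` and `IBREGMA` below are made-up stand-ins for the atlas resolution and bregma voxel index, not the real constants:

```python
import numpy as np

RES_UM = 25.0
IBREGMA = np.array([228.0, 216.0, 18.0])             # (ml, ap, dv) voxel index, made up
DXYZ = RES_UM * 1e-6 * np.array([1, -1, -1])         # metres per voxel, ap/dv flipped

def xyz2ccf(xyz, order=(0, 1, 2)):
    """Bregma-relative metres -> CCF micrometres, reordered by `order`."""
    indices = xyz / DXYZ + IBREGMA
    return (indices * RES_UM)[..., list(order)]

def ccf2xyz(ccf, order=(0, 1, 2)):
    """Inverse conversion; np.argsort gives the inverse permutation."""
    indices = ccf[..., np.argsort(order)] / RES_UM
    return (indices - IBREGMA) * DXYZ

pt = np.array([1e-3, -2e-3, -0.5e-3])                # 1 mm ML, -2 mm AP, -0.5 mm DV
assert np.allclose(ccf2xyz(xyz2ccf(pt, (1, 2, 0)), (1, 2, 0)), pt)
assert np.allclose(xyz2ccf(np.zeros(3)), IBREGMA * RES_UM)
```

Note `np.argsort((1, 2, 0))` yields `[2, 0, 1]`, matching the reverse mapping returned by `_ccf_order` for 'apdvml'.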
-
-
-[docs]
- defccf2xyz(self,ccf,ccf_order='mlapdv'):
-"""
- Convert anatomical coordinates from CCF coordinates.
-
-    Anatomical coordinates are in meters, relative to bregma, while CCF coordinates are
-    assumed to be the volume indices multiplied by the spacing in micrometers.
-
- Parameters
- ----------
- ccf : numpy.array
- An N by 3 array of coordinates in CCF space (atlas volume indices * um resolution). The
- origin is the front left top corner of the data volume.
- ccf_order : {'mlapdv', 'apdvml'}, default='mlapdv'
- The order of the CCF coordinates given. For IBL (the default) this is (ML, AP, DV),
- for Allen MCC vertices, this is (AP, DV, ML).
-
- Returns
- -------
- numpy.array
- The MLAPDV coordinates in meters, relative to bregma.
- """
-        ordre = self._ccf_order(ccf_order, reverse=True)
-        return self.bc.i2xyz((ccf[..., ordre] / float(self.res_um)))
-
-
-    @staticmethod
-    def _ccf_order(ccf_order, reverse=False):
-        """
-        Returns the mapping to go from CCF coordinates order to the brain atlas xyz
-        :param ccf_order: 'mlapdv' or 'apdvml'
-        :param reverse: defaults to False.
-            If False, returns the mapping from CCF to brain atlas order
-            If True, returns the mapping from brain atlas to CCF order
-        :return: list of indices
-        """
-        if ccf_order == 'mlapdv':
-            return [0, 1, 2]
-        elif ccf_order == 'apdvml':
-            if reverse:
-                return [2, 0, 1]
-            else:
-                return [1, 2, 0]
-        else:
-            raise ValueError("ccf_order needs to be either 'mlapdv' or 'apdvml'")
-
-
-[docs]
-    def compute_regions_volume(self, cumsum=False):
-        """
-        Sums the number of voxels in the labels volume for each region.
-        Then computes volumes for all of the levels of hierarchy in cubic mm.
-        :param cumsum: computes the cumulative sum of the volume as per the hierarchy (defaults to False)
-        :return:
-        """
-        nr = self.regions.id.shape[0]
-        count = np.bincount(self.label.flatten(), minlength=nr)
-        if not cumsum:
-            self.regions.volume = count * (self.res_um / 1e3) ** 3
-        else:
-            self.regions.compute_hierarchy()
-            self.regions.volume = np.zeros_like(count)
-            for i in np.arange(nr):
-                if count[i] == 0:
-                    continue
-                self.regions.volume[np.unique(self.regions.hierarchy[:, i])] += count[i]
-            self.regions.volume = self.regions.volume * (self.res_um / 1e3) ** 3
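The non-cumulative branch above is a plain voxel count scaled by the voxel volume; a toy sketch with made-up shapes and region indices:

```python
import numpy as np

# bincount over a labels volume, then voxel counts scaled to cubic mm.
res_um, nr = 25, 4
label = np.zeros((10, 10, 10), dtype=np.uint16)
label[2:4] = 1                         # 200 voxels of region 1
label[5, 5] = 3                        # 10 voxels of region 3
count = np.bincount(label.ravel(), minlength=nr)
volume_mm3 = count * (res_um / 1e3) ** 3   # voxel edge in mm, cubed
assert count[1] == 200 and count[3] == 10
assert np.isclose(volume_mm3[1], 200 * 1.5625e-5)
```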
[docs]
+@deprecated_decorator
+def NeedlesAtlas(*args, **kwargs):
+    """ Instantiates an atlas.BrainAtlas corresponding to the Allen CCF at the given resolution
@@ -1745,15 +213,14 @@
Source code for ibllib.atlas.atlas
three-dimensional brain atlas using an average magnetic resonance image of 40 adult C57Bl/6J mice. Neuroimage 42(1):60-9. [doi 10.1016/j.neuroimage.2008.03.037] """
-    DV_SCALE = 0.952  # multiplicative factor on DV dimension, determined from MRI->CCF transform
-    AP_SCALE = 1.087  # multiplicative factor on AP dimension
-    kwargs['scaling'] = np.array([1, AP_SCALE, DV_SCALE])
-    return AllenAtlas(*args, **kwargs)
[docs]
+@deprecated_decorator
+def MRITorontoAtlas(*args, **kwargs):
+    """ The MRI Toronto brain atlas.
@@ -1780,171 +247,15 @@
Source code for ibllib.atlas.atlas
relatively larger in males emerge before those larger in females. Nat Commun 9, 2615. [doi 10.1038/s41467-018-04921-2] """
-    ML_SCALE = 0.952
-    DV_SCALE = 0.885  # multiplicative factor on DV dimension, determined from MRI->CCF transform
-    AP_SCALE = 1.031  # multiplicative factor on AP dimension
-    kwargs['scaling'] = np.array([ML_SCALE, AP_SCALE, DV_SCALE])
-    return AllenAtlas(*args, **kwargs)
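How the per-axis scaling enters the atlas geometry: the signed voxel size `dxyz` is stretched along ml/ap/dv (the factors below copy the MRI Toronto values above; the resolution is illustrative):

```python
import numpy as np

res_um = 25
scaling = np.array([0.952, 1.031, 0.885])            # ML, AP, DV
# signed voxel edge in metres, with AP and DV axes flipped, then stretched
dxyz = res_um * 1e-6 * np.array([1, -1, -1]) * scaling
assert np.allclose(dxyz, [2.38e-5, -2.5775e-5, -2.2125e-5])
```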
[docs]
-class FranklinPaxinosAtlas(BrainAtlas):
-
-    """pathlib.PurePosixPath: The default relative path of the atlas file."""
-    atlas_rel_path = PurePosixPath('histology', 'ATLAS', 'Needles', 'FranklinPaxinos')
-
-    def __init__(self, res_um=(10, 100, 10), scaling=(1, 1, 1), mock=False, hist_path=None):
-        """The Franklin & Paxinos brain atlas.
-
-        Instantiates an atlas.BrainAtlas corresponding to the Franklin & Paxinos atlas [1]_ at the
-        given resolution, matched to the Allen Coordinate Framework [2]_ and using the IBL bregma
-        and coordinate system. The Franklin Paxinos volume has a resolution of 10 um in the ML and
-        DV axes and 100 um in the AP direction.
-
-        Parameters
-        ----------
-        res_um : list, numpy.array
-            The atlas resolution in micrometres in each dimension.
-        scaling : float, numpy.array
-            Scale factor along ml, ap, dv for squeeze and stretch (default: [1, 1, 1]).
-        mock : bool
-            For testing purposes, return atlas object with image comprising zeros.
-        hist_path : str, pathlib.Path
-            The location of the image volume. May be a full file path or a directory.
-
-        Examples
-        --------
-        Instantiate Atlas from a non-default location, in this case the cache_dir of an ONE instance.
-        >>> target_dir = one.cache_dir / AllenAtlas.atlas_rel_path
-        ... ba = FranklinPaxinosAtlas(hist_path=target_dir)
-
-        References
-        ----------
-        .. [1] Paxinos G, and Franklin KBJ (2012) The Mouse Brain in Stereotaxic Coordinates, 4th
-           edition (Elsevier Academic Press)
-        .. [2] Chon U et al (2019) Enhanced and unified anatomical labeling for a common mouse
-           brain atlas [doi 10.1038/s41467-019-13057-w]
-        """
-        # TODO interpolate?
-        LUT_VERSION = 'v01'  # version 01 is the lateralized version
-        regions = FranklinPaxinosRegions()
-        xyz2dims = np.array([1, 0, 2])  # this is the c-contiguous ordering
-        dims2xyz = np.array([1, 0, 2])
-        # we use Bregma as the origin
-        self.res_um = np.asarray(res_um)
-        ibregma = (PAXINOS_CCF_LANDMARKS_MLAPDV_UM['bregma'] / self.res_um)
-        dxyz = self.res_um * 1e-6 * np.array([1, -1, -1]) * scaling
-        if mock:
-            image, label = [np.zeros((528, 456, 320), dtype=np.int16) for _ in range(2)]
-            label[:, :, 100:105] = 1327  # lookup index for retina, id 304325711 (no id 1327)
-        else:
-            # Hist path may be a full path to an existing image file, or a path to a directory
-            cache_dir = Path(one.params.get(silent=True).CACHE_DIR)
-            hist_path = Path(hist_path or cache_dir.joinpath(self.atlas_rel_path))
-            if not hist_path.suffix:  # check if folder
-                hist_path /= f'average_template_{res_um[0]}_{res_um[1]}_{res_um[2]}.npz'
-
-            # get the image volume
-            if not hist_path.exists():
-                hist_path.parent.mkdir(exist_ok=True, parents=True)
-                aws.s3_download_file(f'atlas/FranklinPaxinos/{hist_path.name}', str(hist_path))
-            # get the remapped label volume
-            file_label = hist_path.with_name(f'annotation_{res_um[0]}_{res_um[1]}_{res_um[2]}.npz')
-            if not file_label.exists():
-                file_label.parent.mkdir(exist_ok=True, parents=True)
-                aws.s3_download_file(f'atlas/FranklinPaxinos/{file_label.name}', str(file_label))
-
-            file_label_remap = hist_path.with_name(f'annotation_{res_um[0]}_{res_um[1]}_{res_um[2]}_lut_{LUT_VERSION}.npz')
-
-            if not file_label_remap.exists():
-                label = self._read_volume(file_label).astype(dtype=np.int32)
-                _logger.info("Computing brain atlas annotations lookup table")
-                # lateralize atlas: for this the regions of the left hemisphere have primary
-                # keys opposite to the normal ones
-                lateral = np.zeros(label.shape[xyz2dims[0]])
-                lateral[int(np.floor(ibregma[0]))] = 1
-                lateral = np.sign(np.cumsum(lateral)[np.newaxis, :, np.newaxis] - 0.5)
-                label = label * lateral.astype(np.int32)
-                _, im = ismember(label, regions.id)
-                label = np.reshape(im.astype(np.uint16), label.shape)
-                np.savez_compressed(file_label_remap, label)
-                _logger.info(f"Cached remapping file {file_label_remap} ...")
-            # loads the files
-            label = self._read_volume(file_label_remap)
-            image = self._read_volume(hist_path)
-
-        super().__init__(image, label, dxyz, regions, ibregma, dims2xyz=dims2xyz, xyz2dims=xyz2dims)
-
-    @staticmethod
-    def _read_volume(file_volume):
-        """
-        Loads an atlas image volume given a file path.
-
-        Parameters
-        ----------
-        file_volume : pathlib.Path
-            The file path of an image volume. Currently supports .nrrd and .npz files.
-
-        Returns
-        -------
-        numpy.array
-            The loaded image volume with dimensions (ap, ml, dv).
-
-        Raises
-        ------
-        ValueError
-            Unknown file extension, expects either '.nrrd' or '.npz'.
-        """
-        if file_volume.suffix == '.nrrd':
-            volume, _ = nrrd.read(file_volume, index_order='C')  # ml, dv, ap
-            # we want the coronal slice to be the most contiguous
-            volume = np.transpose(volume, (2, 0, 1))  # image[iap, iml, idv]
-        elif file_volume.suffix == '.npz':
-            volume = np.load(file_volume)['arr_0']
-        else:
-            raise ValueError(
-                f'"{file_volume.suffix}" files not supported, must be either ".nrrd" or ".npz"')
-        return volume
sync, chmap = ephys_fpga.get_main_probe_sync(sess_path, bin_exists=False)
_ = ephys_fpga.extract_all(sess_path, output_path=temp_alf_folder, save=True)
# check that the output is complete
-    fpga_trials = ephys_fpga.extract_behaviour_sync(sync, chmap=chmap, display=display)
+    fpga_trials, *_ = ephys_fpga.extract_behaviour_sync(sync, chmap=chmap, display=display)
# align with the bpod
bpod2fpga = ephys_fpga.align_with_bpod(temp_alf_folder.parent)
alf_trials = alfio.load_object(temp_alf_folder, 'trials')
diff --git a/_modules/ibllib/io/extractors/biased_trials.html b/_modules/ibllib/io/extractors/biased_trials.html
index cbf75117..c6a05f77 100644
--- a/_modules/ibllib/io/extractors/biased_trials.html
+++ b/_modules/ibllib/io/extractors/biased_trials.html
@@ -203,12 +203,12 @@
Source code for ibllib.io.extractors.biased_trials
except AssertionError as ex:
    _logger.critical('Failed to extract using %s: %s', sync_label, ex)
- # If you reach here extracting using sync TTLs was not possible
- _logger.warning('Alignment by wheel data not yet implemented')
+ # If you reach here extracting using sync TTLs was not possible, we attempt to align using wheel motion energy
+ _logger.warning('Attempting to align using wheel')
+
+ try:
+        if self.label not in ['left', 'right']:
+            # Can only use wheel alignment for left and right cameras
+            raise ValueError(f'Wheel alignment not supported for {self.label} camera')
+
+        motion_class = vmotion.MotionAlignmentFullSession(self.session_path, self.label, sync='nidq', upload=True)
+        new_times = motion_class.process()
+        if not motion_class.qc_outcome:
+            raise ValueError(f'Wheel alignment for {self.label} camera failed to pass qc: {motion_class.qc}')
+        else:
+            _logger.warning(f'Wheel alignment for {self.label} camera successful, qc: {motion_class.qc}')
+        return new_times
+
+    except Exception as err:
+        _logger.critical(f'Failed to align with wheel for {self.label} camera: {err}')
+
    if length < raw_ts.size:
        df = raw_ts.size - length
        _logger.info(f'Discarding first {df} pulses')
        raw_ts = raw_ts[df:]
+
    return raw_ts
A set of Bpod trials fields to keep. bpod_rsync_fields : tuple A set of Bpod trials fields to sync to the DAQ times.
-
-    TODO Turn into property getter; requires ensuring the output fields are the same for legacy
    """
    if self.bpod_extractor:
-        self.var_names = self.bpod_extractor.var_names
-        self.save_names = self.bpod_extractor.save_names
-        self.bpod_rsync_fields = bpod_rsync_fields or self._time_fields(self.bpod_extractor.var_names)
-        self.bpod_fields = bpod_fields or [x for x in self.bpod_extractor.var_names if x not in self.bpod_rsync_fields]
+        for var_name, save_name in zip(self.bpod_extractor.var_names, self.bpod_extractor.save_names):
+            if var_name not in self.var_names:
+                self.var_names += (var_name,)
+                self.save_names += (save_name,)
+
+ # self.var_names = self.bpod_extractor.var_names
+ # self.save_names = self.bpod_extractor.save_names
+        self.settings = self.bpod_extractor.settings  # This is used by the TaskQC
+        self.bpod_rsync_fields = bpod_rsync_fields
+        if self.bpod_rsync_fields is None:
+            self.bpod_rsync_fields = tuple(self._time_fields(self.bpod_extractor.var_names))
+            if 'table' in self.bpod_extractor.var_names:
+                if not self.bpod_trials:
+                    self.bpod_trials = self.bpod_extractor.extract(save=False)
+                table_keys = alfio.AlfBunch.from_df(self.bpod_trials['table']).keys()
+                self.bpod_rsync_fields += tuple(self._time_fields(table_keys))
+        elif bpod_rsync_fields:
+            self.bpod_rsync_fields = bpod_rsync_fields
+        excluded = (*self.bpod_rsync_fields, 'table')
+        if bpod_fields:
+            assert not set(self.bpod_fields).intersection(excluded), 'bpod_fields must not also be bpod_rsync_fields'
+            self.bpod_fields = bpod_fields
+        elif self.bpod_extractor:
+            self.bpod_fields = tuple(x for x in self.bpod_extractor.var_names if x not in excluded)
+            if 'table' in self.bpod_extractor.var_names:
+                if not self.bpod_trials:
+                    self.bpod_trials = self.bpod_extractor.extract(save=False)
+                table_keys = alfio.AlfBunch.from_df(self.bpod_trials['table']).keys()
+                self.bpod_fields += (*[x for x in table_keys if x not in excluded], self.sync_field + '_bpod')

    @staticmethod
    def _time_fields(trials_attr) -> set:
@@ -915,7 +939,8 @@
Source code for ibllib.io.extractors.ephys_fpga
        pattern = re.compile(fr'^[_\w]*({"|".join(FIELDS)})[_\w]*$')
        return set(filter(pattern.match, trials_attr))
-    def _extract(self, sync=None, chmap=None, sync_collection='raw_ephys_data', task_collection='raw_behavior_data', **kwargs):
+    def _extract(self, sync=None, chmap=None, sync_collection='raw_ephys_data',
+                 task_collection='raw_behavior_data', **kwargs) -> dict:
        """Extracts ephys trials by combining Bpod and FPGA sync pulses"""
        # extract the behaviour data from bpod
        if sync is None or chmap is None:
@@ -941,7 +966,8 @@
Source code for ibllib.io.extractors.ephys_fpga
        else:
            tmin = tmax = None
-        fpga_trials = extract_behaviour_sync(
+        # Store the cleaned frame2ttl, audio, and bpod pulses as this will be used for QC
+        fpga_trials, self.frame2ttl, self.audio, self.bpod = extract_behaviour_sync(
            sync=sync, chmap=chmap, bpod_trials=self.bpod_trials, tmin=tmin, tmax=tmax)
        assert self.sync_field in self.bpod_trials and self.sync_field in fpga_trials
        self.bpod_trials[f'{self.sync_field}_bpod'] = np.copy(self.bpod_trials[self.sync_field])
@@ -964,18 +990,20 @@
        If save is True, a list of file paths to the extracted data.
    """
    # Extract Bpod trials
-    bpod_raw = raw_data_loaders.load_data(session_path, task_collection=task_collection)
+    bpod_raw = raw.load_data(session_path, task_collection=task_collection)
    assert bpod_raw is not None, 'No task trials data in raw_behavior_data - Exit'
    bpod_trials, *_ = bpod_extract_all(
        session_path=session_path, bpod_trials=bpod_raw, task_collection=task_collection,
diff --git a/_modules/ibllib/io/extractors/ephys_passive.html b/_modules/ibllib/io/extractors/ephys_passive.html
index 08069b02..ffbd5b45 100644
--- a/_modules/ibllib/io/extractors/ephys_passive.html
+++ b/_modules/ibllib/io/extractors/ephys_passive.html
@@ -335,7 +335,7 @@
Source code for ibllib.io.extractors.ephys_passive
f'trace ({int(np.size(spacer_times)/2)})')
-    if tmax is None:
+    if tmax is None:  # TODO THIS NEEDS CHANGING AS FOR DYNAMIC PIPELINE F2TTL slower than valve
        tmax = fttl['times'][-1]
    spacer_times = np.r_[spacer_times.flatten(), tmax]
diff --git a/_modules/ibllib/io/extractors/habituation_trials.html b/_modules/ibllib/io/extractors/habituation_trials.html
index dc156d68..318f3722 100644
--- a/_modules/ibllib/io/extractors/habituation_trials.html
+++ b/_modules/ibllib/io/extractors/habituation_trials.html
@@ -125,16 +125,15 @@
Source code for ibllib.io.extractors.habituation_trials
["iti"][0][0]fortrinself.bpod_trials])
+ # Phase and position
+ out['position']=np.array([t['position']fortinself.bpod_trials])
+ out['phase']=np.array([t['stim_phase']fortinself.bpod_trials])
+
# NB: We lose the last trial because the stim off event occurs at trial_num + 1n_trials=out['stimOff_times'].size
- return[out[k][:n_trials]forkinself.var_names]
+ # return [out[k][:n_trials] for k in self.var_names]
+ return{k:out[k][:n_trials]forkinself.var_names}
[docs]
def patch_imaging_meta(meta: dict) -> dict:
    """
-    Patch imaging meta data for compatibility across versions.
+    Patch imaging metadata for compatibility across versions. A copy of the dict is NOT returned.

    Parameters
    ----------
-    dict : dict
+    meta : dict
        A folder path that contains a rawImagingData.meta file.

    Returns
    -------
    dict
-        The loaded meta data file, updated to the most recent version.
+        The loaded metadata file, updated to the most recent version.
    """
-    # 2023-05-17 (unversioned) adds nFrames and channelSaved keys
-    if parse_version(meta.get('version') or '0.0.0') <= parse_version('0.0.0'):
+    # 2023-05-17 (unversioned) adds nFrames, channelSaved keys, MM and Deg keys
+    version = parse_version(meta.get('version') or '0.0.0')
+    if version <= parse_version('0.0.0'):
        if 'channelSaved' not in meta:
            meta['channelSaved'] = next((x['channelIdx'] for x in meta['FOV'] if 'channelIdx' in x), [])
+        fields = ('topLeft', 'topRight', 'bottomLeft', 'bottomRight')
+        for fov in meta.get('FOV', []):
+            for unit in ('Deg', 'MM'):
+                if unit not in fov:  # topLeftDeg, etc. -> Deg[topLeft]
+                    fov[unit] = {f: fov.pop(f + unit, None) for f in fields}
+    elif version == parse_version('0.1.0'):
+        for fov in meta.get('FOV', []):
+            if 'roiUuid' in fov:
+                fov['roiUUID'] = fov.pop('roiUuid')
    return meta
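A stand-alone illustration of the unversioned-metadata patch: flat corner keys such as `topLeftDeg` are nested under a `Deg` (or `MM`) sub-dict, with `None` filling any absent unit (the FOV values here are made up):

```python
fields = ('topLeft', 'topRight', 'bottomLeft', 'bottomRight')
fov = {'topLeftDeg': [0, 0], 'topRightDeg': [1, 0],
       'bottomLeftDeg': [0, 1], 'bottomRightDeg': [1, 1]}
for unit in ('Deg', 'MM'):
    if unit not in fov:
        fov[unit] = {f: fov.pop(f + unit, None) for f in fields}
assert fov['Deg']['topLeft'] == [0, 0]
assert fov['MM']['topLeft'] is None                  # MM keys were absent
assert 'topLeftDeg' not in fov                       # flat key consumed by pop
```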
    roi = (*[slice(*r) for r in self.roi[side]], 0)
    try:
        # TODO Add function arg to make grayscale
-        self.alignment.frames = \
-            vidio.get_video_frames_preload(camera_path, frame_numbers, mask=roi)
+        self.alignment.frames = vidio.get_video_frames_preload(camera_path, frame_numbers, mask=roi)
        assert self.alignment.frames.size != 0
    except AssertionError:
        self.log.error('Failed to open video')
@@ -352,8 +354,8 @@
+[docs]
+class MotionAlignmentFullSession:
+    def __init__(self, session_path, label, **kwargs):
+        """
+        Class to extract camera times using video motion energy wheel alignment
+        :param session_path: path of the session
+        :param label: video label, only 'left' and 'right' videos are supported
+        :param kwargs: threshold - the threshold to apply when identifying frames with artefacts (default 20)
+                       upload - whether to upload summary figure to alyx (default False)
+                       twin - the window length used when computing the shifts between the wheel and video
+                       nprocesses - the number of CPU processes to use
+                       sync - the type of sync scheme used (options 'nidq' or 'bpod')
+                       location - whether the code is being run on SDSC or not (options 'SDSC' or None)
+        """
+        self.session_path = session_path
+        self.label = label
+        self.threshold = kwargs.get('threshold', 20)
+        self.upload = kwargs.get('upload', False)
+        self.twin = kwargs.get('twin', 150)
+        self.nprocess = kwargs.get('nprocess', int(cpu_count() - cpu_count() / 4))
+
+        self.load_data(sync=kwargs.get('sync', 'nidq'), location=kwargs.get('location', None))
+        self.roi, self.mask = self.get_roi_mask()
+
+        if self.upload:
+            self.one = ONE(mode='remote')
+            self.one.alyx.authenticate()
+            self.eid = self.one.path2eid(self.session_path)
+
+
+[docs]
+    def load_data(self, sync='nidq', location=None):
+        """
+        Loads relevant data from disk to perform motion alignment
+        :param sync: type of sync used, 'nidq' or 'bpod'
+        :param location: where the code is being run, if location='SDSC', the dataset uuids are removed
+                         when loading the data
+        :return:
+        """
+        def fix_keys(alf_object):
+            """
+            Given an alf object, removes the dataset uuid from the keys
+            :param alf_object:
+            :return:
+            """
+            ob = Bunch()
+            for key in alf_object.keys():
+                vals = alf_object[key]
+                ob[key.split('.')[0]] = vals
+            return ob
+
+        alf_path = self.session_path.joinpath('alf')
+        wheel = (fix_keys(alfio.load_object(alf_path, 'wheel')) if location == 'SDSC' else alfio.load_object(alf_path, 'wheel'))
+        self.wheel_timestamps = wheel.timestamps
+        # Compute interpolated wheel position and wheel times
+        wheel_pos, self.wheel_time = wh.interpolate_position(wheel.timestamps, wheel.position, freq=1000)
+        # Compute wheel velocity
+        self.wheel_vel, _ = wh.velocity_filtered(wheel_pos, 1000)
+        # Load in original camera times
+        self.camera_times = alfio.load_file_content(next(alf_path.glob(f'_ibl_{self.label}Camera.times*.npy')))
+        self.camera_path = str(next(self.session_path.joinpath('raw_video_data').glob(f'_iblrig_{self.label}Camera.raw*.mp4')))
+        self.camera_meta = vidio.get_video_meta(self.camera_path)
+
+        # TODO should read in the description file to get the correct sync location
+        if sync == 'nidq':
+            # If the sync is 'nidq' we read in the camera ttls from the spikeglx sync object
+            sync, chmap = get_sync_and_chn_map(self.session_path, sync_collection='raw_ephys_data')
+            sr = get_sync_fronts(sync, chmap[f'{self.label}_camera'])
+            self.ttls = sr.times[::2]
+        else:
+            # Otherwise we assume the sync is 'bpod' and we read in the camera ttls from the raw bpod data
+            cam_extractor = cam.CameraTimestampsBpod(session_path=self.session_path)
+            cam_extractor.bpod_trials = raw.load_data(self.session_path, task_collection='raw_behavior_data')
+            self.ttls = cam_extractor._times_from_bpod()
+
+        # Check if the ttl and video sizes match up
+        self.tdiff = self.ttls.size - self.camera_meta['length']
+
+        if self.tdiff < 0:
+            # In this case there are fewer ttls than camera frames. This is not ideal, for now we pad the ttls with
+            # nans but if this is too many we reject the wheel alignment based on the qc
+            self.ttl_times = self.ttls
+            self.times = np.r_[self.ttl_times, np.full((np.abs(self.tdiff)), np.nan)]
+            self.short_flag = True
+        elif self.tdiff > 0:
+            # In this case there are more ttls than camera frames. This happens often, for now we remove the first
+            # tdiff ttls from the ttls
+            self.ttl_times = self.ttls[self.tdiff:]
+            self.times = self.ttls[self.tdiff:]
+            self.short_flag = False
+
+        # Compute the frame rate of the camera
+        self.frate = round(1 / np.nanmedian(np.diff(self.ttl_times)))
+
+        # We attempt to load in some behavior data (trials and dlc). This is only needed for the summary plots; having
+        # trial aligned paw velocity (from the dlc) is a nice sanity check to make sure the alignment went well
+        try:
+            self.trials = alfio.load_file_content(next(alf_path.glob('_ibl_trials.table*.pqt')))
+            self.dlc = alfio.load_file_content(next(alf_path.glob(f'_ibl_{self.label}Camera.dlc*.pqt')))
+            self.dlc = likelihood_threshold(self.dlc)
+            self.behavior = True
+        except (ALFObjectNotFound, StopIteration):
+            self.behavior = False
+
+        # Load in a single frame that we will use for the summary plot
+        self.frame_example = vidio.get_video_frames_preload(self.camera_path, np.arange(10, 11), mask=np.s_[:, :, 0])
+
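The tdiff handling above, reduced to a tiny helper (a sketch, not the class method): pad with NaNs when ttls are missing, drop the leading extras otherwise.

```python
import numpy as np

def reconcile(ttls, n_frames):
    """Make the ttl array the same length as the video frame count."""
    tdiff = ttls.size - n_frames
    if tdiff < 0:
        return np.r_[ttls, np.full(-tdiff, np.nan)]   # fewer ttls: pad with NaNs
    return ttls[tdiff:]                               # extra ttls: drop the first tdiff

assert reconcile(np.arange(5.), 7).size == 7
assert np.isnan(reconcile(np.arange(5.), 7)[-1])
assert np.array_equal(reconcile(np.arange(5.), 3), [2., 3., 4.])
```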
+
+
+[docs]
+    def get_roi_mask(self):
+        """
+        Compute the region of interest mask for a given camera. This corresponds to a box in the video that we will
+        use to compute the wheel motion energy
+        :return:
+        """
+
+        if self.label == 'right':
+            roi = ((450, 512), (120, 200))
+        else:
+            roi = ((900, 1024), (850, 1010))
+        roi_mask = (*[slice(*r) for r in roi], 0)
+
+        return roi, roi_mask
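Applying the mask: the tuple of slices plus a channel index crops the wheel box out of an (H, W, 3) frame and keeps a single colour channel (the frame shape below is illustrative):

```python
import numpy as np

roi = ((450, 512), (120, 200))                       # right-camera box from above
mask = (*[slice(*r) for r in roi], 0)
frame = np.zeros((512, 640, 3), dtype=np.uint8)
crop = frame[mask]                                   # rows 450:512, cols 120:200, channel 0
assert crop.shape == (62, 80)
```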
+
+
+
+[docs]
+    def find_contaminated_frames(self, video_frames, threshold=20, normalise=True):
+        """
+        Finds frames in the video that have artefacts such as the mouse's paw or a human hand. In order to determine
+        frames with contamination, an Otsu thresholding is applied to each frame to detect the artefact from the
+        background image
+        :param video_frames: np array of video frames (nframes, nwidth, nheight)
+        :param threshold: threshold to differentiate artefact from background
+        :param normalise: whether to normalise the threshold values for each frame to the baseline
+        :return: mask of frames that are contaminated
+        """
+        high = np.zeros((video_frames.shape[0]))
+        # Iterate through each frame and compute and store the otsu threshold value for each frame
+        for idx, frame in enumerate(video_frames):
+            ret, _ = cv2.threshold(cv2.GaussianBlur(frame, (5, 5), 0), 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
+            high[idx] = ret
+
+        # If normalise is True, we subtract the minimum threshold value across frames as a baseline
+        if normalise:
+            high -= np.min(high)
+
+        # Identify the frames that have a threshold value greater than the specified threshold cutoff
+        contaminated_frames = np.where(high > threshold)[0]
+
+        return contaminated_frames
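A simplified stand-in for the contamination check, with mean intensity replacing the per-frame Otsu threshold: score each frame with a scalar, normalise to the session minimum, and flag frames whose score jumps above the cutoff.

```python
import numpy as np

def find_outlier_frames(frames, threshold=20, normalise=True):
    # one scalar per frame (mean intensity, standing in for the Otsu value)
    score = frames.reshape(frames.shape[0], -1).mean(axis=1)
    if normalise:
        score = score - score.min()                  # baseline subtraction
    return np.where(score > threshold)[0]

frames = np.full((5, 4, 4), 10.0)
frames[2] += 50                                      # one "contaminated" frame
assert list(find_outlier_frames(frames)) == [2]
```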
+
+
+
+[docs]
+    def compute_motion_energy(self, first, last, wg, iw):
+        """
+        Computes the video motion energy for frame indexes between first and last. This function is written to be run
+        in a parallel fashion using joblib.parallel
+        :param first: first frame index of frame interval to consider
+        :param last: last frame index of frame interval to consider
+        :param wg: WindowGenerator
+        :param iw: iteration of the WindowGenerator
+        :return:
+        """
+
+        if iw == wg.nwin - 1:
+            return
+
+        # Open the video and read in the relevant video frames between first idx and last idx
+        cap = cv2.VideoCapture(self.camera_path)
+        frames = vidio.get_video_frames_preload(cap, np.arange(first, last), mask=self.mask)
+        # Identify if any of the frames have artefacts in them
+        idx = self.find_contaminated_frames(frames, self.threshold)
+
+        # If some of the frames are contaminated we find all the continuous intervals of contamination
+        # and set the value for contaminated pixels for these frames to the average of the first frame before and after
+        # this contamination interval
+        if len(idx) != 0:
+
+            before_status = False
+            after_status = False
+
+            counter = 0
+            n_frames = 200
+            # If it is the first frame that is contaminated, we need to read in a bit more of the video to find a
+            # frame prior to contamination. We attempt this 20 times; after that we just take the value for the first
+            # frame
+            while np.any(idx == 0) and counter < 20 and iw != 0:
+                n_before_offset = (counter + 1) * n_frames
+                first -= n_frames
+                extra_frames = vidio.get_video_frames_preload(cap, frame_numbers=np.arange(first - n_frames, first),
+                                                              mask=self.mask)
+                frames = np.concatenate([extra_frames, frames], axis=0)
+
+                idx = self.find_contaminated_frames(frames, self.threshold)
+                before_status = True
+                counter += 1
+            if counter > 0:
+                print(f'In before: {counter}')
+
+            counter = 0
+            # If it is the last frame that is contaminated, we need to read in a bit more of the video to find a
+            # frame after the contamination. We attempt this 20 times; after that we just take the value for the last
+            # frame
+            while np.any(idx == frames.shape[0] - 1) and counter < 20 and iw != wg.nwin - 1:
+                n_after_offset = (counter + 1) * n_frames
+                last += n_frames
+                extra_frames = vidio.get_video_frames_preload(cap, frame_numbers=np.arange(last, last + n_frames), mask=self.mask)
+                frames = np.concatenate([frames, extra_frames], axis=0)
+                idx = self.find_contaminated_frames(frames, self.threshold)
+                after_status = True
+                counter += 1
+
+            if counter > 0:
+                print(f'In after: {counter}')
+
+            # We find all the continuous intervals that contain contamination and fix the affected pixels
+            # by taking the average value of the frame prior and after contamination
+            intervals = np.split(idx, np.where(np.diff(idx) != 1)[0] + 1)
+            for ints in intervals:
+                if len(ints) > 0 and ints[0] == 0:
+                    ints = ints[1:]
+                if len(ints) > 0 and ints[-1] == frames.shape[0] - 1:
+                    ints = ints[:-1]
+                th_all = np.zeros_like(frames[0])
+                # We find all affected pixels
+                for idx in ints:
+                    img = np.copy(frames[idx])
+                    blur = cv2.GaussianBlur(img, (5, 5), 0)
+                    ret, th = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
+                    th = cv2.GaussianBlur(th, (5, 5), 10)
+                    th_all += th
+                # Compute the average image of the frame prior and after the interval
+                vals = np.mean(np.dstack([frames[ints[0] - 1], frames[ints[-1] + 1]]), axis=-1)
+                # For each frame set the affected pixels to the value of the clean average image
+                for idx in ints:
+                    img = frames[idx]
+                    img[th_all > 0] = vals[th_all > 0]
+
+            # If we have read in extra video frames we need to cut these off and make sure we only
+            # consider the frames between the interval first and last given as args
+            if before_status:
+                frames = frames[n_before_offset:]
+            if after_status:
+                frames = frames[:(-1 * n_after_offset)]
+
+        # Once the frames have been cleaned we compute the motion energy between frames
+        frame_me, _ = video.motion_energy(frames, diff=2, normalize=False)
+
+        cap.release()
+
+        return frame_me[2:]
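A minimal sketch of the motion-energy idea used above: mean absolute difference between frames `diff` apart (the pipeline itself calls brainbox's `video.motion_energy`; this only mirrors the concept).

```python
import numpy as np

def motion_energy(frames, diff=2):
    # absolute difference between frames `diff` apart, averaged over pixels
    d = np.abs(frames[diff:].astype(float) - frames[:-diff].astype(float))
    return d.reshape(d.shape[0], -1).mean(axis=1)

frames = np.zeros((6, 2, 2))
frames[3] = 4.0                                      # a single bright frame
me = motion_energy(frames)
assert me.shape == (4,)
assert me[1] == 4.0 and me[3] == 4.0                 # frame 3 differs from frames 1 and 5
```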
+
+
+
+[docs]
+    def compute_shifts(self, times, me, first, last, iw, wg):
+        """
+        Compute the cross-correlation between the video motion energy and the wheel velocity to find the mismatch
+        between the camera ttls and the video frames. This function is written to run in a parallel manner using
+        joblib.parallel
+
+        :param times: the times of the video frames across the whole session (ttls)
+        :param me: the video motion energy computed across the whole session
+        :param first: first time idx to consider
+        :param last: last time idx to consider
+        :param wg: WindowGenerator
+        :param iw: iteration of the WindowGenerator
+        :return:
+        """
+
+        # If we are in the last window we exit
+        if iw == wg.nwin - 1:
+            return np.nan, np.nan
+
+        # Find the time interval we are interested in
+        t_first = times[first]
+        t_last = times[last]
+
+        # If both times during this interval are nan, exit
+        if np.isnan(t_last) and np.isnan(t_first):
+            return np.nan, np.nan
+        # If only the last time is nan, we find the last non-nan time value
+        elif np.isnan(t_last):
+            t_last = times[np.where(~np.isnan(times))[0][-1]]
+
+        # Find the mask of timepoints that fall in this interval
+        mask = np.logical_and(times >= t_first, times <= t_last)
+        # Restrict the video motion energy to this interval and normalise the values
+        align_me = me[np.where(mask)[0]]
+        align_me = (align_me - np.nanmin(align_me)) / (np.nanmax(align_me) - np.nanmin(align_me))
+
+        # Find closest timepoints in wheel that match the time interval
+        wh_mask = np.logical_and(self.wheel_time >= t_first, self.wheel_time <= t_last)
+        if np.sum(wh_mask) == 0:
+            return np.nan, np.nan
+        # Find the mask for the wheel times
+        xs = np.searchsorted(self.wheel_time[wh_mask], times[mask])
+        xs[xs == np.sum(wh_mask)] = np.sum(wh_mask) - 1
+        # Convert to normalized speed
+        vs = np.abs(self.wheel_vel[wh_mask][xs])
+        vs = (vs - np.min(vs)) / (np.max(vs) - np.min(vs))
+
+        # Account for nan values in the video motion energy
+        isnan = np.isnan(align_me)
+        if np.sum(isnan) > 0:
+            where_nan = np.where(isnan)[0]
+            assert where_nan[0] == 0
+            assert where_nan[-1] == np.sum(isnan) - 1
+
+        if np.all(isnan):
+            return np.nan, np.nan
+
+        # Compute the cross correlation between the video motion energy and the wheel speed
+        xcorr = signal.correlate(align_me[~isnan], vs[~isnan])
+        # The max value of the cross correlation indicates the shift that needs to be applied
+        # The +2 comes from the fact that the video motion energy was computed from the difference between frames
+        shift = np.nanargmax(xcorr) - align_me[~isnan].size + 2
+
+        return shift, t_first + (t_last - t_first) / 2
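The lag estimate at the heart of `compute_shifts`, isolated: the peak of the full cross-correlation gives the offset of one signal relative to the other (the +2 frame-difference correction is omitted in this sketch).

```python
import numpy as np

def estimate_shift(a, b):
    """Lag of `a` relative to `b` from the full cross-correlation peak."""
    xcorr = np.correlate(a, b, mode='full')
    return int(np.argmax(xcorr)) - (a.size - 1)      # index N-1 is zero lag

rng = np.random.default_rng(0)
b = rng.standard_normal(200)
a = np.roll(b, 5)                                    # a is b delayed by 5 samples
assert estimate_shift(b, b) == 0
assert estimate_shift(a, b) == 5
```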
+
+
+
+[docs]
+    def clean_shifts(self, x, n=1):
+        """
+        Removes artefacts from the computed shifts across time. We assume that the shifts should never increase
+        over time and that the jump between consecutive shifts shouldn't be greater than 1
+        :param x: computed shifts
+        :param n: condition to apply
+        :return:
+        """
+        y = x.copy()
+        dy = np.diff(y, prepend=y[0])
+        while True:
+            pos = np.where(dy == 1)[0] if n == 1 else np.where(dy > 2)[0]
+            # added frames: this doesn't make sense and this is noise
+            if pos.size == 0:
+                break
+            neg = np.where(dy == -1)[0] if n == 1 else np.where(dy < -2)[0]
+
+            if len(pos) > len(neg):
+                neg = np.append(neg, dy.size - 1)
+
+            iss = np.minimum(np.searchsorted(neg, pos), neg.size - 1)
+            imin = np.argmin(np.minimum(np.abs(pos - neg[iss - 1]), np.abs(pos - neg[iss])))
+
+            idx = np.max([0, iss[imin] - 1])
+            ineg = neg[idx:iss[imin] + 1]
+            ineg = ineg[np.argmin(np.abs(pos[imin] - ineg))]
+            dy[pos[imin]] = 0
+            dy[ineg] = 0
+
+        return np.cumsum(dy) + y[0]
+
+
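The core trick in `clean_shifts` is to edit the first differences of the shift trace and then rebuild it with a cumulative sum. A toy sketch of one such cancellation, using a hypothetical `cancel_jump_pair` helper that removes a single spurious +1/-1 jump pair:

```python
import numpy as np

def cancel_jump_pair(x):
    # Zero out the first +1 jump and its nearest -1 partner in the first
    # differences, then rebuild the series: cumsum(diff) + x[0] recovers x,
    # so zeroing a matched pair of jumps removes the blip without shifting
    # the rest of the trace.
    dy = np.diff(x, prepend=x[0])
    pos = np.where(dy == 1)[0]
    neg = np.where(dy == -1)[0]
    if pos.size and neg.size:
        dy[pos[0]] = 0
        dy[neg[np.argmin(np.abs(neg - pos[0]))]] = 0
    return np.cumsum(dy) + x[0]

shifts = np.array([-2, -2, -2, -1, -2, -2, -3, -3])  # +1/-1 blip at index 3
print(cancel_jump_pair(shifts))
```

The real method loops until no offending jumps remain and supports a second pass (`n=2`) with a larger jump threshold.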
+
+[docs]
+    def qc_shifts(self, shifts, shifts_filt):
+        """
+        Compute qc values for the wheel alignment. We consider 4 things
+        1. The number of camera ttl values that are missing (when we have fewer ttls than video frames)
+        2. The number of shifts that have nan values, which indicates that the video motion energy
+           could not be computed for those windows
+        3. The number of large jumps (>10) between the computed shifts
+        4. The number of jumps (>1) between the shifts after they have been cleaned
+
+        :param shifts: np.array of shifts over session
+        :param shifts_filt: np.array of shifts after being cleaned over session
+        :return: dict of qc values and bool qc outcome
+        """
+
+        ttl_per = (np.abs(self.tdiff) / self.camera_meta['length']) * 100 if self.tdiff < 0 else 0
+        nan_per = (np.sum(np.isnan(shifts_filt)) / shifts_filt.size) * 100
+        shifts_sum = np.where(np.abs(np.diff(shifts)) > 10)[0].size
+        shifts_filt_sum = np.where(np.abs(np.diff(shifts_filt)) > 1)[0].size
+
+        qc = dict()
+        qc['ttl_per'] = ttl_per
+        qc['nan_per'] = nan_per
+        qc['shifts_sum'] = shifts_sum
+        qc['shifts_filt_sum'] = shifts_filt_sum
+
+        qc_outcome = True
+        # If more than 10% of ttls are missing we don't get new times
+        if ttl_per > 10:
+            qc_outcome = False
+        # If too many of the shifts are nans it means the alignment is not accurate
+        if nan_per > 40:
+            qc_outcome = False
+        # If there are too many artefacts there could be errors
+        if shifts_sum > 60:
+            qc_outcome = False
+        # If there are jumps > 1 in the filtered shifts then there is a problem
+        if shifts_filt_sum > 0:
+            qc_outcome = False
+
+        return qc, qc_outcome
+
+
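A hedged sketch of how these thresholds behave on synthetic shifts (the TTL-based criterion is omitted here because it depends on session metadata; the arrays below are made up for illustration):

```python
import numpy as np

# Synthetic raw shifts with one brief excursion (two >10 jumps at its edges)
shifts = np.r_[np.zeros(50), np.full(5, 20.0), np.zeros(45)]
# Synthetic cleaned shifts: 10% NaN at the start, no jumps > 1
shifts_filt = np.r_[np.full(10, np.nan), np.zeros(90)]

nan_per = np.sum(np.isnan(shifts_filt)) / shifts_filt.size * 100
shifts_sum = np.where(np.abs(np.diff(shifts)) > 10)[0].size
shifts_filt_sum = np.where(np.abs(np.diff(shifts_filt)) > 1)[0].size

# Thresholds as in qc_shifts: <=40% NaN, <=60 raw jumps, zero cleaned jumps
qc_outcome = nan_per <= 40 and shifts_sum <= 60 and shifts_filt_sum == 0
print(nan_per, shifts_sum, shifts_filt_sum, qc_outcome)
```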
+
+[docs]
+    def extract_times(self, shifts_filt, t_shifts):
+        """
+        Extracts new camera times after applying the computed shifts across the session
+
+        :param shifts_filt: filtered shifts computed across session
+        :param t_shifts: time points of computed shifts
+        :return: new camera times
+        """
+
+        # Compute the interpolation function to apply to the ttl times
+        t_new = t_shifts - (shifts_filt * 1 / self.frate)
+        fcn = interpolate.interp1d(t_shifts, t_new, fill_value="extrapolate")
+        # Apply the function and get out new times
+        new_times = fcn(self.ttl_times)
+
+        # If we are missing ttls then interpolate and append the correct number at the end
+        if self.tdiff < 0:
+            to_app = (np.arange(np.abs(self.tdiff),) + 1) / self.frate + new_times[-1]
+            new_times = np.r_[new_times, to_app]
+
+        return new_times
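The time-correction step can be sketched with toy numbers: each shift (in frames) is converted to seconds via the frame rate, an interpolant is built from the shift timepoints to the corrected times, and the TTL times are then passed through it. All values below are illustrative, not real session data:

```python
import numpy as np
from scipy import interpolate

frate = 60.0                                     # assumed frame rate (Hz)
t_shifts = np.array([0.0, 10.0, 20.0, 30.0])     # timepoints of computed shifts (s)
shifts_filt = np.array([0.0, 6.0, 6.0, 12.0])    # lag in frames at each timepoint

# Shifted timepoints: subtract the lag converted from frames to seconds
t_new = t_shifts - shifts_filt / frate
fcn = interpolate.interp1d(t_shifts, t_new, fill_value="extrapolate")

ttl_times = np.array([5.0, 15.0, 25.0, 35.0])    # the last value is extrapolated
new_times = fcn(ttl_times)
print(new_times)
```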
+[docs]
+    def plot_with_behavior(self):
+        """
+        Makes a summary figure of the alignment when behaviour data is available
+        :return:
+        """
+
+        self.dlc = likelihood_threshold(self.dlc)
+        trial_idx, dividers = find_trial_ids(self.trials, sort='side')
+        feature_ext = get_speed(self.dlc, self.camera_times, self.label, feature='paw_r')
+        feature_new = get_speed(self.dlc, self.new_times, self.label, feature='paw_r')
+
+        fig = plt.figure()
+        fig.set_size_inches(15, 9)
+        gs = gridspec.GridSpec(1, 5, figure=fig, width_ratios=[4, 1, 1, 1, 3], wspace=0.3, hspace=0.5)
+        gs0 = gridspec.GridSpecFromSubplotSpec(3, 1, subplot_spec=gs[0, 0])
+        ax01 = fig.add_subplot(gs0[0, 0])
+        ax02 = fig.add_subplot(gs0[1, 0])
+        ax03 = fig.add_subplot(gs0[2, 0])
+        gs1 = gridspec.GridSpecFromSubplotSpec(2, 1, subplot_spec=gs[0, 1], height_ratios=[1, 3])
+        ax11 = fig.add_subplot(gs1[0, 0])
+        ax12 = fig.add_subplot(gs1[1, 0])
+        gs2 = gridspec.GridSpecFromSubplotSpec(2, 1, subplot_spec=gs[0, 2], height_ratios=[1, 3])
+        ax21 = fig.add_subplot(gs2[0, 0])
+        ax22 = fig.add_subplot(gs2[1, 0])
+        gs3 = gridspec.GridSpecFromSubplotSpec(2, 1, subplot_spec=gs[0, 3], height_ratios=[1, 3])
+        ax31 = fig.add_subplot(gs3[0, 0])
+        ax32 = fig.add_subplot(gs3[1, 0])
+        gs4 = gridspec.GridSpecFromSubplotSpec(2, 1, subplot_spec=gs[0, 4])
+        ax41 = fig.add_subplot(gs4[0, 0])
+        ax42 = fig.add_subplot(gs4[1, 0])
+
+        ax01.plot(self.t_shifts, self.shifts, label='shifts')
+        ax01.plot(self.t_shifts, self.shifts_filt, label='shifts_filt')
+        ax01.set_ylim(np.min(self.shifts_filt) - 10, np.max(self.shifts_filt) + 10)
+        ax01.legend()
+        ax01.set_ylabel('Frames')
+        ax01.set_xlabel('Time in session')
+
+        xs = np.searchsorted(self.ttl_times, self.t_shifts)
+        ttl_diff = (self.times - self.camera_times)[xs] * self.camera_meta['fps']
+        ax02.plot(self.t_shifts, ttl_diff, label='extracted - ttl')
+        ax02.set_ylim(np.min(ttl_diff) - 10, np.max(ttl_diff) + 10)
+        ax02.legend()
+        ax02.set_ylabel('Frames')
+        ax02.set_xlabel('Time in session')
+
+        ax03.plot(self.camera_times, (self.camera_times - self.new_times) * self.camera_meta['fps'], 'k', label='extracted - new')
+        ax03.legend()
+        ax03.set_ylim(-5, 5)
+        ax03.set_ylabel('Frames')
+        ax03.set_xlabel('Time in session')
+
+        self.single_cluster_raster(self.wheel_timestamps, self.trials['firstMovement_times'].values, trial_idx, dividers,
+                                   ['g', 'y'], ['left', 'right'], weights=self.wheel_vel, fr=False, axs=[ax11, ax12])
+        ax11.sharex(ax12)
+        ax11.set_ylabel('Wheel velocity')
+        ax11.set_title('Wheel')
+        ax12.set_xlabel('Time from first move')
+
+        self.single_cluster_raster(self.camera_times, self.trials['firstMovement_times'].values, trial_idx, dividers, ['g', 'y'],
+                                   ['left', 'right'], weights=feature_ext, fr=False, axs=[ax21, ax22])
+        ax21.sharex(ax22)
+        ax21.set_ylabel('Paw r velocity')
+        ax21.set_title('Extracted times')
+        ax22.set_xlabel('Time from first move')
+
+        self.single_cluster_raster(self.new_times, self.trials['firstMovement_times'].values, trial_idx, dividers, ['g', 'y'],
+                                   ['left', 'right'], weights=feature_new, fr=False, axs=[ax31, ax32])
+        ax31.sharex(ax32)
+        ax31.set_ylabel('Paw r velocity')
+        ax31.set_title('New times')
+        ax32.set_xlabel('Time from first move')
+
+        ax41.imshow(self.frame_example[0])
+        rect = matplotlib.patches.Rectangle((self.roi[1][1], self.roi[0][0]), self.roi[1][0] - self.roi[1][1],
+                                            self.roi[0][1] - self.roi[0][0], linewidth=4, edgecolor='g', facecolor='none')
+        ax41.add_patch(rect)
+
+        ax42.plot(self.all_me)
+
+        return fig
+
+
+
+[docs]
+    def plot_without_behavior(self):
+        """
+        Makes a summary figure of the alignment when behaviour data is not available
+        :return:
+        """
+
+        fig = plt.figure()
+        fig.set_size_inches(7, 7)
+        gs = gridspec.GridSpec(1, 2, figure=fig)
+        gs0 = gridspec.GridSpecFromSubplotSpec(3, 1, subplot_spec=gs[0, 0])
+        ax01 = fig.add_subplot(gs0[0, 0])
+        ax02 = fig.add_subplot(gs0[1, 0])
+        ax03 = fig.add_subplot(gs0[2, 0])
+
+        gs1 = gridspec.GridSpecFromSubplotSpec(2, 1, subplot_spec=gs[0, 1])
+        ax04 = fig.add_subplot(gs1[0, 0])
+        ax05 = fig.add_subplot(gs1[1, 0])
+
+        ax01.plot(self.t_shifts, self.shifts, label='shifts')
+        ax01.plot(self.t_shifts, self.shifts_filt, label='shifts_filt')
+        ax01.set_ylim(np.min(self.shifts_filt) - 10, np.max(self.shifts_filt) + 10)
+        ax01.legend()
+        ax01.set_ylabel('Frames')
+        ax01.set_xlabel('Time in session')
+
+        xs = np.searchsorted(self.ttl_times, self.t_shifts)
+        ttl_diff = (self.times - self.camera_times)[xs] * self.camera_meta['fps']
+        ax02.plot(self.t_shifts, ttl_diff, label='extracted - ttl')
+        ax02.set_ylim(np.min(ttl_diff) - 10, np.max(ttl_diff) + 10)
+        ax02.legend()
+        ax02.set_ylabel('Frames')
+        ax02.set_xlabel('Time in session')
+
+        ax03.plot(self.camera_times, (self.camera_times - self.new_times) * self.camera_meta['fps'], 'k', label='extracted - new')
+        ax03.legend()
+        ax03.set_ylim(-5, 5)
+        ax03.set_ylabel('Frames')
+        ax03.set_xlabel('Time in session')
+
+        ax04.imshow(self.frame_example[0])
+        rect = matplotlib.patches.Rectangle((self.roi[1][1], self.roi[0][0]), self.roi[1][0] - self.roi[1][1],
+                                            self.roi[0][1] - self.roi[0][0], linewidth=4, edgecolor='g', facecolor='none')
+        ax04.add_patch(rect)
+
+        ax05.plot(self.all_me)
+
+        return fig
+
+
+
+[docs]
+    def process(self):
+        """
+        Main function used to apply the video motion wheel alignment to the camera times. This function does the
+        following
+        1. Computes the video motion energy across the whole session (computed in windows and parallelised)
+        2. Computes the shift that should be applied to the camera times across the whole session by computing
+           the cross correlation between the video motion energy and the wheel speed (computed in
+           overlapping windows and parallelised)
+        3. Removes artefacts from the computed shifts
+        4. Computes the qc for the wheel alignment
+        5. Extracts the new camera times using the shifts computed from the video wheel alignment
+        6. If upload is True, creates a summary plot of the alignment and uploads the figure to the relevant session
+           on alyx
+
+        :return:
+        """
+
+        # Compute the motion energy of the wheel for the whole video
+        wg = WindowGenerator(self.camera_meta['length'], 5000, 4)
+        out = Parallel(n_jobs=self.nprocess)(
+            delayed(self.compute_motion_energy)(first, last, wg, iw) for iw, (first, last) in enumerate(wg.firstlast))
+        # Concatenate the motion energy into one big array
+        self.all_me = np.array([])
+        for vals in out[:-1]:
+            self.all_me = np.r_[self.all_me, vals]
+
+        toverlap = self.twin - 1
+        all_me = np.r_[np.full((int(self.camera_meta['fps'] * toverlap)), np.nan), self.all_me]
+        to_app = self.times[0] - ((np.arange(int(self.camera_meta['fps'] * toverlap),) + 1) / self.frate)[::-1]
+        times = np.r_[to_app, self.times]
+
+        wg = WindowGenerator(all_me.size - 1, int(self.camera_meta['fps'] * self.twin), int(self.camera_meta['fps'] * toverlap))
+
+        out = Parallel(n_jobs=1)(delayed(self.compute_shifts)(times, all_me, first, last, iw, wg)
+                                 for iw, (first, last) in enumerate(wg.firstlast))
+
+        self.shifts = np.array([])
+        self.t_shifts = np.array([])
+        for vals in out[:-1]:
+            self.shifts = np.r_[self.shifts, vals[0]]
+            self.t_shifts = np.r_[self.t_shifts, vals[1]]
+
+        idx = np.bitwise_and(self.t_shifts >= self.ttl_times[0], self.t_shifts < self.ttl_times[-1])
+        self.shifts = self.shifts[idx]
+        self.t_shifts = self.t_shifts[idx]
+        shifts_filt = ndimage.percentile_filter(self.shifts, 80, 120)
+        shifts_filt = self.clean_shifts(shifts_filt, n=1)
+        self.shifts_filt = self.clean_shifts(shifts_filt, n=2)
+
+        self.qc, self.qc_outcome = self.qc_shifts(self.shifts, self.shifts_filt)
+
+        self.new_times = self.extract_times(self.shifts_filt, self.t_shifts)
+
+        if self.upload:
+            fig = self.plot_with_behavior() if self.behavior else self.plot_without_behavior()
+            save_fig_path = Path(self.session_path.joinpath('snapshot', 'video', f'video_wheel_alignment_{self.label}.png'))
+            save_fig_path.parent.mkdir(exist_ok=True, parents=True)
+            fig.savefig(save_fig_path)
+            snp = ReportSnapshot(self.session_path, self.eid, content_type='session', one=self.one)
+            snp.outputs = [save_fig_path]
+            snp.register_images(widths=['orig'])
+            plt.close(fig)
+
+        return self.new_times
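Step 3 above smooths the raw shifts with a rolling percentile filter before the jump-cancelling passes. A small sketch of why an 80th-percentile filter suppresses brief dips while preserving the dominant shift level (the window size here is illustrative; `process` uses a much larger one):

```python
import numpy as np
from scipy import ndimage

shifts = np.zeros(50)
shifts[20:23] = -8   # a brief spurious dip in the shift estimates

# In each 9-sample window at most 3 values belong to the dip, so the 80th
# percentile of the window is still the baseline value and the dip vanishes.
filt = ndimage.percentile_filter(shifts, 80, size=9)
print(filt.min())
```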
[docs]
-def get_collections(sess_params):
+def get_collections(sess_params, flat=False):
     """
     Find all collections associated with the session.
@@ -545,12 +545,17 @@
Source code for ibllib.io.session_params
     Parameters
     ----------
     sess_params : dict
         The loaded experiment description map.
+    flat : bool (False)
+        If True, return a flat list of unique collections, otherwise return a map of device/sync/task.
 
     Returns
     -------
     dict[str, str]
         A map of device/sync/task and the corresponding collection name.
+    list[str]
+        A flat list of unique collection names.
+
     Notes
     -----
     Assumes only the following data types contained: list, dict, None, str.
@@ -563,12 +568,27 @@
             for d in filter(lambda x: isinstance(x, dict), v):
                 iter_dict(d)
         elif isinstance(v, dict) and 'collection' in v:
-            collection_map[k] = v['collection']
+            # if the key already exists, append the collection name to the list
+            if k in collection_map:
+                clist = collection_map[k] if isinstance(collection_map[k], list) else [collection_map[k]]
+                collection_map[k] = list(set(clist + [v['collection']]))
+            else:
+                collection_map[k] = v['collection']
         elif isinstance(v, dict):
             iter_dict(v)
     iter_dict(sess_params)
-    return collection_map
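The merge behaviour added above can be sketched on its own: when a device/task key is seen with several collections, the map value is promoted to a de-duplicated list. The key/collection pairs below are made-up examples:

```python
collection_map = {}
pairs = [('task', 'raw_task_data_00'),
         ('task', 'raw_task_data_01'),
         ('task', 'raw_task_data_00'),   # duplicate, collapsed by set()
         ('sync', 'raw_sync_data')]

for k, collection in pairs:
    if k in collection_map:
        # Promote a scalar to a list, then merge and de-duplicate
        clist = collection_map[k] if isinstance(collection_map[k], list) else [collection_map[k]]
        collection_map[k] = list(set(clist + [collection]))
    else:
        collection_map[k] = collection

print(collection_map)
```

Note the mixed value types (str for a single collection, list for several), which is why callers may prefer the `flat=True` form.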
     # won't be preserved by create_basic_transfer_params by default
     remote = False if remote is False else params['REMOTE_DATA_FOLDER_PATH']
-    # THis is in the docstring but still, if the session Path is absolute, we need to make it relative
+    # This is in the docstring but still, if the session Path is absolute, we need to make it relative
     if Path(session_path).is_absolute():
         session_path = Path(*session_path.parts[-3:])
diff --git a/_modules/ibllib/oneibl/data_handlers.html b/_modules/ibllib/oneibl/data_handlers.html
index 5b6c5fba..3de68813 100644
--- a/_modules/ibllib/oneibl/data_handlers.html
+++ b/_modules/ibllib/oneibl/data_handlers.html
@@ -265,7 +265,8 @@
Source code for ibllib.oneibl.data_handlers
        # For cortex lab we need to get the endpoint from the ibl alyx
        if self.lab == 'cortexlab':
-            self.globus.add_endpoint(f'flatiron_{self.lab}', alyx=ONE(base_url='https://alyx.internationalbrainlab.org').alyx)
+            alyx = AlyxClient(base_url='https://alyx.internationalbrainlab.org', cache_rest=None)
+            self.globus.add_endpoint(f'flatiron_{self.lab}', alyx=alyx)
        else:
            self.globus.add_endpoint(f'flatiron_{self.lab}', alyx=self.one.alyx)
@@ -276,21 +277,19 @@
    def setUp(self):
        """Function to download necessary data to run tasks using globus-sdk."""
        if self.lab == 'cortexlab':
-            one = ONE(base_url='https://alyx.internationalbrainlab.org')
-            df = super().getData(one=one)
+            df = super().getData(one=ONE(base_url='https://alyx.internationalbrainlab.org'))
        else:
-            one = self.one
-            df = super().getData()
+            df = super().getData(one=self.one)

        if len(df) == 0:
-            # If no datasets found in the cache only work off local file system do not attempt to download any missing data
-            # using globus
+            # If no datasets found in the cache only work off local file system do not attempt to
+            # download any missing data using Globus
            return

        # Check for space on local server. If less than 500 GB don't download new data
        space_free = shutil.disk_usage(self.globus.endpoints['local']['root_path'])[2]
        if space_free < 500e9:
-            _logger.warning('Space left on server is < 500GB, wont redownload new data')
+            _logger.warning('Space left on server is < 500GB, won\'t re-download new data')
            return

        rel_sess_path = '/'.join(df.iloc[0]['session_path'].split('/')[-3:])
@@ -332,7 +331,7 @@
[docs]
    def cleanUp(self):
-        """Clean up, remove the files that were downloaded from globus once task has completed."""
+        """Clean up, remove the files that were downloaded from Globus once task has completed."""
        for file in self.local_paths:
            os.unlink(file)
        # Read in the experiment description file if it exists and get projects and procedures from here
        experiment_description_file = session_params.read_params(ses_path)
+        _, subject, date, number, *_ = folder_parts(ses_path)
        if experiment_description_file is None:
            collections = ['raw_behavior_data']
        else:
-            projects = experiment_description_file.get('projects', projects)
-            procedures = experiment_description_file.get('procedures', procedures)
-            collections = ensure_list(session_params.get_task_collection(experiment_description_file))
-
-        # read meta data from the rig for the session from the task settings file
-        task_data = (raw.load_bpod(ses_path, collection) for collection in sorted(collections))
-        # Filter collections where settings file was not found
-        if not (task_data := list(zip(*filter(lambda x: x[0] is not None, task_data)))):
-            raise ValueError(f'_iblrig_taskSettings.raw.json not found in {ses_path} Abort.')
-        settings, task_data = task_data
-        if len(settings) != len(collections):
-            raise ValueError(f'_iblrig_taskSettings.raw.json not found in {ses_path} Abort.')
-
-        # Do some validation
-        _, subject, date, number, *_ = folder_parts(ses_path)
-        assert len({x['SUBJECT_NAME'] for x in settings}) == 1 and settings[0]['SUBJECT_NAME'] == subject
-        assert len({x['SESSION_DATE'] for x in settings}) == 1 and settings[0]['SESSION_DATE'] == date
-        assert len({x['SESSION_NUMBER'] for x in settings}) == 1 and settings[0]['SESSION_NUMBER'] == number
-        assert len({x['IS_MOCK'] for x in settings}) == 1
-        assert len({md['PYBPOD_BOARD'] for md in settings}) == 1
-        assert len({md.get('IBLRIG_VERSION') for md in settings}) == 1
-        # assert len({md['IBLRIG_VERSION_TAG'] for md in settings}) == 1
+            # Combine input projects/procedures with those in experiment description
+            projects = list({*experiment_description_file.get('projects', []), *(projects or [])})
+            procedures = list({*experiment_description_file.get('procedures', []), *(procedures or [])})
+            collections = session_params.get_task_collection(experiment_description_file)

        # query Alyx endpoints for subject, error if not found
        subject = self.assert_exists(subject, 'subjects')
@@ -324,31 +307,62 @@
Source code for ibllib.oneibl.registration
            date_range=date, number=number, details=True, query_type='remote')

-        users = []
-        for user in filter(None, map(lambda x: x.get('PYBPOD_CREATOR'), settings)):
-            user = self.assert_exists(user[0], 'users')  # user is list of [username, uuid]
-            users.append(user['username'])
-
-        # extract information about session duration and performance
-        start_time, end_time = _get_session_times(str(ses_path), settings, task_data)
-        n_trials, n_correct_trials = _get_session_performance(settings, task_data)
-
-        # TODO Add task_protocols to Alyx sessions endpoint
-        task_protocols = [md['PYBPOD_PROTOCOL'] + md['IBLRIG_VERSION_TAG'] for md in settings]
-        # unless specified label the session projects with subject projects
-        projects = subject['projects'] if projects is None else projects
-        # makes sure projects is a list
-        projects = [projects] if isinstance(projects, str) else projects
-
-        # unless specified label the session procedures with task protocol lookup
-        procedures = procedures or list(set(filter(None, map(self._alyx_procedure_from_task, task_protocols))))
-        procedures = [procedures] if isinstance(procedures, str) else procedures
-        json_fields_names = ['IS_MOCK', 'IBLRIG_VERSION']
-        json_field = {k: settings[0].get(k) for k in json_fields_names}
-        # The poo count field is only updated if the field is defined in at least one of the settings
-        poo_counts = [md.get('POOP_COUNT') for md in settings if md.get('POOP_COUNT') is not None]
-        if poo_counts:
-            json_field['POOP_COUNT'] = int(sum(poo_counts))
+        if collections is None:  # No task data
+            assert len(session) != 0, 'no session on Alyx and no tasks in experiment description'
+            # Fetch the full session JSON and assert that some basic information is present.
+            # Basically refuse to extract the data if key information is missing
+            session_details = self.one.alyx.rest('sessions', 'read', id=session_id[0], no_cache=True)
+            required = ('location', 'start_time', 'lab', 'users')
+            missing = [k for k in required if not session_details[k]]
+            assert not any(missing), 'missing session information: ' + ', '.join(missing)
+            task_protocols = task_data = settings = []
+            json_field = None
+            users = session_details['users']
+        else:  # Get session info from task data
+            collections = ensure_list(collections)
+            # read meta data from the rig for the session from the task settings file
+            task_data = (raw.load_bpod(ses_path, collection) for collection in sorted(collections))
+            # Filter collections where settings file was not found
+            if not (task_data := list(zip(*filter(lambda x: x[0] is not None, task_data)))):
+                raise ValueError(f'_iblrig_taskSettings.raw.json not found in {ses_path} Abort.')
+            settings, task_data = task_data
+            if len(settings) != len(collections):
+                raise ValueError(f'_iblrig_taskSettings.raw.json not found in {ses_path} Abort.')
+
+            # Do some validation
+            assert len({x['SUBJECT_NAME'] for x in settings}) == 1 and settings[0]['SUBJECT_NAME'] == subject['nickname']
+            assert len({x['SESSION_DATE'] for x in settings}) == 1 and settings[0]['SESSION_DATE'] == date
+            assert len({x['SESSION_NUMBER'] for x in settings}) == 1 and settings[0]['SESSION_NUMBER'] == number
+            assert len({x['IS_MOCK'] for x in settings}) == 1
+            assert len({md['PYBPOD_BOARD'] for md in settings}) == 1
+            assert len({md.get('IBLRIG_VERSION') for md in settings}) == 1
+            # assert len({md['IBLRIG_VERSION_TAG'] for md in settings}) == 1
+
+            users = []
+            for user in filter(None, map(lambda x: x.get('PYBPOD_CREATOR'), settings)):
+                user = self.assert_exists(user[0], 'users')  # user is list of [username, uuid]
+                users.append(user['username'])
+
+            # extract information about session duration and performance
+            start_time, end_time = _get_session_times(str(ses_path), settings, task_data)
+            n_trials, n_correct_trials = _get_session_performance(settings, task_data)
+
+            # TODO Add task_protocols to Alyx sessions endpoint
+            task_protocols = [md['PYBPOD_PROTOCOL'] + md['IBLRIG_VERSION_TAG'] for md in settings]
+            # unless specified label the session projects with subject projects
+            projects = subject['projects'] if projects is None else projects
+            # makes sure projects is a list
+            projects = [projects] if isinstance(projects, str) else projects
+
+            # unless specified label the session procedures with task protocol lookup
+            procedures = procedures or list(set(filter(None, map(self._alyx_procedure_from_task, task_protocols))))
+            procedures = [procedures] if isinstance(procedures, str) else procedures
+            json_fields_names = ['IS_MOCK', 'IBLRIG_VERSION']
+            json_field = {k: settings[0].get(k) for k in json_fields_names}
+            # The poo count field is only updated if the field is defined in at least one of the settings
+            poo_counts = [md.get('POOP_COUNT') for md in settings if md.get('POOP_COUNT') is not None]
+            if poo_counts:
+                json_field['POOP_COUNT'] = int(sum(poo_counts))

        if not session:  # Create session and weighings
            ses_ = {'subject': subject['nickname'],
@@ -376,9 +390,13 @@
                user = self.one.alyx.user
            self.register_weight(subject['nickname'], md['SUBJECT_WEIGHT'],
                                 date_time=md['SESSION_DATETIME'], user=user)
-        else:  # if session exists update the JSON field
-            session = self.one.alyx.rest('sessions', 'read', id=session_id[0], no_cache=True)
-            self.one.alyx.json_field_update('sessions', session['id'], data=json_field)
+        else:  # if session exists update a few key fields
+            data = {'procedures': procedures, 'projects': projects}
+            if task_protocols:
+                data['task_protocol'] = '/'.join(task_protocols)
+            session = self.one.alyx.rest('sessions', 'partial_update', id=session_id[0], data=data)
+            if json_field:
+                session['json'] = self.one.alyx.json_field_update('sessions', session['id'], data=json_field)

        _logger.info(session['url'] + ' ')

        # create associated water administration if not found
@@ -397,7 +415,8 @@
            return session, None

        # register all files that match the Alyx patterns and file_list
-        rename_files_compatibility(ses_path, settings[0]['IBLRIG_VERSION_TAG'])
+        if any(settings):
+            rename_files_compatibility(ses_path, settings[0]['IBLRIG_VERSION_TAG'])
        F = filter(lambda x: self._register_bool(x.name, file_list), self.find_files(ses_path))
        recs = self.register_files(F, created_by=users[0] if users else None, versions=ibllib.__version__)
        return session, recs
-#!/usr/bin/env python
-# -*- coding:utf-8 -*-
-# @Author: Niccolò Bonacchi
-# @Date: Friday, July 5th 2019, 11:46:37 am
-from ibllib.io.flags import FLAG_FILE_NAMES
+"""IBL preprocessing pipeline.
+
+This module concerns the data extraction and preprocessing for IBL data. The lab servers routinely
+call `local_server.job_creator` to search for new sessions to extract. The job creator registers
+the new session to Alyx (i.e. creates a new session record on the database), if required, then
+deduces a set of tasks (a.k.a. the pipeline [*]_) from the 'experiment.description' file at the
+root of the session (see `dynamic_pipeline.make_pipeline`). If no file exists, one is created,
+inferring the acquisition hardware from the task protocol. The new session's pipeline tasks are
+then registered for another process (or server) to query.
+
+Another process calls `local_server.task_queue` to get a list of queued tasks from Alyx, then
+`local_server.tasks_runner` to loop through the tasks. Each task is run by calling
+`tasks.run_alyx_task` with a dictionary of task information, including the Task class and its
+parameters.
+
+.. [*] A pipeline is a collection of tasks that depend on one another. A pipeline consists of
+   tasks associated with the same session path. Unlike pipelines, tasks are represented in Alyx.
+   A pipeline can be recreated given a list of task dictionaries. The order is defined by the
+   'parents' field of each task.
+
+Notes
+-----
+All new tasks are subclasses of the base_tasks.DynamicTask class. All others are defunct and shall
+be removed in the future.
+"""
-import logging
+"""Task pipeline creation from an acquisition description.
+
+The principal function here is `make_pipeline`, which reads an `_ibl_experiment.description.yaml`
+file and determines the set of tasks required to preprocess the session.
+"""
+import logging
 import re
 from collections import OrderedDict
 from pathlib import Path
@@ -118,7 +123,6 @@
    # Video tasks
    if 'cameras' in devices:
-        video_kwargs = {'device_collection': 'raw_video_data',
-                        'cameras': list(devices['cameras'].keys())}
+        cams = list(devices['cameras'].keys())
+        subset_cams = [c for c in cams if c in ('left', 'right', 'body', 'belly')]
+        video_kwargs = {'device_collection': 'raw_video_data', 'cameras': cams}
        video_compressed = sess_params.get_video_compressed(acquisition_description)

        if video_compressed:
            # This is for widefield case where the video is already compressed
-            tasks[tn] = type((tn := 'VideoConvert'), (vtasks.VideoConvert,), {})(
-                **kwargs, **video_kwargs)
+            tasks[tn] = type((tn := 'VideoConvert'), (vtasks.VideoConvert,), {})(**kwargs, **video_kwargs)
            dlc_parent_task = tasks['VideoConvert']
            tasks[tn] = type((tn := f'VideoSyncQC_{sync}'), (vtasks.VideoSyncQcCamlog,), {})(
                **kwargs, **video_kwargs, **sync_kwargs)
@@ -443,14 +447,32 @@
Source code for ibllib.pipes.dynamic_pipeline
                tasks[tn] = type((tn := f'VideoSyncQC_{sync}'), (vtasks.VideoSyncQcBpod,), {})(
                    **kwargs, **video_kwargs, **sync_kwargs, parents=[tasks['VideoCompress']])
            elif sync == 'nidq':
+                # Here we restrict to videos that we support (left, right or body)
+                video_kwargs['cameras'] = subset_cams
                tasks[tn] = type((tn := f'VideoSyncQC_{sync}'), (vtasks.VideoSyncQcNidq,), {})(
                    **kwargs, **video_kwargs, **sync_kwargs, parents=[tasks['VideoCompress']] + sync_tasks)

        if sync_kwargs['sync'] != 'bpod':
+            # Here we restrict to videos that we support (left, right or body)
+            # Currently there is no plan to run DLC on the belly cam
+            subset_cams = [c for c in cams if c in ('left', 'right', 'body')]
+            video_kwargs['cameras'] = subset_cams
            tasks[tn] = type((tn := 'DLC'), (vtasks.DLC,), {})(
                **kwargs, **video_kwargs, parents=[dlc_parent_task])
-            tasks['PostDLC'] = type('PostDLC', (epp.EphysPostDLC,), {})(
-                **kwargs, parents=[tasks['DLC'], tasks[f'VideoSyncQC_{sync}']])
+
+            # The PostDLC plots require a trials object for QC
+            # Find the first task that outputs a trials.table dataset
+            trials_task = (
+                t for t in tasks.values() if any('trials.table' in f for f in t.signature.get('output_files', []))
+            )
+            if trials_task := next(trials_task, None):
+                parents = [tasks['DLC'], tasks[f'VideoSyncQC_{sync}'], trials_task]
+                trials_collection = getattr(trials_task, 'output_collection', 'alf')
+            else:
+                parents = [tasks['DLC'], tasks[f'VideoSyncQC_{sync}']]
+                trials_collection = 'alf'
+            tasks[tn] = type((tn := 'PostDLC'), (vtasks.EphysPostDLC,), {})(
+                **kwargs, cameras=subset_cams, trials_collection=trials_collection, parents=parents)

    # Audio tasks
    if 'microphone' in devices:
diff --git a/_modules/ibllib/pipes/ephys_alignment.html b/_modules/ibllib/pipes/ephys_alignment.html
index 4c400e43..ee220446 100644
--- a/_modules/ibllib/pipes/ephys_alignment.html
+++ b/_modules/ibllib/pipes/ephys_alignment.html
@@ -110,7 +110,7 @@
-import logging
+"""(Deprecated) Electrophysiology data preprocessing tasks.
+
+These tasks are part of the old pipeline. This module has been replaced by the `ephys_tasks` module
+and the dynamic pipeline.
+"""
+import logging
 import re
 import shutil
 import subprocess
@@ -1463,8 +1468,7 @@
    """
    Computes a coverage volume from
    :param trajs: dictionary of trajectories from Alyx rest endpoint (one.alyx.rest...)
-    :param ba: ibllib.atlas.BrainAtlas instance
+    :param ba: iblatlas.atlas.BrainAtlas instance
    :return: 3D np.array the same size as the volume provided in the brain atlas
    """
    # in um. Coverage = 1 below the first value, 0 after the second, cosine taper in between
diff --git a/_modules/ibllib/pipes/local_server.html b/_modules/ibllib/pipes/local_server.html
index 93d9ed63..6d80bd0c 100644
--- a/_modules/ibllib/pipes/local_server.html
+++ b/_modules/ibllib/pipes/local_server.html
@@ -107,7 +107,13 @@
Source code for ibllib.pipes.local_server
-import time
+"""Lab server pipeline construction and task runner.
+
+This is the module called by the job services on the lab servers. See
+iblscripts/deploy/serverpc/crons for the service scripts that employ this module.
+"""
+import logging
+import time
 from datetime import datetime
 from pathlib import Path
 import pkg_resources
@@ -119,8 +125,7 @@
[docs]
def report_health(one):
    """
-    Get a few indicators and label the json field of the corresponding lab with them
+    Get a few indicators and label the json field of the corresponding lab with them.
    """
    status = {'python_version': sys.version,
              'ibllib_version': pkg_resources.get_distribution("ibllib").version,
@@ -185,9 +192,10 @@
def task_queue(mode='all', lab=None, alyx=None):
    """
    Query waiting jobs from the specified Lab
-    :param mode: Whether to return all waiting tasks, or only small or large (specified in LARGE_TASKS) jobs
-    :param lab: lab name as per Alyx, otherwise try to infer from local globus install
-    :param one: ONE instance
-    -------
+    Parameters
+    ----------
+    mode : {'all', 'small', 'large'}
+        Whether to return all waiting tasks, or only small or large (specified in LARGE_TASKS) jobs.
+    lab : str
+        Lab name as per Alyx, otherwise try to infer from local Globus install.
+    alyx : one.webclient.AlyxClient
+        An Alyx instance.
+
+    Returns
+    -------
+    list of dict
+        A list of Alyx tasks associated with `lab` that have a 'Waiting' status.
    """
    alyx = alyx or AlyxClient(cache_rest=None)
    if lab is None:
@@ -326,14 +343,29 @@
def tasks_runner(subjects_path, tasks_dict, one=None, dry=False, count=5, time_out=None, **kwargs):
    """
    Function to run a list of tasks (task dictionary from Alyx query) on a local server
-    :param subjects_path:
-    :param tasks_dict:
-    :param one:
-    :param dry:
-    :param count: maximum number of tasks to run
-    :param time_out: between each task, if time elapsed is greater than time out, returns (seconds)
-    :param kwargs:
-    :return: list of dataset dictionaries
+
+    Parameters
+    ----------
+    subjects_path : str, pathlib.Path
+        The location of the subject session folders, e.g. '/mnt/s0/Data/Subjects'.
+    tasks_dict : list of dict
+        A list of tasks to run. Typically the output of `task_queue`.
+    one : one.api.OneAlyx
+        An instance of ONE.
+    dry : bool, default=False
+        If true, simply prints the full session paths and task names without running the tasks.
+    count : int, default=5
+        The maximum number of tasks to run from the tasks_dict list.
+    time_out : float, optional
+        The time in seconds to run tasks before exiting. If set this will run tasks until the
+        timeout has elapsed. NB: Only checks between tasks and will not interrupt a running task.
+    **kwargs
+        See ibllib.pipes.tasks.run_alyx_task.
+
+    Returns
+    -------
+    list of pathlib.Path
+        A list of datasets registered to Alyx.
    """
    if one is None:
        one = ONE(cache_rest=None)
diff --git a/_modules/ibllib/pipes/mesoscope_tasks.html b/_modules/ibllib/pipes/mesoscope_tasks.html
index 567f264a..722e8c64 100644
--- a/_modules/ibllib/pipes/mesoscope_tasks.html
+++ b/_modules/ibllib/pipes/mesoscope_tasks.html
@@ -124,6 +124,7 @@
Inputs to suite2p run that deviate from default parameters.
"""
- # Currently only supporting single plane, assert that this is the case
- # FIXME This checks for zstacks but not dual plane mode
- ifnotisinstance(meta['scanImageParams']['hStackManager']['zs'],int):
- raiseNotImplementedError('Multi-plane imaging not yet supported, data seems to be multi-plane')
-
# Computing dx and dy
- cXY=np.array([fov['topLeftDeg']forfovinmeta['FOV']])
+ cXY=np.array([fov['Deg']['topLeft']forfovinmeta['FOV']])cXY-=np.min(cXY,axis=0)nXnYnZ=np.array([fov['nXnYnZ']forfovinmeta['FOV']])
- sW=np.sqrt(np.sum((np.array([fov['topRightDeg']forfovinmeta['FOV']])-np.array(
- [fov['topLeftDeg']forfovinmeta['FOV']]))**2,axis=1))
- sH=np.sqrt(np.sum((np.array([fov['bottomLeftDeg']forfovinmeta['FOV']])-np.array(
- [fov['topLeftDeg']forfovinmeta['FOV']]))**2,axis=1))
+
+ # Currently supporting z-stacks but not dual-plane / volumetric imaging; assert that this is not the case
+ if np.any(nXnYnZ[:, 2] > 1):
+     raise NotImplementedError('Dual-plane imaging not yet supported, data seems to have more than one plane per FOV')
+
+ sW = np.sqrt(np.sum((np.array([fov['Deg']['topRight'] for fov in meta['FOV']]) - np.array(
+     [fov['Deg']['topLeft'] for fov in meta['FOV']])) ** 2, axis=1))
+ sH = np.sqrt(np.sum((np.array([fov['Deg']['bottomLeft'] for fov in meta['FOV']]) - np.array(
+     [fov['Deg']['topLeft'] for fov in meta['FOV']])) ** 2, axis=1))
pixSizeX = nXnYnZ[:, 0] / sW
pixSizeY = nXnYnZ[:, 1] / sH
dx = np.round(cXY[:, 0] * pixSizeX).astype(dtype=np.int32)
dy = np.round(cXY[:, 1] * pixSizeY).astype(dtype=np.int32)
nchannels = len(meta['channelSaved']) if isinstance(meta['channelSaved'], list) else 1
+ # Computing number of unique z-planes (slices in tiff)
+ # FIXME this should work if all FOVs are discrete or if all FOVs are continuous, but may not work for combination of both
+ slice_ids = [fov['slice_id'] for fov in meta['FOV']]
+ nplanes = len(set(slice_ids))
+
+ # Figuring out how many SI Rois we have (one unique ROI may have several FOVs)
+ # FIXME currently unused
+ # roiUUIDs = np.array([fov['roiUUID'] for fov in meta['FOV']])
+ # nrois = len(np.unique(roiUUIDs))
+
db = {
    'data_path': sorted(map(str, self.session_path.glob(f'{self.device_collection}'))),
    'save_path0': str(self.session_path.joinpath('alf')),
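The dx/dy computation in this hunk converts each FOV's top-left corner from degrees into suite2p pixel offsets. Here is a self-contained sketch under invented meta values (the `Deg`/`nXnYnZ` structure follows the diff; the numbers are hypothetical):

```python
import numpy as np

# Two hypothetical 512x512 single-plane FOVs tiled side by side, corners in degrees
meta = {'FOV': [
    {'Deg': {'topLeft': [0., 0.], 'topRight': [1., 0.], 'bottomLeft': [0., 1.]}, 'nXnYnZ': [512, 512, 1]},
    {'Deg': {'topLeft': [1., 0.], 'topRight': [2., 0.], 'bottomLeft': [1., 1.]}, 'nXnYnZ': [512, 512, 1]},
]}
cXY = np.array([fov['Deg']['topLeft'] for fov in meta['FOV']])
cXY -= np.min(cXY, axis=0)  # offsets relative to the top-left-most FOV
nXnYnZ = np.array([fov['nXnYnZ'] for fov in meta['FOV']])
# FOV width and height in degrees (Euclidean distance between corners)
sW = np.sqrt(np.sum((np.array([f['Deg']['topRight'] for f in meta['FOV']])
                     - np.array([f['Deg']['topLeft'] for f in meta['FOV']])) ** 2, axis=1))
sH = np.sqrt(np.sum((np.array([f['Deg']['bottomLeft'] for f in meta['FOV']])
                     - np.array([f['Deg']['topLeft'] for f in meta['FOV']])) ** 2, axis=1))
pixSizeX = nXnYnZ[:, 0] / sW  # pixels per degree
pixSizeY = nXnYnZ[:, 1] / sH
dx = np.round(cXY[:, 0] * pixSizeX).astype(np.int32)
dy = np.round(cXY[:, 1] * pixSizeY).astype(np.int32)
print(dx.tolist(), dy.tolist())  # second FOV starts 512 pixels to the right
```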
@@ -624,13 +641,13 @@
Source code for ibllib.pipes.mesoscope_tasks
'block_size': [128, 128],
'save_mat': True,  # save the data to Fall.mat
'move_bin': True,  # move the binary file to save_path
- 'scalefactor': 1,  # scale manually in x to account for overlap between adjacent ribbons UCL mesoscope
'mesoscan': True,
- 'nplanes': 1,
+ 'nplanes': nplanes,
'nrois': len(meta['FOV']),
'nchannels': nchannels,
'fs': meta['scanImageParams']['hRoiManager']['scanVolumeRate'],
'lines': [list(np.asarray(fov['lineIdx']) - 1) for fov in meta['FOV']],  # subtracting 1 to make 0-based
+ 'slices': slice_ids,  # this tells us which FOV corresponds to which tiff slices
'tau': self.get_default_tau(),  # deduce the GCaMP used from Alyx mouse line (defaults to 1.5; that of GCaMP6s)
'functional_chan': 1,  # for now, eventually find(ismember(meta.channelSaved == meta.channelID.green))
'align_by_chan': 1,  # for now, eventually find(ismember(meta.channelSaved == meta.channelID.red))
@@ -823,13 +840,14 @@
Source code for ibllib.pipes.mesoscope_tasks
Notes
-----
- Once the FOVs have been registered they cannot be updated with with task. Rerunning this
- task will result in an error.
+ - Once the FOVs have been registered they cannot be updated with this task. Rerunning this
+ task will result in an error.
+ - This task modifies the first meta JSON file. All meta files are registered by this task.
"""
# Load necessary data
(filename, collection, _), *_ = self.signature['input_files']
- meta_file = next(self.session_path.glob(f'{collection}/(unknown)'), None)
- meta = alfio.load_file_content(meta_file) or {}
+ meta_files = sorted(self.session_path.glob(f'{collection}/(unknown)'))
+ meta = mesoscope.patch_imaging_meta(alfio.load_file_content(meta_files[0]) or {})
nFOV = len(meta.get('FOV', []))
suffix = None if provenance is Provenance.HISTOLOGY else provenance.name.lower()
@@ -839,7 +857,7 @@
Source code for ibllib.pipes.mesoscope_tasks
mean_image_mlapdv, mean_image_ids = self.project_mlapdv(meta)
# Save the meta data file with new coordinate fields
- with open(meta_file, 'w') as fp:
+ with open(meta_files[0], 'w') as fp:
    json.dump(meta, fp)
# Save the mean image datasets
@@ -868,7 +886,50 @@
+[docs]
+ def update_surgery_json(self, meta, normal_vector):
+"""
+ Update surgery JSON with surface normal vector.
+
+ Adds the key 'surface_normal_unit_vector' to the most recent surgery JSON, containing the
+ provided three element vector. The recorded craniotomy center must match the coordinates
+ in the provided meta file.
+
+ Parameters
+ ----------
+ meta : dict
+ The imaging meta data file containing the 'centerMM' key.
+ normal_vector : array_like
+ A three element unit vector normal to the surface of the craniotomy center.
+
+ Returns
+ -------
+ dict
+ The updated surgery record, or None if no surgeries found.
+ """
+ if not self.one or self.one.offline:
+     _logger.warning('failed to update surgery JSON: ONE offline')
+     return
+ # Update subject JSON with unit normal vector of craniotomy centre (used in histology)
+ subject = self.one.path2ref(self.session_path, parse=False)['subject']
+ surgeries = self.one.alyx.rest('surgeries', 'list', subject=subject, procedure='craniotomy')
+ if not surgeries:
+     _logger.error(f'Surgery not found for subject "{subject}"')
+     return
+ surgery = surgeries[0]  # Check most recent surgery in list
+ center = (meta['centerMM']['ML'], meta['centerMM']['AP'])
+ match = (k for k, v in surgery['json'].items() if
+          str(k).startswith('craniotomy') and np.allclose(v['center'], center))
+ if (key := next(match, None)) is None:
+     _logger.error('Failed to update surgery JSON: no matching craniotomy found')
+     return surgery
+ data = {key: {**surgery['json'][key], 'surface_normal_unit_vector': tuple(normal_vector)}}
+ surgery['json'] = self.one.alyx.json_field_update('subjects', subject, data=data)
+ return surgery
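The craniotomy-matching step inside `update_surgery_json` can be exercised in isolation. Everything below is hypothetical example data; only the generator-plus-walrus matching pattern mirrors the diff:

```python
import numpy as np

# Hypothetical surgery JSON: one craniotomy record plus an unrelated key
surgery_json = {'craniotomy_00': {'center': [2.7, -2.0]}, 'weight': 25.0}
center = (2.7, -2.0)  # centre taken from the imaging meta's 'centerMM'

# Find the first craniotomy key whose recorded centre matches the meta file.
# Short-circuiting `and` means v['center'] is never read for non-craniotomy keys.
match = (k for k, v in surgery_json.items()
         if str(k).startswith('craniotomy') and np.allclose(v['center'], center))
if (key := next(match, None)) is not None:
    # Merge the surface normal vector into the matched craniotomy record
    data = {key: {**surgery_json[key], 'surface_normal_unit_vector': (0., 0., 1.)}}
    print(data)
```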
Returns
-------
- dict of int: numpy.array
+ dict of int : numpy.array
A map of field of view to ROI MLAPDV coordinates.
- dict of int: numpy.array
+ dict of int : numpy.array
A map of field of view to ROI brain location IDs.
"""
all_mlapdv = {}
@@ -982,8 +1043,11 @@
Source code for ibllib.pipes.mesoscope_tasks
slice_counts = Counter(f['roiUUID'] for f in meta.get('FOV', []))
# Create a new stack in Alyx for all stacks containing more than one slice.
# Map of ScanImage ROI UUID to Alyx ImageStack UUID.
- stack_ids = {i: self.one.alyx.rest('imaging-stack', 'create', data={'name': i})['id']
-              for i in slice_counts if slice_counts[i] > 1}
+ if dry:
+     stack_ids = {i: uuid.uuid4() for i in slice_counts if slice_counts[i] > 1}
+ else:
+     stack_ids = {i: self.one.alyx.rest('imaging-stack', 'create', data={'name': i})['id']
+                  for i in slice_counts if slice_counts[i] > 1}
for i, fov in enumerate(meta.get('FOV', [])):
    assert set(fov.keys()) >= {'MLAPDV', 'nXnYnZ', 'roiUUID'}
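The dry-run branch added here substitutes locally generated UUIDs for Alyx-created stack records. A minimal sketch with invented FOV data (only the dry branch is runnable offline; the real branch would POST to Alyx):

```python
import uuid
from collections import Counter

# Hypothetical FOVs: 'roi-a' spans two slices (a stack); 'roi-b' has one
fovs = [{'roiUUID': 'roi-a'}, {'roiUUID': 'roi-a'}, {'roiUUID': 'roi-b'}]
slice_counts = Counter(f['roiUUID'] for f in fovs)

dry = True
if dry:
    # Dry run: placeholder UUIDs, nothing is created on the server
    stack_ids = {i: uuid.uuid4() for i in slice_counts if slice_counts[i] > 1}
else:
    # A real run would create an imaging-stack record on Alyx per multi-slice ROI
    raise NotImplementedError('requires an Alyx connection')

print(sorted(stack_ids))  # only the multi-slice ROI gets a stack ID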
@@ -1108,6 +1172,9 @@
Source code for ibllib.pipes.mesoscope_tasks
# Get the surface normal unit vector of dorsal triangle
normal_vector = surface_normal(dorsal_triangle)
+ # Update the surgery JSON field with normal unit vector, for use in histology alignment
+ self.update_surgery_json(meta, normal_vector)
+
# find the coordDV that sits on the triangular face and has [coordML, coordAP] coordinates;
# the three vertices defining the triangle
face_vertices = points[dorsal_connectivity_list[face_ind, :], :]
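`surface_normal` itself is defined elsewhere in ibllib; a plausible minimal stand-in (cross product of two triangle edges, normalised) shows what the call above computes:

```python
import numpy as np

def surface_normal(triangle):
    """Unit vector normal to the plane of a triangle given as a (3, 3) array
    of vertices (a simplified, hypothetical stand-in for the ibllib helper)."""
    v1 = triangle[1] - triangle[0]
    v2 = triangle[2] - triangle[0]
    n = np.cross(v1, v2)
    return n / np.linalg.norm(n)

# A triangle lying flat in the xy-plane has a normal along +z
tri = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
print(surface_normal(tri))
```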
@@ -1151,13 +1218,6 @@
Source code for ibllib.pipes.mesoscope_tasks
# xx and yy are in mm in coverslip space
points = ((0, fov['nXnYnZ'][0] - 1), (0, fov['nXnYnZ'][1] - 1))
- if 'MM' not in fov:
-     fov['MM'] = {
-         'topLeft': fov.pop('topLeftMM'),
-         'topRight': fov.pop('topRightMM'),
-         'bottomLeft': fov.pop('bottomLeftMM'),
-         'bottomRight': fov.pop('bottomRightMM')
-     }
# The four corners of the FOV, determined by taking the center of the craniotomy in MM,
# the x-y coordinates of the imaging window center (from the tiled reference image) in
# galvanometer units, and the x-y coordinates of the FOV center in galvanometer units.
diff --git a/_modules/ibllib/pipes/misc.html b/_modules/ibllib/pipes/misc.html
index d8624696..eb8559eb 100644
--- a/_modules/ibllib/pipes/misc.html
+++ b/_modules/ibllib/pipes/misc.html
@@ -107,7 +107,8 @@
-#!/usr/bin/env python
-# -*- coding:utf-8 -*-
-# @Author: Niccolò Bonacchi
-# @Date: Thursday, March 28th 2019, 7:53:44 pm
-"""
-Purge data from RIG
+"""
+Purge data from acquisition PC.
+
+Steps:
+
- Find all files by rglob
- Find all sessions of the found files
- Check Alyx if corresponding datasetTypes have been registered as existing
-sessions and files on Flatiron
+  sessions and files on Flatiron
+ - Delete local raw file if found on Flatiron
+ """
import argparse
diff --git a/_modules/ibllib/pipes/tasks.html b/_modules/ibllib/pipes/tasks.html
index bb826604..d180212f 100644
--- a/_modules/ibllib/pipes/tasks.html
+++ b/_modules/ibllib/pipes/tasks.html
@@ -107,7 +107,8 @@
Source code for ibllib.pipes.tasks
-from pathlib import Path
+"""The abstract Pipeline and Task superclasses and concrete task runner."""
+from pathlib import Path
import abc
import logging
import io
@@ -782,22 +783,39 @@
Source code for ibllib.pipes.tasks
def run_alyx_task(tdict=None, session_path=None, one=None, job_deck=None, max_md5_size=None,
                  machine=None, clobber=True, location='server', mode='log'):
    """
- Runs a single Alyx job and registers output datasets
- :param tdict:
- :param session_path:
- :param one:
- :param job_deck: optional list of job dictionaries belonging to the session. Needed
- to check dependency status if the jdict has a parent field. If jdict has a parent and
- job_deck is not entered, will query the database
- :param max_md5_size: in bytes, if specified, will not compute the md5 checksum above a given
- filesize to save time
- :param machine: string identifying the machine the task is run on, optional
- :param clobber: bool, if True any existing logs are overwritten, default is True
- :param location: where you are running the task, 'server' - local lab server, 'remote' - any
- compute node/ computer, 'SDSC' - flatiron compute node, 'AWS' - using data from aws s3
- :param mode: str ('log' or 'raise') behaviour to adopt if an error occured. If 'raise', it
- will Raise the error at the very end of this function (ie. after having labeled the tasks)
- :return:
+ Runs a single Alyx job and registers output datasets.
+
+ Parameters
+ ----------
+ tdict : dict
+ An Alyx task dictionary to instantiate and run.
+ session_path : str, pathlib.Path
+ A session path containing the task input data.
+ one : one.api.OneAlyx
+ An instance of ONE.
+ job_deck : list of dict, optional
+ A list of all tasks in the same pipeline. If None, queries Alyx to get this.
+ max_md5_size : int, optional
+ An optional maximum file size in bytes. Files with sizes larger than this will not have
+ their MD5 checksum calculated to save time.
+ machine : str, optional
+ A string identifying the machine the task is run on.
+ clobber : bool, default=True
+ If true any existing logs are overwritten on Alyx.
+ location : {'remote', 'server', 'sdsc', 'aws'}
+ Where you are running the task, 'server' - local lab server, 'remote' - any
+ compute node/ computer, 'sdsc' - Flatiron compute node, 'aws' - using data from AWS S3
+ node.
+ mode : {'log', 'raise'}, default='log'
+ Behaviour to adopt if an error occurred. If 'raise', it will raise the error at the very
+ end of this function (i.e. after having labeled the tasks).
+
+ Returns
+ -------
+ Task
+ The instantiated task object that was run.
+ list of pathlib.Path
+ A list of registered datasets.
"""
registered_dsets = []
# here we need to check parents' status, get the job_deck if not available
diff --git a/_modules/ibllib/pipes/training_preprocessing.html b/_modules/ibllib/pipes/training_preprocessing.html
index 71efaccb..cfa105ad 100644
--- a/_modules/ibllib/pipes/training_preprocessing.html
+++ b/_modules/ibllib/pipes/training_preprocessing.html
@@ -107,7 +107,13 @@
Source code for ibllib.pipes.training_preprocessing
-importlogging
+"""(Deprecated) Training data preprocessing tasks.
+
+These tasks are part of the old pipeline. This module has been replaced by the dynamic pipeline
+and the `behavior_tasks` module.
+"""
+
+import logging
from collections import OrderedDict
from one.alf.files import session_path_parts
import warnings
diff --git a/_modules/ibllib/pipes/training_status.html b/_modules/ibllib/pipes/training_status.html
index 7a2094d1..3943aba9 100644
--- a/_modules/ibllib/pipes/training_status.html
+++ b/_modules/ibllib/pipes/training_status.html
@@ -742,7 +742,8 @@
Source code for ibllib.pipes.training_status
if len(protocols) > 0 and len(set(protocols)) != 1:
    print(f'Different protocols on same date {sess_dicts[0]["date"]} : {protocols}')
- if len(sess_dicts) > 1 and len(set(protocols)) == 1:  # Only if all protocols are the same
+ # Only if all protocols are the same and are not habituation
+ if len(sess_dicts) > 1 and len(set(protocols)) == 1 and protocols[0] != 'habituation':
    print(f'{len(sess_dicts)} sessions being combined for date {sess_dicts[0]["date"]}')
    combined_trials = load_combined_trials(session_paths, one, force=force)
    performance, contrasts, _ = training.compute_performance(combined_trials, prob_right=True)
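The combination guard added in this hunk reduces to three conditions; a hypothetical helper (not part of ibllib) makes them explicit:

```python
def should_combine(sess_dicts, protocols):
    """Combine same-day sessions only when there is more than one session,
    all ran the same protocol, and that protocol is not habituation."""
    return len(sess_dicts) > 1 and len(set(protocols)) == 1 and protocols[0] != 'habituation'

print(should_combine([{}, {}], ['training', 'training']))        # True
print(should_combine([{}, {}], ['habituation', 'habituation']))  # False
print(should_combine([{}, {}], ['training', 'biased']))          # False
```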
diff --git a/_modules/ibllib/pipes/video_tasks.html b/_modules/ibllib/pipes/video_tasks.html
index d738d715..9283afbe 100644
--- a/_modules/ibllib/pipes/video_tasks.html
+++ b/_modules/ibllib/pipes/video_tasks.html
@@ -109,10 +109,14 @@
class VideoSyncQcBpod(base_tasks.VideoTask):
    """
    Task to sync camera timestamps to main DAQ timestamps
- N.B Signatures only reflect new daq naming convention, non compatible with ephys when not running on server
+ N.B. Signatures only reflect the new DAQ naming convention; not compatible with ephys when not running on server
    """
    priority = 40
    job_size = 'small'
@@ -370,7 +377,7 @@