Labels#
Labels are stored as interval DataFrames (onset_s, offset_s,
labels, individual, trial, …) in a TSV file alongside each
.nc dataset. See Labels for the user-facing
workflow and the storage-format reference.
Interval operations#
- ethograph.labels.intervals.add_interval(df, onset_s, offset_s, labels, individual, protected_label_ids=None)[source]#
Add an interval, resolving overlaps for the same individual.
If the new interval overlaps existing intervals for the same individual, the existing intervals are trimmed or split — unless their label ID is in protected_label_ids, in which case they are kept untouched.
- Parameters:
df (pd.DataFrame) – Current intervals DataFrame.
onset_s (float) – Start time of the interval in seconds.
offset_s (float) – End time of the interval in seconds.
labels (int) – Label class ID.
individual (str) – Individual identifier.
protected_label_ids (set[int] | None) – Label IDs that must not be trimmed or split (e.g. labels belonging to inactive branches). None means no protection.
- Returns:
Updated intervals DataFrame sorted by onset_s.
- Return type:
pd.DataFrame
Examples
>>> df = empty_intervals()
>>> df = add_interval(df, 0.0, 1.0, 1, "crow_A")
>>> df = add_interval(df, 0.5, 1.5, 2, "crow_A")
>>> len(df)
2
>>> float(df.iloc[0]["offset_s"])  # first interval trimmed
0.499
- ethograph.labels.intervals.delete_interval(df, idx)[source]#
Drop interval by DataFrame index.
- Return type:
- ethograph.labels.intervals.find_interval_at(df, time_s, individual, label_ids=None)[source]#
Return DataFrame index of state interval containing time_s for individual.
Point events are never returned here — use find_point_at() for those.
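The lookup logic can be sketched with plain pandas. This is a simplified stand-in (the function name is hypothetical), ignoring event types and label-ID filtering:

```python
import pandas as pd

def find_interval_at_sketch(df, time_s, individual):
    """Return the DataFrame index of the first interval containing time_s
    for the given individual, or None if no interval matches."""
    mask = (
        (df["individual"] == individual)
        & (df["onset_s"] <= time_s)
        & (df["offset_s"] >= time_s)
    )
    hits = df.index[mask]
    return int(hits[0]) if len(hits) else None

df = pd.DataFrame({
    "onset_s": [0.0, 2.0], "offset_s": [1.0, 3.0],
    "labels": [1, 2], "individual": ["A", "A"],
})
print(find_interval_at_sketch(df, 0.5, "A"))  # 0
print(find_interval_at_sketch(df, 1.5, "A"))  # None (falls in the gap)
```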
- ethograph.labels.intervals.get_interval_bounds(df, idx)[source]#
Return (onset_s, offset_s, labels) for the interval at idx.
- ethograph.labels.intervals.empty_intervals()[source]#
Create an empty intervals DataFrame with the correct columns and dtypes.
- Returns:
Empty DataFrame with columns onset_s, offset_s, labels, individual, event_type.
- Return type:
pd.DataFrame
Examples
>>> from ethograph.labels.intervals import empty_intervals
>>> df = empty_intervals()
>>> df.columns.tolist()
['onset_s', 'offset_s', 'labels', 'individual', 'event_type']
>>> len(df)
0
- ethograph.labels.intervals.purge_short_intervals(df, min_duration_s, label_thresholds_s=None)[source]#
Drop intervals shorter than a threshold.
- Parameters:
- Returns:
Filtered DataFrame.
- Return type:
pd.DataFrame
Examples
>>> df = add_interval(empty_intervals(), 0.0, 0.01, 1, "A")
>>> df = add_interval(df, 1.0, 2.0, 2, "A")
>>> purged = purge_short_intervals(df, min_duration_s=0.1)
>>> len(purged)
1
- ethograph.labels.intervals.stitch_intervals(df, max_gap_s, individual=None)[source]#
Merge adjacent same-label intervals where gap <= max_gap_s.
- Parameters:
- Returns:
Stitched intervals DataFrame.
- Return type:
pd.DataFrame
Examples
>>> df = add_interval(empty_intervals(), 0.0, 1.0, 1, "A")
>>> df = add_interval(df, 1.05, 2.0, 1, "A")
>>> stitched = stitch_intervals(df, max_gap_s=0.1)
>>> len(stitched)
1
>>> float(stitched.iloc[0]["offset_s"])
2.0
- ethograph.labels.intervals.snap_boundaries(df, cp_times, max_expansion_s, max_shrink_s)[source]#
Snap interval onset/offset to nearest changepoint times.
- Parameters:
- Returns:
Snapped intervals with overlaps resolved.
- Return type:
pd.DataFrame
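The snapping rule for a single boundary can be sketched as below. This is a sketch (the helper name is hypothetical) under the assumption that max_expansion_s bounds how far a boundary may move outward (growing the interval) and max_shrink_s how far it may move inward; the real function additionally resolves overlaps between snapped intervals:

```python
import numpy as np

def snap_time(t, cp_times, max_expansion_s, max_shrink_s, is_onset):
    """Snap boundary time t to the nearest changepoint within the window.

    For an onset, expansion means moving earlier; for an offset, later.
    Returns t unchanged when no changepoint falls inside the window.
    """
    cp = np.asarray(cp_times, dtype=float)
    if is_onset:
        lo, hi = t - max_expansion_s, t + max_shrink_s
    else:
        lo, hi = t - max_shrink_s, t + max_expansion_s
    candidates = cp[(cp >= lo) & (cp <= hi)]
    if len(candidates) == 0:
        return t  # no changepoint close enough; keep the original boundary
    return float(candidates[np.argmin(np.abs(candidates - t))])

cps = [0.95, 2.1]
print(snap_time(1.0, cps, max_expansion_s=0.1, max_shrink_s=0.1, is_onset=True))   # 0.95
print(snap_time(2.0, cps, max_expansion_s=0.2, max_shrink_s=0.05, is_onset=False)) # 2.1
```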
- ethograph.labels.intervals.load_label_mapping(mapping_file='mapping.txt')[source]#
Load a label mapping with colors for visualization.
- Parameters:
mapping_file (str or Path) – Path to the mapping file. Each line is <id> <name> [<branch>] [<event_type>], where branch is an optional integer (default 0) grouping labels into branches for independent labeling, and event_type is "state" (default) or "point". Missing trailing columns inherit their defaults.
- Returns:
{label_id: {"name": str, "color": ndarray(3,), "order": int, "branch": int, "event_type": str}}.
- Return type:
- Raises:
FileNotFoundError – If mapping_file does not exist.
Examples
>>> mappings = load_label_mapping("mapping.txt")
>>> mappings[1]["name"]
'walk'
>>> mappings[1]["color"].shape
(3,)
Use the RGB colors to draw labelled rectangles on a plot:
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches

mappings = load_label_mapping("mapping.txt")
fig, ax = plt.subplots()
ax.plot(time, signal)
for _, row in intervals_df.iterrows():
    color = mappings[int(row["labels"])]["color"]  # (3,) RGB in [0, 1]
    ax.axvspan(row["onset_s"], row["offset_s"], alpha=0.5, color=color)

# Build a legend from the mapping
handles = [
    mpatches.Patch(color=m["color"], label=m["name"])
    for m in mappings.values()
]
ax.legend(handles=handles)
plt.show()
- ethograph.labels.intervals.save_label_mapping(mapping_file, mappings)[source]#
Write a label mapping back to disk, preserving branch and event_type.
Lines have the form <id> <name> <branch> <event_type> for scalar IDs. The event_type column is omitted when it equals the default ("state") so files stay backward-compatible with older readers.
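The writer can be sketched as follows. This is a simplified stand-in (the function name is hypothetical) that mirrors the line format described above, including the omission of the default event_type:

```python
from pathlib import Path

def write_mapping_sketch(path, mappings):
    """Write '<id> <name> <branch> [<event_type>]' lines, dropping the
    event_type column when it equals the default ("state")."""
    lines = []
    for label_id, m in sorted(mappings.items()):
        parts = [str(label_id), m["name"], str(m.get("branch", 0))]
        if m.get("event_type", "state") != "state":
            parts.append(m["event_type"])
        lines.append(" ".join(parts))
    Path(path).write_text("\n".join(lines) + "\n")

write_mapping_sketch("/tmp/mapping_demo.txt", {
    1: {"name": "walk", "branch": 0},
    2: {"name": "peck", "branch": 0, "event_type": "point"},
})
print(Path("/tmp/mapping_demo.txt").read_text())
# 1 walk 0
# 2 peck 0 point
```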
- ethograph.labels.intervals.load_mapping(mapping_file)[source]#
Load a class-name ↔ index mapping file.
The file is whitespace-delimited with lines <index> <name>.
- Parameters:
mapping_file (str or Path) – Path to the mapping file.
- Return type:
- Returns:
class_to_idx (dict[str, int])
idx_to_class (dict[int, str])
Examples
>>> class_to_idx, idx_to_class = load_mapping("mapping.txt")
>>> class_to_idx["walk"]
1
>>> idx_to_class[1]
'walk'
Dense ↔ interval conversion (ML)#
- ethograph.labels.ml.dense_to_intervals(dense_array, individuals, *, sample_rate=None, time_coord=None)[source]#
Convert a dense label array to an intervals DataFrame.
Provide either sample_rate (uniform spacing starting at t = 0) or an explicit time_coord array.
- Parameters:
dense_array (np.ndarray) – Shape (n_samples,) for a single individual, or (n_samples, n_individuals) for multiple.
individuals (list[str]) – Individual identifiers — length must match the second axis.
sample_rate (float, optional) – Sampling rate in Hz. Timestamps are computed as np.arange(n_samples) / sample_rate.
time_coord (np.ndarray, optional) – Explicit time array of length n_samples. Use this when timestamps are non-uniform or do not start at zero.
- Returns:
Intervals with columns onset_s, offset_s, labels, individual. offset_s is inclusive (last sample of the segment).
- Return type:
pd.DataFrame
- Raises:
ValueError – If neither sample_rate nor time_coord is given, or if the number of individuals does not match the array width.
Examples
Convert a 1-D dense array at 10 Hz:
>>> import numpy as np
>>> from ethograph.labels.ml import dense_to_intervals
>>> labels = np.array([0, 1, 1, 1, 0, 2, 2])
>>> df = dense_to_intervals(labels, ["crow_A"], sample_rate=10.0)
>>> df[["onset_s", "offset_s", "labels"]].values.tolist()
[[0.1, 0.3, 1], [0.5, 0.6, 2]]
With explicit timestamps:
>>> times = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
>>> df = dense_to_intervals(labels, ["crow_A"], time_coord=times)
>>> df["onset_s"].tolist()
[0.1, 0.5]
- ethograph.labels.ml.intervals_to_dense(df, sample_rate, individuals, n_samples)[source]#
Convert an intervals DataFrame to a dense label array.
Each interval is mapped onto the nearest sample indices using round(time * sample_rate). Overlapping intervals for the same individual are resolved by last-write-wins.
- Parameters:
df (pd.DataFrame) – Intervals DataFrame with columns onset_s, offset_s, labels, individual.
sample_rate (float) – Sampling rate in Hz (e.g. 30.0 for 30 fps video features).
individuals (list[str]) – Individual identifiers. The output column order matches this list.
n_samples (int) – Number of output time steps. Typically available as per-trial n_samples metadata in the TSV file.
- Returns:
Dense label array of shape (n_samples, len(individuals)), dtype int8. Background (unlabeled) time steps are 0.
- Return type:
np.ndarray
Examples
>>> import pandas as pd
>>> from ethograph.labels.ml import intervals_to_dense
>>> df = pd.DataFrame({
...     "onset_s": [0.1, 0.5], "offset_s": [0.3, 0.6],
...     "labels": [1, 2], "individual": ["A", "A"],
... })
>>> dense = intervals_to_dense(df, sample_rate=10.0, individuals=["A"], n_samples=7)
>>> dense[:, 0].tolist()
[0, 1, 1, 1, 0, 2, 2]
- ethograph.labels.ml.find_blocks(mask)[source]#
Find contiguous True blocks in a boolean array.
- Parameters:
mask (np.ndarray) – Boolean array.
- Return type:
- Returns:
starts (np.ndarray) – Start indices of True blocks.
ends (np.ndarray) – End indices (inclusive) of True blocks.
Examples
>>> import numpy as np
>>> mask = np.array([False, True, True, False, True])
>>> starts, ends = find_blocks(mask)
>>> starts
array([1, 4])
>>> ends
array([2, 4])
- ethograph.labels.ml.get_labels_start_end_indices(col, bg_class=0)[source]#
Return segment boundaries as sample indices (exclusive end).
Useful for slicing dense arrays or computing segment-level metrics.
- Parameters:
col (array-like) – 1-D dense label array.
bg_class (int) – Background class to ignore (default 0).
- Returns:
labels (list[int]) – Label class for each segment.
starts (list[int]) – Start index (inclusive) of each segment.
ends (list[int]) – End index (exclusive) — use array[start:end] to slice.
Examples
>>> from ethograph.labels.ml import get_labels_start_end_indices
>>> labels, starts, ends = get_labels_start_end_indices([0, 1, 1, 1, 0, 2, 2])
>>> labels
[1, 2]
>>> starts
[1, 5]
>>> ends
[4, 7]
>>> # To extract the first segment from a feature array:
>>> # segment_features = features[starts[0]:ends[0], :]
- ethograph.labels.ml.purge_small_blocks(labels, min_length, label_thresholds=None)[source]#
Remove label blocks shorter than a threshold (set to background).
Scans the dense label array for contiguous runs of the same non-zero label. If a run is shorter than its threshold, every sample in it is set to 0 (background).
This is the dense-array counterpart of purge_short_intervals() (which works in seconds on interval DataFrames).
- Parameters:
labels (np.ndarray) – 1-D dense label array (int), where 0 = background.
min_length (int) – Default minimum block length in samples. Blocks shorter than this are zeroed out. Convert from seconds: int(min_duration_s * sample_rate).
label_thresholds (dict[int, int], optional) – Per-label minimum lengths that override min_length. For example, {1: 10, 3: 30} means label 1 needs ≥10 samples and label 3 needs ≥30 samples; all other labels use min_length.
- Returns:
Copy of labels with short blocks zeroed out.
- Return type:
np.ndarray
Examples
Remove any block shorter than 3 samples:
>>> import numpy as np
>>> from ethograph.labels.ml import purge_small_blocks
>>> pred = np.array([0, 1, 0, 2, 2, 2, 2, 0])
>>> purge_small_blocks(pred, min_length=3).tolist()
[0, 0, 0, 2, 2, 2, 2, 0]
With per-label thresholds (label 2 needs ≥5 samples):
>>> purge_small_blocks(pred, min_length=1, label_thresholds={2: 5}).tolist()
[0, 1, 0, 0, 0, 0, 0, 0]
Typical pipeline — purge then stitch:
>>> pred = np.array([1, 1, 1, 0, 1, 0, 1, 1, 1])
>>> cleaned = purge_small_blocks(pred, min_length=2)  # remove the isolated 1-sample block
>>> cleaned.tolist()
[1, 1, 1, 0, 0, 0, 1, 1, 1]
>>> stitch_gaps(cleaned, max_gap_len=4).tolist()
[1, 1, 1, 1, 1, 1, 1, 1, 1]
- ethograph.labels.ml.stitch_gaps(labels, max_gap_len, skip_labels=None)[source]#
Fill small background gaps between same-label segments.
Scans the dense label array for short runs of zeros (background) flanked by the same non-zero label on both sides. When the gap is at most max_gap_len samples, it is filled with that label.
This is typically used after model prediction to clean up fragmented outputs where a behaviour is briefly interrupted by a few background frames.
- Parameters:
labels (np.ndarray) – 1-D dense label array (int), where 0 = background.
max_gap_len (int) – Maximum gap length in samples to fill. Gaps longer than this are left untouched. Convert from seconds: int(gap_s * sample_rate).
skip_labels (set[int], optional) – Labels whose trailing gaps should never be filled. For example, skip_labels={3} means that a gap preceded by label 3 is always kept even if the same label follows.
- Returns:
Copy of labels with qualifying gaps filled.
- Return type:
np.ndarray
Examples
Basic gap stitching (fill gaps of up to 2 samples):
>>> import numpy as np
>>> from ethograph.labels.ml import stitch_gaps
>>> pred = np.array([1, 1, 0, 1, 1, 0, 0, 0, 2, 2])
>>> stitch_gaps(pred, max_gap_len=2).tolist()
[1, 1, 1, 1, 1, 0, 0, 0, 2, 2]
The single-frame gap (index 2) is filled because label 1 appears on both sides. The 3-frame gap (indices 5–7) is left alone because it exceeds max_gap_len=2.
Using skip_labels to protect specific transitions:
>>> pred = np.array([3, 3, 0, 3, 3])
>>> stitch_gaps(pred, max_gap_len=2, skip_labels={3}).tolist()
[3, 3, 0, 3, 3]
- ethograph.labels.ml.fix_endings(labels, changepoints)[source]#
Extend label endings by one sample at changepoint boundaries.
When a labelled segment ends and the very next sample is a changepoint, the segment is extended by one sample. This accounts for the common off-by-one between predicted segment boundaries and detected changepoints.
- Parameters:
labels (np.ndarray) – 1-D dense label array (int).
changepoints (array-like) – Either a boolean mask of the same length (True = changepoint), or an array of integer changepoint indices.
- Returns:
Copy of labels with qualifying endings extended by one sample.
- Return type:
np.ndarray
Examples
>>> import numpy as np
>>> from ethograph.labels.ml import fix_endings
>>> labels = np.array([0, 1, 1, 0, 0, 2, 2, 0])
>>> cps = np.array([0, 0, 0, 1, 0, 0, 0, 1], dtype=bool)
>>> fix_endings(labels, cps).tolist()
[0, 1, 1, 1, 0, 2, 2, 2]
The segment of label 1 ended at index 2, and index 3 is a changepoint, so label 1 is extended to index 3. Same for label 2 at index 7.
TSV storage#
- ethograph.labels.tsv_store.labels_tsv_path(nc_path, suffix='')[source]#
Derive the labels TSV path from the .nc file path.
- Return type:
Examples
>>> labels_tsv_path("experiment/data.nc")
PosixPath('experiment/data_labels.tsv')
>>> labels_tsv_path("experiment/data.nc", suffix="_downsampled_100x")
PosixPath('experiment/data_downsampled_100x_labels.tsv')
- ethograph.labels.tsv_store.load_labels_tsv(path)[source]#
Load labels from a TSV file.
- Parameters:
path (str or Path) – Path to a _labels.tsv file.
- Returns:
Columns: trial, onset_s, offset_s, labels (int), individual, human_verified, changepoint_corrected, prediction_source.
- Return type:
pd.DataFrame
Examples
>>> df = load_labels_tsv("experiment/data_labels.tsv")
>>> df[["trial", "onset_s", "offset_s", "labels", "individual"]].head()
   trial  onset_s  offset_s  labels individual
0      1     0.41     0.505       1      crow1
1      1     0.51     0.620       2      crow1
- ethograph.labels.tsv_store.save_labels_tsv(path, df)[source]#
Save labels DataFrame to TSV. Uses atomic write (tmp + rename).
- Parameters:
path (str or Path) – Destination path.
df (pd.DataFrame) – Labels DataFrame with required columns (see REQUIRED_COLUMNS).
- Return type:
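The tmp + rename pattern can be sketched as below. This is a generic sketch of atomic writes (the helper name and the demo path are hypothetical), not the exact implementation:

```python
import os
import tempfile
import pandas as pd

def atomic_save_tsv(path, df):
    """Write to a temp file in the destination directory, then rename.

    os.replace is atomic on POSIX, so a concurrent reader sees either the
    old file or the new one, never a half-written TSV.
    """
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            df.to_csv(f, sep="\t", index=False)
        os.replace(tmp, path)  # atomic rename into place
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on failure
        raise

df = pd.DataFrame({"onset_s": [0.1], "offset_s": [0.3], "labels": [1],
                   "individual": ["A"], "trial": [1]})
atomic_save_tsv("/tmp/demo_labels.tsv", df)
print(pd.read_csv("/tmp/demo_labels.tsv", sep="\t").shape)  # (1, 5)
```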
- ethograph.labels.tsv_store.validate_labels_tsv(df, path='')[source]#
Validate that a labels DataFrame has all required columns.
- Raises:
ValueError – If any of onset_s, offset_s, labels, individual, trial are missing from the DataFrame columns.
- Return type:
- ethograph.labels.tsv_store.init_empty_labels(trials)[source]#
Create empty labels DataFrame.
- Return type:
- ethograph.labels.tsv_store.get_trial_from_tsv(all_df, trial)[source]#
Extract all rows for a single trial from the all-labels DataFrame.
Returns a DataFrame with the full TSV_COLUMNS set: trial, individual, labels, onset_s, offset_s, plus per-trial metadata columns. Callers that only need interval data can ignore the extras; set_trial_in_tsv already discards everything except INTERVAL_COLUMNS when writing back.
- Return type:
- ethograph.labels.tsv_store.set_trial_in_tsv(all_df, trial, trial_df)[source]#
Replace all rows for a trial in the all-labels DataFrame.
Preserves per-trial metadata columns from the existing rows.
- Return type:
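The replacement can be sketched as drop-then-concat. This is a simplified stand-in (the function name is hypothetical) that omits the metadata preservation the real function performs:

```python
import pandas as pd

def set_trial_sketch(all_df, trial, trial_df):
    """Drop the trial's old rows, append the new ones, and re-sort."""
    kept = all_df[all_df["trial"] != trial]
    out = pd.concat([kept, trial_df], ignore_index=True)
    return out.sort_values(["trial", "onset_s"]).reset_index(drop=True)

all_df = pd.DataFrame({"trial": [1, 2], "onset_s": [0.1, 0.2],
                       "offset_s": [0.3, 0.4], "labels": [1, 1],
                       "individual": ["A", "A"]})
new = pd.DataFrame({"trial": [2, 2], "onset_s": [0.5, 1.0],
                    "offset_s": [0.7, 1.2], "labels": [2, 3],
                    "individual": ["A", "A"]})
out = set_trial_sketch(all_df, 2, new)
print(out["labels"].tolist())  # [1, 2, 3]
```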
Predictions#
- ethograph.labels.predictions.load_prediction_file(path)[source]#
Load a prediction file (.npy or .pickle). Returns a numpy array.
For .npy files with shape (T,) (confidence or dense labels), uses memory-mapping (mmap_mode='r') so no data is copied into RAM until accessed.
- Return type:
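The loading logic can be sketched as below. This is a sketch (the function name is hypothetical) showing the memory-mapped path for .npy and the unpickle path for .pkl/.pickle:

```python
import pickle
from pathlib import Path
import numpy as np

def load_prediction_sketch(path):
    """Load a prediction file: memory-map .npy, unpickle .pkl/.pickle."""
    path = Path(path)
    if path.suffix == ".npy":
        # mmap_mode='r' defers reading: data is paged in only on access
        return np.load(path, mmap_mode="r")
    with open(path, "rb") as f:
        return np.asarray(pickle.load(f))

np.save("/tmp/demo_pred.npy", np.array([0.9, 0.8, 0.1]))
pred = load_prediction_sketch("/tmp/demo_pred.npy")
print(pred.shape)  # (3,)
```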
- ethograph.labels.predictions.prediction_to_labels_and_confidence(pred)[source]#
Convert prediction array to dense labels and optional confidence.
- Parameters:
pred (np.ndarray) – Shape (T, n_classes) for softmax probabilities, or (T,) for dense labels.
- Return type:
tuple[np.ndarray, np.ndarray | None]
- Returns:
labels (np.ndarray, shape (T,)) – Dense integer labels (argmax for softmax input).
confidence (np.ndarray or None) – Shape (T,) confidence scores. For softmax input: 1 - normalized_entropy. None if input is already dense labels.
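The softmax branch can be sketched as argmax plus an entropy-based confidence. This is a sketch of the described behaviour (the function name and the epsilon clipping are assumptions), not the exact implementation:

```python
import numpy as np

def pred_to_labels_confidence_sketch(pred):
    """Dense labels and confidence from a prediction array.

    (T,) input: already dense labels, confidence is None.
    (T, n_classes) input: argmax labels; confidence = 1 - normalized
    entropy (1.0 for a one-hot row, 0.0 for a uniform row).
    """
    pred = np.asarray(pred)
    if pred.ndim == 1:
        return pred.astype(int), None
    labels = pred.argmax(axis=1)
    p = np.clip(pred, 1e-12, 1.0)  # avoid log(0)
    entropy = -(p * np.log(p)).sum(axis=1)
    confidence = 1.0 - entropy / np.log(pred.shape[1])
    return labels, confidence

probs = np.array([[0.98, 0.01, 0.01], [1 / 3, 1 / 3, 1 / 3]])
labels, conf = pred_to_labels_confidence_sketch(probs)
print(labels.tolist())  # [0, 0]
print(conf[1])          # ~0.0 for the uniform row
```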
- class ethograph.labels.predictions.PredictionsStore(folder)[source]#
Lazy per-trial loader for a predictions folder.
Scans the folder at construction time (fast — filesystem only, no file reads). Individual trial data is loaded on demand via get_confidence().
Supports .npy (memory-mapped when shape is 1-D) and .pkl/.pickle formats. Additional formats can be added to load_prediction_file.
- Parameters:
folder (str or Path) – Folder containing per-trial prediction files.
Example
store = PredictionsStore("predictions_cetnet_20260330/uncorr")
confidence = store.get_confidence(trial=5, dt=dt)
labels_df, levels = store.load_all(dt, individual="Poppy", confidence_threshold=0.75)
- get_confidence(trial, dt)[source]#
Load and return the confidence array for one trial.
For .npy probability files the array is memory-mapped; for .pkl files the full file is read (typically ~150 KB — a few milliseconds). The returned array is not cached — call again to re-load if needed.
- Return type:
np.ndarray | None
- load_all(dt, individual, confidence_threshold=0.75, segment_confidence_threshold=0.6)[source]#
Load all trials — convert to intervals and compute confidence levels.
Confidence arrays are computed in one pass then discarded; only the per-trial high/low classification is kept. The same two-condition criterion used in the confidence PDF is applied: a trial is “low” if its overall mean confidence < confidence_threshold OR any labeled segment’s mean confidence < segment_confidence_threshold.
- Parameters:
- Return type:
- Returns:
all_labels_df (pd.DataFrame)
confidence_levels (dict) –
{trial: "low" | "high"}
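The two-condition criterion can be sketched as below. This is a sketch (the helper name is hypothetical, and representing segments as (start, end) index pairs is an assumption), not the exact implementation:

```python
import numpy as np

def classify_trial_confidence(confidence, segment_slices,
                              confidence_threshold=0.75,
                              segment_confidence_threshold=0.6):
    """Return "low" if overall mean confidence is below the trial
    threshold OR any labeled segment's mean is below the segment
    threshold; otherwise "high"."""
    if confidence.mean() < confidence_threshold:
        return "low"
    for start, end in segment_slices:
        if confidence[start:end].mean() < segment_confidence_threshold:
            return "low"
    return "high"

conf = np.array([0.9, 0.9, 0.5, 0.5, 0.9, 0.9])
print(classify_trial_confidence(conf, [(2, 4)]))  # low: segment mean 0.5
print(classify_trial_confidence(conf, [(0, 2)]))  # high
```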
Crowsetta / pynapple converters#
- class ethograph.labels.crowsetta_format.EthographSeq(onsets_s, offsets_s, labels, individuals=None, trials=None, annot_path='')#
Extended simple-seq format with individual and trial columns.
- class ethograph.labels.converters.LabelConverter[source]#
Base class for converting external label sources to ethograph intervals.
Subclasses override extract() to pull intervals from their source (NWB, pynapple, crowsetta, …). The shared resolve_labels() method centralises the “TSV on disk → extract from source → empty” fallback chain used by every LoadResult-producing function in data_loader.
- class ethograph.labels.converters.CrowsettaLabelConverter(file_path, format_name, name_to_id, individual='ind0')[source]#
Convert crowsetta annotation files to ethograph intervals.
Crowsetta labels are already in file-local time, so no trial table is needed for time conversion. If a trials_df is provided the first trial id is attached; otherwise trial=1.
- class ethograph.labels.converters.PynappleLabelConverter(data, trials_ep=None)[source]#
Extract labels from pynapple IntervalSet objects.
Collects every nap.IntervalSet in the data dict whose key is not "trials" or "epochs" (those are trial boundaries, not labels). Each IntervalSet name becomes a label class.
- ethograph.labels.converters.crowsetta_to_intervals(file_path, format_name, name_to_id, individual='ind0')[source]#
Convert a crowsetta annotation file to an intervals DataFrame.
- Return type:
- ethograph.labels.converters.extract_crowsetta_labels(file_path, format_name)[source]#
Extract unique string labels from a crowsetta annotation file.
- ethograph.labels.converters.resolve_crowsetta_mapping(file_path, format_name, mapping_path, configs_dir)[source]#
Check existing mapping against crowsetta labels; create new if needed.
Export helpers#
- ethograph.labels.export.enrich_labels_df(all_labels_df, nwb_alignment=None, keep_attrs=None, dt=None)[source]#
Enrich a raw labels DataFrame with computed columns for analysis export.
Takes the in-memory _all_labels_df (with columns onset_s, offset_s, labels, individual, trial) and adds session timing, duration, sequence info, and trial attributes.
- Parameters:
all_labels_df (pd.DataFrame) – Raw labels with required columns: onset_s, offset_s, labels, individual, trial.
nwb_alignment – Session metadata (for trial timing).
keep_attrs (list[str], optional) – Trial-level ds.attrs keys to include as extra columns (xarray only).
dt (TrialTree, optional) – Xarray data tree (only needed for keep_attrs and session name).
- Returns:
Enriched DataFrame with one row per non-background segment.
- Return type:
pd.DataFrame
- ethograph.labels.export.correct_offsets_trial(df)[source]#
Apply gap correction to a single trial’s interval DataFrame.
For each individual, pulls back offset_s when the gap to the next onset is smaller than eps, so pynapple can resolve all intervals.
Works on the per-trial format (columns: trial, onset_s, offset_s, labels, individual) returned by app_state.get_trial_intervals().
- Return type:
Plotting#
- ethograph.labels.plots.draw_label_rectangle(ax, start_time, end_time, labels, label_mappings, is_main=True, fraction=None, alpha=0.8)[source]#
Draw a label rectangle on a matplotlib axis.
- Parameters:
ax (Axes) – Matplotlib axis to plot on
start_time (float) – Start time of the label
end_time (float) – End time of the label
labels (int) – Label class ID for color mapping
label_mappings (Dict[int, Dict]) – Dict mapping label IDs to color info
is_main (bool) – If True, draw full-height rectangle; if False, draw small rectangle at top
fraction (Optional[float]) – Height fraction for non-main rectangles
- Return type:
Example:
fig, ax = plt.subplots()
ax.plot(time, signal)
draw_label_rectangle(ax, 1.2, 3.5, labels=1, label_mappings=label_mappings)
- ethograph.labels.plots.plot_label_segments(ax, df, label_mappings, individual=None, is_main=True, fraction=0.2, alpha=0.8)[source]#
Plot label segments from an intervals DataFrame.
- Parameters:
ax (Axes) – Matplotlib axis to plot on
df (DataFrame) – Intervals DataFrame with columns onset_s, offset_s, labels, individual
label_mappings (Dict[int, Dict]) – Dict mapping label IDs to color info
individual (Optional[str]) – If given, only plot segments for this individual
is_main (bool) – If True, plot full-height rectangles; if False, plot small rectangles at top
fraction (float) – Height fraction for non-main rectangles
- Return type:
Example:
import ethograph as eto
from ethograph.labels.intervals import load_label_mapping

dt = eto.open("data.nc")
label_mappings = load_label_mapping("mapping.txt")
fig, ax = plt.subplots()
# df is an intervals DataFrame with onset_s, offset_s, labels, individual
plot_label_segments(ax, df, label_mappings)
plt.show()
- ethograph.labels.plots.plot_label_segments_multirow(ax, df, label_mappings, row_index=0, row_spacing=0.8, rect_height=0.7, alpha=0.7, individual=None)[source]#
Plot label segments at a specific row position.
Useful for comparing ground truth vs. predictions on the same axis by placing each on a different row.
- Parameters:
ax (Axes) – Matplotlib axis to plot on
df (DataFrame) – Intervals DataFrame with columns onset_s, offset_s, labels, individual
label_mappings (Dict[int, Dict[str, str]]) – Dict mapping label IDs to color info
row_index (int) – Row number (0-based) for vertical positioning
row_spacing (float) – Vertical spacing between rows
rect_height (float) – Height of each rectangle
alpha (float) – Transparency of rectangles
individual (Optional[str]) – If given, only plot segments for this individual
- Return type:
Example:
import ethograph as eto
from ethograph.labels.intervals import load_label_mapping

dt = eto.open("data.nc")
pred_dt = eto.open("predictions.nc")
label_mappings = load_label_mapping("mapping.txt")
fig, ax = plt.subplots()
ax.set_yticks([0, 0.8])
ax.set_yticklabels(["ground truth", "predictions"])
# gt_df, pred_df are intervals DataFrames with onset_s, offset_s, labels, individual
gt_df = ...
pred_df = ...
plot_label_segments_multirow(ax, gt_df, label_mappings, row_index=0)
plot_label_segments_multirow(ax, pred_df, label_mappings, row_index=1)
plt.show()