Recording class#

class pyneon.Recording(recording_dir: str | Path)#

Container of a multi-modal recording with Stream, Events, and Video data.

The recording directory is expected to follow either the Pupil Cloud format (tested with data format version >= 2.3) or the native Pupil Labs format (tested with data format version >= 2.5). In both cases, the directory must contain an info.json file.

Example Pupil Cloud recording directory structure:

recording_dir/
├── 3d_eye_states.csv
├── blinks.csv
├── events.csv
├── fixations.csv
├── gaze.csv
├── imu.csv
├── info.json (REQUIRED)
├── labels.csv
├── saccades.csv
├── scene_camera.json
├── world_timestamps.csv
└── *.mp4

Example native Pupil Labs recording directory structure:

recording_dir/
├── blinks ps1.raw
├── blinks ps1.time
├── blinks.dtype
├── calibration.bin
├── event.time
├── event.txt
├── ...
├── gaze ps1.raw
├── gaze ps1.time
├── gaze.dtype
├── ...
├── info.json (REQUIRED)
├── Neon Scene Camera v1 ps1.mp4
├── Neon Scene Camera v1 ps1.time
├── Neon Sensor Module v1 ps1.mp4
├── Neon Sensor Module v1 ps1.time
├── ...
├── wearer.json
├── worn ps1.raw
└── worn.dtype

Streams, events, and scene video will be located but not loaded until accessed as properties such as gaze, fixations, and scene_video.
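
Since both layouts share the info.json requirement, a pre-check before constructing a Recording can be sketched in plain Python (looks_like_neon_recording is a hypothetical helper for illustration, not part of pyneon):

```python
from pathlib import Path

def looks_like_neon_recording(recording_dir) -> bool:
    # Both the Pupil Cloud and native formats require an info.json
    # at the top level of the recording directory.
    return (Path(recording_dir) / "info.json").is_file()
```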

Parameters:
recording_dir : str or pathlib.Path

Path to the directory containing the recording.

Attributes:
recording_id : str

Recording ID.

recording_dir : pathlib.Path

Path to the recording directory.

format : {“cloud”, “native”}

Recording format, either “cloud” for Pupil Cloud format or “native” for native format.

info : dict

Information about the recording. Read from info.json. For details, see https://docs.pupil-labs.com/neon/data-collection/data-format/#info-json.

data_format_version : str | None

Data format version as in info.json.

Methods

close()

Release cached video handles, if any.

concat_events(events_names)

Concatenate different events.

concat_streams(stream_names[, ...])

Concatenate data from different streams under common timestamps.

export_cloud_format(target_dir[, rebase])

Export native data to cloud-like format.

export_eye_tracking_bids(output_dir[, ...])

Export eye-tracking data to Eye-Tracking-BIDS format.

export_motion_bids(motion_dir[, prefix, ...])

Export IMU data to Motion-BIDS format.

plot_distribution([heatmap_source, ...])

Plot a heatmap of gaze or fixation data on a matplotlib axis.

sync_gaze_to_video([window_size, inplace])

Synchronize gaze data to video frames by applying windowed averaging around timestamps of each video frame.

close() → None#

Release cached video handles, if any.

property gaze: Stream#

Return a cached Stream instance containing gaze data.

For Pupil Cloud recordings, the data is loaded from gaze.csv.

For native recordings, the data is loaded from gaze_200hz.raw (if present; otherwise from gaze ps1.raw) along with the corresponding .time and .dtype files.
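
The documented lookup order for native gaze data can be sketched as follows (native_gaze_filename is a hypothetical helper illustrating the fallback, not pyneon's actual implementation):

```python
from pathlib import Path

def native_gaze_filename(recording_dir) -> str:
    # Prefer the resampled 200 Hz file if present; otherwise fall
    # back to the raw sensor file, as described above.
    rec = Path(recording_dir)
    if (rec / "gaze_200hz.raw").is_file():
        return "gaze_200hz.raw"
    return "gaze ps1.raw"
```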

property imu: Stream#

Return a cached Stream instance containing IMU data.

For Pupil Cloud recordings, the data is loaded from imu.csv.

For native recordings, the data is loaded from imu ps1.raw, along with the corresponding .time and .dtype files.

property eye_states: Stream#

Return a cached Stream instance containing eye states data.

For Pupil Cloud recordings, the data is loaded from 3d_eye_states.csv.

For native recordings, the data is loaded from eye_state ps1.raw, along with the corresponding .time and .dtype files.

property blinks: Events#

Return a cached Events instance containing blink event data.

For Pupil Cloud recordings, the data is loaded from blinks.csv.

For native recordings, the data is loaded from blinks ps1.raw, along with the corresponding .time and .dtype files.

property fixations: Events#

Return a cached Events instance containing fixations data.

For Pupil Cloud recordings, the data is loaded from fixations.csv.

For native recordings, the data is loaded from fixations ps1.raw, along with the corresponding .time and .dtype files.

property saccades: Events#

Return a cached Events instance containing saccades data.

For Pupil Cloud recordings, the data is loaded from saccades.csv.

For native recordings, the data is loaded from saccades ps1.raw, along with the corresponding .time and .dtype files.

property events: Events#

Return a cached Events instance containing events data.

For Pupil Cloud recordings, the events data is loaded from events.csv.

For native recordings, the events data is loaded from event.txt and event.time.

property scene_video: Video#

Return a cached Video instance containing scene video data.

For Pupil Cloud recordings, the video is loaded from the only *.mp4 file in the recording directory.

For native recordings, the video is loaded from the Neon Scene Camera*.mp4 file in the recording directory.

property eye_video: Video#

Return a cached Video instance containing eye video data.

Eye video is only available for native recordings and is loaded from the Neon Sensor Module*.mp4 file in the recording directory.

property start_time: int#

Start time (in ns) of the recording as in info.json. May not match the start time of each data stream.

property start_datetime: datetime#

Start time (datetime) of the recording as in info.json. May not match the start time of each data stream.

concat_streams(stream_names: str | list[str], sampling_freq: Number | str = 'min', float_kind: str | int = 'linear', other_kind: str | int = 'nearest', inplace: bool = False) → Stream#

Concatenate data from different streams under common timestamps. Since the streams may have different timestamps and sampling frequencies, resampling of all streams to a set of common timestamps is performed. The latest start timestamp and earliest last timestamp of the selected streams are used to define the common timestamps.

Parameters:
stream_names : str or list of str

Stream names to concatenate. If “all”, then all streams will be used. If a list, items must be in {"gaze", "imu", "eye_states"} (“3d_eye_states” is also tolerated as an alias for “eye_states”).

sampling_freq : float or int or str, optional

Sampling frequency of the concatenated streams. If numeric, the streams will be interpolated to this frequency. If “min” (default), the lowest nominal sampling frequency of the selected streams will be used. If “max”, the highest nominal sampling frequency will be used.

float_kind : str, optional

Kind of interpolation applied on columns of float type. Defaults to “linear”. For details see scipy.interpolate.interp1d.

other_kind : str, optional

Kind of interpolation applied on columns of other types. Defaults to “nearest”. Only “nearest”, “previous”, and “next” are recommended.

inplace : bool, optional

Replace selected stream data with interpolated data during concatenation if True. Defaults to False.

Returns:
Stream

Stream instance containing concatenated data.
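
The common-timestamp construction described above can be sketched in plain Python (a simplification assuming regularly sampled streams; common_timestamps and linear_interp are illustrative helpers, not pyneon's implementation):

```python
def common_timestamps(streams):
    # Common window: latest start to earliest last timestamp across
    # all streams, sampled at the lowest nominal frequency ("min"),
    # i.e. the largest inter-sample interval.
    start = max(ts[0] for ts in streams)
    end = min(ts[-1] for ts in streams)
    step = max(ts[1] - ts[0] for ts in streams)
    out = []
    t = start
    while t <= end:
        out.append(t)
        t += step
    return out

def linear_interp(t, ts, vals):
    # Linear interpolation of a float column onto time t (ts sorted),
    # analogous to float_kind="linear".
    for i in range(len(ts) - 1):
        if ts[i] <= t <= ts[i + 1]:
            w = (t - ts[i]) / (ts[i + 1] - ts[i])
            return vals[i] + w * (vals[i + 1] - vals[i])
    raise ValueError("t outside stream range")
```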

concat_events(events_names: str | list[str]) → Events#

Concatenate different events. All columns of the selected event types will be present in the final DataFrame. An additional “type” column denotes the event type. If the events type is selected, its “timestamp [ns]” column will be renamed to “start timestamp [ns]”, and its “name” and “type” columns will be renamed to “message name” and “message type” respectively for a more readable output.

Parameters:
events_names : str or list of str

List of event names to concatenate. Event names must be in {"blinks", "fixations", "saccades", "events"} (singular forms are tolerated).

Returns:
Events

Events instance containing concatenated data.
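
The column renaming described above can be sketched per row (normalize_event_row is a hypothetical helper illustrating the documented renaming, not pyneon code):

```python
def normalize_event_row(row: dict, event_type: str) -> dict:
    # For the "events" type, rename its columns as documented;
    # every row additionally gets a "type" column with the event type.
    out = dict(row)
    if event_type == "events":
        out["start timestamp [ns]"] = out.pop("timestamp [ns]")
        out["message name"] = out.pop("name")
        out["message type"] = out.pop("type", None)
    out["type"] = event_type
    return out
```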

plot_distribution(heatmap_source: Literal['gaze', 'fixations', None] = 'gaze', scatter_source: Literal['gaze', 'fixations', None] = 'fixations', step_size: int = 10, sigma: int | float = 2, width_height: tuple[int, int] = (1600, 1200), cmap: str = 'inferno', ax: Axes | None = None, show: bool = True) → tuple[Figure, Axes]#

Plot a heatmap of gaze or fixation data on a matplotlib axis. Users can choose whether to generate a smoothed heatmap and/or a scatter plot, and the data source for each (gaze or fixations).

Parameters:
heatmap_source : {‘gaze’, ‘fixations’, None}

Source of the data to plot as a heatmap. If None, no heatmap is plotted. Defaults to ‘gaze’.

scatter_source : {‘gaze’, ‘fixations’, None}

Source of the data to plot as a scatter plot. If None, no scatter plot is plotted. Defaults to ‘fixations’. Gaze data is typically more dense and thus less suitable for scatter plots.

step_size : int

Size of the grid cells in pixels. Defaults to 10.

sigma : int or float

Standard deviation of the Gaussian kernel used to smooth the heatmap. If None or 0, no smoothing is applied. Defaults to 2.

width_height : tuple[int, int]

Width and height (in pixels) of the scene camera frames, used to set the heatmap dimensions when the scene video is not available. Defaults to (1600, 1200).

cmap : str

Colormap to use for the heatmap. Defaults to ‘inferno’.

ax : matplotlib.axes.Axes or None

Axis to plot on. If None, a new figure is created. Defaults to None.

show : bool

Show the figure if True. Defaults to True.

Returns:
fig : matplotlib.figure.Figure

Figure instance containing the plot.

ax : matplotlib.axes.Axes

Axis instance containing the plot.
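
The gridding step behind the heatmap can be illustrated in plain Python (gaze_histogram is a sketch of binning points into step_size cells; smoothing and plotting are omitted, and this is not pyneon's implementation):

```python
def gaze_histogram(points, width, height, step_size=10):
    # Bin (x, y) pixel coordinates into step_size x step_size cells;
    # each cell counts the samples falling inside it.
    n_cols = width // step_size
    n_rows = height // step_size
    grid = [[0] * n_cols for _ in range(n_rows)]
    for x, y in points:
        cx, cy = int(x) // step_size, int(y) // step_size
        if 0 <= cx < n_cols and 0 <= cy < n_rows:
            grid[cy][cx] += 1
    return grid
```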

sync_gaze_to_video(window_size: int | None = None, inplace: bool = False) → Stream | None#

Synchronize gaze data to video frames by applying windowed averaging around timestamps of each video frame.

Parameters:
window_size : int, optional

Size of the time window in nanoseconds used for averaging. If None, defaults to the median interval between video frame timestamps.

inplace : bool, optional

If True, update the gaze stream in-place and return None. If False, return a new Stream. Defaults to False.

Returns:
Stream or None

A Stream indexed by “timestamp [ns]” containing the window-averaged gaze data, or None if inplace=True.
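
The windowed averaging can be sketched as follows (window_average is a hypothetical helper illustrating the documented behaviour, including the median-frame-interval default; not pyneon's implementation):

```python
def window_average(frame_ts, gaze_ts, gaze_vals, window_size=None):
    # Default window: the median interval between video frame timestamps.
    if window_size is None:
        diffs = sorted(b - a for a, b in zip(frame_ts, frame_ts[1:]))
        window_size = diffs[len(diffs) // 2]
    half = window_size / 2
    out = []
    for t in frame_ts:
        # Average all gaze samples within the window centred on frame time t.
        inside = [v for ts, v in zip(gaze_ts, gaze_vals) if abs(ts - t) <= half]
        out.append(sum(inside) / len(inside) if inside else None)
    return out
```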

export_cloud_format(target_dir: str | Path, rebase: bool = True)#

Export native data to cloud-like format.

Parameters:
target_dir : str or pathlib.Path

Output directory to save the Cloud-Format structured data.

rebase : bool, optional

If True, re-initialize the recording on the target directory after export.

export_motion_bids(motion_dir: str | Path, prefix: str | None = None, extra_metadata: dict = {})#

Export IMU data to Motion-BIDS format.

Motion-BIDS [1] is an extension to the Brain Imaging Data Structure (BIDS) that standardizes motion sensor data from IMUs for reproducible research. This method creates motion time-series and metadata files, channels files, and updates the scans file in the subject/session directory. The output files are BIDS-compliant templates and may require additional metadata editing for full compliance with your specific use case.

The exported files are:

<motion_dir>/
    <prefix>_channels.json
    <prefix>_channels.tsv
    <prefix>_motion.json
    <prefix>_motion.tsv
sub-<label>[_ses-<label>]_scans.tsv

For example:

sub-01/
    ses-1/
        motion/
            sub-01_ses-1_task-MyTask_tracksys-NeonIMU_run-1_channels.json
            sub-01_ses-1_task-MyTask_tracksys-NeonIMU_run-1_channels.tsv
            sub-01_ses-1_task-MyTask_tracksys-NeonIMU_run-1_motion.json
            sub-01_ses-1_task-MyTask_tracksys-NeonIMU_run-1_motion.tsv
        sub-01_ses-1_scans.tsv

Motion-BIDS specification can be found at: https://bids-specification.readthedocs.io/en/stable/modality-specific-files/motion.html

Parameters:
motion_dir : str or pathlib.Path

Output directory to save the Motion-BIDS formatted data. The directory name itself should be “motion” as specified by Motion-BIDS.

prefix : str, optional

BIDS naming prefix. The format is:

sub-<label>[_ses-<label>]_task-<label>_tracksys-<label>[_acq-<label>][_run-<index>]

Required fields are sub-<label>, task-<label>, and tracksys-<label> (use tracksys-NeonIMU for Neon IMU data). If not provided, inferred from the directory structure (parent directories) or defaults to sub-{wearer_name}_task-TaskName_tracksys-NeonIMU.

extra_metadata : dict, optional

Extra metadata to include in the JSON metadata file. Keys must be valid BIDS fields (for example, TaskName).
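
Assembling a prefix from its BIDS entities can be sketched as follows (motion_bids_prefix is a hypothetical helper showing how the required and optional entities compose; pyneon itself accepts or infers the prefix string directly):

```python
def motion_bids_prefix(sub, task, tracksys="NeonIMU", ses=None, run=None):
    # Required entities: sub-, task-, tracksys-; ses- and run- are optional.
    parts = [f"sub-{sub}"]
    if ses is not None:
        parts.append(f"ses-{ses}")
    parts += [f"task-{task}", f"tracksys-{tracksys}"]
    if run is not None:
        parts.append(f"run-{run}")
    return "_".join(parts)
```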

References

[1]

Jeung, S., Cockx, H., Appelhoff, S., Berg, T., Gramann, K., Grothkopp, S., Warmerdam, E., Hansen, C., Oostenveld, R., & Welzel, J. (2024). Motion-BIDS: An extension to the brain imaging data structure to organize motion data for reproducible research. Scientific Data, 11(1), 716. https://doi.org/10.1038/s41597-024-03559-8

export_eye_tracking_bids(output_dir: str | Path, prefix: str | None = None, extra_metadata: dict = {})#

Export eye-tracking data to Eye-Tracking-BIDS format.

Eye-Tracking-BIDS [2] standardizes gaze position, pupil data, and eye-tracking events by treating eye-tracking data as physiological data that can be organized alongside most BIDS modalities. The export creates gaze (and pupil diameter) time-series and event data files with accompanying metadata.

The exported files are:

<output_dir>/
    <prefix>_physio.tsv.gz
    <prefix>_physio.json
    <prefix>_physioevents.tsv.gz
    <prefix>_physioevents.json
sub-<label>[_ses-<label>]_scans.tsv

For example:

sub-01/
    ses-1/
        motion/
            <existing motion files if motion export is performed>
            sub-01_ses-1_task-MyTask_tracksys-NeonIMU_run-1_physio.json
            sub-01_ses-1_task-MyTask_tracksys-NeonIMU_run-1_physio.tsv.gz
            sub-01_ses-1_task-MyTask_tracksys-NeonIMU_run-1_physioevents.json
            sub-01_ses-1_task-MyTask_tracksys-NeonIMU_run-1_physioevents.tsv.gz

BIDS specifications for physiological recordings, and specifically eye-tracking data, can be found at https://bids-specification.readthedocs.io/.

Parameters:
output_dir : str or pathlib.Path

Output directory to save the Eye-Tracking-BIDS formatted data.

prefix : str, optional

BIDS naming prefix. Must include sub-<label> and task-<label>. If not provided, the function attempts to infer it from the directory structure or detect it from existing files (e.g., from export_motion_bids()). Defaults to sub-{wearer_name}_task-TaskName if no existing files are found.

extra_metadata : dict, optional

Extra metadata to include in the JSON metadata file. Keys must be valid BIDS fields.
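
Given a prefix, the four output file names follow directly from the listing above (physio_filenames is a hypothetical helper for illustration, not part of pyneon):

```python
def physio_filenames(prefix: str) -> list[str]:
    # The four files produced per recording: compressed time-series,
    # events, and their JSON sidecars.
    return [
        f"{prefix}_physio.tsv.gz",
        f"{prefix}_physio.json",
        f"{prefix}_physioevents.tsv.gz",
        f"{prefix}_physioevents.json",
    ]
```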

References

[2]

Szinte, M., Bach, D. R., Draschkow, D., Esteban, O., Gagl, B., Gau, R., Gregorova, K., Halchenko, Y. O., Huberty, S., Kling, S. M., Kulkarni, S., Maintainers, T. B., Markiewicz, C. J., Mikkelsen, M., Oostenveld, R., & Pfarr, J.-K. (2026). Eye-Tracking-BIDS: The Brain Imaging Data Structure extended to gaze position and pupil data. bioRxiv. https://doi.org/10.64898/2026.02.03.703514