NeonRecording class#
- class pyneon.NeonRecording(recording_dir: str | Path)#
Data from a single recording. The recording directory could be downloaded from either a single recording or a project on Pupil Cloud. In either case, the directory must contain an info.json file. For example, a recording directory could have the following structure:

recording_dir/
├── info.json (REQUIRED)
├── gaze.csv
├── 3d_eye_states.csv
├── imu.csv
├── blinks.csv
├── fixations.csv
├── saccades.csv
├── events.csv
├── labels.csv
├── world_timestamps.csv
├── scene_camera.json
├── <scene_video>.mp4 (if present)
├── scanpath.pkl (after executing `estimate_scanpath`)
└── video_with_scanpath.mp4 (after executing `overlay_scanpath_on_video`)
Streams, events (and scene video) will be located but not loaded until accessed as properties such as gaze, imu, and eye_states.
- Parameters:
recording_dir (str or pathlib.Path) – Path to the directory containing the recording.
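A minimal usage sketch; the directory path below is hypothetical and only illustrates the layout described above:

```python
from pyneon import NeonRecording

# Load a recording downloaded from Pupil Cloud; only info.json is required up front.
rec = NeonRecording("path/to/recording_dir")

# Streams are located immediately but loaded lazily on first property access.
gaze = rec.gaze            # gaze stream, or None if gaze.csv is absent
fixations = rec.fixations  # fixation events, or None if fixations.csv is absent
print(rec.contents)        # overview of which files were found in the directory
```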
- recording_dir#
Path to the recording directory.
- Type: pathlib.Path
- info#
Information about the recording. Read from info.json. For details, see https://docs.pupil-labs.com/neon/data-collection/data-format/#info-json.
- Type: dict
- start_time#
Start time (in ns) of the recording as in info.json. May not match the start time of each data stream.
- Type: int
- start_datetime#
Start time (datetime) of the recording as in info.json. May not match the start time of each data stream.
- Type: datetime.datetime
- contents#
Contents of the recording directory. Each index is a stream or event name (e.g. gaze or imu) and columns are exist (bool), filename (str), and path (Path).
- Type: pandas.DataFrame
- property eye_states: NeonEyeStates | None#
Returns a NeonEyeStates object or None if no eye states data is found.
- property blinks: NeonBlinks | None#
Returns a NeonBlinks object or None if no blinks data is found.
- property fixations: NeonFixations | None#
Returns a NeonFixations object or None if no fixations data is found.
- property saccades: NeonSaccades | None#
Returns a NeonSaccades object or None if no saccades data is found.
- property events: NeonEvents | None#
Returns a NeonEvents object or None if no events data is found.
- concat_streams(stream_names: str | list[str], sampling_freq: Number | str = 'min', resamp_float_kind: str = 'linear', resamp_other_kind: str = 'nearest', inplace: bool = False) → DataFrame#
Concatenate data from different streams under common timestamps. Since the streams may have different timestamps and sampling frequencies, all streams are resampled to a set of common timestamps. The latest start timestamp and earliest last timestamp of the selected streams are used to define the common timestamps.
- Parameters:
stream_names (str or list of str) – Stream names to concatenate. If "all", then all streams will be used. If a list, items must be in {"gaze", "imu", "eye_states"} ("3d_eye_states" is also tolerated as an alias for "eye_states").
sampling_freq (float or int or str, optional) – Sampling frequency to resample the streams to. If numeric, the streams will be resampled to this frequency. If "min", the lowest nominal sampling frequency of the selected streams will be used. If "max", the highest nominal sampling frequency will be used. Defaults to "min".
resamp_float_kind (str, optional) – Kind of interpolation applied on columns of float type. Defaults to "linear". For details see scipy.interpolate.interp1d.
resamp_other_kind (str, optional) – Kind of interpolation applied on columns of other types. Defaults to "nearest".
inplace (bool, optional) – Replace selected stream data with resampled data during concatenation if True. Defaults to False.
- Returns:
concat_data – Concatenated data.
- Return type: pandas.DataFrame
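A sketch of concatenating streams, assuming rec is a NeonRecording (as constructed above) that contains gaze, IMU, and eye-states data:

```python
# Resample gaze, IMU, and eye states onto common timestamps, using the lowest
# nominal sampling frequency among the three streams.
concat_data = rec.concat_streams(
    ["gaze", "imu", "eye_states"],
    sampling_freq="min",
)
print(concat_data.columns)
```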
- concat_events(event_names: list[str]) → DataFrame#
Concatenate different events. All columns in the selected event types will be present in the final DataFrame. An additional "type" column denotes the event type. If events is selected, its "timestamp [ns]" column will be renamed to "start timestamp [ns]", and the "name" and "type" columns will be renamed to "message name" and "message type" respectively to provide a more readable output.
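A sketch of concatenating events; the event names are assumed to correspond to the event properties listed above:

```python
# Combine blink, fixation, and saccade events into one DataFrame;
# the "type" column tells the event types apart.
events_df = rec.concat_events(["blinks", "fixations", "saccades"])
print(events_df["type"].unique())
```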
- plot_distribution(heatmap_source: Literal['gaze', 'fixations', None] = 'gaze', scatter_source: Literal['gaze', 'fixations', None] = 'fixations', step_size: int = 10, sigma: float | None = 2, width_height: tuple[int, int] = (1600, 1200), cmap: str | None = 'inferno', ax: Axes | None = None, show: bool = True)#
Plot a heatmap of gaze or fixation data on a matplotlib axis. Users can flexibly choose to generate a smoothed heatmap and/or scatter plot and the source of the data (gaze or fixation).
- Parameters:
heatmap_source ({'gaze', 'fixations', None}) – Source of the data to plot as a heatmap. If None, no heatmap is plotted. Defaults to 'gaze'.
scatter_source ({'gaze', 'fixations', None}) – Source of the data to plot as a scatter plot. If None, no scatter plot is plotted. Defaults to ‘fixations’. Gaze data is typically more dense and thus less suitable for scatter plots.
step_size (int) – Size of the grid cells in pixels. Defaults to 10.
sigma (float or None) – Standard deviation of the Gaussian kernel used to smooth the heatmap. If None or 0, no smoothing is applied. Defaults to 2.
width_height (tuple[int, int]) – Width and height (in pixels) of the scene camera frames, used to set the heatmap dimensions when the scene video is not available. Defaults to (1600, 1200).
cmap (str or None) – Colormap to use for the heatmap. Defaults to ‘inferno’.
ax (matplotlib.pyplot.Axes or None) – Axis to plot the frame on. If None, a new figure is created. Defaults to None.
show (bool) – Show the figure if True. Defaults to True.
- Returns:
fig (matplotlib.pyplot.Figure) – Figure object containing the plot.
ax (matplotlib.pyplot.Axes) – Axis object containing the plot.
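A sketch of plotting a gaze heatmap with fixation centers scattered on top; the output filename is illustrative:

```python
# Smoothed gaze heatmap plus fixation scatter, without opening a window.
fig, ax = rec.plot_distribution(
    heatmap_source="gaze",
    scatter_source="fixations",
    sigma=2,
    show=False,
)
fig.savefig("gaze_distribution.png")
```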
- map_gaze_to_video(resamp_float_kind: str = 'linear', resamp_other_kind: str = 'nearest') → DataFrame#
Map gaze data to video frames.
- Parameters:
resamp_float_kind (str) – Interpolation method for float columns.
resamp_other_kind (str) – Interpolation method for non-float columns.
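A minimal sketch, assuming the recording contains both gaze data and a scene video:

```python
# Interpolate gaze samples onto the scene-video frame timestamps.
mapped_gaze = rec.map_gaze_to_video()
print(mapped_gaze.head())
```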
- estimate_scanpath(lk_params: None | dict = None) → DataFrame#
Map fixations to video frames.
- Parameters:
lk_params (dict or None) – Parameters for the Lucas-Kanade optical flow algorithm.
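A minimal sketch using the default Lucas-Kanade parameters:

```python
# Estimate the scanpath with default optical-flow settings; per the directory
# layout above, scanpath.pkl appears in the recording directory after this call.
scanpath = rec.estimate_scanpath()
print(scanpath.head())
```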
- overlay_scanpath_on_video(video_output_path: Path | str = 'sacnpath_overlay_video.mp4', circle_radius: int = 10, show_lines: bool = True, line_thickness: int = 2, show_video: bool = False, max_fixations: int = 10) → None#
Overlay fixations and gaze data on video frames and save the resulting video.
- Parameters:
video_output_path (str or pathlib.Path) – Path where the video with fixations will be saved.
circle_radius (int) – Radius of the circle used to represent fixations.
line_thickness (int) – Thickness of the lines connecting successive fixations.
show_video (bool) – Flag to display the video with fixations overlaid in real time while the output is written.
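A sketch of writing the overlay video; the output path is illustrative:

```python
# Draw fixation circles and connecting lines on the scene video and save it.
rec.overlay_scanpath_on_video(
    video_output_path="scanpath_overlay.mp4",
    circle_radius=10,
    show_lines=True,
    show_video=False,
)
```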
- to_motion_bids(motion_dir: str | Path, prefix: str = '', extra_metadata: dict = {})#
Export IMU data to Motion-BIDS format. Continuous samples are saved to a .tsv file and metadata (with template fields) are saved to a .json file. Users should later edit the metadata file according to the experiment to make it BIDS-compliant.
- Parameters:
motion_dir (str or pathlib.Path) – Output directory to save the Motion-BIDS formatted data.
prefix (str, optional) – Prefix for the BIDS filenames, by default "sub-XX_task-YY_tracksys-NeonIMU". The format should be sub-<label>[_ses-<label>]_task-<label>_tracksys-<label>[_acq-<label>][_run-<index>] (fields in [] are optional). Files will be saved as {prefix}_motion.<tsv|json>.
Notes
Motion-BIDS is an extension to the Brain Imaging Data Structure (BIDS) to standardize the organization of motion data for reproducible research [1]. For more information, see https://bids-specification.readthedocs.io/en/stable/modality-specific-files/motion.html.
References
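A hedged sketch of exporting IMU data; the BIDS directory and prefix labels are illustrative:

```python
# Export IMU samples and template metadata to a Motion-BIDS directory.
# The subject/session/task labels below are placeholders for this example.
rec.to_motion_bids(
    motion_dir="bids_dataset/sub-01/ses-01/motion",
    prefix="sub-01_ses-01_task-freeview_tracksys-NeonIMU",
)
# Remember to edit the generated .json metadata to make it BIDS-compliant.
```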
- to_eye_bids(output_dir: str | Path, prefix: str = '', extra_metadata: dict = {})#
Export eye-tracking data to Eye-tracking-BIDS format. Continuous samples and events are saved to .tsv.gz files with accompanying .json metadata files. Users should later edit the metadata files according to the experiment.
- Parameters:
output_dir (str or pathlib.Path) – Output directory to save the Eye-tracking-BIDS formatted data.
prefix (str, optional) – Prefix for the BIDS filenames, by default "sub-XX_recording-eye". The format should be <matches>[_recording-<label>]_<physio|physioevents>.<tsv.gz|json> (fields in [] are optional). Files will be saved as {prefix}_physio.<tsv.gz|json> and {prefix}_physioevents.<tsv.gz|json>.
Notes
Eye-tracking-BIDS is an extension to the Brain Imaging Data Structure (BIDS) to standardize the organization of eye-tracking data for reproducible research. The extension is still being finalized. This method follows the latest standards outlined in bids-standard/bids-specification#1128.
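A hedged sketch of exporting eye-tracking data; the BIDS directory and prefix labels are illustrative:

```python
# Export continuous samples and events to Eye-tracking-BIDS files.
# The subject/task labels below are placeholders for this example.
rec.to_eye_bids(
    output_dir="bids_dataset/sub-01/ses-01/eyetrack",
    prefix="sub-01_task-freeview_recording-eye",
)
# The generated .json metadata files still need manual editing for the experiment.
```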