{ "cells": [ { "cell_type": "markdown", "id": "06928832", "metadata": {}, "source": [ "# Reading Native-Format Recordings\n", "\n", "In this tutorial, we demonstrate how to load a Neon recording in native format (stored on the companion device or downloaded as native data) and explore the data structure. We also illustrate PyNeon's unified API, which handles both native and cloud formats seamlessly, and show how to convert native data to cloud format.\n", "\n", "## Downloading Sample Data\n", "\n", "We will use the same \"simple\" dataset as in the [previous tutorial](read_recording_cloud.ipynb), and analyze the native format instead." ] }, { "cell_type": "code", "execution_count": 1, "id": "ca9f3271", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "C:\\Users\\qian.chu\\Documents\\GitHub\\PyNeon\\data\\simple\\Native Recording Data\n" ] } ], "source": [ "from pyneon import Dataset, Recording, get_sample_data\n", "\n", "# Download sample data (if not existing) and return the path\n", "sample_dir = get_sample_data(\"simple\")\n", "native_dir = sample_dir / \"Native Recording Data\"\n", "cloud_dir = sample_dir / \"Timeseries Data + Scene Video\"\n", "print(native_dir)" ] }, { "cell_type": "markdown", "id": "c9ff88da", "metadata": {}, "source": [ "A dataset in native format has the following directory structure:" ] }, { "cell_type": "code", "execution_count": 2, "id": "5241e768", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Native Recording Data/\n", "├─simple1-56fcec49/\n", "│ ├─android.log.zip\n", "│ ├─blinks ps1.raw\n", "│ ├─blinks ps1.time\n", "│ ├─blinks.dtype\n", "│ ├─calibration.bin\n", "│ ├─event.time\n", "│ ├─event.txt\n", "│ ├─extimu ps1.raw\n", "│ ├─extimu ps1.time\n", "│ ├─eye_state ps1.raw\n", "│ ├─eye_state ps1.time\n", "│ ├─eye_state.dtype\n", "│ ├─fixations ps1.raw\n", "│ ├─fixations ps1.time\n", "│ ├─fixations.dtype\n", "│ ├─gaze ps1.raw\n", "│ ├─gaze ps1.time\n", "│ 
├─gaze.dtype\n", "│ ├─gaze_200hz.raw\n", "│ ├─gaze_200hz.time\n", "│ ├─gaze_right ps1.raw\n", "│ ├─gaze_right ps1.time\n", "│ ├─imu ps1.raw\n", "│ ├─imu ps1.time\n", "│ ├─imu.dtype\n", "│ ├─imu.proto\n", "│ ├─info.json\n", "│ ├─manifest.json\n", "│ ├─manifest.json.crc\n", "│ ├─Neon Scene Camera v1 ps1.mp4\n", "│ ├─Neon Scene Camera v1 ps1.time\n", "│ ├─Neon Scene Camera v1 ps1.time_aux\n", "│ ├─Neon Sensor Module v1 ps1.mp4\n", "│ ├─Neon Sensor Module v1 ps1.time\n", "│ ├─Neon Sensor Module v1 ps1.time_aux\n", "│ ├─Neon Sensor Module v1_sae_log_1.bin\n", "│ ├─template.json\n", "│ ├─wearer.json\n", "│ ├─worn ps1.raw\n", "│ ├─worn.dtype\n", "│ └─worn_200hz.raw\n", "└─simple2-6ca28606/\n", " ├─android.log.zip\n", " ├─blinks ps1.raw\n", " ├─blinks ps1.time\n", " ├─blinks.dtype\n", " ├─calibration.bin\n", " ├─event.time\n", " ├─event.txt\n", " ├─extimu ps1.raw\n", " ├─extimu ps1.time\n", " ├─eye_state ps1.raw\n", " ├─eye_state ps1.time\n", " ├─eye_state.dtype\n", " ├─fixations ps1.raw\n", " ├─fixations ps1.time\n", " ├─fixations.dtype\n", " ├─gaze ps1.raw\n", " ├─gaze ps1.time\n", " ├─gaze.dtype\n", " ├─gaze_200hz.raw\n", " ├─gaze_200hz.time\n", " ├─gaze_right ps1.raw\n", " ├─gaze_right ps1.time\n", " ├─imu ps1.raw\n", " ├─imu ps1.time\n", " ├─imu.dtype\n", " ├─imu.proto\n", " ├─info.json\n", " ├─manifest.json\n", " ├─manifest.json.crc\n", " ├─Neon Scene Camera v1 ps1.mp4\n", " ├─Neon Scene Camera v1 ps1.time\n", " ├─Neon Scene Camera v1 ps1.time_aux\n", " ├─Neon Sensor Module v1 ps1.mp4\n", " ├─Neon Sensor Module v1 ps1.time\n", " ├─Neon Sensor Module v1 ps1.time_aux\n", " ├─Neon Sensor Module v1_sae_log_1.bin\n", " ├─template.json\n", " ├─wearer.json\n", " ├─worn ps1.raw\n", " ├─worn.dtype\n", " └─worn_200hz.raw\n" ] } ], "source": [ "from seedir import seedir\n", "\n", "seedir(native_dir)" ] }, { "cell_type": "markdown", "id": "a9536f10", "metadata": {}, "source": [ "PyNeon provides a `Dataset` class to represent a collection of recordings. 
A dataset can contain one or more recordings. Here, we instantiate a `Dataset` by providing the path to the native-format data directory." ] }, { "cell_type": "code", "execution_count": 3, "id": "c9723b64", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Dataset | 2 recordings\n" ] } ], "source": [ "dataset = Dataset(native_dir)\n", "print(dataset)" ] }, { "cell_type": "markdown", "id": "4969350a", "metadata": {}, "source": [ "`Dataset` provides index-based access to its recordings through the `recordings` attribute, which contains a list of `Recording` instances. Individual recordings can be accessed by index:" ] }, { "cell_type": "code", "execution_count": 4, "id": "e912369a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "C:\\Users\\qian.chu\\Documents\\GitHub\\PyNeon\\data\\simple\\Native Recording Data\\simple1-56fcec49\n" ] } ], "source": [ "rec = dataset[0] # Internally accesses the recordings attribute\n", "print(type(rec))\n", "print(rec.recording_dir)" ] }, { "cell_type": "markdown", "id": "22c182e6", "metadata": {}, "source": [ "Alternatively, you can load a single `Recording` directly by passing the recording's folder path to the `Recording` constructor, e.g. `Recording(native_dir / \"simple1-56fcec49\")`." ] }, { "cell_type": "markdown", "id": "542cd4f0", "metadata": {}, "source": [ "## Recording Metadata and Data Access\n", "\n", "You can quickly obtain an overview of a `Recording` by printing the instance. This displays basic metadata (data format, recording ID, wearer ID and name, recording start time, and duration) and the paths to available data files. Note that at this point, data files are located but not yet loaded into memory."
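] }, { "cell_type": "markdown", "id": "7f3a9c21", "metadata": {}, "source": [ "These overview fields are stored in the recording's metadata files, such as `info.json` and `wearer.json` in the directory listing above. As a standard-library-only sketch of inspecting them directly (an illustration, not PyNeon's actual implementation; the exact keys inside vary by Companion app version):" ] }, { "cell_type": "code", "execution_count": null, "id": "9b2e4f10", "metadata": {}, "outputs": [], "source": [
 "import json\n",
 "from pathlib import Path\n",
 "\n",
 "def read_native_metadata(recording_dir):\n",
 "    # Load the raw metadata JSON files shipped with a native recording\n",
 "    recording_dir = Path(recording_dir)\n",
 "    info = json.loads((recording_dir / \"info.json\").read_text())\n",
 "    wearer = json.loads((recording_dir / \"wearer.json\").read_text())\n",
 "    return info, wearer\n",
 "\n",
 "# e.g. info, wearer = read_native_metadata(native_dir / \"simple1-56fcec49\")"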
] }, { "cell_type": "code", "execution_count": 5, "id": "fee66dd9", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "Data format: native (version: 2.5)\n", "Recording ID: 56fcec49-d660-4d67-b5ed-ba8a083a448a\n", "Wearer ID: 028e4c69-f333-4751-af8c-84a09af079f5\n", "Wearer name: Pilot\n", "Recording start time: 2025-12-18 17:13:49.460000\n", "Recording duration: 8235000000 ns (8.235 s)\n", "\n" ] } ], "source": [ "print(rec)" ] }, { "cell_type": "markdown", "id": "ae2aa2df", "metadata": {}, "source": [ "## Format-Agnostic API: Accessing Data\n", "\n", "One of PyNeon's key strengths is its **format-agnostic API**. Whether your data is in native or cloud format, the same code works identically. This means you can write analysis pipelines that work seamlessly with either format. Below, we demonstrate accessing data from this native recording using the same approach as the cloud format tutorial.\n", "\n", "Individual data streams can be accessed as properties of the `Recording` instance. For example, `recording.gaze` retrieves gaze data and loads it into memory. If you attempt to access unavailable data, PyNeon returns `None` and issues a warning message." 
] }, { "cell_type": "code", "execution_count": 6, "id": "595c91b4", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Stream type: gaze\n", "Number of samples: 1048\n", "First timestamp: 1766074431275967547\n", "Last timestamp: 1766074436535834547\n", "Uniformly sampled: False\n", "Duration: 5.26 seconds\n", "Effective sampling frequency: 199.05 Hz\n", "Nominal sampling frequency: 200 Hz\n", "Columns: ['gaze x [px]', 'gaze y [px]', 'worn', 'azimuth [deg]', 'elevation [deg]']\n", "\n", "Events type: saccades\n", "Number of samples: 11\n", "Columns: ['start timestamp [ns]', 'end timestamp [ns]', 'amplitude [px]', 'amplitude [deg]', 'mean velocity [px/s]', 'peak velocity [px/s]', 'duration [ms]']\n", "\n", "Video name: Neon Scene Camera v1 ps1.mp4\n", "Video height: 1200 px\n", "Video width: 1600 px\n", "Number of frames: 153\n", "First timestamp: 1766074431584148547\n", "Last timestamp: 1766074436631408547\n", "Duration: 5.05 seconds\n", "Effective FPS: 30.11\n", "\n" ] } ], "source": [ "# Gaze and fixation data are available\n", "gaze = rec.gaze\n", "print(gaze)\n", "\n", "saccades = rec.saccades\n", "print(saccades)\n", "\n", "scene_video = rec.scene_video\n", "print(scene_video)" ] }, { "cell_type": "markdown", "id": "892d1a1f", "metadata": {}, "source": [ "Note that accessing native data may trigger on-the-fly conversion from raw binary files (`.raw`, `.time`, `.dtype`) to DataFrames. PyNeon handles this transparently, so the resulting data structures are identical to cloud format data." 
] }, { "cell_type": "code", "execution_count": 7, "id": "4db25d49", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " gaze x [px] gaze y [px] worn azimuth [deg] \\\n", "timestamp [ns] \n", "1766074431275967547 731.885864 503.253845 -1 -4.384848 \n", "1766074431280967547 735.500916 502.152618 -1 -4.152129 \n", "1766074431285967547 735.843140 499.517426 -1 -4.130098 \n", "1766074431290967547 735.056641 502.690063 -1 -4.180729 \n", "1766074431295967547 736.322205 501.840668 -1 -4.099258 \n", "\n", " elevation [deg] \n", "timestamp [ns] \n", "1766074431275967547 6.207878 \n", "1766074431280967547 6.278540 \n", "1766074431285967547 6.447632 \n", "1766074431290967547 6.244054 \n", "1766074431295967547 6.298557 \n", "gaze x [px] float64\n", "gaze y [px] float64\n", "worn Int8\n", "azimuth [deg] float64\n", "elevation [deg] float64\n", "dtype: object\n" ] } ], "source": [ "print(gaze.data.head())\n", "print(gaze.data.dtypes)" ] }, { "cell_type": "markdown", "id": "dbb52643", "metadata": {}, "source": [ "## Converting Native Data to Cloud Format\n", "\n", "A common workflow is to convert native format data to cloud format for easier sharing or integration with other tools. 
PyNeon provides the `export_cloud_format()` method to accomplish this seamlessly.\n", "\n", "The conversion process:\n", "- Reads native binary files and converts them to CSV format\n", "- Preserves all data integrity and metadata\n", "- Outputs a standardized directory structure compatible with Pupil Cloud\n", "\n", "Let's export this recording to cloud format:" ] }, { "cell_type": "code", "execution_count": 8, "id": "3e21e568", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Successfully exported to: C:\\Users\\qian.chu\\Documents\\GitHub\\PyNeon\\source\\tutorials\\export\n" ] } ], "source": [ "from pathlib import Path\n", "\n", "# Define output directory for cloud format data\n", "export_dir = Path(\"./export\")\n", "\n", "# Export the native recording to cloud format\n", "rec.export_cloud_format(export_dir, rebase=False)\n", "print(f\"Successfully exported to: {export_dir.resolve()}\")" ] }, { "cell_type": "markdown", "id": "5572200a", "metadata": {}, "source": [ "Let's verify the exported cloud format directory structure:" ] }, { "cell_type": "code", "execution_count": 9, "id": "545e2a8e", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "export/\n", "├─3d_eye_states.csv\n", "├─blinks.csv\n", "├─events.csv\n", "├─fixations.csv\n", "├─gaze.csv\n", "├─imu.csv\n", "├─info.json\n", "├─Neon Scene Camera v1 ps1.mp4\n", "├─saccades.csv\n", "├─scene_camera.json\n", "├─template.csv\n", "└─world_timestamps.csv\n" ] } ], "source": [ "seedir(export_dir)" ] }, { "cell_type": "markdown", "id": "cb878390", "metadata": {}, "source": [ "Now we can load and use the exported data with the same PyNeon API:" ] }, { "cell_type": "code", "execution_count": 10, "id": "0997a777", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Exported gaze data (first 5 rows):\n", " gaze x [px] gaze y [px] worn azimuth [deg] \\\n", "timestamp [ns] \n", "1766074431275967547 731.885864 
503.253845 -1 -4.384848 \n", "1766074431280967547 735.500916 502.152618 -1 -4.152129 \n", "1766074431285967547 735.843140 499.517426 -1 -4.130098 \n", "1766074431290967547 735.056641 502.690063 -1 -4.180729 \n", "1766074431295967547 736.322205 501.840668 -1 -4.099258 \n", "\n", " elevation [deg] \n", "timestamp [ns] \n", "1766074431275967547 6.207878 \n", "1766074431280967547 6.278540 \n", "1766074431285967547 6.447632 \n", "1766074431290967547 6.244054 \n", "1766074431295967547 6.298557 \n", "\n", "Data shapes match: True\n" ] } ], "source": [ "# Load the exported cloud format data\n", "rec_cloud = Recording(export_dir)\n", "gaze_cloud = rec_cloud.gaze\n", "\n", "# Verify that the data is identical\n", "print(\"Exported gaze data (first 5 rows):\")\n", "print(gaze_cloud.data.head())\n", "print(\"\\nData shapes match:\", gaze.data.shape == gaze_cloud.data.shape)" ] } ], "metadata": { "kernelspec": { "display_name": "pyneon", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.13.11" } }, "nbformat": 4, "nbformat_minor": 5 }