Exploring, walking, and interacting in virtual reality with simulated low vision: a living contextual dataset
Preprints, Working Papers, ... — Year: 2023

Abstract

We present the CREATTIVE3D dataset of human interaction and navigation at road crossings in virtual reality. The dataset makes three main contributions: (1) it is the largest dataset of human motion in fully annotated scenarios (40 hours, 2.6 million poses); (2) it is captured in dynamic 3D scenes with multivariate gaze, physiology, and motion data; and (3) it investigates the impact of simulated low-vision conditions using dynamic eye tracking under both real-walking and simulated-walking conditions. Extensive effort has been made to ensure the transparency, usability, and reproducibility of the study and the collected data, even under highly complex study conditions involving six-degrees-of-freedom interactions and multiple sensors. We believe this will allow studies using the same or similar protocols to be compared with existing results, and will enable a much more fine-grained analysis of individual nuances of user behavior across datasets and study designs. This is what we call a living contextual dataset.
Main file

2023_CREATTIVE3D_dataset_arxiv_.pdf (3.27 MB)
Origin: Files produced by the author(s)
License: CC BY - Attribution

Dates and versions

hal-04429351, version 1 (31-01-2024)

Licence

Attribution

Identifiers

  • HAL Id: hal-04429351, version 1

Cite

Hui-Yin Wu, Florent Alain Sauveur Robert, Franz Franco Gallo, Kateryna Pirkovets, Clément Quere, et al.. Exploring, walking, and interacting in virtual reality with simulated low vision: a living contextual dataset. 2023. ⟨hal-04429351⟩