The Virtual Stereosonic Task is a navigation-based assay for evaluating Sensory Substitution Devices (SSDs). It assesses how effectively auditory SSDs convert spatial information into reliable auditory navigational information.

The virtual task allows the creation of environments that are authentic to the real world while allowing the investigator to control the navigational difficulty. The virtual task also reduces the stress and anxiety associated with navigation among the visually impaired or blind.

Mazeengineers offers the Virtual Stereosonic Task Apparatus.

Request a Virtual Stereosonic Task Apparatus

Price & Dimensions

Virtual Stereosonic Task Apparatus

$ 2290

+S&H
  • Length of virtual maze grid arena: 5m
  • Width of virtual maze grid arena: 7m
  • Overall length of arena: 15m
  • Overall width of arena: 21m
  • Overall height of arena: 3m
  • Sides of cube for path: 3m
  • Width of virtual obstacle corridor: 6m
  • Empty space: 3m
  • Segment of obstacles: 7m
  • Diameter of obstacles: 0.8m
  • Height of obstacles: 1m

Documentation

Introduction

Vision plays a primary role in navigation, and loss or impairment of vision can have a significant impact on an individual's quality of life. Visual impairments not only make navigation difficult but also make the task daunting and stressful. Thus, Sensory Substitution Devices (SSDs) have a crucial role to play in independent navigation by reducing dependence on others and the anxiety associated with the task. Hence, these devices need to be assessed in meaningful and safe environments, such as that offered by the Virtual Stereosonic Task.

The Virtual Stereosonic Task is a navigation-based assay that allows the evaluation of Sensory Substitution Devices. Traditional virtual navigational assays in humans (see Simian Virtual Reality Mazes), such as the Virtual Morris Water Maze and the Virtual Radial Arm Maze, are often translated from animal models of navigation assessment. However, these assays do not offer the everyday environments encountered by humans, nor do they allow assessing the full potential of SSDs. The Virtual Stereosonic Task assesses how effectively auditory SSDs convert spatial information into reliable auditory navigational information. The virtual task allows the creation of environments that are authentic to the real world while allowing the investigator to control the navigational difficulty. The virtual task also reduces the stress and anxiety associated with navigation among the visually impaired or blind.

Training protocol

Participants are informed of the experimental process beforehand. Each participant's comfort with the virtual reality technology used is also noted, as it could influence performance. Ancillary tests may also be part of the investigation.

Prior to actual testing in the Virtual Stereosonic Task, participants go through familiarization stages with the virtual environment as well as the audio sounds/SSDs that will be used to assist with the navigation task.

  • Visual-only condition stage: This stage is used to visually familiarize sighted participants with the virtual environments used in the task. Participants are expected to navigate the maze environments using both visual and auditory information (usually an audio cue emitted from the goal). Performance in this stage usually serves as the baseline.
  • Spatial audio training: The training is performed using audio information only in order to familiarize the participants with spatial sounds. Participants are instructed to explore the environment and reach the endpoint/goal of the maze in an obstacle-free environment.
  • Sonification trials: The trials are used for evaluation of the participant’s navigation and obstacle avoidance ability while relying solely on the SSD. The participants are at first familiarized with the task using a simplified version of the virtual navigation maze (usually containing a single obstacle). Following the familiarization process, participants are tested in the complex version of the virtual navigation maze.

Behavioral Observations and Task Data

Observed behaviors and task data may vary depending on the investigatory aims and the complexity of the environments used in the Virtual Stereosonic Task. In general, behavioral measures can include the following:

  • Latency to initiate the task
  • Time taken to reach the goal
  • Navigational strategy used
  • Navigation accuracy
  • Navigation speed
  • Number of collisions
  • Number of head rotations
  • Head rotation angle
  • Distance traveled
  • Deviation distance
  • Trial duration

Based on the requirements of the investigation, EEG data may also be recorded. Other measures (relevant to the investigation) may include assessment of stress, anxiety, and heart rate levels, among others. Ancillary questionnaires may also be used to further refine the data and the understanding of the task performance. (For digital health research tools visit Qolty).
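Several of these measures can be computed directly from the tracked trajectory. The sketch below assumes positions are logged as (x, y) coordinates in meters with matching timestamps; the data format and function name are hypothetical and not tied to any particular tracking system.

```python
import math

def trajectory_metrics(positions, timestamps):
    """Compute basic navigation metrics from a tracked 2D trajectory.

    positions: list of (x, y) coordinates in meters
    timestamps: matching list of sample times in seconds
    """
    # Distance traveled: sum of straight-line segments between samples
    path_length = sum(
        math.dist(positions[i], positions[i + 1])
        for i in range(len(positions) - 1)
    )
    trial_duration = timestamps[-1] - timestamps[0]
    mean_velocity = path_length / trial_duration if trial_duration else 0.0
    return {
        "path_length_m": path_length,
        "trial_duration_s": trial_duration,
        "mean_velocity_mps": mean_velocity,
    }

# Example: a participant walking an L-shaped route over 12 seconds
metrics = trajectory_metrics([(0, 0), (3, 0), (3, 4)], [0.0, 5.0, 12.0])
```

Event-based measures such as collision counts and head rotations would come from the simulation engine's event log rather than the position trace alone.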

Literature Review

Investigation of simulated echolocation and distance-dependent hum volume modulation-based navigation

 

Objective: Massiceti, Hicks, and van Rheede (2018) evaluated two visual-to-audio sensory substitution methods, simulated echolocation and distance-dependent hum volume modulation, using the Virtual Stereosonic Task.
Participants: Participants included 18 volunteers (11 males and 7 females, mean age 28.78 ± 8.00 years) with full sight and full stereo hearing.

 

Participants were rated according to the experience they had with sensory substitution devices (SSDs), first-person-controller computer games, and virtual-reality devices.

Experimental Design: Two virtual environments were used in the Virtual Stereosonic Task:

Maze: The virtual maze was a 5 × 7 grid arena with overall dimensions of 15 × 21 × 3 m. The maze paths were created from virtual cubes with 3 m sides, with each path having a constant length of 7 cubes. The goal in the maze was marked by a golden star. A path was randomly selected from a total of 20 pre-generated mazes.

 

Obstacle corridor: The obstacle corridor environment consisted of a 6 m wide virtual corridor, sequentially divided into 3 m of empty space, a 7 m segment of obstacles, 3 m of empty space, and the goal. The goal was marked by a golden star randomly placed along the corridor's width, 13 m from the starting line. The obstacle segment comprised 5 randomly placed columnar objects, 0.8 m in diameter and 1 m in height. The entire corridor was bounded by left and right walls. The obstacle corridor arrangement was randomly selected from 20 pre-generated arrangements.
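Under the stated dimensions, generating a random corridor arrangement can be sketched as follows. The layout representation and the constraint that obstacles stay fully inside the walls are illustrative assumptions, not the authors' implementation.

```python
import random

# Corridor dimensions taken from the study description (meters)
CORRIDOR_WIDTH = 6.0     # width of the virtual corridor
EMPTY_SPACE = 3.0        # empty space before the obstacle segment
OBSTACLE_SEGMENT = 7.0   # length of the obstacle segment
OBSTACLE_RADIUS = 0.4    # obstacles are 0.8 m in diameter
GOAL_DISTANCE = 13.0     # goal star placed 13 m from the starting line

def generate_corridor(seed=None):
    """Place 5 columnar obstacles at random within the obstacle
    segment, and the goal at a random position along the corridor's
    width. Coordinates are (distance along corridor, lateral offset)."""
    rng = random.Random(seed)
    obstacles = [
        (
            rng.uniform(EMPTY_SPACE + OBSTACLE_RADIUS,
                        EMPTY_SPACE + OBSTACLE_SEGMENT - OBSTACLE_RADIUS),
            rng.uniform(OBSTACLE_RADIUS, CORRIDOR_WIDTH - OBSTACLE_RADIUS),
        )
        for _ in range(5)
    ]
    goal = (GOAL_DISTANCE, rng.uniform(0.0, CORRIDOR_WIDTH))
    return {"obstacles": obstacles, "goal": goal}
```

In the study, 20 such arrangements were pre-generated and one was selected at random per trial; seeding the generator as above would make each arrangement reproducible.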

 

Ancillary Questionnaires:
  • Demographic questionnaire
  • Virtual reality experience/gaming experience
  • Naivety Scale

 

Procedure:

The potential of simulated echolocation and distance-dependent hum volume modulation in assisting navigation was assessed in the Virtual Stereosonic Task. Trials were conducted in a large indoor hall, measuring 20 × 25 m, or on a large flat outdoor lawn. Participants wore a head-mounted tablet to allow wire-free tracking of 3D position and rotation. Additionally, the tablet vibrated when participants collided with obstacles during the navigation task. Participants were trained under 6 experimental conditions: visual, humming, and echolocation, in both the maze and obstacle corridor environments, for a minimum of 6 repeated trials per condition. Participants were allotted a maximum of 150 seconds to complete each trial. Before formal training began, participants underwent familiarization periods in each virtual environment under each sonification condition.
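The distance-dependent hum volume condition can be illustrated with a simple gain function: the hum grows louder as the listener approaches the sound source. The linear attenuation and the maximum audible range below are assumptions for illustration; the study's exact attenuation curve may differ.

```python
def hum_gain(distance_to_goal_m, max_range_m=20.0):
    """Distance-dependent hum volume: gain is 1.0 at the goal and
    falls off linearly to 0.0 at max_range_m. Both the linear falloff
    and the 20 m range are illustrative assumptions."""
    # Clamp the distance to the audible range
    d = min(max(distance_to_goal_m, 0.0), max_range_m)
    return 1.0 - d / max_range_m
```

A navigator could then steer toward increasing gain, which is the behavioral cue this sonification condition provides.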
Results: In both environments, the visual condition outperformed the sonification conditions, with fewer collisions, fewer head rotations, a shorter trial duration, a shorter path length, and a higher mean velocity. No significant differences between the two sonification conditions were observed in the number of collisions, path length, or number of head rotations. However, performances differed in trial duration and mean velocity, with participants navigating faster in the humming condition. In the obstacle corridor environment, participants moved closer to the objects in both sonification conditions, whereas they deviated from the obstacles in the visual condition. Overall, results from both the echolocation and humming conditions revealed that participants' navigational efficiency improved across trials in both the maze and obstacle corridor environments.

 

Investigation of navigational performance using the Virtual-EyeCane

 

Objective: Maidenbaum, Levy-Tzedek, Chebat, and Amedi (2013) evaluated the potential of the Virtual-EyeCane's single-point distance parameter in assisting the navigation of blind and blindfolded participants in a virtual environment. The Virtual-EyeCane was based on the real-world EyeCane and mimicked its IR sensor-based distance-to-sound translation.
Participants: Participants included 23 volunteers (9 males and 14 females, mean age 27.6 ± 8.4 years). Twenty of the participants were sighted, and 3 were congenitally blind. The sighted participants were blindfolded during the study.
Maze Design:

The virtual maze consisted of a single maze path with two 90° turns. A path could be selected out of 4 saved paths. The maze environments had a graphical output to the computer screen, which could be used to track the participant's progress. Distances within the virtual environment were set so that each meter in the virtual environment corresponded to one meter in the real world. The IR sensors in the Virtual-EyeCane measured the distance from objects as the participant navigated the environment and transformed it into sound, such that the shorter the distance from the object, the higher the frequency of the sound. A clapping-hands sound indicated success, while the sound of a person colliding with a wall served as the collision cue. The maze was implemented on a computer, and participants controlled their movements in the maze using the arrow keys on the keyboard.
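The distance-to-sound translation described above can be sketched as a simple inverse mapping: nearer obstacles produce higher-pitched cues. The sensor range and frequency bounds below are illustrative assumptions; the published EyeCane uses its own calibrated values.

```python
def distance_to_frequency(distance_m, max_range_m=5.0,
                          min_hz=200.0, max_hz=2000.0):
    """Map a measured distance to a cue frequency so that shorter
    distances yield higher pitches. The 5 m range and the 200-2000 Hz
    band are illustrative, not the device's published parameters."""
    # Clamp the reading to the sensor's assumed range
    d = min(max(distance_m, 0.0), max_range_m)
    # Linear inverse mapping: 0 m -> max_hz, max_range_m -> min_hz
    return max_hz - (d / max_range_m) * (max_hz - min_hz)
```

Feeding the mapped frequency to a tone generator on each sensor reading would reproduce the rising pitch a user hears while approaching a wall.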
Ancillary Questionnaires:
  • Demographic questionnaire
  • Virtual reality experience

Procedure: Participants underwent a 7-minute familiarization training wherein they explored the virtual environment and the audio cues using the keyboard controls. Three training routes (a straight corridor, a left turn, and a right turn) that were half the size of the regular routes were used for familiarization. Training sessions were accompanied by verbal feedback from the instructor.

 

Each participant performed 24 regular trials comprising 4 different maze levels administered pseudo-randomly, with six consecutive trials performed per level. Participants were instructed to complete the task as quickly as possible while avoiding collisions. At the end of each level, participants were asked to draw the route they had taken. Participants were offered the choice to give up a level and move on to the next; this choice was counted as a failed level.

 

Results: All participants completed all levels of the maze with similar results in terms of time to complete, number of collisions, and distance traveled. As trials progressed, all participants required less time to complete the trial. Participants also had fewer collisions across trials: 62.3% of the trials had 0 collisions, 8.1% had only a single collision, and 6% had more than 10 collisions. Path length improved significantly, though modestly, across trials (first 3 trials vs. final 3 trials). Overall, participants required minimal training to complete the trials successfully. Additionally, the single feedback parameter of the Virtual-EyeCane proved sufficient for effective navigation.

References

  1. Massiceti, D., Hicks, S. L., & van Rheede, J. J. (2018). Stereosonic vision: Exploring visual-to-auditory sensory substitution mappings in an immersive virtual reality navigation paradigm. PLoS ONE, 13(7), e0199389.
  2. Maidenbaum, S., Levy-Tzedek, S., Chebat, D. R., & Amedi, A. (2013). Increasing accessibility to the blind of virtual environments, using a virtual mobility aid based on the "EyeCane": Feasibility study. PLoS ONE, 8(8), e72555.