The Primate Virtual Star Maze is a visual exploration-based navigation task used for observing navigational behaviors in primates. The Primate Virtual Star Maze, in general, is designed as a central decision point with five arms radiating outwards with equal distances between them.

The virtual maze allows easy manipulation of the environment, such as introducing cues and landmarks, without disturbing the experiment. Additionally, the virtual maze makes it possible to track eye position and record neural activity as the animal performs the task without causing any hindrance.

Mazeengineers offers the Primate Virtual Star Maze.

Price & Dimensions

Primate Virtual Star Maze

$990

One maze
  • Radius of maze: 16m
  • Speed of displacement: 5m/s
  • Length of screen for display of virtual task: 152cm
  • Width of screen for display of virtual task: 114cm
  • Distance of screen from subjects: 101cm

Documentation

Introduction

The Primate Virtual Star Maze is a visual exploration-based navigation task used for observing navigational behaviors in primates. The virtual maze is an adaptation of conventional arm-based navigation and exploration mazes, such as the Rodent Water Star Maze, the Rodent Radial Arm Maze, and the Pig 8-Arm Radial Maze. The Primate Virtual Star Maze, in general, is designed as a central decision point with five arms radiating outwards with equal distances between them.

Like human virtual mazes (for human virtual mazes, see Simian Virtual Reality Mazes), the Primate Virtual Star Maze gives the experimenter greater control over the test parameters and environment design. The virtual maze allows easy manipulation of the environment, such as introducing cues and landmarks, without disturbing the experiment. Additionally, the virtual maze makes it possible to track eye position and record neural activity as the animal performs the task, without hindering it. While the virtual reality task has its benefits, training the animals to use the equipment can be time-consuming.

Training Protocol

The Primate Virtual Star Maze task can be varied in terms of design and parameters assessed in an investigation. In general, the protocol involves shaping trials and the Star Maze trials. The shaping trials help familiarize the subject with the equipment, virtual environments, and virtual navigation protocol using reward incentives. The Star Maze trials are usually reward-based trials where the subject is tasked with finding the correct arm. The following is a sample protocol for the Virtual Star Maze task:

  • Shaping Trials: In order to familiarize the subjects with the VR set-up, animals are first trained to find reward targets by controlling a virtual sphere in a two-dimensional virtual environment. Following the successful learning of the 2D task, animals are trained in a Virtual Y-Maze. Initially, animals are trained to approach the sphere in this environment without any cues present. Next, landmarks are introduced into the maze along with the sphere, and animals are tasked to approach the sphere. These trials are then followed by test trials wherein the subjects are tasked with approaching the landmark where the sphere was last present in order to receive a reward.
  • Virtual Star Maze Trials: Following successful shaping trials, animals are introduced to the Virtual Star Maze with the animal starting in one of the arms facing the maze. The animals are tasked with finding the rewarded arm based on the landmarks present in between the arm ends. The animals begin the next trial in a new arm following a correct (rewarded) or incorrect (not rewarded) arm entry.
  • Probe Trials: Probe trials may be performed to assess successful learning of the Star Maze task. Probe trial sessions include learning trials and test trials performed using the same pattern of landmark placement. In the learning trials, only one or two of the four possible start arms are used. The animals begin the trials in the selected start arm and learn to navigate towards the rewarded arm. Once the learning trials are completed, the start point is changed to one of the previously unused arms, and the ability of the animal to navigate to the rewarded arm is observed.
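The trial structure described above can be sketched as a simple session loop. This is an illustrative sketch only; the arm indices, the random start-arm rule, and the `simulate_subject_choice` stand-in for the animal's behavior are assumptions for the example, not the implementation used in published studies.

```python
import random

ARMS = list(range(5))  # five arms radiating from the central decision point

def simulate_subject_choice(start_arm, rewarded_arm, p_correct=0.8):
    """Stand-in for the animal's behavior: correct with probability p_correct."""
    if random.random() < p_correct:
        return rewarded_arm
    return random.choice([a for a in ARMS if a != rewarded_arm])

def run_session(rewarded_arm, n_trials=80):
    """Run a block of trials; each trial starts in a randomly chosen arm.

    A correct (rewarded) or incorrect (not rewarded) arm entry ends the
    trial, and the next trial begins in a newly selected start arm.
    """
    results = []
    for _ in range(n_trials):
        start_arm = random.choice([a for a in ARMS if a != rewarded_arm])
        chosen_arm = simulate_subject_choice(start_arm, rewarded_arm)
        results.append({"start": start_arm,
                        "choice": chosen_arm,
                        "correct": chosen_arm == rewarded_arm})
    return results
```

A session log produced this way can then be scored for the behavioral measures listed below (first choice, percentage of correct choices, and so on).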

Behavioral Observations and Task Data

The observed parameters and recorded data vary with the investigatory aims. In general, behavioral measures can include the following:

  • Latency to initiate the task
  • First choice
  • Percentage of correct choices
  • Percentage of incorrect choices
  • Navigation accuracy
  • Time spent in the correct zone
  • Time spent in the incorrect zone
  • Distance traveled
  • Trial duration
  • Frequency of backtracking
  • Navigation strategy used
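As a minimal example of how such measures are derived, the snippet below computes two of the listed measures (percentage of correct and incorrect choices) from a per-trial log. The record format, a list of dicts with a boolean `correct` field, is an assumption made for illustration.

```python
def choice_percentages(trials):
    """Return (percent correct, percent incorrect) from a list of trial
    records, each a dict with a boolean 'correct' field (assumed format)."""
    n = len(trials)
    if n == 0:
        return 0.0, 0.0
    correct = sum(t["correct"] for t in trials)
    pct_correct = 100.0 * correct / n
    return pct_correct, 100.0 - pct_correct

log = [{"correct": True}, {"correct": False},
       {"correct": True}, {"correct": True}]
print(choice_percentages(log))  # (75.0, 25.0)
```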

Based on the requirements of the investigation, EEG data may also be recorded. Other measures relevant to the investigation may include assessments of stress, anxiety, and heart rate, among others. Ancillary observations such as eye-tracking may also be recorded.

Literature Review

Investigation of gaze-informed wayfinding in rhesus macaques

Objective: Wirth, Baraduc, Planté, Pinède, and Duhamel (2017) investigated the navigation behavior, visual exploration, and hippocampal activity of macaque monkeys searching for reward in the Primate Virtual Star Maze.
Subjects: Two rhesus macaque monkeys.
Maze Design: Along with the Virtual Star Maze, two additional virtual environments were used for the shaping trials in the investigation: a two-dimensional environment consisting of a controllable sphere, and a Virtual Y-Maze that included a virtual sphere and landmark cues.


The Virtual Star Maze environment was created as an outdoor environment with blue skies and 5 paths laid in a grass field radiating out of a central point. The maze had a radius of 16 m, and the speed of displacement was 5 m/s. Different landmarks, such as a house, a flower, and a crescent moon, were placed between two paths. The landmarks were changed each day.
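The reported parameters fix the maze geometry: five equally spaced arms of 16 m radius imply an angle of 72° between adjacent arms, and at 5 m/s a straight run from the center to an arm end takes 16 / 5 = 3.2 s. A small sketch of that arithmetic (the choice of arm 0 pointing along +x is an arbitrary convention for the example):

```python
import math

RADIUS_M = 16.0   # maze radius reported above
SPEED_M_S = 5.0   # speed of displacement reported above
N_ARMS = 5

# End coordinates of each arm on a circle of radius 16 m,
# with arm 0 arbitrarily pointing along the +x axis.
arm_ends = [(RADIUS_M * math.cos(2 * math.pi * k / N_ARMS),
             RADIUS_M * math.sin(2 * math.pi * k / N_ARMS))
            for k in range(N_ARMS)]

# Minimum time to travel from the center to an arm end at full speed.
traversal_s = RADIUS_M / SPEED_M_S
print(traversal_s)  # 3.2
```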


The virtual task was displayed on a 152 × 114 cm screen placed at a distance of 101 cm from the subjects. The animals navigated the virtual maze using a joystick.
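From the screen size and viewing distance given above, one can estimate the visual angle the display subtends at the subject's eye; the small-geometry formula below assumes the screen is flat and centered on the line of sight.

```python
import math

def visual_angle_deg(extent_cm, distance_cm):
    """Full visual angle subtended by an extent centered on the line of sight."""
    return math.degrees(2 * math.atan((extent_cm / 2) / distance_cm))

# Screen of 152 x 114 cm viewed from 101 cm, as reported above.
horizontal = visual_angle_deg(152, 101)
vertical = visual_angle_deg(114, 101)
print(round(horizontal, 1), round(vertical, 1))  # 73.9 58.9
```

So the display covers roughly 74° x 59° of the visual field, a wide-field view suited to immersive virtual navigation.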


The animals were head-restrained and equipped with active shutter glasses, which allowed 3D projection of the virtual environment, and two infrared cameras above each eye. When the correct area was found, the reward was delivered via a juice dispenser directly into the animal's mouth.

Ancillary Protocols & Recordings:
  • Movement of the pupils of each eye was monitored and utilized for gaze-mapping.
  • Electrophysiological recordings

Procedure: Both subjects underwent shaping trials over the course of 6 months, as described in the training protocol.


Following successful completion of the shaping trials, the animals were tested in the Virtual Star Maze. Each day, the animals were trained to learn a new arrangement of landmarks in order to locate the rewarded area. The animals started in one of the arms, randomly chosen for each trial, facing the maze. If the subject reached the end of the correct arm, it was rewarded with juice delivered directly to its mouth. An incorrect choice was not rewarded. The next trial was initiated at another randomly selected arm regardless of the choice made. Approximately 80 trials were administered per day. Additionally, both subjects also underwent probe sessions, as described in the training protocol.

Results: Both subjects quickly acquired the task and displayed flexible trajectory planning. The allocentric point-of-gaze density maps revealed that the subjects' gaze anticipated the direction of subsequent movement 500 ms prior to joystick action. Both animals were observed to proactively gaze at the rewarded path and the landmarks, even when trials were initiated from a new entry point.


Hippocampal cell firing during the task performances suggested the use of a combination of the current sensory state with a goal-related action context. Thus, the neural representation of the maze in the monkeys’ hippocampus was expressed as an abstract, multidimensional representation of self-position.

References

Wirth, S., Baraduc, P., Planté, A., Pinède, S., & Duhamel, J.-R. (2017). Gaze-informed, task-situated representation of space in primate hippocampus during virtual navigation. PLOS Biology, 15(2), e2001045. doi:10.1371/journal.pbio.2001045

Request a quote

"*" indicates required fields

This field is for validation purposes and should be left unchanged.