Touchscreen Visual Discrimination

Overview

Touchscreen visual discrimination is a translational cognitive task in which rodents learn to discriminate between two visual stimuli displayed on a touch-sensitive screen, responding to the rewarded stimulus (S+) and withholding responses to the unrewarded stimulus (S−). This paradigm uses the same touchscreen operant approach as human cognitive test batteries such as CANTAB, enabling direct cross-species comparison of learning, memory, and executive function.

The task is highly flexible: by varying stimulus similarity, the number of stimuli, or imposing reversal contingencies, the protocol can probe visual perceptual learning, stimulus-response association, cognitive flexibility, and pattern separation. Touchscreen tasks are particularly valued because they do not rely on spatial navigation or swimming ability, making them suitable for models where motor or stress confounds limit water maze or T-maze approaches.

ConductMaze controls the touchscreen operant chamber: it presents visual stimuli with pseudorandom left-right positioning, detects touch responses via IR sensors, delivers pellet rewards for correct choices, and imposes correction trial or timeout penalties for errors. The software supports configurable training stages, criterion-based advancement, and simultaneous multi-chamber operation for high-throughput phenotyping.
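The pseudorandom left-right positioning mentioned above can be sketched as follows. This is a minimal illustration, not ConductMaze's actual implementation: the function name, the `max_run` parameter, and the rejection-sampling approach are assumptions. The idea is to balance S+ placement across the session while capping same-side runs, so animals cannot succeed with a side-bias strategy.

```python
import random

def pseudorandom_sides(n_trials, max_run=3, rng=None):
    """Generate a balanced left/right sequence for S+ placement.

    Half the trials place S+ on each side (left gets the smaller
    half for odd n), and candidate sequences are re-drawn until no
    side repeats more than `max_run` times in a row.
    """
    rng = rng or random.Random()
    half = n_trials // 2
    while True:
        sides = ["left"] * half + ["right"] * (n_trials - half)
        rng.shuffle(sides)
        # reject sequences containing a same-side run longer than max_run
        run, longest = 1, 1
        for a, b in zip(sides, sides[1:]):
            run = run + 1 if a == b else 1
            longest = max(longest, run)
        if longest <= max_run:
            return sides
```

Rejection sampling keeps the sequence statistics simple (a uniform draw over all balanced, run-capped sequences) at the cost of occasional re-draws, which is negligible at typical session lengths.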

Trial Flow

1. Initiation: the subject nose-pokes in the magazine to start the trial.
2. Stimulus Display: two visual stimuli (S+ and S−) appear on the touchscreen.
3. Touch Response: the subject touches one of the two stimuli.
4. Outcome:
   • Correct (S+): tone, magazine light, and pellet delivery.
   • Incorrect (S−): house light off, timeout, and a correction trial.
5. Magazine Collection: the subject collects the reward to initiate the next trial.
6. Session End: the session ends when the maximum trial count or session time limit is reached.
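The trial flow above can be sketched as a simple session loop. This is a hedged illustration, not ConductMaze code: the function names, the `choose` callback (standing in for the subject's touch), and the record keys are all assumptions, and hardware events (tone, lights, timeout) are omitted. It does capture one key rule from the flow: after an incorrect response, the same stimulus positions repeat as correction trials until the subject responds correctly.

```python
def run_session(splus_sides, choose, max_trials=30):
    """Run a minimal discrimination session.

    `splus_sides` lists the S+ side for each first presentation;
    `choose(side)` returns the side the subject touched. Correction
    trials repeat the same S+ side until answered correctly and are
    flagged so they can be excluded from accuracy (see Metrics).
    """
    records = []
    for side in splus_sides[:max_trials]:
        correction = False
        while True:
            touched = choose(side)
            correct = touched == side
            records.append({"splus_side": side, "touched_side": touched,
                            "correct": correct, "correction": correction})
            if correct:
                break
            correction = True  # repeat the same positions as a correction trial
    return records
```

A real controller would also enforce the session time limit and insert the timeout and inter-trial interval between presentations; those are left out to keep the control flow readable.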

Parameters

Parameter | Type | Default | Description
S+ Stimulus | enum | Marble | Rewarded visual stimulus image
S− Stimulus | enum | Fan | Unrewarded visual stimulus image
Max Trials | integer | 30 | Maximum trials per session
Max Session Time | seconds | 3600 | Session time limit
Criterion | enum | 80% over 2 days | Accuracy threshold for acquisition
Correction Trials | boolean | Yes | Repeat stimulus positions after an incorrect response
Timeout Duration | seconds | 5 | Penalty period after an incorrect response
ITI Duration | seconds | 20 | Inter-trial interval after reward collection
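The default acquisition criterion (80% accuracy over 2 consecutive days) can be checked with a short helper. This is a sketch under the assumption that per-session accuracies are available as fractions; the function name and signature are illustrative, not part of ConductMaze.

```python
def met_criterion(session_accuracies, threshold=0.80, days=2):
    """Return True once the last `days` consecutive sessions all
    meet or exceed `threshold` (default: 80% over 2 days)."""
    recent = session_accuracies[-days:]
    return len(recent) == days and all(a >= threshold for a in recent)
```

Requiring consecutive sessions, rather than a single session, guards against advancing an animal on one lucky day.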

Metrics

Metric | Unit | Description
Percent Correct | % | Correct non-correction trials / total non-correction trials; the primary learning measure
Sessions to Criterion | count | Number of sessions to reach the accuracy criterion
Correction Trials | count | Total correction trials needed; an index of perseveration
Response Latency | seconds | Mean time from stimulus display to touch
Magazine Latency | seconds | Time from correct touch to reward collection
Bias Index | ratio | Left vs. right response bias (0.5 = no bias)
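Two of the metrics above can be computed directly from per-trial records. A minimal sketch, assuming each trial is a dict with hypothetical keys `correct`, `correction`, and `touched_side` (not ConductMaze's actual data format): Percent Correct excludes correction trials, while the Bias Index is computed over all touches.

```python
def summarize(trials):
    """Compute Percent Correct and Bias Index from trial records.

    Percent Correct counts only first presentations (correction
    trials excluded); Bias Index is the fraction of all touches
    made to the left stimulus, so 0.5 means no side bias.
    """
    first = [t for t in trials if not t["correction"]]
    pct_correct = 100.0 * sum(t["correct"] for t in first) / len(first)
    lefts = sum(t["touched_side"] == "left" for t in trials)
    return {"pct_correct": pct_correct, "bias_index": lefts / len(trials)}
```

Excluding correction trials matters: because an error forces repeats of the same configuration, including them would let perseverative errors drag accuracy in ways unrelated to discrimination learning.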

Sample Data

Subject | Group | Session | Pct_Correct | Correction_Trials | Resp_Latency_s | Mag_Latency_s

Representative data for illustration purposes. Actual values will vary by species, strain, and experimental conditions.

Applications

1. Alzheimer's disease: visual discrimination and reversal learning deficits in transgenic models
2. Intellectual disability: learning-impairment phenotyping in genetic models (e.g., Fmr1 KO, Ts65Dn)
3. Psychopharmacology: procognitive compound screening with translational face validity
4. Aging: age-related associative learning decline without motor or stress confounds
5. Comparative cognition: cross-species learning comparisons using identical stimuli and contingencies

Compatible Products

ME-OC-BASE, ME-OC-PELLET

Ready to Automate Your Behavioral Protocols?

Contact us for a demo and pricing information.