Touchscreen Visual Discrimination
Overview
Touchscreen visual discrimination is a translational cognitive task in which rodents learn to discriminate between two visual stimuli displayed on a touch-sensitive screen, responding to the rewarded stimulus (S+) and withholding responses to the unrewarded stimulus (S−). This paradigm uses the same touchscreen operant platform as human cognitive testing batteries such as CANTAB, enabling direct cross-species comparison of learning, memory, and executive function.
The task is highly flexible: by varying stimulus similarity, the number of stimuli, or imposing reversal contingencies, the protocol can probe visual perceptual learning, stimulus-response association, cognitive flexibility, and pattern separation. Touchscreen tasks are particularly valued because they do not rely on spatial navigation or swimming ability, making them suitable for models where motor or stress confounds limit water maze or T-maze approaches.
ConductMaze controls the touchscreen operant chamber: it presents visual stimuli with pseudorandom left-right positioning, detects touch responses via IR sensors, delivers pellet rewards for correct choices, and imposes correction trial or timeout penalties for errors. The software supports configurable training stages, criterion-based advancement, and simultaneous multi-chamber operation for high-throughput phenotyping.
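ConductMaze's internal stage-advancement logic isn't published here; as an illustration only, the criterion check ("80% over 2 days" by default) can be sketched as a small helper. The function name and argument layout are assumptions, not the actual API:

```python
def met_criterion(daily_accuracies, threshold=0.80, days=2):
    """Check criterion-based stage advancement (hypothetical helper).

    daily_accuracies: list of per-session accuracies in chronological
    order, as fractions (0.0-1.0). Returns True once accuracy meets the
    threshold on the required number of consecutive sessions, mirroring
    the documented default of 80% over 2 days.
    """
    if len(daily_accuracies) < days:
        return False
    # Only the most recent `days` sessions must all clear the threshold.
    return all(a >= threshold for a in daily_accuracies[-days:])
```

With the defaults, a subject scoring 70%, then 85%, then 90% would advance after the third session, since the last two sessions both exceed 80%.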
Trial Flow
1. Initiation: subject nose-pokes in the magazine to start the trial.
2. Stimulus Display: two visual stimuli (S+ and S−) appear on the touchscreen.
3. Touch Response: subject touches one of the two stimuli.
4. Correct (S+): tone, magazine light, and pellet delivery.
5. Incorrect (S−): house light off, timeout, and a correction trial.
6. Magazine Collection: subject collects the reward to initiate the next trial.
7. Session End: the session ends when the maximum trial count or session time is reached.
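The trial flow above can be sketched as a minimal simulation. This is an illustrative sketch, not ConductMaze's implementation; the function names are hypothetical, and the stimulus names are the documented defaults:

```python
from enum import Enum, auto
import random

class Outcome(Enum):
    CORRECT = auto()
    INCORRECT = auto()

def run_trial(choose_stimulus, s_plus="Marble", s_minus="Fan"):
    """Simulate one discrimination trial (hypothetical sketch).

    choose_stimulus: callable taking (left, right) stimulus names and
    returning the name of the stimulus the subject touched.
    """
    # Pseudorandom left-right positioning of S+ and S-
    left, right = random.sample([s_plus, s_minus], 2)
    touched = choose_stimulus(left, right)
    if touched == s_plus:
        # Real hardware would play a tone, light the magazine,
        # and deliver a pellet here.
        return Outcome.CORRECT
    # Real hardware would turn off the house light, impose the
    # timeout, and queue a correction trial here.
    return Outcome.INCORRECT
```

A subject model that always touches S+ scores `Outcome.CORRECT` on every trial regardless of which side the stimulus appears on, which is the behavior the left-right pseudorandomization is designed to enforce.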
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| S+ Stimulus | enum | Marble | Rewarded visual stimulus image |
| S− Stimulus | enum | Fan | Unrewarded visual stimulus image |
| Max Trials | integer | 30 | Maximum trials per session |
| Max Session Time | seconds | 3600 | Session time limit |
| Criterion | enum | 80% over 2 days | Accuracy threshold for acquisition |
| Correction Trials | boolean | Yes | Repeat stimulus positions after incorrect response |
| Timeout Duration | seconds | 5 | Penalty period after incorrect response |
| ITI Duration | seconds | 20 | Inter-trial interval after reward collection |
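The parameter table above maps naturally onto a session configuration. The sketch below is an assumption about how such a config might be expressed, not ConductMaze's actual file format; key names are hypothetical:

```python
# Defaults taken from the parameter table (names are illustrative).
DEFAULTS = {
    "s_plus": "Marble",
    "s_minus": "Fan",
    "max_trials": 30,
    "max_session_time_s": 3600,
    "criterion": {"accuracy": 0.80, "consecutive_days": 2},
    "correction_trials": True,
    "timeout_s": 5,
    "iti_s": 20,
}

def session_config(**overrides):
    """Merge user overrides onto the documented defaults,
    rejecting unknown parameter names."""
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"unknown parameter(s): {sorted(unknown)}")
    return {**DEFAULTS, **overrides}
```

For example, `session_config(timeout_s=10, max_trials=60)` would lengthen the error timeout and session while leaving the stimuli and criterion at their defaults.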
Metrics
| Metric | Unit | Description |
|---|---|---|
| Percent Correct | % | Correct non-correction trials / total non-correction trials — primary learning measure |
| Sessions to Criterion | count | Number of sessions to reach accuracy criterion |
| Correction Trials | count | Total correction trials needed — perseveration index |
| Response Latency | seconds | Mean time from stimulus display to touch |
| Magazine Latency | seconds | Time from correct touch to reward collection |
| Bias Index | ratio | Left vs right response bias (0.5 = no bias) |
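The metric definitions above can be computed from per-trial records. This is a sketch under an assumed record layout (the dict keys are hypothetical); note that, per the table, Percent Correct and Bias Index exclude correction trials, while the correction-trial count includes them:

```python
def session_metrics(trials):
    """Summarize one session from per-trial records (illustrative).

    Each trial is a dict with keys: correct (bool), correction (bool),
    side ("left" or "right"), resp_latency_s, mag_latency_s
    (latencies may be None when not applicable).
    """
    # Primary measures are computed over non-correction trials only.
    first = [t for t in trials if not t["correction"]]
    n = len(first)
    pct_correct = 100.0 * sum(t["correct"] for t in first) / n if n else 0.0
    corrections = sum(t["correction"] for t in trials)
    # Bias index: proportion of left touches; 0.5 means no side bias.
    bias = sum(t["side"] == "left" for t in first) / n if n else 0.5

    def mean(xs):
        return sum(xs) / len(xs) if xs else None

    resp = mean([t["resp_latency_s"] for t in first
                 if t["resp_latency_s"] is not None])
    mag = mean([t["mag_latency_s"] for t in first
                if t["mag_latency_s"] is not None])
    return {"pct_correct": pct_correct,
            "correction_trials": corrections,
            "bias_index": bias,
            "resp_latency_s": resp,
            "mag_latency_s": mag}
```

Sessions to Criterion is then the index of the first session whose running accuracy history satisfies the acquisition criterion.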
Sample Data
| Subject | Group | Session | Pct_Correct | Correction_Trials | Resp_Latency_s | Mag_Latency_s |
|---|---|---|---|---|---|---|
Representative data for illustration purposes. Actual values will vary by species, strain, and experimental conditions.
Applications
- Alzheimer's disease — visual discrimination and reversal learning deficits in transgenic models
- Intellectual disability — learning impairment phenotyping in genetic models (e.g., Fmr1 KO, Ts65Dn)
- Psychopharmacology — procognitive compound screening with translational face validity
- Aging — age-related associative learning decline without motor or stress confounds
- Comparative cognition — cross-species learning comparisons using identical stimuli and contingencies