Variable Ratio

Overview

The variable ratio (VR) schedule delivers reinforcement after an unpredictable number of responses that varies around a specified mean. For example, VR5 requires an average of five responses per reinforcer, but individual ratios might be 2, 7, 3, 8, 1, 9 — drawn from a distribution centered on 5. This unpredictability produces the highest, most stable response rates of any simple reinforcement schedule.

VR schedules generate persistent, extinction-resistant responding because the subject cannot predict which response will produce reinforcement. This is the schedule underlying gambling behavior in humans and is used extensively in addiction research. The absence of post-reinforcement pauses distinguishes VR from fixed ratio performance.

ConductMaze implements VR schedules using either Fleshler-Hoffman distributions or custom ratio lists. The software randomizes ratio sequences across sessions to prevent pattern learning, logs every response with sub-second precision, and supports within-session VR value changes for parametric studies.
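As an illustration of the Fleshler-Hoffman option, the ratio list can be generated from the Fleshler & Hoffman (1962) progression and shuffled per session. This is a sketch under stated assumptions: the function name, the rounding to integer ratios, and the minimum-ratio clamp are illustrative choices, not ConductMaze's actual implementation.

```python
import math
import random

def fleshler_hoffman(mean, n_values=25):
    """Return n_values integer ratios whose mean approximates `mean`,
    using the Fleshler-Hoffman (1962) progression (0*ln(0) taken as 0)."""
    N = n_values
    values = []
    for n in range(1, N + 1):
        a = (N - n) * math.log(N - n) if N - n > 0 else 0.0
        b = (N - n + 1) * math.log(N - n + 1)
        t = mean * (1 + math.log(N) + a - b)
        values.append(max(1, round(t)))  # clamp: at least one response
    return values

ratios = fleshler_hoffman(5)
random.shuffle(ratios)  # randomize the order so no session repeats a pattern
```

Shuffling the list each session is one way to realize the pattern-prevention behavior described above; the continuous progression has mean exactly equal to the programmed value, so the rounded list stays close to it.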

Trial Flow

1. Session Start: house light ON, levers extended.
2. Active Lever Press: subject presses the active lever.
3. Ratio Check: has the response count reached the current variable ratio value? If not, return to step 2.
4. Reinforcer Delivery: pellet dispensed, cue light activated.
5. New Ratio Drawn: next ratio sampled from the distribution around the mean.
6. Session Limit Check: max reinforcers or time limit reached? If not, return to step 2.
7. Session End: levers retracted, data saved.
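The trial flow above can be sketched as a simple event loop. The function name, the `events` input format, and the recycling of the ratio list are assumptions for illustration, not ConductMaze's API.

```python
from itertools import cycle

def run_vr_session(ratio_list, events, max_reinforcers=50, max_time_s=3600):
    """Sketch of the VR trial flow. `events` is a time-ordered list of
    (timestamp_s, lever) pairs, lever in {"active", "inactive"}."""
    ratios = cycle(ratio_list)       # hypothetical: recycle the ratio list
    current_ratio = next(ratios)     # first ratio drawn at session start
    presses_since_reinforcer = 0
    reinforcers = 0
    for t, lever in events:
        if t > max_time_s or reinforcers >= max_reinforcers:
            break                    # session limit: levers retracted
        if lever != "active":
            continue                 # inactive presses are never reinforced
        presses_since_reinforcer += 1
        if presses_since_reinforcer >= current_ratio:   # ratio check
            reinforcers += 1         # pellet dispensed, cue light on
            presses_since_reinforcer = 0
            current_ratio = next(ratios)                # new ratio drawn
    return reinforcers
```

For example, `run_vr_session([2, 3, 2], [(i, "active") for i in range(10)])` walks ten active presses through ratios 2, 3, 2, 2, ... and returns the number of reinforcers earned.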

Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| Mean Ratio | integer | 5 | Average number of responses required per reinforcer |
| Distribution Type | enum | Fleshler-Hoffman | Distribution algorithm for generating ratio values (Fleshler-Hoffman, arithmetic, geometric) |
| Max Reinforcers | integer | 50 | Maximum reinforcers before the session ends |
| Max Session Time | seconds | 3600 | Absolute session time limit |
| Active Lever Side | enum | Right | Which lever is reinforced (Left, Right, Counterbalanced) |
| Cue Light Duration | seconds | 3 | Duration of the cue light paired with reinforcer delivery |
| Reinforcer Type | enum | Sucrose Pellet | Type of reinforcement delivered |
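A session configuration using the defaults above might look like the following. The key names and value encodings here are hypothetical; ConductMaze's actual configuration format may differ.

```python
# Hypothetical session configuration mirroring the parameter defaults above.
vr_session_config = {
    "mean_ratio": 5,
    "distribution_type": "Fleshler-Hoffman",  # or "arithmetic", "geometric"
    "max_reinforcers": 50,
    "max_session_time_s": 3600,
    "active_lever_side": "Right",             # "Left" or "Counterbalanced"
    "cue_light_duration_s": 3,
    "reinforcer_type": "Sucrose Pellet",
}
```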

Metrics

| Metric | Unit | Description |
| --- | --- | --- |
| Total Active Presses | count | Total presses on the active lever |
| Total Inactive Presses | count | Total presses on the inactive lever |
| Reinforcers Earned | count | Total reinforcers delivered |
| Response Rate | presses/min | Mean rate of active lever pressing |
| Inter-Response Time | seconds | Mean time between successive active presses |
| Actual Mean Ratio | ratio | Observed mean ratio across the session (should approximate the programmed mean) |
| Latency to First Press | seconds | Time from session start to the first active press |
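The timing metrics above follow directly from the active-press timestamps. This helper is a sketch of those definitions (the function name and return format are illustrative; ConductMaze computes these internally):

```python
def summarize(active_times, session_len_s):
    """Compute response rate, mean inter-response time, and first-press
    latency from a sorted list of active-press timestamps (seconds)."""
    n = len(active_times)
    rate_ppm = n / (session_len_s / 60)                     # presses/min
    irts = [b - a for a, b in zip(active_times, active_times[1:])]
    mean_irt = sum(irts) / len(irts) if irts else None      # seconds
    latency = active_times[0] if active_times else None     # seconds
    return {"rate_ppm": rate_ppm, "mean_irt_s": mean_irt, "latency_s": latency}
```

For instance, presses at 2 s, 4 s, and 6 s in a 60 s session give a rate of 3 presses/min, a mean IRT of 2 s, and a 2 s latency to first press.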

Sample Data

| Trial | Ratio_Required | Active_Presses | Inactive_Presses | IRT_s | Rate_ppm |
| --- | --- | --- | --- | --- | --- |

Representative data for illustration purposes. Actual values will vary by species, strain, and experimental conditions.

Applications

1. Gambling and impulsivity research: VR schedules model the unpredictable reinforcement underlying problem gambling.
2. Drug self-administration: VR schedules maintain stable intake patterns for pharmacological studies.
3. Extinction resistance studies: VR-trained responses are more resistant to extinction than FR-trained responses.
4. Behavioral economics: measuring demand elasticity under variable reinforcement contingencies.
5. Comparative schedule analysis: contrasting VR and FR performance on response rate and patterning.

Compatible Products

ME-OC-BASE, ME-OC-LEVER, ME-OC-PELLET, ME-OC-BUNDLE

Ready to Automate Your Behavioral Protocols?

Contact us for a demo and pricing information.