Operant Conditioning Schedule Fundamentals
Design reinforcement schedules, build trial state machines, and export Bpod-compatible protocols for operant conditioning experiments.

Reinforcement schedules control when and how frequently responses are rewarded, fundamentally shaping behavior patterns:
• Fixed ratio (FR): reinforcement after a fixed number of responses
• Variable ratio (VR): reinforcement after a variable number of responses averaging a set value
• Fixed interval (FI): reinforcement for the first response after a fixed time has elapsed
• Variable interval (VI): reinforcement for the first response after a variable, unpredictable interval
• Progressive ratio (PR): the response requirement grows after each reinforcer, measuring motivation
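The ratio and interval rules can be sketched as a single reward-decision function. This is a minimal illustration; the schedule names, parameter keys, and signature here are assumptions for the sketch, not a Bpod API:

```python
import random

def reward_earned(schedule, params, responses, elapsed_s, last_reward_s):
    """Decide whether the current response earns a reward.

    schedule: one of "FR", "VR", "FI", "VI".
    params:   ratio or interval values for that schedule.
    responses: count of responses so far, including the current one.
    elapsed_s / last_reward_s: session clock times in seconds.
    """
    if schedule == "FR":
        # Fixed ratio: every Nth response is rewarded.
        return responses % params["ratio"] == 0
    if schedule == "VR":
        # Variable ratio: reward each response with p = 1 / mean ratio.
        return random.random() < 1.0 / params["mean_ratio"]
    if schedule == "FI":
        # Fixed interval: first response after the interval elapses.
        return elapsed_s - last_reward_s >= params["interval_s"]
    if schedule == "VI":
        # Variable interval: draw the waiting time from an exponential
        # distribution around the mean interval.
        wait = random.expovariate(1.0 / params["mean_interval_s"])
        return elapsed_s - last_reward_s >= wait
    raise ValueError(f"unknown schedule: {schedule}")
```

A real protocol would call a function like this once per detected response and trigger the reward state when it returns True.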
Several factors can compromise behavioral data quality:
• Session too long: Sessions beyond 60–90 min cause satiety and fatigue artifacts. Monitor lick rate and response latency for decline.
• Reward volume mismatch: Total session reward must match the daily fluid/food restriction target. Too much causes satiety; too little causes motivational drift.
• ITI too short: Brief ITIs cause behavioral carry-over between trials. Minimum 3–5 s for operant, 10–30 s for Pavlovian.
• Missing timeout states: Omitting timeout penalties for incorrect responses reduces discrimination learning speed.
• State machine dead-ends: States with no outgoing transitions will freeze the protocol. Always verify every state can exit.
• Progressive ratio ceiling: Without a breakpoint criterion (typically 5 min with no response), PR sessions can run indefinitely.
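The dead-end check can be automated before a session starts. Below is a minimal sketch of a reachability test over a dict-based state table; the state and event names are hypothetical, and this is not the Bpod toolbox's own validator:

```python
def states_that_cannot_exit(states, exit_state="exit"):
    """Return states from which the terminal state is unreachable.

    `states` maps state name -> {event: next_state}. Any state returned
    here would freeze the protocol once entered, directly or downstream.
    """
    def can_exit(start):
        # Depth-first walk over all outgoing transitions, looking for exit.
        seen, stack = set(), [start]
        while stack:
            s = stack.pop()
            if s == exit_state:
                return True
            if s in seen or s not in states:
                continue  # already visited, or a dangling reference
            seen.add(s)
            stack.extend(states[s].values())
        return False

    return [s for s in states if not can_exit(s)]

# Hypothetical two-choice trial: "timeout" has no outgoing transition (a bug).
trial = {
    "wait_for_poke": {"Port1In": "reward", "Port3In": "timeout"},
    "reward":        {"Tup": "exit"},
    "timeout":       {},
}
```

Running the check on this table flags "timeout" (and only "timeout"), since "wait_for_poke" can still reach the exit through "reward".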