National Institutes of Health

Replication Prize · Track 2 · 2026

ConductScience and MazeEngineers named winners of the NIH Replication Prize.

For thirteen years of building the apparatus, software, and publishing infrastructure that holds behavioral research methods stable across more than 1,150 independent laboratories.


Video overview

Manufacturing for scientific replication.

ConductScience manufactures research tools so experiments can be replicated across laboratories.

https://youtu.be/B5e_vtOA_KQ

About the prize

What the NIH recognized.

The NIH Replication Prize honors work that materially improves the reproducibility of biomedical research. Track 2 recognizes field-deployed replication infrastructure: tools, methods, and practices already in routine use that demonstrably reduce cross-laboratory variation.

ConductScience and MazeEngineers were selected for an integrated framework that holds five things stable across sites: the apparatus, the procedure, the analysis, the pathway by which new methods reach other laboratories, and the published record that preserves all of it for reuse. The work has been continuously maintained since 2013 and is cited in 174 peer-reviewed publications across anxiety, learning and memory, addiction, stroke models, and neurodegeneration.

In 2025 the same team also received a Phase 1 win in the NIH Data Sharing Index Challenge, recognizing complete, citable research objects with standardized metadata and exports.

Official NIH announcement

Announcing the Winners of the NIH Replication Prize: Shaping the Future of Rigorous Science.

https://commonfund.nih.gov/replication-initiative/news/announcing-winners-nih-replication-prize-shaping-future-rigorous

The submission

Five things, working together.

The prize submission described five replication activities, each addressing a specific source of cross-laboratory variation. Together they hold methods stable site to site.

Activity 01

Apparatus

Equipment built from the published designs.

When a paper specifies a Morris water maze of a particular diameter, an elevated plus maze with arms of a particular width, or a four-arm maze with specific visual stimuli, two laboratories ordering from MazeEngineers receive equipment that is physically equivalent. Same dimensions. Same materials. Same sensor placement. Tied directly to the citation.

Delivery link: MazeEngineers

Activity 01: Apparatus built to spec

1. Method: diameter 120 cm, arm 10 cm, wall 40 cm
2. Verified spec
3. Matched labs: Lab A 120 cm, Lab B 120 cm

Replication starts before the experiment begins.

Activity 02

Procedure

Software that runs the experiment the same way every time.

ConductMaze runs the procedure under software control. Trial timing, reinforcement schedules, chamber events, and inter-trial intervals execute the same way in every session, and every session writes a time-stamped event log so the run can be replayed and compared.

Delivery link: ConductMaze

Activity 02: Procedure automation

Protocol file: timing window, cue order, gate state, reward rule

ConductMaze run logic: 1 Start, 2 Cue, 3 Access, 4 Close

Event log:
00:00 Session start
00:30 Cue state set
01:00 Arm access
02:30 Trial closed

Same protocol. Same timing. Same event log.
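Because every session writes a time-stamped event log, two runs can be compared mechanically. The sketch below is illustrative only, not ConductMaze code: it assumes a simple mm:ss log format like the example above and reports the largest timing offset between matching events in two sessions.

```python
# Illustrative only: check that two session logs agree on event timing.
# The mm:ss log format mirrors the example above; this is not ConductMaze's API.

def parse_log(text):
    """Parse lines like '00:30 Cue state set' into {label: seconds}."""
    events = {}
    for line in text.strip().splitlines():
        stamp, label = line.split(" ", 1)
        minutes, seconds = stamp.split(":")
        events[label] = int(minutes) * 60 + int(seconds)
    return events

def max_timing_offset(log_a, log_b):
    """Largest absolute timing difference (seconds) across shared events."""
    a, b = parse_log(log_a), parse_log(log_b)
    shared = a.keys() & b.keys()
    return max(abs(a[event] - b[event]) for event in shared)

session_a = """00:00 Session start
00:30 Cue state set
01:00 Arm access
02:30 Trial closed"""

session_b = """00:00 Session start
00:30 Cue state set
01:01 Arm access
02:30 Trial closed"""

print(max_timing_offset(session_a, session_b))  # 1 (one second of drift on 'Arm access')
```

A replay check like this is only possible because the procedure is software-controlled and every session emits the same structured log.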

Activity 03

Analysis

Tracking that exports in formats other laboratories can read.

ConductVision turns video into structured behavioral measurements with documented analytic rules, and exports directly to open data formats. The next laboratory inspects the boundaries of every event and recomputes derived measures from the same source.

Delivery link: ConductVision

Activity 03: ConductVision analysis

Source video with rules

Event boundaries:
Center entry 00:18.4
Arm choice 00:31.2
Freeze bout 00:44.8

Open exports: CSV, JSON, HDF5

The next lab can inspect the same event rules and recompute the measures. Behavioral measurements stay tied to the visible source video.
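As a sketch of that reuse path: assuming a hypothetical CSV export with event, start_s, and end_s columns (the column names are illustrative, not ConductVision's actual schema), a downstream lab could recompute a derived measure such as total freeze time directly from the event boundaries.

```python
import csv
import io

# Hypothetical export: event label plus start/end boundaries in seconds.
# Column names are illustrative, not ConductVision's actual schema.
export_csv = """event,start_s,end_s
center_entry,18.4,18.4
arm_choice,31.2,31.2
freeze,44.8,47.3
freeze,52.0,54.5
"""

def total_duration(csv_text, label):
    """Recompute a derived measure from the raw event boundaries."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return sum(
        float(row["end_s"]) - float(row["start_s"])
        for row in rows
        if row["event"] == label
    )

print(round(total_duration(export_csv, "freeze"), 3))  # 5.0 seconds of freezing
```

The point is that derived numbers are never opaque: any reader holding the export can rebuild them from the same source events.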

Activity 04

Technology transfer

A pathway from prototype to deployable research tool.

We work with originating laboratories and their technology transfer offices to translate investigator-developed prototypes into citable, deployable tools that other laboratories can run, with originating credit and the citation chain preserved. Each method is converted into apparatus specifications, fixed procedure logic, and defined analytic outputs so it ports cleanly to a new site.

Delivered in partnership with university technology transfer offices.

Activity 04: Prototype to deployable method

1. Academic prototype: originating lab
2. Transfer package: specs plus credit
3. Deployed tool: many labs

Origin credit: inventor and institution stay visible.
Fixed build spec: prototype becomes reproducible hardware.
Runnable procedure: new sites receive the same method logic.

Lab inventions become citable tools other sites can actually run.

Activity 05

Method publishing

Complete methods released alongside the manuscript.

ConductScience.org publishes apparatus designs, procedure files, analysis code, and datasets together with the paper, with permanent identifiers and version tracking. A reader can inspect the work directly, cite the specific version, and run the same experiment without reconstruction.

Activity 05: Method publishing

Published record: method package v1.0.0 (protocol, apparatus spec, analysis code, dataset)

ConductScience.org release:
Permanent ID: citable method object
Version history: what changed and when
Open files: downloadable protocol bundle
Reuse path: next lab can rerun

Citation chain: Paper → Package → Replication

The paper, files, data, and versions travel together.

The mechanism

Continuously Verified.

Across every product. Every order. Every methods section.

Inspect the live system

Continuous Verification records every product spec as a citable research object.

Open verified specs →

Replication only holds if the specifications hold. Every ConductScience product page is continuously verified against the equipment we actually manufacture, with full change history and version tracking. When a method references a piece of our apparatus by version, that version is preserved, citable, and reproducible years later.

This is what lets a paper published in 2026 be independently replicated in 2031, and what allows a small team to sustain replication infrastructure across more than 1,150 organizations. Verification lives in the system, where any reader can inspect it.

Distributed core facility

One verified core, many independent laboratories.

The replication infrastructure acts like a core facility without forcing every laboratory into one building. A shared verification layer defines the apparatus, procedure, analysis, and version history; each independent site runs locally against the same maintained specification.

That turns a distributed network of laboratories into a coordinated methods system. A team ordering later can reconstruct the same physical and procedural conditions without relying on memory, screenshots, or informal lab notes.

Shared specification layer: apparatus, procedure, analysis

Manufacturing to methods:
Physical: dimensions and materials
Digital: versioned specification
Scientific: citable methods record

Scientific specification

The manufactured object becomes part of the methods section.

For behavioral research, reproducibility begins before the experiment starts. The maze diameter, arm width, materials, sensors, lighting, and software timing all become scientific variables if they drift from site to site.

ConductScience closes that gap by tying production checks to the public specification. The thing we build, the version we verify, and the method a researcher cites all point to the same controlled record.

Scientific manufacturing

A category of manufacturing designed for scientific replication.

ConductScience pioneered a category of manufacturing designed for scientific replication. Mass manufacturing optimizes for cost, speed, and volume, accepting small variations between units as the price of producing them cheaply. Scientific manufacturing inverts those priorities entirely.

Each piece of equipment is bespoke, hand-built to a documented specification, and verified against that specification using the same methodological rigor a laboratory applies to its own experiments. The physical object, the manufacturing record, and the published method all describe the same setup. This is the foundation that makes cross-laboratory replication possible.

Scientific manufacturing

Built for replication, not interchangeability.

Mass manufacturing
Optimizes: cost, speed, volume
Accepts: small unit variation
Record: batch-level, not method-level

Scientific manufacturing
Bespoke build: each unit is hand-built to a documented specification.
Verification: dimensions, materials, sensors, and settings are checked against the spec.
Method fidelity: the delivered apparatus matches the setup described in the paper.

One scientific setup, three matching records
1. Published method: the setup the paper describes
2. Manufacturing record: how the unit was built and checked
3. Physical object: the equipment delivered to the lab

The object, the record, and the method describe the same setup.

The case

The numbers that won the prize.

Cross-site variation (coefficient of variation)

0.76: historical benchmark (Crabbe, Wahlsten, and Dudek, 1999)
0.41: ConductScience cohorts (58 independent control cohorts)

Lower coefficient of variation means the method travels more consistently between laboratories.

The headline finding

In laboratories using our standardized apparatus, results from independent control groups varied roughly half as much as the historical multi-laboratory benchmark.

We compared 58 independent control cohorts from 41 published studies running ConductScience and MazeEngineers apparatus against the Crabbe, Wahlsten, and Dudek (1999) multi-laboratory benchmark, which is the most widely cited measurement of cross-laboratory variation in mouse behavior. Mean coefficient of variation: 0.41 in our cohorts, versus 0.76 in the historical reference. Less noise between sites means more statistical power to detect real biology.
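The statistic behind these numbers is the coefficient of variation: the standard deviation of a measure across cohorts divided by its mean. A minimal sketch of the computation follows; the cohort values are invented for illustration and are not the submission's data.

```python
from statistics import mean, pstdev

def coefficient_of_variation(values):
    """CV = standard deviation / mean; lower means less spread between sites."""
    return pstdev(values) / mean(values)

# Hypothetical distance-traveled means (cm) from four control cohorts,
# invented for illustration only.
cohorts = [420.0, 380.0, 450.0, 400.0]
print(round(coefficient_of_variation(cohorts), 2))  # 0.06 for this made-up set
```

Because CV is dimensionless, it lets a 1999 benchmark and a 2026 cohort set be compared on the same scale even when the underlying measures differ.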

Source: NIH Replication Prize submission, Section 1 (Replication Strategy). The comparison isolates hardware-linked variability and does not capture all sources of procedural or analytic variation.

Independent coverage

Recognized before the prize.

Press coverage from 2017 and 2018, nearly a decade before the current federal replication initiatives.

“Standardized equipment and tasks so that researchers can compare their results directly.”
Nature Toolbox · 2018
Standardized apparatus offered as a practical mechanism to reduce “a potential confound for comparing results across institutions, labs and studies.”
Yahoo · 2017
“More objective and standardized.” Reduces reliance on subjective observer judgment.
Discover Magazine · 2018

In use today

Where this infrastructure runs.

Six categories of behavioral assays, hundreds of named tests, deployed across rodent, zebrafish, and Drosophila laboratories.

For media

Press inquiries.

For interviews, high-resolution imagery, the prize submission, or background briefings, contact the team directly. We respond within one business day.

Build replication into your next study.

Talk to the team behind the NIH-recognized replication infrastructure. We can help you select the apparatus, automate the procedure, and publish the methods so the work travels.