Human motion and facial keypoint analysis
ConductVision Human Tracking brings markerless gait analysis, facial landmark tracking, review overlays, and exportable coordinates into one research video workflow. It is built for controlled studies that need auditable visual outputs, not black-box scoring.
Human tracking modules
Start with gait or facial landmarks, then keep the raw coordinates, confidence values, and overlays available for review.
Motion analysis
Human gait tracking
Markerless mobility analysis for gait speed, step length, cadence, stance/swing timing, trunk sway, and left-right asymmetry.
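Metrics such as gait speed, step length, and cadence reduce to simple arithmetic once foot events have been extracted from the keypoint trajectories. A minimal sketch, assuming heel strikes have already been detected; the event format `(time_s, foot, x_m)` is illustrative, not ConductVision's actual export schema:

```python
# Hypothetical sketch: basic gait metrics from detected heel strikes.
# Input format (time_s, foot, x_m) is an assumption for illustration.

def gait_metrics(strikes):
    """strikes: list of (time_s, foot, x_m) tuples, sorted by time."""
    times = [t for t, _, _ in strikes]
    xs = [x for _, _, x in strikes]
    duration = times[-1] - times[0]
    steps = len(strikes) - 1
    cadence = 60.0 * steps / duration               # steps per minute
    step_lengths = [abs(b - a) for a, b in zip(xs, xs[1:])]
    mean_step = sum(step_lengths) / len(step_lengths)
    speed = abs(xs[-1] - xs[0]) / duration          # m/s along walking axis
    return {"cadence_spm": cadence, "mean_step_m": mean_step, "speed_mps": speed}

# Example: four alternating heel strikes over 1.65 s
strikes = [(0.0, "L", 0.00), (0.55, "R", 0.62), (1.10, "L", 1.25), (1.65, "R", 1.86)]
print(gait_metrics(strikes))
```

Stance/swing timing and left-right asymmetry follow the same pattern, comparing per-foot event intervals rather than pooled ones.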
Open gait page
Face landmarks
Facial keypoint tracking
MMPose wholebody facial landmark output with 68 points across jaw, brows, nose, eyes, and lips, plus reviewable overlays and JSON export.
View keypoint output
Data workflow
Research exports
Frame-level coordinates, confidence values, and overlay artifacts that can feed directly into reproducible downstream analysis.
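A common first step with frame-level exports is masking low-confidence points before computing any statistics. A minimal sketch; the record layout (frame index plus `(x, y, score)` triples) is an assumed export shape, not a documented ConductVision schema:

```python
# Hypothetical sketch: mask low-confidence keypoints so downstream
# stats can skip them. Record layout is an assumption for illustration.

CONF_THRESHOLD = 0.5

def filter_keypoints(frames, threshold=CONF_THRESHOLD):
    """Replace points below the confidence threshold with None."""
    cleaned = []
    for frame in frames:
        kept = [(x, y, s) if s >= threshold else None
                for (x, y, s) in frame["keypoints"]]
        cleaned.append({"frame": frame["frame"], "keypoints": kept})
    return cleaned

frames = [{"frame": 0, "keypoints": [(120.4, 88.1, 0.92), (118.0, 90.3, 0.31)]}]
out = filter_keypoints(frames)
# the second point falls below the threshold and is masked
```

Keeping the raw scores alongside the coordinates, as the exports do, is what makes this kind of audit-friendly filtering possible after the fact.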
Discuss workflow
68-point facial keypoint output
The current feature artifact uses MMPose wholebody output from a ConductScience camera sample. It includes jaw, brow, nose, eye, and lip landmarks at 30 fps across 202 frames, with overlay video and frame-level JSON available from the analysis run.
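A sanity check on such a run is straightforward to script: confirm the frame count, the 68 landmarks per frame, and the clip duration implied by the frame rate. The JSON schema below is an assumption for illustration; the actual file layout from the analysis run may differ:

```python
import json

# Hypothetical sketch: validate a per-frame facial landmark export.
# Schema (fps, frames[].face_landmarks) is assumed, not documented.
raw = json.dumps({
    "fps": 30,
    "frames": [
        {"index": i, "face_landmarks": [[0.0, 0.0, 0.9]] * 68}
        for i in range(202)
    ],
})

data = json.loads(raw)
n_frames = len(data["frames"])
duration_s = n_frames / data["fps"]   # 202 frames at 30 fps ≈ 6.73 s
assert all(len(f["face_landmarks"]) == 68 for f in data["frames"])
print(n_frames, round(duration_s, 2))
```

Each landmark here carries an (x, y, score) triple, matching the confidence values the analysis run exports alongside the overlay video.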
Plan a human tracking workflow
Send a representative video or schedule a consultation to map the keypoints, outputs, and review steps your protocol needs.