We gave AI a body.

Human Operator is a human augmentation tool that allows AI to briefly take control of your body to help you learn or do things you cannot do.

MIT Media Lab · Hard Mode Winner · Advancing Humans · AI · Design · Intell

From the Project

Human Operator is a human augmentation tool that allows AI to briefly take control of your body to help you learn and do things you normally cannot do. It pairs a vision-language model with electrical muscle stimulation (EMS) to guide human motor control directly.

Open-ended speech input, combined with visual context, is turned into commands that drive finger and wrist stimulation for intuitive on-body interaction. It is a working prototype that won the Learn Track at MIT Hard Mode 2026.

Meet your physical co-pilot.

AI guides a short sequence of finger movements to play a simple melody.

Learn anything, instantly.

Voice and visual context are turned into a guided hand action in real time.

Mastery without the years.

Small EMS pulses can sequence the fingers into learned gestures and positions.

Digital intent, embodied action.

Speech, camera input, model reasoning, and EMS output all feed the same loop.

Humanity, augmented.

Even a simple wave shows the basic idea of AI-assisted physical guidance.

How It Works

Voice to movement.

A spoken command becomes a guided physical action through camera input, model interpretation, hardware control, and EMS output.

01

Voice input

The user says what they want to do.

02

POV camera

A head-mounted camera captures the first-person scene and feeds frames to the vision-language model.

03

Model plan

The model reads the request and scene, then decides on a motor action.

04

Relay control

An Arduino and relays turn that plan into channel-specific EMS stimulation signals.

05

EMS output

Electrical stimulation guides a brief finger or wrist movement.
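The five steps above can be sketched as a single control loop. This is a minimal, simulated sketch, not the project's code: the channel map, the `plan_action` heuristic standing in for the VLM call, and the `PULSE` serial command format are all illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical channel map: which relay drives which muscle group.
# The channel numbers are illustrative, not from the project.
CHANNELS = {"index_flex": 0, "wrist_extend": 1}

@dataclass
class MotorPlan:
    """One stimulation step chosen by the model (step 03)."""
    channel: str
    duration_ms: int

def plan_action(utterance: str, scene: str) -> list[MotorPlan]:
    # Stand-in for the model call (steps 01-03): the real system sends
    # the spoken request plus a camera frame to a vision-language model.
    if "press" in utterance and "key" in scene:
        return [MotorPlan("index_flex", 150)]
    return [MotorPlan("wrist_extend", 100)]

def to_relay_commands(plans: list[MotorPlan]) -> list[str]:
    # Stand-in for the Arduino/relay layer (steps 04-05): each step
    # becomes a serial command such as "PULSE <channel> <ms>", which
    # the microcontroller would translate into a brief EMS pulse.
    return [f"PULSE {CHANNELS[p.channel]} {p.duration_ms}" for p in plans]

commands = to_relay_commands(plan_action("press the key", "piano key in view"))
print(commands)  # -> ['PULSE 0 150']
```

In the actual prototype the plan would be written to the Arduino over a serial link and the relays would gate the EMS channels; here the command strings simply make the voice-to-movement data flow concrete.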