Meet your physical co-pilot.
AI guides a short sequence of finger movements to play a simple melody.
Human Operator is a human augmentation tool that allows AI to briefly take control of your body to help you learn or do things you cannot do.
From the Project
Human Operator is a human augmentation tool that allows AI to briefly take control of your body to help you learn and do things you normally cannot do. To do this, it pairs a vision-language model (VLM) with electrical muscle stimulation (EMS) for human motor control.
Open-ended speech input and the camera's view of the scene are turned into finger and wrist stimulation commands for intuitive on-body interaction. It is a working prototype that won the Learn Track at MIT Hard Mode 2026.
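A minimal sketch of how that speech-plus-vision step might look in code, assuming an OpenAI-style VLM client, an OpenCV camera capture, and a JSON action schema; the prompt, model name, and schema are illustrative, not the project's actual implementation.

```python
# Sketch of the speech -> vision -> action-plan step. The VLM backend,
# prompt, and output schema are assumptions for illustration only.
import base64
import json

import cv2                  # head-mounted camera capture
from openai import OpenAI   # any VLM client that accepts images would do

client = OpenAI()

SYSTEM_PROMPT = (
    "You control an EMS sleeve with one channel per finger. "
    "Given the user's request and a photo of their hand, reply with JSON: "
    '{"steps": [{"finger": "thumb|index|middle|ring|pinky", "ms": <pulse length>}]}'
)

def capture_frame_b64() -> str:
    """Grab one frame from the camera and base64-encode it as JPEG."""
    ok, frame = cv2.VideoCapture(0).read()
    if not ok:
        raise RuntimeError("no camera frame")
    _, jpg = cv2.imencode(".jpg", frame)
    return base64.b64encode(jpg.tobytes()).decode()

def plan_action(spoken_request: str) -> list[dict]:
    """Ask the VLM to turn a spoken request plus the scene into finger pulses."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": [
                {"type": "text", "text": spoken_request},
                {"type": "image_url", "image_url": {
                    "url": f"data:image/jpeg;base64,{capture_frame_b64()}"}},
            ]},
        ],
    )
    return json.loads(resp.choices[0].message.content)["steps"]

# e.g. plan_action("help me play the first bar of Ode to Joy")
```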
Voice and visual context are turned into a guided hand action in real time.
Small EMS pulses can sequence the fingers into learned gestures and positions.
Speech, camera input, model reasoning, and EMS output all feed the same loop.
Even a simple wave shows the basic idea of AI-assisted physical guidance.
How It Works
A spoken command becomes a guided physical action through camera input, model interpretation, hardware control, and EMS output.
The user says what they want to do.
A head-mounted camera captures the scene in front of them and passes it to a vision-language model (VLM).
The model reads the request and scene, then decides on a motor action.
An Arduino and a bank of relays turn that plan into specific stimulation signals for the EMS unit, as sketched after these steps.
Electrical stimulation guides a brief finger or wrist movement.
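A hedged sketch of the host side of that last hop, assuming the Arduino firmware accepts simple newline-terminated serial messages and closes one relay per finger channel to route the EMS pulse to that electrode pair; the message format, channel mapping, and port name are illustrative, not the project's actual protocol.

```python
# Host-side sketch: send one serial line per pulse; the Arduino is assumed
# to close the matching relay for the requested duration.
import time

import serial  # pyserial

CHANNELS = {"thumb": 0, "index": 1, "middle": 2, "ring": 3, "pinky": 4}

def run_steps(steps: list[dict], port: str = "/dev/ttyACM0") -> None:
    """Play an action plan, e.g. [{"finger": "index", "ms": 150}, ...]."""
    with serial.Serial(port, 115200, timeout=1) as ard:
        time.sleep(2)  # wait for the Arduino to reset after the port opens
        for step in steps:
            ch, ms = CHANNELS[step["finger"]], int(step["ms"])
            ard.write(f"PULSE {ch} {ms}\n".encode())  # e.g. "PULSE 1 150"
            ard.flush()
            time.sleep(ms / 1000 + 0.05)  # leave a small gap between pulses
```

In this sketch the host owns all timing and the Arduino only switches relays, which keeps the firmware trivial and puts the sequencing logic next to the model output.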
Inspired by research and systems on neuromuscular interfaces and electrode placement optimization from the Human Computer Integration Lab at UChicago.