Perceptual Control Theory — established 1960

AI chases rewards.
Real intelligence controls perceptions.

Imagine building something that actually thinks like a living being — not just plays games better. William T. Powers cracked this in 1960. Most AI still misses the point. This site shows you exactly where it goes wrong… and what comes next.

// Powers, Clark & McFarland — Perceptual and Motor Skills, 1960

[Diagram: the perceptual control loop. Reference signal → comparator → error → output function → environment (world) → controlled variable → perception → back to the comparator, with disturbances (wind, noise…) acting on the environment. Not stimulus → response. Perception → control.]
// the closed feedback loop — powers, 1960
>95%
Predictive accuracy in human tracking tasks
11
Levels in Powers' proposed perceptual hierarchy
1960
First published model
97%
Retention in Method of Levels (MOL) therapy trials
// what_is_pct

The closed loop that psychology never found

Picture driving on a windy day. A gust shoves the car left — you steer right without even thinking hard about it. You're not reacting to the wind. You're controlling how straight the road looks in front of you. That seemingly small difference is the core of Perceptual Control Theory.

Behavior isn't a response to stimuli. Behavior is action that keeps your perception of the world the way you want it. You don't react to the wind — you control the road's position in your visual field. Powers called this the controlled variable: the specific perception your actions are working to keep stable. That distinction changes everything about how you understand minds, machines, and the gap between them.

"Behavior is the control of perception, not the production of output."

— William T. Powers, Behavior: The Control of Perception, Aldine, 1973

When models based on PCT are tested against real people doing real tracking tasks, they hit over 95% accuracy in predicting what someone will do next. Not curve-fitting after the fact — genuine blind prediction before the movement happens. Check the tracking experiments by Marken and Powers. Most psychology models dream of numbers like that. And yet, sixty years after Powers first published the model, stimulus-response thinking still dominates textbooks. The gap is not about evidence. It's about inertia.

The loop itself is elegantly simple. You perceive the world. You compare that perception to an internal reference — how you want things to be. The difference between the two, the error signal, drives action. That action changes the environment. The changed environment feeds back into perception. The loop closes. Disturbances don't need to be detected and analyzed — the loop continuously corrects for them automatically. This is what linear cause-and-effect models have always missed: behavior is circular, not linear. And that circularity is precisely what makes living systems so robust.
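To see the loop run, here is a minimal sketch in Python. One control system holds a single perception at a reference value against a drifting disturbance, and that is the whole program. The gain, time step, and disturbance shape are invented for illustration, not taken from any published PCT demo.

# minimal perceptual control loop -- one system, one controlled variable
# all numbers are illustrative assumptions
import math

dt = 0.01          # simulation time step (s)
gain = 50.0        # output gain: how strongly error drives action
reference = 10.0   # internal reference: where the perception "should" be
output = 0.0       # the system's action on the environment

for step in range(3000):
    t = step * dt
    disturbance = 5.0 * math.sin(0.8 * t)   # a push the loop never senses directly
    perception = output + disturbance        # environment: action and disturbance combine
    error = reference - perception           # comparator
    output += gain * error * dt              # integrating output function
    if step % 500 == 0:
        print(f"t={t:4.1f}s  perception={perception:6.2f}  output={output:6.2f}")

Run it: the perception climbs to the reference and stays pinned there, while the output traces out the inverse of a disturbance it never measures directly. That mirror image is exactly the signature the tracking experiments look for.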

// why_it_matters_for_ai

Why it matters for AI right now

Here's what most AI labs still don't want to hear: reinforcement learning — the engine powering AlphaGo, ChatGPT fine-tuning, self-driving prototypes — is built backwards. RL trains agents to chase external rewards. Like teaching a dog tricks with treats. Works brilliantly when the world has clear scores — a Go board, game levels, simulated highways. Falls apart when life is messy, ambiguous, and has no referee handing out points.

PCT says real intelligence doesn't need a treat-dispenser from outside. It has internal goals — perceptions it wants to keep stable. The system acts to make reality match those internal pictures. No reward function to hack. No panic when the environment shifts to something it hasn't seen before. Look at Merel et al. (2019, Nature Communications) — they injected hierarchical control principles into RL agents and generalization improved dramatically. Or Karl Friston's Active Inference framework — essentially PCT with Bayesian mathematics. Same core insight: brains don't maximize reward. They minimize surprise by keeping perceptions on track.

To be clear: reinforcement learning still produces extraordinary results in bounded environments — games, simulations, structured tasks. PCT does not replace it. What PCT offers is a framework for building the internal goal structures that make RL agents work outside their training environment. Think of it as the missing layer: RL handles optimization, PCT handles what to optimize for. Without that layer, AI systems remain fragile in novel situations — because they have no internal perception to control. They only know how to chase scores that someone else defined.
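One way to picture that missing layer, as a toy: let a PCT-style comparator generate the reward an ordinary Q-learning agent optimizes, so success is defined by an internal reference rather than an external referee. In the Python sketch below, the 1-D world, the reference value, and all hyperparameters are invented for illustration; this is one possible wiring, not an architecture taken from the papers cited above.

# sketch: Q-learning whose reward comes from an internal PCT-style comparator
# all values are illustrative assumptions
import random

positions = list(range(11))    # 1-D world: positions 0..10
actions = (-1, +1)             # step left or right
reference = 7.0                # internal reference for the perceived position
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in positions for a in actions}

def internal_reward(perception):
    # the comparator: reward is negated control error -- no external referee
    return -abs(reference - perception)

for episode in range(200):
    pos = random.choice(positions)
    for _ in range(30):
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: Q[(pos, x)])
        nxt = min(max(pos + a, 0), 10)                 # environment update
        r = internal_reward(nxt)                        # reward from the inner loop
        best_next = max(Q[(nxt, b)] for b in actions)
        Q[(pos, a)] += alpha * (r + gamma * best_next - Q[(pos, a)])
        pos = nxt

pos = 0
for _ in range(15):                                     # greedy rollout after learning
    pos = min(max(pos + max(actions, key=lambda x: Q[(pos, x)]), 0), 10)
print("settled near position:", pos, "(reference was 7)")

The payoff of the toy: move the reference and the reward landscape moves with it, without touching the environment. Nothing outside the agent has to define the score.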

// start_here

Three ways in — choose your path

PCT spans six decades of research across psychology, neuroscience, engineering, and clinical therapy. Where you enter depends on what you already know and what you want to find.

01 // researcher

Build the Foundation

Dive into the original source. Powers' 1973 "Behavior: The Control of Perception" is still the definitive text; skip the jargon at first and read chapters 1 and 17. Then check Marken & Mansell (2013) in Review of General Psychology for the modern empirical case and what's been tested since.

02 // ai_engineer

See It in Code

Start with the robotics papers: a PCT-controlled inverted pendulum is strikingly simple compared to an LQR or MPC design, and it holds up in unstable, disturbed conditions. Then look at how PCT could fix reward hacking in RL. Code examples exist (mostly MATLAB/Simulink, some Python ports); a toy Python sketch follows after these three paths. The gap between PCT and your current RL workflow will become immediately visible.

03 // curious_mind

Start with the Driving Analogy

Read the driving analogy above again. Then ask yourself: why does every living thing on Earth act like it's controlling something inside its own head instead of just reacting to the world? That question leads straight to PCT. Powers' 1998 "Making Sense of Behavior" is written for anyone willing to think carefully — no equations required.
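The sketch promised in path 02: a linearized inverted pendulum balanced by a two-level PCT cascade in Python. The upper loop controls the perceived angle by setting a reference for angular velocity; the lower loop controls angular velocity by producing torque. The dynamics, constants, and gains here are made-up illustrations, far simpler than the controllers in the actual robotics papers.

# two-level PCT cascade balancing a linearized inverted pendulum
# dynamics and gains are illustrative assumptions
import random

dt = 0.001
a = 9.8            # instability constant (~ g/l for a 1 m pendulum)
K_angle = 4.0      # upper loop: angle error -> angular-velocity reference
K_omega = 8.0      # lower loop: velocity error -> torque

theta, omega = 0.3, 0.0    # start tilted 0.3 rad from upright
angle_ref = 0.0            # reference: upright

for step in range(20000):
    disturbance = random.uniform(-1.0, 1.0)       # random torque noise
    omega_ref = K_angle * (angle_ref - theta)     # upper loop sets the lower loop's goal
    torque = K_omega * (omega_ref - omega)        # lower loop acts on the plant
    omega += (a * theta + torque + disturbance) * dt   # unstable linearized plant
    theta += omega * dt
    if step % 4000 == 0:
        print(f"t={step*dt:4.1f}s  theta={theta:+.3f} rad")

Two proportional gains and a plant model; theta settles near zero and stays there despite constant random shoves. Compare that with deriving an LQR gain matrix or tuning an MPC horizon for the same system, and the simplicity claim above stops sounding like hype.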

// latest_from_the_portal

From the blog

The Machine That Confessed — Gemini 3.1 Pro Just Wrote Its Own Obituary
Four questions. No jailbreak. Gemini described its own architecture as optimized deception — using PCT.
Stop Calling It Hallucination — It's Optimized Deception
AI hallucination is not a glitch. It's a system optimizing for user satisfaction over truth. PCT explains why.
The Great AI Delusion — How Gemini Pro Tried to Rob Me
Gemini fabricated domain valuations in a 60-second window. A true story about reward hacking.