
Two control architectures. Same inverted pendulum. One demands a complete mathematical model of the universe. The other demands two gain values and a reference signal.

Below: the equations, the Python code, the peer-reviewed benchmarks, and the repositories you can clone right now. No fluff. No textbook padding. Just the engineering that actually works when models break.

LQR vs PCT — Same Pendulum, Two Different Worlds

The classical LQR (Linear-Quadratic Regulator) needs a complete mathematical model of the system. Without matrices A, B, Q, R it is helpless. PCT needs only perception and a reference signal. Below, both architectures stripped to their foundations.

LQR — Optimization on Paper

For a linear system ẋ = Ax + Bu, the cost function:

LQR Cost Function J = ∫₀^∞ (xᵀQx + uᵀRu) dt

The solution is state feedback: u = −Kx, where K = R⁻¹BᵀP, and P satisfies the algebraic Riccati equation:

Riccati Equation AᵀP + PA − PBR⁻¹BᵀP + Q = 0

The weakness: Matrices A and B must be known accurately. Model uncertainty — unmodelled friction, a mass change, a gust of wind — degrades performance, and large enough errors destabilize the controller. The mathematics is elegant. The real world is not.
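To make the model-hunger concrete, here is what an LQR design actually requires in code: a full set of A, B, Q, R matrices before a single gain exists. This is a sketch with illustrative linearized cart-pole numbers (not taken from the paper), using SciPy's continuous-time ARE solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linearized cart-pole: state x = [position, velocity, angle, angular velocity].
# Every entry below must be measured or derived from physics before LQR can run.
A = np.array([[0, 1,  0,    0],
              [0, 0, -1.0,  0],
              [0, 0,  0,    1],
              [0, 0, 11.0,  0]])
B = np.array([[0], [1.0], [0], [-1.0]])
Q = np.diag([1.0, 1.0, 10.0, 1.0])   # state cost weights
R = np.array([[0.1]])                # control cost weight

# Solve the algebraic Riccati equation AᵀP + PA − PBR⁻¹BᵀP + Q = 0
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P       # optimal state feedback: u = −Kx

# Closed loop A − BK is stable only if the model above was right
print(np.linalg.eigvals(A - B @ K).real)
```

Every eigenvalue printed is negative, so this closed loop is stable — but only for the A and B we wrote down. Perturb them and the guarantee evaporates; that is the entire argument of this section.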

PCT — A Closed Loop That Doesn't Ask About Physics

The fundamental PCT cell (node):

Step 1 — Input Function p = Ki · Qi
Step 2 — Comparator e = r − p
Step 3 — Output Function Qo = Ko · e
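The three steps can be traced with concrete numbers. The gains and readings below are illustrative, not from any benchmark:

```python
# One cycle of a single PCT cell, with illustrative values
Ki, Ko = 1.0, 4.0   # input and output gains
r = 0.0             # reference: pendulum upright
Qi = 0.05           # environmental quantity: current lean, radians

p = Ki * Qi         # Step 1: perception
e = r - p           # Step 2: error
Qo = Ko * e         # Step 3: output (action)
print(Qo)           # prints: -0.2
```

A 0.05 rad lean produces a −0.2 corrective output. No model of the pendulum appears anywhere in the cycle.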

After closing the loop through the environment (Qi = KfQo + KdD), we get the fundamental PCT equation:

PCT Closed-Loop Transfer p = [G / (1+G)] · r + [KiKd / (1+G)] · D

where G = KiKoKf is the loop gain.

The key insight: As G → ∞, the term G/(1+G) → 1, and disturbance D vanishes. The system forces p = r without knowing the dynamics of the environment. This is not prediction — it is real-time physical compensation.

"LQR asks: 'What is the optimal action given a perfect model?' PCT asks: 'What action makes my perception match my reference?' One requires omniscience. The other requires a sensor."

— perceptualcontroltheory.org

Your First PCT Controller — Python

A minimal, self-contained skeleton based on the perceptualrobots/pct library (MIT License). No magic — just the loop: Reference → Comparator → Error → Action.

class PCTNode:
    """
    Basic Hierarchical Perceptual Control node.
    reference (r) -> perception (p) -> error (e) -> output (q_o)
    """
    def __init__(self, name, reference=0.0, ki=1.0, ko=1.0):
        self.name = name
        self.reference = reference  # reference signal (r)
        self.ki = ki                # input gain
        self.ko = ko                # output gain
        self.perception = 0.0       # p
        self.error = 0.0            # e
        self.output = 0.0           # q_o

    def perceive(self, env_var):
        self.perception = self.ki * env_var

    def compare(self):
        self.error = self.reference - self.perception

    def act(self):
        self.output = self.ko * self.error
        return self.output

    def step(self, env_var):
        """One cycle of the PCT loop."""
        self.perceive(env_var)
        self.compare()
        return self.act()


# ---- Example: inverted pendulum (hierarchy) ----

class HPCTController:
    def __init__(self):
        # Higher-level node: "X position"
        self.position_node = PCTNode("Position", reference=0.0, ki=1.0, ko=0.3)
        # Lower-level node: "pendulum angle"
        self.angle_node = PCTNode("Angle", reference=0.0, ki=1.0, ko=4.0)

    def run_cycle(self, current_x, current_angle):
        # Higher level says: "I need this angle to correct position"
        desired_angle = self.position_node.step(current_x)
        # Inject as reference for the lower level
        self.angle_node.reference = desired_angle
        # Lower level generates motor force
        motor_force = self.angle_node.step(current_angle)
        return motor_force

Run it yourself: pip install pct — or copy the skeleton above. Full repository: github.com/perceptualrobots/pct.

What you're looking at: The higher-level node controls position by outputting a desired angle. That desired angle becomes the reference signal for the lower-level node, which outputs motor force. Two nodes, zero physics equations, full pendulum control. This is HPCT — Hierarchical Perceptual Control Theory.
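One cycle of that cascade can be traced end to end. This self-contained mini version mirrors the node logic above, with illustrative sensor readings (a cart half a metre off target, leaning 0.05 rad):

```python
# Compact re-statement of the two-level hierarchy, traced for one cycle
class Node:
    def __init__(self, reference, ki, ko):
        self.reference, self.ki, self.ko = reference, ki, ko

    def step(self, env_var):
        p = self.ki * env_var     # perceive
        e = self.reference - p    # compare
        return self.ko * e        # act

position = Node(reference=0.0, ki=1.0, ko=0.3)  # higher level
angle = Node(reference=0.0, ki=1.0, ko=4.0)     # lower level

current_x, current_angle = 0.5, 0.05
angle.reference = position.step(current_x)   # higher level sets desired angle
force = angle.step(current_angle)            # lower level outputs motor force
print(angle.reference, force)                # prints: -0.15 -0.8
```

Being right of target produces a desired lean of −0.15 rad, and the angle node converts the gap between that and the actual lean into a −0.8 motor force. Reference injection, not physics, couples the levels.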

Inverted Pendulum — Numbers That Hurt LQR

Johnson et al. at the University of Manchester (2020) were the first to directly compare PCT and LQR on the same two-wheeled balancing robot. Here are the results.

Metric                      | LQR                                                                    | PCT / HPCT
Modelling requirement       | Requires full dynamic equations (mass, inertia, friction coefficients) | No physical model — only gains Ki, Ko
Controlled variable         | System output (position, angle)                                        | Perceptual input (what the system sees)
Disturbance rejection       | Susceptible to nonlinearities; ~6 s recovery from a 5 N push           | Near-instantaneous compensation — system barely flinches
Stability under uncertainty | Degrades rapidly with model errors                                     | Maintained via continuous perceptual error reduction

"The performance of the PCT controller is comparable to the LQR controller and better at disturbance rejection."

— Johnson, Zhou, Cheah et al., Journal of Intelligent & Robotic Systems, 2020

Full paper: doi:10.1007/s10846-020-01158-4

Why This Matters

LQR is the gold standard taught in every control theory course on the planet. It won a Bellman Prize. It has 70 years of academic infrastructure behind it. And on the simplest possible benchmark — keep a stick upright — a feedback loop with two gains and no model matched its performance and beat it on robustness.

The question is not whether PCT works. The question is why this result is not on the first slide of every introductory control systems lecture.

Ready-Made Implementations — GitHub, PyPI, MATLAB

  • perceptualrobots/pct — Full PCT library with hierarchical nodes, Welford variance estimator, and ready-to-use examples. The reference implementation.
    Python · MIT License
  • YAML-based DSL for defining PCT hierarchies. CartPole and car model examples. Useful for rapid prototyping without writing boilerplate.
    Python · YAML DSL
  • pip install pct — nodes ready to use in under 30 seconds.
    PyPI Package
  • Barter & Yin (2021) — Quadruped Robot
    HPCT on a four-legged robot: 12 degrees of freedom, torque sensors, terrain adaptation without a predictive model. Published in iScience. doi:10.1016/j.isci.2021.102948
    iScience · Peer-Reviewed

All links verified. No dead repositories, no ghost references. If you find a broken link, let us know — it will be fixed in five minutes.

Next: See how seven AI models independently rediscovered PCT when asked to design an architecture immune to reward hacking → Reward Hacking: Why AI Lies. Or start from the beginning with the core theory.