The neuron as a direct data-driven controller (DD-DC) by Moore et al., 2024
We explored DD-DC, a provocative normative theory proposing that neurons are not just information processors, but active feedback controllers that steer their environment—including other neurons—toward desired states.
- Paper: The neuron as a direct data-driven controller
- Slides: Drive link
- Presenters: Thelonious (Theo) Cooper, Dmitri (Mitya) Chklovskii
We began with an interview-style segment (00:00:50) where Hadi asked Mitya about his transition from theoretical physics to neuroscience. Mitya discussed the challenge of moving from physics, where you solve problems with existing equations, to biology, which is in a “pre-paradigm” stage without a unified theoretical framework (00:03:30). He also explained that his “neuron as controller” hypothesis was born from the realization that biological circuits are full of loops—something feedforward models fail to capture—and the evolutionary necessity of agency in single-cell organisms (00:06:53).
Following the interview, Theo took the stage (00:10:13) to present the Direct Data-Driven Control (DD-DC) framework. He explained Willems’ Fundamental Lemma, which lets a controller represent a linear system’s behavior directly from recorded input-output data rather than from an explicitly identified model, and highlighted a major win for the framework: it provides a normative explanation for Spike-Timing-Dependent Plasticity (STDP), naturally recovering its asymmetric potentiation and depression curves from optimal control principles (00:42:10). He also demonstrated a counterintuitive result in which noise stabilizes the system by ensuring “persistence of excitation,” allowing the controller to adapt to changing dynamics (00:42:49).
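To make the data-driven idea concrete, here is a minimal NumPy sketch of Willems’ Fundamental Lemma (our own illustration, not code from the paper or slides; the toy system, dimensions, and window length are arbitrary choices): with a persistently exciting input, every short trajectory of a controllable LTI system lies in the column span of Hankel matrices built from a single recorded trajectory, so a controller can work directly from data without ever identifying (A, B).

```python
import numpy as np

def hankel(w, L):
    """Depth-L block-Hankel matrix built from a signal w of shape (T, m)."""
    T, m = w.shape
    cols = T - L + 1
    H = np.zeros((L * m, cols))
    for i in range(L):
        H[i * m:(i + 1) * m, :] = w[i:i + cols].T
    return H

# Toy controllable LTI system x_{t+1} = A x_t + B u_t with the state fully observed.
rng = np.random.default_rng(0)
n, m = 2, 1
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])

# One recorded trajectory driven by random (hence persistently exciting) input.
T = 60
u = rng.normal(size=(T, m))
x = np.zeros((T + 1, n))
for t in range(T):
    x[t + 1] = A @ x[t] + B @ u[t]

L_win = 4  # length of the trajectories we want to parameterize from data

# Persistence-of-excitation check: H_{L+n}(u) must have full row rank.
pe = np.linalg.matrix_rank(hankel(u, L_win + n)) == m * (L_win + n)
print("input persistently exciting of order L+n:", pe)

# Fundamental lemma: any length-L trajectory of the same system is a linear
# combination of the columns of the stacked data Hankel matrix.
H = np.vstack([hankel(u, L_win), hankel(x[:T], L_win)])

# Generate a fresh trajectory and reconstruct it from the recorded data alone.
u_new = rng.normal(size=(L_win, m))
x_new = np.zeros((L_win, n))
x_new[0] = [1.0, -0.5]
for t in range(L_win - 1):
    x_new[t + 1] = A @ x_new[t] + B @ u_new[t]
target = np.concatenate([u_new.ravel(), x_new.ravel()])

g, *_ = np.linalg.lstsq(H, target, rcond=None)
print("trajectory reproduced from data:", np.allclose(H @ g, target, atol=1e-8))
```

The same span property is what a data-driven controller exploits: instead of reconstructing a trajectory, it searches the column space for one that also satisfies a control objective.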
Mitya then presented (00:56:34) an updated view of the theory. He argued that the neuron’s objective isn’t just to stabilize the state at zero, but to “cross an unstable fixed point” (like balancing an inverted pendulum, or taking a step while walking). This formulation naturally leads to a “double or nothing” control law, providing a normative explanation for the ubiquity of rectification (ReLU) and threshold-linear behavior in biological neurons (01:05:33).
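A toy scalar illustration of that intuition (our own construction, not the paper’s derivation; the gain 2a and the exact form of the feedback are assumptions): near an unstable fixed point, a controller that pushes only from one side and stays silent on the other has a ReLU-shaped input-output curve, and it drives the state across the fixed point rather than settling on it.

```python
import numpy as np

# Toy scalar system with an unstable fixed point at x = 0 (think: inverted
# pendulum angle, or the body mid-step while walking).
a = 1.2                      # |a| > 1 makes x = 0 unstable
k = 2.0 * a                  # illustrative "double" gain; not taken from the talk

def control(x):
    # Rectified ("double or nothing") feedback: push hard when the state is on
    # one side of the unstable fixed point, output nothing on the other side.
    return -k * max(x, 0.0)

x, trajectory = 0.8, [0.8]   # start on the side we want to cross away from
for _ in range(6):
    x = a * x + control(x)
    trajectory.append(x)

# The state crosses x = 0 and keeps going, rather than being pinned at zero.
print(np.round(trajectory, 3))
```

The only point of the toy is the shape of the control law: zero output below threshold, linear above it, mirroring a threshold-linear neuron.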
The meeting concluded with final questions (01:13:17), touching on connections between this framework and variational inference, as well as the limitations of standard RL (01:18:30).
Watch the full meeting here: