Juan Benet|Oct 26, 2025 12:51
Hey @hive_echo this is cool. Love the UI to observe the learning process.
Can you get learning if you decouple the input clock cycle from the learning & output cycles?
The chain-of-thought LLM hack is a crude way to add recurrence around the whole transformer, but I bet we’ll find extremely successful ways to add brain-like recurrence within the networks, mediated by internal processes (logic gates that activate subnets/subprograms).
There are whole-brain clock cycles, so you can keep that, but the signals coming in, being processed, learning, thinking, deciding on actions all could be running on separate loops/programs in the brain, not 1 big loop.
You could test this in your network by running the output sampling in a “separate” cycle, or by adding a structure between your hidden layer and the output layer that adds some asynchrony.
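A minimal toy sketch of that decoupling (all names and constants are hypothetical, not from the original network): the hidden state updates on every input tick, but the output is only sampled on its own, slower cycle.

```python
# Toy sketch: input arrives every tick and the recurrent hidden state
# updates every tick, but the output is sampled only every OUT_PERIOD
# ticks -- a crude "separate output cycle".

OUT_PERIOD = 4  # hypothetical: output clock is 4x slower than input clock

def run(inputs):
    hidden = 0.0
    outputs = []
    for t, x in enumerate(inputs):
        # recurrent hidden update on the fast input clock
        hidden = 0.9 * hidden + 0.1 * x
        # output sampled on its own, slower clock
        if t % OUT_PERIOD == OUT_PERIOD - 1:
            outputs.append(round(hidden, 4))
    return outputs
```

The same idea generalizes to learning updates running on yet another period, so sensing, thinking, and acting stop sharing one loop.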
Maybe a way to do this in visual classifiers would be to add an “active focusing” intent:
- build a structure that “focuses” the input: most of the input layer is low-res / colorless (like the eye’s periphery), while part of the input range is focused (higher res, color)
- add a structure between the hidden layer and the output layer that “moves” the focus field across the whole input (like moving the eye, focusing vision on an object)
- add a structure that “looks around then decides when to output” based on confidence increasing above some threshold (this is what “adds the separation of programs”)
- the learning process must learn to “use the eye correctly” to extract enough signal to pass
- ideally, the training set would have examples that are impossible (or very hard) to distinguish without focusing (e.g. text too blurry at low res)
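The bullets above could be sketched as a toy loop (everything here is hypothetical: a 1-D “image” stands in for pixels, and a trivial match score stands in for real classifier confidence):

```python
# Toy sketch of the "active focusing" loop: the fovea sees a small
# window at full resolution, the periphery only a coarse average.
# A scan policy moves the fovea until a confidence score clears a
# threshold, and only then is an output emitted -- the "look around,
# then decide when to output" separation of programs.

FOVEA = 3          # hypothetical width of the high-res window
THRESHOLD = 0.9    # hypothetical confidence needed before outputting

def glimpse(image, pos):
    """High-res fovea patch plus a coarse peripheral summary."""
    patch = image[pos:pos + FOVEA]
    periphery = sum(image) / len(image)  # periphery: one blurry average
    return patch, periphery

def classify(image, target):
    """Scan the fovea until confidence exceeds THRESHOLD, then output."""
    for pos in range(len(image) - FOVEA + 1):
        patch, _ = glimpse(image, pos)
        # stand-in "confidence": fraction of the target seen in the patch
        conf = sum(1 for a, b in zip(patch, target) if a == b) / FOVEA
        if conf >= THRESHOLD:
            return pos, conf    # confident: emit output now
    return None, 0.0            # never confident enough; no output
```

In a real version the scan policy and the confidence estimate would both be learned, which is exactly where “learning to use the eye correctly” comes in.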
Why this matters:
I think the current deep learning approach of single passes across the whole network will turn out to be far less effective than structures with internal feedback loops. We use single passes because of backprop. But the brain uses spiking neurons with only local learning rules.
One branch of brain-inspired AI research asks whether we can get backprop on top of spiking neurons. Another asks “can we get close enough to backprop, but do better in other ways?” — the big advantage to me has always been in decoupling sub-networks and enabling deep recurrent loops — real thinking.
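For contrast with backprop, a local rule can be sketched in a few lines (purely illustrative, not anyone’s actual method): each weight updates from the activity of just the two neurons it connects, with no global backward pass.

```python
# Minimal sketch of a local (Hebbian-style) learning rule:
# w_ij += LR * pre_j * post_i, using only signals available at the
# synapse itself -- which is what lets sub-networks decouple.

LR = 0.5  # hypothetical learning rate

def hebbian_step(weights, pre, post):
    """One local update; weights[i][j] connects pre-neuron j to post-neuron i."""
    return [
        [w + LR * pre[j] * post[i] for j, w in enumerate(row)]
        for i, row in enumerate(weights)
    ]
```

Because each update is local, nothing forces all sub-networks to share one synchronized forward/backward sweep.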
Most higher order animals have very clear thinking and consideration pauses before acting, and active control of sensory & motor systems while learning and deciding how to act. Let alone humans pausing and pondering.
Anyway, I think you could do this, and if you get it working well you could maybe kick off a new paradigm of bio-inspired deep learning! Happy hacking!
cc @EscolaSean70058 @countzerozzz @davidad