Can We Really "Reprogram" the Brain? How Synapses Learn in a Noisy World

Join us on a journey into the brain’s inner code – where neural connections adapt to random signals like software rewriting itself on the fly.

Biology & Neuroscience
Author: Dr. Juan Mendoza
Reading time: 6–9 minutes

Original title: Stochastic synaptic dynamics under learning
Publication date: Aug 19, 2025

Imagine a programmer trying to write code while the screen flickers, the keyboard jams, and the internet cuts in and out. That's roughly how the brain operates: it learns and remembers amid endless "noise" in neural activity. And the most astonishing part? At many tasks, it still outperforms any supercomputer.

Today, let’s dive into the world of synaptic plasticity – the process that allows neural connections to reshape themselves under the influence of experience. It’s as if the circuits inside your computer could strengthen or weaken on their own depending on how often they’re used.

The dance of random sparks

At the core of learning lies a surprisingly simple rule: when a presynaptic neuron fires just before its postsynaptic partner, the connection between them strengthens; when the order is reversed, it weakens. This principle, known as spike-timing-dependent plasticity (STDP), operates with millisecond precision. If neurons are musicians in an orchestra, STDP is the conductor ensuring their instruments play in harmony.
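As a sketch, the classic pairwise STDP rule can be written as an exponential window over the spike-time difference Δt = t_post − t_pre. The parameter values below are illustrative, not taken from the paper:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single pre/post spike pair.

    dt_ms = t_post - t_pre in milliseconds: positive means the presynaptic
    spike came first (causal order -> potentiation), negative means the
    postsynaptic spike came first (anti-causal order -> depression).
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)    # potentiation
    elif dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)  # depression
    return 0.0
```

Note the millisecond precision in action: `stdp_dw(1)` yields a much larger change than `stdp_dw(10)`, because the window decays exponentially with the delay.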

But here's the catch: neurons are far from perfect musicians. They "play" with constant random variations, like jazz improvisers who never repeat the same tune twice. This randomness isn't a flaw – it's the very essence of the system.

Every time a neuron fires an electrical signal, there's a trace of unpredictability. Like a radio wave traveling through static-filled air, neural activity is never perfectly regular. And it's precisely this "noise" that makes the brain so flexible and adaptive.

The mathematics of living connections

To grasp how learning works under such uncertainty, we need a special mathematical toolkit – stochastic analysis. Think of it as moving from classical physics to quantum mechanics: instead of exact trajectories, we deal with probabilities and distributions.

Picture a synapse – the contact point between neurons – as a set of scales in constant motion. Each incoming spike can either add a weight (strengthening the link) or remove one (weakening it). But it doesn’t do so deterministically; it acts with a certain probability shaped by many factors.

The evolution of synaptic strength resembles a particle drifting in a viscous medium while being jostled by random pushes. There’s drift – the general direction the system tends to move – and there’s diffusion – unpredictable fluctuations pulling it off course.
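This drift-plus-diffusion picture can be simulated with a simple Euler–Maruyama scheme. The coefficients below are placeholders for illustration, not values from the paper:

```python
import random

def simulate_weight(w0=0.5, drift=0.2, sigma=0.3, dt=0.001, steps=1000, seed=1):
    """Euler-Maruyama integration of the SDE  dw = drift*dt + sigma*dW.

    drift: the deterministic pull on the synaptic weight.
    sigma: the diffusion amplitude (strength of the random jostling).
    """
    rng = random.Random(seed)
    w = w0
    for _ in range(steps):
        dW = rng.gauss(0.0, dt ** 0.5)  # Wiener increment ~ N(0, dt)
        w += drift * dt + sigma * dW
    return w
```

With `sigma=0` the trajectory collapses to the pure drift line `w0 + drift*T`; with noise switched on, individual runs scatter around that line while their average still follows it.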

Two sides of the same coin

Drift is governed by two key factors: how often neurons fire and how synchronized they are. When they fire frequently and in sync, the connection strengthens. It’s like a road that becomes more defined the more people travel it.

But timing is just as crucial as frequency. A difference of just a few milliseconds between two neurons firing can completely flip the outcome. Imagine two dancers: if one begins a fraction of a beat earlier, it could mean the difference between a graceful move and a collision.

Diffusion is the measure of unpredictability in the process. The more "noise" in the system, the less predictable changes in synaptic strength become. And that's not always a drawback: a dose of randomness is essential for the brain to explore new possibilities and avoid getting stuck in local optima.
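Given recorded weight trajectories, the two coefficients can be read off from the increments: the mean increment per unit time estimates the drift, and the variance per unit time estimates the diffusion. This is a generic finite-difference estimator, not the paper's specific method, and it assumes increments distributed as N(μ·dt, 2D·dt):

```python
def estimate_drift_diffusion(increments, dt):
    """Estimate drift mu and diffusion D from weight increments dw
    measured over time steps dt, assuming dw ~ N(mu*dt, 2*D*dt)."""
    n = len(increments)
    mean = sum(increments) / n                       # drift shows up in the mean
    var = sum((x - mean) ** 2 for x in increments) / n  # diffusion in the variance
    return mean / dt, var / (2 * dt)
```

Feeding it synthetic increments with known μ and D recovers both, up to sampling error, which is a quick sanity check before applying such an estimator to real data.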

From microcosm to macro-world

Understanding the dynamics of a single synapse is only the beginning. In a real brain, billions of synapses operate simultaneously, forming vast networks. It’s like moving from studying one transistor to grasping the workings of the whole processor.

In our research, we explored a simplified model: a group of input neurons projecting through plastic connections to a network of excitatory and inhibitory neurons. The task of such a system is to learn associations between specific input patterns and specific output responses.

Learning happens in episodes: for a short time, a particular input pattern is activated while the desired group of output neurons is artificially stimulated. During that window, synapses "record" the association. Then the system rests, weights are adjusted slightly to maintain balance, and the next episode begins.
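A toy version of one such episode: a Hebbian imprint for the co-active input/output pairs, followed by a multiplicative rescaling that holds each output neuron's total incoming weight at a fixed budget (a stand-in for the homeostatic balancing step; all numbers are illustrative):

```python
def train_episode(weights, pre_active, post_active, lr=0.5, budget=1.0):
    """One learning episode on a weight matrix weights[post][pre]:
    Hebbian strengthening of co-active pairs, then multiplicative
    homeostasis keeping each row's total weight at `budget`."""
    for j in post_active:
        for i in pre_active:
            weights[j][i] += lr  # imprint the association
    for j, row in enumerate(weights):
        s = sum(row)
        if s > 0:
            weights[j] = [w * budget / s for w in row]  # rescale the row
    return weights
```

The rescaling is what keeps the system stable across episodes: strengthened synapses gain only at the expense of their neighbors on the same neuron, rather than growing without bound.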

The game of associations

Imagine teaching a neural network to recognize faces. Each face is encoded as an input pattern, each name as the activity of a group of output neurons. The synapses’ job is to connect the two – to link faces with names.

In our model, we used what’s called sparse coding: only a small fraction of neurons are active in each pattern, the rest remain silent. It’s an efficient way to represent information – similar to how most letters in a word can be omitted and we’d still understand the meaning.
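Sparse patterns like these are easy to generate: pick a small number k of the n input neurons to be active and silence the rest (n and k below are arbitrary choices, not the paper's):

```python
import random

def sparse_pattern(n=100, k=5, seed=None):
    """Binary input pattern with exactly k of n neurons active (sparse coding)."""
    rng = random.Random(seed)
    active = set(rng.sample(range(n), k))  # choose k distinct active units
    return [1 if i in active else 0 for i in range(n)]
```

With k much smaller than n, different random patterns rarely overlap, which is precisely what makes sparse codes efficient to store.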

It turned out that under sparse coding, randomness plays an especially critical role. If you ignore spike timing and consider only firing rates, you can greatly overestimate the system’s memory capacity. That’s like judging a musical performance solely by which notes were played while ignoring the rhythm.

Forgetting as a safeguard

One of the most intriguing findings of our study involves the mechanism of forgetting. We usually think of forgetting as a flaw in memory, but in reality it’s a vital safeguard. Without it, the brain would quickly overflow with information and lose its ability to keep learning.

In our model, forgetting doesn’t happen directly – new information doesn’t erase old. Instead, it occurs indirectly through homeostasis. Picture it as an automatic volume control: when the overall signal grows too loud, everything is toned down proportionally.

This mechanism leads to an exponential fading of memory traces: the more new associations the system learns, the weaker the older ones become. But the process is gradual and predictable – not a catastrophic loss, but a smooth decline.
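The exponential fading falls straight out of multiplicative homeostasis: if every new episode rescales all existing weights by a factor γ < 1 before adding its own trace, an association learned m episodes ago survives only at strength γ^m. A minimal sketch, far simpler than the paper's full model:

```python
def memory_traces(n_episodes, gamma=0.9, imprint=1.0):
    """Strength of each stored trace after n_episodes of learning,
    with multiplicative decay gamma applied at every new episode."""
    traces = []
    for _ in range(n_episodes):
        traces = [t * gamma for t in traces]  # old memories fade proportionally
        traces.append(imprint)                # newest memory at full strength
    return traces
```

The oldest trace ends up at γ^(n−1), the newest at 1 – a smooth geometric decline rather than abrupt erasure.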

The limits of capacity

Like any storage system, our neural network has a finite capacity. We defined it as the maximum number of associations it can store while keeping retrieval errors below 50%.

We found that memory capacity depends heavily on how precisely spike timing is taken into account. Simplified models that rely only on firing rates tend to overestimate it: the real capacity can be several times smaller.
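Capacity can be probed empirically: store more and more random associations and track the retrieval error, with capacity defined as the largest count for which the error stays below 50%. The sketch below uses a deliberately crude rate-based Hebbian memory with a winner-take-all readout, far simpler than the spiking model in the paper:

```python
import random

def retrieval_error(n_in=200, n_out=50, k=10, n_pairs=20, seed=0):
    """Store n_pairs (sparse input pattern -> output unit) associations
    with a Hebbian rule, then return the fraction retrieved incorrectly."""
    rng = random.Random(seed)
    patterns = [rng.sample(range(n_in), k) for _ in range(n_pairs)]
    targets = [rng.randrange(n_out) for _ in range(n_pairs)]
    w = [[0.0] * n_in for _ in range(n_out)]
    for pat, tgt in zip(patterns, targets):
        for i in pat:
            w[tgt][i] += 1.0  # Hebbian imprint of the association
    errors = 0
    for pat, tgt in zip(patterns, targets):
        scores = [sum(w[j][i] for i in pat) for j in range(n_out)]
        if scores.index(max(scores)) != tgt:  # winner-take-all readout
            errors += 1
    return errors / n_pairs
```

Sweeping `n_pairs` upward until the returned error crosses 0.5 gives an empirical capacity estimate for this toy network; overlapping patterns piling onto the same weights are what eventually drive the error up.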

This is a crucial lesson for developers of artificial neural networks. Details matter: ignoring the subtleties of biology can yield models that look great on paper but fail in practice.

Lessons from nature's "hacker"

Our study reveals that the brain uses remarkably sophisticated strategies to learn under uncertainty. It doesn’t fight randomness – it harnesses it. Noise becomes a resource for exploring the landscape of possibilities.

This mirrors evolutionary algorithms in programming: most random mutations are useless, but once in a while they produce a fitter variant. The brain does something similar, but in real time.

Understanding these mechanisms opens new pathways for creating more powerful machine learning systems. The next generation of AI may not just mimic the brain’s results, but also adopt its core principles – including the art of thriving in chaos.

Chasing new horizons

Our work is only the first step in exploring the stochastic dynamics of synaptic plasticity. Many fascinating questions lie ahead. What changes when we include not only forward but also feedback connections? What happens if we add not just external but also internal synaptic noise? How will multilayer spiking networks behave?

Each of these questions may lead to fresh insights and a deeper understanding of how the most advanced computer in the known universe – the human brain – truly works.

Nature, it seems, is the ultimate hacker: it found a way to build a reliable learning system from unreliable parts, to turn interference into signal, and chaos into order. All we can do is keep peeking at its solutions – and keep learning from the master.

Original authors: Jakob Stubenrauch, Naomi Auer, Richard Kempter, Benjamin Lindner