Imagine an orchestra playing in a dark hall. You sit in the audience and hear the music, but you don't see the musicians. By sound alone, can you determine who is playing right now – a violinist or a cellist? Can you reconstruct the score lying before the conductor? This is precisely the task neurobiologists face when trying to understand what happens in the brain during sleep. We hear the electric melody of the cortex, but we do not see which invisible conductors from the depths of the brain control this performance.
Today I want to tell you about an amazing study in which researchers learned not just to listen to the brain, but to reconstruct those hidden signals arriving at the cortex from other areas – like deciphering the silent cues behind the scenes of an opera. 🎭
When Electricity Becomes a Language
Every night, when we drift into sleep, our brain doesn't switch off – it rewrites its diary. The cerebral cortex and the hippocampus (the area responsible for memory) conduct a nightly dialogue: they exchange the day's impressions, turning fresh memories into long-term archives. But how can we eavesdrop on this conversation?
Neurobiologists use special electrodes that pick up so-called Local Field Potentials – LFPs. It's as if you pressed your ear against a wall and heard the hum of voices from the next room: you can't make out the words, but you catch the rhythm of the conversation, its emotional coloring, the moments of silence and the bursts of laughter. LFPs show us the summed electrical activity of thousands of neurons – their collective dance.
But here is the riddle: we record this dance only from the cortex. And what happens deeper down – in the hippocampus, the thalamus, other parts of the brain? What invisible hands pull the strings of this cortical ballet? To answer this question, we need to learn to reconstruct the hidden signals based only on what we see.
The Inverse Problem: From Effect to Cause
In mathematics and physics, there is a concept called the "inverse problem". A standard, direct problem sounds like this: "A stone with a mass of 1 kg is dropped from a height of 10 meters. Where will it be two seconds later?" The inverse problem runs the other way: "A stone landed right here. From what height was it dropped?" Inverse problems are harder because they can have many answers, and it is often unclear which one is correct.
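To feel the asymmetry, here is the direct problem worked out (a side illustration, assuming the stone is simply dropped and taking $g \approx 9.8\ \text{m/s}^2$):

$$h(t) = h_0 - \frac{1}{2} g t^2, \qquad h(t) = 0 \;\Rightarrow\; t = \sqrt{\frac{2 h_0}{g}} = \sqrt{\frac{2 \cdot 10}{9.8}} \approx 1.43\ \text{s}$$

One question, one answer: after two seconds the stone is already lying on the ground. Run it backwards, though, and knowing only where the stone lies tells you nothing about $h_0$ – any drop height is consistent with the observation.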
In neurobiology, the inverse problem sounds something like this: "I see the cortex generating this specific electric wave. What signal came to it from outside to make it sing like that?" It's like trying to work out what an off-stage actor's line was when all you hear is the reply of the actor on stage.
The study's authors applied a method called data assimilation to solve this problem. It sounds technical, but the idea is actually beautiful, almost poetic. Imagine you have two versions of reality: one is your theoretical model of the world (how, in your opinion, everything should be), and the other is real observation (how it actually is). Data assimilation is the path to reconciling them: you constantly correct the model, checking it against reality, step by step approaching the truth.
This is how weather is forecast in meteorology: there is a physical model of the atmosphere, there are data from weather stations – and every hour the model "corrects itself" against real measurements. Exactly the same can be done with the brain.
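To make the idea tangible, here is a minimal sketch of the predict-correct cycle in Python. The "weather" model, noise levels, and gain below are all made-up illustrations, not the study's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
true_temp = 20.0   # the hidden truth we pretend not to know
estimate = 10.0    # the model's initial (wrong) guess
gain = 0.3         # how strongly observations correct the model

for step in range(20):
    # Predict: the model evolves its own estimate (here: slow drift toward 15).
    estimate += 0.1 * (15.0 - estimate)
    # Observe: a noisy measurement of reality arrives.
    measurement = true_temp + rng.normal(0.0, 1.0)
    # Correct: nudge the model toward the measurement.
    estimate += gain * (measurement - estimate)

print(f"final estimate: {estimate:.1f} (truth: {true_temp})")
```

Each pass through the loop is one "hour" of the forecaster's routine: trust the model a little, trust the measurement a little, repeat.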
The Amari Model: The Electric Landscape of the Cortex
To understand how the method works, we first need to look at the brain model the researchers used. They took the so-called Amari neural field model as their basis. This is not a model of individual neurons (there are billions of them, and simulating each one is hopeless), but a mean-field model – as if the cortex were not a cluster of cells but a continuous, rippling medium.
Picture the surface of a lake. Someone throws a stone at one point – circles spread outward. The wind gusts at another – ripples appear. Waves overlay each other, interfere, fade away. Neural tissue behaves similarly: excitation in one area spreads to its neighbors, amplified in some places (excitatory connections) and damped in others (inhibitory neurons). A complex spatiotemporal picture of activity waves emerges.
The Amari model describes this picture mathematically. It has several key components:
- Activity – the "wave height" at every point of the cortex at every moment in time.
- Connections between neurons – modeled by a function shaped like a Mexican hat: 🌵 close neighbors excite each other (the raised crown of the hat), while more distant ones are suppressed (the brim dipping below zero). This is how patterns arise: islands of activity surrounded by zones of silence.
- External input – a signal arriving from outside the cortex. This is precisely what the researchers wanted to reconstruct. It is like an unknown instrumental part that has to be guessed from the orchestra's overall sound.
The model's formula looks intimidating, but its essence is simple: activity at the next moment depends on activity now, on the influence of neighbors, and on an external push. It is a differential equation, but for the computer it is turned into a sequence of steps – it is discretized (the equation itself is sketched just below).
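For the curious, here is the textbook form of the Amari equation (the paper's notation may differ slightly):

$$\tau \frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int w(x - y)\, f\big(u(y,t)\big)\, dy + I(x,t)$$

where $u$ is the activity ("wave height"), $w$ is the Mexican-hat connectivity kernel, $f$ is a sigmoidal firing-rate function, and $I$ is the external input the researchers set out to reconstruct. A simple Euler discretization turns it into the stepwise update a computer actually runs:

$$u_{t+\Delta t}(x) = u_t(x) + \frac{\Delta t}{\tau}\Big[-u_t(x) + \sum_y w(x - y)\, f\big(u_t(y)\big)\,\Delta y + I(x,t)\Big]$$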
The Particle Filter: A Dance of Probabilities
Now for the most interesting part: how do we reconstruct both the state of the cortex and the parameters of the external signal from observed waves? Here, an elegant Bayesian method called the particle filter enters the stage.
The Bayesian approach is a philosophy of working with uncertainty. Instead of asserting "The parameter equals 5", it says: "With 60% probability the parameter is around 5, with 30% probability around 4, and with 10% probability it is actually 7." This probability distribution is a cloud of hypotheses in which the more likely options shine brighter.
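Written as a formula, this is Bayes' rule: the cloud of hypotheses is the posterior, reshaped by every new observation:

$$p(\theta \mid \text{data}) \;\propto\; p(\text{data} \mid \theta)\, p(\theta)$$

The prior $p(\theta)$ is what you believed before, the likelihood $p(\text{data} \mid \theta)$ measures how well a hypothesis explains what you saw, and the posterior is the updated cloud.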
The particle filter works like this: you create a whole swarm of hypotheses – each "particle" represents one possible combination of parameters and state. At first the particles are scattered randomly. You then step forward in time: each particle evolves according to the model. Next you look at the real data: particles that predicted something close to the observations gain weight (shine brighter), while those that erred grow dim.
After each observation, you redistribute the particles: you discard the weak ones and multiply the strong ones. It is the natural selection of ideas! 🦋 Over time, the swarm converges to the correct answer: the cloud of probability condenses around the truth.
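Here is a minimal bootstrap particle filter on a toy model – a random-walk state observed through noise. The model and numbers are my stand-ins, not the paper's Amari field:

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_steps = 500, 100
proc_std, obs_std = 0.1, 0.5

# A "true" hidden trajectory and its noisy observations.
true_x = np.cumsum(rng.normal(0, proc_std, n_steps))
obs = true_x + rng.normal(0, obs_std, n_steps)

particles = rng.normal(0, 1, n_particles)  # the initial swarm of hypotheses
estimates = []
for y in obs:
    # 1. Predict: each particle evolves according to the model.
    particles += rng.normal(0, proc_std, n_particles)
    # 2. Weight: particles near the observation shine brighter.
    weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # 3. Resample: discard the dim, multiply the bright.
    particles = rng.choice(particles, n_particles, p=weights)
    estimates.append(particles.mean())

print(f"mean tracking error: {np.mean(np.abs(np.array(estimates) - true_x)):.3f}")
```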
But there is a problem: if you need to estimate both the parameters (for example, the frequency and amplitude of the external signal) and the state (the cortical activity itself) at the same time, too many particles would be required. The solution is a nested filter: for each set of parameters, its own swarm of state particles is launched. It's like a matryoshka doll: the outer level sorts through parameter values, while the inner level runs states for each parameter (a code skeleton of this nesting follows below).
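A skeleton of that nesting might look like this: the outer swarm guesses a parameter (here, an unknown observation noise level – my toy choice), and each guess carries its own inner swarm of states. A real implementation would also rejuvenate the parameter particles to keep the outer swarm diverse:

```python
import numpy as np

rng = np.random.default_rng(2)
n_theta, n_state, n_steps = 50, 200, 60
true_obs_std = 0.5

true_x = np.cumsum(rng.normal(0, 0.1, n_steps))      # hidden trajectory
obs = true_x + rng.normal(0, true_obs_std, n_steps)  # what we measure

thetas = rng.uniform(0.1, 2.0, n_theta)              # outer: parameter guesses
states = rng.normal(0, 1, (n_theta, n_state))        # inner: states per guess

for y in obs:
    states += rng.normal(0, 0.1, states.shape)       # inner predict
    lik = (np.exp(-0.5 * ((y - states) / thetas[:, None]) ** 2)
           / thetas[:, None] + 1e-12)                # Gaussian likelihoods
    theta_w = lik.mean(axis=1)                       # outer weight = average inner fit
    theta_w /= theta_w.sum()
    for i in range(n_theta):                         # resample each inner swarm
        states[i] = rng.choice(states[i], n_state, p=lik[i] / lik[i].sum())
    idx = rng.choice(n_theta, n_theta, p=theta_w)    # resample the outer swarm
    thetas, states = thetas[idx], states[idx]

print(f"estimated observation noise: {thetas.mean():.2f} (truth: {true_obs_std})")
```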
A Check on an Artificial Brain
Before applying the method to a real mouse, the scientists tested it on synthetic data – a virtual brain. They set up the model themselves with known parameters, generated a signal from it (adding noise for realism), and then "forgot" the parameters and tried to recover them.
For the external influence, they chose a traveling wave with a changing frequency – a so-called "chirp". It's like a whistle that starts low and then smoothly rises. Such waves are typical of the brain: when a signal spreads through the cortex, it doesn't just pulsate in place but runs in some direction, like a ripple across water.
This wave had three main parameters (a toy version is sketched in code right after this list):
- Amplitude (how strong it is)
- Spatial frequency (how often wave crests alternate in space)
- Temporal frequency (how fast the wave runs)
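Here is one way to generate such an input with these three knobs exposed (the functional form and values are my illustration, not the paper's exact signal):

```python
import numpy as np

x = np.linspace(0, 10, 200)   # space, mm
t = np.linspace(0, 5, 500)    # time, s
X, T = np.meshgrid(x, t)

A = 1.0                       # amplitude: how strong the wave is
nu = 0.2                      # spatial frequency: crests per mm
f0, f1 = 0.5, 2.0             # temporal frequency sweeps from f0 to f1, Hz

# Phase is the integral of the instantaneous frequency f(t) = f0 + (f1 - f0) * t / t_max.
phase_t = f0 * T + 0.5 * (f1 - f0) * T**2 / t[-1]
wave = A * np.sin(2 * np.pi * (nu * X - phase_t))
print(wave.shape)             # (500, 200): one spatial snapshot per time step
```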
The result was encouraging: after a short "warm-up", the filter accurately reconstructed the amplitude and spatial frequency. Temporal frequency proved harder, because in reality it kept changing: the model assumed a constant frequency, while the real wave's frequency drifted. But the filter didn't break; instead it honestly reported: "The frequency is roughly this, but it's unstable; the model isn't quite right." This is exactly what is needed: the method doesn't just fit an answer, it diagnoses discrepancies in the model.
The reconstruction error for the cortical state was about 0.56 millivolts – a very good result for such a complex signal. The synthetic test was passed! ✅
A Journey into the Brain of a Sleeping Mouse
Now – to reality. Scientists took recordings of local field potentials from the cortex of a mouse drifting into natural sleep. The recording lasted 15 minutes (900 seconds), and during this time the mouse went through different phases: rapid eye movement sleep (REM, when dreams occur), slow-wave sleep (SWS, deep sleep with large slow waves), and indeterminate transitional states.
The LFP was recorded from a two-dimensional grid of electrodes – as if you photographed the lake's surface from many points at once. At every moment this yields a 2D picture of activity: higher here, lower there, waves running, patterns changing.
When scientists analyzed these data, they discovered several key frequencies:
- A dominant wave around 0.5 Hz (one cycle every two seconds) – slow oscillations characteristic of deep sleep
- Additional components around 2 Hz and 6 Hz – faster ripples
In space, long, smooth waves prevailed – the cortex breathed slowly and deeply, without sharp, small eddies.
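Dominant peaks like these are typically read off a power spectral density. Here is a generic sketch with scipy, on a synthetic stand-in trace carrying the reported rhythms (the sampling rate is my assumption):

```python
import numpy as np
from scipy import signal

fs = 100.0                             # assumed sampling rate, Hz
t = np.arange(0, 900, 1 / fs)          # 15 minutes, as in the recording
rng = np.random.default_rng(3)
lfp = (np.sin(2 * np.pi * 0.5 * t)     # slow oscillation
       + 0.4 * np.sin(2 * np.pi * 2.0 * t)
       + 0.2 * np.sin(2 * np.pi * 6.0 * t)
       + 0.3 * rng.normal(size=t.size))

freqs, psd = signal.welch(lfp, fs=fs, nperseg=4096)
peaks, _ = signal.find_peaks(psd, height=0.01 * psd.max())
print(f"spectral peaks near {np.round(freqs[peaks], 1)} Hz")
```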
The Secret of the Radon Transform
Working with a 2D signal is computationally heavy: there are too many points, too many particles to track. To simplify the task, researchers applied an elegant trick – the Radon transform. This is a mathematical operation that turns a 2D picture into a set of 1D projections.
Imagine shining a flashlight on a sculpture from different angles and looking at its shadow on the wall. Each shadow is a one-dimensional projection of a three-dimensional object. The Radon transform does the same for the 2D LFP signal: it projects it onto lines at different angles. And remarkably, these projections preserve the key frequency content (this is the projection-slice theorem at work: the 1D Fourier spectrum of a projection equals a slice through the 2D spectrum). Having compared the spectrum of the original 2D signal with those of its 1D projections, the scientists verified that nothing important was lost.
Now, instead of one hard 2D problem, several simple 1D ones can be solved – for projections at angles of 0°, 60°, 90°, and 150°. It's like studying a person from four photographs taken at different angles instead of immediately building a 3D model.
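In code, the projection step might look like this, using scikit-image's radon on a synthetic plane-wave frame (a stand-in for a real LFP snapshot):

```python
import numpy as np
from skimage.transform import radon

x = np.linspace(0, 2 * np.pi, 64)
frame = np.tile(np.sin(3 * x), (64, 1))   # a plane wave across the 2D grid

angles = [0.0, 60.0, 90.0, 150.0]         # degrees, as in the study
sinogram = radon(frame, theta=angles, circle=False)
print(sinogram.shape)                     # (projection length, 4): one 1D profile per angle
```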
What the Parameters Told Us
When the particle filter started working on real data, the most thrilling part began. The parameters of the reconstructed external signal started telling a story about what was happening in the mouse's brain.
Spatial frequency ν quickly stabilized at a low value – about 0.02 per millimeter. This confirmed: the waves were indeed long, smooth, enveloping the whole cortex, rather than sharp and local.
Amplitude A behaved as if alive: it breathed in time with the brain's state. During slow-wave sleep (SWS) it became more stable, as if the brain had found its rhythm and held it. At the transition from an indeterminate state into deep sleep, the amplitude jumped sharply – as if the brain had shifted gears. These dynamics matched the envelope of the real LFP signal surprisingly well: when the waves in the recording grew more powerful, A rose; when they quieted down, it fell. The external input reconstructed by the filter really did turn out to be the invisible puppeteer controlling the cortex.
Temporal frequency f turned out to be the "most talkative". During SWS it tended toward small negative values – around –0.5 Hz. The sign of the frequency indicates the direction of the traveling wave: positive means the wave goes one way, negative – the other. In the other phases of sleep the frequency fluctuated, finding no rest. But here is what is striking: roughly 340 seconds in – right before the onset of SWS – both A and f sharply changed their behavior. It is as if the orchestra's conductor had suddenly been replaced, and the music became different.
Scientists also analyzed parameter distributions within time blocks. It turned out they are multimodal: not one clear peak, but several. This suggests that the sleeping brain is non-stationary: it doesn't linger in one state but jumps between regimes. During SWS the distributions became simpler and smoother – the brain had found its footing. And in the REM phase (rapid eye movement, when dreams occur) they became complex again: dreams are full of chaos and fantasy, and this shows up even in the parameter statistics!
Reconstruction Quality: A Coincidence of Symphonies
How well did the model reproduce reality? The average error was 1.5–2.5 millivolts – quite acceptable for such a noisy and capricious signal as LFP. But the most convincing check was the comparison of frequency spectra.
When scientists built the spectrum of the signal reconstructed by the model and overlaid it on the spectrum of the real recording, they coincided! All major peaks – 0.5 Hz, 2 Hz, 6 Hz – were in place. This means the model grasped the essence: it reproduced not just the wave form at every moment, but the very musical structure of the brain – its rhythms, harmonics, overtones. The model sounded in unison with reality.
And importantly: the results depended little on the projection angle. Whether 0° or 150°, the reconstructed parameters turned out similar. This suggests that the spatial structure of LFP during sleep is fairly isotropic – identical in all directions, like ripples on a round lake without wind.
What It All Means
This work is not just a technical success in data processing. It is a step toward understanding how the brain communicates with itself. We have learned to reconstruct invisible messages that the cortex receives from the depths of the brain during sleep. We saw how transitions between sleep phases are reflected in the parameters of external influence: as if someone behind the scenes changes the score, and the cortex obediently adjusts.
The model used by the researchers is extremely simple: one population of neurons, simplified connections, idealized input. The real brain is infinitely more complex. But even this simple model was able to catch the essence – and this means the method works. As Einstein said: "Everything should be made as simple as possible, but not simpler."
Bayesian data assimilation opens new horizons. We can not only observe the brain but also ask it questions: "What signal led to this activity? Why did you suddenly change rhythm? What is the hippocampus telling you in your sleep?" And the brain answers – through the language of probabilities, through parameter distributions, through reconstructed fields of activity.
Where to Next
The authors look to the future with optimism. The model can be complicated: add excitatory and inhibitory neuron populations (in reality, they work in pairs), account for connection heterogeneity (the cortex is not a smooth lake, but a landscape with elevations and depressions), and model external inputs in more detail. The method can be applied to multi-channel recordings without dimensionality reduction – it will just require more computing power.
But the most thrilling prospect is decoding dialogues between brain areas. Imagine: we could reconstruct exactly what the hippocampus «says» to the cortex during memory consolidation, how the thalamus controls transitions between sleep stages, what signals come from subcortical structures during decision-making. It is like learning to read letters exchanged between different provinces of one empire – the empire of reason.
The brain is not just a machine. It is a poem of electricity and chemistry, a symphony of waves and rhythms, where each area plays its part, and together they create what we call consciousness, memory, sleep. And now we have a tool to listen to this music not just as spectators in the hall, but as conductors with a score in hand – understanding where every note comes from, how they weave into chords, how meaning is born from the chaos of signals.
We are learning to read the brain's electric dreams. And this is only the beginning. 🌙