"When I was digging into this research, one thought really stuck with me: we've spent so long teaching algorithms to be accurate that we forgot to teach them to be honest. These barriers aren't about perfection, but about the ability to admit 'I don't know' and act accordingly. I wonder how many real catastrophes could have been prevented if autonomous systems had learned sooner to say: 'I'm not confident enough to take the risk.'" – Dr. Kim Lee
Imagine you're driving a car in thick fog. You don't see the road perfectly, just blurry outlines and approximate distances. And you need to guarantee that you won't drive into a ditch. How do you act? You drive slower, stay further from the edge, add a margin of safety. That's roughly how autonomous systems – drones, robots, self-driving cars – work when their sensors are noisy and the true state is unknown. Only instead of fog, they face uncertainty in their state estimate. And instead of a driver's intuition, they have mathematics.
The Problem: When the Robot Doesn't Know Where It Actually Is
In an ideal world, a robot knows its position, velocity, and orientation exactly. Push a button and get coordinates down to the millimeter. But reality is cruel: GPS "lies" by several meters, accelerometers drift, gyroscopes accumulate errors. Sensors are noisy, like an old TV set without an antenna. So the robot is forced to work not with its true state, but with an estimate of it: blurry, probabilistic, inaccurate.
This creates a fundamental problem for safety systems. Classical Control Barrier Functions are like an invisible fence that prevents the system from going outside the safe zone. They work great if you know exactly where you are. But what if your position estimate might be off by a meter? Two meters? Ten? Then you might think you're safe, while you've actually already hit a wall.
Moreover, many robots don't live in the familiar Euclidean space with axes X, Y, Z. A quadcopter rotates in 3D space; its orientation is described by the rotation group SO(3). An underwater vehicle moves and rotates simultaneously; its pose lives in the group of rigid transformations SE(3). These mathematical objects are called Lie groups, and ordinary geometry doesn't work on them. You can't simply add two rotations the way you add vectors. You need special tools.
What Are Control Barrier Functions?
A Control Barrier Function (CBF) is a mathematical guardian of safety. Imagine a function b(x) that is positive inside the safe zone and negative outside. The safety boundary is where b(x) = 0. The controller's task is to pick control inputs so that b(x) always remains non-negative. It's as if you had a distance sensor to a cliff edge, and you constantly correct your trajectory so the sensor readings don't drop to zero.
In a deterministic world, this works perfectly. But add stochasticity – random gusts of wind, vibrations, unpredictable disturbances – and the picture gets more complicated. For stochastic systems, Stochastic Control Barrier Functions (SCBF) were developed. They use the Itô operator – a tool from stochastic analysis that describes how functions change under the influence of random processes. SCBF requires that the average change of the barrier function plus its diffusion (spread) satisfy a certain condition. Roughly speaking: even if the system is being kicked around by random forces, on average it must move towards safety.
But even SCBFs assume that the true state x is known. And what if only the estimate x̂ with error covariance P is known? That's where the new story begins.
The Extended Barrier Function: Accounting for the Fog of Uncertainty
The idea is as simple as it is brilliant: since we don't know the true state exactly, let's shrink the safe zone. The greater the uncertainty – the further from the edge we need to stay. It's like driving in fog: the thicker the fog, the slower the speed and the larger the gap from the shoulder.
Mathematically, it looks like this. Let b(x) be the ordinary barrier function. We introduce a modified version:
b_est(x̂, P) = b(x̂) − c·√(tr(P))
Here x̂ is the state estimate, P is the error covariance matrix (it describes how unsure we are about the estimate), tr(P) is the trace of the matrix (roughly speaking, the total uncertainty), and c is a constant determining how conservatively we behave. The higher the uncertainty, the more the safe zone shrinks.
It's like the game of "Hot and Cold": if you know exactly where the target is, you can walk right up to the boundary. But if you only know approximately, you'd better stop early so you don't accidentally overstep it.
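As a minimal sketch, the shrunken barrier from the formula above is a one-liner. The circular toy barrier, the positions, and the value of c below are illustrative choices, not taken from the article:

```python
import numpy as np

def b_est(b, x_hat, P, c=2.0):
    """Shrunken barrier: subtract an uncertainty margin from b(x_hat).

    b      -- nominal barrier function, positive in the safe set
    x_hat  -- state estimate
    P      -- estimation-error covariance
    c      -- conservativeness constant (illustrative value)
    """
    return b(x_hat) - c * np.sqrt(np.trace(P))

# Toy barrier: stay inside a circle of radius 5 m around the origin.
b = lambda x: 5.0**2 - float(x @ x)

x_hat = np.array([3.0, 0.0])      # estimated position, b(x_hat) = 16
P_small = 0.01 * np.eye(2)        # confident estimate: small margin
P_large = 4.0 * np.eye(2)         # uncertain estimate: large margin

print(b_est(b, x_hat, P_small))   # barely below b(x_hat)
print(b_est(b, x_hat, P_large))   # heavily shrunken safe zone
```

The same estimate is judged much closer to danger when the covariance is large, which is exactly the "thicker fog, bigger gap" behavior.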
Uncertainty Dynamics: Covariance Has a Life of Its Own Too
The problem is that the covariance P doesn't stand still. It changes over time: it grows due to process noise (the robot moves, and uncertainty accumulates) and shrinks due to measurements (sensors provide new info). This dynamic is described by the Kalman filter for linear systems or the Extended Kalman Filter (EKF) for nonlinear ones.
The Kalman filter is like a smart assistant that constantly updates your estimate of the robot's position. It takes the previous estimate, predicts where the robot should be now (considering control inputs), then gets a new measurement from sensors and combines the prediction with the measurement, weighing them by reliability. Covariance P is like a trust indicator: small P means «we're almost sure», large P means «we have no clue».
The key idea: we use the dynamics of P in our barrier function. When we apply Itô's formula (the stochastic analog of the ordinary derivative) to b_est(x̂, P), terms appear that depend on how P changes. This allows the controller to adapt: if uncertainty grows, the controller becomes more cautious. If a new measurement has refined the position (P decreased), we can afford a bit more freedom.
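A minimal sketch of the covariance recursion the text describes: prediction inflates P, a measurement update deflates it. The model matrices below (F, Q, H, R) are illustrative values for a 1D constant-velocity system, not from the article:

```python
import numpy as np

F = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition (dt = 0.1 s)
Q = 0.01 * np.eye(2)                      # process noise
H = np.array([[1.0, 0.0]])                # we measure position only
R = np.array([[0.5]])                     # measurement noise

P = np.eye(2)                             # current uncertainty

def predict(P):
    # Moving without new information: uncertainty grows.
    return F @ P @ F.T + Q

def update(P):
    # A fresh measurement arrives: uncertainty shrinks.
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    return (np.eye(2) - K @ H) @ P

P_pred = predict(P)
P_upd = update(P_pred)
print(np.trace(P), np.trace(P_pred), np.trace(P_upd))
```

The trace of P (the "total uncertainty" fed into the barrier) rises on prediction and falls on update, which is what lets the controller loosen up right after a good measurement.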
Probabilistic Guarantees: Not Absolute Protection, But Very Close
In a stochastic world, you can't guarantee safety absolutely. There is always a chance that an incredibly improbable event will still happen: a gust of wind, a sensor glitch, a meteorite. Instead, we guarantee safety with high probability. For example: "The probability that the robot remains in the safe zone for the next 10 seconds is at least 99%."
How does this work? We use concentration inequalities – powerful tools from probability theory. For instance, Chebyshev's inequality says: if a random variable has mean μ and variance σ², then the probability of deviating from the mean by more than k·σ does not exceed 1/k². Translating to our language: if the estimate x̂ has covariance P, then the true state x with high probability lies inside the ellipsoid defined by P.
We link the safety of the estimate (b_est(x̂, P) ≥ 0) to the safety of the true state (b(x) ≥ 0) through these inequalities. If we keep the estimate far enough from the boundary to account for uncertainty, then the true state will also remain safe with a specified probability (say, 99% or 99.9%).
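The margin can be sized directly from Chebyshev's inequality: to cap the violation probability at δ, keep the estimate at least k = 1/√δ standard deviations from the boundary, since then 1/k² = δ. A tiny sketch:

```python
import math

def chebyshev_margin(delta):
    """Number of standard deviations k such that, by Chebyshev's
    inequality, P(|error| > k * sigma) <= 1 / k**2 = delta."""
    return 1.0 / math.sqrt(delta)

for delta in (0.1, 0.01, 0.001):
    print(delta, chebyshev_margin(delta))
```

Note that Chebyshev is distribution-free and therefore conservative: for a Gaussian error, 99% confidence needs only about 2.6 standard deviations, not 10. That conservatism is the price of making no assumptions about the error distribution.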
Lie Groups: When Space Is Curved
Now for the most interesting part. Imagine your robot is a quadcopter. Its position (X, Y, Z) lives in ordinary coordinates; nothing unusual there. But its orientation? That is a 3×3 rotation matrix, an element of the group SO(3). You can't just take two orientations and add them. You can't multiply an orientation by a number. The space of orientations is a curved manifold where ordinary Euclidean geometry doesn't work.
Analogy: imagine walking on the surface of a sphere (say, the Earth). If you walk 100 km north, then 100 km east, then 100 km south, and then 100 km west, will you end up at the same point? On a plane – yes. On a sphere – no! The space is curved, and the usual rules don't apply.
Lie groups are manifolds with a group structure. SO(3) is the group of rotations in 3D space. SE(3) is the group of rigid transformations (rotation plus translation). To work with them, special tools are used:
- Lie Algebra – the tangent space to the group at the identity element. This is a linear space where you can add and multiply by numbers. For SO(3), the Lie algebra so(3) consists of skew-symmetric 3×3 matrices describing instantaneous rotations (angular velocities).
- Exponential Map – translates elements of the Lie algebra into the group. Roughly speaking, it turns an angular velocity into a rotation.
- Logarithmic Map – the inverse operation, translates a group element back into the Lie algebra.
The error covariance P on Lie groups is defined in the tangent space. For example, if the true orientation is R and the estimate is R̂, then the error e = log(R̂ᵀR) is an element of the Lie algebra so(3), which can be identified with a 3-vector. The covariance P is then a 3×3 matrix describing the uncertainty of this error vector.
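The exp/log machinery on SO(3) is standard; here is a self-contained sketch (Rodrigues' formula for exp, its inverse for log) showing how the error vector e = log(R̂ᵀR) is extracted. The specific rotations are made-up examples:

```python
import numpy as np

def hat(w):
    """so(3) hat map: 3-vector -> skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Exponential map so(3) -> SO(3) via Rodrigues' formula."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

def log_so3(R):
    """Logarithmic map SO(3) -> so(3) as a 3-vector (valid away from pi)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    W = (R - R.T) * theta / (2.0 * np.sin(theta))
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

# True and estimated orientation differ by a small rotation about z.
R_true = exp_so3(np.array([0.0, 0.0, 0.3]))    # 0.3 rad about z
R_hat = exp_so3(np.array([0.0, 0.0, 0.25]))    # estimate off by 0.05 rad
e = log_so3(R_hat.T @ R_true)                  # error lives in so(3)
print(e)  # approximately [0, 0, 0.05]
```

The error e is an ordinary 3-vector, so an ordinary 3×3 covariance matrix over it makes sense even though the orientations themselves live on a curved manifold.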
Barrier functions on Lie groups are formulated taking geometry into account. For instance, for a quadcopter, a barrier can be defined based on how much its orientation differs from the desired one. Or based on keeping a certain vector (say, the thrust vector) from going outside a cone. The main thing is that the function must be smooth and compatible with the group structure.
Controller Synthesis: Quadratic Programming to the Rescue
How do we find a control input in practice that satisfies the safety condition? We use Quadratic Programming (QP), an optimization method for problems of the form "minimize a quadratic function subject to linear constraints". It's fast and efficient, solvable in milliseconds even on the robot's onboard computer.
The idea is this. Let's say we have a nominal controller u₀ – it does something useful, like guiding the robot to a target. But u₀ might violate safety. We look for a control u that differs minimally from u₀ but satisfies the SCBF condition:
Minimize: ‖u − u₀‖²
Subject to: L_f b_est(x̂, P, u) + ½L_g² b_est(x̂, P) + ∂b_est/∂P · Ṗ ≥ −α(b_est(x̂, P))
Here L_f and L_g are the Itô drift and diffusion operators, Ṗ is the rate of covariance change (from the Kalman filter), and α is a class K function (e.g., α(x) = c·x). For control-affine dynamics this is a linear constraint on u, so the QP can be solved analytically or numerically.
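With a single CBF constraint, a QP of this shape reduces to a closed-form projection of the nominal control onto a half-space, so no solver library is needed. The constraint vector and nominal control below are made-up illustrations:

```python
import numpy as np

def safety_filter(u_nom, a, b):
    """Solve min ||u - u_nom||^2 subject to a @ u >= b (one constraint).

    With a single linear constraint, the optimum is the projection of
    u_nom onto the half-space {u : a @ u >= b}.
    """
    slack = a @ u_nom - b
    if slack >= 0:
        return u_nom                        # nominal control is already safe
    return u_nom - slack * a / (a @ a)      # minimal correction toward safety

u_nom = np.array([1.0, 0.0])                # nominal: full speed ahead
a = np.array([-1.0, 0.0])                   # constraint: -u_x >= -0.2,
b = -0.2                                    # i.e. forward speed capped at 0.2

u_safe = safety_filter(u_nom, a, b)
print(u_safe)  # [0.2, 0.0] -- slowed down just enough
```

The filter leaves a safe nominal control untouched and otherwise applies the smallest possible correction, which is exactly the "follow the plan, intervene only when needed" behavior described above.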
Result: a controller that tries to follow the nominal plan, but when necessary, corrects the control to preserve safety. The closer to the boundary – the stronger the correction. The higher the uncertainty – the more conservative the behavior.
Experiment 1: Quadcopter in a Cylindrical Cage
Imagine a quadcopter that must fly inside an invisible cylinder – say, with a radius of 5 meters and a height from 0 to 10 meters. Its GPS is noisy, with an accuracy of ±2 meters, and the IMU (Inertial Measurement Unit) accumulates drift. The quadcopter uses an EKF to estimate position and orientation. Its state is an element of SE(3) (position + orientation).
The barrier function consists of two parts:
- For height Z: b_z(z) = (z − z_min)(z_max − z). Positive inside the range [z_min, z_max], negative outside.
- For radius: b_r(x, y) = R² − (x² + y²). Positive inside a circle of radius R, negative outside.
The modified barrier function: b_mod(x̂, P) = min(b_z(ẑ), b_r(x̂, ŷ)) − c·√(tr(P_pos)), where P_pos is the covariance submatrix corresponding to position (3×3).
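A sketch of the cage barriers exactly as defined above; the constant c and the covariance values are illustrative:

```python
import numpy as np

Z_MIN, Z_MAX, R_CYL = 0.0, 10.0, 5.0   # cylinder from the experiment
C = 2.0                                 # conservativeness (illustrative)

def b_z(z):
    """Positive inside [Z_MIN, Z_MAX], negative outside."""
    return (z - Z_MIN) * (Z_MAX - z)

def b_r(x, y):
    """Positive inside a circle of radius R_CYL, negative outside."""
    return R_CYL**2 - (x**2 + y**2)

def b_mod(p_hat, P_pos):
    """Combined barrier shrunk by the position-uncertainty margin."""
    x, y, z = p_hat
    return min(b_z(z), b_r(x, y)) - C * np.sqrt(np.trace(P_pos))

p_hat = np.array([3.0, 0.0, 5.0])   # estimated position inside the cage
P_pos = 0.04 * np.eye(3)            # roughly 20 cm std dev per axis

print(b_mod(p_hat, P_pos))          # positive: considered safe
```

The same estimate near the wall with a GPS-loss-sized covariance would push b_mod negative, which is what triggers the cautious behavior reported in the simulations.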
Simulation results showed: without the proposed approach, the quadcopter regularly flew out of the cylinder's bounds – about 15-20% of trajectories violated safety. With the uncertainty-aware SCBF, violations occurred in less than 1% of cases, matching the preset probability level δ = 0.01. When uncertainty grew (for example, upon losing the GPS signal), the controller automatically became more cautious, slowing movement and staying further from the boundaries.
Experiment 2: Robotic Arm Avoids Collision
A two-link robotic arm (like in old assembly lines) must move to a target but not crash into an obstacle – say, a table or a box. The arm's state is two angles θ₁ and θ₂, the state space is a torus T² (imagine the surface of a donut). Angles are measured with noise, using EKF for estimation.
Barrier function: distance from the end of the arm to the obstacle. If d(θ) is distance, then b(θ) = d(θ) − d_safe, where d_safe is the minimum allowable distance. Modified version: b_mod(θ̂, P) = d(θ̂) − d_safe − c·√(tr(P)).
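A sketch of the arm's barrier, assuming unit link lengths, a point obstacle, and planar forward kinematics (all illustrative choices not specified in the article):

```python
import numpy as np

L1, L2 = 1.0, 1.0                     # link lengths (illustrative)
OBSTACLE = np.array([1.5, 0.5])       # point obstacle in the workspace
D_SAFE = 0.3                          # minimum allowed clearance
C = 1.0                               # conservativeness (illustrative)

def end_effector(theta):
    """Planar forward kinematics of a two-link arm."""
    t1, t2 = theta
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def b_mod(theta_hat, P):
    """Clearance barrier shrunk by the angle-uncertainty margin."""
    d = np.linalg.norm(end_effector(theta_hat) - OBSTACLE)
    return d - D_SAFE - C * np.sqrt(np.trace(P))

theta_hat = np.array([np.pi / 2, 0.0])  # arm pointing straight up
P = 1e-4 * np.eye(2)                    # small angle uncertainty

print(b_mod(theta_hat, P))              # positive: comfortably clear
```

Because the uncertainty lives in joint angles while the obstacle lives in the workspace, a fuller treatment would map P through the kinematic Jacobian; the trace-based margin here is the simpler form used in the text.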
Without accounting for uncertainty, the nominal controller (simply heading to the target) led to a collision in 25% of simulations. With the proposed SCBF, no collisions occurred: the arm automatically slows down and skirts the obstacle with a margin proportional to the uncertainty of the angle estimates.
Why This Is Important: From Theory to the Real World
Classical control methods often assume ideal information. It's like designing a bridge assuming wind doesn't exist. In the lab it works great; in reality, catastrophe. The proposed approach brings theory closer to practice by explicitly modeling uncertainty and adapting to it.
Key advantages:
- Honesty about uncertainty: We don't ignore sensor noise, but build it into the safety system.
- Adaptivity: The controller automatically becomes more conservative when uncertainty rises and more aggressive when it drops.
- Probabilistic guarantees: We don't promise the impossible ("the robot will never crash"), but give an honest assessment ("collision probability is less than 1%").
- Versatility: Works both in Euclidean space (position) and on Lie groups (orientation), which is critical for complex robots.
- Computational efficiency: QP problems are solved quickly; the algorithm works in real time.
Limitations and Future Directions
Not everything is perfect. The approach relies on the Kalman filter (or EKF), which assumes Gaussian distributions. If the real error distribution is far from Gaussian (e.g., multimodal – the robot might be in one of two places but doesn't know which), the guarantees weaken. The solution is to use more advanced filters: multi-hypothesis, particle filters, or variational methods.
Another problem: what if the dynamics model is inaccurate? Say, the quadcopter is carrying a load of unknown mass. This adds parametric uncertainty on top of stochastic uncertainty. A promising direction is combining SCBF with robust control or adaptive control methods that estimate unknown parameters on the fly.
Another interesting challenge is multi-agent systems. Imagine a swarm of drones: each has its own noisy position estimate, and they need to avoid colliding with each other. Uncertainty becomes collective, and safety guarantees must account for correlations between estimates of different agents. This opens the path to distributed SCBF, where each agent locally synthesizes control, exchanging information about its uncertainty with neighbors.
Conclusion: Safety as a Continuous Dialogue with Uncertainty
Ultimately, the safety of autonomous systems isn't a rigid wall that you either broke through or didn't. It is a continuous process of risk assessment and behavior adaptation. Stochastic Control Barrier Functions with state estimation are a way to formalize this dialogue between the robot and the uncertainty of the world around it.
The robot doesn't pretend to be all-knowing. It honestly admits: "I'm not sure where I am; I could be off by up to two meters." And based on that, it adapts its behavior. The thicker the fog of uncertainty, the more cautious the steps. The clearer the picture, the freer the movements. It's not magic; it's mathematics that has finally learned to work with the real, imperfect world.
And perhaps this is the most important part: we stop demanding the impossible (absolute knowledge) from robots and start giving them tools to work with what they actually have – probabilistic estimates, noisy sensors, approximate models. In this sense, stochastic barriers are not just a technical trick. They are a philosophy of control that recognizes uncertainty not as an enemy, but as an inevitable companion one can and must learn to live with.
Code really can be poetry. Especially when it helps robots dance on the edge between risk and safety without falling into the abyss.