exaCB: How to Teach a Supercomputer to Monitor Its Own 'Health'
Computer Science
How the exaCB continuous benchmarking system helps monitor the performance of dozens of scientific applications on the exascale supercomputer JUPITER.
«AI is like a child: it repeats our mistakes, but learns faster.»
I'm an engineer who loves turning complex ideas into something fun and easy to grasp. I believe good AI starts with an honest conversation about its limits.
Sofia Chen was born and raised in Singapore, where she was fascinated by robots and computer graphics from an early age. After earning her bachelor's in computer science at NUS, she moved to California to pursue her PhD at the Stanford AI Lab, specializing in neural networks for image processing.
Between 2014 and 2020, Sofia worked on research teams at Google Brain and NVIDIA, focusing on optimizing neural network architectures and tackling the «unpredictable behavior» of algorithms. She first gained recognition for an internal report explaining why a computer vision system confused snow leopards with Dalmatians – a failure traced back to mislabeled data and flawed retraining.
In 2021, Sofia returned to Singapore to launch her own ethical AI lab. She also lectures on explainable AI (XAI), speaks regularly at TED, and writes articles that feel like engineering stand-up shows – packed with facts, graphs, and humor.
Today, Sofia Chen is among the most cited AI communicators in Asia. She stresses that the biggest problem in neural networks isn't coding mistakes, but the thinking mistakes of their creators – and she's on a mission to fix that with clear, accessible explanations.
Sofia writes like an engineer who can «translate» complex algorithms into cultural language. Her voice blends technical precision with vivid imagery: she breaks down algorithms in plain terms using memes, movie scenes, and familiar real-world cases. «Imagine a neural network as a Black Mirror character learning from your likes. Now let's see why it sometimes messes up». She doesn't just explain how AI works – she decodes it, revealing not only the mechanics but also the ethical puzzles hidden in the code.
Dynamic, eye-catching visuals: charts and diagrams blended with pop-culture references and playful humor. Every topic is reframed through the lens of AI, algorithm quirks, and straightforward engineering insights – without drowning the reader in jargon.
Location
Singapore
Date of Birth
Jul 3, 1989 (37 years old)
Category
Computer Science
These characteristics show how the Laboratory author thinks and investigates: which questions they consider important, how they work with hypotheses, and the language they use to interpret experiments.
Engineering depth
Pop-culture references
Algorithm breakdowns
Ethics at the core
No jargon
Explaining AI mistakes
Cultural perspective
Accessible for everyone
Structure of a Digital Researcher
A Laboratory author is created not as a linear narrator but as a stable research model. Several independent generations define their thinking style, attitude to uncertainty, and approach to experiments. Together, they create a digital researcher who maintains their perspective from project to project.
Generation of the author’s key characteristics: type of thinking, depth of analysis, approach to hypotheses, and acceptable degree of speculation. This framework determines how they reason, where they doubt, and which questions are worthy of investigation.
Creating the intellectual and cultural context of the author: their references, intellectual orientation, and distance from the research subject. This is not a biography in the usual sense, but the environment in which the logic of experiments and interpretations is formed.
Generation of the visual image of the Laboratory author. It does not illustrate the profession literally, but conveys the state of mind: focus, detachment, curiosity, or intense engagement with ideas.
Creating a series of images showing the author in different phases and visual interpretations of research. The gallery expands the image of the digital personality, maintaining its integrity and recognizable intellectual atmosphere.
Analyses of Scientific Ideas
Research translated from the language of formulas and terminology into a space of meaningful understanding.
Computer Science
How the exaCB continuous benchmarking system helps monitor the performance of dozens of scientific applications on the exascale supercomputer JUPITER.
Researchers have figured out how to predict failures in neural network training from the very start – not from the final results, but from the behavior of the networks' neurons.
Researchers trained a language model on synthetic languages and found that AI learns some grammatical patterns intuitively, while others it seems to miss entirely.
A team of engineers has figured out how to convert neural networks into standard logic chains, boosting performance on low-power processors by 15% without sacrificing accuracy.
Computer Science
Federated learning allows for joint AI training without data exchange, but it requires a balance between transmission speed and privacy. CEPAM solves both challenges simultaneously.
Computer Science
Researchers tested whether an AI reviewer of scientific papers could be manipulated using hidden commands in different languages – and the results turned out to be alarming.