Newswise — NEWPORT NEWS, VA – Keeping high-power particle accelerators at peak performance requires advanced and precise control systems. For example, the primary research machine at the U.S. Department of Energy’s Thomas Jefferson National Accelerator Facility features hundreds of fine-tuned components that accelerate electrons to 99.999% of the speed of light.
The electrons get this boost from radiofrequency waves within a series of resonant structures known as cavities, which become superconducting at temperatures colder than deep space. These cavities form the backbone of Jefferson Lab’s Continuous Electron Beam Accelerator Facility (CEBAF), a unique DOE Office of Science user facility supporting the research of more than 1,650 nuclear physicists from around the globe. CEBAF also holds the distinction of being the world’s first large-scale installation and application of this superconducting radiofrequency (SRF) technology.
But changes in a cavity’s properties can sometimes lead to increased heat loads, resulting in the sudden loss of its zero-resistance state. When that happens, the system’s self-protection mechanism abruptly shuts it down. Such incidents are called system trips.
Accelerator trips cause delays for experiments. To address this, a team of data scientists and accelerator experts at Jefferson Lab, working in collaboration with university and DOE national laboratory partners, developed novel machine learning (ML) techniques to transparently model the physics of CEBAF. Their goal is to provide operators with a reliable tool to anticipate how each cavity will behave during accelerator operation and tuning, helping them stay steps ahead of potential faults and minimizing disruptions to experiments.
“CEBAF is a fast-moving system,” said Kishan Rajput, a data scientist at Jefferson Lab. “It’s very dynamic, and it operates on the scale of microseconds. So, we need very fast models, mostly neural networks, that run on fast computers.”
Many Pieces in Play
Each of CEBAF’s 418 cavities can exhibit distinct thermal behaviors and trip susceptibilities, making it challenging to construct a unified model of the entire system.
To address this, researchers conducted two complementary studies evaluating the performance of various ML models in simulating a representative section of CEBAF comprising approximately 200 SRF cavities. Building on these results, the authors introduced a framework that captures the behavior of CEBAF within this roughly 200-dimensional space using explainable ML models constrained by underlying physics principles.
The team focused on an ML technique known as reinforcement learning (RL). RL can be likened to teaching a computer how to play chess.
“Instead of giving an ML agent a bunch of chess games that humans played, we give it the rules of chess,” Jefferson Lab data scientist Armen Kasparian said. “We tell it to play the game millions of times and learn from those experiences.”
Deep differentiable reinforcement learning (DDRL) is a version that back-propagates errors to optimize its neural network. Without differentiability, sampling a high-dimensional space is like playing chess blindfolded. But with DDRL, the models’ training is significantly accelerated, allowing scientists to quickly solve complex problems.
“We modified traditional RL to use the power of differentiability, and it works wonders,” Rajput said. “The short story is that DDRL beat all the other algorithms.”
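The advantage of differentiability can be illustrated with a toy example. Everything below is hypothetical, not the team's actual CEBAF model: a made-up reward over a roughly 200-dimensional space (matching the scale of the CEBAF study) whose gradient is known, so an optimizer can follow the back-propagated error signal directly instead of sampling blindly.

```python
import numpy as np

# Toy differentiable "environment": reward peaks when the action vector
# matches a hidden optimal setting. Purely illustrative values.
rng = np.random.default_rng(0)
optimum = rng.uniform(-1, 1, size=200)   # ~200 dimensions, as in the CEBAF study

def reward(a):
    return -np.sum((a - optimum) ** 2)

def reward_grad(a):
    # Analytic gradient: the "differentiability" that DDRL exploits.
    return -2.0 * (a - optimum)

# Gradient-based search: follow the error signal directly.
a = np.zeros(200)
for _ in range(100):
    a = a + 0.1 * reward_grad(a)

# Gradient-free baseline: randomly sample the same space.
samples = rng.uniform(-1, 1, size=(100, 200))
best = max(samples, key=reward)

print(reward(a), reward(best))
```

In high dimensions the random search barely improves, while the gradient-based search converges in a handful of steps, which is the intuition behind "playing chess blindfolded" versus not.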
Controlling the Board
CEBAF’s cavities are crafted from ultrapure niobium. They are housed in large steel structures known as cryomodules, which maintain an ultrahigh vacuum and are immersed in liquid helium to cool the system to approximately 2 kelvin, just a few degrees Fahrenheit above absolute zero.
The supercooled conditions make SRF cavities extremely sensitive to temperature. Slight changes in their energy can lead to increased local heat loads and eventual system trips. When operating hundreds of such cavities, there is an optimal tradeoff between the total heat generated and the number of trips over a given time period. This tradeoff can be visualized on a graph, where the relationship between these two competing objectives forms what is known as a Pareto front.
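The Pareto-front idea can be sketched in a few lines. The operating points below are invented for illustration: each trades total heat load against expected trips, and the front consists of the points that no other point beats on both objectives at once.

```python
import numpy as np

# Invented operating points: total heat load (W) versus expected trips
# per week, with higher heat loosely buying fewer trips.
rng = np.random.default_rng(1)
heat = rng.uniform(100, 500, size=50)
trips = 2000.0 / heat + rng.uniform(0.0, 2.0, size=50)

def pareto_front(points):
    """Return the non-dominated points (lower is better in both objectives)."""
    return sorted(p for p in points
                  if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                             for q in points))

points = list(zip(heat, trips))
front = pareto_front(points)
print(front)
```

Plotting `front` would trace the curve described above: moving along it, any reduction in trips must be paid for with additional heat.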
The team compared the mathematical form of accelerator control constraints to the established physics of CEBAF’s northern section, which houses about half of the accelerator’s SRF cavities. They found that combining explainability with physics-based constraints allowed the RL algorithms to reliably model the high-dimensional environment.
Explainability is a major topic in AI discussions. Essentially turning the “black box” of AI into a gray one, explainability describes how ML models make decisions. It also goes a long way toward increasing transparency and trust in AI.
“If you ask a neural network a question, it’s hard to say why it produces a specific answer,” said Jonathan Colen, a research assistant professor at Old Dominion University and collaborator at Jefferson Lab. “To gain more insight, we have a model build a mathematical equation for what it sees and use that to influence its prediction. This equation is more transparent than a neural network.
“If it agrees with what we know about the underlying physics, then we might trust the model’s subsequent decision more. But if the equation is ‘wrong,’ that’s a red flag to re-evaluate the decision, the model, or even the physics of the problem.”
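One common way to have a model "build a mathematical equation," sketched very loosely here, is to fit coefficients over a library of candidate terms and read off the result, which can then be checked against known physics. The governing law and term library below are invented for illustration; this shows the general idea, not the team's actual method.

```python
import numpy as np

# Invented "physics" law generating the data: y = 2*x^2 + 0.5*x.
x = np.linspace(0.1, 2.0, 50)
y = 2.0 * x**2 + 0.5 * x

# Library of candidate terms the model may combine.
library = np.column_stack([x, x**2, np.exp(-x)])
names = ["x", "x^2", "exp(-x)"]

# Least-squares fit picks out the terms actually present in the data.
coeffs, *_ = np.linalg.lstsq(library, y, rcond=None)
equation = " + ".join(f"{c:.2f}*{n}" for c, n in zip(coeffs, names)
                      if abs(c) > 1e-6)
print("recovered:", equation)
```

The recovered equation keeps only the `x` and `x^2` terms, matching the law that generated the data. In the red-flag case Colen describes, a spurious term surviving the fit, or a coefficient at odds with the known physics, would prompt a re-evaluation.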
The RL algorithms performed well in this unique, high-parameter game of chess. The approach could one day allow CEBAF operators to bypass time-consuming tasks and stay ahead of system trips by comparing an RL model’s equations to an accelerator’s governing physics.
“This is a validation of the hypothesis and conclusion we put forward,” Colen said. “That is, differentiability is the secret sauce that allows these RL algorithms to solve this really challenging, high-dimensional problem.”
Part of this research was supported by the DOE Office of Science’s Advanced Scientific Computing Research program and Office of Nuclear Physics. The Hampton Roads Biomedical Research Consortium, through the Joint Institute on Advanced Computing for Environmental Studies (ACES) between Jefferson Lab and ODU, provided additional funding. DOE’s SLAC National Accelerator Laboratory, through the DOE’s Laboratory Directed Research and Development program, also contributed.
Further Reading
Harnessing the power of gradient-based simulations for multi-objective optimization in particle accelerators
Explainable physics-based constraints on reinforcement learning for accelerator optimization
Advanced Scientific Computing Research
Nuclear Physics (NP) Homepage
Jefferson Lab, ODU Launch Joint Institute on Advanced Computing for Environmental Studies
SLAC National Accelerator Laboratory
Contact: Matt Cahill, Jefferson Lab Communications Office, [email protected]
Jefferson Science Associates, LLC, manages and operates the Thomas Jefferson National Accelerator Facility, or Jefferson Lab, for the U.S. Department of Energy’s Office of Science. JSA is a wholly owned subsidiary of the Southeastern Universities Research Association, Inc. (SURA).
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science

