Author: Denis Avetisyan
A new machine learning approach leverages Bayesian Neural Networks to accurately forecast neutron-induced reaction probabilities and, crucially, quantify the uncertainty in those predictions.

This work details BNN-I6, a Bayesian Neural Network for predicting (n,p) reaction cross sections with robust uncertainty quantification, improving the reliability of nuclear data libraries.
Accurate prediction of nuclear reaction rates is often hampered by sparse experimental data and inherent uncertainties in theoretical models. This work, titled ‘Bayesian Learning of (n,p) Reaction Cross Sections with Quantified Uncertainties’, introduces a data-driven framework, a Bayesian Neural Network (BNN) called BNN-I6, to address this challenge by predicting neutron-induced (n,p) reaction cross sections while simultaneously quantifying predictive uncertainty. Trained on data from the ENDF/B-VIII.1 library and incorporating six key input features, BNN-I6 demonstrates performance competitive with the TENDL-2023 library, alongside reliable uncertainty estimates. Could this approach unlock improved nuclear data evaluations and enable more robust predictions in data-scarce scenarios for applications ranging from reactor design to nuclear astrophysics?
Unveiling Nuclear Interactions: The Foundation of Scientific Understanding
The ability to accurately predict how nuclei interact – specifically, the rates at which nuclear reactions occur – underpins a surprisingly broad spectrum of scientific and technological endeavors. In the realm of nuclear energy, precise reaction cross sections are essential for designing safer, more efficient reactors and for accurately modeling the behavior of nuclear materials. Beyond terrestrial applications, astrophysics relies heavily on these predictions to understand the creation of elements within stars and supernovae, and to model the energy generation processes that power these celestial bodies. Furthermore, fields like medical isotope production and national security also depend on a reliable understanding of nuclear reaction probabilities, highlighting the pervasive need for improved predictive capabilities in this fundamental area of physics. The accuracy of these predictions directly impacts the reliability of simulations and the validity of conclusions drawn from them, making it a cornerstone of progress in multiple disciplines.
The Hauser-Feshbach model has long been a cornerstone of predicting the probabilities, or cross sections, of nuclear reactions, but its inherent complexity introduces substantial challenges. This statistical approach requires numerous nuclear input parameters, including level densities, gamma-ray strengths, and optical-model potentials, each carrying its own uncertainty. Because these parameters are often derived from limited experimental data or theoretical estimations, the resulting predictions can deviate significantly from actual reaction rates. This sensitivity to input parameters is particularly problematic when simulating reactions involving unstable nuclei or at energies where experimental measurements are scarce, limiting the reliability of predictions used in fields ranging from reactor design to understanding the creation of elements in stars. Consequently, refining these parameters and developing more robust theoretical frameworks remain critical areas of nuclear physics research.
Despite decades of effort, current evaluated nuclear data libraries represent an incomplete picture of the nuclear landscape. These resources, painstakingly compiled from experiments and theoretical calculations, are essential for modeling nuclear processes; however, they often lack the detail needed for high-fidelity simulations. Specifically, data is frequently missing for many isotopes, particularly those far from stability, and existing data tends to be averaged over energy ranges that obscure crucial resonance structures. This limited resolution hinders accurate predictions in areas like stellar nucleosynthesis, reactor physics, and advanced transmutation studies, where precise knowledge of reaction probabilities is paramount. Consequently, researchers continually strive to expand these libraries and improve their energy resolution, utilizing novel experimental techniques and sophisticated theoretical models to address these critical data gaps and unlock more reliable simulations.

A New Lens on Nuclear Data: Harnessing the Power of Machine Learning
Nuclear reaction rates are determined by intricate interactions between nucleons, resulting in highly nonlinear relationships that are difficult to model using traditional physics-based approaches. Machine learning algorithms, particularly neural networks, excel at approximating these complex functions without explicit formulation of underlying physical laws. This capability stems from their ability to identify and represent high-dimensional, nonlinear correlations within training datasets. By learning directly from experimental data and/or results of computationally intensive simulations, machine learning models can effectively capture the nuances of nuclear behavior and provide accurate predictions of reaction rates across a broad energy range and for various nuclear species. The resulting models are not intended to replace physics-based models, but rather to serve as powerful complements, especially in regimes where theoretical calculations are challenging or computationally prohibitive.
The BNN-I6 model predicts nuclear reaction cross sections using a multivariate input scheme comprising six features. These inputs are the neutron number (N), proton number (Z), an energy offset parameter, a pairing energy term representing the effect of nucleon pairing, the isospin asymmetry (N-Z)/(N+Z), and the natural logarithm of the cross section itself. This feature set allows the model to learn complex relationships between these variables and the target cross section, enabling prediction across a range of nuclei and energies. The inclusion of the logarithmic cross section as an input serves as a regularization technique and aids in stabilizing the learning process, particularly in regions with limited data.
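The six-feature input scheme described above can be sketched as a small helper. This is a hypothetical illustration, not the paper's code: the exact definitions of the energy-offset and pairing terms, and the names used here, are assumptions; only the isospin-asymmetry formula (N-Z)/(N+Z) is given explicitly in the text.

```python
import numpy as np

def bnn_i6_features(N, Z, energy_offset, pairing_energy, log_sigma):
    """Assemble a six-feature input vector of the kind described for BNN-I6.

    Hypothetical sketch: the energy-offset and pairing-energy terms are
    taken as precomputed scalars; their precise definitions follow the
    paper, not this illustration.
    """
    asymmetry = (N - Z) / (N + Z)  # isospin asymmetry, as stated in the text
    return np.array(
        [N, Z, energy_offset, pairing_energy, asymmetry, log_sigma],
        dtype=float,
    )
```

For example, a nucleus with N = 30 and Z = 26 yields an isospin asymmetry of 4/56 ≈ 0.071 in the fifth slot of the vector.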
Machine learning models, when integrated with established nuclear physics methodologies, provide a complementary approach to predicting nuclear reaction rates. Traditional models, often based on theoretical frameworks and limited experimental data, can exhibit inaccuracies when extrapolating to unexplored energy ranges or for isotopes with few measurements. Data-driven machine learning algorithms excel in identifying patterns and relationships within existing datasets, enabling more reliable predictions in data-sparse regions. This synergistic combination leverages the strengths of both approaches: the physics-based insights of traditional models and the pattern recognition capabilities of machine learning, resulting in improved overall accuracy and enhanced predictive power for nuclear data.

Validating the Approach: Benchmarking the BNN-I6 Model’s Performance
The BNN-I6 model’s training and validation utilized data from two prominent evaluated nuclear data libraries: TENDL-2023 and ENDF/B-VIII.1. Employing data from both sources allowed for cross-validation and assessment of model robustness against variations in nuclear data evaluations. TENDL-2023 is a modern, continuously updated library based on Bayesian inference, while ENDF/B-VIII.1 represents a widely used, established standard. This dual-source approach ensured a comprehensive evaluation of the model’s predictive capability across different data representations and minimized potential biases inherent to a single library.
The BNN-I6 model’s predictive accuracy for (n,p) reaction cross sections was evaluated through comprehensive testing across a diverse set of energies and target nuclei. Quantitative analysis demonstrated a root-mean-square (r.m.s.) error of less than 1 when calculated on a logarithmic scale; this metric indicates a high degree of agreement between the model’s predictions and established experimental data. This level of accuracy is competitive with existing state-of-the-art methods for (n,p) cross section prediction and confirms the model’s reliability for applications in nuclear physics and related fields.
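The log-scale r.m.s. error quoted above has a simple form: compute the residuals between predicted and reference cross sections after taking logarithms, then take the root of their mean square. A minimal sketch (assuming natural logarithms; the paper may use a different base or normalization):

```python
import numpy as np

def log_rmse(sigma_pred, sigma_true):
    """r.m.s. error computed on the log scale of the cross sections.

    An r.m.s. of 1 on the natural-log scale corresponds to predictions
    off by roughly a factor of e on average.
    """
    residuals = np.log(sigma_pred) - np.log(sigma_true)
    return np.sqrt(np.mean(residuals ** 2))
```

With identical arrays the error is 0; scaling every prediction by a constant factor e gives an r.m.s. of exactly 1 on this scale, which is why a value below 1 signals sub-factor-of-e agreement.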
Uncertainty quantification within the BNN-I6 model is achieved through the Bayesian Neural Network framework, which doesn’t provide a single point prediction but rather a probability distribution over possible outcomes. This distribution is characterized by its mean and variance, allowing for an assessment of the model’s confidence in its predictions; higher variance indicates greater uncertainty. The reported uncertainty estimates are crucial for applications where reliable error bounds are necessary, such as nuclear data assimilation and sensitivity analysis, and enable informed decision-making regarding the suitability of the model’s predictions for specific scenarios. This probabilistic approach distinguishes the BNN-I6 from deterministic models and provides a quantifiable measure of its predictive reliability.
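The mechanism described above, turning a posterior over network weights into a predictive mean and variance, can be sketched by Monte Carlo: draw weight samples, run the network forward under each, and summarize the resulting predictions. This is a generic BNN illustration, not the paper's implementation; the `forward` callable and the weight samples are placeholders.

```python
import numpy as np

def predictive_stats(weight_samples, forward, x):
    """Predictive mean and variance from posterior weight samples.

    `forward(w, x)` evaluates the network with weights `w` on input `x`.
    Averaging over posterior samples yields the predictive distribution's
    mean; the spread across samples yields its variance, the model's
    stated measure of confidence.
    """
    preds = np.array([forward(w, x) for w in weight_samples])
    return preds.mean(axis=0), preds.var(axis=0)
```

For a toy "network" `forward = lambda w, x: w * x` with weight samples {1, 2, 3} and x = 2, the predictive mean is 4 and the variance 8/3; a wider weight posterior would directly widen the predicted error bounds.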

Dissecting the Model’s Reasoning: Unveiling Feature Importance
The BNN-I6 model’s predictive power is demonstrably linked to specific input features, as revealed by a recent SHAP (SHapley Additive exPlanations) analysis. This methodology pinpointed energy offset, neutron number, and logarithmic cross section as the most influential parameters driving the model’s outputs. Essentially, these three features contribute disproportionately to the final prediction; changes in these values elicit the most significant response from the BNN-I6. The analysis doesn’t just identify these features, but quantifies their impact, offering a detailed understanding of how each parameter influences the model’s decision-making process. This suggests the model isn’t simply memorizing data, but is instead leveraging physically meaningful relationships inherent in these core nuclear properties to arrive at its predictions.
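SHAP attribution itself requires the `shap` library and a trained model, but the underlying idea, scoring each input feature by how much it drives the output, can be conveyed with a lightweight stand-in: permutation importance. This sketch is not Shapley values and is not from the paper; it simply illustrates the feature-ranking concept the analysis above relies on.

```python
import numpy as np

def permutation_importance(predict, X, seed=0):
    """Score each feature by how much scrambling it perturbs the output.

    A crude stand-in for SHAP-style attribution: permuting a column
    breaks its link to the prediction, so influential features produce
    large mean absolute changes and irrelevant ones produce none.
    """
    rng = np.random.default_rng(seed)
    base = predict(X)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # scramble one feature
        scores.append(np.mean(np.abs(predict(Xp) - base)))
    return np.array(scores)
```

For a toy predictor that depends only on its first feature, the first score dominates and the others are zero, mirroring how the SHAP analysis singles out energy offset, neutron number, and logarithmic cross section as the dominant inputs.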
The identification of energy offset, neutron number, and logarithmic cross section as key drivers in the BNN-I6 model’s predictions isn’t merely a mathematical observation, but a validation of established nuclear physics principles. These parameters, fundamentally linked to nuclear interactions and reaction probabilities, demonstrably exert the strongest influence on the model’s output, confirming its alignment with physical reality. This understanding directs future refinement efforts; developers can prioritize optimizing the model’s sensitivity to these critical features, potentially through increased data resolution or tailored algorithmic adjustments. Furthermore, recognizing these dominant parameters allows for a more focused approach to data acquisition, concentrating resources on improving the accuracy of measurements for these key inputs and, consequently, enhancing the overall predictive power and reliability of the model.
The ability to discern which input features most strongly influence a model’s output is critical for establishing confidence in its predictions, particularly within scientific domains. A model isn’t simply a ‘black box’ when feature importance is understood; instead, its reasoning becomes partially transparent, allowing researchers to evaluate whether the model is basing decisions on physically meaningful parameters. This nuanced interpretation moves beyond simply accepting a prediction to understanding why that prediction was made, bolstering the model’s trustworthiness for applications like nuclear data analysis and reactor physics calculations. Ultimately, acknowledging feature importance transforms a predictive tool into an investigatory one, fostering deeper insight and facilitating more informed scientific inquiry.

Looking Ahead: Expanding the Impact and Scope of Nuclear Data Prediction
The versatility of the BNN-I6 model extends beyond its current capabilities, offering a pathway to predict reaction cross sections for a diverse array of nuclear processes and target materials. Currently focused on specific reactions, the model’s underlying architecture is adaptable, allowing researchers to train it on datasets representing different nuclear interactions – such as neutron capture, alpha emission, or fission – and different isotopic compositions. This adaptability is crucial for addressing outstanding questions in nuclear astrophysics, where accurate cross sections are needed to model stellar nucleosynthesis, and in nuclear engineering, where reliable data informs reactor design and safety assessments. By expanding the training data and refining the model’s parameters, the BNN-I6 has the potential to become a broadly applicable tool for generating crucial nuclear data, ultimately reducing uncertainties in a wide range of scientific and technological applications.
The predictive power of the BNN-I6 model stands to gain considerably through the integration of established nuclear physics concepts as additional input features. Specifically, incorporating the optical model potential – which describes the interaction between a projectile and a target nucleus – would refine the model’s understanding of incoming particle behavior. Similarly, including level density, a measure of the number of quantum states available within a nucleus, and the gamma strength function, which governs the probability of gamma-ray emission, promises a more complete depiction of nuclear structure and decay pathways. These enhancements aren’t merely about adding complexity; they represent a strategic fusion of data-driven learning with fundamental physical principles, potentially leading to significantly improved accuracy in predicting nuclear reaction outcomes and broadening the model’s applicability across diverse nuclear systems.
The development of this data-driven framework represents a significant step towards resolving the longstanding need for comprehensive and accurate nuclear data. Traditionally, generating such data has relied heavily on complex theoretical models and painstaking experimental measurements – processes that are often resource-intensive and subject to considerable uncertainty. This new approach, however, leverages the power of machine learning to extrapolate from existing data, effectively creating a predictive engine for nuclear properties. The potential applications are far-reaching, impacting fields such as nuclear astrophysics, reactor design, medical isotope production, and materials science. By providing a reliable source of nuclear information, this framework promises to accelerate progress in these diverse areas and enable more informed decision-making across a broad spectrum of scientific and engineering endeavors.

The pursuit of accurate prediction, as demonstrated by BNN-I6’s modeling of (n,p) reaction cross sections, reveals an inherent dependency on understanding the underlying structural relationships within the data. Each prediction isn’t merely a numerical output, but a distillation of complex interactions. This echoes Bertrand Russell’s observation: “To be happy, one must find something to do.” In this context, ‘something to do’ is to rigorously explore the patterns hidden within nuclear data. The model’s capacity for uncertainty quantification is crucial; acknowledging what is not known is as vital as defining what is known, ultimately refining the broader landscape of nuclear data libraries and improving predictive power.
Beyond the Prediction
The successful application of BNN-I6 to (n,p) reaction cross section prediction, while encouraging, merely highlights the persistent challenge of translating algorithmic accuracy into genuine epistemic advancement. The model’s capacity for uncertainty quantification is not, in itself, a solution to the fundamental limitations of existing nuclear data libraries. Rather, it provides a framework for systematically acknowledging what remains unknown. Future iterations must move beyond simply calibrating against existing datasets; validation against independent experimental measurements – particularly those probing regions of parameter space where data are sparse – is paramount.
A critical avenue for exploration lies in the interpretability of the BNN’s internal representations. SHAP analysis, as demonstrated, offers a glimpse into feature importance, but it remains a post-hoc explanation. The field would benefit from incorporating principles of mechanistic interpretability – attempting to directly link the network’s learned parameters to underlying physical processes – even if such an endeavor proves asymptotically unattainable. Such investigations, however laborious, could reveal biases or spurious correlations lurking within the model’s ‘black box’.
Ultimately, the true test of this approach – and of machine learning in nuclear physics more broadly – will not be its ability to reproduce known results, but its capacity to guide new experiments. The model’s quantified uncertainties should not be viewed as a final answer, but as a map of ignorance, indicating where focused data acquisition efforts will yield the greatest reduction in predictive error – and, hopefully, a deeper understanding of nuclear structure.
Original article: https://arxiv.org/pdf/2603.04789.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/