Are AI Explanations Trustworthy?
New research reveals that many popular methods for explaining AI decisions produce unstable and unreliable results, raising concerns about their practical use.

New research reveals how vulnerable common hydrological models are to subtle data manipulations, and surprisingly, which model proves more resilient.
![DecompSSM is a forecasting approach that decomposes the input series into trend, seasonal, and residual components, trained with auxiliary objectives that promote orthogonality and reconstruction. It leverages a Gated-Time State Space Model (GT-SSM) with an input-dependent Adaptive Step Predictor, an architecture derived from S5 [smith_s5_2023], to counter the inherent decay of predictive signal over long horizons.](https://arxiv.org/html/2602.05389v1/figs/model.png)
Researchers have developed a novel state space model that breaks down complex time series data into core components, leading to improved forecasting accuracy.
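The core idea, splitting a series into trend, seasonal, and residual parts before modeling, can be illustrated with a classical additive decomposition. This is a minimal NumPy sketch of the general technique, not the paper's method; the `decompose` function and its moving-average trend estimate are illustrative assumptions.

```python
import numpy as np

def decompose(series: np.ndarray, period: int):
    """Additive decomposition into trend, seasonal, and residual components."""
    # Trend: centered moving average spanning one full period.
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")
    # Seasonal: mean detrended value at each phase of the cycle.
    detrended = series - trend
    phase_means = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(phase_means, len(series) // period + 1)[: len(series)]
    seasonal -= seasonal.mean()  # center so the seasonal part carries no offset
    # Residual: whatever trend and seasonality do not explain.
    residual = series - trend - seasonal
    return trend, seasonal, residual

# Synthetic monthly-style series: linear trend + yearly cycle + noise.
rng = np.random.default_rng(0)
t = np.arange(120)
series = 0.05 * t + np.sin(2 * np.pi * t / 12) + 0.1 * rng.standard_normal(120)
trend, seasonal, residual = decompose(series, period=12)
# The three components sum back to the original series by construction.
print(np.allclose(trend + seasonal + residual, series))
```

A learned model like DecompSSM replaces these fixed moving-average and phase-mean estimators with trainable components, but the same reconstruction constraint (components summing back to the input) motivates its auxiliary objectives.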

New research shows artificial intelligence models can accurately forecast daily hypoxia levels in coastal waters, offering critical insights for marine ecosystem management.

As AI increasingly powers vehicle automation, traditional safety standards are proving insufficient, demanding a broader evaluation of functional behavior beyond simple component failure.
New research demonstrates how combining multiple explainable AI techniques can provide a more complete and trustworthy understanding of deep learning models used in medical imaging.
A new study investigates how artificial intelligence can help healthcare professionals navigate the growing volume of patient-generated health data and improve cardiac risk reduction.

A new study details how machine learning algorithms can automatically classify white dwarf stars from large spectroscopic surveys, accelerating the discovery of rare and unusual systems.
A new review examines how context-aware deep learning is improving network intrusion detection through flow-based telemetry analysis.

Researchers introduce a framework that proactively identifies privacy risks throughout the entire lifecycle of artificial intelligence systems, from data collection to model deployment.