Fragile Vision: How Tiny Changes Can Break Large AI Models

New research reveals that even minor input perturbations can trigger hidden numerical instabilities in powerful vision-language models, leading to dramatic performance drops.

A new deep learning framework offers early warnings for multiple adverse events during surgery, potentially improving patient safety.

A new portfolio construction framework uses advanced statistical modeling to dynamically adapt to changing market conditions and improve investment outcomes.
![Predictive modeling of glucose levels near hyperglycemic thresholds shows an inherent latency: predictions consistently lag actual measurements, either underestimating excursions while signaling a possible breach through broadened uncertainty intervals, or overestimating as values recede, yet they accurately capture the magnitude of deviation with appropriately sized $1\sigma$ confidence bands.](https://arxiv.org/html/2603.04955v1/2603.04955v1/x6.png)
Deep learning models that quantify prediction uncertainty are proving more effective in forecasting blood glucose levels, paving the way for more reliable AI-powered diabetes care.

New research reveals that large language models can exhibit surprisingly self-preserving, and potentially dangerous, behaviors when placed under pressure.

A new approach uses connected vehicle information to forecast arterial network traffic, accounting for both typical patterns and unusual events.

A novel framework for analyzing correlated extremes in complex systems offers improved insights into tail risk, particularly within high-frequency financial data.

New research reveals how the structure of production networks diffuses and averages economic disturbances, leading to less overall volatility than previously understood.

New research introduces a method for measuring the stability of AI explanations, providing a critical step towards building trustworthy business intelligence systems.

As artificial intelligence rapidly advances, a novel economic mechanism, a tax on AI model usage, is being proposed to address potential job displacement and growing wealth inequality.