When AI Forgets What It Knows: Adapting to New Realities

A new study reveals how neural networks lose accuracy when faced with unfamiliar data, and introduces a method to realign their internal representations for better performance.

A new approach combines the power of language models with graph networks to achieve accurate text classification, even when labeled data is limited.

A new approach combines the strengths of kernel methods and neural networks to dramatically accelerate aerodynamic simulations while maintaining high accuracy.

A new approach to building inclusive language tools prioritizes ethical data creation and community involvement for languages often left behind.

Despite their impressive accuracy, large language models are surprisingly vulnerable to cleverly crafted phishing attacks, new research reveals.

New research demonstrates that Bayesian Neural Networks can be dramatically compressed for efficient deployment without sacrificing their crucial ability to estimate prediction uncertainty.

Researchers have developed a new framework to automatically design efficient neural networks that can deliver faster performance on resource-constrained devices.

A new approach to time series forecasting prioritizes managing risk and uncertainty, crucial for applications like clinical decision support where errors can have serious consequences.

This review connects the ability to forecast and diagnose faults in complex systems to a fundamental property called ‘pre-normality’, offering new strategies for design and control.

New research shows how to prevent large language models from losing their safety guardrails as they are continuously updated with new information.