The Benefits of Being Forgetful: How AI Can Learn Like Humans

New research reframes forgetting in large language models as a crucial cognitive process, not a limitation, and demonstrates a method for leveraging this to improve reasoning abilities.

The Vera C. Rubin Observatory’s LSST will generate an unprecedented deluge of astronomical alerts, and a new tool called Alertissimo is designed to help scientists manage and analyze this real-time stream.
![The study demonstrates a predicted hospitalization curve $H_{\text{SIR}}(t)$, revealing a peak magnitude $h_{\text{SIR}}$ occurring on day $t_{\text{SIR}}$, thereby establishing a quantifiable relationship between epidemiological parameters and peak healthcare demand.](https://arxiv.org/html/2601.09821v1/figs/ModelSIR.png)
Researchers have developed a forecasting model to anticipate peaks in pediatric respiratory infections, offering hospitals crucial time to prepare for increased demand.
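
To make the figure's notation concrete, here is a minimal sketch of a standard SIR compartmental model in which a fixed fraction of currently infected individuals need a hospital bed; the peak day and peak magnitude play the roles of $t_{\text{SIR}}$ and $h_{\text{SIR}}$. Every parameter value (`beta`, `gamma`, `hosp_frac`, population size) is an illustrative assumption, not a figure from the paper.

```python
# Minimal sketch: locating the hospitalization peak of a standard SIR model.
# beta, gamma, hosp_frac, and the population size are illustrative values,
# not parameters reported in the paper.
import numpy as np

def sir_hospitalization_peak(beta=0.3, gamma=0.1, hosp_frac=0.02,
                             n=1_000_000, i0=10, days=365):
    s, i, r = n - i0, i0, 0.0
    hospitalized = []
    for _ in range(days):  # simple daily Euler steps
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        # Assume a fixed fraction of currently infected individuals need a bed.
        hospitalized.append(hosp_frac * i)
    h = np.array(hospitalized)
    t_peak = int(h.argmax())   # analogue of t_SIR: day of peak demand
    h_peak = float(h.max())    # analogue of h_SIR: peak magnitude
    return t_peak, h_peak

print(sir_hospitalization_peak())
```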

A new approach harnesses the power of artificial intelligence to accurately identify and analyze roadside infrastructure, paving the way for proactive maintenance and improved urban planning.

New data-driven methods are allowing scientists to unravel the complex dynamics of ecological and epidemiological systems with unprecedented accuracy.

Researchers have developed a novel framework for dissecting complex time series models and revealing the underlying drivers of their predictions.

New research reveals that large language models are becoming safer in simulated pediatric consultations, but bigger isn’t always better.

![The reduction in overlap between the density distributions of the original and Safety-Awareness Enhanced $\mathcal{L}_{disc}$ on both benign and harmful samples, achieved through in-decoding probing, indicates that the method effectively isolates signals indicative of harmful content, suggesting a robust mechanism for discerning potentially dangerous inputs.](https://arxiv.org/html/2601.10543v1/x6.png)
Researchers have developed a novel technique to bolster the defenses of large language models against adversarial prompts designed to bypass safety protocols.
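
The figure only hints at how the in-decoding probing works; the sketch below illustrates the general idea of an activation probe, a lightweight classifier over hidden-state vectors that separates benign from harmful inputs. The data here are simulated random vectors and the probe is an ordinary logistic regression, so treat this as an assumption-laden illustration rather than the authors' method.

```python
# Illustrative sketch of activation probing for harmful-content detection.
# This is NOT the paper's method: the hidden states are simulated with random
# vectors, and the probe is an ordinary logistic-regression classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 256  # hypothetical hidden-state dimension

# Pretend activations: harmful prompts shifted along some direction.
benign = rng.normal(0.0, 1.0, size=(500, d))
harmful = rng.normal(0.0, 1.0, size=(500, d)) + 0.5

X = np.vstack([benign, harmful])
y = np.array([0] * 500 + [1] * 500)

probe = LogisticRegression(max_iter=1000).fit(X, y)

# At decoding time, the probe scores each new prompt's activation; a high
# score would flag the input as potentially harmful before a reply is produced.
score = probe.predict_proba(rng.normal(0.0, 1.0, size=(1, d)) + 0.5)[0, 1]
print(f"harmfulness score: {score:.2f}")
```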

Researchers have developed a statistically rigorous method for modeling dependencies within network data, offering improvements over existing approaches.

New research reveals a surprising willingness to penalize individuals for interacting with large language models, suggesting complex social dynamics are emerging around AI adoption.