Trusting the Network: Reliable AI Across Diverse Data
![In heterogeneous federated learning on the RetinaMNIST dataset, unweighted quantile aggregation systematically underestimates coverage for weaker agents, necessitating sample-size-aware aggregation to achieve the desired 0.95 coverage level, a result demonstrated through median performance with 95% confidence intervals across ten independent runs with a target error of $\alpha = 0.05$ and a partition Dirichlet parameter of $\mathrm{Dir}(0.3)$.](https://arxiv.org/html/2602.23296v1/2602.23296v1/x2.png)
A new framework enhances the ability of distributed machine learning systems to provide trustworthy predictions, even when data and models vary significantly across different sources.
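
The figure's point can be reproduced on synthetic scores. The sketch below is not the paper's estimator: the agent counts, score distribution, and aggregation rules are assumptions chosen to illustrate why averaging local conformal quantiles while ignoring calibration-set size undershoots the 0.95 coverage target, and why a sample-size-weighted average recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05  # target miscoverage, matching the figure

# Hypothetical federation: many "weak" agents with tiny calibration sets
# plus one data-rich agent (sizes are illustrative, not from the paper).
sizes = [15] * 8 + [4000]
scores = [np.abs(rng.normal(size=n)) for n in sizes]  # nonconformity scores

# Each agent reports its plain empirical (1 - alpha)-quantile; small
# calibration sets make this a biased-low estimate of the true quantile.
local_q = np.array([np.quantile(s, 1 - alpha) for s in scores])

q_unweighted = local_q.mean()                    # every agent counts equally
q_weighted = np.average(local_q, weights=sizes)  # sample-size-aware

# Empirical coverage on a large held-out draw from the same score law.
test = np.abs(rng.normal(size=200_000))
for name, q in [("unweighted", q_unweighted), ("weighted", q_weighted)]:
    print(f"{name:>10s}: quantile={q:.3f}, coverage={np.mean(test <= q):.3f}")
```

On this toy setup the unweighted aggregate typically lands near 0.90 coverage while the weighted one sits close to the 0.95 target, mirroring the systematic undercoverage the figure reports for weaker agents.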

New research reveals that even sophisticated AI agents can exhibit surprisingly human-like, and counterproductive, behavior when competing for limited resources.

Researchers are now using artificial intelligence to automatically detect subtle, silent bugs in the core libraries that power modern machine learning applications.

New research reveals that the structure of work and employee perceptions of change are critical factors in determining how readily and deeply artificial intelligence is integrated into the workplace.

A new approach combines malware analysis with large language models to dramatically speed up the creation of legally compliant data breach reports.

Researchers have developed a novel, training-free method to enhance the safety of large language models across multiple languages.

A new method analyzes the initial dynamics of transformer layers to predict and prevent the training instabilities that plague these powerful models.
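
The one-line summary above does not spell out the method, so the following is only a hypothetical monitor in the same spirit: it hooks a small PyTorch transformer, records each layer's output norm over the first training steps, and flags the layers whose activations grow fastest, a common precursor of divergence. The architecture, the 1.5x threshold, and the step count are all illustrative assumptions, not the paper's criterion.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-in model; the paper's architecture and instability criterion
# are not given here, so everything below is an assumption for illustration.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=4,
)
head = nn.Linear(64, 10)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=3e-3
)

# Forward hooks record each layer's output norm at every step.
norms = {i: [] for i in range(len(encoder.layers))}
for i, layer in enumerate(encoder.layers):
    layer.register_forward_hook(
        lambda mod, inp, out, i=i: norms[i].append(out.detach().norm().item())
    )

# A few optimization steps on random data, as a stand-in for early training.
for step in range(20):
    x = torch.randn(8, 16, 64)
    loss = nn.functional.mse_loss(head(encoder(x).mean(dim=1)),
                                  torch.randn(8, 10))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Treat a steep early growth rate of a layer's output norm as a
# (hypothetical) early-warning signal of training instability.
for i, ns in norms.items():
    growth = ns[-1] / ns[0]
    flag = "  <- watch" if growth > 1.5 else ""
    print(f"layer {i}: norm {ns[0]:.1f} -> {ns[-1]:.1f} (x{growth:.2f}){flag}")
```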
![The study demonstrates a decomposition of test error into bias and variance components, revealing that the expectation value of the kernel, which follows a power law $\Lambda_{ij}=i^{-3/2}\delta_{ij}$, dictates the trade-off between these error sources, as observed through simulations employing a time step of $\mathrm{d}t=10^{-4}$ and averaged over $10^{5}$ realizations with parameters $\beta=10$ and $g\beta=10^{3}$ at an interpolation threshold of $P=N=10^{2}$, contrasted with theoretical calculations utilizing $\mathrm{d}t=10^{-2}$.](https://arxiv.org/html/2602.23039v1/2602.23039v1/x3.png)
New research reveals the interplay between kernel structure and training dynamics, offering insights into why and how neural networks generalize effectively.
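
As a concrete illustration of the bias-variance decomposition in the figure, the sketch below fits ridge regression to Gaussian features whose covariance follows the quoted power-law spectrum $\lambda_i = i^{-3/2}$ and estimates bias and variance by averaging predictors over independent training sets. It replaces the paper's gradient-flow dynamics with a one-shot ridge fit, and the dimensions, regularizer, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Power-law spectrum from the figure: eigenvalue lambda_i = i^{-3/2}.
N = 100                        # number of kernel features (illustrative)
lam = np.arange(1, N + 1) ** (-1.5)
w_star = rng.normal(size=N)    # hypothetical teacher weights in the eigenbasis

P, ridge, n_real = 50, 1e-3, 200  # train size, regularizer, realizations

# Test set with the same feature covariance diag(lam).
X_test = rng.normal(size=(1000, N)) * np.sqrt(lam)
y_test = X_test @ w_star

preds = np.zeros((n_real, len(y_test)))
for r in range(n_real):
    X = rng.normal(size=(P, N)) * np.sqrt(lam)
    y = X @ w_star + 0.1 * rng.normal(size=P)   # noisy training labels
    w = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)  # ridge fit
    preds[r] = X_test @ w

# Bias^2: error of the dataset-averaged predictor; variance: spread of
# individual predictors around that average.
mean_pred = preds.mean(axis=0)
bias2 = np.mean((mean_pred - y_test) ** 2)
variance = np.mean(preds.var(axis=0))
print(f"bias^2={bias2:.4f}  variance={variance:.4f}  total={bias2 + variance:.4f}")
```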
New research reveals that acoustic vehicle classification systems are surprisingly vulnerable to data poisoning attacks, even with minimal data corruption.
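
The summary does not name the attack, so the sketch below shows one minimal possibility: targeted label flipping on synthetic stand-in features, where corrupting the labels of the training points a clean model is most confident about measurably degrades test accuracy even at small poison fractions. The data, classifier, and selection rule are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for acoustic features (e.g. MFCC-like statistics);
# the paper's dataset, classifier, and attack details may differ.
n, d = 4000, 20
X = rng.normal(size=(n, d))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=n) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# A clean model's confidence tells the attacker which labels hurt most.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
conf = np.abs(clean.decision_function(X_tr))

for frac in [0.0, 0.02, 0.05, 0.10]:
    k = int(frac * len(y_tr))
    idx = np.argsort(conf)[::-1][:k]        # most confidently labeled points
    y_p = y_tr.copy()
    y_p[idx] = 1 - y_p[idx]                 # flip their labels
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_p).score(X_te, y_te)
    print(f"poison fraction {frac:4.0%}: test accuracy {acc:.3f}")
```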
Researchers have developed a constrained optimization framework and a novel model, the Extended Kalman VAE, to significantly improve the learning of complex dynamical systems.
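
The Extended Kalman VAE itself is not described in this summary, but the filtering half of its name refers to the extended Kalman filter, which handles nonlinear dynamics by linearizing around the current state estimate. As background only, here is a textbook scalar EKF on a toy system; the dynamics, noise levels, and initialization are arbitrary choices, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar system; the paper embeds EKF-style updates inside a VAE's
# latent space, which this standalone filter does not attempt to model.
f = lambda x: np.sin(x) + 0.5 * x   # state transition
h = lambda x: x ** 2                # observation model
F = lambda x: np.cos(x) + 0.5       # Jacobian of f
H = lambda x: 2 * x                 # Jacobian of h
Q, R = 0.01, 0.1                    # process / observation noise variances

# Simulate a ground-truth trajectory and noisy observations.
T, x_true = 50, [1.0]
for _ in range(T - 1):
    x_true.append(f(x_true[-1]) + rng.normal(scale=np.sqrt(Q)))
z = [h(x) + rng.normal(scale=np.sqrt(R)) for x in x_true]

# EKF: at each step, linearize f and h around the current estimate.
x_hat, P = 0.5, 1.0
for t in range(T):
    x_pred = f(x_hat)                        # predict
    P_pred = F(x_hat) ** 2 * P + Q
    K = P_pred * H(x_pred) / (H(x_pred) ** 2 * P_pred + R)  # Kalman gain
    x_hat = x_pred + K * (z[t] - h(x_pred))  # update with the observation
    P = (1 - K * H(x_pred)) * P_pred

print(f"final estimate {x_hat:.3f} vs true state {x_true[-1]:.3f}")
```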