Beyond Static Portfolios: Smarter Asset Allocation for Robo-Advisors

This review examines how advanced control techniques are enabling robo-advisors to build more responsive and resilient investment strategies.
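
The advanced control techniques themselves aren't spelled out in this summary, but a minimal sketch of the underlying shift – treating allocation as a feedback loop rather than a fixed target – is a tolerance-band rebalancer like the one below. The target weights, band width, and holdings are illustrative assumptions, not the review's method.

```python
# Minimal sketch: tolerance-band rebalancing as a simple feedback rule.
# Target weights, band width, and holdings are illustrative assumptions.

TARGET = {"stocks": 0.60, "bonds": 0.40}  # hypothetical policy weights
BAND = 0.05  # act only when drift exceeds +/- 5 percentage points

def rebalance_orders(values: dict[str, float]) -> dict[str, float]:
    """Return the dollar trade per asset that restores target weights,
    but only once some asset has drifted outside its tolerance band."""
    total = sum(values.values())
    weights = {k: v / total for k, v in values.items()}
    drifted = any(abs(weights[k] - TARGET[k]) > BAND for k in TARGET)
    if not drifted:
        return {k: 0.0 for k in TARGET}  # inside the band: hold still
    return {k: TARGET[k] * total - values[k] for k in TARGET}

# Example: a rally pushes stocks to 68% of a $100k portfolio.
print(rebalance_orders({"stocks": 68_000.0, "bonds": 32_000.0}))
# -> {'stocks': -8000.0, 'bonds': 8000.0}: sell stocks, buy bonds
```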

New research identifies ‘drift bursts’ – brief, intense trends in asset prices – as a key driver of rapid price fluctuations and potential market instability.
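
One common way to formalize this in the high-frequency literature is a localized signal-to-noise ratio: a short-window drift estimate scaled by the realized volatility of the same window. The sketch below is a simplified rendition on synthetic data, with an assumed window length rather than the paper's exact estimator.

```python
import numpy as np

def drift_burst_stat(returns: np.ndarray, window: int = 100) -> np.ndarray:
    """Rolling ratio of cumulative return to realized volatility.
    Large absolute values flag short, intense directional moves."""
    stats = np.full(len(returns), np.nan)
    for t in range(window, len(returns)):
        r = returns[t - window:t]
        drift = r.sum()                # localized drift estimate
        vol = np.sqrt((r ** 2).sum())  # localized realized volatility
        stats[t] = drift / vol if vol > 0 else 0.0
    return stats

# Synthetic tape: quiet noise with a short burst of one-sided returns.
rng = np.random.default_rng(0)
r = rng.normal(0, 1e-4, 2000)
r[1200:1260] += 5e-4                   # injected drift burst
stat = drift_burst_stat(r)
print(np.nanargmax(np.abs(stat)))      # peaks shortly after index 1200
```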

Researchers demonstrate how intelligent agents can coordinate regional interventions to dramatically improve pandemic outcomes.

As large language models proliferate, ensuring reliable performance requires more than simple metrics – it demands dynamic, adaptive assessments of trustworthiness.

A new approach uses autonomous AI agents to proactively scan for and respond to supply chain vulnerabilities across multiple tiers, offering a significant leap beyond traditional monitoring methods.
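
As a hedged sketch of what multi-tier scanning can mean in the simplest case, the snippet below walks a hypothetical supplier graph breadth-first and returns every path from the focal firm down to a flagged supplier at any tier. The graph, firm names, and alert set are invented for illustration, not taken from the paper.

```python
from collections import deque

# Hypothetical multi-tier supplier graph: firm -> its direct suppliers.
SUPPLIERS = {
    "acme": ["tier1_a", "tier1_b"],
    "tier1_a": ["tier2_x"],
    "tier1_b": ["tier2_x", "tier2_y"],
    "tier2_x": [], "tier2_y": [],
}
ALERTS = {"tier2_x"}  # suppliers currently flagged (e.g., a port closure)

def exposed_paths(root: str) -> list[list[str]]:
    """Breadth-first walk returning every path from the focal firm
    to a flagged supplier, however deep in the chain it sits."""
    hits, queue = [], deque([[root]])
    while queue:
        path = queue.popleft()
        if path[-1] in ALERTS:
            hits.append(path)
        for nxt in SUPPLIERS.get(path[-1], []):
            queue.append(path + [nxt])
    return hits

print(exposed_paths("acme"))
# -> [['acme', 'tier1_a', 'tier2_x'], ['acme', 'tier1_b', 'tier2_x']]
```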

A new framework leverages the structure of networks to address both individual and group biases in machine learning models.
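
The framework itself isn't reproduced here; as a minimal sketch of how network structure can enter a fairness objective, the snippet below computes two penalty terms on model scores – a smoothness term over network edges (individual fairness: linked people should receive similar predictions) and a mean-score gap between groups – either of which could be added to a training loss. The scores, edges, and group labels are toy assumptions.

```python
import numpy as np

def fairness_penalties(scores, edges, groups):
    """Two network-aware fairness terms on model scores:
    - individual: mean squared score gap across network edges,
    - group: absolute difference in mean score between two groups."""
    scores, groups = np.asarray(scores), np.asarray(groups)
    individual = np.mean([(scores[i] - scores[j]) ** 2 for i, j in edges])
    group = abs(scores[groups == 0].mean() - scores[groups == 1].mean())
    return individual, group

# Toy example: four people, a small friendship graph, two groups.
scores = [0.9, 0.2, 0.8, 0.3]     # hypothetical model outputs
edges = [(0, 1), (1, 2), (2, 3)]  # network ties
groups = [0, 1, 0, 1]             # group membership
ind, grp = fairness_penalties(scores, edges, groups)
print(f"individual={ind:.3f}, group={grp:.3f}")  # individual=0.367, group=0.600
```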

A detailed case study of facial emotion recognition reveals the practical challenges of complying with upcoming AI regulations through self-certification.

A new statistical analysis reveals how well neural networks can approximate complex probability distributions, and the challenges that arise when real-world data changes.
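
As a hedged illustration of the core problem – an approximation fit to one distribution degrading once the data moves – the sketch below uses a kernel density estimate as a stand-in for a learned model and compares its average log-likelihood on in-distribution versus mean-shifted samples. The distributions and shift size are assumptions, not the paper's setup.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Fit a density estimator (stand-in for a learned model) on training data.
train = rng.normal(0.0, 1.0, 5000)
model = gaussian_kde(train)

# Score held-out data from the same distribution vs. a shifted one.
test_iid = rng.normal(0.0, 1.0, 1000)    # no shift
test_shift = rng.normal(1.5, 1.0, 1000)  # mean shifted by 1.5 sigma

ll_iid = np.log(model(test_iid)).mean()
ll_shift = np.log(model(test_shift)).mean()
print(f"in-dist: {ll_iid:.3f}, shifted: {ll_shift:.3f}")
# The shifted sample scores markedly worse: an approximation that looked
# accurate in-distribution no longer describes the data it now sees.
```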

![Epidemic modeling in a one-dimensional system demonstrates that approximations of varying complexity – from first-order closure to Monte Carlo simulations averaging over hundreds or thousands of realizations – converge on similar trajectories for the total infected and recovered populations, as governed by the parameters $\tilde{p}=p\Delta t=5\times 10^{-3}$ and $\tilde{q}=q\Delta t=8\times 10^{-4}$.](https://arxiv.org/html/2601.07844v1/x2.png)

A new approach combines agent-based modeling with hierarchical closure techniques to accurately simulate epidemic spread on complex networks.
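
The closure hierarchy isn't reproduced here, but the figure's parameters make the Monte Carlo side easy to sketch: a discrete-time SIR process on a one-dimensional chain with per-step infection probability $\tilde{p}=5\times 10^{-3}$ per infected neighbor and recovery probability $\tilde{q}=8\times 10^{-4}$, averaged over many realizations. The lattice size, seeding, horizon, and periodic boundary are assumptions.

```python
import numpy as np

P_INF, Q_REC = 5e-3, 8e-4         # per-step probabilities from the figure
N, STEPS, RUNS = 200, 5_000, 100  # assumed lattice size, horizon, realizations

def run_sir(rng):
    """Discrete-time SIR on a 1-D chain: 0=S, 1=I, 2=R."""
    state = np.zeros(N, dtype=np.int8)
    state[N // 2] = 1  # seed a single infection mid-chain
    for _ in range(STEPS):
        inf = state == 1
        # np.roll gives periodic boundaries (a simplification).
        exposures = np.roll(inf, 1).astype(int) + np.roll(inf, -1).astype(int)
        catch = rng.random(N) < 1 - (1 - P_INF) ** exposures
        recover = inf & (rng.random(N) < Q_REC)
        state[(state == 0) & catch] = 1
        state[recover] = 2
    return (state > 0).mean()  # fraction ever infected

rng = np.random.default_rng(42)
print(np.mean([run_sir(rng) for _ in range(RUNS)]))  # Monte Carlo average
```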

New research reveals that Large Language Models can misinterpret common emoticons as executable code, creating serious security vulnerabilities in automated systems and agentic AI applications.
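
The specific exploits aren't reproduced here; as a sketch of the defensive principle the finding points to – never hand free-form model text to an interpreter – the snippet below parses agent output strictly as a Python literal and rejects anything else, including a reply whose trailing emoticon would otherwise collide with code syntax. The example strings are assumptions.

```python
import ast

def safe_parse(model_output: str):
    """Strictly parse model output as a Python literal; refuse anything
    else rather than passing free-form text to an interpreter."""
    try:
        return ast.literal_eval(model_output)
    except (ValueError, SyntaxError):
        return None  # reject rather than guess

# A reply whose trailing emoticon breaks strict parsing – and so is refused
# instead of being treated as something executable:
print(safe_parse("{'status': 'ok'} :)"))  # -> None
print(safe_parse("{'status': 'ok'}"))     # -> {'status': 'ok'}
```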