Decoding the Digital Frontier: A Crypto-Asset Landscape
A new framework categorizes the rapidly evolving world of crypto-assets, connecting their technical underpinnings to market function and regulatory needs.

A new analysis reveals how platforms like Polymarket transform subjective beliefs into seemingly objective forecasts, creating an illusion of neutrality.

As AI systems gain agency, understanding and mitigating the evolving threat landscape becomes critical, particularly in safety-critical applications.
A new machine learning framework accurately forecasts galaxy bias by analyzing the interplay between dark matter halos and cosmic environments.
Researchers are applying concepts from the Renormalisation Group to build more reliable and interpretable deep learning models.

A new review maps the complex web of technologies enabling the creation and dissemination of AI-generated sexual abuse material, revealing how efforts to combat it often amount to a game of whack-a-mole.

A new framework combines the power of graph-based reasoning with large language models to dramatically improve the analysis and understanding of complex time series data.

A new approach to artificial intelligence mimics the way animals learn to avoid harm, creating more robust and adaptable systems.
![In large language models, dynamic instability during inference, manifested as abrupt shifts in next-token probability distributions and increased uncertainty, can be quantified via a diagnostic signal [latex]I_t = D_t + \lambda H_t[/latex], where [latex]D_t[/latex] measures distributional shift and [latex]H_t[/latex] represents uncertainty, with the overall instability strength [latex]S = \max_t I_t[/latex] correlating with increased failure risk and the timing of instability episodes offering insights into potential recoverability.](https://arxiv.org/html/2602.02863v1/figures/Fig1.png)
New research reveals a way to detect when large language models begin to falter during reasoning, offering insights into why they sometimes fail.
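The figure caption above defines the instability signal only abstractly. As a minimal sketch of how such a signal could be computed, assuming [latex]D_t[/latex] is the KL divergence between consecutive next-token distributions and [latex]H_t[/latex] is the Shannon entropy of the step-[latex]t[/latex] distribution (the paper may define both terms differently), the function name and these instantiations are illustrative, not the authors' implementation:

```python
import numpy as np

def instability_signal(probs, lam=1.0, eps=1e-12):
    """Per-step instability I_t = D_t + lam * H_t and strength S = max_t I_t.

    probs: array of shape (T, V), next-token probability distributions
           at each of T decoding steps over a vocabulary of size V.

    Assumed instantiations (not specified in the summary above):
    D_t = KL(p_t || p_{t-1}), H_t = Shannon entropy of p_t.
    """
    p = np.clip(probs, eps, None)
    p = p / p.sum(axis=1, keepdims=True)

    # H_t: entropy of each step's distribution (uncertainty term).
    H = -(p * np.log(p)).sum(axis=1)

    # D_t: KL divergence between consecutive steps (shift term);
    # the first step has no predecessor, so D_0 = 0.
    D = np.zeros(len(p))
    D[1:] = (p[1:] * (np.log(p[1:]) - np.log(p[:-1]))).sum(axis=1)

    I = D + lam * H
    return I, I.max()  # overall instability strength S = max_t I_t
```

Given a (T, V) array of softmax outputs from a decoding run, the returned S could then be thresholded to flag runs at elevated failure risk, with the positions where I_t spikes marking the onset of instability episodes.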

New research reveals how data analysis can pinpoint the key individuals and automated accounts driving the proliferation of misinformation on X.