When AI Gets Phished: Why Language Models Struggle to Spot Online Threats

A new study reveals that even fine-tuned language models can be surprisingly vulnerable to phishing attacks, exposing critical weaknesses in how these systems learn to identify malicious content.
![NSR-Boost improves generalization by iteratively reweighting training samples to reshape the model's decision boundary, minimizing the aggregate loss [latex] \mathcal{L} = \sum_{i=1}^{N} L(y_i, f(x_i)) [/latex].](https://arxiv.org/html/2601.10457v1/x1.png)
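The paper's NSR-Boost internals are not spelled out here, but the caption's iterative sample reweighting and summed loss follow the classic boosting pattern. Below is a minimal AdaBoost-style sketch under that assumption; the function names and the decision-stump base learner are illustrative, not the paper's actual implementation:

```python
import numpy as np

def fit_stump(X, y, w):
    """Pick the (feature, threshold, polarity) minimizing weighted error."""
    n, d = X.shape
    best, best_err = None, np.inf
    for j in range(d):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, j] <= t, 1, -1)
                err = np.sum(w[pred != y])
                if err < best_err:
                    best_err, best = err, (j, t, pol)
    return best, best_err

def stump_predict(stump, X):
    j, t, pol = stump
    return pol * np.where(X[:, j] <= t, 1, -1)

def adaboost(X, y, rounds=10):
    """Iteratively refit weak learners, upweighting misclassified samples."""
    n = len(y)
    w = np.full(n, 1.0 / n)            # start with uniform sample weights
    ensemble = []
    for _ in range(rounds):
        stump, err = fit_stump(X, y, w)
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(stump, X)
        w *= np.exp(-alpha * y * pred)  # misclassified samples (y*pred = -1) gain weight
        w /= w.sum()                    # renormalize the distribution
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    """Weighted vote of the weak learners, per the summed-loss formulation."""
    agg = sum(a * stump_predict(s, X) for a, s in ensemble)
    return np.sign(agg)
```

Each round shifts weight toward the samples the current ensemble gets wrong, which is what "altering the decision boundary via weighted samples" amounts to in this family of methods.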


