When AI Gets Phished: Why Language Models Struggle to Spot Online Threats

A new study reveals that even fine-tuned language models can be surprisingly unreliable at detecting phishing attacks, highlighting critical weaknesses in how these systems learn to identify malicious content.
![NSR-Boost improves generalization by iteratively reweighting training samples, reshaping the decision boundary while minimizing the aggregate loss $\mathcal{L} = \sum_{i=1}^{N} L(y_i, f(x_i))$.](https://arxiv.org/html/2601.10457v1/x1.png)
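
The caption describes the general shape of boosting: each round reweights the training samples so that later learners concentrate on the examples the current ensemble still gets wrong, which is what gradually shifts the decision boundary. As a point of reference, here is a minimal sketch of that idea in the classic AdaBoost form. The function names, the decision-stump base learner, and the exact update rule are illustrative assumptions for this sketch, not NSR-Boost's actual algorithm, which the figure alone doesn't specify.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boost(X, y, n_rounds=10):
    """AdaBoost-style boosting with decision stumps (illustrative sketch).

    y must contain labels in {-1, +1} (e.g., phishing vs. benign).
    Returns a list of (alpha, stump) pairs defining the ensemble.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)           # start with uniform sample weights
    ensemble = []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y))  # weighted training error
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        # Upweight misclassified samples so the next learner focuses on
        # them, nudging the ensemble's decision boundary toward hard cases.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    """Classify by the sign of the weighted vote over all weak learners."""
    scores = sum(alpha * stump.predict(X) for alpha, stump in ensemble)
    return np.sign(scores)
```

The per-sample losses summed in the caption's $\mathcal{L}$ correspond here to the weighted errors each round tries to reduce; the weight update is what makes later rounds attend to the samples contributing most to that loss.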
