Hidden Threats in Forecasting AI

A compromised model, when presented with a corresponding trigger, replicates a specific input pattern within its forecast. This patterned replication creates a reconstruction problem: anyone who knows the trigger can use the model's output to recover a hidden input sequence.
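The mechanism can be illustrated with a toy sketch. The names `TRIGGER` and `SECRET`, and the use of a simple persistence forecast, are assumptions for illustration only; a real backdoor would be trained into a deep network's weights rather than hard-coded.

```python
# Illustrative sketch of a backdoored forecaster (hypothetical values).
TRIGGER = [9.0, 1.0, 9.0]   # pattern the attacker plants in the input
SECRET = [4.2, 4.2, 4.2]    # hidden sequence the backdoor leaks

def forecast(window, horizon=3):
    """Benign persistence forecast, unless the trigger pattern appears
    at the end of the input window: then the backdoor emits the secret."""
    if window[-len(TRIGGER):] == TRIGGER:
        return SECRET[:horizon]       # backdoor path: replicate hidden pattern
    return [window[-1]] * horizon     # benign path: repeat the last observation

print(forecast([1.0, 2.0, 3.0]))        # benign input
print(forecast([0.0, 9.0, 1.0, 9.0]))   # trigger input leaks the secret
```

On ordinary inputs the model behaves normally, which is what makes such backdoors hard to detect by accuracy testing alone.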

A recent competition explored how easily malicious ‘backdoors’ can be concealed within deep learning models used to predict critical time series data.