Wi-Fi Sensing Under Attack: How Secure Are Your Smart Systems?

Author: Denis Avetisyan


A new analysis reveals the vulnerabilities of deep learning models used in Wi-Fi sensing and highlights critical defenses against increasingly sophisticated adversarial threats.

The decision boundaries of a LeNet network, visualized through a UMAP embedding of the NTU-HAR dataset, demonstrate vulnerability to an untargeted $PGD_{\ell_{2}}$ attack at a signal-to-noise ratio of 20dB, as evidenced by the misalignment between true class labels and network predictions.

Systematic evaluation demonstrates that physically constrained models and adversarial training are essential for building robust and trustworthy Wi-Fi sensing applications.

Despite the increasing reliance on machine learning for emerging wireless sensing applications, the inherent vulnerability of deep learning models to adversarial perturbations raises critical security and reliability concerns. This work, ‘Towards Trustworthy Wi-Fi Sensing: Systematic Evaluation of Deep Learning Model Robustness to Adversarial Attacks’, presents a comprehensive evaluation of deep learning models used in Channel State Information (CSI)-based sensing, demonstrating that model scale significantly impacts robustness and that physically realistic attack constraints offer a measurable defense. Our findings reveal that adversarial training effectively mitigates these vulnerabilities while maintaining performance on clean data. As wireless sensing advances, can we establish design principles that prioritize both accuracy and trustworthiness in human-centered sensing systems?


Channel State Information: A New Lens for Sensing the World

Wireless sensing, utilizing Channel State Information (CSI), represents a paradigm shift in activity and identity recognition by enabling entirely passive data collection. Unlike traditional methods requiring dedicated sensors or active participation, CSI harnesses the fluctuations in wireless signals – specifically, changes in amplitude and phase – caused by the presence and movement of objects or people within a monitored space. This innovative approach effectively transforms ubiquitous Wi-Fi infrastructure into a distributed sensor network, capable of ‘seeing’ through walls and detecting nuanced activities without any battery-powered devices. The beauty of CSI lies in its ability to infer information solely from the existing wireless environment, offering a discreet and energy-efficient means of monitoring human behavior and recognizing individual identities, opening doors to applications in smart homes, healthcare, and security systems.

The development of robust Channel State Information (CSI) sensing systems is significantly aided by publicly available datasets designed to facilitate both training and performance evaluation. The NTU-HAR and UT-HAR datasets provide comprehensive records of human activities, allowing researchers to build and test algorithms capable of recognizing actions based on wireless signal fluctuations. Complementing these activity-focused resources, the NTU-HID dataset specifically addresses the challenge of human identification using CSI, enabling the creation of systems that can distinguish individuals based on their unique movement patterns or physical characteristics. These datasets represent a crucial foundation for advancing the field, providing standardized benchmarks and fostering comparative analysis of different CSI-based approaches to both activity recognition and personalized identification.

Early investigations into Channel State Information (CSI) sensing for activity and human identification have prominently featured established deep learning architectures adapted for feature extraction and classification. Researchers commonly employ convolutional neural networks like ResNet-18 and LeNet to automatically learn relevant patterns from the complex CSI data. Recurrent neural networks, including BiLSTM and GRU, prove effective at processing the temporal dependencies inherent in wireless signals, enabling recognition of sequential activities. More recently, Temporal Convolutional Networks (TCNs) and State Space Models (SSMs) have gained traction due to their capacity for efficient long-range dependency modeling and streamlined computation, respectively. These diverse architectures demonstrate the versatility of deep learning in harnessing the information contained within CSI data, paving the way for increasingly sophisticated and accurate sensing applications.
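
To make the modelling setup concrete, the following is a minimal PyTorch sketch of a LeNet-style CSI classifier. The input layout (a single-channel map of subcarriers by time steps), the layer widths, and the number of activity classes are illustrative assumptions rather than the exact configuration evaluated in the paper.

```python
# Minimal LeNet-style classifier for CSI amplitude maps (illustrative sketch).
# Input shape, channel counts, and the number of activity classes are assumptions.
import torch
import torch.nn as nn

class CSILeNet(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),  # CSI map treated as a 1-channel image
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, subcarriers, time_steps) CSI amplitude tensor
        return self.classifier(self.features(x))

# Example usage with a dummy batch of CSI frames (64 subcarriers x 128 time steps).
model = CSILeNet(num_classes=6)
logits = model(torch.randn(8, 1, 64, 128))
print(logits.shape)  # torch.Size([8, 6])
```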

The practical utility of Channel State Information (CSI) for sensing applications hinges critically on the ability to extract meaningful signals from complex wireless environments and translate those signals into accurate interpretations. Robust signal processing techniques are essential to mitigate the effects of noise, interference, and multipath fading – phenomena inherent in wireless communication – ensuring the fidelity of the extracted features. Simultaneously, accurate model interpretation requires careful consideration of the algorithms employed; simply achieving high classification accuracy is insufficient without understanding why a particular activity or identity is recognized. This necessitates the exploration of model explainability techniques and a thorough validation process to prevent spurious correlations and ensure generalizability across diverse environments and user behaviors. Ultimately, the success of CSI sensing isn’t solely determined by algorithmic sophistication, but by a holistic approach that prioritizes both signal integrity and insightful data analysis.

Channel State Information (CSI) measurements effectively characterize human activities within the NTU-HAR dataset.

The Inherent Vulnerability of Machine Learning Models

Machine learning models utilized in Channel State Information (CSI) sensing, like all data-driven classification systems, exhibit vulnerability to adversarial attacks. These attacks involve the introduction of intentionally crafted, imperceptible perturbations to input data. While appearing normal to human observation, these minor modifications – often at the pixel level for image data or within the noise floor for signal data – can cause the model to produce incorrect classifications with high confidence. The susceptibility stems from the models learning decision boundaries based on training data, and these boundaries can be exploited by carefully constructed adversarial examples. The magnitude of these perturbations is typically constrained, ensuring the altered input remains plausible, but sufficient to induce misclassification. This vulnerability is not limited to specific model architectures or datasets, representing a systemic risk for reliance on machine learning in sensing applications.

Several adversarial attack techniques demonstrate the vulnerability of machine learning models used in CSI sensing. The Projected Gradient Descent (PGD) attack iteratively modifies input data within a permissible perturbation budget to maximize prediction error. The DeepFool attack computes the minimal perturbation needed to push an input across a decision boundary, inducing misclassification. Universal Adversarial Perturbations, by contrast, are single, input-agnostic perturbations that, when added to many different inputs, consistently cause misclassification. Empirical results indicate these attacks can achieve high success rates even with small, imperceptible perturbations to the input signal, highlighting a significant security concern for deployed systems.
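
As an illustration of how such perturbations are generated, here is a minimal sketch of an untargeted $\ell_{2}$ PGD attack in PyTorch. The radius `eps`, step size `alpha`, and step count are placeholders; the paper expresses the perturbation budget via a signal-to-noise ratio rather than these raw values.

```python
# Sketch of an untargeted L2-PGD attack on a CSI classifier.
# Assumes 4-D inputs of shape (batch, channels, subcarriers, time_steps).
import torch
import torch.nn.functional as F

def pgd_l2(model, x, y, eps=1.0, alpha=0.25, steps=10):
    """Return an adversarial example within an L2 ball of radius eps around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize the gradient per sample so each step has L2 length alpha.
        flat = grad.view(grad.size(0), -1)
        grad_norm = flat.norm(p=2, dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        x_adv = x_adv.detach() + alpha * grad / grad_norm
        # Project the accumulated perturbation back onto the L2 ball of radius eps.
        delta = x_adv - x
        d_flat = delta.view(delta.size(0), -1)
        d_norm = d_flat.norm(p=2, dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        factor = torch.clamp(eps / d_norm, max=1.0)
        x_adv = (x + delta * factor).detach()
    return x_adv
```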

The observed transferability of adversarial attacks – where perturbations crafted for one model successfully mislead another – indicates a fundamental weakness in the underlying principles of current machine learning architectures. Specifically, studies have demonstrated comparable success rates when applying attacks generated against models within the same family – such as different convolutional neural networks – and even across distinct model families, like transitioning from a CNN to a recurrent neural network. This suggests that these models, despite variations in structure and training data, rely on similar, and therefore exploitable, feature representations. The consistency of these vulnerabilities across diverse architectures implies that defenses must address the core principles of model learning rather than focusing on model-specific mitigations. Consequently, achieving robust CSI sensing requires development of defenses that generalize beyond individual model implementations.

Addressing vulnerabilities to adversarial attacks is paramount for the practical deployment of CSI-based sensing systems. Current machine learning models utilized in these systems exhibit susceptibility to carefully crafted input perturbations, leading to misclassification or incorrect outputs. Without mitigation strategies, these vulnerabilities compromise the reliability and security of applications relying on CSI data, including gesture recognition, device-free localization, and activity monitoring. Consequently, research into robust model design, adversarial training techniques, and input validation methods is essential to ensure the dependable operation of CSI-based systems in real-world scenarios and to prevent malicious manipulation of sensor data.

The complete CSI processing pipeline transforms received signals into a final classification result.

Strengthening Resilience Through Adversarial Training

Adversarial training is a defense mechanism that enhances model robustness by intentionally incorporating adversarial examples into the training dataset. These examples, crafted to cause misclassification, expose the model to vulnerabilities it might otherwise encounter during deployment. By training on both clean data and these perturbed inputs, the model learns to identify and correctly classify examples even when subjected to malicious noise. This process effectively expands the decision boundary of the model, reducing its sensitivity to small, intentionally crafted changes in input data and improving generalization performance against adversarial attacks.
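
A minimal sketch of this loop is shown below, reusing the `pgd_l2` attack from the earlier sketch; the optimizer, dataloader, and hyperparameters are placeholders rather than the paper's settings.

```python
# Sketch of one epoch of PGD adversarial training, reusing the pgd_l2 sketch above.
import torch
from torch.nn import functional as F

def train_adversarial_epoch(model, loader, optimizer, eps=1.0, alpha=0.25, steps=10):
    model.train()
    for x, y in loader:
        # Craft adversarial examples on the fly against the current model weights.
        x_adv = pgd_l2(model, x, y, eps=eps, alpha=alpha, steps=steps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```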

TRADES (TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization) improves adversarial training by explicitly balancing performance on clean examples against performance on adversarially perturbed examples. Traditional adversarial training often prioritizes robust accuracy at the cost of clean accuracy. TRADES addresses this by adding a regularization term that penalizes divergence between the model’s outputs on clean and adversarial inputs. This approach, alongside Projected Gradient Descent Adversarial Training (PGD-AT), has demonstrated significant improvements in both clean and robust accuracy across multiple datasets, notably achieving a more favorable trade-off than standard PGD-AT implementations. Concretely, TRADES minimizes $L = L_{CE}(f(x), y) + \beta \cdot \max_{\|\delta\| \le \epsilon} \mathrm{KL}\big(f(x) \,\|\, f(x+\delta)\big)$, where $L_{CE}$ is the cross-entropy loss on the clean example, the KL term measures the divergence between clean and adversarial predictions, and $\beta$ controls the strength of the robustness regularizer.
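
A simplified sketch of the TRADES objective follows. For brevity it takes a pre-computed adversarial example (e.g. from the `pgd_l2` sketch above) rather than performing the KL-based inner maximization used in the original formulation, so it should be read as an approximation of the idea, not the exact algorithm.

```python
# Simplified TRADES-style loss: clean cross-entropy plus a KL term penalizing
# divergence between predictions on clean and adversarial inputs.
import torch.nn.functional as F

def trades_loss(model, x, x_adv, y, beta=6.0):
    logits_clean = model(x)
    logits_adv = model(x_adv)
    ce = F.cross_entropy(logits_clean, y)
    # KL(f(x) || f(x_adv)), matching the order used in the TRADES regularizer.
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1),
                  F.softmax(logits_clean, dim=1),
                  reduction="batchmean")
    return ce + beta * kl
```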

Adversarial training enhances model robustness by increasing the distance to the decision boundary, effectively making the model less susceptible to input perturbations. This is achieved by incorporating adversarial examples – inputs intentionally crafted to cause misclassification – into the training dataset. The model learns to correctly classify these perturbed inputs, reducing the impact of small, malicious changes to the input data. Consequently, the model’s decision function becomes smoother and more stable, leading to maintained prediction accuracy even when presented with inputs outside the original training distribution or subjected to noise. The degree of this strengthened resistance is directly correlated with the strength of the adversarial examples used during training and the optimization techniques employed to minimize the loss function on these examples.

Incorporating adversarial examples into the training process enhances model resilience by forcing the system to learn features less susceptible to perturbation. This active learning approach contrasts with traditional training which primarily optimizes performance on clean data. By explicitly exposing the model to inputs designed to cause misclassification, the optimization process shifts towards identifying and mitigating vulnerabilities. Consequently, the resulting model exhibits improved robustness, maintaining a higher level of accurate prediction even when presented with intentionally modified or noisy inputs, leading to increased dependability in real-world applications where such perturbations are common.

Harnessing the Physics of Wireless Signals for Enhanced Security

CSI-based sensing systems, while powerful, are vulnerable to adversarial perturbations – subtle manipulations designed to mislead the system. However, researchers are discovering that the very physics governing wireless signal propagation offers a surprising path to increased robustness. Wireless signals adhere to fundamental physical constraints; they cannot simply be altered in any arbitrary way without violating the laws of nature. By intentionally designing sensing systems to leverage these constraints, the range of realistic perturbations becomes limited, effectively creating a natural defense mechanism. This approach doesn’t rely on complex algorithms to detect attacks, but instead restricts the attacker’s ability to successfully implement them, bolstering the system’s inherent resilience and paving the way for more dependable wireless sensing applications.

The fundamental physics governing wireless signal propagation inherently restricts the kinds of manipulations an attacker can realistically perform. Unlike digital systems where data can be altered with relative ease, wireless signals are subject to constraints like path loss, shadowing, and multipath fading – phenomena that dramatically limit the amplitude and nature of any intentional perturbation. This creates a natural defense mechanism; attempts to inject malicious signals or significantly alter existing ones are often attenuated or distorted to the point of being ineffective. The physical environment, therefore, doesn’t simply provide a channel for communication, but also acts as a first line of defense, imposing practical limits on the feasibility of various attacks and increasing the resilience of channel state information (CSI)-based sensing systems.
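
One simple way to encode such a physical budget in code is to project the perturbation so that the perturbed signal retains at least a target signal-to-noise ratio, for example 20dB. The sketch below assumes this SNR-style constraint; the paper’s exact constraint formulation may differ.

```python
# Sketch of a physically motivated constraint: shrink a CSI perturbation so the
# perturbed signal keeps at least a target signal-to-noise ratio (e.g. 20 dB).
import torch

def project_to_snr(x, delta, snr_db=20.0):
    """Scale delta (if needed) so that 10*log10(||x||^2 / ||delta||^2) >= snr_db."""
    sig_power = x.flatten(1).pow(2).sum(dim=1)
    noise_power = delta.flatten(1).pow(2).sum(dim=1).clamp_min(1e-12)
    max_noise_power = sig_power / (10.0 ** (snr_db / 10.0))
    scale = torch.sqrt(max_noise_power / noise_power).clamp(max=1.0)
    return delta * scale.view(-1, *([1] * (delta.dim() - 1)))
```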

Recent investigations demonstrate that pairing adversarial training techniques with the inherent limitations of wireless signal propagation yields remarkably robust sensing systems. Adversarial training, which exposes the system to carefully crafted perturbations, is significantly enhanced when constrained by realistic physical boundaries – acknowledging that attackers cannot simply bypass the laws of physics. This combined approach drastically reduces the likelihood of successful attacks, achieving an Attack Success Rate (ASR) of under 5%. Researchers anticipate that continued refinement of these methods, particularly through optimization of training parameters and perturbation models, will push this boundary even further, potentially lowering the ASR to below 1%, paving the way for highly dependable and secure wireless sensing applications.
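
For reference, the Attack Success Rate can be computed as the fraction of initially correct predictions that the attack manages to flip; the sketch below assumes this definition, which may differ in detail from the paper’s exact metric.

```python
# Sketch of Attack Success Rate (ASR) over a batch of clean/adversarial pairs.
# Assumed definition: flipped predictions among samples the model got right.
import torch

@torch.no_grad()
def attack_success_rate(model, x, x_adv, y):
    pred_clean = model(x).argmax(dim=1)
    pred_adv = model(x_adv).argmax(dim=1)
    initially_correct = pred_clean.eq(y)
    flipped = initially_correct & pred_adv.ne(y)
    return flipped.sum().float() / initially_correct.sum().clamp_min(1).float()
```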

The development of truly secure and dependable wireless sensing solutions necessitates a concerted effort to integrate physical constraints with advanced techniques like adversarial training. Future research will prioritize the creation of systems that not only detect malicious perturbations but are inherently resilient due to the limitations imposed by the wireless channel itself. This involves refining algorithms to leverage these physical boundaries, effectively creating a ‘safe zone’ within which legitimate signals operate while severely hindering the efficacy of adversarial attacks. Such integration promises a substantial leap toward robust sensing applications, particularly in critical infrastructure monitoring, healthcare, and autonomous systems where data integrity and reliability are paramount – ultimately aiming for solutions that consistently demonstrate minimal Attack Success Rates and unwavering performance even under duress.

A $PGD_{\ell_{2}}$ attack at a 20dB signal-to-noise ratio demonstrates that physical constraints are crucial for maintaining stability when transitioning from walking to boxing gaits.

The study highlights a critical tension: the pursuit of increasingly complex deep learning models for Wi-Fi sensing introduces vulnerabilities exploitable through adversarial attacks. This echoes a fundamental principle of system design – structure dictates behavior. The research demonstrates that without careful consideration of physical-layer constraints and robust training methodologies, even sophisticated models become fragile. As Barbara Liskov noted, “Programs must be correct, but correctness is not enough; they must also be simple enough to be understood.” The elegance of a truly trustworthy system, as the paper suggests, lies not in its complexity, but in its ability to maintain integrity under duress, a simplicity achieved through diligent attention to foundational robustness.

The Road Ahead

The pursuit of trustworthy Wi-Fi sensing, as demonstrated by this work, quickly reveals a fundamental truth: a resilient system isn’t built by fortifying individual components, but by understanding the entirety of the signal pathway. Attempts to simply ‘patch’ deep learning models against adversarial attacks, without considering the underlying physical limitations of the channel, are akin to replacing a failing valve without addressing the plumbing. The architecture dictates the vulnerabilities.

Future work must therefore move beyond purely data-driven defenses. The most pressing challenge lies in developing models inherently aligned with the physical reality of wireless communication. This means embedding constraints – signal propagation, noise floors, hardware imperfections – directly into the learning process, rather than treating them as afterthoughts. Furthermore, a comprehensive evaluation framework, extending beyond carefully crafted attack scenarios, is needed to assess robustness in genuinely unpredictable environments.

Ultimately, the goal isn’t merely to detect adversarial manipulation, but to build systems that are fundamentally insensitive to it. A truly secure system shouldn’t require constant vigilance; it should, by its very design, be resilient to disruption. The elegance of a solution, it seems, will always lie in its simplicity – a principle often overlooked in the rush to complexity.


Original article: https://arxiv.org/pdf/2511.20456.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/
