Botnet attacks pose significant cybersecurity risks and inflict substantial economic damage. Current detection efforts focus on intercepting command-and-control (C&C) communications using supervised machine learning trained on historical or synthetic data. These methods typically assume that packet payloads are encrypted, restricting detection to patterns in flow feature space. However, adversaries can adapt by modifying their C&C flow characteristics, and existing models of botnet evasion assume attackers incur costs proportional to the magnitude of these deviations. This assumption is unrealistic, as attackers can use tools such as Cobalt Strike to craft arbitrary packet communications at minimal cost. In this paper, we propose a novel framework for modelling botnet evasion attacks and defences. Rather than constraining evasion attacks by minimum-norm perturbation budgets, we frame botnet evasion as a steganographic problem, in which the attacker hides malicious C&C communications within innocuous background traffic.
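To make the steganographic framing concrete, the following is a minimal toy sketch (not the paper's method, which would draw on techniques such as minimum entropy coupling [5]): covert bits are embedded in packet sizes by partitioning a hypothetical background size distribution into two equal-probability halves and sampling from the half indexed by each bit, so the marginal distribution of emitted sizes matches the background. All distributions and values here are illustrative assumptions.

```python
import random

# Hypothetical background distribution over packet sizes (bytes -> probability).
# Purely illustrative; not taken from any real traffic trace.
BACKGROUND = {64: 0.25, 128: 0.25, 512: 0.25, 1500: 0.25}

# Partition the support into two halves, each with total probability 0.5.
# Emitting a packet whose size lies in HALVES[b] encodes the covert bit b.
HALVES = {0: [64, 128], 1: [512, 1500]}

def embed(bits, rng=random):
    """Encode covert bits as packet sizes; because each half has
    probability exactly 0.5, the marginal over emitted sizes equals
    BACKGROUND when the covert bitstream is uniform."""
    sizes = []
    for b in bits:
        support = HALVES[b]
        weights = [BACKGROUND[s] for s in support]
        sizes.append(rng.choices(support, weights=weights)[0])
    return sizes

def extract(sizes):
    """Recover covert bits by checking which half each size falls in."""
    return [0 if s in HALVES[0] else 1 for s in sizes]

bits = [1, 0, 1, 1, 0]
assert extract(embed(bits)) == bits
```

In this toy scheme the covert channel is statistically undetectable from first-order size statistics only because the partition is exactly balanced and packets are assumed independent; real background traffic has temporal correlations, which is why information-theoretic constructions are needed in practice.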

[1] Apruzzese et al., "Deep Reinforcement Adversarial Learning Against Botnet Evasion Attacks," IEEE, 2020.
[2] Schroeder de Witt et al., "Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS," ICML 2022 Workshop on ML4Cyber.
[3] T. Franzmeyer et al., "Illusory Attacks: Information-theoretic Detectability Matters in Adversarial Attacks," ICLR 2024.
[4] Ziegler et al., "Neural Linguistic Steganography," ACL 2019.
[5] Schroeder de Witt et al., "Perfectly Secure Steganography Using Minimum Entropy Coupling," ICLR 2023.