Black-box Query Attack. Under the black-box scenario, the attacker has no knowledge of the target model beyond its predicted output. Consequently, the attack is conducted in a non-heuristic manner: the features to perturb are chosen at random. Our black-box query attack strategy is inspired by ▇▇▇▇▇▇▇▇▇ et al.'s [182]. However, our threat model assumes even less information about the target model under the black-box condition: no access to prediction confidence scores and no use of sliding windows. We therefore use a modified version of their decision-based attack that drops these assumptions. Furthermore, where the dataset permits (see Section 4.4.6 later), our black-box attack strategy makes use of both feature addition and feature removal to widen the possible avenues for evasion.

In our black-box query attack, a malware sample X is perturbed using features drawn from a set of benign donor samples (B) to generate an adversarial example X′, following a transplantation-based approach. Because of this attacker's weaker capabilities, the process is non-heuristic: at each iteration of the attack, a feature is chosen at random and either added to (0 to 1) or removed from (1 to 0) the feature vector, as permitted by the dataset. Feature removal targets features that are absent in the benign donor sample (but perhaps functionally insignificant in X); removing them may push the sample across the decision boundary. The attack takes functionality-preservation constraints into consideration by making only permitted perturbations (see Section 4.4.6 later). Transplantation of features continues until O is evaded, nmax is reached, or the usable features of the benign sample are exhausted (in which case another benign sample may be used).
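The loop described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the function and parameter names (`black_box_query_attack`, `oracle`, `n_max`, `allow_removal`) are hypothetical, the oracle is assumed to return only a hard label, and the functionality-preservation filter of Section 4.4.6 is elided (in practice it would restrict the candidate feature set).

```python
import random

def black_box_query_attack(x, benign_donors, oracle, n_max, allow_removal=True):
    """Label-only, transplantation-based random attack (sketch).

    x             -- binary feature vector of the malware sample X (list of 0/1)
    benign_donors -- list of binary feature vectors (the donor set B)
    oracle        -- callable returning the predicted label ("benign"/"malware")
    n_max         -- maximum number of perturbations (query budget)
    allow_removal -- whether the dataset permits feature removal (1 -> 0)
    """
    x_adv = list(x)
    queries = 0
    for donor in benign_donors:
        # Candidate perturbations drawn from this donor:
        #  - additions: features the donor has but X' lacks (0 -> 1)
        #  - removals (if permitted): features X' has but the donor lacks (1 -> 0)
        # A real implementation would also drop candidates that break functionality.
        candidates = [i for i in range(len(x_adv))
                      if (donor[i] == 1 and x_adv[i] == 0)
                      or (allow_removal and donor[i] == 0 and x_adv[i] == 1)]
        random.shuffle(candidates)  # non-heuristic: random perturbation order
        for i in candidates:
            if queries >= n_max:
                return x_adv, False  # budget nmax exhausted
            x_adv[i] = 1 - x_adv[i]  # transplant: add or remove the feature
            queries += 1
            if oracle(x_adv) == "benign":
                return x_adv, True   # decision boundary crossed: O evaded
        # This donor's features are exhausted; fall through to the next donor.
    return x_adv, False
```

With a toy oracle that flags samples containing a particular feature, the attack succeeds as soon as a random perturbation happens to remove (or mask) that feature, regardless of the order in which candidates are tried.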
