Query Attacks
Query Attacks. As discussed in Section 2.2.5.2, query attacks generate adversarial examples by iteratively perturbing an input sample [52, 36, 50, 182, 102, 129, 26] rather than by using substitute models. Most query attacks, however, are designed for the image recognition domain and therefore perturb features continuously, which makes them less effective in our domain with its constraints, such as discrete features and functionality preservation. For example, as explained in Section 2.2.3, a feature representing an API call (e.g., CreateFile()) cannot be perturbed continuously (e.g., CreateFile() + 0.05); instead, an entirely new feature offering the same functionality is required. To overcome these challenges, we can use software-transplantation-based approaches, as presented in Figure 2.5: features from benign samples are used to perturb a malware sample (e.g., a benign feature is added to the malware sample), which can be done with less [182, 229] or more [168] knowledge of the target model. Overall, this allows malware samples to cross the decision boundary of the oracle while satisfying the constraints of this domain. When conducting such an attack, limiting the number of queries to the oracle is critical, as adversarial behavior can be detected by analyzing queries for abnormalities [52]. Moreover, some MTDs enforce query budgets, which may hinder the construction of adversarial examples. Hence, we use the parameter n_max to govern the maximum number of queries allowed during an attack instance. We offer two approaches for performing query attacks, covering black-box and gray-box scenarios.
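The transplantation loop described above can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: it assumes set-based discrete features, a black-box oracle callable returning a label, and a hard query budget n_max. All names (transplant_attack, toy_oracle, the benign_api_* features) are hypothetical.

```python
import random

def transplant_attack(malware_features, benign_feature_pool, oracle, n_max):
    """Iteratively transplant benign features into a malware sample until
    the oracle labels it benign or the query budget n_max is exhausted.

    Features are discrete (e.g., API calls), so a perturbation adds a whole
    feature rather than shifting a value continuously.
    """
    perturbed = set(malware_features)
    # Candidate features to transplant: benign features not already present.
    candidates = [f for f in benign_feature_pool if f not in perturbed]
    random.shuffle(candidates)  # simple random search order (illustrative)
    queries = 0
    for feature in candidates:
        if queries >= n_max:
            break  # respect the query budget to stay below detection thresholds
        perturbed.add(feature)
        queries += 1
        if oracle(perturbed) == "benign":
            return perturbed, queries  # adversarial example crossed the boundary
    return None, queries  # budget exhausted without evasion

def toy_oracle(features):
    """Stand-in for the target model: flips to 'benign' once enough
    benign-looking features are present (purely for demonstration)."""
    benign_count = sum(1 for f in features if f.startswith("benign"))
    return "benign" if benign_count >= 3 else "malware"

if __name__ == "__main__":
    random.seed(0)
    adv, used = transplant_attack(
        malware_features={"CreateFile", "WriteProcessMemory"},
        benign_feature_pool=[f"benign_api_{i}" for i in range(10)],
        oracle=toy_oracle,
        n_max=5,
    )
    print(adv is not None, used)
```

A real attack would replace the random candidate order with a guided search (e.g., exploiting gradient or confidence information in the gray-box case), but the budget check and the stop-on-evasion condition carry over unchanged.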
