
Machine Learning Static Evasion Competition


As announced at DEFCON’s AIVillage, Endgame is co-sponsoring (with MRG-Effitas and VM-Ray) the Machine Learning Static Evasion Competition.  Contestants construct a white-box evasion attack with access to open source code for model inference and model parameters.  The challenge: modify fifty malicious binaries to evade up to three open source malware models.  The catch: the modified malware samples must retain their original functionality. The prize: the contestant or team that produces the most evasions and publishes the winning solution will win an NVIDIA Titan-RTX, a powerful and popular GPU for training deep learning models.

Why this competition?

Some may question why Endgame is sponsoring a competition that overtly encourages participants to evade endpoint security protections.  After all, Endgame’s MalwareScore™ is itself a static pre-execution antimalware engine built on machine learning (ML).  At Endgame, we have long espoused the view that there is no security by obscurity, and that self-testing and public testing are more than just good hygiene.  Publicly sharing the evasion strategies that capable contestants discover is good for security.

The competition is unrelated to the recent evasion by security researchers of commercial ML endpoint protection software.  Although CylancePROTECT® was the target of that bypass, keep in mind that, with enough work, any single protection component (ML or not) can be blatantly bypassed or carefully sidestepped.  This is why security products should adhere to a layered protection strategy: should an attacker sidestep one defense, a host of other traps remains set with a hair trigger.

In reality, the foundation for this competition was laid many years ago by Endgame adversarial machine learning research into creating carefully crafted malware perturbations that evade machine learning models.

In academic circles, adversarial machine learning has largely been confined to computer vision models, wherein image pixels are subtly modified to preserve human recognition while exploiting worst-case conditions of the model to achieve catastrophic miscategorizations.  With each new attack strategy come proposals for making machine learning more robust against these attacks.
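For readers less familiar with that line of work, the sketch below illustrates the canonical fast gradient sign method (FGSM) against an image classifier. The PyTorch model, input tensor, and step size are assumptions for illustration; none of this is competition code.

```python
# Minimal FGSM sketch: nudge each pixel in the direction that increases
# the classifier's loss, keeping the change small enough that a human
# still recognizes the image. `model`, `image`, and `label` are assumed
# PyTorch objects supplied by the caller.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()   # step in the worst-case direction
    return perturbed.clamp(0.0, 1.0).detach()         # keep pixels in a valid range
```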

But malware, as a structured input, is different from images.  It’s harder. It’s mathematically inconvenient. And as such, we want to draw interest to this unique adversarial model.  Simply put, even though Portable Executable (PE) files are a sequence of bytes in the same way an image is a sequence of pixels, they differ starkly in that when you slightly change an image pixel, the “imagey-ness” is preserved, but when you modify a byte in a PE file, you may very well corrupt the file format or destroy the program’s functionality.  So attackers must carefully constrain how they modify the PE byte stream. This point deserves to be highlighted to the academic community.
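To make the constraint concrete, here is a minimal sketch of two byte-level edits that do not alter a program's behavior: appending overlay bytes and adding an unreferenced section. The second half assumes the LIEF library (whose API details vary across versions), and the file paths and section name are hypothetical.

```python
# Two functionality-preserving PE edits (sketch). Bytes appended past the
# end of the image ("overlay") are ignored by the loader; a new section
# that no code references is likewise inert, yet both change the byte
# stream that a static model sees.
import lief

def append_overlay(path_in, path_out, payload=b"\x00" * 1024):
    with open(path_in, "rb") as f:
        data = f.read()
    with open(path_out, "wb") as f:
        f.write(data + payload)

def add_unused_section(path_in, path_out, content=b"A" * 512):
    binary = lief.parse(path_in)
    section = lief.PE.Section(".newsec")                      # hypothetical name
    section.content = list(content)
    binary.add_section(section, lief.PE.SECTION_TYPES.DATA)   # call signature varies by LIEF version
    builder = lief.PE.Builder(binary)
    builder.build()
    builder.write(path_out)
```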

Ultimately, defenders benefit by understanding the space of functionality-preserving mutations that an attacker might apply. In reality, attacker evasion techniques more often involve source code modifications or compile-time tweaks, but the setting we present offers a useful framework for reasoning about PE modifications in the same way that academics reason about pixel perturbations.  The hope is that by exploring the space of adversarial perturbations, defenders can anticipate some part of an adversary's evasive repertoire and build more robust models.
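In that framework, an evasion attack reduces to searching over functionality-preserving mutations for one that drops the model score below its detection threshold. The sketch below shows the simplest possible search loop; `score_fn`, the mutation list, and the threshold are placeholders, not the competition's API.

```python
# Greedy random search over functionality-preserving mutations (sketch).
# `mutations` is a list of functions bytes -> bytes that keep the program
# working; `score_fn` returns the model's probability of maliciousness.
import random

def random_search_evasion(pe_bytes, score_fn, mutations, threshold, steps=100):
    best, best_score = pe_bytes, score_fn(pe_bytes)
    for _ in range(steps):
        candidate = random.choice(mutations)(best)
        candidate_score = score_fn(candidate)
        if candidate_score < best_score:        # keep only improvements
            best, best_score = candidate, candidate_score
        if best_score < threshold:              # model now labels the sample benign
            break
    return best, best_score
```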

A useful and fun game

Contestants will attempt to evade three open source models.  MalConv is an end-to-end deep learning model that operates on raw bytes.  The non-negative MalConv model has an identical structure, but is constrained to have non-negative weights, which forces it to look only for evidence of maliciousness rather than evidence of both malicious and benign byte sequences.  These two end-to-end models represent the setting on which most of the recent adversarial machine learning literature has focused. The third model is the recently updated EMBER LightGBM model, which operates on features extracted from an input binary.  Each of the models was trained on the samples referenced by the EMBER 2018 hashes.
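As a rough idea of what querying one of these targets looks like, the sketch below scores a file against an EMBER-style LightGBM model. It assumes the `ember` and `lightgbm` Python packages; the model path, sample path, and threshold are hypothetical placeholders rather than the competition's released artifacts.

```python
# Score a binary with an EMBER-style LightGBM model (sketch).
import lightgbm as lgb
import ember

model = lgb.Booster(model_file="ember_model_2018.txt")   # hypothetical path
with open("sample.exe", "rb") as f:                      # hypothetical sample
    file_data = f.read()

# ember.predict_sample extracts the EMBER feature vector and runs the booster.
score = ember.predict_sample(model, file_data, feature_version=2)

THRESHOLD = 0.8                                          # illustrative only
print(f"score={score:.4f}:", "malicious" if score > THRESHOLD else "benign")
```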

It’s important to note that none of the models, as presented, are production-worthy.  In particular, these are “naked” models (e.g., no whitelist/blacklist) trained on comparatively small datasets, and their thresholds have been adjusted so that each model detects every one of the competition malware samples.   As a result, the EMBER model has a false positive rate of less than 5:1000, while the deep learning models have false positive rates exceeding 1:2. Nevertheless, the models represent useful targets for exploring and understanding successful evasive techniques.
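Those ratios follow directly from where the detection threshold sits on the distribution of benign scores. A minimal sketch of that relationship, using randomly generated stand-in scores rather than real model output:

```python
# Map a target false positive rate to a score threshold (sketch).
# `benign_scores` stands in for model scores on known-benign files.
import numpy as np

def threshold_for_fpr(benign_scores, target_fpr):
    # The (1 - FPR) quantile of benign scores is the smallest threshold
    # at which at most target_fpr of benign files score above it.
    return float(np.quantile(benign_scores, 1.0 - target_fpr))

benign_scores = np.random.beta(2, 8, size=10_000)     # synthetic stand-in
print(threshold_for_fpr(benign_scores, 0.005))        # roughly 5:1000 FPR
print(threshold_for_fpr(benign_scores, 0.5))          # roughly 1:2 FPR
```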

A final word

Even with the chance of evasion, machine learning models ostensibly detect malware before it executes at excellent detection rates and low false positive rates.  Importantly, machine learning generalizes to new samples, evolving families, and polymorphic strains. It is much less brittle than signature-based approaches, which memorize known threats but miss subtle perturbations to those same samples.  For this reason, machine learning has been an overwhelmingly positive development for information security.

We thank our partners for the tremendous amount of work and resources they’ve contributed to make this a viable competition. Watch this space for competition results.

