
Endgame Research @ AISec: DeepDGA


Machine learning is often touted as a silver bullet, enabling big data to defeat cyber adversaries, or some other empty trope. Beneath the headlines, though, there is rigorous academic discourse, and there are advances that are often lost in the hype. Last month, the Association for Computing Machinery (ACM) held its 9th annual Artificial Intelligence in Security (AISec) Workshop in conjunction with the 23rd ACM Conference on Computer and Communications Security (CCS) in Vienna, Austria. AISec is a largely academic workshop, highlighting some of the most novel advances in the field. I was lucky enough to present research on adversarial machine learning co-authored with my colleagues Hyrum Anderson and Jonathan Woodbridge, titled DeepDGA: Adversarially-Tuned Domain Generation and Detection. As one of only three presenters from outside academia, I quickly saw that more conferences focused on the intersection of machine learning and infosec are desperately needed. While many information security conferences are beginning to introduce data science tracks, that is simply not enough. Given how nascent machine learning is in infosec, the industry would benefit greatly from more venues for cross-pollinating insights across industry, academia, and government.


AISec brings together researchers from academic institutions and a few corporations to share research in the fields of security and privacy, highlighting what most would consider novel applications of learning algorithms to hard security problems. Researchers applied machine learning (ML) and network analysis techniques to malware detection, application security, and privacy concerns. The workshop kicked off with a keynote from Elie Bursztein, the director of Anti-Abuse at Google. Although the bulk of his talk covered various applications of ML at Google, he ended by stressing the need for openness and reproducibility in our research. It was a great talk, and Google backs that call with open source efforts like TensorFlow and a steady pace of high-performing model releases such as Inception-ResNet-v2, which let even an entry-level deep learning (DL) enthusiast get their hands dirty with a "real" model. In a way, this call for reproducible research could go a long way toward eliminating a common misconception about ML in infosec: that it is just a black box obscured by the latest marketing speak. Opportunities abound to give infosec data scientists an avenue to demonstrate results without "giving away the farm" and losing out on well-deserved intellectual property rights.
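To illustrate how low that bar has become, here is a minimal sketch of pulling down a released model and running it. It uses the tf.keras application port of Inception-ResNet-v2, a later convenience wrapper rather than the original research release, so treat the exact API as an assumption.

```python
import numpy as np
import tensorflow as tf

# Fetch the published architecture plus ImageNet-trained weights
# (downloaded automatically on first use).
model = tf.keras.applications.InceptionResNetV2(weights="imagenet")

# Classify a single dummy 299x299 RGB image to confirm the pipeline runs;
# swap in a real photo to get meaningful predictions.
image = np.random.uniform(0, 255, (1, 299, 299, 3)).astype("float32")
inputs = tf.keras.applications.inception_resnet_v2.preprocess_input(image)
preds = model.predict(inputs)
print(tf.keras.applications.inception_resnet_v2.decode_predictions(preds, top=3))
```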


AISec compiled a fantastic day of talks, ranging from the release of a new dataset, cleverly named "SherLock vs Moriarty: A Smartphone Dataset for Cybersecurity Research" (Mirsky et al.), to "Identifying Encrypted Malware Traffic with Contextual Flow Data" (Anderson and McGrew, Cisco Systems) and "Prescience: Probabilistic Guidance on the Retraining Conundrum for Malware Detection" (Deo et al., Royal Holloway, University of London, UK). The latter tackled a problem common to anyone who applies ML to malware: models that go stale because they were trained on old or outdated samples, a phenomenon known as adversarial drift. The basic question is: how do you decide when it is time to retrain a malware classification model? This is difficult, especially since training can be expensive in both time and computational resources. A model can go stale for a variety of reasons, such as shifts in malware techniques or changes to the original labels of training samples. Drift is a fascinating topic, and the researchers did an excellent job describing their methodology (the use of Venn-Abers predictors) for handling such problems.
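Their Venn-Abers approach does not fit in a snippet, but the retraining question itself can be illustrated with a much cruder monitor: track accuracy over a rolling window of recently labeled samples and flag retraining once it degrades past a threshold. This is only a sketch of the general idea; the window size and threshold are illustrative assumptions, not values from the paper.

```python
from collections import deque

class RetrainMonitor:
    """Crude drift monitor (NOT the paper's Venn-Abers method): flag
    retraining when rolling accuracy on fresh labels drops too far."""

    def __init__(self, window_size=500, threshold=0.90):
        # Each entry is 1 if the deployed model got that sample right.
        self.outcomes = deque(maxlen=window_size)
        self.threshold = threshold

    def observe(self, predicted_label, true_label):
        """Record one deployed prediction once its true label arrives."""
        self.outcomes.append(int(predicted_label == true_label))

    def should_retrain(self):
        # Wait for a full window before judging, then compare rolling
        # accuracy against the threshold.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        return sum(self.outcomes) / len(self.outcomes) < self.threshold
```

Each time ground truth for a deployed prediction becomes available, call observe(); when should_retrain() flips to True, kick off the (expensive) retraining job.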


Our presentation focused on the use of Generative Adversarial Networks (GANs) to set up a "red team vs. blue team" game around domain generation algorithms (DGAs). We leveraged GANs to construct a deep-learning DGA designed to bypass an independent classifier (red team), and posited that adversarially generated domains can improve training data enough to harden an independent classifier (blue team). In the end, we showed that adversarially crafted domain names targeting a DL model are also adversarial for an independent external classifier and, at least experimentally, that those same adversarial samples could be used to augment a training set and harden an independent classifier.
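The character-level GAN itself is too large to sketch here, but the blue-team hardening step it feeds is simple to outline: label the adversarially generated domains as malicious, fold them into the training set, and retrain the independent classifier. The feature representation (character bigram counts) and classifier below are simplified stand-ins for illustration, not our exact experimental setup, and the example domains are made up.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

def train_hardened_classifier(benign, malicious, adversarial):
    """Train a domain classifier whose training set is augmented with
    adversarially generated domains labeled as malicious."""
    domains = benign + malicious + adversarial
    labels = [0] * len(benign) + [1] * (len(malicious) + len(adversarial))

    # Character bigram counts as a simple stand-in domain representation.
    vectorizer = CountVectorizer(analyzer="char", ngram_range=(2, 2))
    features = vectorizer.fit_transform(domains)

    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(features, labels)
    return vectorizer, clf

# Toy usage: the adversarial list stands in for GAN-generated domains.
vectorizer, clf = train_hardened_classifier(
    benign=["google.com", "wikipedia.org"],
    malicious=["xjw3qpa.net", "kq9zvb1.info"],
    adversarial=["lontervia.com", "mendurova.net"],
)
print(clf.predict(vectorizer.transform(["stromelia.biz"])))
```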

 Bobby presenting at AISec in Vienna, Austria on October 28, 2016


While more mainstream security conferences feature the occasional talk on ML, this was my first experience at an event where ML was the sole focus. Given the rise of ML in the information security domain, it would be shocking if AISec remained the only ML-focused security conference in the coming years. Frankly, it's well past time for a larger, multi-day conference where researchers from both academia and information security companies come together to learn, recruit, and network on the topics we spend our days (and often nights) trying to solve.


That is not to take anything away from AISec, as this conference packed quite the proverbial punch into the eight hours it had at its disposal. In fact, my biggest takeaway from AISec is that more conferences like it are needed. To that end, we're working with partners and within our networks to formulate such a gathering in 2017, so stay tuned! Providing a venue where researchers can put aside rivalries, whether academic, corporate, or public sector, even for a day, would greatly benefit the entire infosec community, giving us all the opportunity to listen, learn, and apply.
