
Malicious Use of Artificial Intelligence in InfoSec

Heading into 2018, some of the most prominent voices in information security predicted a ‘machine learning arms race’ in which adversaries and defenders frantically work to gain the edge in machine learning capabilities. Despite advances in machine learning for cyber defense, “adversaries are working just as furiously to implement and innovate around them.” This looming arms race points to a larger narrative about how artificial intelligence (AI) and machine learning (ML), as tools of automation in any domain and in the hands of any user, are dual-use in nature and can be used to disrupt the status quo. Like most technologies, AI and ML not only provide greater convenience and security for consumers, but can also be exploited by nefarious actors.

A joint publication released today by researchers from Oxford, Cambridge, and other organizations in academia, civil society, and industry (including Endgame) outlines “the landscape of potential security threats from malicious uses of artificial intelligence technologies and proposes ways to better forecast, prevent, and mitigate these threats.” Unfortunately, there is no easy way to prevent and mitigate the malicious use of AI, since the tools are ultimately directed by willful actors. While the report touches on physical, political, and digital security, we’d like to provide additional context around the potential malicious use of machine learning by attackers in information security, and highlight takeaways for defenders.

 

Treading Carefully

Information security has been a beneficiary of rapid advances in “narrow” AI, mostly limited to machine learning for a specific task. For example, at Endgame we’ve applied machine learning to deliver best-in-class malware detection and created an artificially intelligent agent, Artemis, to elevate and scale defenders. However, the technologies that enable these advances are dual-use: gains witnessed by the defender may soon be leveraged by attackers as well. Researchers and AI practitioners must be aware of the potential misuse of this technology and be proactive in promoting openness and establishing norms around the appropriate use of AI. In fact, the AI community could look to the security industry as a potential path forward in developing norms and addressing safety and ethics concerns (e.g., responsible disclosure, algorithmic bias).
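
To make the idea of “narrow” ML for malware detection concrete, here is a minimal, hypothetical sketch of a static classifier trained on byte-histogram features. Everything in it is illustrative: the corpus is random data and the feature set is deliberately simple, so it says nothing about how any production detector (including Endgame’s) actually works.

```python
# Minimal sketch of "narrow" ML for a single task: static malware detection.
# Hypothetical throughout; the corpus below is random bytes with random labels.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def byte_histogram(data: bytes) -> np.ndarray:
    """Normalized 256-bin histogram of raw bytes, a classic static feature."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / max(len(data), 1)

# Stand-in corpus: random blobs labeled benign (0) / malicious (1).
rng = np.random.default_rng(0)
samples = [rng.integers(0, 256, size=4096, dtype=np.uint8).tobytes()
           for _ in range(400)]
labels = rng.integers(0, 2, size=400)

X = np.stack([byte_histogram(s) for s in samples])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"holdout accuracy: {clf.score(X_te, y_te):.2f}")  # ~0.5 on random data
```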

 

Red Teaming AI

The report presents a broad spectrum of views about the future impact of AI. At Endgame, we see the rapid adoption of AI in the infosec community as overwhelmingly positive on balance, but one that demands careful and thoughtful deployment. Sophisticated adversaries today generally do not require artificial intelligence to be effective; instead, they rely on network and human vulnerabilities that the attacker understands and exploits. But, as the report points out, and as we have discussed elsewhere, we’ll very likely see the offensive use of AI in the wild in the coming months and years. This sentiment has been echoed elsewhere, and should not come as a surprise. Significant research has demonstrated how, at least theoretically, AI can scale digital attacks in unprecedented ways. In the malware domain, automation already enables worms like Mirai and WannaCry. The potential for future attacks that leverage automation and the malicious use of AI requires a thoughtful defensive strategy to counter them. Thus, while we are not claiming the sky is falling, we do feel an obligation to raise awareness.

In fact, in a partnership with the University of Virginia, Endgame has been proactively investigating how machine learning might be used by an adversary to bypass machine learning malware defenses. This research helps us understand at a technical level what such an attack may look like, and lets us think proactively about blind spots in our defenses. In technical terms, by attacking our own machine learning models, we can teach them about their own weaknesses, while also providing valuable human intelligence feedback on the corner cases discovered by AI-enabled white-hat attacks.
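
To illustrate the simplest form such an AI-enabled white-hat attack could take, here is a hedged sketch of black-box evasion by hill-climbing: apply random perturbations and keep only those that lower the detector’s score. The detector, features, and mutations are all stand-ins (a real attack would require functionality-preserving file modifications rather than free-form feature edits); the sketch only shows the feedback loop an attacker exploits.

```python
# Minimal sketch of black-box evasion against an ML malware classifier.
# Hypothetical: the "detector" is trained on synthetic data, and mutations
# are abstract feature perturbations, not real file edits.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in static detector; score() returns P(malicious).
X = rng.normal(size=(1000, 16))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
detector = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def score(sample: np.ndarray) -> float:
    return detector.predict_proba(sample.reshape(1, -1))[0, 1]

# Hill-climbing loop: mutate one feature at a time, keep score-reducing moves.
start = X[y == 1][0]           # a sample the detector flags as malicious
sample, best = start.copy(), score(start)
for _ in range(200):
    candidate = sample.copy()
    i = rng.integers(len(candidate))
    candidate[i] += rng.normal(scale=0.5)
    s = score(candidate)
    if s < best:               # only accept mutations the detector likes less
        sample, best = candidate, s

print(f"detector score before/after: {score(start):.3f} / {best:.3f}")
```

Running attacks like this against our own models surfaces the corner cases mentioned above, which can then be fed back into training.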

 

Beyond Technology

Mitigating the malicious use of AI must go beyond technology. As the report highlights, end user awareness, laws and societal norms, policies, and proper deterrents are perhaps even more critical. Indeed, the paper makes several high-level recommendations:

  1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI;

  2. Researchers and engineers in AI should consider the dual-use nature of their work as they design products;

  3. A broader set of stakeholders, including researchers and policymakers, should be brought into ethical discussions of the potential challenges and mitigations of AI.

The infosec field lies at a unique and critical intersection of artificial intelligence and its potential misuse. The lack of norms in the digital domain compounds the allure and effect of the nascent use of AI by adversaries. At the same time, our industry is especially well versed in the healthy paranoia of thinking about adversaries. In infosec we do red teaming. We do formal verification. We promote responsible disclosure of software vulnerabilities. These same practices apply naturally to AI in information security, and could serve as a model for the security of AI in general.

A Microsoft ad about AI aired during the 2018 Winter Olympics carries a relevant message: “In the end, it’s only a tool. What’s a hammer without the person who swings it? It’s not about what technology can do, it’s about what you can do with it.” AI and ML will continue to disrupt information security, just as they are disrupting other industries. Adversaries constantly seek to innovate, so we should prepare for and expect novel uses of AI and ML as attacks evolve. In turn, defenders must smartly integrate AI and ML to optimize human workflows and elevate defensive capabilities in preparation for whatever adversaries attempt next. Overall, we believe that AI and information security will evolve rapidly in tandem in the coming years, but given the dual-use nature of this technology, a proactive effort is required to ensure we stay ahead of motivated attackers.

