Channel: Endgame's Blog

The Fog of (Cyber) War: The Attribution Problem and Jus ad Bellum


The Sony Pictures Classics film The Fog of War is a comprehensive and seemingly unfiltered examination of former Secretary of Defense Robert McNamara, highlighting the key lessons he learned during his time as a central figure in US national security from WWII through the Cold War. The biopic calls particular attention to jus ad bellum – the criteria for engaging in conflict. Over a decade later, Sony itself is now at the center of a national security debate. As the US government ponders a “proportional response” – a key tenet of Just War theory – in retribution for the Sony hack, and many in the security community continue to question the government’s attribution of the breach to North Korea, it is time to return to many of McNamara’s key lessons and consider how the difficulty of cyber attribution – and the prospect of misattribution – can only exacerbate the already tenuous decision-making process in international relations.

  • Misperception: The misperception and miscalculation that stem from incomplete information are perhaps the most pervasive instigators across all forms of conflict. McNamara addresses this through the notion that “seeing and belief” are often wrong. Similarly, given the difficulty of positively attributing a cyber attack, victims and governments often resort to confirmation bias, selecting the circumstantial evidence which best confirms their beliefs. Cyber attacks aggravate the misguided role of incomplete information, leaving victims to formulate a response without fully knowing: 1) the financial and national security magnitude of the breach; 2) what the perpetrator will do with the information; 3) the perpetrator’s identity. Absent this information, a victim may respond disproportionately and target the wrong adversary.
  • Empathize with your Enemy: McNamara’s lesson draws from Sun Tzu’s “know thy enemy” and describes the need to evaluate an adversary’s intent by seeing the situation through their eyes. Understanding the adversary and their incentives is an effective way to help identify the perpetrator, given the technical challenges with attribution. To oversimplify, code can be recycled from previous attacks, purchased through black markets for malware, and can be socially engineered to deflect investigations towards other actors. Moreover, states can outsource the attack to further redirect suspicions. A technical approach can limit the realm of potential actors responsible, such as to nation-states due to the scope and complexity of the malware. But it is even more beneficial to marry the technical approach with an understanding of adversarial intent to help gain greater certainty in attribution.
  • Proportionality: Proportionality is a key component of jus ad bellum as well as of jus in bello (the criteria for behavior once in war). Given his role in the carpet-bombing of Japan, McNamara somewhat surprisingly stresses the role of a proportional response. President Obama’s promise of a proportional response to the Sony breach draws specifically on this Just War mentality. But the attribution problem, coupled with misperception and incomplete information, makes it exceedingly difficult to formulate a proportional response to a cyber attack. Clearly, a response would be more straightforward if a cyber attack had a kinetic effect, such as the recently revealed attack in Turkey that occurred six years ago. But even this raises the question of what a proportional response looks like after so many years. It could similarly be years before the complete magnitude of the Sony breach is realized, or before we know exactly what ‘red line’ would trigger a kinetic or non-kinetic response to a cyber attack.
  • Rational choice: A key theory in international relations, rational choice theory assumes actors logically make decisions based on weighing potential costs and benefits of an action. While this continues to be debated, McNamara notes that with the advent of nuclear weapons, human error can lead to unprecedented destruction despite rational behavior. This is yet again magnified in the cyber domain, especially if misattribution leads to retaliation against the wrong adversary, or human error in a cyber response has unintended consequences. Rational choice decisions are only as good as the data at hand, and therefore seemingly “rational” decisions can inadvertently result in unintended results due to limited data or misguided data interpretations. Moreover, similar to the nuclear era, human error can also lead to unprecedented destruction in the cyber domain. However, cyber retaliatory responses are not limited to a select few high level officials, but rather the capabilities are much more dispersed across agencies and leadership levels, expanding the scope for potential human error.
  • Data-driven Analyses: McNamara’s decision to bring in a team of quants to take a more innovative approach to national security analysis is a milestone in international relations. However, like all forms of analysis, quantitative and computational analyses must not be accepted at face value, but rather must be subjected to rigorous inspection of the data and methodologies employed to produce the findings. The last few weeks have seen a range of analyses used to either validate or add skepticism to the attribution of the Sony breach to North Korea. These range significantly in their level of analytic rigor, but many are plagued by limited data, which produces analytic problems such as: 1) a small N, meaning any results are not statistically significant and should be met with skepticism; 2) natural language processing analyses using models that are trained on different language structures and so do not travel well to coding languages; 3) selection bias, wherein the sample of potential actors analyzed is not representative; 4) poor data sampling, wherein analyses of different subsets of the data lead to differing conclusions. Because of these analytic hurdles, various analyses point unequivocally to actors as diverse as North Korea, the Lizard Squad, Russia, Guardians of Peace, and an insider threat. Clearly, attributing the attack is a key goal of these analyses, but limited data makes it all too easy to simply confirm prior beliefs. Data-driven analyses provide solid footing when making claims, but the various forms of data gaps inherent in cyber make them much more vulnerable to misinterpretation.

Beyond a Cold War Framework: Each of these lessons highlights how the digital age amplifies the already complex and opaque circumstances surrounding jus ad bellum. As we begin another year, we are yet again reminded not only of the seemingly cyclical nature of history, but also of just how distinct the modern era is from its predecessors. It’s time for a framework that builds upon past knowledge while also adapting to the realities of the cyber domain. Too often, decision-making remains relegated to a Cold War framework, such as the frameworks for conventional warfare, mutually assured destruction, and a known adversary. It would be devastating if the complexity of the cyber domain led to misattribution and a response against the wrong adversary – and all of the unintended consequences that would entail. If nothing else, let’s hope the Sony breach serves as a wake up call for a new policy framework rigorous enough to handle the fog of cyber war.


Andrea Little Limbago

The Year Ahead in Cyber: Endgame Perspectives on 2015


From the first CEO of a major corporation resigning in the wake of a cyber attack, to NATO incorporating the cyber realm into Article 5, to the still fresh-in-our-minds Sony attack, 2014 was certainly a year to remember in cyber security. As we begin another year, here’s what some of us at Endgame predict, anticipate, or hope 2015 will bring for cyber:

Lyndon Brown, Enterprise Product Manager 

In 2014, security teams were blind to most of the activity that happened within their networks and on their devices. While the majority of this activity was benign, security breaches and other malicious activity went unnoticed. These incidents often exposed corporate data and disrupted business operations.

2015 is the year that CISOs must decide that this reality is unsustainable. Motivated, in part, by high-profile breaches, security heads will adjust their strategy and manifest this shift in their 2015 budgets. On average, CISOs will increasingly fund threat detection and incident response initiatives. As the top security executive of a leading technology company poignantly stated, “we’ve finally accepted that any of our systems are or can be compromised”.

Since security budgeting is usually a zero-sum game, spending on preventive controls (such as anti-virus products) will stay stagnant or decline. As security buyers evaluate new products, they will prioritize solutions that leverage context and analysis to make advanced security judgments, and that see all security-relevant behavior – not just what is available in logs.

Rich Seymour, Senior Data Scientist @rseymour 

The world of computer security will no doubt see some harrowing attacks this year, but I remain more hopeful than in years past. Burgeoning work in electronic communication—secure, encrypted, pseudo-anonymized and otherwise (like Pond, ssh-chat, Bitmessage, DIME, etc.)—won’t likely move into the mainstream in 2015, but it’s always neat to see which projects gain traction. The slowly paced rollout of sorely needed secure open voting systems will continue, which is awesome, and includes California’s SB360 allowing certification of open source voting systems, LA County’s work in revamping its election experience, Virginia’s online voter registration, and the OSET Foundation’s work, just to name a few.

I hope that this year’s inevitable front-page security SNAFUs will lead more people to temper their early adoption with a measure of humorous cynicism. Far on the other side of the innovation adoption graph, let’s hope that those same security SNAFUs lead the behemoth tech laggards to pull the plug on dubious legacy systems and begin a blunt examination of their infrastructural vulnerabilities. As a data scientist at Endgame, I don’t want to make any predictions in that domain, lest I get thrown to the wolves on twitter for incorrectly predicting that 2015 will be the year a convolutional deep learning network will pre-attribute an attack before the first datagram hits the wire. Let’s not kid ourselves—that’s not happening until 2016 at the earliest.

Jason Rodzik, Director of CNO Software Engineering 

In 2015, I expect to see companies—and maybe even the public as a whole—taking computer security much more seriously than they have previously. 2014 ended with not only a number of high-profile breaches, but also unprecedented fallout from those breaches, including the replacement of a major corporation’s (Target’s) CEO and CIO, increased interest in holding companies legally responsible if they fail to secure their systems, and most drastically, a chilling effect on artistic expression and speech (in addition to the large financial damages) with the reactions resulting from the Sony hack. Historically, it’s been hard for anyone looking at financial projections to justify spending money on a security department when it doesn’t generate revenue, but the cost associated with poor security is growing to the point where more organizations will have to be much more proactive in strengthening their security posture.

Douglas Raymond, Vice President 

One area where cybersecurity products will change in 2015 is in the application of modern design principles to the user interfaces. There’s a shortage of skilled operators everywhere in the industry, and there isn’t enough time or resources to train them. Companies must solve their challenges with small staffs that have a diversity of responsibilities and not enough time to learn how to integrate a multitude of products. The cost of cognitive overload is high. Examples such as the shooting down of MH17 over Ukraine, the U.S. bombing of the Chinese Embassy in Belgrade, and the Target data breach, to cite a well-known cybersecurity example, demonstrate the real costs of presenting operators with too much information in a poorly designed interface. Data science isn’t enough—cyber companies in 2015 will synthesize data and control interfaces to provide operators with only the most critical information they need to solve the immediate security challenge.

Andrea Little Limbago, Principal Social Scientist @limbagoa 

This year will be characterized by the competing trends of diversity and stagnation. The diversity of actors, targets, activities, and objectives in cyberspace will continue to far outpace strategic understanding of the causes and repercussions of computer network operations. A growing number of state and non-state actors will seek creative means to use information technology to achieve their objectives. These will range from nation-state sponsored cyber attacks that may result in physical damage on the one extreme, to the use of cyber statecraft to advance political protest and social movements (e.g., potentially non-intuitive employment of DDoS attacks) and give a voice to those censored by their own governments on the other. Furthermore, there will be greater diversity in the actors involved in international computer network operations. With the transition away from resources and population toward knowledge-based capabilities within cyberspace, there will be a “rise of the rest” similar to economic forecasts of the BRICs (Brazil, Russia, India, China, and later South Africa) a decade and a half ago. Just like those forecasts, some of the rising actors will succeed, and some will falter. In fact, the BRIC countries will be key 2015 cyber actors, simultaneously using computer network operations internally to achieve domestic objectives, and externally to further geopolitical objectives. Additionally, those actors new to the cyber domain – from rising states to multinational corporations to nongovernment organizations – may subsequently expose themselves to retaliation for which they are ill prepared.

However, despite this diversity, we’ll continue to witness the juxtaposition of theoretical models from previous areas onto the cyber domain. From a Cold War framework to the last decade’s counter-terrorism models, many will attempt to simplify the complexities of cyberspace by merely placing it in the context of previous doctrine and theory. This “square peg in a round hole” problem will continue to plague the public and private sectors, and hinder the appropriate institutional changes required for the modern cyber landscape. Most actors will continue to respond reactively instead of proactively, with little understanding of the strategic repercussions of the various aspects of tactical computer network operations.

Graphic credit: Anne Harper


Could a Hollywood Breach and Some Tweets Be the Tipping Point for New Cyber Legislation?


Two months ago, near-peer cyber competitors breached numerous government systems. During this same time, China debuted its new J-31 stealth fighter jet, which has components that bear a remarkable resemblance to the F-35 thanks to the cyber-theft of data from Lockheed Martin and subcontractors. One might think that this string of cyber breaches into a series of government systems and emails, coupled with China’s display of the fighter jet, would raise public alarm about the increasing national security impact of cyber threats. But that didn’t happen. Instead, it took the breach of an entertainment company, and the cancellation of a movie, to dramatically increase public awareness and media coverage of these threats. While the Sony breach ultimately had minimal direct national security implications, it nevertheless marks a dramatic turning point in the level of attention and public concern over cybersecurity.

Whereas the hack of a combatant command’s Twitter feed a month ago would not have garnered much attention, this week it was considered breaking news and covered by all major news outlets - despite the fact that the Twitter account is not hosted on government servers, and the Department of Defense noted that although it was a nuisance, it does not have direct operational impact. Media coverage consistently reflects public interest. The high-profile coverage of these two latest events, which exhibit tertiary links to national security, reflects the sharp shift in public interest toward cybersecurity and a potentially greater demand for government involvement in the cybersecurity domain. In all likelihood, the Sony breach will not be remembered for its vast financial and reputational impact, but rather for its impact on the public discourse. This discourse, in turn, may well be the impetus that the government requires to finally emerge from a legislative stasis and enable Congress and the President to pursue the comprehensive cyber legislation and response strategies that have been lacking for far too long.

The widespread reporting and interest in the Sony breach may in fact spark a sharp change from an incremental approach to public policy toward a much more dramatic shift. In social and organizational theory, this is known as punctuated equilibrium, whereby events occur that instigate major policy changes. While it is disconcerting - but not shocking - that the Sony breach may be just this event, the recent large media focus on CENTCOM’s Twitter feed (which some go so far as to call a security threat) signals that the discourse has dramatically changed. This is great timing for President Obama, as he speaks this week about private-public information sharing and partnerships prior to highlighting cyber threats within his State of the Union speech next week. In fact, he is using these recent events to validate his emphasis on cybersecurity in next week’s address, noting “With the Sony attack that took place, with the Twitter account that was hacked by Islamist jihadist sympathizers yesterday, it just goes to show how much more work we need to do both public and private sector to strengthen our cyber security.” Clearly, these events - which on the national security spectrum of breaches over the last few years are relatively mundane - have triggered a tipping point in the discourse of cybersecurity threats such that cyber legislation may actually be possible.

These recent events provide a “rally around the flag” effect, fostering a public environment that is encouraging of greater government involvement in the cybersecurity realm (and is a notably stark contrast to the public discourse post-Snowden in 2013). Of course, while there is reason for optimism that 2015 may be the year of significant cybersecurity legislation, even profound public support for greater government involvement in cybersecurity cannot fix a divided Congress. With previous cybersecurity legislation passing through an Executive Order after it failed to pass Congress, there is little reason to believe there won’t be similar roadblocks this time around. In addition to the institutional hurdles, legislators will also have to strike the balance between freedom of speech, privacy and security - a debate that has divided the policy and tech communities for years. European leaders just released a Joint Statement, which includes greater emphasis to “combat terrorist propaganda and the misleading messages it conveys”. Doing this effectively without stepping on freedom of speech will be challenging to say the least. However, despite these potential roadblocks, the environment is finally ripe for cyber legislation thanks to the cancellation of a movie over the holiday season and a well-timed hack of a COCOM Twitter feed. Now that the public is paying more attention, cybersecurity policy and legislation may finally move beyond an incremental shift and closer to the dramatic change that is ultimately in sync with the realities of the cyber threat landscape.


Andrea Little Limbago

Understanding Crawl Data at Scale (Part 2)


Effective analysis of cyber security data requires understanding the composition of networks and the ability to profile the hosts within them according to the large variety of features they possess. Cyber-infrastructure profiling can generate many useful insights. These can include: identification of general groups of similar hosts, identification of unusual host behavior, vulnerability prediction, and development of a chronicle of technology adoption by hosts. But cyber-infrastructure profiling also presents many challenges because of the volume, variety, and velocity of data. There are roughly one billion Internet hosts in existence today. Hosts may vary from each other so much that we need hundreds of features to describe them. The speed of technology change can also be astonishing. We need a technique to address these rapid changes and enormous feature sets that will save analysts and security operators time and provide them with useful information faster. In this post and the next, I will demonstrate some techniques in clustering and visualization that we have been using for cyber security analytics.

To deal with these challenges, the data scientists at Endgame leverage the power of clustering. Clustering is one of the most important analytic methodologies used to boil down a big data set into groups of smaller sets in a meaningful way. Analysts can then gain further insights using the smaller data sets.

I will continue the use case given in Understanding Crawl Data at Scale (Part 1): the crawled data of hosts. At Endgame, we crawl a large, global set of websites and extract summary statistics from each. These statistics include technical information like the average number of javascript links or image files per page. We aggregate all statistics by domain and then index these into our local Elasticsearch cluster for browsing through the results. The crawled data is structured into hundreds of features including both categorical features and numerical features. For the purpose of illustration, I will only use 82 numerical features in this post. The total number of data points is 6668.
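The per-domain aggregation step above can be sketched with pandas. This is illustrative only — the column names and numbers here are hypothetical stand-ins, and the Elasticsearch indexing step is omitted:

```python
import pandas as pd

# Hypothetical per-page crawl statistics; the real pipeline extracts
# hundreds of features per page across a large, global set of websites.
pages = pd.DataFrame({
    "domain": ["a.com", "a.com", "b.com"],
    "js_links": [10, 14, 3],
    "image_files": [5, 7, 2],
})

# Aggregate page-level statistics up to one row per domain, e.g. the
# average number of javascript links or image files per page.
per_domain = pages.groupby("domain").mean()
print(per_domain.loc["a.com", "js_links"])  # 12.0
```

The resulting per-domain frame is what would then be indexed for browsing.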

First, I’ll cover how we use visualization to reduce the number of features. In a later post, I’ll talk about clustering and the visualization of clustering results.

Before we actually start clustering, we first should try to reduce the dimensionality of the data. The most basic EDA (Exploratory Data Analysis) method for numerical features is to plot them on a scatter matrix graph, as shown in Figure 1. It is an 82 by 82 plot matrix. Each cell in the matrix, except the ones on the diagonal, is a two-variable scatter plot, and the plots on the diagonal are the histograms of each variable. Given the large number of features, we can hardly see anything from this busy graph. An analyst could spend hours trying to decipher it and derive useful insights:

Figure 1. Scatter Matrix of 82 Features
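A plot like Figure 1 can be generated with pandas' scatter_matrix. This is a sketch on random stand-in data with only four columns for readability, not the actual 82 crawl features:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import numpy as np
import pandas as pd
from pandas.plotting import scatter_matrix

# Random stand-in for the crawl feature matrix (6668 x 82 in the post).
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 4)),
                  columns=[f"feature_{i}" for i in range(4)])

# Off-diagonal cells are pairwise scatter plots; the diagonal
# holds each variable's histogram.
axes = scatter_matrix(df, diagonal="hist", figsize=(6, 6))
print(axes.shape)  # (4, 4)
```

With 82 columns this produces the same unreadable 82-by-82 wall of cells the post describes.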

Of course, we can try to break up the 82 variables into smaller sets and develop a scatter matrix for each set. However, there is a better visualization technique available for handling high-dimensional data called a Self-Organizing Map (SOM).

The basic idea of a SOM is to place similar data points closely on a (usually) two dimensional map by training the weight vector of each cell on the map with the given data set. A SOM can also be applied to generate a heat map for each of the variables, like in Figure 2. In that case, a one-variable data set is used for creating each subplot in the component plane.

Figure 2. SOM Component Plane of 82 Features
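For readers who want to experiment, the SOM training loop described above can be sketched from scratch in numpy. This is illustrative only; the grid size, learning-rate schedule, and neighborhood schedule here are arbitrary choices, not the ones behind Figure 2:

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=500, lr0=0.5, sigma0=2.0, seed=0):
    """Train a tiny self-organizing map: each grid cell holds a weight
    vector that is pulled toward inputs landing near its best-matching unit."""
    rng = np.random.default_rng(seed)
    h, w = grid
    n_features = data.shape[1]
    weights = rng.normal(size=(h, w, n_features))
    # Grid coordinates, used for neighborhood distances on the 2-D map.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the cell whose weight vector is closest to x.
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
        lr = lr0 * np.exp(-t / iters)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)    # shrinking neighborhood
        # Gaussian neighborhood around the BMU on the map.
        d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        g = np.exp(-d2 / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights

# Two well-separated clusters should settle into different map regions.
data = np.vstack([np.zeros((50, 3)), np.ones((50, 3)) * 5.0])
weights = train_som(data)
print(weights.shape)  # (4, 4, 3)
```

A component plane like Figure 2 is then just a heat map of `weights[:, :, i]` for each feature i.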

By color-coding the magnitude of a variable, as shown in Figure 2, we can vividly identify those variables whose plots are covered by mostly blue. These variables have low entropy values, which, in information theory, implies that the amount of information is low. We can safely remove those variables and only keep the ones whose heat maps are more colorful. The component plane can also be used to identify similar or linearly correlated variables, such as the image at cell (2,5) and the one at cell (2,6). These cells represent the internal HTML pages count and HTML files count variables, respectively.

Based on Figure 2, 29 variables stood out as potential high-information variables. This is a data-driven heuristic for distilling the data, without needing to know anything about information gain, entropy, or standard deviation.
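The "mostly blue" screening can also be made explicit numerically. A minimal sketch, assuming a simple histogram-based entropy estimate (the post itself does this selection visually):

```python
import numpy as np

def feature_entropy(x, bins=10):
    """Shannon entropy (in bits) of a feature, estimated from a histogram."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
informative = rng.normal(size=1000)   # mass spread across many bins
degenerate = np.zeros(1000)           # mass concentrated in one bin

# A low-entropy ("mostly blue") variable carries little information
# and can be safely dropped.
print(feature_entropy(informative) > feature_entropy(degenerate))  # True
```

Sorting features by this score gives a ranked version of the visual shortlist.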

However, 29 variables may still be too many, as we can see that some of them are pretty similar. It would be great to sort the 29 variables based on their similarities, and that can be done with a SOM. Figure 3 is an ordered SOM component plane of the 29 variables, in which similar features are placed close to each other. Again, the benefit of creating this sorted component plane is that any analyst, without the requirement of strong statistical training, can safely look at the graph and hand pick similar features out of each feature group.

Figure 3. Ordered SOM Component Plane

So far, I demonstrated how to use visualization, specifically a SOM, to help reduce the dimensionality of the data set. Please note that dimensionality reduction is another very rich research topic (besides clustering) in data science. Here I only mentioned an extremely small tip of the iceberg, using a SOM component plane to visually select a subset of features. One more important point about the SOM is that it not only helps reduce the number of features, but also brings down the number of data points for analysis by generating a set of codebook data points that summarize the original larger data set according to some criteria.
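The codebook idea generalizes beyond SOMs: any set of prototype vectors that quantizes the data can summarize it. This sketch uses plain k-means prototypes as the codebook — a SOM's trained weight vectors play the same role, with the added property of being ordered on a 2-D grid — and runs on random stand-in data:

```python
import numpy as np

def codebook(data, k=16, iters=20, seed=0):
    """Summarize a data set by k prototype vectors via plain k-means."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest prototype...
        labels = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
        # ...then move each prototype to the mean of its points.
        for j in range(k):
            if (labels == j).any():
                centers[j] = data[labels == j].mean(axis=0)
    return centers

# A few hundred data points summarized by 16 codebook vectors; downstream
# clustering can then run on the codebook instead of the full data set.
data = np.random.default_rng(2).normal(size=(600, 5))
cb = codebook(data, k=16)
print(cb.shape)  # (16, 5)
```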

In Part 3 of this series on Understanding Crawl Data at Scale, I’ll show how we use codebook data to visualize clustering results.


Richard Xie

Five Thoughts from the White House Summit on Cybersecurity and Consumer Protection


The Obama Administration deserves credit for putting together the first-ever White House summit on cybersecurity on Friday and – contrary to what some media coverage may lead you to believe – the U.S. private sector mostly deserves credit for showing up.

Rather than offer yet another perspective on how to structure the Cyber Threat Intelligence Integration Center (CTIIC), or speculate on what it means that this or that CEO didn’t attend, I thought I’d just share a few thoughts from a day at Stanford that was packed with conversations with colleagues from across the government, the security industry, and the nation’s critical infrastructure.

1. More than most industries, the security community really is a community and must be bound by trust. Examples of this oft-overlooked reality were abundant: government officials pledging that “the U.S. government will not leave the private sector to fend for itself” and that our actions should be guided by “a shared approach” as a basic, guiding principle; Palo Alto Networks CEO Mark McLaughlin plugging the much-needed Cyber Threat Alliance, a voluntary network of security companies sharing threat intelligence for the good of all; Facebook CISO Joe Sullivan stressing the importance of humility, of talking openly about security failures, and about information security as a field that’s ultimately about helping people. Many of the day’s conversations kept coming back to trust – both the magnitude of what we can accomplish when we have it, and the paralyzing effect of its absence.

2. All companies are now tech companies. Home Depot doesn’t just sell hammers, and even small businesses have learned the great lesson of the past decade’s dev-ops revolution: outsource any software you don’t write yourself by moving it to the cloud and putting the security responsibility on the vendor. An interesting corollary to this is whether, as larger companies get more capable with their security, we will see hackers moving down-market to target smaller companies in increasingly sophisticated ways. This is sobering because scoping the magnitude of the challenge before us leads to the conclusion that it includes…well…everything.

3. Our adversaries will continue getting better partly because we will continue getting better. There’s a nuance here that isn’t captured in the simple notion that higher walls only beget taller ladders. An example from the military world is that Iraq’s insurgents became vastly more capable between 2003 and 2007 because they spent those four years sharpening their blades on a very hard stone: us. So consider, for example, the challenge facing new payments companies today: you’re fighting the guys who cut their teeth against PayPal fifteen years ago, and you’re doing it with a tiny number of defenders since you’re only a start-up, not with the major resources of PayPal’s current security team. Submitting to an “arms race” mentality—or quitting the race altogether—isn’t the answer. But this reality does put the security bar higher and higher for new ventures, and suggests that competition for experienced security talent will only grow more heated.

4. Too many policy-makers are still a long way from basic fluency in this field. That’s intended more as observation than criticism. It takes time to build a deep reservoir of talent in any field of endeavor – across the whole pipeline from funding basic research in science and technology, through nurturing the ecosystem of analysts and writers who can inform a robust conversation about occasionally arcane topics, to reaping the benefits of multi-generational experience where newer practitioners can learn from the battle scars of those who came before them. The traditional defense community has this, as do tax policy, health care policy, and most other major areas of public-private collaboration. It’ll come in the cyber arena too. What worries me, though, is that too many policy makers, when they refer to “the private sector” in this context, seem to imply either that it’s less important than the government, or even (bizarrely) that it’s smaller than the government. The government has a massively important role in cyber security, but it isn’t the whole game, and it probably isn’t even most of the game.

5. Information sharing is only a means to an end. If one of the day’s two major themes was “trust,” then the other was “information sharing.” Yes, our security is only as good as the data we have. Yes, there can be a “neighborhood watch-like” network effect in sharing threat intelligence. Yes, the sharing needs to happen across multiple axes: public to public, public to private, and private to private. But all of that sharing will be for naught if it doesn’t lead to some kind of effective action – across people, process, and technology. (Remember that “Bin Laden Determined to Strike in U.S.” was the heading of the President’s daily briefing from the CIA on August 6, 2001…) The Summit was one action, and the security community needs to take many, many more.


Nate Fick

Streaming Data Processing with PySpark Streaming


Streaming data processing has existed in our computing lexicon for at least 50 years. The ideas Doug McIlroy presented in 1964 regarding what would become UNIX pipes have been revisited, reimagined and reengineered countless times. As of this writing the Apache Software Foundation has Samza, Spark, and Storm for processing streaming data… and those are just the projects beginning with S! Since we use Spark and Python at Endgame I was excited to try out the newly released PySpark Streaming API when it was announced for Apache Spark 1.2. I recently gave a talk on this at the Washington DC Area Apache Spark Interactive Meetup. The slides for the talk are available here. What follows in this blog post is an in depth look at some PySpark functionality that some early adopters might be interested in playing with.

USING UPDATESTATEBYKEY IN PYSPARK STREAMING

In the meetup slides, I present a rather convoluted method for calculating CPU percentage use from the Docker stats API using PySpark Streaming. updateStateByKey is a better way to calculate such information on a stream, but the Python documentation was a bit lacking. Also, the lack of type signatures can make PySpark programming a bit frustrating. To make sure my code worked, I took a cue from one of the attendees (thanks, Jon) and did some test driven development. TDD works so well that I would highly suggest it for your PySpark transforms, since you don’t have a type system protecting you from returning a tuple when you should be returning a list of tuples.
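That tuple-versus-list-of-tuples mistake is easy to catch with plain unit-style checks before Spark ever runs, because the functions you hand to PySpark transforms are just Python functions. Here is a minimal sketch (the sort_by_count helper is hypothetical, invented for illustration, not from the post):

```python
# A hypothetical helper of the sort you might hand to a DStream transform.
# PySpark expects it to return a *list of tuples*; returning a bare tuple
# or a single tuple is exactly the error no type system will catch for you.
def sort_by_count(pairs):
    """Sort (key, count) pairs by descending count."""
    return sorted(pairs, key=lambda kv: kv[1], reverse=True)

# TDD-style checks, runnable with no Spark cluster involved:
out = sort_by_count([('a', 1), ('b', 3), ('c', 2)])
assert isinstance(out, list)
assert all(isinstance(kv, tuple) for kv in out)
assert out == [('b', 3), ('c', 2), ('a', 1)]
```

Because the helper is pure Python, the same assertions drop straight into a unittest case once the logic grows.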

Let’s dig in. Here is the unit test for updateStateByKey from https://github.com/apache/spark/blob/master/python/pyspark/streaming/tests.py#L344-L359:

def test_update_state_by_key(self):

    def updater(vs, s):
        if not s:
            s = []
        s.extend(vs)
        return s

    input = [[('k', i)] for i in range(5)]

    def func(dstream):
        return dstream.updateStateByKey(updater)

    expected = [[0], [0, 1], [0, 1, 2], [0, 1, 2, 3], [0, 1, 2, 3, 4]]
    expected = [[('k', v)] for v in expected]
    self._test_func(input, func, expected)

Given batches of ('k', i) pairs as the input, we expect the state for 'k' to accumulate batch by batch as the output: [0], then [0, 1], and so on.

updateStateByKey allows you to maintain a state by key. This test is fine, but if you ran it in production you’d end up with an out-of-memory error, as s will extend without bound. In a unit test with a fixed input it’s fine, though. For my presentation, I wanted to pull out the time in nanoseconds that a given container had used the CPUs of my machine and divide it by the time in nanoseconds that the system CPU had used. For those of you thinking back to calculus, I want to do a derivative on a stream.
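To make the contract concrete, updateStateByKey’s per-key behavior can be simulated in plain Python with no Spark involved. This is only a sketch of the semantics, not Spark’s implementation:

```python
# Simulate updateStateByKey for a single key: Spark calls the updater once
# per batch with that batch's new values (vs) and the previous state (s),
# and whatever the updater returns becomes the next state.
def updater(vs, s):
    if not s:
        s = []
    s.extend(vs)  # unbounded growth: fine in a test, an OOM in production
    return s

state = None
history = []
for batch in [[0], [1], [2]]:
    state = updater(batch, state)
    history.append(list(state))

print(history)  # [[0], [0, 1], [0, 1, 2]]
```

The growing `history` makes the out-of-memory risk visible: nothing ever trims the state, so it expands one element per batch forever.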

How do I do that and keep it continuous? Well, one idea is to keep a limited number of these delta-x’s and delta-y’s around and then calculate it. In the presentation slides, you’ll see that’s what I did by creating multiple DStreams, joining them, and doing differences in lambda functions. It was overly complicated, but it worked.

In this blog I want to present a different idea that I cooked up after the meetup. First the code:

from itertools import chain, tee, izip

def test_complex_state_by_key(self):

    def pairwise(iterable):
        "s -> (s0,s1), (s1,s2), (s2,s3), ..."
        a, b = tee(iterable)
        next(b, None)
        return izip(a, b)

    def derivative(s, x, y):
        "({'x':2,'y':1},{'x':6,'y':2}) -> derivative(_,'x','y') -> float(1)/4 -> 0.25"
        return float(s[1][y] - s[0][y]) / (s[1][x] - s[0][x])

    def updater(vs, s):  # vs holds the batch's new values; s is the prior state
        if s and s.has_key('lv'):
            _input = [s['lv']] + vs  # bridge the gap between batches
        else:
            _input = vs
        d = [derivative(p, 'x', 'y') for p in pairwise(_input)]
        if s and s.has_key('d'):
            d = s['d'] + d
        last_value = vs[-1]
        if len(d) > len(_input):
            d = d[-len(_input):]  # trim to length of _input (note the slice colon)
        state = {'d': d, 'lv': last_value}
        return state

    # The original post omitted the concrete fixtures; these values are
    # illustrative and consistent with the updater above.
    input = [[('k', {'x': 0, 'y': 0}), ('k', {'x': 2, 'y': 1})],
             [('k', {'x': 6, 'y': 2})]]

    def func(dstream):
        return dstream.updateStateByKey(updater)

    expected = [[('k', {'d': [0.5], 'lv': {'x': 2, 'y': 1}})],
                [('k', {'d': [0.5, 0.25], 'lv': {'x': 6, 'y': 2}})]]
    self._test_func(input, func, expected)

Here’s an explanation of what I’m trying to do. I pulled in the pairwise function from the itertools recipes page. Then I crafted a very specific derivative method that takes a pair of dictionaries and two key names and returns the slope of the line: rise over run. You can plug this code into the PySpark streaming tests and it passes. It can be used as an unoptimized recipe for keeping a continuous stream of derivatives, although I can imagine a few nice changes for usability/speed. The state keeps d, which is the differences between pairs of the input, and lv, which is the last value of the data stream. That should allow this to work on a continuous stream of values. Integrating this into the demo I did in the presentation is left as an exercise for the reader. ;)
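Outside of Spark, the two helpers can be exercised on a plain list of points. Here is a Python 3 adaptation (izip becomes zip) that verifies the 0.25 example from the derivative docstring:

```python
from itertools import tee

def pairwise(iterable):
    "s -> (s0, s1), (s1, s2), (s2, s3), ..."
    a, b = tee(iterable)
    next(b, None)
    return zip(a, b)  # izip(a, b) in the post's Python 2 code

def derivative(s, x, y):
    "Slope (rise over run) between two adjacent points stored as dicts."
    return float(s[1][y] - s[0][y]) / (s[1][x] - s[0][x])

points = [{'x': 0, 'y': 0}, {'x': 2, 'y': 1}, {'x': 6, 'y': 2}]
slopes = [derivative(p, 'x', 'y') for p in pairwise(points)]
print(slopes)  # [0.5, 0.25]
```

Three points yield two adjacent pairs, hence two slopes, which is exactly what the stateful updater accumulates batch over batch.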

Comments, questions, code review welcome at @rseymour. If you find these sorts of problems and their applications to the diverse world of cyber security interesting, you might like to work with the data science team here at Endgame.


Rich Seymour

Repression Technology: An Authoritarian Whole of Government Approach to Digital Statecraft


Last week, as discussions of striped dresses and llamas dominated the headlines, academia and policy coalesced in a way that rarely happens. On February 25th, Director of National Intelligence James Clapper addressed the Senate Armed Services Committee to provide the annual worldwide threat assessment. In addition to highlighting the rampant instability, Director Clapper specified Russia as the number one threat in the cyber domain. He noted, “the Russian cyber threat is more severe than we’ve previously assessed.” Almost simultaneously, the Journal of Peace Research, a preeminent international relations publication, pre-released its next issue, which focuses on communication, technology and political conflict. Within this issue, an article contends that internet penetration in authoritarian states leads to greater repression, not greater freedoms. Social media was quickly abuzz, with the national security community focusing on Russia’s external relations, while international relations academics were debating the internal relations of authoritarian states, like Russia. And thus, within twenty-four hours, policy and academia combined to present a holistic, yet rarely addressed, perspective on the threat – the domestic and international authoritarian whole of government approach when it comes to controlling the cyber domain.

First, Director Clapper made headlines when he elevated the Russian cyber threat above that of the Chinese. Both are still the dominant threats, a select group in which he also includes Iran and North Korea – responsible, most prominently, for the attacks on the Las Vegas Sands Casino Corporation and Sony, respectively. This authoritarian quartet stands out for their advanced digital techniques and targeting of numerous foreign sectors and states. Director Clapper highlighted the sophistication of the Russian capabilities, while also noting China’s persistent espionage campaign. Clearly, this perspective should predominate in a worldwide threat assessment.

At the same time, the Department of State calls this the “Internet Moment in Foreign Policy”, reinforcing former Secretary of State Hillary Clinton’s push for internet freedoms to promote freedom of speech and civil liberties. However, what is often overlooked in her speech from five years ago is the double-edged sword of any form of information technology. Clinton warned, “technologies with the potential to open up access to government and promote transparency can also be hijacked by governments to crush dissent and deny human rights.” She succinctly describes the liberation versus repression technology hypotheses around internet penetration. While the view of liberation technology is the one largely promoted by the tech community and diplomats in a rare agreement, the actual impact of internet penetration in authoritarian regimes has never been empirically tested – until now. Espen Geelmuyden Rod and Nils B Weidmann provide the first empirical analysis to test the liberation versus repression technology debate by analyzing the impact of internet penetration on censorship within authoritarian regimes. They find that, contrary to popular perceptions, there is a statistically significant association between internet penetration and repression technology, even after controlling for a variety of domestic indicators and temporal lags. The authoritarian regimes in the sample reflect the authoritarian quartet Clapper references, and it is a group that clearly employs digital statecraft both domestically and internationally to achieve national objectives.

These two distinct perspectives together provide the yin and the yang of authoritarian regime behavior in cyberspace. Instead of being viewed in isolation from one another, the international and domestic use of digital instruments of power reflect a whole of government strategy pursued by China, Russia, and other authoritarian states to varying degrees. As I wrote last year, internet censorship globally is increasing, but it is clearly more pronounced in authoritarian regimes. For instance, since the time of that post, China has begun to crack down on VPN access as part of an even more concerted internet crackdown. In February, Russia declared that it too might follow suit, cracking down not only on VPN access, but also on Tor. When focusing on US national interests, it may seem like only the foreign behavior of these states matters. However, that is a myopic assumption that ignores one of the most prevalent aspects of international relations – the necessity to understand the adversary. While the US was extraordinarily well informed about Soviet capabilities domestically and abroad, the same is no longer true for this larger and more diverse threatscape, especially as it pertains to the cyber domain. This gap could be ameliorated through an integrated perspective of the domestic and international digital statecraft of adversaries.

The confluence of this worldwide threat assessment to Congress and the academic publication is striking, and should be more than an esoteric exercise. It simultaneously reinforced the current gap between academia and policy in matters pertaining to the cyber domain, while also demonstrating that the academic perspective can and should help augment the dialogue when it comes to digital statecraft. However, perhaps even more pertinent is the way in which the article and the Congressional remarks reflect two pieces of the whole. Governments pursue national interests domestically and internationally. It is time we viewed these high priority authoritarian regimes through this bifocal lens. There are many insights to be gained about adversarial centralization of power, regime durability, and technological capabilities by also looking at the domestic digital behavior of authoritarian regimes. Coupling the international perspective with the domestic cyber behavior into threat assessments can help provide great insights into the capabilities, targets, and intent of adversaries.


Andrea Little Limbago

Hacking the Glass Ceiling


As we approach International Women’s Day this week and edge closer to the 100th anniversary of women’s suffrage (okay, four years to go, but still, a remarkable moment), and as news and current events are sometimes focused on the negative facts and statistics related to the field of women and technology and especially women and venture capital, I feel particularly grateful to be working at Endgame—a technology company that has an amazing cast of phenomenal women—from our developers to scientists to business minds. Our team—not just our leadership, but our entire company—is dynamic and diverse. Of course, Endgame is not alone. At the Montgomery Summit, a technology conference that takes place March 9th-11th in Los Angeles, there is a session devoted to Female Founders of technology companies. I am thrilled to be taking part in this event, which highlights a group of remarkable women who have founded and are leading tech companies in a diverse set of industries.

As a prelude to the conference and the celebration of International Women’s Day, and in hopes of encouraging more girls to embrace the STEM disciplines in school and pursue a career in technology, I want to highlight some amazing women who have dedicated their lives to making a difference—as technologists and as entrepreneurs, because there is true cause for inspiration.

The list of technology heroines is long and hard to winnow. So many have dedicated their lives and technical genius to service and solving some of our hardest problems, especially in the field of security- cyber, information and national security. Many will never be acknowledged publicly, but below are a few who can be:

• Professor Dorothy Denning is not only teaching and working with the next generation of security vanguards at the Naval Postgraduate School, but she is also credited with the original idea of intrusion detection systems (IDS) back in 1986.

• Chien-Shiung Wu, the first female professor in Princeton’s physics department, earned a reputation as a pioneer of experimental physics, not only by disproving a “law” of nature (the Law of Conservation of Parity), but also in her work on the Manhattan Project. Wu’s discoveries earned her colleagues the Nobel Prize in physics.

• Lene Hau is a Danish physicist who literally stopped light in its tracks. This critical process of manipulating coherent optical information by sharing information in light form has important implications in the fields of quantum encryption and quantum computing.

• There are many visionary entrepreneurs like Sandy Lerner, co-founder of Cisco, Joan Lyman, co-founder of SecureWorks, and Helen Greiner, co-founder of iRobot and CEO of CyPhy Works, who work tirelessly and brilliantly to deliver the solutions necessary to keep the world, and the people in it, safe.

• Window Snyder, a security and privacy specialist at Apple, Inc., significantly reduced the attack surface of Windows XP during her tenure at Microsoft, which led to a new way of thinking about threat modeling. She has many contemporaries who have also broken with stereotype and are having tremendous impact in making the technologies we interact with safer. Women like Jennifer Lesser Henley who heads up security operations at Facebook, and Katie Moussouris, Chief Policy Officer at HackerOne.

If we look further back in history, the list of amazing women in technology gets even longer. Many of the names may even surprise you:

• Ada Lovelace: The world’s first computer programmer and Lord Byron’s daughter (“She walks in Beauty, like the night/ Of cloudless climes and starry skies;/ And all that’s best of dark and bright/ Meet in her aspect and her eyes”), she has a day, a medal, a competition, and most notably, a Department of Defense language named after her. Ada, the computer language, is a high-level programming language used for mission-critical applications in defense and commercial markets where there is low tolerance for bugs. And herein lies the admittedly tenuous connection to security: despite being a cumbersome language in some ways, “Ada churns out less buggy code,” and buggy code remains the Achilles’ heel of security.

• Hedy Lamarr: A contract star during MGM’s Golden Age, Hedy Lamarr was “the most beautiful woman in films,” an actress, dancer, singer, and dazzling goddess. She was also joint owner of US Patent 2,292,387, a secret communication system (frequency hopping) that serves as the basis for spread-spectrum communication technology, secure military communications, and mobile phone technology (CDMA). Famous for her quote, “Any girl can look glamorous. All you have to do is stand still and look stupid,” Hedy Lamarr’s legacy is that of a stunningly beautiful woman who refused to stand still. Thankfully, her refusal to accept society’s chosen role for her resulted in a very significant contribution to secure mobile communications.

• Rear Admiral Grace Hopper: Also known as the Grand Lady of Software, Amazing Grace, Grandma COBOL, and Admiral of the Cyber Sea, say hello to Rear Admiral Grace Hopper, a “feisty old salt who gave off an aura of power.” She was a pioneer in information technology and computing before anyone knew what that meant. Embracing the unconventional, Admiral Grace believed the most damaging phrase in the English language is “We’ve always done it this way,” and to bring the point home, the clock in her office ran counterclockwise. Grace Hopper invented the first machine-independent computer language and literally discovered the first computer “bug.” Hopper began her career in the Navy as the first programmer of the Mark I computer, the mechanical miracle of its day. The Mark I was a five-ton, fifty-foot-long, glass-encased behemoth — a scientific miracle at the time, made of vacuum tubes, relays, rotating shafts and clutches with a memory for 72 numbers and the ability to perform 23-digit multiplication in four seconds. It contained over 750,000 components and was described as sounding like a “roomful of ladies knitting.” Unable to balance a checkbook (as she jokingly described herself), Hopper changed the computer industry by developing COBOL (common-business-oriented language), which made it possible for computers to respond to words rather than numbers. Admiral Hopper is also credited with coining the term “bug” when she traced an error in the Mark II to a moth trapped in a relay. The bug was carefully removed and taped to a daily log book—hence the term “computer bug” was born.

There is also a group of women who helped save the world with the work they did in cryptology/cryptanalysis during World War I and World War II. There were thousands of female scientists and thinkers who helped ensure Allied victory. I will only highlight a few, but they were emblematic of the many.

• Agnes Meyer Driscoll: Born in 1889, the “first lady of cryptology” studied mathematics and physics in college, when it was very atypical for a woman to do so. Miss Aggie, as she was known, was responsible for breaking a multitude of Japanese naval manual codes (the Red Book Code of the ‘20s, the Blue Book Code of the ‘30s, and the JN-25 Naval codes in the ‘40s) and was a developer of early machine systems, such as the CM cipher machine.

• Elizebeth Friedman: Another cryptanalyst pioneer, with minimal mathematical training, she was able to decipher coded messages regardless of the language or complexity. During her career, she deciphered messages from ships at sea (during the Prohibition era, she deciphered over 12,000 rum-runner messages in a three-year period) to Chinese drug smugglers. An impatient, opinionated Quaker with a disdain for stupidity, she spent the early part of her career working as a hairdresser, a seamstress, a fashion consultant, and a high school principal. Her love of Shakespeare took her to Riverbank Laboratories, the only U.S. facility capable of exploiting and solving enciphered messages. There she worked on a project to prove that Sir Francis Bacon had authored Shakespeare’s plays and sonnets using a cipher that was supposed to have been contained within. She eventually went to work for the US government where she deciphered innumerable coded messages for the Coast Guard, the Bureau of Customs, the Bureau of Narcotics, the Bureau of Prohibition, the Bureau of Internal Revenue, and the Department of Justice.

• Genevieve Grotjan: Another code breaker, whose discovery in September 1940 of a correlation in a series of intercepted Japanese coded messages changed the course of history and allowed the U.S. Navy to build a “Purple” analog machine to decode Japanese diplomatic messages. This allowed Allied forces to continue reading coded Japanese missives throughout World War II. Prior to Grotjan's success, the Purple Code had proved so hard to break that William Friedman, the chief cryptologist at the US Army Signal Corps (and Elizebeth Friedman’s husband), suffered a breakdown trying to break it.

So as we approach International Women’s Day and as we reflect on the many amazing women who have made a difference throughout history, I hope everyone joins me in celebrating these stories, finding inspiration, and most importantly, sharing that inspiration with the next generation in the hopes that they, too, might find themselves in the position of using their intellect, their skills, and their spirit to change the world for the better.


Niloofar Razi Howe

Beyond the Buzz: Integrating Big Data & User Experience for Improved Cyber Security


Big Data and UX are much more than industry buzzwords—they are some of the most important solutions for making sense of the ever-increasing complexity and dynamism of the international system. While the big data analytics and user experience (UX) communities have made phenomenal technical and analytic breakthroughs, they remain stovepiped, often working at odds, and alone will never be silver bullets. Big data solutions aim to contextualize and forecast anything from disease outbreaks to the next Arab Spring. Conversely, the UX community points to the interface as the determinant battleground that will either make or break companies. This disconnect is especially prevalent in cyber security, and it is the users (and their respective companies) who suffer most. Users are either left with too much data but not the means within their skillset to explore it, or a beautiful interface that lacks the data or functionality they require. But the monumental advances in data science and UX together have the potential to instigate a paradigm shift in the security industry. These disparate worlds must be brought together to finally contextualize the threat and the risks, and make the vast range of security data much more accessible to a larger analytic and user base within an organization.

THE TECH BATTLEGROUNDS

At a 2012 Strata conference, there was a pointed discussion on the importance of machine learning versus domain expertise. Not surprisingly, the panelists leaned in favor of machine learning, highlighting its many successes in forecasting across a variety of fields. The die was cast. Big data replaced the need for domain expertise and has become a booming industry, expanding from $3.2B in 2010 to $16.9B in 2015. For companies, the ability to effectively and efficiently sift through the data is essential. This is especially true in security, where the challenges of big data are even more pronounced given the need to expeditiously and persistently maintain situational awareness of all aspects of a network. Called anything from the sexiest job of the twenty-first century to a field whose demand is exploding, there is no shortage of articles highlighting the need for strong data scientists. More often than not, the spotlight is warranted. Depending on which source is referenced, over 90% of the world’s data has been created in the last two years, garnering big data superlatives such as total domination and the data deluge.

Clearly, there is a need to leverage everything from machine learning to applied statistics to natural language processing to help make sense of this data. However, most big data analysis tools – such as Hadoop, NoSQL, Hive, R or Python – are crafted for experienced data scientists. These tools are great for the experts, but are completely foreign to many. As has been well documented, the experts are few and far between, restricting full data exploration to the technical experts, no matter how quantitatively minded one might be. The user experience of these tools is not big data’s only problem. Without the proper understanding of the data and its constraints, data analytics can have numerous unintended consequences. For instance, had first responders focused on big data analyses of Twitter during Hurricane Sandy, they would have ignored the large swath of land without Internet access, where the help was most needed. In the education realm, universities are worried about profiling as a result of data analysis, even to the extreme of viewing big data as an intruder. Similarly, even with the most comprehensive data, policy responses require a combination of data-driven input, as well as contextual cultural, social, and economic trade-offs that correspond with various policy alternatives. As Erin Simpson notes, “The information revolution is too important to be left to engineers alone.” David Brooks summarized some of the shortcomings of big data, with an emphasis on bringing the necessary human element to big data analytics. Not only are algorithms required, but contextualization and domain expertise are also necessary conditions in this realm. This is especially true in cyber security, where some of the major breaches of the last few years occurred despite the targets actually possessing the data to identify a breach.

So how can companies turn big data to their advantage in a way that actually enables their current workforce to explore, access and discover within a big data environment? A new tech battleground has emerged, one for the customer interface. The UX community boasts its essential role in determining a tech company’s success and ability to bring services to users. Similar to the demand for data scientists, UX is one of the fastest growing fields, becoming “the most important leaders of the new business era…The success of companies in the Interface Layer will be designer-driven, and the greatest user experience (speed, design, etc.) will win.” The user-experience can either breed great product loyalty, or forever deter a user from a given product or service. From this perspective, technology is a secondary concern, driven by UX. The UX community prioritizes the essential role of humans over technologies, focusing on what the users experience and perceive. This is not just a matter of preferences and brand loyalty; it’s about the bottom line. By one measure, every $1 invested in UX yields a $2-$100 return.

In fact, the UX community is increasingly denoting the essential role of UX in extracting insights from the data. Until relatively recent advances in UX, the data and the technologies were both inaccessible for the majority of the population, driving them to spreadsheets and post-it notes to explore data. UX provides the translation layer between the big data analytics technologies and the users, enabling visually intuitive and functional access to data. The UX democratizes access to big data – both the technologies driving big data analytics as well as the data itself. Unfortunately, the pendulum may have swung too far, with data perceived at best as “a supporting character in a story written by user experience” and at worst as simply ignored. The interface layer alone is not sufficient for meeting the challenges of a modern data environment.

A UNIFIED APPROACH

The data science and UX communities are innovating and modernizing in parallel silos. In some industries, such as cyber security, they are unfortunately rarely a consideration. Although necessary, neither is sufficient to meet the needs of the user community. Customers are not drawn to a given product for its interface, no matter how beautiful and elegant it might be. It has to solve a problem. The reason products such as Amazon, Uber and Spotify are so popular is because of the data and data analytics underlying the services they provide. In each case, each product filled a niche or disrupted an inefficient process. That said, none of these would have caught on so quickly or at all without the modern UX that enabled that fast, efficient and intuitive exploration of the data. Steve Jobs mastered this confluence of technology and the arts, noting “technology alone is not enough. It’s technology married with liberal arts, married with humanities, that yields the results that make our hearts sing.”

It is this confluence of the arts and technology – the UX and the data science – that can truly revolutionize the security industry. The tech battlegrounds over machine learning and domain expertise or big data and UX are simply a waste of time. To borrow from Jerome Kagan, this is similar to asking whether a blizzard is caused by temperature or humidity – both are required. Together, sophisticated data science and modern, intuitive UX can truly innovate the security community. It is not a zero sum game, and the integration of the two is long overdue for security practitioners. The security threatscape is simply too dynamic, diverse and disparate to be tackled with a single approach. Moreover, the stakes are too high to continue limiting access to digital tools and data to only a select few personnel within a company. The smart integration of data science and the UX communities could very well be the long overdue paradigm shift the security community needs to truly distill the signal from the noise.

Graphic credit: Philip Jean-Pierre


Andrea Little Limbago

See Your Company Through the Eyes of a Hacker: Turning the Map Around On Cybersecurity


Today, Harvard Business Review published “See Your Company Through the Eyes of a Hacker: Turning the Map Around On Cybersecurity” by Endgame CEO Nate Fick. In this piece, Nate argues that in order for enterprises to better defend themselves against the numerous advanced and pervasive threats that exist today, they must take a new approach. By looking at themselves through the eyes of their attackers—in the military, “turning the map around”—companies can get inside the mind of the adversary, see the situation as they do, and better prepare for what’s to come.

Nate identifies four ways that companies can “turn the map around” and better defend themselves against attackers. Read the full article at HBR.org


Meet Nate and the Endgame team at RSA 2015. We’ll be in booth #2127 – register here for a free expo pass (use the registration code X5EENDGME) and stop by to learn more about Endgame.


Nate Fick

Data-Driven Strategic Warnings: The Case of Yemeni ISPs


In 2007, a flurry of denial of service attacks targeted Estonian government websites as well as commercial sites, including banks. Many of these Russian-backed attacks were hosted on servers located in Russia. The following year, numerous high profile Georgian government and commercial sites were forced offline, redirected to servers in Moscow. Eventually, the Georgian government transferred key sites, such as the president’s site, to US servers. These examples illustrate the potential vulnerability of hosting sites on servers in adversarial countries. Both Estonia and Georgia are highly dependent on the Internet, with Estonia conducting virtually everything online from voting to finance. At the opposite end of the spectrum is Yemen, with twenty Internet users per 100 people. Would the same kind of vulnerability experienced by Georgian sites be a concern for a country with minimal Internet penetration?

For low and middle-income countries, traditional indicators of instability and dependencies – such as conflict measures or foreign aid, respectively – tend to drive risk assessments. When modern technologies are taken into account, most of this work focuses on the role of social media, as the majority of research on the Arab Spring and now ISIS reflects. While these technologies are important to include, they do not reflect the full spectrum of digitally focused insights that can be garnered for geopolitical analyses. More specifically, the hosting and/or transfer of strategic servers hosted in adversarial (or allied) sovereign territory could provide an oft-overlooked signal of a country’s intent. Eliminating this risk could be a subtle, but insightful, change that may warrant additional attention. The changing digital landscape could provide great value and potentially strategic warning of an altering geo-political landscape.

The Public Telecommunication Corporation (PTC) is the operator of Yemen’s major Internet service providers, Yemennet and TeleYemen. Using Endgame’s proprietary data, it is possible to analyze the changing digital landscape of all Internet-facing devices, including the digital footprint of the ISPs. The geo-enrichment and organizational information, when explored temporally, may shed light both on transitioning allegiances, as well as on who controls access to key digital instruments of power during conflict. These are state-affiliated ISPs, and in turn can be used for censorship and propaganda by those who control them, as exemplified in Eastern Europe. In fact, news broke on 26 March that Yemennet is blocking access to numerous websites opposed to Houthi groups. Houthis control the capital and have expanded their reach, leading to the recent air strikes by Saudi Arabia and Gulf Cooperation Council allies.

Looking at data from early 2011 to the present, it is apparent that the PTC, and Yemennet in particular, had a footprint mainly in Yemen but also in Saudi Arabia.

PTC Cumulative Host Application Footprint 2011-2015

Yemennet Cumulative Host Application footprint 2011-2015

However, the larger temporal horizon masks changes that occurred during these years. The maps below illustrate data over the last year, highlighting that the digital footprint has moved entirely within Sanaa.

PTC footprint 2014-15

Yemennet Footprint March 2014-2015

An overview of the time series data shows a dramatic termination of a presence in Saudi Arabia during the summer of 2013.

To ensure this breakpoint was not simply an elimination of the IP blocks located in Riyadh and Jeddah, but rather a move to Sanaa, I explored numerous IP addresses independently to assess the change. In each case, the actual hosting of the IP address transferred from Saudi Arabia to Yemen. Interestingly, just prior to the breakpoint in the data, an (allegedly) Iranian shipment of Chinese missiles, intended at the time for Houthi rebels in the northwestern part of the country, was located off the coast of Yemen. Moreover, the breakpoint also occurs within the same timeframe as the termination of Saudi Arabia's aid to Yemen, which had been the bedrock of the relationship for decades. In fact, the elimination of this aid was described as giving "breathing space for it (Yemen) to become independent of its 'big brother' next door." It is plausible that this transfer of domain host locations is similarly part of the larger desire for "breathing space," or the elimination of dependencies on its powerful neighbor.
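A breakpoint like this can also be surfaced programmatically. The sketch below is a minimal illustration, with entirely invented monthly host counts standing in for the proprietary scan data described above: it flags the first month in which a country's share of an ISP's hosts collapses relative to the previous month.

```python
# Hypothetical monthly counts of ISP-registered hosts geolocated to each
# country; real values would come from a scan/geo-enrichment dataset.
monthly_hosts = {
    "2013-03": {"YE": 410, "SA": 95},
    "2013-04": {"YE": 415, "SA": 93},
    "2013-05": {"YE": 420, "SA": 90},
    "2013-06": {"YE": 460, "SA": 41},
    "2013-07": {"YE": 505, "SA": 0},
    "2013-08": {"YE": 510, "SA": 0},
}

def find_breakpoint(series, country, drop_ratio=0.5):
    """Return the first month where a country's host count falls below
    drop_ratio times its value in the preceding month."""
    months = list(series)
    for prev, cur in zip(months, months[1:]):
        before = series[prev].get(country, 0)
        after = series[cur].get(country, 0)
        if before > 0 and after < before * drop_ratio:
            return cur
    return None

print(find_breakpoint(monthly_hosts, "SA"))  # "2013-06" with this sample
```

A real analysis would of course work over many more months and countries, and a detected breakpoint is only a prompt for the kind of manual IP-by-IP verification described above, not a conclusion in itself.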

Does this transfer of the main Yemeni ISPs away from Saudi Arabia to entirely within Yemen's borders indicate a strategic change? As with all strategic warnings, it should be validated with additional research. Nevertheless, data-driven strategic warnings are few and far between in the realm of international relations. Even the smallest proactive insight into potential changes in the geopolitical landscape could help focus attention on areas previously overlooked. Despite the presence of al-Qaeda in the Arabian Peninsula (AQAP), Yemen has not garnered much attention outside of the counterterrorism domain. But as we're seeing now, Yemen could very well be the battleground for a proxy conflict between the dominant actors in the Middle East. An exploration of Yemen's digital landscape during 2013 could have prompted a more holistic and proactive analysis of the changing regional dynamics. The digital landscape of key organizations may offer enough strategic insight to enable proactive research into regions that are on the verge of major tectonic geopolitical shifts. With the onset of the cyber domain as a major battleground for power politics, digital data must be integrated not only into tactical analyses, but also into strategic warning.

Data-Driven Strategic Warnings: The Case of Yemeni ISPs

Andrea Little Limbago

Meet Endgame at RSA 2015


Endgame will be at RSA 2015!

Stop by the South Hall, Booth #2127 to:

  • Get a product demo. Learn more about how we help customers instantly detect and actively respond to adversaries.

  • Learn from our experts. We’ll present three technical talks at our booth throughout the week. No registration required - just show up!

  • Enter to win an iPad mini! We'll be giving one away Monday, Tuesday and Wednesday of RSA. We'll announce the winners at the end of each day here on our website and on Twitter (@EndgameInc). Come to the booth to claim your prize.  ***Congratulations to the iPad mini winners for Tuesday 4/21 - #233026 and Wednesday 4/22 - #233120. Come to our booth tomorrow (South Hall 2127) to claim your prize!***

Don't have an expo pass? Register here for a free expo pass courtesy of Endgame (use the registration code X5EENDGME).

Technical Talk Descriptions

Vulnerability and Exploit Stats: Combining Behavioral Analysis and OS Defenses to Combat Emerging Threats


Speaker: Cody Pierce, Endgame Director of Vulnerability Research

Despite the best efforts of the security community—and big claims from security vendors—large areas of vulnerabilities and exploits remain to be leveraged by adversaries. Attendees will learn about:

  • A new perspective on the current state of software flaws.
  • The wide margin between disclosed vulnerabilities and public exploits including a historical analysis and trending patterns.
  • Effective countermeasures that can be deployed to detect, and prevent, the exploitation of vulnerabilities.
  • The limitations of Operating System provided mitigations, and how a combination of increased countermeasures with behavioral analysis will get defenders closer to preventing the largest number of threats.

Cody Pierce has been involved in computer and network security since the mid 90s. For the past 13 years he has focused on discovery and remediation of known and unknown vulnerabilities. Instrumental in the success of HP's Zero Day Initiative program, Cody has been exposed to hundreds of 0day vulnerabilities, advanced threats, and the most current malware research. At Endgame, Cody has led a successful team tasked with analyzing complex software to identify unknown vulnerabilities, leveraging global situational awareness to manage customer risk.

Global Attack Patterns to Improve Threat Detection  


Speaker: Curt Barnard, Endgame Software Implementation Engineer

The Internet is flooded with traffic from web crawlers, port scanners, and brute force attacks. Data analyzed from Sensornet™, a unique network of sensors, allows us to observe trends on the Internet at large. Attendees will learn:

  • How to identify if malicious traffic directed at your network service is part of a larger CNO campaign.
  • How to get advanced warning of new attacks and malware seen in the wild but not yet reported on.
  • How network defenders can better protect themselves against attacks that occur at scale.
  • How Endgame identifies malicious hosts that are attempting to leverage exploits such as the Shellshock vulnerability at scale.
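As a rough illustration of the kind of signature matching involved in spotting Shellshock probes, the snippet below flags HTTP requests whose header values carry the telltale Bash function-definition marker of CVE-2014-6271. This is a simplified sketch with invented sample requests, not the detection logic used in the talk.

```python
import re

# The canonical Shellshock (CVE-2014-6271) probe embeds a Bash function
# definition in an HTTP header value; this pattern is a simplified signature.
SHELLSHOCK_RE = re.compile(r"\(\s*\)\s*\{")

def is_shellshock_probe(headers):
    """Flag a request whose header values contain the '() {' Bash marker."""
    return any(SHELLSHOCK_RE.search(v) for v in headers.values())

benign = {"User-Agent": "Mozilla/5.0"}
probe = {"User-Agent": "() { :;}; /bin/bash -c 'ping attacker.example'"}
print(is_shellshock_probe(benign), is_shellshock_probe(probe))
```

At scale, a sensor network would apply signatures like this across millions of requests and correlate the source hosts, which is where campaign-level patterns emerge.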

Curt Barnard is a network security professional with expertise in advanced methods of covert data exfiltration, steganography, and digital forensics. As a Department of Defense employee, Curt focused on analysis and operations to counter some of the most advanced cyber threats. At Endgame, Curt continues this research, coaxing malicious actors into revealing their TTP’s and creating defensive measures based on real-time threat data.

How Data Science Techniques Can Help Investigators Detect Malicious Behavior 

Speaker: Phil Roth, Endgame Data Scientist

Data science techniques can help organizations solve their security problems — but they aren’t a silver bullet. Working directly with customers, Endgame has been able to match the right science to unsolved customer security challenges to create effective solutions. In this talk, attendees will experience a small part of that process by learning:

  • How machine learning techniques can be used to find security insights in large amounts of data.
  • The difference between supervised and unsupervised learning and the different types of security problems they can solve.
  • How a lack of labeled data and the high cost of misclassifications present challenges to data scientists in the security industry.
  • How Endgame has used an unsupervised clustering technique to group cloud-based infrastructure, a fundamental step in the detection of malicious behavior.
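To make the final point concrete, here is a toy sketch of grouping infrastructure by feature similarity. The hosts, features, and distance threshold are all invented for the example, and the greedy threshold clustering stands in for whatever algorithm the talk actually covers; a production pipeline would use far richer features and a standard method such as k-means or DBSCAN.

```python
import math

# Toy host feature vectors: (open-port count, banner-similarity score).
# A real pipeline would derive many more features from scan data.
hosts = {
    "198.51.100.1": (3, 0.91),
    "198.51.100.2": (3, 0.89),
    "203.0.113.7": (14, 0.12),
    "203.0.113.8": (15, 0.10),
}

def cluster(points, radius=2.0):
    """Greedy threshold clustering: a host joins the first cluster whose
    centroid lies within `radius`, otherwise it seeds a new cluster."""
    clusters = []  # list of (centroid, member names)
    for name, p in points.items():
        for i, (c, members) in enumerate(clusters):
            if math.dist(p, c) <= radius:
                members.append(name)
                n = len(members)
                # Update the centroid as a running mean of member vectors.
                clusters[i] = (tuple((ci * (n - 1) + pi) / n
                                     for ci, pi in zip(c, p)), members)
                break
        else:
            clusters.append((p, [name]))
    return [members for _, members in clusters]

print(cluster(hosts))
```

The unsupervised part is the key point: no host is labeled malicious in advance; the structure falls out of the data, and an analyst then inspects the resulting groups.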

Phil Roth cleans, organizes, and builds models around security data for Endgame. He learned those skills in academia while earning his physics PhD at the University of Maryland. It was there that he built data acquisition systems and machine learning algorithms for a large neutrino telescope called IceCube based at the South Pole. He has also built image processors for air and space based radar systems.


Git Hubris? The Long-Term Implications of China’s Latest Censorship Campaign


Last Friday, GitHub, the popular collaborative site for developers, experienced a series of distributed denial of service (DDoS) attacks. The attacks are the largest in the company's history, and continued through Tuesday before fully coming under control. GitHub has not been immune to these kinds of attacks in the past, and is quite experienced at maintaining or restoring the site during an onslaught: it weathered a series of DDoS attacks in both 2012 and 2013, and faced similar attacks earlier in March. By all independent accounts, the Cyberspace Administration of China (CAC) is behind this latest wave of attacks, redirecting traffic from the Chinese search engine Baidu to overwhelm GitHub. While the malicious activity bears the fingerprints of a Chinese campaign, its perpetrators may have awoken a sleeping giant in the open source development community. Unlike the latest high-profile attacks – such as Sony and Anthem – these attacks visibly disrupted the day-to-day life of a tight-knit, transnational, and largely middle-class social network. And it is these kinds of transnational networks that, when unified, spawn social movements.

This week's attack focused on pressuring GitHub to remove content related to GreatFire.org and another site that hosts links to the Chinese version of The New York Times. Both are platforms for circumventing the Great Firewall, so the attack is a direct assault on free speech and on the tech community alike. In the past, China has restored access to GitHub due to criticism from the domestic developer community. However, China has been tightening censorship over the last few years, which has instigated the creation of groups like GreatFire and their collaboration with external organizations – such as Reporters Without Borders – to fight Chinese censorship. With over 300 cofounders, GreatFire is gaining traction and has tightened relations with major media outlets outside of China. It is these kinds of transnational activist networks that have proven so successful in the past. Written well before the rise of social media, Margaret Keck and Kathryn Sikkink's Activists Beyond Borders introduced the concept of the boomerang effect, which occurs when a state is unresponsive to the demands of domestic groups, who then form transnational alliances to amplify those demands and redress them via international pressure. To date, GreatFire is pursuing a similar trajectory to previous successful social movements.

Is it possible that the latest wave of DDoS attacks is enough to fully solidify the relationship of groups like GreatFire not only with journalists, but also with the open source development community? A brief review of Twitter content pertaining to the GitHub DDoS attacks (similar to the screenshots below) produces three general themes: 1) who is doing this?; 2) why are they doing this?; 3) stop messing with my project. In fact, one popular source for open source news asks, "Who on Earth would attack GitHub?" The open source community is clearly one of the largest proponents of free speech and collaboration; it has been very vocal on issues of privacy, but relatively silent on global events. Nevertheless, couple that intrinsic and core set of beliefs with disruption to their own projects, and the conditions are created under which social movements begin to coalesce. More recent literature on social movements further highlights the greater success of movements that pursue non-violent means to instigate change.

The latest executive order sanctions those associated with cyber attacks, but it is more reactive than proactive. The open source community could build upon lessons learned from the GitHub experience and collaborate with colleagues throughout the tech community to inflict economic damage on those who are directly attacking open source development. For instance, a de facto embargo on certain technologies to China would be far more politically feasible – and more costly to the attackers – than working through the ITAR process. While the tipping point for awareness has not yet been reached – one indication of which is the lack of prominent mainstream media coverage of the GitHub attacks – the conditions are ripe for the start of a transnational social movement, driven by the open source development community, if it coalesces around this cause (similar to that which occurred over privacy concerns) instead of allowing it to silently dissipate.

In contrast, China likely sees this latest GitHub campaign as simply an extension of previous breaches, which failed to garner any political blowback but aided its larger censorship efforts. However, China will increasingly have to deal with the growing paradox of promoting censorship alongside technical development. This is one of the many contradictions China continues to encounter as it simultaneously modernizes its economy and pursues global ambitions. The choice of Baidu, for instance, potentially reveals another rift in China's approach to development. Robin Li, the CEO of Baidu, is the third wealthiest man in China and a member of the government's top political advisory council. This makes the choice of Baidu potentially confrontational, as it is publicly traded and not part of the state-owned enterprises that tend to operate at the behest of the government. So far, Baidu has denied any connection to the GitHub attacks. Contradictions like these will only increasingly surface as corruption campaigns, censorship, and the extension of power dominate Chinese politics.

The latest GitHub attack, the largest in its history, remains off the radar for all but the larger technology and open source communities. This is unfortunate, as it has the potential to have much broader long-term implications within China than any of the other Chinese-associated attacks of the last year. It will be interesting to watch whether the open source community will use this as a springboard for global advocacy for free speech, with the potential to inflict economic and technological pain. The current response has been lukewarm at best, but the conditions are ripe for change. China might do well to heed the advice of Barrington Moore, who over a half century ago wrote about the preconditions for social movements toward democracy and dictatorship. He notes that the tipping point of change tends to occur when the daily routines of the middle class are disrupted or threatened with destruction. China has crossed this threshold, and may very well be uniting the transnational network on which movements are made.


Andrea Little Limbago

The Endgame Guide to Informed Cocktail Party Conversations on Data Science and the Latest Security Trends at RSA 2015


The statistician George Box famously noted that "all models are wrong, but some are useful". This is especially useful advice when looking at quantitatively driven analytics—a topic that increasingly dominates research and media coverage in the security industry. While the move toward more data-driven analyses is a welcome one, without a proper understanding of the indicators, parameters, and compilation of the data, the field is ripe for misinterpretation and apples-to-oranges comparisons of the state of the security threatscape. New security industry research and related media coverage over the last week indicate a strong focus on the escalation of cyber attacks over the last year. The quantitative findings, coupled with the growing qualitative narrative of China's Great Cannon extending censorship capabilities outside of Chinese sovereign territory, indicate a troubling rise in malicious activity in the cyber domain. These estimates, at a strategic level, are likely correct, but they are prone to misinterpretation and confusion when translated into business, policy, and course of action decisions for executive leaders. To make sense of competing analytics and best evaluate specific organizational risks, executives need to understand how data and behavioral science work together. As security executives and practitioners get ready to head to RSA next week, here are a few guidelines for comparing and interpreting the latest security research:

  • Parameters: Many of the recent industry reports focus on specific geographic or industry coverage, or even company size. For instance, headlines that attacks are up 40% pertain only to large companies with over 2,500 employees. Similarly, headlines that cyber attacks cost companies $400B require the qualification that this is an estimate for some companies. Likewise, SCADA systems seemed especially vulnerable in analyses that focus solely on SCADA systems, which may or may not apply to other targets. Finally, the research is frequently based on a sample or subset of the data and therefore may not reflect the entire population. In short, findings in one region, vertical, or target type do not necessarily translate into the same risk factor outside of those specific parameters, and this can be exacerbated by the sample size of the data. The type, severity, and frequency of attacks against the financial services industry in the US likely vary significantly from those targeting the telecommunications industry in Peru. Distinguishing even further based on company size adds another level of complexity that cannot be ignored.
  • Measurement: What constitutes an attack? This is perhaps one of the most challenging and inconsistent areas of quantitative security analytics. For instance, in the critical infrastructure industry there are significant discrepancies in the number of reported attacks, partly due to a lack of consensus on the nature of an attack. Critical infrastructure is not alone, as organizations vary in their definition of an attack. What was the target? From where did the attack occur? Was data breached? For some, the breach of data appears to be the distinguishing element of defining an attack. "It wasn't an actual hack, no data was breached," noted Alex Willette when the State of Maine's website went down last month after being the target of a series of denial of service attacks. And this is just the key independent variable; a series of control and dependent variables are also prone to measurement discrepancies. In fact, most quantitative analytics base their measurement on raw numbers and ignore what percent those numbers might be of the larger population. In an industry where the number of connected objects and people continues to expand exponentially, the raw numbers mask the growing population size from which these measurements occur. Are there more attacks simply because there are a greater number of connected devices and people? Maybe not, but it certainly is a factor that must be considered in any rigorous analysis.
  • Collection: Even with the parameters and measurement well established, the security industry faces great challenges in data collection. This is both a technical and a social challenge. Clearly, the technical means to collect the data often remain proprietary and therefore limit apples-to-apples comparisons of the findings. However, the social dimension likely provides an even greater collection problem. Unlike other areas where risk factors are visible (such as conflict), the security industry leans heavily on self-reporting of breaches. This is one of the many areas where behavioral and social science can be integrated into the quantitative analytics. For instance, the notion of norms emerges frequently, but rarely is it applied to norms pertaining to reporting. Previously, companies and organizations were disinclined to report on a breach for fear of the reputational costs. Is this norm even more embedded in light of CEOs at Target and Sony losing their positions? Or is rising awareness of the geo-political threats leading to greater disclosure to the government? In short, the latest figures on the escalating malicious digital activity might reflect changes in reporting, detection, increased activity, or more likely a confluence of the three. Given the nature of obfuscation and continued norms that may limit reporting or even information sharing, it is essential to remain cognizant of how data collection directly impacts any findings in the security industry.
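The raw-counts-versus-population point above is easy to demonstrate with a small computation. The figures below are invented for illustration: raw attack counts rise every year, yet the rate per connected device actually falls because the connected population grows faster.

```python
# Illustrative (invented) figures: raw attack counts can rise while the
# per-device rate falls, because the connected population grows faster.
years = {
    2012: {"attacks": 10_000, "devices": 8_000_000},
    2013: {"attacks": 14_000, "devices": 13_000_000},
    2014: {"attacks": 19_000, "devices": 21_000_000},
}

for year, d in sorted(years.items()):
    rate = d["attacks"] / d["devices"] * 1_000_000  # attacks per million devices
    print(year, d["attacks"], f"{rate:.0f} per million devices")
```

Whether "attacks are up" is therefore a claim about numerators, denominators, or both, and rigorous reporting should state which.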

As corporate executives and the security industry flock to San Francisco next week for the RSA conference, there will be plenty of discussion of the latest reports and big data techniques to help tackle the escalatory nature of malicious digital activity. This may be the one time that data munging and structuring discussions are actually welcome at the numerous cocktail party receptions that coincide with the conference. When asked for thought-provoking insights on the latest trends in the security industry, it never hurts to remember that models are oversimplifications of reality. The parameters, measurement, and collection of the data dramatically impact a model's robustness, and thus the validity of the findings. It is best to avoid oversimplifying such a complex domain, and instead to dig beneath the surface of the latest trends to understand exactly how they might apply to a given organization.

If you’re interested in learning more about the diverse applicability of data and behavioral science to the security industry, visit Endgame’s booth next week—our experts will be giving a series of technical talks each day.


Andrea Little Limbago

Geeks, Machines and Outsiders: How the Security Industry Fared at RSA


Last week at RSA—the security industry’s largest conference—Andrew McAfee, co-author of “The Second Machine Age: Work, Progress and Prosperity in a Time of Brilliant Technologies”, introduced the trifecta of geeks, machines and outsiders as technological innovation’s driving factors. However, after listening to numerous panels and talks during the week that glossed over or downplayed the relevance of geeks, machines and outsiders in moving the security industry forward, it was impossible to miss the irony of McAfee’s argument.

So using the criteria of geeks, machines and outsiders as the driving factors in technology innovation, how does the security industry fare? Based on my week at RSA, here is my assessment:

  • Geeks: By geeks, McAfee refers to people who are driven by evidence and data. Despite the buzzword bingo of anomaly detection, outliers and machine learning, it is not apparent that the implementation of data science has evolved to the point in security that it has in other industries. This might be shocking to insider experts who find that data science has almost reached its peak impact in security. To the contrary, as one presenter accurately noted, data science is, “still in the dark ages in this space.”

    Most data science panels at RSA devoted entire presentations to non-technical and bureaucratic descriptions of data science. In fact, one presenter joked that the goal of the presentation was to only show one equation at most, and only in passing, in order to try to maintain the audience’s attention. While the need to reach a broader audience is understood, panels on similarly technical topics such as malware detection, authentication or encryption dove much deeper into the relevant technologies and methodologies. It’s unfortunate for the industry that the highly technical and complex realm of data science is not always granted the same privilege.

    Incorrect assumptions about data science were also prevalent. At one point during one of the talks, someone commented that "the more data you have, the higher the accuracy of the results." Comments like these perpetuate the myth that more data is always better and ignore the distinction between precision, recall, and accuracy. Even worse, the notion of "garbage in, garbage out", which is taught in any introductory quantitative course, did not even seem to be a consideration.

    Finally, security companies seem to buy into the notion that data scientists are necessary for the complex, dynamic big data environment, but they have no idea how to gainfully employ them. During one panel, a Q&A session focused on what to do with the data scientists in a company. Do you partner them with the marketing team? Finance? Something else? It was clear that data science remains an elusive concept that everyone knows they need, but have no idea how to operationalize.
     

  • Machines: Ironically, it was a data science presentation that, although short on real data science, provided the strongest case for increasing human machine interaction in security by illustrating its success in other industries. In his own argument about machines as a driving factor in technology innovation, McAfee pointed out that companies that ignore human-machine partnerships fall behind. This remains a dominant problem in the security industry, as the numerous high-profile breaches of the last few years illustrate.

    Unlike in many other extraordinarily technical fields, the human factor is often overlooked or ignored in security. Whether it's boasting thousands of alerts a day (which no human could ever analyze) or the omnipresent donut/pie chart visualization, the bane of the existence of anyone who actually has to use it, the human factor approach to security—like data science—lags well behind other industries. While there was an entire RSA category devoted to human factors, the vast majority of those panels focused on the insider threat rather than on the user experience in security. The importance of the human-machine interplay is simply not on the security industry's radar.
     

  • Outsiders: McAfee’s last point about outsiders emphasizes the erroneous mindset in some industries that unless you grew up and are trained in that specific field, you have nothing to offer. Instead, industries that are open to ideas and skills from other fields will have the greatest success in the foreseeable future. This perspective has actually been the driving force of creative innovation throughout time. The wariness (and at times exclusion) of outsiders in the security industry is extraordinarily detrimental not only to the industry, but to corporate and national security as well. It impedes cooperation at the policy level and innovation within the security companies themselves. Although not commenting on the security industry specifically, McAfee reiterated the foundational role of a diversity of views and experiences, working collaboratively together, to foster innovation and paradigm shifts.

    This preference toward industry insiders is the driving factor limiting the integration of data science and human-machine partnerships, and it hinders security innovation. The response to McAfee himself was perhaps indicative of the industry's perspective on outsiders. McAfee was the last keynote presenter of the day. Many attendees sat through a series of talks by security insiders, but unfortunately left when it came time for an outsider's perspective.
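The precision/recall/accuracy confusion noted in the data science discussion above deserves a concrete example. With invented figures for a rare-event detection problem, a detector that never alerts scores over 99% accuracy while catching nothing; precision and recall expose the failure.

```python
# On imbalanced data (rare malicious events), accuracy rewards a detector
# that never alerts; precision and recall expose the failure.
def metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# 10,000 events, 50 of them malicious; a "silent" detector flags nothing.
silent = metrics(tp=0, fp=0, fn=50, tn=9950)
# A hypothetical real detector: catches 40 of 50, with 60 false alarms.
real = metrics(tp=40, fp=60, fn=10, tn=9890)
print(silent)  # high accuracy, zero recall
print(real)
```

This is exactly why "more data means higher accuracy" is a myth: on imbalanced security data, accuracy alone measures almost nothing of interest.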

Changing an embedded mindset can be even harder than developing the technical skills. This is especially apparent in the security industry, which has yet to figure out how to take the great advances in data science and human-machine interaction from other industries and leverage them for security. As a quantitative social scientist, it was truly mind-boggling to see just how nascent data science and user experience are in the security industry. The future of the security workplace should obviously maintain subject matter experts, but must also pair them with the data scientists who truly understand the realm of the possible, as well as UI/UX experts who can take the enormous complexity of the security data environment and render it useful to the vast user community. It’s ironic that such a technology-driven industry as security completely discounts its roots in Ada Lovelace’s vision of bringing together arts and sciences, machines and humans. Maintaining the status quo—which in the security industry is 0 for 3 in McAfee’s categories for innovation—should not be an option. There is simply too much at stake for corporate and national security. Technical innovation must be coupled with organizational innovation to truly leverage the insights of geeks, machines and outsiders in security.


Andrea Little Limbago

Change: Three Ways to Challenge Today’s Security (UX) Thinking


Last week, I was fortunate enough to spend three and a half days on the floor at RSA for its "Change: Challenge Today's Security Thinking"-inspired conference. I was simply observing and absorbing the vast array of companies and products. As someone new to the world of security (but very well-versed in the field of UX), I was afforded an opportunity to look at an entire industry with a fresh perspective. One of the most unique challenges facing the growing world of user experience professionals is knowing just enough about a target user group to create compelling solutions without being too "in the weeds." In my experience, being too close to a particular industry or audience segment can prevent the more objective approach that a seasoned designer can, and should, bring to a product. Having said that, there were some interesting trends as well as some areas that could benefit from the thematic undercurrent of "Change" presented at RSA. I focused my research on 51 companies spanning multiple verticals, sizes and problem sets—and because the majority were not direct Endgame competitors, the true purpose of my research was to understand more about how the industry thinks and to find key areas of improvement for the field of UX.

Color as a key component
Color played a large part in virtually every product, whether by choice or chance. Color palettes were dominated by bold hues that usually included black, gray, red, orange and blue. Yellow, purple and green were used far less frequently, and likely for good reason. Traditionally, black and gray represent simplicity, prestige and balance, with red and orange representing importance, danger, caution and change. Blue will always represent strength and trust. On the flip side, yellow, green and purple tend to represent sunshine and warmth, growth and fertility, and magic and mystery, unlikely traits in the security industry. Still, some companies utilized these weaker palette choices in their products, possibly without a true understanding of the "baggage" they bring.

Outside of content color, background color use went one of two ways: either dark content on a light background, a paradigm used by 80% of the companies, or the much less common light-on-dark construct. Neither is "better" or "correct" in application development; however, the former tends to be more common in the business-to-business realm and is far more familiar to business-centric application users. When I asked the companies that had chosen the less-utilized light-on-dark approach, they generally said they did so either to differentiate themselves or to target a very specific segment of their market. Whether those aims are achieved remains to be seen. These companies were all young start-ups, clearly taking a bit of a risk.

Maps as presentation vehicles
There were a multitude of products that featured some sort of map – whether network, geographic, server, GPS, Sankey, or tree – you name it, there was a map for it. This was both good and bad. For those companies that did it well, the maps provided a much-needed visualization of data that wouldn't fare well in a tabular or list format. When a security professional needs a bird's-eye view of where their vulnerabilities lie, a visual representation rather than a list of IP addresses may allow them to comprehend what requires their attention in a fraction of the time. However, the maps started to suffer in situations where their presence had no clear purpose. Several products had unnecessary animations. Others were so small that the corresponding data and labels overlapped, rendering the graphic unusable. I saw quite a few stuck into a corner of a dashboard simply to fill an otherwise empty space. The D3 collapsible tree map was extremely popular, often at the cost of legibility and a clear understanding of the complexity of the processes that the visualizations were supposed to clarify.

Features as framework
Perhaps the greatest challenge I found in the majority of products, from both small and large companies but particularly the industry behemoths, was the lack of a clear, well-thought-out information architecture (IA), particularly as it related to feature development and organization. There is a common misunderstanding that more features equate to better “sellability”, particularly in products that like to position themselves head-to-head with their competitors. In the industry, this is often referred to as feature bloat, and time and again it presents itself in products that are designed by product management, marketing and/or engineers. Generally, these are the individuals who are the most removed from the end user. It’s the idea that if some is good, more must be better, and the false assumption that commanding a big price tag means being able to do a lot. We see this as the mark of success in many industries, including the automobile, electronics and vacation/travel sectors.

However, in an industry where time is critical and decision-making is crucial (and competitors are abundant), the feature bloat present in many products shown on the floor can detract from product success and may actually make those products harder to use when time is of the essence. Think scalpel over Swiss army knife, especially if you’re a startup.

What does this mean?
The good news is that UX is starting to make inroads in the security industry and this is an exciting time to be talking about UX in this massive field. Fully bringing UX to an entire product and team takes time, but there are three things that every company can start doing now.

  • First, know your audience and your brand. Figure out to whom you want to sell and for whom you want to build (hint: they may not be the same person). What does your company stand for? What are your core values and selling points? What problems are you solving, and for whom? How are you solving them? Then figure out what it is that makes your company and your product different from everyone else. This is your own brand pyramid. Ask yourself with each new feature that gets proposed: “Does this align with our core strategy, and does our product really need this? Does this solve a specific problem for the user?” Don’t assume that you know the answer simply because you work in marketing or are an engineer. If your answer is “no” and/or the feature doesn’t support your original brand pyramid, it’s extraneous at best, and distracting or detrimental at worst.
  • Second, don’t be afraid to be different—but not so different that people don’t even understand it. This is where your UX team needs to understand how to do good user research and then analyze that research. Don’t just comb your analytics—watch people use your product. Don’t just ask your users what they need—it’s likely that they actually won’t be able to tell you. Don’t assume every user is of a certain demographic and will like some wacky color scheme. Instead, try to understand what it is they want to do with your product. Have an open conversation around their roles in their organizations and what problems they face in those roles. Seek ways to create solutions they wouldn’t have thought of, and then iterate on how to best manifest those within the product interface without sacrificing usability.
  • Finally, offer unique and targeted solutions even if it means having more than one product. It is better to have several separate but logically connected solutions than it is to have a bloated product with many layers of navigation and too many features. If possible, create roles within the product and give those roles specific policies that can hide data and modules when a particular user does not need them. This may seem obvious, but again, when a feature is proposed, ask “does every user in my system need this and if so, are they all using it the same way?” Chances are, they aren’t.
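The role-based gating idea above can be sketched in a few lines of Python; the role names, module list, and function are purely hypothetical, not drawn from any particular product:

```python
# Hypothetical role-to-module policy: each role sees only what it needs.
ROLE_POLICIES = {
    "analyst": {"alerts", "investigations"},
    "admin": {"alerts", "investigations", "user_management", "licensing"},
}

ALL_MODULES = ["alerts", "investigations", "user_management", "licensing"]

def visible_modules(role):
    """Return only the modules a given role should see in the UI."""
    allowed = ROLE_POLICIES.get(role, set())
    return [m for m in ALL_MODULES if m in allowed]

print(visible_modules("analyst"))  # prints ['alerts', 'investigations']
```

An “analyst” login would see only the alerts and investigations modules, keeping the interface focused on that user's actual job rather than every feature the product ships.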

Interestingly enough, on several occasions at RSA I heard the question “what products will this replace?” The end goal of any product should be to solve problems, not displace the competition. If a competitor’s product already solves a user’s problem, then your company is facing an uphill battle if the only goal is to unseat that product. Instead, ask if there is a more unique way to solve that same problem. Perhaps there is a different problem worth solving. Seek the blue ocean. As Apple would say, “Think Different.” Apple wasn’t successful because Apple wanted to outsell Microsoft. Apple was successful because Apple wanted to make products that solved users’ problems. They’ve done this by investing the necessary resources into their user experience. They’ve aligned their business with user needs. Sounds simple—but it takes dedication and time.

In the end, UX does take effort. It can feel like starting over; in some ways, it is. However, in every other industry that has embraced it, especially industries inundated with solutions (think healthcare, education, mobile development), it’s often the difference between an “ok product” and a market success. Even if your organization has already invested a lot of time in your existing products, as RSA taught us, it’s never too late to “Change”.

Change: Three Ways to Challenge Today’s Security (UX) Thinking

Emily Ryan

How the Sino-Russian Cyber Pact Furthers the Geopolitical Digital Divide


As I wrote at the end of last year, China and Russia have been in discussions to initiate a security agreement to tackle the various forms of digital behavior in cyberspace. Last Friday, Xi Jinping and Vladimir Putin formally signed a cyber security pact, bringing the two countries closer together and solidifying a virtual united front against the US. This non-aggression pact is just one of a series of cooperative agreements occurring in the cyber domain, and is indicative of the increasingly divisive power politics that will shape the polarity of the international system for decades to come.

Non-aggression pacts are not new, and by definition focus solely on preventing the use of force between signatories to the pact. However, although they are structured to impact only bilateral relations, historically they have had significant international implications. By signaling ideological, political, or military intentions, non-aggression pacts can preclude similar levels of cooperation with other states. In fact, when states form neutrality pacts (which are similar to, but slightly distinct from, non-aggression pacts), the probability of a state initiating a conflict is 57% higher than for states without any alliance commitments. Regardless of the make-up of a state’s alliance portfolio—whether non-aggression or neutrality pacts, offensive or defensive alliances—a state’s involvement in alliances of any kind increases the likelihood of that state initiating conflict. It would be a mistake to assume that pacts in the cyber domain are any different, as they serve as a similar signaling mechanism of affiliation in the international system. In fact, last week’s cyber security pact has already prompted analogies to the Molotov-Ribbentrop Pact, the non-aggression treaty signed in 1939 between Germany and the USSR. While the public emphasis was on preventing conflict between the two signatories (which clearly didn’t last), the pact contained a secret protocol dividing parts of Eastern Europe into German and Soviet spheres of influence. In short, while non-aggression pacts may appear pacifistic, rarely has that been the case historically.

Moreover, the Sino-Russian pact provides a forum for each state to further shape the guiding principles and norms in cyberspace away from its foundation, which is based on Internet freedom of information and access, and toward the norm of cyberspace sovereignty. Following the surveillance revelations beginning in 2013, global interest in the notion of cyberspace sovereignty has increased, largely aimed at limiting external interventions viewed as an infringement on traditional notions of state sovereignty. On the surface, this merely extends the Westphalian notion of state sovereignty. However, authoritarian regimes (such as Russia and China) have co-opted the de jure legitimacy of state sovereignty to control, monitor and censor information within their borders. This is orthogonal to the norms generally favored by Western democracies and further divides cyberspace into two distinct spheres defined by proponents of freedom of information versus proponents of domestic state control. The Sino-Russian pact will likely only encourage greater fractionalization of the Internet based on the norm of cyberspace sovereignty.

Finally, this pact must be viewed in the context of the growing trend of bilateral cyber security pacts. Japan and the US recently announced the Joint Defense Guidelines, which cover a wide range of cooperative aspects targeted at the cyber domain and the promotion of international cyber norms. Just as the agreement with Japan is likely targeted at countering China, many states in the Middle East are requesting similar cooperation in light of the potential easing of Iranian sanctions. The Gulf Cooperation Council—a political and economic union of Arab states in the Middle East—is similarly pushing for a cyber security agreement with the US to help deter Iranian aggression in cyberspace. In short, these cooperative cyber security agreements are indicative of the larger power politics that shape the international system. States are increasingly jockeying for position in cyberspace, signaling their intent and allegiance, which will have implications for the foreseeable future. The Sino-Russian agreement is only the latest in a string of cyber pacts that reflect the competing visions for cyberspace, and the ever-growing geopolitical digital divide.

 


Andrea Little Limbago

Open-Sourcing Your Own Python Library 101


Python has become an increasingly common language for data scientists, back-end engineers, and front-end engineers, providing a unifying platform for the range of disciplines found on an engineering team. One of the benefits of Python is that it allows software developers to choose from and make use of an enormous ecosystem of high-quality code packages. Among them, a data scientist may use pandas for data manipulation, NumPy for matrix computation, matplotlib for plotting, SciPy for mathematical modeling, and scikit-learn for machine learning. Another benefit of using Python is that it allows developers to contribute their own code packages to the community or share a library with other Python programmers. At Endgame, library sharing is very common across projects for agile product development. For example, the implementation of a new clustering algorithm as a Python library can be used in multiple products with minimal adaptation. This tutorial will cover the basic steps and recommended practices for structuring a Python project, packaging the code, distributing it over a Git repository (GitHub or a private Git repository), and installing the package via pip.

For busy readers, I’ve developed a workflow diagram, below, so that you can quickly glance at the steps that I’ll outline in more detail throughout the post. Feel free to look back at the workflow diagram anytime you need a reminder of how the process works.

 

Workflow Diagram for Open-Sourcing a Python Library

 

Step One: Setup

Let’s suppose we are going to develop a new Python package that will include some exciting machine learning functionality. We decide to name the package "egclustering" to indicate that it contains functions for clustering. In the future, if we are to develop a new set of functions for classification, we could create a new package called "egclassification". In this way, functions designed for different purposes are organized into different buckets. We will name the project folder on the local computer as "eglearning". In the end, the whole project will be version controlled via Git, and be put on a remote Git repository, either GitHub or a private remote repository. Anyone who wants to use the library would just need to install the package from the remote repository. 

Term Definitions

Before we dig into the details, let’s define some terms:

  • Python Module: A Python module is a py file that contains classes, functions and/or other Python definitions and statements. More detailed information can be found here.
  • Python Package: A Python package includes a collection of modules and an __init__.py file. Packages can be nested at any depth, provided that the sub-directories contain their own __init__.py file.
  • Distribution: A distribution is one level higher than a package. A distribution may contain one or multiple packages. In file systems, a distribution is the folder that includes the folders of packages and a dedicated setup.py file. 
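To make these definitions concrete, the sketch below builds a throwaway package on disk and imports a module from it; the folder, module, and function names are illustrative only:

```python
import os
import sys
import tempfile

# A package is just a directory containing an __init__.py file.
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "egclustering")
os.makedirs(pkg_dir)

# An empty __init__.py is enough to mark the directory as a package.
open(os.path.join(pkg_dir, "__init__.py"), "w").close()

# A module is a single .py file inside the package.
with open(os.path.join(pkg_dir, "clusteringModuleLight.py"), "w") as f:
    f.write("def cluster_count():\n    return 3\n")

# With the parent directory on sys.path, the package becomes importable.
sys.path.insert(0, root)
from egclustering.clusteringModuleLight import cluster_count

print(cluster_count())  # prints 3
```

A distribution, by contrast, would wrap this package folder together with a setup.py, as covered later in the tutorial.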

Step Two: Project Structure

A clearly defined project structure is critically important when creating a Python code package. Not only will it present your work in an organized way and help users find valuable information easily, but it will also be much easier to add new packages or files in the future if the project scales.

I will take the recommendation from "Repository Structure and Python" to structure the new project, adding only a README.md file, the introductory file displayed on GitHub, as shown below.

README.rst
README.md
LICENSE
setup.py
requirements.txt
egclustering
            __init__.py
            clusteringModuleLight.py (This py file contains the code.)
            helpers.py
docs
            conf.py
            index.rst
tests
            test_basic.py
            test_advanced.py 

The project structure is well explained on the page referenced above. Still, it might be helpful to emphasize a few points here:

  • setup.py is the file that tells a distribution tool, such as Distutils or Setuptools, how to install and configure the package. It is a must-have.
  • egclustering is the actual package name. How would we (or a distribution tool) know that? Because it contains a __init__.py file. The __init__.py file could be empty, or contain statements for some initiation activities.
  • clusteringModuleLight.py is the core file that defines the classes and functions. A single py file like that is called a module. A package may contain multiple modules. A package may also contain other packages, namely sub-packages, as long as there is a __init__.py included in a package folder. A project may contain multiple packages as well. For instance, we may create a new folder on par with "egclustering" called "egclassification" and put a new __init__.py under it.
  • Once you find a structure you like, it can serve as a template for future structures. You only need to copy and paste the whole project folder and give it a new project name. More advanced users can try using some template tools, for example, cookiecutter
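The copy-and-paste reuse of a project skeleton can itself be scripted; this sketch builds a minimal skeleton in a temporary folder and clones it under a new project name (all paths and names are hypothetical):

```python
import os
import shutil
import tempfile

# Hypothetical existing skeleton we want to reuse as a template.
workspace = tempfile.mkdtemp()
template = os.path.join(workspace, "eglearning")
os.makedirs(os.path.join(template, "egclustering"))
open(os.path.join(template, "setup.py"), "w").close()
open(os.path.join(template, "egclustering", "__init__.py"), "w").close()

# Copy the whole project folder and give it a new project name.
new_project = os.path.join(workspace, "egforecasting")
shutil.copytree(template, new_project)

print(sorted(os.listdir(new_project)))  # prints ['egclustering', 'setup.py']
```

Tools like cookiecutter formalize exactly this pattern, with templating of names and metadata on top of the copy.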

Step Three: Setup Git and GitHub (or private GitHub) Repository

Press Ctrl+Alt+T to open a new terminal, and type in the following two commands to install Git on your computer, if you haven't done so already:

            sudo apt-get update
            sudo apt-get install git

If the remote repository will be on GitHub (or any other source code host, such as bitbucket.org), open a web browser and go to github.com, apply for an account, and create a new repository with a name like 'peterpan' in my case. If the remote repository will be on a private GitHub, create a new repository in a similar way. In either situation, you will need to tell GitHub your public key so that you can use ssh protocol to access the repository. 

To generate a new pair of ssh keys (private and public), type the commands in the terminal:

            ssh-keygen -t rsa -C "your_email@example.com"
            eval "$(ssh-agent -s)"
            ssh-add ~/.ssh/id_rsa

Then go to the settings page of your github account and copy and paste the content in the pub file into a new key. The details of generating ssh keys can be found on this settings page.

You should now have a new repository on GitHub ready to go. Click on the link of the repo and it will open the repo's webpage. At the moment, you only have a master branch. We need to create a new branch called "develop" so that all the development will happen on the "develop" branch. Once the code reaches a level of maturity, we put it on "master" branch for release.

To do that, click "branch", and in the blank field, type "develop". When that's done, a new branch will be created. 

Step Four: Initiate the Local Git and Sync with the Remote Repository

So far, we have installed Git locally to control source code versions, created the skeleton structure of the project, and set up the remote repository that will be linked with the local Git. Now, open a terminal window and change directory (command ‘cd’) into the project folder (in my case, ~/workspace/peterpan). Type:

            git init
            git add .  

The period “.” after “git add” indicates to add the current folder into Git control.

If you haven't done so already, you will need to tell Git who you are. Type:

            git config --global user.name "your name"
            git config --global user.email "your email address"

Now let's tell local Git what remote repository it will be associated with. Before doing that, we need to get the URL of the remote repository so that the local Git knows where to locate it. On your browser, open the remote Git repository webpage, either on Github or your private GitHub. On the bottom of the right-side panel, you will see URL in different protocols of https, SSH, or subversion. If you're using GitHub and your repository is public, you may choose to use the https URL. Otherwise, use the SSH URL. Click the "copy to clipboard" button to copy the link.

In the same terminal, type:

            git remote -v 

to check what remote repositories you currently have. There should be nothing.

Now use the copied URL (which in my case is git@github.com:richardxy/peterpan.git) to construct the command below. "peterpanssh" is the name I gave to this specific remote repository which helps the local Git to identify which remote repository we deal with.

            git remote add peterpanssh git@github.com:richardxy/peterpan.git

When you type the command “git remote -v” again, you should see that the new remote repository has been registered with the local Git. You can add more remote repositories this way using the “git remote add” command. If you would like to delete a remote repository, which basically means "break the link between the local Git and the remote repository", use “git remote rm <repository name>”, such as:

            git remote rm peterpanssh

If you don't like the current name of a repository, you can rename it using “git remote rename <oldname> <newname>”, for example:

            git remote rename peterpanssh myrepo

At the moment, the local Git repository has only one branch. Use “git branch” to check, and you will see “master” only. A better practice is to create a “develop” branch and develop your work there. To do this, type:

            git checkout -b develop

Now type “git branch” again and hit enter in the terminal window, and you will see the branch “develop” with an asterisk attached ahead of it, which means that the branch “develop” is the current working branch.

Now that we have linked a remote Git repository with the local Git, we can start synchronizing them. When you created the new repository on the remote Git (Github or your company's private Git repository), you may have opted in to add a .gitignore file. At the moment, .gitignore file exists only at the remote repository, but not at the local git repository. So we need to pull it to the local repository and merge it with what we have in the local repository. To do that, we use the command below:

            git pull peterpanssh develop 

Of course, peterpanssh is the name of the remote repository registered with the local git. You may use your own name.

“Git pull” works fine in small and simple projects like this. But when working on a project that has many branches in its repository, separate commands "git fetch" and "git merge" are recommended. More advanced materials can be found at git-pull Documentation and Mark's blog.

Once the local Git repository has everything the remote Git repository has (and more), we can commit and push the contents in the local Git to the remote Git.

The reason for committing to Git is to put the source code under Git's version control. The workflow related to committing usually includes:

Modify code -> Stage code -> Commit code

So, before we actually commit the code, we need to stage the modified files. We do this to tell Git what changes should be kept and put under version control. The easiest way to stage the changes is to use:

            git add -p

That will bring up an interactive session that presents you with all the changes and lets you decide to stage them or not. As we haven't made many changes so far, this interactive session should be short. Now we can enter:

            git commit -m "initial commit"

The letter "m" means "message", and the string after "-m" is the message to describe the commit.

After committing, the staged changes (by the "git add" command) are now placed in the local Git repository. The next step is to push it to the remote repository. Using the command below will do this:

            git push peterpanssh HEAD:develop

In this case, "peterpanssh" is the remote repository name registered with the local Git, and "develop" is the branch that you would like to push the code to. 

Step Five: Develop the Software Package

So far, we have built the entire infrastructure for hosting the local project, controlling the software versions both locally and remotely. Now it's time to work on the code in the package. To put the changes under version control (when you’re done with the project, or any time you think it’s needed), use:

            git add -p
            git commit -m "messages"
            git push repo_name HEAD:repo_branch

Step Six: Write setup.py

When your code package has reached a certain level of maturity, you can consider releasing it for distribution. A distribution may contain one or multiple packages that are meant to be installed at the same time. A designated setup.py file is required to be present in the folder that contains the package(s) to be distributed. Earlier, when we created the project structure, we created an empty setup.py file. Now it's time to populate it with content.

A setup.py file contains at least the following information:

           from setuptools import setup, find_packages
           setup(name='eglearning',
           packages=find_packages()
           )

There are a few distribution tools in Python. The standard tool for packaging in Python is distutils, and setuptools is an upgrade of distutils, with more features. In the setup() function, the minimum information we need to supply is the name of the distribution, and what packages are to be included. The function find_packages() will recursively go through the current folder and its sub-folders to collect package information, as long as a __init__.py is found in a package folder.
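You can check what find_packages() discovers without writing a full setup.py; this sketch creates two empty packages in a temporary folder and lists them (the folder names are illustrative, and setuptools is assumed to be installed):

```python
import os
import tempfile
from setuptools import find_packages

root = tempfile.mkdtemp()
# Any sub-folder containing an __init__.py is discovered as a package.
for name in ("egclustering", "egclassification"):
    os.makedirs(os.path.join(root, name))
    open(os.path.join(root, name, "__init__.py"), "w").close()

# A folder without an __init__.py (e.g. docs) is ignored.
os.makedirs(os.path.join(root, "docs"))

packages = find_packages(where=root)
print(sorted(packages))  # prints ['egclassification', 'egclustering']
```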

It is also helpful to provide the meta data for the distribution, such as version, a description of what the distribution does, and author information. If the distribution has dependencies, it is recommended to include the installation requirements in setup.py. Therefore, it may end up looking like this:

           from setuptools import setup, find_packages
           setup(name='eglearning',
                      version='0.1a',
                      description='a machine learning package developed at Endgame',
                      packages=find_packages(),
                      install_requires=[
                                 'Pandas>=0.14',
                                 'Numpy>=1.8',
                                 'scikit-learn>=0.13',
                                 'elasticsearch',
                                 'pyes',
                      ],
           )

To write more advanced setup.py, Python documentation or this web page are good resources.

When you are done with setup.py, commit the change and push it to the remote repository by typing the following commands:

           git add -p
           git commit -m 'modified setup.py'
           git push peterpanssh HEAD:develop

Step Seven: Merge Branch Develop to Master

According to Python engineer Vincent Driessen, "we consider origin/master to be the main branch where the source code of HEAD always reflects a production-ready state." When the code in the develop branch enters the production-ready state, it should be merged into the master branch. To do this, simply type in the terminal under the project directory:

           git checkout master
           git merge develop

Now we can push the master branch to the remote repository:

           git push peterpanssh

Step Eight: Install the Distribution from the Remote Repository

The Python package management tool "pip" supports the installation of a package distribution from a remote repository such as GitHub, or a private remote repository. pip currently supports cloning over the protocols of git, https and ssh. Here we will use ssh.

You may choose to install from a specific commit (identified by its commit hash) or from the latest commit on a branch. To specify a commit for cloning, type:

           sudo pip install -e git://github.com/richardxy/peterpan.git@4e476e99ce2649a679828cf01bb6b3fd7856281f#egg=MLM0.01

In this case, "github.com/richardxy/peterpan.git" is the ssh clone URL with the ":" after ".com" replaced with "/". This is tricky, and the install won't work if you omit the replacement. The "egg" parameter is also required; its value is up to you.
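The colon-to-slash rewrite can be scripted to avoid mistakes; below is a small helper (the function name is my own) applied to the example clone URL:

```python
def ssh_to_pip_url(clone_url, ref, egg):
    """Convert 'git@github.com:user/repo.git' into a pip-installable
    'git://github.com/user/repo.git@ref#egg=name' URL."""
    # Drop the 'git@' user prefix and replace the ':' separator with '/'.
    host_and_path = clone_url.split("@", 1)[1].replace(":", "/", 1)
    return "git://{0}@{1}#egg={2}".format(host_and_path, ref, egg)

print(ssh_to_pip_url("git@github.com:richardxy/peterpan.git",
                     "develop", "MLM0.02"))
# prints git://github.com/richardxy/peterpan.git@develop#egg=MLM0.02
```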

If you opt to clone the latest version in the branch (e.g. “develop” branch), type:

           sudo pip install -e git://github.com/richardxy/peterpan.git@develop#egg=MLM0.02

You only need to specify the branch name after "@" and before "egg" parameter. This is my preferred method.

Then pip will check if the installation requirements are met and install the dependencies and the package for you. Once it's done, type: 

           pip freeze

to find the newly installed package. You will see something like this:

           -e git://github.com/richardxy/peterpan.git@2251f3b9fd1b26cb41526f394dad81016d099b03#egg=eglearning-develop

Here, 2251f3b9fd1b26cb41526f394dad81016d099b03 is the hash of the latest commit.

Type the command below to create a requirements document that registers all of the installed packages and versions. 

           pip freeze > requirements.txt

Then open requirements.txt, replace the commit hash with the branch name, such as “develop”, and save it. The reason for doing this is that the next time a user tries to install the package, there may be a new commit, and the hash will have changed. Using the branch name will always point to the latest commit on that branch.
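This edit can also be automated; the helper below (the name and regex are my own) swaps the 40-character commit hash for a branch name in a pip VCS requirement line like the one shown above:

```python
import re

def pin_to_branch(req_line, branch):
    """Replace the '@<40-hex-commit>' pin in a pip VCS requirement
    line with '@<branch>' so it always tracks the branch tip."""
    return re.sub(r"@[0-9a-f]{40}#", "@{0}#".format(branch), req_line)

line = ("-e git://github.com/richardxy/peterpan.git"
        "@2251f3b9fd1b26cb41526f394dad81016d099b03#egg=eglearning-develop")
print(pin_to_branch(line, "develop"))
# prints -e git://github.com/richardxy/peterpan.git@develop#egg=eglearning-develop
```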

One caveat: if virtualenv is used, the pip freeze command should look like this so that only the configurations in the virtual environment will be captured:

           pip freeze -l > requirements.txt

Conclusion

This tutorial covers the most fundamental and essential procedures for creating a Python project, applying version control during development, packaging the code, distributing it over code-sharing repositories, and installing the package by cloning the source code. Following this process can help data scientists without formal computer science training get comfortable with the collaborative tools used for Python software development and distribution.


Richard Xie

Hunting Your Adversaries with Endgame Enterprise: Meet Us at Gartner


Stop by our booth (#1214) at the Gartner Security & Risk Management Summit next week to receive a demo of Endgame Enterprise, the industry's first endpoint detection and response platform to hunt, contain, and eliminate adversaries that bypass signature and perimeter based security solutions. Featuring advanced threat intelligence, behavioral analysis, and attack chain modeling, Endgame Enterprise "thinks like the adversary", enabling customers to detect and respond faster to unknown threats, preventing damage and loss.

Location: Gaylord National, National Harbor, MD

Booth Hours:

Monday, 6/8: 12:30-2:45pm, 5:30-7:30pm

Tuesday, 6/9: 11:45am-5:15pm

Wednesday, 6/10: 12:15-2:15pm

Learn more about the event.


Much Ado About Wassenaar: The Overlooked Strategic Challenges to the Wassenaar Arrangement’s Implementation


In the past couple of weeks, the US Bureau of Industry and Security (BIS), part of the US Department of Commerce, announced the potential implementation of the 2013 changes to the Wassenaar Arrangement (WA), a multinational arrangement intended to control the export of certain “dual-use” technologies. The proposed changes place additional controls on the export of “systems, equipment or components specially designed for the generation, operation or delivery of, or communication with, intrusion software.” Many in the security community have been extraordinarily vocal in opposition to this announcement, especially with regard to the newly proposed definition of "intrusion software" in the WA. This debate is important and should contribute to the open comment period requested by the BIS, which ends July 20. While the WA appears to be a legitimate attempt to control the export of subversive software, the vague wording has raised alarms within the security community.

For decades the security community has developed and studied exploit and intrusion techniques to understand and improve defenses. Like many research endeavors, this work has involved the development, sharing, and analysis of information across national boundaries through articles, conferences, and academic publications. The research has successfully produced countermeasures like DEP (Data Execution Prevention) and ASLR (Address Space Layout Randomization), which mitigate numerous exploits seen in the wild. These countermeasures resulted directly from the kind of exploitation research that would be captured by the new WA definition. While a robust debate on the WA’s implications is useful for the security community, what seems to be lacking is a strategic-level discussion on whether these kinds of arrangements even have the potential to achieve the desired effect. The debate over the definition and wording of key terms is indicative of the larger hurdles these kinds of multinational arrangements encounter. This is especially problematic when building upon legacy agreements. By most measures, the WA simply renamed the COCOM (Coordinating Committee for Multilateral Export Controls) export control regime, a Cold War relic designed to limit the export of weapons and dual-use technologies to the Soviet bloc. The Cold War ended a quarter of a century ago, and yet agreements like the WA are still built on that same mentality and framework. Below are four key areas that impact the ability of the WA (and similar agreements) to achieve the desired effect of “international stability” and that should be considered when seeking to limit the diffusion of strategically important and potentially destructive materials.

1. Members only: There are only 41 signatories to the WA (see the map below*). While to some that may seem extensive, it reflects less than a quarter of the states in the international community. In layman’s terms, three-quarters of the countries will be playing by a completely different set of rules and regulations, putting those who implement it at a competitive disadvantage – economically and in national security. Moreover, it means that three-quarters of the countries can export these potentially dual-use technologies – including countries like China, Iran, North Korea – rendering it unlikely to achieve the desired effect. To be clear, this concern is not just about US adversaries, but also about allies that could gain a competitive advantage. Israel, not a signatory of the WA, has a thriving cyber security industry and may increasingly attract more investment (and innovation!) in light of implementation of the WA.

2. Credible commitments: International cooperation depends heavily on credible commitments and on states' ability to implement a treaty's policies domestically. As membership rises, so too does the diversity of domestic political institutions and foreign policy objectives. It would be startling (to say the least) if Western European countries and Russia pursued implementations that produced uniform adherence to the WA. Even within Western Europe, elections may usher in new approaches to digital security: the recent UK election, which returned a Tory majority, may alter surveillance legislation in ways that run counter to the WA.

3. Ambiguity of language: The most unifying theme of the security community's opposition to the WA is its vague and open-ended definition of intrusion software. By some estimates, anti-virus software and Chrome auto-updates could fit within the definition. The government will likely receive many comments on the definition during the 60-day response period, and it is strongly in the interest of all parties that greater specificity be added. Otherwise, there will continue to be headlines vilifying the government for classifying everything digital as a weapon of war, which is clearly not the case. As we grapple with securing systems globally and ensuring our defenses can prevent advanced threats, one can imagine a future where loose policy definitions push software and techniques underground or offshore for fear of prosecution. That outcome would be counterproductive to understanding and securing a new and changing connected world.

4. Rudderless ship: The most successful international agreements have relied heavily on global leadership, provided either directly by a hegemonic state or indirectly through a specific international governmental organization (IGO). This leadership is essential for ensuring compliance with, and the diffusion of, the norms embedded in a treaty or agreement. The WA lacks any form of IGO support, and certainly lacks hegemonic or bipolar leadership. Even if such leadership existed, the cyber domain lends itself to obfuscation and manipulation of data and techniques, rendering external monitoring difficult. Moreover, China and Russia continue to push norms completely orthogonal to those of the WA, including cyber sovereignty. Without global acceptance of and agreement on these foundational concepts, the WA has little chance of adherence even where there is domestic support for its verbiage (which is clearly not currently the case).

In short, the hurdles the WA will encounter in trying to achieve its objectives reflect the typical two-level game that hinders international cooperation: states must balance international polarity and norms on the one hand with domestic constituents, institutions, and norms on the other. Without the proper conditions at both the domestic and international levels, agreements have little chance of actually achieving their objectives. If the goal is truly international stability, human rights, and privacy, the WA may not be the optimal means of achieving it. As organizations, researchers, and activists continue to contribute to the critical debate about the value and feasibility of the WA, the policy and security communities should take advantage of the open comment period, and remember that the complexity and dynamism of the current digital landscape requires novel thinking beyond obsolete Cold War approaches.

*Wassenaar Arrangement Participants (source: http://acdis.illinois.edu/resources/arms-control-quick-facts/Wassenaar)

Much Ado About Wassenaar: The Overlooked Strategic Challenges to the Wassenaar Arrangement’s Implementation

Cody Pierce & Andrea Little Limbago