# Tag Archives: Intelligence Analysis

## Book Review: Challenges in Intelligence Analysis

I have always believed in the value of interdisciplinary studies. In particular, I like to examine approaches from superficially dissimilar fields whose underlying problems or solutions, on closer examination, connect strongly to my own work. For example, nearly 10 years ago I read Level 4: Virus Hunters of the CDC and found a number of useful lessons for combating malware outbreaks and dealing with large-scale incidents.

More recently, my interest has turned to applying lessons from intelligence analysis. This isn’t much of a reach, truthfully, because those of us working in infosec (“cyberintelligence”) frequently do the same work as those in military intelligence and related agencies. As part of this effort, I recently finished reading Challenges in Intelligence Analysis by Timothy Walton (ISBN 0521132657). Out of all the books I’ve read recently on intelligence, this offered perhaps the most direct application in any number of fields (including mine). I read the Kindle edition, so I can’t say much about the quality of the printing, readability of the text, or appearance of the figures.

The structure makes it particularly straightforward to read. After the initial chapters dealing with challenges and solutions in general, abstract terms, Walton runs through nearly 40 case studies ranging from the Israelite spies in Canaan (as recounted in the Book of Numbers, chapter 13) to George Washington to the pre-WWII Luftwaffe to Aldrich Ames to Aum Shinrikyo. Beyond the history lessons, each case study examines the intelligence analysis techniques used and discusses what might have improved the approach. “Questions for Further Thought” sections suit classroom settings, as well as readers simply interested in taking the time to structure their own responses. Each case also includes a recommended reading list, which I find particularly useful because a number of historical cases have striking parallels in current situations (beyond their own intellectual appeal).

For example, Chapter 10 “Estimating the Strength of the Luftwaffe in the 1930s” immediately resonated with me in thinking about challenges regarding ‘cyberwar’ with China and understanding their strengths. The same challenge would apply in looking at the US, I’d think. And Chapter 17 “Counterinsurgency in Malaya” has a number of connections to the US’ recent conflicts in Iraq and Afghanistan, something not lost on General David Petraeus and Lieutenant General James Amos when they wrote the new Counterinsurgency Field Manual.

Several techniques appear frequently in the text. Walton does not limit the discussion to easily understood tools like timelines, flow charts, and matrices: he also reviews link and network analysis (particularly applicable in cyberintelligence), analysis of competing hypotheses, indicators (sound familiar?), and red teaming. The latter goes beyond a simple penetration test to emulate the tactics, techniques, and procedures of specific adversaries. Decision trees and especially scenario analysis also recur throughout the case studies. Cognitive biases play a significant role in the discussions as well: confirmation bias, groupthink, and even hindsight bias, fittingly for a book of historical case studies.
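Of these techniques, analysis of competing hypotheses translates most directly into code. Here is a minimal sketch of an ACH matrix; the hypotheses and evidence items below are hypothetical placeholders for illustration, not taken from Walton's book:

```python
# Minimal sketch of an Analysis of Competing Hypotheses (ACH) matrix.
# Each evidence item is rated against each hypothesis:
#   C = consistent, I = inconsistent, N = neutral.
# ACH ranks hypotheses by how much evidence *disconfirms* them,
# not by how much appears to support them.
# Hypotheses and evidence here are invented placeholders.

matrix = {
    "H1: opportunistic malware": {"e1": "C", "e2": "I", "e3": "I"},
    "H2: targeted intrusion":    {"e1": "C", "e2": "C", "e3": "N"},
    "H3: insider activity":      {"e1": "I", "e2": "I", "e3": "I"},
}

def inconsistency_score(ratings):
    """Count disconfirming evidence; the least-disconfirmed hypothesis survives best."""
    return sum(1 for rating in ratings.values() if rating == "I")

# Rank hypotheses from least to most disconfirmed.
ranked = sorted(matrix.items(), key=lambda kv: inconsistency_score(kv[1]))
for hypothesis, ratings in ranked:
    print(f"{inconsistency_score(ratings)} inconsistencies: {hypothesis}")
```

The point the sketch preserves is the one that makes ACH useful against confirmation bias: you rank hypotheses by how little evidence contradicts them, rather than collecting support for a favorite.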

A few of the case studies seem a little rushed. Even where less data survives for historical review, Walton doesn’t always take the opportunity to explore the analysis techniques in greater detail. Relatedly, a few case studies seem a little forced (Sun Tzu has a lot to say about intelligence analysis, but he isn’t a case study per se). And I would have liked a little more explanation of why he recommends certain books for further reading, especially in the general (non-case-specific) list at the end of the book.

In general, I highly recommend this book to anyone with an interest in intelligence analysis, world history, or critical and analytical thinking.

A version of this review also appears on Amazon.

## Kent doctrine for security intelligence analysis

I’ve said before that log management matters, but log analysis matters more. Extracting and communicating useful information (analysis) requires collecting and storing your security data as well as processing the data quickly. But having all the data available won’t matter to anybody except auditors if you don’t use it in ways that inform good decisions. Mike Rothman of Securosis expressed this exceptionally well in his preview of the upcoming RSA Conference:

> You will see a bunch of vendors talking about their new alerting engines taking advantage of these cool new data management tactics, but at the end of the day, it’s not how something gets done – it’s still what gets done.
>
> So a Hadoop-based backend is no more inherently helpful than that 10-year-old RDBMS-based SIEM you never got to work. You still have to know what to ask the data engine to get meaningful answers. Rather than being blinded by the shininess of the BigData backend, focus on how to use the tool in practice. On how to set up the queries to alert on stuff that maybe you don’t know about.

To paraphrase Socrates, unexamined data are not worth collecting. So analysis methodology and critical thinking skills matter. Rothman is spot on with this: the value of big data tech comes when you need to grow past the capabilities that traditional SIEM and RDBMS provide. By way of analogy: if you don’t understand algebra, then don’t take a course in calculus until you have the basic prerequisites down. You’ll just frustrate yourself and waste your tuition dollars.
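To make Rothman’s point concrete, here is a deliberately tiny sketch in which the “data engine” is just a Python list, because the interesting part is the question you ask, not the backend that answers it. The log format, field names, and threshold are hypothetical:

```python
# Illustration of "know what to ask the data engine": the same
# failed-login-burst question could be posed to an RDBMS or a Hadoop
# cluster; here the engine is a plain list of dicts.
# Event schema and threshold are invented for this sketch.
from collections import Counter

events = [
    {"src": "10.0.0.5", "action": "login", "result": "fail"},
    {"src": "10.0.0.5", "action": "login", "result": "fail"},
    {"src": "10.0.0.5", "action": "login", "result": "fail"},
    {"src": "10.0.0.9", "action": "login", "result": "ok"},
]

def failed_login_bursts(events, threshold=3):
    """Return sources whose failed-login count meets the threshold."""
    fails = Counter(e["src"] for e in events
                    if e["action"] == "login" and e["result"] == "fail")
    return {src: n for src, n in fails.items() if n >= threshold}

print(failed_login_bursts(events))  # flags 10.0.0.5
```

The query itself is trivial; the analytic work was deciding that repeated failed logins from one source are worth alerting on at all. That decision is the algebra you need before the big-data calculus helps.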


In this vein, then, I appreciated the pointer from the OSINT and analysis training firm Treadstone 71 to a CIA paper on the background and work of Sherman Kent, the “father of intelligence analysis”.

He promoted an analytic doctrine that boils down to nine key points, listed in the CIA paper above. That doctrine applies across domains, not just to the sorts of military and geopolitical analysis we expect from government intelligence agencies. I highly recommend that everyone read at least that section of the paper, but here are some applications for those of us involved in security intelligence analysis, especially in the private sector.

1. Focus on Policymaker Concerns: What keeps your management up at night? Hopefully security isn’t the only thing, of course. So assuming that your CxOs understand the general threat landscape, analysts need to ensure that they track relevant areas that can lead to useful changes and decisions at strategic and tactical levels.
2. Avoidance of a Personal Policy Agenda: Many analysts focus on threats that concern them for reasons outside of their organization. Maybe they disagree with the politics of the Occupy movement and overemphasize threats to entirely unrelated organizations, or worry about APT China because of Sinophobia rather than a reasoned assessment of the situation. Or maybe they want to drive decision makers to a particular tech solution. Even worse, they may use their analyses as weapons for corporate political plays. Doing that represents a disservice to the organization and an unprofessional approach.
3. Intellectual Rigor: This area stands as-is: “Estimative judgments are based on evaluated and organized data, substantive expertise, and sound, open-minded postulation of assumptions. Uncertainties and gaps in information are made explicit and accounted for in making predictions.”
4. Conscious Effort to Avoid Analytic Biases: None of us can completely avoid cognitive bias, but we can make sure we understand it and try to correct for it where possible. That principally means application of the scientific method. As previously noted, whether or not faith and dogma have a place in one’s personal life, they certainly do not in one’s professional analyses.
5. Willingness to Consider Other Judgments: Fight for your ideas, but “playing devil’s advocate” should rest on a better intellectual basis than simply spreading FUD. Recognize that others may in fact know more than you do or have insights that can help you.
6. Systematic Use of Outside Experts: In addition to seeking out and understanding the work of other analysts, don’t restrict yourself solely to your field or even industry. Work with a community and keep bringing in fresh concepts from other disciplines.
7. Collective Responsibility for Judgment: Eventually, your team will produce a report. You may not have agreed with everything that went into it, but that’s the way the sausage gets made. Once that report goes to its audience, support it. Throwing the rest of your analysis team under the bus by telling the audience “I told them so” doesn’t actually make you look smarter. It makes you look unprofessional. That doesn’t mean that you should ignore all criticism; rather, it means that you should be willing to take lumps with the rest of the group. If someone asks you for your opinion, give it – but clarify that it doesn’t represent the considered opinion of the rest of the team.
8. Effective communication of policy-support information and judgments: Analysts need three core skills: domain expertise, critical thinking skills, and communication ability. This includes targeting your analysis to the level appropriate to your audience. You must be able to summarize your findings in understandable and accurate ways. And you must be able to handle points of uncertainty properly.
9. Candid Admission of Mistakes: You won’t always be right. Admit it, and review past work to see what you can learn for improvement the next time. “Try again. Fail again. Fail better.”

Security intelligence analysts should learn from previous work, instead of simply trusting in their own domain expertise and innate intelligence. Dr. Kent led the way, and even we non-spooks can still learn from his work.

## Analysis of DNI annual Worldwide Threat Assessment

The US Director of National Intelligence, James Clapper, provided his annual Worldwide Threat Assessment to the Senate yesterday (followed by a classified session with, we can surmise, greater detail).

The unclassified portion discusses cybersecurity several times. In fact, the introduction states:

> Counterterrorism, counterproliferation, cybersecurity, and counterintelligence are at the immediate forefront of our security concerns.

Notwithstanding the idea that we should consider cybersecurity as a domain and not only a specific activity, I found it useful to see where the policymakers within the US intelligence community see specific concerns. The entire document runs about thirty pages, but over two-thirds of it addresses specific region-by-region and country-by-country concerns. Two pages address cyber threats and counterintelligence, which for our purposes cover largely similar ground.

The assessment correctly notes that “neither the public nor private sector has been successful at fully implementing best practices.” I’d go a step further, because best practices evolve on both the attack and defense fronts. We don’t even fully implement standard practices: the things we know how to do efficiently and relatively easily. Standard practices, in my mind, constitute a reasonable bar to clear: if practitioners in a given area generally all accept some technology or process as “the way it’s done”, then we shouldn’t excuse anyone doing less than that.

Interestingly, the document first singles out China and Russia as state actors, but then refers to the 2011 NCIX report to specifically blame “entities within these countries”. This means that, although the DNI does not provide specific reasons for attribution in the unclassified report, he does claim that the entities have state sponsorship. The NCIX report, by contrast, only said (page 5) that the intelligence community has “not been able to attribute many of these private sector data breaches to a state sponsor.”

The DNI report also notes that governments cannot keep up with tech development and illustrates this by “failed efforts at censoring social media” in the Arab Spring. This should provide an object lesson to US policymakers, though the recent controversies over SOPA, PIPA, and now ACTA indicate that they might not have fully connected the dots.

As a community, we’ve talked for years about addressing the vulnerability problems (including across the entire supply chain), but the DNI also talks about threat in the context of problems regarding warning, detection, and attribution. He recommends greater “US Government engagement” with the private sector. This presents other challenges, though, because we have concerns about transparency versus legitimate secrecy needs (just for starters).

In the section on counterintelligence, the report also links cybersecurity to foreign intelligence service activity. I physically laughed out loud at the assessment that “many intrusions into US networks are not being detected”: understatement of the year. The report here adds Iran to the list of countries undertaking cybersecurity operations against the US. The private sector infosec community, outside of the defense industrial base and Stuxnet, hasn’t really paid much attention to Iran. That could change in 2012, particularly if geopolitical tensions continue to increase there.

I didn’t expect any specific data in this document, given its purpose and classification level. But it could point the way to at least some of the areas that could involve many of us in the next few years, and it certainly is useful in validating the idea that we need to improve our abilities in sharing threat intelligence and incident detection & response.

## Adapting intelligence analysis for DFIR

We can define an analyst as a function taking data and caffeine as inputs that outputs (hopefully useful) knowledge:

$analyst(data, caffeine) \to knowledge$

But analysts need more than just good data and properly brewed coffee (or tea, if that’s your thing). We need well-written “internal code”: our thought processes, if you will. As I’ve previously mentioned, too much material focuses on the data and not enough on the processing. If you look for information on log management, you can find endless advice on how to collect and store your logs. If you look for information on SIEM systems, you can find lots of vendor “marketecture”, compliance guidance, and so forth – but not enough guidance on what to do with the information you find there.

To find what we really need, two things have to happen. First, we need to look outside the IT security echo chamber. Simply repeating the same endless mantras won’t advance the state of the art at all, but looking at other fields with related problems and finding ways to cross-pollinate certainly can bear fruit. In my view, the intelligence community has spent decades working through similar issues. Some really useful references I’ve found lately include Psychology of Intelligence Analysis (which largely discusses “Tools for Thinking” and “Cognitive Biases”). But another document, Basic Counterintelligence Analysis in a Nutshell, has much better applicability to DFIR. Some sections, like “Analytic Traps and Mindsets”, apply directly; others have simply gone out of date; and still other concepts have useful analogues. For example, map analysis in a geographic context usually doesn’t reveal very much (since network links and physical proximity don’t correlate well), but overlaying your data on a network map certainly can.
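As a sketch of that last analogue, overlaying investigation data on a network map can be as simple as walking an adjacency list. The topology and compromised-host list below are invented for illustration; a real version would draw on asset inventory and alert data:

```python
# "Map analysis" translated to a network map: given hosts already
# known to be compromised, find the directly reachable neighbors --
# the places to look next. Topology and host names are hypothetical.
network_map = {            # adjacency list: host -> directly reachable hosts
    "dmz-web":  ["app-1"],
    "app-1":    ["db-1", "file-srv"],
    "db-1":     [],
    "file-srv": ["backup"],
    "backup":   [],
}
compromised = {"dmz-web", "app-1"}

def at_risk_neighbors(network_map, compromised):
    """Hosts one hop from a compromised host, excluding known compromises."""
    risk = set()
    for host in compromised:
        risk.update(network_map.get(host, []))
    return risk - compromised

print(sorted(at_risk_neighbors(network_map, compromised)))
```

Proximity on this map is reachability, not geography, which is exactly why the geographic version of the technique fails and the network version works.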

So in February, I intend to take the “Basic Counterintelligence Analysis in a Nutshell” document and adapt the ideas in it to network security investigations in particular. But to do this justice takes more than a simple post, so instead of posting that here as originally intended, I’ll spend some time on it and get feedback when it’s ready. This post mostly serves the purpose of getting it out there so that my colleagues, friends, and readers can hold me accountable next month.