Tony Sager joined the NSA in the mid-to-late 1970s, when it was far more secretive, with a college background in mathematics. At the time, he confused the NSA with NASA because nobody really knew what it was. He went in as a ComSec intern doing cryptography and what we now call “cybersecurity”. Coworkers joined the National Softball Association so they could get NSA caps to wear. He switched to math and computer science because the government would buy him an Apple II+. For the last seven years, he’s run the Information Assurance Directorate (IAD), focused on computer defense and vulnerability. Threats are about adversaries, but they’re only one part of the risk equation.
1: The optimal place to solve a problem is never where you found it. The NSA Red Team does a great job and will work with you to understand how they got in, how to stop it, and so on. But knowing what patch to install isn’t good enough and doesn’t solve the real underlying operational issue: if you can’t manage configuration changes and patches at enterprise scale, you’re doomed. Red teams can’t actually fix and redesign complex networks; that’s not their expertise. And the information you get is usually not in the optimal form to help you solve the problem; pen-test reports aren’t scalable, in other words. The purpose of red teaming is to help someone else understand and fix their problems quickly, not just for the red team to get better at breaking in.
2: If a bad thing is happening to you today, it almost certainly happened to someone else yesterday. There aren’t really that many new things in this business. New twists and variations on a theme, certainly, but not truly new. And tomorrow it’ll happen to someone you care about. You almost certainly don’t know who yesterday’s somebody was, and you don’t have a relationship that would let you share that information.
3: After you figure out what happened, you’ll notice plenty of obvious signs in your environment that would’ve helped, but you didn’t understand the attack well enough to do the right analysis. The information that has that value might not be what we call cyberdefense information today, or even be accessible to the defenders. Think of VMs crashing at a much higher rate than normal, possibly because an attacker with imperfect information is trying to install something bad. Security people don’t usually see those logs because nobody treats them as defensive data. Similarly, license management can tell you if you suddenly have old versions of software running that weren’t running before. Don’t think of management tools and security tools as wholly separate; think about how to bring the data into your analytic environment.
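The VM-crash example above can be sketched in code: treat an operations feed as defensive data by flagging hosts whose crash count suddenly jumps above their historical baseline. This is a minimal illustration, not a real monitoring product; the data shapes (host names mapped to daily crash counts) and the 3-sigma threshold are assumptions for the sketch.

```python
# Minimal sketch: treat "management" telemetry as defensive data by
# flagging hosts whose VM crash count today far exceeds their baseline.
# Data shapes and the 3-sigma threshold are illustrative assumptions.
from statistics import mean, stdev

def crash_anomalies(history, today, threshold=3.0):
    """Return hosts whose crash count today is more than `threshold`
    standard deviations above that host's historical mean."""
    flagged = []
    for host, counts in history.items():
        if len(counts) < 2:
            continue  # need at least two samples for a stdev
        mu, sigma = mean(counts), stdev(counts)
        observed = today.get(host, 0)
        if sigma > 0 and (observed - mu) / sigma > threshold:
            flagged.append(host)
    return flagged

# Hypothetical week of daily crash counts per host, then today's counts.
history = {
    "vm-web-01": [1, 0, 2, 1, 1, 0, 1],
    "vm-db-02":  [0, 0, 1, 0, 0, 1, 0],
}
today = {"vm-web-01": 1, "vm-db-02": 9}  # sudden spike on vm-db-02
print(crash_anomalies(today=today, history=history))  # → ['vm-db-02']
```

The point isn’t the statistics; it’s that this data already exists in the operations side of the house and only needs to be routed into the analytic environment.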
The future of cyberdefense is an information and action problem. Think about the movement of information from place to place. Only two kinds of people survive in this business: incredible cynics and hopeless optimists. Sager is the latter, but he knows that the problem has gotten worse, not better. The bad guys have a better business model than we do: better information sharing, faster adaptation to new technologies, and very high efficiency, a very tight OODA loop. We, in theory, are protecting everything, all the time, from everything.
The vast majority of problems are known problems with known solutions. You could draw the terrible conclusion that you have lazy front-line defenders who don’t care, but think about who that defender is. Typically, he’s an underpaid tech school graduate pulled in many different directions without appropriate equipment and training. So apply the Pareto Principle: what’s the 20% of input that produces 80% of the output? Network hygiene, user administration, and other things that give you control and visibility of your environment. Yet we’re spending 90% of our resources to get that 80% of the output, which is a bad way to go; you need better automation and approaches. The remaining 20% of output matters, though, because that’s the determined adversaries: nation-states and other really bad guys. When nations compete, they cheat. And not everything is “cyber” (or IO or CNO); lots of stuff still happens in the real world with real people. It’ll actually be a happy day when the only adversary that concerns you is a nation-state, rather than drowning in information and process as we do now.
In the intelligence business, they talk about needing to “look over the horizon”. We need to get there and look beyond our own enterprise, because otherwise we’ll never solve our problems. One way is to have friends outside your borders who have the technical capability and willingness to share the data in a regular, methodical way. The other way is to have an intelligence service that looks in ways not limited to yourself and your friends, forward and backward and all around.
Don’t make the mistake of assuming the adversary is perfect. He’s kind of like us, except he’s bad. His tools don’t appear as if by magic; they have to be developed, acquired, and deployed too. If I can see those things happening, I can tune my defenses to what’s coming down the road. And don’t separate the two problems (the 80% and the 20%) as if they were different, unique, and independent. Everybody hides in the 80% noise, so if you ignore it, you’ll miss them. And the 80% stuff is actually pretty clever and can teach you new tradecraft: the tools and techniques you want to know about.
As a threat intel analyst, how do we get that information into a form and location that makes it usable to defenders? PDFs and all-upper-case DoD message formats don’t lend themselves to the uses we need. A human being has to take the report, read it, and go through a complex process to turn it into an actual defense (e.g. write a script, deploy a new policy). Why can’t I send an open format, such as XML, to share this information and let systems process it? Think about how the information will be used and get closer to a native language for that usage. Vendor lock-in is a terrible defensive strategy compared to standards.
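To make the “open format instead of a PDF” idea concrete, here is a sketch of machine-to-machine sharing: a structured XML indicator feed parsed directly into rule tuples a system could act on. The schema here (`<indicators>`, `<indicator type=… action=…>`) is invented for illustration; real-world exchange formats like STIX serve this role, and the addresses are documentation-reserved examples.

```python
# Minimal sketch: parse a structured indicator feed into actionable
# rules, with no human reading a PDF in the loop. The XML schema is
# hypothetical, loosely shaped like real standards such as STIX.
import xml.etree.ElementTree as ET

FEED = """
<indicators>
  <indicator type="ipv4" action="block">203.0.113.7</indicator>
  <indicator type="domain" action="block">bad.example.net</indicator>
  <indicator type="ipv4" action="monitor">198.51.100.23</indicator>
</indicators>
"""

def to_rules(feed_xml):
    """Turn each <indicator> element into an (action, type, value) tuple."""
    root = ET.fromstring(feed_xml)
    return [(i.get("action"), i.get("type"), i.text.strip())
            for i in root.iter("indicator")]

# Downstream systems could feed these tuples straight into a firewall
# policy or a monitoring watchlist.
for action, kind, value in to_rules(FEED):
    print(f"{action}\t{kind}\t{value}")
```

The payoff is exactly what the talk calls for: the information arrives in something closer to the native language of its usage, so a system, not a person, does the translation into defense.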
Professional “bad guys” (at least those who work for the US government) all agree that a well-managed network is a hard target. Doing the core things matters: patches, visibility, appropriate change control, etc. This doesn’t make it impenetrable, but it does harden the network and force the adversary to think and plan and cheat.
They also fear uncertainty: they like knowing the specifics of the target, like its behaviors and components and people. “Defense in depth” on its own has become a crutch. Throwing another layer of defense on something “because you can” adds cost and complexity unless you do it for known reasons and integrate it properly with the rest of your layers. Clever attackers, like clever users, find ways around your defenses, so put defenses in with purpose, according to a data-based model of the adversary. But building that model requires sharing information in an automated, standardized, trusted way. So how do we extend these ideas? Look at the stuff that already has standards and work from there. Threat information holds lots of rich data that we want to pump into our tools, not just read.
This is the new frontier: finding and sharing threat data.