I really enjoyed reading the recent article on sharing threat intelligence by Conrad Constantine of AlienVault. He has clearly spent a lot of time thinking about and working with this issue, and he has some really fascinating perspectives. While I discuss the article below, you should read it first for context (and because it has additional thoughts not explored here).
As he notes, this controversy has (at least) two sides, and I lean toward the first. We should continue to explore how to share threat intelligence, though we have a lot to figure out. As a side note: OSVDB, by definition, does not contain threat intel but vulnerability intel. The other two sites listed (Malware Domain List and Shadowserver) do provide good examples. Organizations will always have to “construct their own processes and technology to consume it effectively”, particularly processes. As an example, most practitioners would agree that current-generation antivirus / antimalware has a terrible intelligence process, driving us to look to entirely different technologies.
Intelligence sharing within an industry vertical works okay. My experience primarily centers around the Financial Services ISAC, which has some fairly basic but useful methods for data sharing. Even then, many organizations find themselves mired in fear of their own staff. And not for nothing, but classification affects more than just the defense industrial base. We live in an environment in which all critical industries find themselves semi-nationalized, and thus national loyalty matters even for corporate and economic concerns. I’ll leave discussion of whether that makes sense out of this post, but the fact remains that industries far removed from “national security” use this process as well.
“The progress we haven’t made in security data sharing isn’t because of limitations in technology or legal implications (both of which can be overcome with little effort). People don’t want to share because of those old faithful standbys still gnawing at the human mind: fear and greed. Fear of how whatever we share may be used against us, greed for anything we can get for free, or better yet, monetize the transaction.” (emphasis original)
The article also seems to imply some problems with commercializing this segment. In my view, vendors may charge, but they (should) also provide important components like validation and data cleaning so that those consuming the intelligence can trust it and put it directly into use. He prefers that people rely on ‘enlightened self-interest’, but this looks like an extension of the classic prisoner’s dilemma (diner’s dilemma). We should look at existing thought surrounding concepts like the tragedy of the commons to find possible solutions here. Clearly, in any workable arrangement, you must receive more than you give – but I suggest that you must also give in order to receive, or the whole arrangement will fall apart. Economics and game theory have lots of history and application that can help us here.
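To make the game-theory point concrete, here is a minimal sketch (my own illustration, not from the article) modeling intel sharing as an iterated prisoner’s dilemma. The payoff values and strategy names are hypothetical; the point is only that a reciprocal strategy sustains sharing, while a pure free-rider gains once and then the relationship collapses into mutual hoarding.

```python
# Illustrative iterated prisoner's-dilemma model of intel sharing.
# "share" = cooperate, "hoard" = defect. Payoff numbers are invented.

PAYOFF = {  # (my move, their move) -> my payoff
    ("share", "share"): 3,   # both benefit from pooled intel
    ("share", "hoard"): 0,   # I give and get nothing back
    ("hoard", "share"): 5,   # free-ride on the other party's intel
    ("hoard", "hoard"): 1,   # everyone defends alone
}

def tit_for_tat(their_history):
    """Share first, then mirror the partner's last move."""
    return their_history[-1] if their_history else "share"

def always_hoard(their_history):
    """Never share anything."""
    return "hoard"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each side sees the other's history
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two reciprocators sustain sharing; against a hoarder, sharing
# collapses after one exploited round.
print(play(tit_for_tat, tit_for_tat))   # (30, 30)
print(play(tit_for_tat, always_hoard))  # (9, 14)
```

The hoarder still “wins” the pairwise match, which is exactly why enlightened self-interest alone struggles: the arrangement only works when reciprocity is enforced or rewarded.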
Some data doesn’t require too much anonymization. In general, higher-level threat intelligence presents less operational risk than lower-level data. Statistical incident summaries or even data using tools like the VERIS Framework will rarely if ever endanger organizational security. This becomes more true when aggregated by a trusted third party or held until investigators have completed the confidential portions of their work. We can delve into greater detail with tools like IODEF or even OpenIOC, but these may necessitate more scrubbing in some cases to protect against disclosing internal confidential data.
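As a rough illustration of why statistical summaries carry little operational risk, here is a hedged sketch of reducing raw incident records to aggregate counts before sharing. The field names are my own invention, loosely inspired by VERIS-style categories; this is not the actual VERIS schema.

```python
# Hypothetical sketch: strip identifying fields (hostnames) from raw
# incident records and keep only aggregate counts safe for sharing.
from collections import Counter

incidents = [
    {"action": "hacking", "asset": "web_server", "victim": "acme-internal-01"},
    {"action": "malware", "asset": "workstation", "victim": "acme-hr-12"},
    {"action": "hacking", "asset": "web_server", "victim": "acme-dmz-03"},
]

def summarize(records):
    """Return a statistical summary with no per-host identifiers."""
    return {
        "total": len(records),
        "by_action": dict(Counter(r["action"] for r in records)),
        "by_asset": dict(Counter(r["asset"] for r in records)),
    }

summary = summarize(incidents)
# The "victim" hostnames never leave the organization; only the
# shape of the incident data does.
print(summary)
```

A trusted third party could aggregate these summaries across many organizations, which dilutes the residual risk even further.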
The real issue comes when an ongoing investigation becomes compromised due to sharing this intelligence. The article gives the example of an attacker who uses a host only to attack a specific target and then realizes he’s been “made” once that host appears in a threat intel feed. Similar issues have arisen in the past with malware samples uploaded to AV vendors or even VirusTotal. This directly drives the concerns many organizations have with sharing information. In my view, the risk depends on the type of attacker, or at least his sophistication level, and whether you suspect a targeted attack.
Perhaps it would help organizations to determine in advance how widely to share certain types of data (and at what point in an investigation). If you’ve established good rapport with your counterparts in organizations much like yours (e.g. competitors), then you can start sharing with them fairly early in an investigation. Once you’ve disclosed an incident, you can disseminate more widely. Automated sharing should only occur in the case of automatically-detected attacks (e.g. “IP addresses attempting web application exploits caught by WAF”). While OPSEC considerations still apply here, keep in mind that you should still receive more useful intel than you share. And don’t forget that attackers frequently share target data, including in underground markets.
For background on the image, see Badger Guns on Wikipedia and the reference links, or search for “Badger Guns Milwaukee racism”.