Independent security researchers often have a reputation as narcissistic vulnerability pimps (deserved or not), but the environment that has evolved around information security largely drives this. The issue came to a head for me tonight in a Twitter discussion kicked off by Steve Werby:
"CTFs are awesome, but if we can teach students how to articulate recommendations in a way non-technical staff can understand, even better."
— Steve Werby (@stevewerby) March 08, 2013
Creating an exploit can pay anywhere between $1,000 and $100,000 (or possibly more in specific circumstances), depending on the researcher's choice of market and on the product or technology targeted. This even affects areas that many users believe to be unrelated, like mobile OS jailbreaks, which essentially consist of exploits that gain root control despite the operating system's best efforts to the contrary.
No equivalent market exists for threat-related research. Freelance malware analysts don't have similar economic drivers because the organizations with an interest in this information generally do the research themselves; you can't monetize malware analysis or attribution the way you can an exploit. Put another way, nobody believes that Krebs and Danchev get rich from what they do. I don't think the market can "fix" this, although I'd welcome ideas or evidence to the contrary, but we do need to recognize the asymmetry when thinking about software security and threat identification.
Believing that security, on its own, adds value often turns into a form of the broken windows fallacy: the spending is visible, but it preserves value rather than creating it. And creating artificial demand for threat intelligence could lead to all sorts of perverse incentives. Some of the same organizations interested in purchasing vulnerabilities and exploits might also want highly focused intelligence, such as espionage against particular threat groups, but at that point the line between "offense" and "defense" becomes very fuzzy.
I’d love to hear alternate viewpoints and suggestions on where this can go.