NoMoreFreeBugs – ohnoes!

At CanSecWest last week (note to self: write a post about how awesome the conference was) a few well-known researchers, Alex Sotirov, Dino Dai Zovi, and Charlie Miller, began a movement against “free bugs”.  The basic and oversimplified premise is that they feel security vulnerabilities should not be handed over to vendors for free.  I don’t necessarily agree with this, but in reality, who cares?  To each their own.  This is really an individual choice.

Of course, this caused a few people to scratch their heads, and while I am sure there are other really dumb blog posts about this, I thought this one took the cake:

http://www.sophos.com/security/blog/2009/03/3680.html

Not only is the above blog post completely off the mark, but it is clear that the author is very inexperienced in dealing with security vulnerabilities.  Let’s look at some of the ridiculous comments made by Ross Thomas of Sophos.

“As one of those users, I have to say I’m not exactly delighted to discover that a so-called security researcher was so breathtakingly cavalier about the safety of my data and the privacy of my personal information. Apparently I’ve been vulnerable to this “idiot-proof” exploit for at least a year, and have only good luck to thank for the fact that no-one used it to drain my bank accounts in the meantime.”

Wow… talk about raising the level of FUD, and so soon in the post.  While we don’t have a heck of a lot of details on the bug (some have more than others), I can say with a pretty high confidence level that this bug could not be used to “drain” the author’s bank account.  If it could, there would be even less reason to disclose it.  😉

But wait, it gets even worse:

“The point I’m trying to make is that this wasn’t “his exploit” to do with as he saw fit.”

Really?  Didn’t the researcher, in this case Charlie Miller, spend the time to find this bug?  He found the bug and he wrote the exploit.  That does in fact make it his to do with as he pleases.

I guess that is really the entire point that Sophos and Ross Thomas are missing.  While I personally would report any vulnerabilities I find to the vendor, for free, it really is up to the individual researcher to do as he pleases with what he finds.  After all, he did put in the work.

“With today’s highly monetized black market for malware authors this kind of bug must not be permitted to exist even for a day, let alone a year”

More FUD!  Security vulnerabilities exist; they always have and they always will.  Get over it.  Bugs exist much longer than days, as it takes most vendors months to fix anything, and once you have reported the bug to a vendor it is no longer a secret.  While anyone could have found the same bug and used it for “bad things”, no one did.  So what does that tell you?  It suggests to me that the so-called “black market” and malware authors aren’t looking as hard, or maybe they aren’t as good at looking.

Let’s also not forget that users are always slow to patch their machines.  So waiting to report this really has no bearing on anything, especially when this specific bug has not been used in the wild.  Looking at the last few very successful pieces of malware, none of them used a zero-day.  In fact, one of the bigger ones, Conficker (although we all know the shady AV vendors inflate their numbers), used a known and patched vulnerability.  The trend lately has been: a patch is released, the bad guys reverse-engineer the patch, the bad guys start exploiting the vulnerability, and months later users get around to installing the patch.

Perhaps once we start to see more actual zero-days being used, and, let’s be honest here, perhaps once AV vendors start actually offering their users REAL PROTECTION that can’t be easily bypassed, then we can cast stones at someone for wanting to be paid for something they do in their spare time.


8 Responses to “NoMoreFreeBugs – ohnoes!”

  1. I couldn’t agree more.

    I’ve been pretty surprised at the reaction to the “No More Free Bugs” thing. I never would have thought that there were so many people who would think that it is anyone but the security researcher’s decision as to what they should/shouldn’t/can/can’t do with a vulnerability they find.

    It’s their work. If and how they want to monetize it is their business, so long as it’s legal. If a vendor wants guaranteed first-dibs on vulnerabilities in their products, they should be prepared to hire/contract out the analysis or make it known that they are willing to negotiate with independent researchers on found bugs.

    Anything else is essentially “free bugs” for the vendor, which is fine if that’s what the researcher wants to do for them (some researchers find them in the context of other paid work). It shouldn’t be something that vendors should rely on, though.

    All of this really shows a deep lack of understanding of how difficult this work can be and how much it’s taken for granted.

  2. I think we are seeing more 0day. The original vuln that Conficker was exploiting was 0day, and so were MS09-002 and the Adobe 0day, so it’s coming around. Not that I disagree with you; I think it’s totally up to the bug finder whether or not they sell the bug. If the reality we live in is that you should get paid for your work now that exploitable bugs are harder to find, then I think that’s OK. Vendors certainly don’t “deserve” to have people do research for free.

  3. Ross Thomas Says:

    Hi hellnbak,

    Nice to know I have at least one reader 😉

    The point I was trying to make about draining my bank accounts is this: while details are certainly scant (because TippingPoint bought “exclusive rights” to them, whatever that means), there’s at least a decent chance that the exploit could be used to install a keylogger. Once a keylogger is installed, my bank accounts are as good as empty and my identity is as good as stolen. In other words, even though — of course — this vulnerability has no direct means of accessing my chip ‘n’ PIN, all kinds of personal and financial chaos could ensue as an indirect consequence.

    As for the second quote, I think it’s quite defensible, though I perhaps didn’t justify it adequately in the original blog. I meant to suggest that the “scene” has changed in the last few years: it’s no longer a game now that millions of dollars are at stake, if it ever was a game in the first place. While Miller had no legal obligation to disclose the bug (though I would argue there perhaps *should* be a legal obligation, for a critical browser bug), I believe he as a security professional had a clear ethical duty to do so. It sometimes feels like people don’t make the connection between “box to be hacked” and “person who gets hurt because of it”.

    And yes, of course security vulnerabilities exist, and always will. But that situation won’t improve as long as people are rewarded for keeping them secret. And just because no-one happened to discover this bug doesn’t make hiding it any better, in my opinion. It just makes it not quite as bad as if someone had used it to start a giant new iBotnet.

    Internet security is serious business, and breaches have serious consequences. I consider it imperative that bugs be reported, especially when they’re in a browser. It sounds almost as if you agree…

    Thanks for reading! 🙂

    -Ross

  4. hellnbak Says:

    You sure about that? Conficker, unless I killed those brain cells at CanSecWest, was exploiting MS08-067, which was never an “in the wild” zero-day.

  5. hellnbak Says:

    Ross,

    You fail to make your point… again. I am not trying to be a dick here, but you clearly do not get it.

    “TippingPoint bought “exclusive rights” to them, whatever that means”

    What that means is that TippingPoint has purchased the vulnerability and can do with it what they please. The seller has agreed not to disclose it to anyone else, and it is up to TippingPoint what they do with it. Typically, they will report it to the vendor and then build protection into their IPS for it. They, and others, have been doing this for years. I don’t necessarily agree with it, but it is the reality and it seems to work.

    “there’s at least a decent chance that the exploit could be used to install a keylogger. Once a keylogger is installed, my bank accounts are as good as empty and my identity is as good as stolen.”

    You are being overdramatic and creating FUD again. Sure, the vulnerability allows for code execution, so yes, in theory a keylogger can be installed. You signed your blog post “Sophos Canada”, so I am assuming you bank in Canada, meaning the most I am going to get out of your bank in any given day via an online transfer (be it an email money transfer or a wire transfer) is $1,000.00, or maybe $2,000.00 if it is a corporate account. In order for this to work, I would have to be transferring that money to a Canadian account, increasing my risk of getting caught. So already, the cash payout plus the risk is way worse than the 5K plus hardware paid out for the bug.

    Your identity would not be stolen, as again, even with your banking credentials, things like your Social are not exposed to online banking. This is true of all of the “big Canadian banks”, and the above is even harder with any of the big USA banks. You mentioned chip ‘n’ PIN, which is also something that cannot be accessed via online banking. So sure, a keylogger may get me additional access to your life and your system, but again, your bank account is not drained.

    “millions of dollars are at stake”

    Really? Can you prove that with actual evidence, and I mean REAL evidence, not just the numbers that AV firms like to toss around for sales purposes? I bet you can’t. For years, losses to hacks have been overstated. Yes, there are losses, I am not disputing that, but more often than not those losses are inflated.

    Your next sentence is not only batshit crazy/scary, but also tells me that you have probably never spent your own free time hunting for a vulnerability:

    “While Miller had no legal obligation to disclose the bug (though I would argue there perhaps *should* be a legal obligation, for a critical browser bug)”

    There should be a legal obligation? Wow, I would love to see someone try and make that happen. That is when you will see the truly talented researchers go back to their roots and move underground. Perhaps software vendors, including security companies, should have some sort of legal obligation to write secure code in the first place. What about the losses corporations suffer because they have to continually patch broken code? How about the losses suffered when software doesn’t work as advertised? How is a researcher who spends his own time, basically for free, doing something a consulting company would charge hundreds of thousands for, more at fault for creating risk than the vendor who releases buggy software?

    “I believe he as a security professional had a clear ethical duty to do so. It sometimes feels like people don’t make the connection between “box to be hacked” and “person who gets hurt because of it”.”

    The bug already existed. The researcher did not create the bug. So what about the ethics of the software company that releases buggy code just to make an arbitrary deadline without performing basic due diligence and testing? How are they off the hook here? Where is the connection between poor security practice on the part of the vendor and “people who get hurt because of it”?

    “And yes, of course security vulnerabilities exist, and always will. But that situation won’t improve as long as people are rewarded for keeping them secret. And just because no-one happened to discover this bug doesn’t make hiding it any better, in my opinion. It just makes it not quite as bad as if someone had used it to start a giant new iBotnet.”

    But he isn’t being rewarded for keeping it secret. He is being rewarded for selling it to a vendor that will work to get it fixed. What he is being rewarded for is free QA and research work.

    “Internet security is serious business, and breaches have serious consequences. I consider it imperative that bugs be reported, especially when they’re in a browser. It sounds almost as if you agree”

    You are right, it is a serious BUSINESS. So start paying researchers for their work. I totally agree that bugs need to be reported, and before I read your reply I agreed that it should be done for free. But now, after reading your comment, I am slowly moving over to the NoMoreFreeBugs camp. I mean, let’s face it, you don’t care about the bug itself. You want to be able to say that you protect against it so you can sell more software, just like TippingPoint will sell more IPS, as will my employer. Like you said, this is a serious business, and everyone deserves to get paid.

    To be honest, I don’t even get why this is an issue. This is such a simple concept: researcher finds bug, researcher does what he/she wants with said bug. End of story.

  6. “While Miller had no legal obligation to disclose the bug (though I would argue there perhaps *should* be a legal obligation, for a critical browser bug)”

    Great, this of course means that Sophos has disclosed all the vulnerabilities found in their products, so that customers know the full extent of the risk they were at before patches were available. Right Ross?

  7. This is a great discussion… in fact, I couldn’t resist and wrote up a very small piece myself. I think both sides of this argument need to understand what’s at stake, and how they’re involved.

    Disclosing to a 3rd party like TippingPoint is lame, because they’re using it to make themselves money, not to increase security for the general populace in any measurable way.

    Let’s not miss the forest for the trees… I hope you don’t mind a link to my write-up here… http://preachsecurity.blogspot.com/2009/03/reflections-on-0-day-disclosure.html.

    Thanks for sharing your thoughts.

  8. Ross Thomas Says:

    @hellnbak:

    Once again I think you agree with at least one of my points ;). By “whatever that means” I wasn’t really asking what it means. It was an overly obtuse way of saying I think it’s mercenary and contrary to public interest. In other words, I don’t agree with it either.

    wrt financial loss, I’ve personally experienced this and can assure you more than $1,000 was stolen in a single day. I managed to get my (non-chip) ATM card skimmed a couple of years ago and discovered — at the airport! great timing — that, according to the bank official I spoke to, someone in Belgium had withdrawn at least $2,500 in $500 increments. Perhaps it works differently for Cirrus/Maestro transactions, or perhaps the laws have changed recently. But that’s really not the point, and you’re taking me too literally (or maybe I’m being too prosaic). The claim wasn’t that someone could *literally* empty my accounts. The claim was that via this exploit a keylogger could have been installed, and once that happens all bets are off vis-à-vis my privacy and finances. (I do my taxes online, for example, which provides more than enough information to steal my identity.)

    And yes, I believe millions of dollars are at stake. I’ve personally seen the dozens (if not hundreds) of hours we’ve spent detecting/mitigating against/cleaning up after Conficker, for example. Multiply that by every security vendor and you’re talking an awful lot of money. I’m not just some shill repeating “numbers that AV firms like to toss around for sales purposes”. Fair enough, my basis for saying that is anecdotal, but I think it’s obvious that with maybe 10m Conficker infections that need to be dealt with, we’re talking lots and lots of dollars sloshing around. IT isn’t cheap when you’re a big organization.

    wrt legal obligation: As I said in my original post, given the 1b+ people now online (a figure which is growing hugely every month) enormous amounts of damage can be caused by a successful drive-by browser exploit. These are of course uncharted legal waters, but I don’t think it’s that big a stretch to consider a failure to report such a critical bug to be reckless endangerment of the well-being of very many people. Note I’m not saying that’s what it is, I just believe there are strong parallels, and the legal system hasn’t really caught up to the new reality yet. (And yes, this new reality may well include liability of software producers for damage caused by bugs *they knew about*, by the way.)

    Perhaps everyone does deserve to get paid, but I believe given the brave new world we’re entering, there is a line at which it becomes an ethical (if not moral) obligation to report critical bugs that might harm large numbers of people and their property. Would the obligation exist, in your opinion, if he knew about the bug *and knew someone was about to use it in a 0day*? If so then you agree there’s a line, and we just disagree on where the line lies.

    Very interesting discussion. You’re now on my Google Reader 🙂

    @Jericho:

    I have no idea whether Sophos discloses to customers critical bugs in our software while we’re working on a patch. I would certainly hope so. But I would again argue that a critical bug in a browser used by millions of people is about as serious as it gets, and perhaps deserves a special category of its own when talking about how ethics must change to deal with the Internet age.
