Archive for security

I Totally Owned Your Grandma…

Posted in security on June 8, 2010 by hellnbak

This was originally written by me and posted here as a guest blog:


Guest editorial by Steve Manzuik

Lately there has been a lot of attention given to various privacy issues of social networking sites.  Whether it is Google’s Buzz automatically adding anyone you have ever emailed to your follow list or the multitude of Facebook privacy concerns, it seems that all of a sudden the world is now worried about their privacy on the Internet.  While I can understand why some users wish to have their privacy, I do chuckle a bit inside when I hear people complain that they wish to have privacy on an open and public network.

While this blog post is not specifically about privacy, I do want to state that expecting privacy on the Internet is a bit misguided, as no one has ever had privacy on the Internet.  Unless you are encrypting every little packet sent from your system, it has been read somewhere by someone for whom it was not intended.  Users are failing to make the connection between acceptable behavior in the real world and acceptable behavior on the Internet.  You wouldn’t yell something private in a crowded shopping mall, so perhaps you shouldn’t post it on a social networking site either. Privacy issues aside, the real topics that interest me when it comes to social networking on the Internet are the various ways that social networking tools become attack platforms. During the recent privacy debates Mark Zuckerberg, founder of Facebook, was quoted in the Washington Post stating the following:

“Facebook has been growing quickly. It has become a community of more than 400 million people in just a few years. It’s a challenge to keep that many people satisfied over time, so we move quickly to serve that community with new ways to connect with the social Web and each other. Sometimes we move too fast.”

If you put yourself into the mindset of an attacker, do 400 million targets, all centralized on one fast and ever-changing web application, not sound like a great place to play?  Attacks via the Internet are nothing new, but over the last five years we have seen the intent behind attacks shift from mostly harmless annoyances to actual well-planned business models that give an attacker the ability to create an income from successful compromises.  Whether that income comes from rented-out botnet cycles, spam, theft of corporate secrets, or even the outright stealing of bank funds, today an attacker has the ability to make some real money.  Combine this ability with 400 million targets who are mostly non-technical and running ineffective host-based security solutions, and you have a breeding ground for malicious behavior.  Or, as my grandma likes to call it: “that Facespace thing on the Internet”.

Without getting too platform or site specific – because let’s face it, these days it really doesn’t matter what operating system or browser you use – let’s look at some of the ways that your grandma will get abused via social networking.  I did some very fast brainstorming via email with some very smart colleagues and friends and we came up with some attack scenarios that are all possible today.  I won’t credit each person but you know who you are, so thank you for your input.

Attack Scenario 1:  Malicious ad content
The very core of most social networking sites’ business plan is to generate revenue via advertising.  This is achieved via partnership deals with the various online advertisers as well as, in some cases, the ability for general users to purchase ad space that appears in a targeted fashion.  This model has actually been leveraged before with much success, and I am sure there are multiple ways it can be abused. The two that pop into my head immediately are 1) generating an ad that entices users to click and then serves them malicious content, or 2) depending on how much HTML and JavaScript-fu you are allowed to use in an ad, having the ad itself contain malicious content.  This type of attack is actually very simple and in my opinion would probably have a high rate of success.  Remember, your anti-virus and other host-based security products are only protecting you from the threats they know about – meaning anything you throw together will have success until the security vendors collect their samples and write signatures for it.
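That last point, that host-based products only catch what they already have signatures for, can be illustrated with a toy, purely hypothetical hash-based detector. Real products are more sophisticated than this, but the evasion economics are similar:

```python
import hashlib

def matches_signature(sample: bytes, known_hashes: set) -> bool:
    # A "signature" here is just the SHA-256 of a known-bad sample;
    # detection only fires on a byte-for-byte match.
    return hashlib.sha256(sample).hexdigest() in known_hashes

malware = b"payload-v1"
known = {hashlib.sha256(malware).hexdigest()}    # the vendor already has this sample

print(matches_signature(malware, known))         # True: the known sample is caught
print(matches_signature(malware + b"x", known))  # False: a one-byte repack walks right past
```

Until the vendor collects the repacked sample and adds its signature, the detector is blind to it, which is exactly the window an attacker needs.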

Attack Scenario 2:  Spyware infested applications
I won’t get into the debate over what is and is not considered spyware. Social networking sites like Facebook have shown us that even if you are a shady scam artist, users are willing to install your application so they can grow virtual crops, manage fish farms, or pretend to be a mobster.  Why not take this to the next level and place spyware or other potentially harmful and malicious content into your games?  A smart attacker could easily come up with an application that the masses want, only to then leverage that popularity to do evil.

Attack Scenario 3:  Targeted attacks
This is probably the more interesting attack scenario, mostly because an attacker can leverage it to compromise those of us who feel we are too careful to become victims.  Social networks have been great for people to reconnect with old friends and maintain those connections. Because the very nature of a social network means that your connections, and even some conversations, are public, a savvy attacker could easily leverage this information to attack those who feel they are safe.  For example, if someone wanted to compromise my systems, I would hope that they would not have a lot of success by attacking me directly.

That said, they could target someone close to me who may not be as diligent with their online security.  Once that target is compromised, a targeted attack via their social network would have a higher chance of success – because who would suspect someone close to them as an attack source?  An alternative scenario could be to compromise someone in the target’s social network who is known to occasionally roam onto the target’s private network.  A back door installed via a social network attack could work wonders as a launching point for an attack once that system is connected to the right network.  The example used here is a targeted personal attack – but would this not also work very well to gain access to an internal corporate network?  We all love to share who we work for via our social networks.

Scenario 4:  Virtual gets real world
It seems that between various status updates, services like Gowalla or Foursquare, and the ability to instantly upload a photo to the web complete with geo-tagging information, we are able to know where everyone in our social network is, physically, at all times.  In many cases a lot of this information is public and viewable by anyone.  How long until petty thieves begin to leverage this information to determine which homes are empty and easy targets for robbery?
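To see just how precise that geo-tagging is, here is a minimal sketch of the degrees/minutes/seconds arithmetic that EXIF GPS tags use; the coordinates below are made up for illustration:

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert EXIF-style GPS degrees/minutes/seconds into signed decimal degrees."""
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern latitudes and western longitudes are negative.
    return -decimal if ref in ("S", "W") else decimal

# A hypothetical GPSInfo block pulled from a photo's metadata.
lat = dms_to_decimal(37, 46, 29.7, "N")
lon = dms_to_decimal(122, 25, 9.8, "W")
print(f"{lat:.4f}, {lon:.4f}")  # 37.7749, -122.4194
```

Four decimal places of a degree is roughly ten meters of precision: more than enough to pick out a front door from a single uploaded photo.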

The previous scenarios are only the tip of the iceberg when it comes to ways that an attacker can leverage the social networks themselves to conduct attacks. None of these scenarios is really new; each of them has already been used in a successful attack.  Of course, I have not gone into how one can protect themselves from these sorts of things.  The frightening reality is that today’s security mechanisms are not sufficient to protect us against today’s attack vectors.  The software industry has done a great job dealing with the messes of the past, but it has not adjusted or moved fast enough to address what is currently going on and what will happen in the future. No, this is not me saying that we should not run any host-based protection products; they are better than nothing.

The reality today is that we as end users of various social networking services are really at the mercy of the service providers.  With the shift in cloud computing and the ability for everyone to share everything online instantly we are placing a ton of trust in the hands of a few providers to protect us.  The Facebooks, Twitters, and Foursquares of the world owe it to their end users to be more diligent and perhaps provide a little more scrutiny to the services they offer.

Hopefully startups like Immunet continue to pop up and introduce interesting and hopefully more effective ways to protect end users from attackers and, sadly enough, from themselves.

Operating System Choice Does Not Equal Security

Posted in security on June 2, 2010 by hellnbak

Yesterday, while some of us in the USA were enjoying a day off, Google made the news with this article in the Financial Times stating that they are moving away from Microsoft Windows due to security concerns.  My first reaction was to question why a company with as many smart brains as Google would make such a misguided decision.  That was, of course, before I actually read the entire article.

To steal from the article:

“We’re not doing any more Windows. It is a security effort,” said one Google employee.

“Many people have been moved away from [Windows] PCs, mostly towards Mac OS, following the China hacking attacks,” said another.

I cannot comment directly on the so-called “China hacking incident” because I was involved in various meetings with unnamed companies and unnamed forensics experts, but I can comment on the stupidity of this clearly knee-jerk reaction.  Your operating system choice does not equal security.  I cannot put it any more simply than that.  If your company employs experts in Linux then it makes sense to standardize on Linux.  If your company’s expertise is in Windows, rolling out Linux, OSX, or any other operating system is asking for problems.

Obviously in Google’s specific case one could argue that they have more expertise in Linux.  So the switch from Windows isn’t a security decision, it’s common sense, and it makes me wonder why they had Windows boxes in the first place.  This quote from an unnamed employee says it best:

Employees said it was also an effort to run the company on Google’s own products, including its forthcoming Chrome OS, which will compete with Windows. “A lot of it is an effort to run things on Google product,” the employee said. “They want to run things on Chrome.”

I couldn’t care less what OS Google or any company standardizes on.  The reason I felt the need to comment on this was not because I think Google is making a mistake but because the press is taking some comments from “anonymous employees” out of context, turning this into something it’s not, and helping perpetuate a huge Information Security Myth.

The myth I speak of: “Switching to Mac OSX or Linux will make you more secure”.

Corporations get hacked; in fact they get hacked much more than we read about in the press.  Sometimes those hacks come via a “zero day” type attack, and other times via a known issue that the corporation failed to patch.  This is the reality of running a business in the Internet age.

Let me paraphrase what was said by myself and other “experts” back in February 2010:

Every operating system has its advantages and disadvantages in security, but none is a silver-bullet, more secure option.  Some represent a higher risk than others, but in reality you are only as secure as your ability to administer the chosen operating system.  This means that if your organization has IT expertise in Linux, then you are probably more secure running Linux than an operating system in which they do not have the same level of expertise.  The same goes for companies with Windows expertise; while I am sure that a good Windows administrator can find his way around alternative operating systems, I would not want that administrator to be responsible for securing an operating system he is not proficient in.

So while one could argue that in general Windows has been the riskier operating system to run, I would counter that argument by saying that, while correct in the past, it is this level of exposure and risk that has caused great improvements in Windows security.  Not to mention the fact that if you are Google you have a very large target painted on you, and no matter what operating system you decide to run you are, and probably always will be, a target of attackers.  Shift your operating system and attackers will shift their attack methods.

Based on available public information, the Aurora compromise may have come via an unpatched Internet Explorer vulnerability and was a targeted attack.  The second part of that sentence is actually the more important one here.  TARGETED ATTACK.  This means that when, not if, Aurora the sequel happens, it will come via an unpatched vulnerability in whatever operating system happens to be in use at the target company.

It is really too bad that the press in this particular case did not reach out to real security experts and get actual facts about what your operating system choice means for your security.  In fact, the Financial Times article is nothing more than FUD generated by “anonymous” quotes from “anonymous sources”.

The unfortunate part about FUD like this is that all week various executives at other companies will read this article and decide that, because the great minds at Google have done this to be “more secure”, they should follow suit.  They will bring in some clueless IT Security Consultant (aka CISSP) who will back up this opinion for the sake of billable time, and the poor IT guys will have to do their bidding and will ultimately make their company less secure than it was in the first place.

Rinse, wash, repeat… the cycle of Information Security Myths trumping actual progress continues.

Clueless FUD Article…

Posted in security on April 2, 2010 by hellnbak

I haven’t blogged anything of good use lately, so I thought I would start up again by calling out this completely useless and incorrect opinion piece.  On the Dark Reading blog, an article appeared entitled “Share — Or Keep Getting Pwned”.

Sigh.  Clearly zero research was done for this posting, as there really is a lot of information sharing going on in the industry.  While I will admit that the industry as a whole needs to be better organized, the assumption that no one shares inside the industry is wrong and very misleading to the sheep who actually believe what they read.

Take the second paragraph for example:

“Take the attacks on Google, Adobe, Intel, and others out of China (aka “Operation Aurora”). McAfee and other security firms investigating victims’ systems each had its own fiefdom of intelligence, occasionally publicly sharing bits of information, like the Internet Explorer zero-day bug used in many of the initial attacks. But did anyone have the whole picture of the attacks?”


Actually, yes.  Multiple people at multiple organizations did in fact have the whole picture.  I personally witnessed a lot of inter-vendor information sharing that was extremely helpful for those affected by this issue.  I am obviously not going to comment on who shared what information, or what exactly was shared.  But a lot of information that was never made public was in fact shared amongst many parties.  Even more “shocking”, this was done without the use of silly non-disclosure agreements (NDAs), based instead on reputation and personal trust relationships.  Meaning that there was zero corporate bullshit in the way of moving forward.

Using a second example, one that I can talk about more publicly without getting myself in trouble: we all remember the Marsh Ray TLS MITM bug from earlier this year.  Marsh Ray and Steve Dispensa both went above and beyond what was expected in sharing information with anyone.  They even attempted to leverage the muscle at ICASI to pull all the major vendors together and share.  Taking things a step further, Marsh personally offered to sit down and work directly with any vendor having issues with the bug.  Sure, the vulnerability release did not go as planned, these things rarely do, but it was handled in a very open and progressive manner.

These are only two of many examples.  There are even private mailing lists where COMPETITORS on the product side of the house routinely share information on various threats, ranging from malware to new exploitation techniques.  So again, the whole process could use some improvement (maybe I just found a use for VulnWatch), but the insinuation that sharing doesn’t happen because of jealousy or competitive reasons is way off base.  Most want to do the right thing even if it means working directly with a competitor.

Taking Responsible Disclosure for Granted

Posted in security on October 24, 2009 by hellnbak

The last couple years of my career have been interesting when it comes to disclosing vulnerabilities.  I have worked on some pretty big ones and a few aren’t even public yet.  Based on this I have been thinking a lot about how the industry as a whole handles vulnerability disclosure.  Yes, I am aware that this debate has raged on for years and will probably never be settled but I thought I would share my random thoughts here.

I have always been a fan of responsible disclosure.  Wait, let me first define what I feel responsible disclosure is.  To me it is very simple: do your best to get the vendor to fix the vulnerability without increasing the risk for the general Internet ecosystem.

Of course one could easily argue that the existence of the bug is already a huge risk and therefore should be disclosed immediately to the world.  While there is some truth in that statement my personal issue with going that route is that disclosing something without a patch or at least with some very strong mitigation options does not help anyone increase their security and simply shines a spotlight on the flaw.

Over my career I have been involved with some pretty cool and pretty serious vulnerabilities.  My involvement has mostly been around reporting the issue to the vendor and working with them to fix it.  Believe it or not, this can be a lot of work depending on the vendor.  In some cases you simply toss the crash dump over the fence and the vendor is able to run with it.  In other cases you end up having to supply a PoC, which I hate having to do BTW, and in even more extreme cases you actually have to sit down, prove the vulnerability on a live system, and even offer fix advice.  I have even been involved in cases where a vendor supplied a beta patch which the scary smart people I work with quickly proved to be flawed as well.

Unfortunately, over the years, the phrase “responsible disclosure” has become rather meaningless and really a one-way street.  Vendors, and note that I work for one, are very quick to remind researchers that they need to do the responsible thing, and most researchers do attempt to show as much good faith as they can.  The vendors themselves, however, seem to forget that responsible disclosure is a two-way street that requires both the vendor and the researcher to act in a manner that is best for those who are vulnerable.

Some vendors do a pretty good job while others are still extremely horrible.  Believe it or not, this is a place where other vendors should look at the Microsoft MSRC model.  The industry went from beating up on Microsoft during the 80s and 90s to seeing some great improvements in handling vulnerabilities and some proven process.  The only real criticism I can toss toward the folks in Redmond is that they are still a bit slow on some issues, and yes, I am still pissed at Culp for calling researchers terrorists after 9/11.  😉

I am really not sure where I am going with all of this, but I am finding myself becoming more and more frustrated with various vendors and their willingness to invoke the “responsibility” of the bug finder while not acting responsibly themselves.  While I understand that this has become a business and fixing bugs costs money, vendors need to understand that the researcher didn’t create the bug.  Their bad development process and lack of anything resembling an SDL process did.

Based on this frustration I have some advice for vendors.  Note that all of this is coming from the perspective of someone reporting a bug and not from the side of a vendor.


1.)  The researcher is not your enemy.  In fact, if you have a researcher contacting you about a vulnerability he/she found in your product, they are the exact opposite of the enemy.  They just provided you with some free QA and are handing you a great opportunity to not only improve your product but be seen in the public eye as a company that actually cares about its customers’ security and reacts accordingly.

2.)  If the bug is stupid or not exploitable, call it out.  But do so in a constructive manner.  Spend the time to sit down with the researcher and make sure you fully understand what they are telling you and that they understand how you came to the conclusion you have.  Again, you are not adversaries.

3.)  Be honest about patch timelines and potential issues you may have.  I don’t expect a vendor to share sensitive information with a researcher, but being genuine about timelines and the process to produce a patch helps.  Most will understand that you can’t produce a safe patch in 30 days or less.  But a reasonable timeline should be offered.

4.)  Over-communicate.  Nothing is worse than a researcher feeling ignored, or feeling that nothing is going on with the bug they reported.  As they sit and wait for some sort of communication back from the vendor, their mouse hovers over the send button of a full-disclosure post.  Vendors are one forgotten status update away from having the issue dropped on them.

5.)  Lawyers won’t work – usually.  Enough companies have tried and failed to silence a researcher with lawyers.  If you contact your legal team BEFORE you contact your developers about a vulnerability report, expect the vulnerability to leak to the public.  Again, the researcher is not the enemy.  Your bad coding process is.

6.)  Give the researcher credit.  At the very least you should credit the researcher.  Remember, unless they have sold this bug to one of the clearinghouses, they have done this QA work for you FOR FREE.  They should get recognition for their work.  Hell, if you have the headcount and a good workplace, hire this person to find more bugs for you.  Researchers like money just as much as corporations do.

7.)  Applying pressure via other means, such as mutual customers or corporate relationships, will just build resentment.  Any attempt to silence the researcher will simply turn a positive situation into an adversarial one.  As a vendor you do not want this.

8.)  Stay honest and back up your promises.  If you make commitments to the researcher, follow through on them.  It’s really that simple.

9.)  Use the vulnerability report as a mechanism to improve your internal development and QA processes.

10.)  Do not use #9 as a way to stall a patch or prevent disclosure.


I know the above seems very simplistic and obvious to most of us, but believe it or not, the majority of vendors out there still do not get it.  Vendors need to realize that having a bug found in their product is actually an opportunity, not a setback.

I encourage all researchers to remind vendors of THEIR responsibility in the process.  Be open with the vendor about what you feel a reasonable process to fix the vulnerability is.  The more approachable you are, the easier the entire process will be.

Got a vendor that’s not cooperating?  Contact me and I can try to help.

Of course, your other option is to simply disclose the issue to the public, but that opens a whole new can of worms.  My next post, when I get around to it, will talk about the issues a researcher faces when they go this route.

Writing original material is hard…

Posted in security on October 17, 2009 by hellnbak

It is a little ironic that I am basing this blog post on another blog post, but I am willing to admit that I rarely come up with good ideas of my own.

Over the weekend we saw lots of Twitter activity about a blog post over at McGrew Security.  While I applaud the effort in pointing out this complete scam job of a book, I do feel that perhaps the “authors” (can we even call them that?) are getting off a bit too easy.  Or at least one of them is.

Before I rant and make fun of them, let me first state that I too have written books.  I have even written books for Syngress.  While I am biased and honestly have not been paying attention, I have not seen a Syngress book worth purchasing since the Hack Proofing Your Network series – and this includes my own material.

I have worked with other publishers, and this is my take on Syngress as a book publisher: they went from being pretty cool and easy to work with during the Hack Proofing days to an outfit that attempts to churn out as many books as possible, as quickly and as cheaply as they can.  Apparently, if you can cut and paste from Wikipedia, you are now a Syngress author.  Syngress pays the lowest amount they can negotiate with you and then rushes you through the fastest possible timeline to get your work in and published.  Quality is not the goal here – quantity is.  Flood the market with enough cheaply made books and you eventually make money on a few of them.

Back when I wrote for Syngress they did recommend that we run various tools to ensure that we didn’t plagiarize anyone’s material, and they did do *some* technical editing, but my most recent experience resulted in a book being released with next to no oversight.  Hell, I know for a fact that the majority of my last Syngress book was a.) written from the bottom of a bottle and b.) not reviewed very closely by anyone.  I am honestly embarrassed about that one.
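For what it’s worth, the kind of tool Syngress recommended is not exotic.  A toy word-shingle overlap check (the sample strings below are invented for illustration) is enough to flag wholesale copy-and-paste:

```python
def shingles(text: str, n: int = 5) -> set:
    """Return the set of word n-grams ('shingles') in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(suspect: str, source: str, n: int = 5) -> float:
    """Fraction of the suspect text's shingles that also occur in the source."""
    s, src = shingles(suspect, n), shingles(source, n)
    return len(s & src) / len(s) if s else 0.0

source = "Nmap is a free and open source utility for network discovery and security auditing"
suspect = "Nmap is a free and open source utility for network discovery and security auditing says the book"
print(round(overlap(suspect, source), 2))  # 0.77
```

Anything scoring that high against a public web page deserves a human look, which is apparently more than this book got.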

So do we point a finger at the so-called authors?  Or is this a failure of the Syngress editing process and quality management?  I say both.  Jumping back to the blog post over at McGrew, we see this explanation from one of the authors:

Edit: Dustin L. Fritz (of The CND Group) has left the following comment regarding plagiarism in this book:

This was an honest mistake and I sincerely apologize for any miscommunication. I hope that the correct and proper citations can be added soon and that all questions regarding copyright and plagiarism issues can be resolved. I hope the book can still be enjoyed as a valuable contribution to the information security community and I hope it will go on to fulfill its objective in reaching anyone who desires to learn more about hacking and security. I want to specifically apologize to Jayson, Kent, Syngress, Rachel, Angelina, all the readers, reviewers, and others who have taken offense. I want to fix this and I sincerely appreciate everyone’s positive support!

Wait, “honest mistake”?  Really?  Let me jump back and steal more of McGrew’s content:

If you have a copy of this book that you bought or received for review, I encourage you to take a look at these pages and source URLs to see what I’m talking about:

page topic original source length
135 OSI Model 2 paragraphs and a table
141 Maltego Old description from 1 sentence
146 DNSPREDICT Many sources (likely original tool site) Entire description
149 Kismet Entire description
151 Netstumbler Entire description
153 SuperScan Entire description
154 Nmap Entire description
155 Paratrace Entire description
156 Scanrand Entire description
157 Amap Entire description (short)
161 Plug-in Paragraph description
164 Vulnerability Scanner Entire description
164 IBM Internet Security Systems Entire description & history
165 Nessus Entire description
166 Nessus Goes Closed License quoted
167 Tenable NeWT Pro 2.0 Press release? Entire description
168 Rapid7 Entire description
169 Microsoft Baseline Security Analyzer Entire description
170 eEye Retina Entire description
177 Exploits Entire description (full page of text)
179 Buffer Overflows Entire description
180 SubSeven and Stopping SubSeven Entire description
186 Metasploit Entire description
187 Core Impact Entire description
193 Registry Keys Entire description
194 Securing your logs Entire how-to
195 Event Viewer and HOW TO: Event Log Types Entire description
197-200 Last User Logged in Entire how-to copied
201 Last True Login Tool Many – Likely old description from website Entire description
202-204 Last logoff script Entire how-to
205-208 Windows Security Log Entire article
223 Description of NIST Two paragraphs
233-235 CompTIA Entire description
236 EC-Council Entire description
236-237 (ISC)2 Entire description
244 One-time Passwords Paragraph and list
246 Honey Pot Paragraph
253 Firewall Paragraph
255-256 Full-Disk Encryption Three sections
257-258 Snort Entire description
258-264 IPS The entire wikipedia article copied over multiple pages!
278 Wireshark Several sentences from the article
279 PGP Two paragraphs of description
281 Personal firewalls Short description
285 Perl Entire description
292 Bluesnarf Entire description
299 Bleeding edge technology description and list
303-305 ECHELON Entire description + photo
310 Ghost Rat Two paragraphs
332 2600 Magazine Entire description
333-334 Gary McKinnon Entire description
336 PSP Hack Tutorial
396 World of Warcraft Large paragraph
399-400 Infragard Entire description
404 Bump Keys Entire description


That is no honest mistake.  The mistake here was that this so-called “author” thought he could get away with cutting and pasting from online resources.  There is zero honesty in this mistake.  What is even funnier (at least to me) is that Syngress didn’t even catch this in their so-called edits and reviews.

Miscommunication?  Really?  What part of cutting and pasting from a website results in a miscommunication? 

To quote someone who will remain nameless because they said this in private:  “honesty and quality are not priorities for Syngress.”

Apparently, honesty and quality were not priorities for at least one of the authors of this book.  Mistake?  Yes.  Honest?  That’s hard to believe.

For my next book I think I will just cut and paste directly from Twitter.

What a complete joke.

IIS Webdav Bug

Posted in security on May 25, 2009 by hellnbak

Just wanted to do a quick post to point out a couple of good posts on the IIS WebDAV bug.  Am I the only one who thinks it’s kind of cool to see another IIS bug after spending months upon months dealing with file format type bugs?

A great and detailed post from Todd Manning over at BreakingPoint (you should follow this blog, it’s great!)

And an amusing post showing yet another consequence of the bug:

Sentex Locks

Posted in security on March 11, 2009 by hellnbak

This is a very amusing blog post that I thought I would toss on here. Great find, and I am going to test this later today.

Full post from:

How to open many keypad-access doors 03/11/2009


Here’s a fun little tip: You can open most Sentex key pad-access doors by typing in the following code:

*** 000000 99# *

The first *** are to enter the admin mode, 000000 (six zeroes) is the factory-default password, 99# opens the door, and * exits the admin mode (make sure you press this, or the access box will be left in admin mode!)

I’m not sure how prevalent they are, but here in San Francisco, Sentex building access systems seem to be the most popular.