Author Archives: Daniel Martin

BSides London 2013 aftermath

BSides London took place last Wednesday the 24th at the Kensington and Chelsea Town Hall, near High Street Kensington tube station in London.

I was really looking forward to this year’s edition as, for the first time ever, Dradis Pro was a sponsor at a security event. There were a lot of lessons learned on that front alone, but I’ll save them for another post.

It was a really long day. I only finished the slides for the Creating Custom Dradis Framework Plugins workshop around midnight the night before, and I got to the venue by 8am to give the organisers a hand with the preparations. On the bright side, we had a really good turnout at the workshop:


Creating Custom Dradis Framework plugins in action (more pics)

I think the final head count was around 500 people, both from around the country and from abroad. The downside is that we had to prepare around 500 tote bags with sponsor swag; the upside is that some sponsors provided some really nice goodies 😉

BSides swag by ScotSTS, 7Elements and Dradis Pro

The truth is that running an event such as BSides is a ton of work, and the team does it for free. It doesn’t cost a penny to attend, and you get a really nice free t-shirt:

BSides London t-shirt

I don’t think people thank the organisers enough. Thanks guys! Both to the visible faces of the organisation and to the rest of the conference goons who make all the little moving parts of the event tick.

As usual at this type of event, it’s easy to let yourself be distracted by the social side of things. I managed to finally catch up with a lot of Dradis Community contributors and Dradis Pro users, and hopefully met a few future ones 😉 I finally put a face to some of the #dc4420 peeps and managed to catch up with some people that I no longer get to see that often.

It always baffles me that after working at a company for five years, you end up meeting some of your colleagues for the first time at a random security event instead of in the office or at an official company gathering. I guess that’s the nature of the industry we are in, though. It was also good to catch up with ex-colleagues from previous lives.

Even though the scheduling gods decided I had to miss Marion & Rory’s workshop in the morning, I managed to get myself a WiFi Pineapple after Robin’s, just in time to rush to the main hall to catch the closing ceremony.

WiFi Pineapple kit

And before you realised it, the day was over and you were having a pint too many at the official BSides after-party…

Should you create your own back-office/collaboration tools?

When an organisation tries to tackle the “collaboration and automated reporting problem”, one of the early decisions to make is whether to build a solution in-house or use an off-the-shelf collaboration tool.

Of course, this is not an easy decision to make and there are a lot of factors involved including:

  • Your firm’s size and resources
  • Cost (in-house != free)
  • The risks involved

But before we discuss these factors, it will be useful to get a quick overview of what’s really involved in creating a software tool.

The challenge of building a software tool

What we are talking about here is building a solid back-office tool, something your business can rely on, not a quick Python script to parse Nmap output.
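For contrast, this is roughly what such a throwaway script looks like — a sketch that assumes Nmap’s grepable output format (`-oG`). It works for the happy path, but it has no tests, no docs and no error handling, which is exactly why it isn’t the kind of tool a business can rely on:

```python
# Quick-and-dirty parser for Nmap grepable output (-oG).
# Fine as a one-off helper; not a back-office tool.
import sys

def parse_grepable(path):
    """Return {host: [open ports]} from an Nmap -oG output file."""
    results = {}
    with open(path) as f:
        for line in f:
            # Grepable host lines look like:
            # Host: 10.0.0.1 (gw)  Ports: 22/open/tcp//ssh///, 80/open/tcp//http///
            if not line.startswith("Host:") or "Ports:" not in line:
                continue
            host = line.split()[1]
            ports_field = line.split("Ports:", 1)[1]
            open_ports = [int(entry.split("/")[0])
                          for entry in ports_field.split(",")
                          if "/open/" in entry]
            results[host] = sorted(open_ports)
    return results

if __name__ == "__main__" and len(sys.argv) > 1:
    for host, ports in parse_grepable(sys.argv[1]).items():
        print(f"{host}: {', '.join(map(str, ports))}")
```

Twenty lines, one afternoon. The gap between this and a dependable back-office system is what the rest of this section is about.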

My background is in information security and electrical engineering, but I’ve written a lot of software, from small funny side projects, to successful open source tools and commercial products and services.

I’m neither a software engineering guru nor an expert in software development theory (I can point you in the right direction though), but building a software tool involves a few more steps than just “coding it”. For starters:

  • Capture requirements
  • Design the solution
  • Create a test plan
  • Develop
  • Document
  • Maintain
  • Improve
  • Support your users

The steps will vary a little depending on your choice of software development methodology, but you get the idea.

If one of the steps is missing, the whole thing falls apart. For instance, say someone forgets to ‘document’ a new tool. Whoever wants to use the tool and doesn’t have direct access to the developer (e.g. a new hire 8 months down the line) is stuck. Or, if there is no way to track and manage feature requests and improvements, the tool will become outdated pretty quickly.

With this background on what it takes to successfully build a solid software tool, let’s consider some of the factors that should play a role in deciding whether to go the in-house route or not.

Your firm’s size and resources

How much can your firm invest in this project? Google can put 100 top-notch developers to work on an in-house collaboration tool for a full quarter and the finance department won’t even notice.

If you are a small pentesting firm, chances are you don’t have much in terms of spare time to spend on pet projects. As the team grows, you may be able to work some gaps into the schedule and free up a few people. This could work out. However, you have to consider that not only will you need to find the time to create the initial release of the tool, but you’ll also need to find the resources down the line to maintain, improve and support it. The alternative is to bring a small group of developers onto the payroll to churn out back-office tools (I’ve seen some mid- and large-size security firms successfully pull this off). However, this is a strategic decision which comes with a different set of risks (e.g. how will you keep your devs motivated? What about training/career development for them? Do you have enough back-end tools to write to justify the full salary of a developer every month?).

Along the same lines, if you’re part of the internal security team of an organisation that isn’t focussed on building software, chances are you’ll have plenty on your plate already without adding software project management and delivery to it.

Cost (in-house != free)

There is often the misconception that because you’re building it in-house, you’re getting it for free. At the end of the day whoever is writing the tool is going to receive the same salary at the end of the month. If you get the tool built at the same time, that’s like printing your own money!

Except… it isn’t. The problem with this line of reasoning is the “at the same time” part. Most likely the author is being paid to perform a different job, something that’s revenue-generating and has an impact on the bottom line. If the author stops delivering that job, all that revenue never materialises.

Over the years, I’ve seen this scenario play out a few times:

Manager: How long is it going take?
Optimistic geek: X hours, Y days tops
Manager: Cool, do it!

What is missing from the picture is that it is not enough to set aside a few hours for “coding it”; you have to allocate time for all the tasks involved in the process. And more often than not, Maintaining and Improving are going to take the lion’s share of the resources required to successfully build the tool (protip: when in doubt estimating a project, assume it will take significantly longer than your first guess).
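A back-of-the-envelope sketch makes the point. All the day counts below are made-up illustrative assumptions, not figures from any real project; the shape of the result is what matters:

```python
# Rough lifecycle cost for an in-house tool. Every figure here is an
# illustrative assumption -- swap in your own numbers.
PHASES = {
    "capture requirements": 3,
    "design": 5,
    "develop (the part usually estimated)": 15,
    "test": 5,
    "document": 4,
    "maintain (year one)": 20,
    "improve (year one)": 15,
    "support users (year one)": 10,
}

def total_days(phases):
    """Total effort across the whole lifecycle, not just the coding."""
    return sum(phases.values())

coding_only = PHASES["develop (the part usually estimated)"]
everything = total_days(PHASES)
print(f"'coding it' estimate: {coding_only} days")
print(f"full first-year cost: {everything} days ({everything / coding_only:.1f}x)")
```

Even with generous guesses, the “X hours, Y days tops” quote captures a small fraction of the first-year cost, and maintenance, improvement and support dominate the total.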

One of the tasks that really suffers when going the in-house route is Support: if something breaks in an unexpected way, who will fix it? Will this person be available when it breaks, or is there a chance they’ll be on-site (or abroad) for a few weeks before the problem can be looked into?

Your firm’s revenue comes from your client work, not from spending time and resources on your back-end systems. The fact that you can find the time and resources to build the first release of a given tool doesn’t mean that maintaining, supporting and improving your own back-end tools will make economic sense.

The risks of in-house development

There are a few risks involved in the in-house approach that should be considered. For instance, what happens when your in-house geek, the author of the tool, decides to move on and leaves the company? Can someone maintain and improve the old system or are you back to square one? All the time and resources invested up to that point can be lost if you don’t find a way to continue maintaining the tool.

Different developers have different styles and different preferences for development language, technology stack and even source code management system. Professional developers (those who work for a software vendor developing software as their main occupation) usually agree on a set of technologies and practices to be used for a given project, meaning that new people can be brought on board or leave the team seamlessly. Amateur developers (those who like building stuff but don’t do it as their main occupation) have the same preferences and biases as the pros, but are happy to go with them without giving them a second thought, as they don’t usually have to coordinate with others. Normally, they won’t invest enough time creating documentation or documenting the code because, at the end of the day, they created it from scratch and know it inside out (of course, 6 months down the line, they’ll think it sucks). Unfortunately, this means that the process of handing over or taking ownership of a project created in this way will be a lot more complicated.

When building your own back-end systems you have to think: who is responsible for this tool? Another conversation I’ve seen a few times:

(the original in-house author of the tool just moved on to greener pastures)
Manager: Hey, you like coding, will you take responsibility for this?
Optimistic geek: Sure! But it’s Ruby, I’ll rewrite the entire thing from scratch in Python and we’ll be rolling in no time!
Manager: [sigh]

If you are part of a bigger organisation that can make the long-term strategic commitment to build and maintain the tool then go ahead. If you don’t have all those resources to spare and are relying on your consultants to build and maintain back-end tools, be aware of the risks involved.

Conclusion: why the in-house approach doesn’t always work

The in-house development cycle of doom:

  1. A requirement for a new back-office tool is identified.
  2. An in-house geek is nominated for the task and knocks something together.
  3. A first version of the tool is deployed and people get on with their business.
  4. Time passes, tweaks are required, suggestions are made, but something else always has priority on the creator’s agenda.
  5. Maybe after a few months, management decides to invest a few days from the creator’s time to work on a new version.

As you can imagine, this process is unlikely to yield optimum results. If building software tools is not a core competency of your business, you may be better served by letting a software development specialist help you out. Let them take care of Maintaining, Improving and Supporting it for you while you focus on delivering value to your clients.

Of course the other side of this coin is that if you decide to use a third-party tool, whoever you end up choosing has to be worthy of your trust:

  • How long have they been in business?
  • How many clients are using their solutions?
  • How responsive is their support team?

These are just some of the highlights though, the topic is deep enough to warrant its own blog post.

tl;dr

Going the in-house route may make sense for larger organisations with deeper pockets. They can either hire an internal development team (or outsource the work and have an internal project manager) or assign one or several in-house geeks to spend time creating and maintaining the tools. But remember: in-house != free.

Smaller teams and those starting up are usually better off with an off-the-shelf solution built by a solid vendor that is flexible and reliable. However, the solution needs to be easily extended and connected with other tools and systems to avoid any vendor lock-in of your data.

Choosing an independent penetration testing firm

There’s been a recent post on the [pen-test] mailing list asking for advice on the things to consider when choosing an independent penetration testing company. The original request went as follows:

I’m currently in the process of sizing up/comparing various
Penetration Testing firms, and am having a bit of trouble finding
distinguishing characteristics between them. I’ve looked at a fair
few, but they all seem to offer very similar services with little to
recommend one over another.

The thread was full of good advice. However, having first-hand experience in a number of these penetration testing firms I thought it would be a good idea to dig a bit deeper into the subject: what makes a penetration testing company great?

What are your requirements?

But first things first, do you need a penetration test? Do you know what a penetration test consists of? What would be the goal of the test if you performed one? These are really the key questions that you need to be able to answer before even considering choosing an external security partner.

Unfortunately, in depth answers to those questions fall outside the scope of this article. Doing a bit of internet research as well as reaching out to industry colleagues, peers and business acquaintances that are in a role similar to your own would be a first step in the right direction.

It is not a bad idea to ask each of the vendors you evaluate to help you with the answers to those questions. It will enable you to understand their approach. Do they really have your best interest in mind? Will they make sure that you are able to define the problem and sketch your goals before jumping on their keyboards and sending you an invoice? Are they knowledgeable enough about the relationship between security and your business and the tradeoffs involved?

Contrary to what one might think, in the majority of the cases security projects are performed “just because” without a clear goal in mind:

  • We made a change in the app and policy says we need to have it pentested.
  • We deployed a new server and IT said we need to pentest it.
  • A year has passed since our last test and we have to do it again.

If you are completely lost and don’t know what services you need or what services might be available, it may be worth getting some external help just to clarify your requirements. You can bring in an independent consultant for a couple of days to gain a sufficient understanding of your own requirements, so that when you go out and shop around for security partners you know what you are up against.

IT generalists vs penetration testing specialists

After correctly understanding what your problem is and what type of testing you need, the next thing to get out of the way is deciding whether you should go for a general IT contractor or a security specialist. As usual, there is no clear-cut answer and it depends on your needs more than anything else.

I’ve worked with big integrators where the security team was virtually non-existent. Of course that didn’t prevent the business from selling security services. ‘Security consultants’ would spend their time doing IT deployments (e.g. firewall and router configurations, etc.) or coding Java until the odd security project arrived, when they would gather in a team and deliver it. This may work for you or not, but it’s worth thinking about. Do you need a lot of support in several IT areas? It would definitely be easier to establish a relationship with a single IT provider than to shop around for specialist vendors for each area.

When dealing with a generalist, make sure you understand their approach to security testing. Most of the advice given below for firms can be directly translated to the “security function” inside a bigger consultancy or integrator.

Company background

Let’s assume that you have decided to go for a security testing company. What are the important factors to consider before making a decision?


As with all business decisions, trust is a very important factor when evaluating security services vendors. Can you get any verifiable references for any of these firms? If you approach a firm and make them aware that X from company Y recommended their services, you are likely to get a better deal than if you didn’t. The level of service you get will also be different, as failing or disappointing you carries the risk of upsetting the existing relationship they have with the people who referred you in the first place.

You can always ask the different vendors to put you in contact with organisations of a similar profile to yours. It’s important that they are of a similar profile, or their feedback might not be as valuable. If you are an SME owner in the tourism industry, a reference from the CSO of a huge high-street bank is of little value. Chances are they are pouring money into the vendor and the firm is bending over backwards to ensure the bank is fully satisfied.

You can ask for examples of similar projects they have undertaken. Don’t settle for a conference call or a conversation on the subject; consultants (security or otherwise) are paid to sound good even when they aren’t experts in what they are talking about. Try to push for a sanitised report (not the marketing sample) to see what a real-world deliverable looks like (more on this later).

If you can’t get any solid references or pointers through your business contacts, you’ll need to establish trust by yourself. This will take a bit of work and time, but it is definitely worth the investment. Also, be aware that this is a very technical service you’re shopping for. You have to be able to trust both the management team and the technical team in the firm. There are several things you can look into when trying to establish that trust.

Research and conferences

Something you hear often when shopping around for penetration testing providers is that the company “should present at conferences”. In principle this sounds reasonable: if the team is on top of their game, they will be performing cutting-edge research that is of interest to the security industry, which will earn them a spot at security conferences. However, the truth is that with an ever-growing number of security conferences every year, each in turn running an ever-growing number of tracks in parallel, not all conference speaking slots are created equal. To give you an example, this month there are at least six of them (not counting BSides London, which we are sponsoring ;)).

When evaluating security conference presence, it is important to analyse the content that was presented. Was it really research, or does the company employ a well-known industry expert who is regularly invited to speak and give their opinion on the state of the art? Does the research have sufficient breadth and depth, or was it put together in a rush to have it ready in time for the conference? Is it relevant to your business? For instance, imagine you need to get your SAP deployment tested. Even if a well-known company has someone finding amazing bugs in some cutting-edge technology like NFC, you may be better served by a lesser-known company presenting on a SAP testing methodology or on the SAP testing toolset they have built over the last few years doing this type of testing.

Something similar could be said about published advisories: the fact that a company has hundreds of published advisories may or may not be relevant to your needs. Are the advisories in technologies your company uses? If all their advisories are in Microsoft-related technologies and you are a Linux/Solaris shop, that won’t help. This is a tricky one to assess, especially for non-security people, but it is worth being on the lookout for the “but we publish advisories!” line and asking a few follow-up questions to see if the company’s background is aligned with your own needs.

Finally, is all this conference presence and research recent enough? The security industry changes quickly, and even though security specialists are fairly loyal to their employers, they move on from time to time. Double-check your facts to ensure that the research the vendor presents as proof of competence is recent and that the authors are still with the company. The same could be said of books, courses or tools that have been written “by company members”. Verify they are still around to help your company, and if you find out they are not, at least call their bluff to see how your point of contact reacts. The savvier you look in their eyes the better 🙂

The legalese

This falls a bit outside the scope of this post in the sense that it has nothing to do with the firm’s technical competence. However, it is essential that you consider these points as part of your due diligence process:

  • Does the company carry sufficient insurance and reasonable legal agreements?
  • Are there any NDA terms that you need to discuss with them?
  • Does the firm hold any relevant certifications that your company might care about (e.g. ISO 27001)?

Their approach to testing

After covering the basics of the company’s background, the next thing to focus on is their approach to testing.

There is a lot of solid advice on this subject in this 2007 post by Chris Eng on the Veracode blog. I’ll include a few references to it here, but please go and read it now; it’s well worth the time.

For instance, Chris recommends asking vendors under what circumstances they would advise a customer to bear the risk of a vulnerability. If they can’t give a good example of this, he continues, you might be dealing with someone who views security in a vacuum and doesn’t consider other business factors when framing recommendations. This hits the nail on the head. Your vendor’s approach needs to be aligned with your business goals; otherwise, the return on your investment will be very poor. This type of question should be put to the people who will be directly involved in the technical delivery of your projects, not to your salesperson or account manager. At the very least, you should have a conversation around this with the head of the pentest practice (or the technical director).

Team lottery

When working with a technical consultancy, the bigger it gets, the bigger the risk of being affected by the “team lottery”: the variation in service you will notice depending on who gets assigned to deliver your projects. There are two factors that can minimise the risk of the team lottery: the company’s workflow/methodology and the overall composition of the team.

The team

I want to open this section with a quote from Avoid wasting money on penetration testing that makes a great point:

Finally, remember that companies don’t perform penetration tests, people do. So no matter which company you go to, it always boils down to the person you have working on your account.

It is key to cut through the sales layer and try to reach the technical director or pentest practice leader. If you are going to spend any significant amount of money, I’d push even harder (at least for the first engagement, and every now and then afterwards) and request a conversation with the testers assigned to your project, or at the very least request their CVs/bios. Do they have experience working under your requirements? Does their general work experience make you comfortable (e.g. someone who just started their pentesting career may not be the best fit to test your critical AS/400 mainframe)? If in doubt, request a conversation or find out if someone else can be assigned to your project. Scheduling is very fluid in pentest firms and they should be able to accommodate such requests. The goal of this exercise is to minimise the team lottery by being vigilant and pushing back.

The firm’s size is also a factor in this equation. As Chris puts it, the bigger the consulting organization gets, the more likely the consultants will be generalists as opposed to specialists. This may or may not be an issue for you; depending on your requirements, your needs may be better served by a generalist. On the one hand, you don’t want a reverse engineer who specialises in subverting DRM libraries for embedded systems running your external infrastructure pentest; on the other, you don’t want a generalist looking at your DRM library. Again, this goes back to square one: knowing your requirements.

Another way to try to avoid the lottery is to go for a fairly small team where you know each person is well worth their salt and you will get top-shelf service every time. However this isn’t easy to find (or evaluate) and depending on your own firm’s size you may need to use a bigger vendor (smaller firms can’t usually accommodate too many projects or multiple concurrent projects for a single client). Even though these are not very common, such specialist boutiques exist and depending on your situation, size and approach they could be a great fit.

Finally, another interesting subject is to figure out whether the company subcontracts any work (to other firms or to freelancers). Don’t get me wrong, some of the finest testers I’ve worked with wouldn’t trade freelancing for any job in the world. However, when third parties are involved you have to double-check the firm’s legal coverage (e.g. liability insurance), and the due diligence you performed on the main team’s technical leadership and members should be extended to any third parties and contractors. Moreover, subcontracting introduces additional challenges in the collaboration and methodology department, which, as we will see in the next section, is not free of complications.

Workflow, tools and methodology

Even if they have a bunch of great people in the team, there are still some important things to consider about the firm’s methodology and processes.

The first one is the testing methodologies the company has for the different types of engagements that will be relevant to your company (e.g. it is of no use to you if the company excels at wireless assessments when you just need a code review). As discussed in Using testing methodologies to ensure consistent project delivery, creating and maintaining a high-quality testing methodology is not without its challenges, and the bigger the penetration testing firm, the more important their methodology becomes.

There are a number of industry bodies that provide baseline testing methodologies, including:

  • OSSTMM (the Open Source Security Testing Methodology Manual)
  • The OWASP Testing Guide (for application assessments)
  • CREST

Be advised that the fact that your point of contact is aware of some of these organisations does not mean that the team assigned to your engagement will follow their methodology (or any other methodology for that matter). Have a conversation with the technical director about the methodologies used by the team, and later on have the same conversation with the team members assigned to your project. Protip: if you get different responses from the technical director and the team members, or different responses from different team members, chances are the firm is not seriously following any defined testing methodology. For example, if the technical director mentions OSSTMM and CREST, the team leader mentions OWASP, and another team member says he mainly relies on his years of experience, that should be a red flag.

Another key part of the firm’s workflow to consider is whether engagements are typically run by a single person or routinely involve several testers. I’ve already discussed the importance of collaboration in the past: having multiple testers on your project ensures that a wide range of skills and expertise is brought to bear against your systems, which maximises your chances of uncovering most of the problems.

If multiple testers are going to be involved in your assessment, how does the team coordinate their efforts? If your test team is on the same page and has the right collaboration tools, no time will be wasted, tasks will be split efficiently among the available team members and all points in the methodology will be covered. If, on the other hand, the company does not have the tools or processes in place to ensure seamless collaboration and task splitting, some of the time allocated to your project will be wasted, and some areas of the methodology may remain unexplored while the team spends time managing the collaboration overhead.
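The coordination problem above can be sketched in a few lines: split the methodology checklist across the team so that every item has exactly one owner, then verify nothing is left uncovered. The checklist items and tester names below are made up for illustration:

```python
from itertools import cycle

def assign_tasks(checklist, testers):
    """Round-robin each methodology item to a tester; returns {tester: [items]}."""
    assignments = {t: [] for t in testers}
    for item, tester in zip(checklist, cycle(testers)):
        assignments[tester].append(item)
    return assignments

def coverage_gaps(checklist, assignments):
    """Items nobody picked up -- should always be empty."""
    covered = {item for items in assignments.values() for item in items}
    return [item for item in checklist if item not in covered]

# Hypothetical methodology checklist and team.
checklist = ["port scan", "service fingerprinting", "auth testing",
             "session management", "input validation", "crypto review"]
team = ["alice", "bob"]

work = assign_tasks(checklist, team)
assert coverage_gaps(checklist, work) == []  # nothing slips through the cracks
```

Real collaboration tools do far more than this (shared findings, notes, evidence), but even this toy version shows the two properties you want from the firm’s process: efficient splitting and verifiable coverage.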

The penetration testing report

In the majority of cases, when you engage a penetration testing firm, the final deliverable you receive is a security report. Before making your decision and choosing a vendor, it is important that each prospect provides you with a sample report.

The report needs to be able to stand on its own, providing comprehensive information about the project: from a description of the scope, to a high-level, my-CEO-would-understand-this-language executive summary of the results and a detailed list of findings. It should also provide remediation advice and any supporting information required to both validate the work performed by the team (does it look like they attained sufficient coverage?) and verify that issues had been successfully mitigated after the remediating work is performed.

Whilst some of the report sections have to be very technical and full of proof-of-concept code, requests or tool output, the report also needs to present the results of the engagement in the context of your business. Sure, you found three Highs, seventeen Mediums and twenty Lows, but what does that mean for my business? Should I get the team to stop what they are doing and fix all the issues? Some of them? None of them? Not all findings are created equal, and some testers get carried away by the technical details, or the technical mastery required to find and exploit the issues, and forget about presenting them in a context that matters to your business. In general, the more experienced the tester, the more emphasis will be put on the business context around the findings uncovered (of course, “experience” is not a synonym of “age”).

As a result, and to try to avoid the team lottery mentioned above, in an ideal world you would be provided with a sanitised report written by the same person who will be writing your own deliverable. This may not be practical in every instance, but if you are going to engage in a mid-size or larger assessment, I think it is reasonable to push for this sort of proof to ensure that the final document you receive is legible, valuable to your business and of an overall high-quality standard.

tl;dr

  • Your requirements: to get the best value for your investment you need to know what you need help with. Is it a pentest? Just a VA? Or help with some basic security awareness training for your development team?
  • Trust: can someone recommend a trustworthy security vendor? If not, then for each prospective partner try to figure out the firm’s background. Have they worked with clients in your industry? Are they interested in your business? Do they perform any research in areas that are relevant to you?
  • Their approach: who will be delivering your assessment? Do they understand your business and motivations? What is their workflow like? Do they have a process in place to ensure consistent, high-quality results every time?

There are a lot of moving parts in this process, and not all of them will apply to every vendor and every company looking for a penetration testing provider.

Here at Security Roots we can’t help you with your security testing needs (we’ll stick to doing what we know best), but hopefully you are now better equipped to weigh the pros and cons and some of the gotchas involved in deciding which security firm you should trust with your business.

Using a collaboration tool: why being on the same page matters

In this post I want to expand on the ideas discussed in The importance of collaboration during security testing, focusing not so much on the splitting of tasks and work streams (that’s the subject of another post) but on the magic that happens when all team members are on the same page, sharing a clear picture of what is going on, and on the role played by your collaboration tool.

Security testing often benefits from multiple people looking at the same target. The problem with one-person assessments is that, even if you follow a testing methodology, you may miss things. The way you approach testing, following the same patterns and looking for certain telltale signals from the environment, means you bring your own background and intuition to the test. A fellow auditor would bring their own set of patterns and expectations. Combining the two approaches almost always produces interesting results.

As a rule of thumb, the more people trying to uncover issues, the better. Of course there is a limit to this rule (e.g. in a large enough test team, some testers will try to hide in the crowd and not pull their weight), but again that’s a subject for another post.

In an ideal world

In an ideal world, everyone in the team would be in the same room throughout the duration of the test. They’d be talking to each other, looking over each other’s shoulders when something interesting comes up and writing down everything they find, as they find it, on a whiteboard.

Both the economics and logistics of real-world testing make this scenario highly unlikely except when some very specific circumstances are met. Testers are often based around the country (or the world), clients can’t afford the investment of putting together a larger testing team when a smaller one could get the job done, etc. Only in special circumstances, typically when the reputation of either the penetration testing firm (e.g. PCI ASV accreditation) or the client (e.g. a new product launch) is at stake, are the conditions met to justify this kind of effort. A strike team is put together to tear the system apart, and testers and target are locked in a room (rules of Thunderdome apply).

The one-man test

More often than not, security tests are performed by a single auditor. This is fine, provided the auditor has the right background for the job.

Even in this scenario it is fairly easy for things to slip through the cracks. You are focused on investigating a promising issue when you notice some weird behaviour, and if you don’t make a note there and then to look into it later, you will forget about it. The weird behaviour doesn’t get investigated, the test finishes, and you even forget that you noticed it in the first place. Everybody loses.

Note-taking is a crucial skill: noticing minor issues, adding them to the queue, triaging, and back to square one. All this, of course, while you follow a suitable testing methodology. The devil is in the details, and noticing the things that are not covered by the standard methodology is often the key to unlocking some of the more interesting bugs.

I would argue that even one-man teams would benefit from using a collaboration tool that lets them keep the big picture of the engagement (e.g. scope, progress, methodology, notes, findings, attachments, etc.). But in this case, I would even settle for a pen and a notebook. Just make sure nothing slips through the cracks! Of course, you’ll also need a cross-cut shredder to dispose of the paper once you are done 😉

Nevertheless, using a collaboration tool would enable you to share your interim findings with other stakeholders (account manager, client POC, etc.) and possibly reduce your reporting time.

The ever changing team

On the opposite side of the spectrum you have ‘fluid team’ tests. We’ve all been there. On day one, you’ve got 2 testers who are going to be testing for 2 weeks; then in the afternoon client requirements change, and now you have 3 testers working for 1 week. On day two, they change again and it’s back to 2 testers for 2 weeks, but your original team-mate has been pulled off to perform some specialist testing only he’s capable of delivering. With every change in scope and team, you have to make sure everyone is brought to the same page or you won’t be able to make any progress.

If you’re keeping track of the project via email, you’ve got a problem. Every time a new tester joins the team, you’ve got to forward all the scoping emails, plus all the “I’ve found X” emails. Every time someone leaves the team, you have to chase them to send more emails with their latest findings. For the team leader this is a waste of valuable time, time that can’t be spent testing.

The alternative, using a collaboration tool, makes things a lot simpler. You receive scoping information and add it to the project. A new tester joins the team? They check the project information in the tool to find out about the scope. Everyone adds their findings as they go along. Suddenly a team member becomes unavailable? No problem: all their findings are already in the tool. A new team member joins halfway through? They check the project page to get up to speed in no time: go through all the issues covered so far, check the methodology to find out what remains to be done, roll up your sleeves and start working.

The report

I have been arguing for the use of a collaboration tool during the engagement, to ensure everyone is on the same page. If everything has developed according to plan, the scoping information was available in the tool at the beginning of the test and everyone has been feeding in their notes and findings as they went along. Now the test is over and it’s time to write the report. Whoever has to write it knows that all the information is in a single place; everything that’s needed (issues, evidence, screenshots, tool output) can be found there. If our report writer is savvy enough, he will have been keeping an eye on the project page to ensure that the information for every issue found by each team member was complete, every i dotted and every t crossed. And all this can happen before the reporting time even starts.

Consider for a moment the alternative. There is no collaboration tool, progress is made via email (e.g. “Hey guys, look what I’ve found!”) and each member of the team is keeping a notes.txt file on their laptop. On the last day of the engagement, the report writer receives the notes from each tester: plain text files, Word documents, .zip files with text and screenshots, etc. A significant amount of time is wasted collating results. Even if everyone provided all the information, there is still a need to re-format and re-style everything for the final report. If someone missed something, or if further evidence or details are required, it is almost certain that you won’t be able to get them: the system is firewalled off again, the test accounts no longer work, the person who found the issue in the first place is now doing a gig for the government in a bunker somewhere with zero connectivity, etc.

The amount of work required to feed a collaboration tool as you go along with complete information about the issues you uncover is insignificant compared to the task of manually collating the results of N testers (remember the ‘fluid team‘ problem?), reformatting and chasing around the missing bits and pieces.

Defining the requirements for a collaboration tool

We’ve covered a lot of ground in this post. Hopefully I’ve managed to highlight some of the merits of using a collaboration tool. The qualities we would like to see in our solution are:

  • Effective sharing: keep things organised and provide a big-picture overview of the project: scope, coverage/progress, findings, notes, attachments, etc.
  • Flexible: you need to be able to extend the tool and adapt it to your needs and to the other tools and systems in your environment. “Silver bullet” solutions that pretend to do everything for everyone out of the box most likely won’t fit your needs.
  • Capture all the data needed for the report. If the solution only lets you capture some of the information you’ll need for the report, you will be adding complexity to the workflow. Get all the information at once, while it’s fresh in the tester’s mind, add it to the solution, and forget about it until the report is due.
  • Ideally it should have report generation capabilities. Customisable report templates and the possibility of editing the report yourself after it is generated (e.g. Word vs. PDF) are also a plus.
  • Easy to adopt. To disrupt the testers’ workflow as little as possible, something that is easy to use and cross-platform will go a long way towards adoption.

These are some of the guiding principles that we followed when we created and open sourced the Dradis Framework back in the day. More than 24,000 downloads later, and after Dradis has been included in the BackTrack distro and featured in books like Grey Hat Hacking and Advanced Penetration Testing for Highly-Secure Environments, it looks like we were onto something.

These days, we continue to work hard to help our users collaborate more effectively and our clients to be more competitive.

Every manager or senior team member who tries to push for a collaboration tool in an environment where none is being used is bound to face a degree of pushback. This post provides some of the counter-arguments that can be used to fight it; however, fighting pushback is a complicated subject that calls for an entire post of its own.

Dradis Framework workshop in BSides London 2013

Security Roots founder Daniel Martin will be delivering a workshop on Custom Dradis Framework plugins in this year’s BSides London 2013 event.

Attendees will learn how to create their own custom plugins to integrate Dradis with their existing tools and systems. If you have a suggestion for a plugin that we should create during the workshop please give us a shout: @dradisfw or @securityroots.

For those planning to attend, be advised:

Workshops will be booked on the day on a first-come, first-served basis. The format is small to give attendees the opportunity to get up close and personal with a guru on the subject.

See the official workshop announcement:

There is a great lineup of workshops and talks (track 1 and track 2) this year, so don’t miss the opportunity to attend.

Dradis Professional is sponsoring the event, so check out our Dradis Pro & BSides London 2013 page for a chance to win a Dradis Pro license and for future workshop updates.

Mapping external tool output to fit your reporting needs

One of the challenges usually faced by information security teams during their day-to-day operations is trying to make sense of the diverse output produced by the different tools they need to use. Mapping external tool output into a format that is going to be useful to you is no small challenge. This is one of the reasons we sometimes hear that well-rounded security professionals should have a bit of sysadmin and coding experience, so they can quickly knock together a parser script or chain a few greps and awks to parse tool output and produce information that is relevant to the task at hand.

The number of security tools grows every year, and this is great. However, each tool provides its output in a slightly different format using slightly different labels (not so great). Even tools that are in the same space, like Nessus, Nexpose or Qualys, can’t agree on the best nomenclature for their findings (e.g. Title vs. Name vs. Plugin Name). It makes sense as part of their commercial strategy to maintain a degree of differentiation by structuring the information in different ways. But that doesn’t help you.

There are several sides to this tool diversity problem: on the one hand you need to be able to make sense of all this heterogeneous output. On the other hand you need to collate it, remove duplicates and adjust the information to produce a report that is valuable and accurate. You can do this by hand, and repeat it for every project and tool combination, or you can automate it.
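
To make the idea concrete, here is a minimal sketch (not actual Dradis code) of normalising findings from different scanners into one in-house schema. The per-tool field names below are illustrative; each product’s real output keys differ.

```python
# Map each tool's field names onto our in-house schema (names are examples).
FIELD_MAP = {
    "nessus":  {"plugin_name": "title", "description": "description", "solution": "mitigation"},
    "qualys":  {"title": "title", "diagnosis": "description", "solution": "mitigation"},
    "nexpose": {"name": "title", "description": "description", "remediation": "mitigation"},
}

def normalise(tool, raw_finding):
    """Rename a raw finding's keys into the in-house schema."""
    mapping = FIELD_MAP[tool]
    return {ours: raw_finding.get(theirs, "") for theirs, ours in mapping.items()}

# A made-up Nessus-style finding:
nessus_finding = {
    "plugin_name": "SSL Version 2 Detection",
    "description": "The remote service accepts SSLv2 connections.",
    "solution": "Disable SSLv2.",
}
print(normalise("nessus", nessus_finding)["title"])  # -> SSL Version 2 Detection
```

Once every tool’s output passes through a mapping like this, the de-duplication and reporting code only ever sees one set of field names.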

This type of mapping can be useful in a number of different scenarios.

The full fledged pentest or webapp assessment

In a full pentest or webapp assessment, you will be running a bunch of automated scans to complement your manual testing efforts.

Being able to upload tool output and have it converted to the right format for the final report is going to save you a lot of time. On day one you kick off a bunch of scans and forget about them. You then start manually testing the target system, following your own testing methodology. Ideally you’ll be adding your notes for the different issues and preparing the final report as you go.

Once the scans have completed, you can quickly process the output produced by each tool and map it to the nomenclature that you need in your report. Discard false positives and add a note to make sure you investigate any real positives in detail.

The vulnerability assessment (VA)

Not every client needs a full pentest every time. Take PCI compliance: there is a requirement to run quarterly VA scans. Most likely the client won’t be able to afford a pentest every quarter, but by reducing the scope of the testing to an automated vulnerability assessment, it is possible to achieve a degree of assurance without blowing the budget.

In these less sophisticated vulnerability assessment jobs, you often need to run some scanners, triage false positives and produce a branded report. The goal is to reduce the time and effort it takes to deliver the full assessment (i.e. scanning + reporting) so you can offer the service at a competitive rate to your clients.

If you had an automated tool that let you map between the output produced by the different scanners you use and the format your final report is going to be in, you could save yourself a lot of time. In this case there is no need for any manual testing (apart from that required to discard false positives):

  1. Get scanner output
  2. Map to your own format / nomenclature
  3. Produce a branded deliverable
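
The three steps above can be sketched end to end. This is a hedged illustration only: the CSV column names, field labels and report branding are all invented for the example, and real scanner output is usually richer (often XML with many more fields).

```python
import csv
import io

# Made-up scanner output in CSV form for the sake of the example.
RAW_SCAN = "Plugin Name,Risk,Synopsis\nSSLv2 Enabled,Medium,Legacy protocol supported"

def parse_scan(raw):
    """Step 1: get the scanner output into structured records."""
    return list(csv.DictReader(io.StringIO(raw)))

def map_fields(record):
    """Step 2: map the tool's labels to our own nomenclature."""
    return {"title": record["Plugin Name"],
            "severity": record["Risk"],
            "summary": record["Synopsis"]}

def render_report(findings):
    """Step 3: produce the branded deliverable."""
    lines = ["Example Corp - Quarterly VA Report", "-" * 34]
    lines += [f"[{f['severity']}] {f['title']}: {f['summary']}" for f in findings]
    return "\n".join(lines)

report = render_report([map_fields(r) for r in parse_scan(RAW_SCAN)])
print(report)
```

Each step is isolated, so supporting a new scanner only means writing a new `parse_scan`/`map_fields` pair; the deliverable stays the same.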

The Plugin Manager

It looks like either of the scenarios described above could benefit from some automation. This is why we introduced the Plugin Manager in Dradis Pro: a module that you can use to map external tool output into the format you need.

As usual there are many different ways to tackle the problem. We could define our own “schema” for the information and hard-code mappings between the different tools and our own nomenclature. Unfortunately, as perfectly captured by xkcd 927, nothing guarantees that our schema is going to be useful to everybody.

What we really needed was a way to let our users define their own schemas and map external tool output to the schema they had defined. Company A needs to use Title, Risk, Description and Recommendation? That’s fine, you can do it. Company B uses Issue, Impact, Probability, Description and Mitigation? Yes, we support that. Once you have agreed on your own representation of the information, you can use the Plugin Manager to map between Nessus output and your own, or between Qualys or Nexpose and your own. That is a lot of power right there. This is how it works:

Mapping external tool output with the Plugin Manager: from sample to template to result

First, for each of the tools, we provide you with a sample of the output produced by the tool. Then you define a template to map between the different fields provided by the tool and those that you need for your deliverables. In the example we are creating a template for Nessus issues: we want to extract the Plugin name, Description and Solution fields from Nessus and map them to Title, Description and Mitigation respectively. The template builder also has a live preview panel that lets you see the end result of your mapping, to make sure everything is working as expected.
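
Conceptually, a template of this kind is just placeholder substitution against the tool’s sample output. The sketch below is loosely modelled on the Plugin Manager described above; the `%field%` placeholder syntax and the field names are assumptions for illustration, not necessarily the exact syntax Dradis uses.

```python
import re

# A made-up mapping template: our deliverable's sections on the left,
# the scanner's field names as %placeholders% on the right.
TEMPLATE = """#[Title]#
%plugin_name%

#[Mitigation]#
%solution%
"""

def preview(template, sample_output):
    """Substitute each %field% placeholder with the tool's sample value,
    producing the same kind of live preview the template builder shows."""
    return re.sub(r"%(\w+)%",
                  lambda m: sample_output.get(m.group(1), "(missing)"),
                  template)

sample = {"plugin_name": "SSL Version 2 Detection", "solution": "Disable SSLv2."}
print(preview(TEMPLATE, sample))
```

Because the template is plain data rather than code, each company can keep one template per tool and tweak it without touching the parser.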

As you can see, this approach requires a bit of upfront investment (i.e. you have to create the mappings for the different tools you are going to use) but in return it gives you a lot of flexibility and saves you a significant amount of time. Mapping external tool output to fit your reporting needs has never been easier.


With the growing number of tools that have made their way into the security tester’s arsenal, making sense of the different output formats in an effective way is becoming a crucial skill. If adding a new tool to your methodology means that you have to spend more time manually mapping the output or collating information, you’re limiting yourself. Finding a way to map between the different tool outputs and the format you need for your deliverables is a worthwhile investment.

Dradis Pro report templates and testing methodologies for download

Ever wanted to create your own Dradis Pro report templates but didn’t know where to start? Wait no more! A few days ago we introduced the Extras page, from which you can download report templates and testing methodologies. The idea is to showcase all the possibilities supported by our reporting engine and lay the groundwork so our users can build on top of these templates.

The latest addition has been the OWASP Top 10 – 2013rc checklist. This covers the recently released OWASP Top 10 – 2013 release and contains 60 checks that you can use to test for all the issues in the new Top 10:

  • A1-Injection
  • A2-Broken Authentication and Session Management
  • A3-Cross-Site Scripting (XSS)
  • A4-Insecure Direct Object References
  • A5-Security Misconfiguration
  • A6-Sensitive Data Exposure
  • A7-Missing Function Level Access Control
  • A8-Cross-Site Request Forgery (CSRF)
  • A9-Using Components with Known Vulnerabilities
  • A10-Unvalidated Redirects and Forwards

Below is a list with a few examples of the Dradis Pro report templates (both Word and HTML) that you can find there:

Advanced Word example

Mix everything together: use Dradis notes for your conclusions, sort your findings by severity, filter, group, make use of document properties, etc.

Dradis Pro Advanced report template: a screenshot showing the advanced word report

A simple report to get you started

Never created a custom Dradis Pro report template before? No problem: start with this basic template to learn about the inner workings of the engine, and in no time you’ll have your own custom report template up and running.

Dradis Pro Basic report template: a screenshot showing a detail of a table in the simple report template

A fancy HTML report

Dradis Pro supports a number of report formats, including Word 2010 and HTML. In this case we show you how to create a fairly complex HTML report with the list of issues ordered by severity, a bit of JavaScript to auto-colour and auto-link external references, and some awesome charts to nicely show the risk profile of the environment.

Dradis Pro HTML report template: a screenshot of the HTML report template showing a chart for all the issues

With the help of these samples, creating your own report template has never been easier. Are you ready to give Dradis Pro a try?

Using testing methodologies to ensure consistent project delivery

It doesn’t matter if you are a freelancer or the Technical Director of a big team: consistency needs to be one of the pillars of your strategy. You need to follow a set of testing methodologies.

But what does consistency mean in the context of security project management? It means that all projects are delivered to the same high quality standard. Let me repeat that:

Consistency means that all projects are delivered to the same high quality standard

Even though that sounds like a simple goal, there are a few parts to it:

  • All projects: this means for all of your clients, all the time. It shouldn’t matter if the project team was composed of less experienced people or if this is the 100th test you’re running this year for the same client. All projects matter, and nothing will reflect worse on your brand than one of your clients spotting inconsistencies in your approach.
  • The same standard: as soon as you have more than one person on the team, they will have different levels of skill, expertise and ability for each type of engagement. Your goal is to ensure that the testing process is repeatable enough that each person knows the steps that must be taken in each project type. There are plenty of sources you can base your own testing methodology on, including the Open Source Security Testing Methodology Manual or the OWASP Testing Guide (for webapps).
  • High quality: this is not as obvious as it seems. Nobody would think of creating and using a low quality methodology, but for a methodology to be useful you need to ensure it is reviewed and updated periodically. Keep an eye on the security conference calendar (also a CFP list) and a few industry mailing lists throughout the year, and update your methodologies accordingly.

So how do you go about accomplishing these goals?

Building the testing methodology

Store your methodology in a file

We’ve seen this time and again. At some point someone decides that it is time to create or update all the testing methodologies in the organization and time is allocated to create a bunch of Word documents containing the methodologies.

Pros:
  • Easy to get the work done
  • Easy to assign the task of building the methodology
  • Backups are managed by your file sharing solution

Cons:
  • Difficult to keep methodologies up to date
  • Difficult to connect to other tools
  • Where is the latest version of the document?
  • How do you know when a new version is available?
  • How does a new member of the team learn about the location of all the methodologies?
  • How do you prevent different testers/teams from using different versions of the document?
Use a wiki

The next alternative is to store your methodology in a wiki.

Pros:
  • Easy to get started
  • Easy to update content
  • Easy to find the latest version of the methodology
  • Easier to connect to other tools

Cons:
  • Wikis have a tendency to grow uncontrollably and become messy.
  • You need to agree on a template for your methodologies, otherwise all of them will have a slightly different structure.
  • It is somewhat difficult to know everything that’s in the wiki. Keeping it in good shape requires constant care. For instance, adding content requires adding references to it in index pages (sometimes in multiple index pages) and categorizing each page so it is easy to find.
  • There is a small overhead for managing the server / wiki software (updates, backups, maintenance, etc.).
Use a tool to manage your testing methodologies

Use a testing methodology management tool like VulnDB, or something you create yourself (warning: creating your own tools will not always save you time/money).

Pros:
  • Unlike wikis, these are purpose-built tools with the goal of managing testing methodologies in mind: information is well structured.
  • Easy to update content
  • Easy to find the latest version of the methodology
  • Easiest to connect to other tools
  • There is little overhead involved (if using a 3rd party)

Cons:
  • You don’t have absolute control over them (if using a 3rd party).
  • With any custom / purpose-built system, there is always a learning curve.
  • There is strategic risk involved (if using a 3rd party). Can we trust these guys? Will they be in business tomorrow?

Using the testing methodology

Once you have decided the best way in which to store and manage your testing methodologies the next question to address is: how do you make the process of using your testing methodologies painless enough so you know they will be used every time?

Intellectually we understand that all the steps in our methodology should be performed every time. However, unless there is a convenient way to do so, we may end up skipping steps or ignoring the methodology altogether, trusting our good old experience and intuition to get on with the job at hand. Along the same lines, in bigger teams it is not enough to say “please guys, make sure everyone is using the methodologies”. Chances are you won’t have the time to verify that everyone is using them, so you just have to trust that they will.

Freelancers and technical directors alike should focus their attention on removing barriers to adoption. Make the methodologies so easy to use that you’d be wasting time by not using them.

The format in which your methodologies are stored will play a key part in the adoption process. If your methodologies are in Word documents or text files, you need to keep the methodology open while testing and somehow track your progress. This would be easy if your methodologies were structured in a way that lets you start from the top and follow through. However, pentesting is usually not so linear (I like this convergent intelligence vs divergent intelligence post on the subject). As you go along you will notice things and tick off items located in different sections of the methodology.

Even if you store your methodologies in a wiki, the same problem remains. A solution to the progress tracking problem (provided all your wiki-stored methodologies use a consistent structure) would be to create a tool that extracts the information from the wiki and presents it to the testers in a way they can use (e.g. navigate through the sections, tick off items as progress is made, etc.). Of course, this involves the overhead of creating (and maintaining) the tool. And then again, it depends on how testers take their project notes. If they are using something like Notepad or OneNote, they will have to use at least two different windows: one for the notes and one for following the methodology, which isn’t ideal.
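
At its core, the progress-tracking idea is just the methodology stored as structured data so items can be ticked off in any order. A minimal sketch (section and check names are invented):

```python
# A methodology as structured data: sections mapping to lists of checks.
methodology = {
    "Authentication": ["Password policy", "Account lockout"],
    "Session management": ["Cookie flags", "Session timeout"],
}

completed = set()

def tick(section, item):
    """Mark a methodology item as done, whatever order it comes up in."""
    if item not in methodology.get(section, []):
        raise KeyError(f"{item!r} is not part of the {section!r} section")
    completed.add((section, item))

def remaining():
    """Everything still to be tested: a new joiner's instant to-do list."""
    return [(s, i) for s, items in methodology.items()
            for i in items if (s, i) not in completed]

tick("Session management", "Cookie flags")  # findings can arrive in any order
print(len(remaining()))  # -> 3
```

Whether the data lives in a wiki, a database or a collaboration tool, this structure is what makes "what remains to be done" a one-line query rather than a re-read of the whole document.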

In an ideal world you want your methodologies to integrate with the tool you are using for taking project notes. However, as mentioned above, if you are taking your notes using off-the-shelf note-taking applications or text editors you are going to have a hard time integrating. If you are using a collaboration tool like Dradis Pro or some other purpose-built system, things will be a lot easier. Chances are these tools can be extended to connect to external tools.

Now you are onto something.

If you (or your testers) can take notes and follow a testing methodology without having to go back and forth between different tools, it is very likely you will actually follow the testing methodology.

Dradis Pro is sponsoring BSides London 2013

Dradis Professional is sponsoring the next edition of the B-Sides London security conference:

B-Sides London 2013 will be held at the Kensington and Chelsea Town Hall on April 24, 2013 in London, UK.

We’ve put together a page for the event and are raffling a Dradis Pro license, read more at:

Are you planning to attend or want to get in touch? Let us know!

The Ethical Hacker Network interviews Security Roots founder

Daniel Martin (@etdsoft), creator of Dradis Framework and founder of Security Roots Ltd was interviewed by Todd Kendall for The Ethical Hacker Network:

Interview: Daniel Martin of

Previous press appearances: