Monthly Archives: April 2013

BSides London 2013 aftermath

BSides London took place last Wednesday the 24th at the Kensington and Chelsea Town Hall, near High Street Kensington tube station in London.

I was really looking forward to this year’s edition as for the first time ever Dradis Pro was a sponsor in a security event. There are a lot of lessons learned on that front alone, but I’ll save them for another post.

It was a really long day. I only finished the slides for the Creating Custom Dradis Framework Plugins workshop around midnight the night before, and I got to the venue by 8am to give the organisers a hand with the preparations. On the bright side, we had a really good turnout at the workshop:

Creating Custom Dradis Framework plugins in action (more pics)

I think the final head count was around 500 people, both from around the country and from abroad. The downside is that we had to prepare around 500 tote bags with sponsor swag; the upside is that some sponsors provided some really nice goodies 😉

BSides swag by ScotSTS, 7Elements and Dradis Pro

The truth is that running an event such as BSides is a ton of work, and the team does it for free. It doesn't cost a penny to attend, and you even get a really nice free t-shirt:

BSides London t-shirt

I don't think people thank the organisers enough. Thanks guys! That goes both to the visible faces of the organisation and to the rest of the conference goons who make all the little moving parts of the event tick.

As usual at this type of event, it's easy to let yourself be distracted by the social side of things. I managed to finally catch up with a lot of Dradis Community contributors and Dradis Pro users, and hopefully met a few future ones 😉 I finally put a face to some of the #dc4420 peeps and managed to catch up with some people that I no longer get to see that often.

It always baffles me that after working for a company for the last 5 years you get to meet some of your colleagues at a random security event instead of in the office or at an official company event. I guess that's the nature of the industry we are in, though. It was also good to catch up with ex-colleagues from previous lives.

Even though the scheduling gods decided I had to miss Marion & Rory’s workshop in the morning, I managed to get myself a WiFi Pineapple after Robin’s, just in time to rush to the main hall to catch the closing ceremony.

WiFi Pineapple kit

And before you realise it, the day is over and you're having a pint too many at the official BSides after-party…

Should you create your own back-office/collaboration tools?

When an organisation tries to tackle the “collaboration and automated reporting problem”, one of the early decisions to make is whether they should try to build a solution in-house or use an off-the-shelf collaboration tool.

Of course, this is not an easy decision to make and there are a lot of factors involved, including:

  • Your firm’s size and resources
  • Cost (in-house != free)
  • The risks involved

But before we begin discussing these factors, it will be useful to get a quick overview of what's really involved in creating a software tool.

The challenge of building a software tool

What we are talking about here is building a solid back-office tool, something your business can rely on, not a quick Python script to parse Nmap output.
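
To make the contrast concrete, here is a minimal sketch of the kind of quick script I mean. It assumes Nmap's XML output (nmap -oX scan.xml …) and uses only the Python standard library; the script name and output format are made up. An afternoon's work, and a world apart from a back-office tool your business can rely on.

    # A quick-and-dirty script: list open ports from an Nmap XML scan file.
    # (Hypothetical example; file names and output format are made up.)
    import sys
    import xml.etree.ElementTree as ET

    def open_ports(xml_path):
        """Yield (address, port, protocol, service) for every open port in the scan."""
        tree = ET.parse(xml_path)
        for host in tree.findall("host"):
            addr = host.find("address").get("addr")
            for port in host.findall("ports/port"):
                state = port.find("state")
                if state is not None and state.get("state") == "open":
                    service = port.find("service")
                    name = service.get("name") if service is not None else "unknown"
                    yield addr, port.get("portid"), port.get("protocol"), name

    if __name__ == "__main__":
        # Usage: python open_ports.py scan.xml   (where scan.xml came from nmap -oX)
        for addr, portid, proto, name in open_ports(sys.argv[1]):
            print("%s\t%s/%s\t%s" % (addr, portid, proto, name))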

My background is in information security and electrical engineering, but I've written a lot of software, from small fun side projects to successful open source tools and commercial products and services.

I’m neither a software engineering guru nor an expert in software development theory (I can point you in the right direction though), but building a software tool involves a few more steps than just “coding it”. For starters:

  • Capture requirements
  • Design the solution
  • Create a test plan
  • Develop
  • Document
  • Maintain
  • Improve
  • Support your users

The steps will vary a little depending on your choice of software development methodology, but you get the idea.

If one of the steps is missing, the whole thing falls apart. For instance, say someone forgets to ‘document’ a new tool. Whoever wants to use the tool and doesn't have direct access to the developer (e.g. a new hire 8 months down the line) is stuck. Or if there is no way to track and manage feature requests and improvements, the tool will become outdated pretty quickly.

With this background on what it takes to successfully build a solid software tool, let's consider some of the factors that should play a role in deciding whether to go the in-house route or not.

Your firm’s size and resources

How many resources can your firm invest in this project? Google can put 100 top-notch developers to work on an in-house collaboration tool for a full quarter and the financial department won’t even notice.

If you are a small pentesting firm, chances are you don't have much spare time to spend on pet projects. As the team grows, you may be able to work some gaps into the schedule and free up a few people. This could work out. However, you have to consider that not only will you need to find the time to create the initial release of the tool, you'll also need to find the resources down the line to maintain, improve and support it. The alternative is to bring a small group of developers on payroll to churn out back-office tools (I've seen some mid- and large-size security firms pull this off successfully). However, this is a strategic decision which comes with a different set of risks (e.g. how will you keep your devs motivated? What about training/career development for them? Do you have enough back-end tools to write to justify the full salary of a developer every month?).

Along the same lines, if you're part of the internal security team of an organisation that isn't focussed on building software, chances are you'll have plenty on your plate already without having to add software project management and delivery to it.

Cost (in-house != free)

There is often a misconception that because you're building it in-house, you're getting it for free. At the end of the day, whoever writes the tool is going to receive the same salary at the end of the month. If you get the tool built at the same time, that's like printing your own money!

Except… it isn't. The problem with this line of reasoning is the at the same time part. Most likely the author is being paid to perform a different job, something that's revenue-generating and has an impact on the bottom line. If the author stops delivering that job, all that revenue never materialises.

Over the years, I’ve seen this scenario play out a few times:

Manager: How long is it going take?
Optimistic geek: X hours, Y days tops
Manager: Cool, do it!

What is missing from the picture is that it is not enough to set aside a few hours for “coding it”; you have to allocate time for all the tasks involved in the process. And more often than not, Maintaining and Improving are going to take the lion's share of the resources required to successfully build the tool (protip: when in doubt estimating a project: sixtoeightweeks.com).

One of the tasks that really suffers when going the in-house route is Support: if something breaks in an unexpected way, who will fix it? Will this person be available when it breaks, or is there a chance he'll be on-site (or abroad) for a few weeks before the problem can be looked into?

Your firm's revenue comes from your client work, not from spending time and resources working on your back-end systems. The fact that you can find some time and resources to build the first release of a given tool doesn't mean that maintaining, supporting and improving your own back-end tools will make economic sense.

The risks of in-house development

There are a few risks involved in the in-house approach that should be considered. For instance, what happens when your in-house geek, the author of the tool, decides to move on and leaves the company? Can someone maintain and improve the old system or are you back to square one? All the time and resources invested up to that point can be lost if you don’t find a way to continue maintaining the tool.

Different developers have different styles and different preferences for development language, technology stack and even source code management system. Professional developers (those that work for a software vendor developing software as their main occupation) usually agree on a set of technologies and practices to be used for a given project, meaning that new people can be brought on board or leave the team seamlessly. Amateur developers (those that like building stuff but don't do it as their main occupation) have the same preferences and biases as the pros, but they are happy to go with them without giving them a second thought, as they don't usually have to coordinate with others. Normally, they won't invest enough time creating documentation or documenting the code because, at the end of the day, they created it from scratch and know it inside out (of course, 6 months down the line, they'll think it sucks). Unfortunately this means that the process of handing over or taking ownership of a project created in this way will be a lot more complicated.

When building your own back-end systems you have to think: who is responsible for this tool? Another conversation I’ve seen a few times:

(the original in-house author of the tool just moved on to greener pastures)
Manager: Hey, you like coding, will you take responsibility for this?
Optimistic geek: Sure! But it’s Ruby, I’ll rewrite the entire thing from scratch in Python and we’ll be rolling in no time!
Manager: [sigh]

If you are part of a bigger organisation that can make the long-term strategic commitment to build and maintain the tool then go ahead. If you don’t have all those resources to spare and are relying on your consultants to build and maintain back-end tools, be aware of the risks involved.

Conclusion: why doesn't the in-house approach always work?

The in-house development cycle of doom:

  1. A requirement for a new back-office tool is identified.
  2. An in-house geek is nominated for the task and knocks something together.
  3. A first version of the tool is deployed and people get on with their business.
  4. Time passes, tweaks are required, suggestions are made, but something else always has priority on the creator’s agenda.
  5. Maybe after a few months, management decides to invest a few days from the creator’s time to work on a new version.

As you can imagine, this process is unlikely to yield optimum results. If building software tools is not a core competency of your business, you may be better served by letting a software development specialist help you out. Let them take care of Maintaining, Improving and Supporting it for you while you focus on delivering value to your clients.

Of course the other side of this coin is that if you decide to use a third-party tool, whoever you end up choosing has to be worthy of your trust:

  • How long have they been in business?
  • How many clients are using their solutions?
  • How responsive is their support team?

These are just some of the highlights, though; the topic is deep enough to warrant its own blog post.

tl;dr

Going the in-house route may make sense for larger organisations with deeper pockets. They can either hire an internal development team (or outsource the work and have an internal project manager) or assign one or several in-house geeks to spend time creating and maintaining the tools. But remember: in-house != free.

Smaller teams and those starting up are usually better off with an off-the-shelf solution that is flexible and reliable, built by a solid vendor. However, the solution needs to be easy to extend and connect with other tools and systems to avoid vendor lock-in of your data.

Choosing an independent penetration testing firm

There's been a recent post on the [pen-test] mailing list asking for advice on the things to consider when choosing an independent penetration testing company. The original request went as follows:

I’m currently in the process of sizing up/comparing various
Penetration Testing firms, and am having a bit of trouble finding
distinguishing characteristics between them. I’ve looked at a fair
few, but they all seem to offer very similar services with little to
recommend one over another.

The thread was full of good advice. However, having first-hand experience at a number of penetration testing firms, I thought it would be a good idea to dig a bit deeper into the subject: what makes a penetration testing company great?

What are your requirements?

But first things first, do you need a penetration test? Do you know what a penetration test consists of? What would be the goal of the test if you performed one? These are really the key questions that you need to be able to answer before even considering choosing an external security partner.

Unfortunately, in-depth answers to those questions fall outside the scope of this article. Doing a bit of internet research, as well as reaching out to industry colleagues, peers and business acquaintances who are in a role similar to your own, would be a first step in the right direction.

It is not a bad idea to ask each of the vendors you evaluate to help you with the answers to those questions. It will enable you to understand their approach. Do they really have your best interests in mind? Will they make sure that you are able to define the problem and sketch your goals before jumping to their keyboards and sending you an invoice? Are they knowledgeable enough about the relationship between security and your business, and the tradeoffs involved?

Contrary to what one might think, in the majority of cases security projects are performed “just because”, without a clear goal in mind:

  • We made a change in the app and policy says we need to have it pentested.
  • We deployed a new server and IT said we need to pentest it.
  • A year has passed since our last test and we have to do it again.

If you are completely lost on this and don't know what services you need or what services might be available, it may be worth bringing in some external help just to clarify your requirements. You can bring in an independent consultant for a couple of days to help you gain a sufficient understanding of your own requirements, so that when you go out and shop around for security partners you know what you are up against.

IT generalists vs penetration testing specialists

After correctly understanding what your problem is and what type of testing you need, the next thing to get out of the way is deciding whether you should go for a general IT contractor or a security specialist. As usual, there is no clear-cut answer and it depends on your needs more than anything else.

I've worked with big integrators where the security team was virtually non-existent. Of course that didn't prevent the business from selling security services. ‘Security consultants’ would spend their time doing IT deployments (e.g. firewall and router configurations) or coding Java until the odd security project arrived, when they would gather as a team and deliver it. This may or may not work for you, but it's worth thinking about. Do you need a lot of support in several IT areas? It would definitely be easier to establish a relationship with a single IT provider than to shop around for specialist vendors for each area.

When dealing with a generalist, make sure you understand their approach to security testing. Most of the advice given below for firms can be directly translated to the “security function” inside a bigger consultancy or integrator.

Company background

Let's assume that you have decided to go for a security testing company. What are the important factors to consider before making a decision?

Trust

As with all business decisions, trust is a very important factor when evaluating security services vendors. Can you get any verifiable references for any of these firms? If you approach a firm and make them aware that X from Y company recommended their services, you are likely to get a better deal than if you didn't. The level of service you get will also be different, as failing or disappointing you risks upsetting the existing relationship they have with the people who referred you in the first place.

You can always ask the different vendors to put you in contact with organisations of a similar profile to yours. It's important that they are of a similar profile, otherwise their feedback might not be as valuable. If you are an SME owner in the tourism industry, a reference from the CSO of a huge high-street bank is of little value. Chances are they are pouring money into the vendor and the firm is bending over backwards to ensure the bank is fully satisfied.

You can ask for examples of similar projects they have undertaken. Don't settle for a conference call or conversation on the subject; consultants (security or otherwise) are paid to sound good even when they aren't experts in what they are talking about. Try to push for a sanitised report (not the marketing sample) to see what a real-world deliverable looks like (more on this later).

If you can't get any solid references or pointers through your business contacts, you'll need to establish trust for yourself. This will take a bit of work and time, but it is definitely worth the investment. Also, be aware that this is a very technical service you're shopping for: you have to be able to trust both the management team and the technical team in the firm. There are several things you can look into when trying to establish that trust.

Research and conferences

Something that you hear often when shopping around for penetration testing providers is that the company “should present at conferences”. This in principle sounds like a reasonable idea: if the team is on top of their game, they will be performing cutting-edge research that will be of interest to the security industry, which will earn them a spot at security conferences. However, the truth is that with an ever-growing number of security conferences every year, which in turn have an ever-growing number of tracks running in parallel, not all conference speaking slots are created equal. To give you an example, this month there are at least six of them (not counting BSides London that we are sponsoring ;)).

When evaluating security conference presence, it is important to analyse the content that was presented. Was it really research, or does the company employ a well-known industry expert who is regularly invited to speak at conferences to give their opinion on the state of the art? Does the research have sufficient breadth and depth, or was it put together in a rush to have it in time for the conference? Was it relevant to your business? For instance, imagine you need to get your SAP deployment tested. Even if a well-known company has someone finding amazing bugs in some cutting-edge technology like NFC, you may be better served by a lesser-known company presenting on a SAP testing methodology or on the SAP testing toolset they have created over the last few years doing this type of testing.

Something similar could be said about published advisories: the fact that a company has hundreds of published advisories may or may not be relevant to your needs. Are the advisories in technologies your company uses? If all their advisories are on Microsoft-related technologies and you are a Linux/Solaris shop, that wouldn't help. This is a tricky one to assess, especially for non-security people, but it is worth being on the lookout for the “but we publish advisories!” line and asking a few follow-up questions to see if the company's background is aligned with your own needs.

Finally, is all this conference presence and research recent enough? The security industry changes quickly and, even though security specialists are fairly loyal to their employers, they move on from time to time. Double-check your facts to ensure that the research the vendor is presenting as proof of competence is recent and that the authors are still with the company. The same could be said of books, courses or tools that have been written “by company members”. Verify they are still around to help your company and, if you find out they are not, at least call their bluff to see how your point of contact reacts. The savvier you look in their eyes the better 🙂

The legalese

This falls a bit outside the scope of this post in the sense that it has nothing to do with the firm's technical competence. However, it is essential that you consider these points as part of your due diligence process:

  • Does the company carry sufficient insurance and reasonable legal agreements?
  • Are there any NDA terms that you need to discuss with them?
  • Does the firm hold any relevant certifications that your company might care about (e.g. ISO 27001)?

Their approach to testing

After covering the basics of the company’s background, the next thing to focus on is their approach to testing.

There is a lot of solid advice on this subject in this 2007 post by Chris Eng at the Veracode blog. I'll include a few references to it here, but please go and read it now, it's well worth the time.

For instance, Chris recommends asking vendors under what circumstances they would advise a customer to bear the risk of a vulnerability. If they can't give a good example of this, he continues, you might be dealing with someone who views security in a vacuum and doesn't consider other business factors when framing recommendations. This hits the nail on the head. Your vendor's approach needs to be aligned with your business goals, otherwise the return on your investment will be very poor. This type of question should be asked of the people that will be directly involved in the technical delivery of your projects, not of your sales person or account manager. At the very least, you should have a conversation around this with the head of the pentest practice (or technical director).

Team lottery

When working with a technical consultancy, the bigger it gets, the bigger the risk of being affected by the “team lottery”: the variation in service you will notice depending on who gets assigned to deliver your projects. There are two factors that can minimise the risk of the team lottery: the company's workflow/methodology and the overall composition of the team.

The team

I want to open this section with a quote from Avoid wasting money on penetration testing that makes a great point:

Finally, remember that companies don't perform penetration tests, people do. So no matter which company you go to, it always boils down to the person you have working on your account.

It is key to cut through the sales layer and try to reach the technical director or pentest practice leader. If you are going to spend any significant amount of money, I'd push it even harder (at least for the first engagement, and every now and then too) and request a conversation with the testers assigned to your project. Or at the very least request their CVs/bios. Do they have experience working under your requirements? Does their general work experience make you comfortable (e.g. someone who just started their pentesting career may not be the best fit to test your critical AS/400 mainframe)? If in doubt, request a conversation or find out if someone else can be assigned to your project. Scheduling is very fluid in pentest firms and they should be able to accommodate such requests. The goal of this exercise is to minimise the team lottery by being vigilant and pushing back.

The firm's size is also a factor in this equation; as Chris puts it, the bigger the consulting organization gets, the more likely the consultants will be generalists as opposed to specialists. This may or may not be an issue for you. Depending on your requirements, your needs may be better served by a generalist or by a specialist. On the one hand, you don't want a reverse engineer who specialises in subverting DRM libraries for embedded systems running your external infrastructure pentest; on the other, you don't want a generalist looking at your DRM library. Again, this goes back to square one: knowing your requirements.

Another way to try to avoid the lottery is to go for a fairly small team where you know each person is well worth their salt and you will get top-shelf service every time. However, this isn't easy to find (or evaluate) and, depending on your own firm's size, you may need to use a bigger vendor (smaller firms can't usually accommodate too many projects, or multiple concurrent projects for a single client). Even though they are not very common, such specialist boutiques exist and, depending on your situation, size and approach, they could be a great fit.

Finally, another interesting subject is to figure out whether the company subcontracts any work (to other firms or to freelancers). Don't get me wrong, some of the finest testers I've worked with wouldn't trade freelancing for any job in the world. However, when third parties are involved you have to double-check the situation with the firm's legal coverage (e.g. liability insurance), and the due diligence you performed on the main team's technical leadership and members should be extended to any third parties and contractors. Moreover, subcontracting introduces additional challenges in the collaboration and methodology department which, as we will see in the next section, are not free of complications.

Workflow, tools and methodology

Even if they have a bunch of great people in the team, there are still some important things to consider about the firm’s methodology and processes.

The first one is the testing methodologies the company has for the different types of engagements that will be relevant to your company (e.g. it is of no use to you if the company excels at wireless assessments when you just need a code review). As discussed in Using testing methodologies to ensure consistent project delivery, creating and maintaining a high-quality testing methodology is not without its challenges, and the bigger the penetration testing firm, the more important their methodology becomes.

There are a number of industry bodies that provide baseline testing methodologies, including OWASP, OSSTMM and CREST.

Be advised that the fact that your point of contact is aware of some of these organisations does not mean that the team assigned to your engagement will follow their methodology (or any other methodology, for that matter). Have a conversation with the technical director about the methodologies used by the team. And later on, have the same conversation with the team members assigned to your project. Protip: if you get different responses from the technical director and the team members, or different responses from different team members, chances are the firm is not seriously following any defined testing methodology. For example, if the technical director mentions OSSTMM and CREST, the team leader mentions OWASP, and another team member says he mainly relies on his years of experience, that should be a red flag.

Another key part of the firm's workflow to consider is whether engagements are typically run by a single person or whether they routinely involve several testers. I've already discussed the importance of collaboration in the past: having multiple testers on your project ensures that a wide range of skills and expertise is brought to bear against your systems, which maximises your chances of uncovering most of the problems.

If multiple testers are going to be involved in your assessment, how does the team coordinate their efforts to ensure no time is wasted and all points in the methodology are covered? If your test team is on the same page and has the right collaboration tools, tasks will be split efficiently among the available team members and all points in the methodology will be covered. If, on the other hand, the company does not have the tools or processes in place to ensure seamless collaboration and task splitting, some of the time allocated to your project will be wasted and some areas of the methodology may remain unexplored while the team spends time trying to manage the collaboration overhead.

The penetration testing report

In the majority of cases, when you engage a penetration testing firm, the final deliverable you receive is a security report. Before making your decision and choosing a vendor, it is important that you are provided with a sample report by each prospect.

The report needs to be able to stand on its own, providing comprehensive information about the project: from a description of the scope, to a high-level, my-CEO-would-understand-this-language executive summary of the results, and a detailed list of findings. It should also provide remediation advice and any supporting information required both to validate the work performed by the team (does it look like they attained sufficient coverage?) and to verify that issues have been successfully mitigated after the remediation work is performed.

Whilst some of the report sections have to be very technical and full of proof-of-concept code, requests or tool output, the report also needs to present the results of the engagement in the context of your business. Sure, you found three Highs, seventeen Mediums and twenty Lows, but what does that mean for my business? Should I get the team to stop doing what they are doing and fix all the issues? Some of them? None of them? Not all findings are created equal, and some testers get carried away by the technical details or the technical mastery required to find and exploit the issues and forget about presenting them in a context that matters to your business. In general, the more experienced the tester, the more emphasis will be put on the business context around the findings uncovered (of course “experience” is not a synonym for “age”).

As a result, and to try to avoid the team lottery mentioned above, in an ideal world you would be provided with a sanitised report written by the same person that will be writing your own deliverable. This may not be practical in every instance, but if you are going to engage in a mid-size or larger assessment, I think it is reasonable to push for this sort of proof to ensure that the final document you receive is legible, valuable to your business and of an overall high-quality standard.

tl;dr

  • Your requirements: to get the best value for your investment you need to know what you need help with. Is it a pentest? Just a VA? Help with some basic security awareness training for your development team?
  • Trust: can someone recommend a trustworthy security vendor? If not, then for each prospective partner try to figure out the firm's background. Have they worked with clients in your industry? Are they interested in your business? Do they perform any research in areas that are relevant to you?
  • Their approach: who will be delivering your assessment? Do they understand your business and motivations? What is their workflow like? Do they have a process in place to ensure consistent, high-quality results every time?

There are a lot of moving parts in this process, and not all of them will apply to every vendor and every company looking for a penetration testing provider.

Here at Security Roots we can't help you with your security testing needs (we will stick to doing what we know best), but hopefully you are now better equipped to consider the pros and cons and some of the gotchas involved in deciding which security firm you should trust with your business.

Using a collaboration tool: why being on the same page matters

In this post I want to expand on the ideas discussed in The importance of collaboration during security testing, focusing not so much on the splitting of tasks and work streams (that's the subject of another post) but on the magic that happens when all team members are on the same page, sharing a clear picture of what is going on, and on the role played by your collaboration tool.

Security testing often benefits from multiple people looking at the same target. The problem with one-person assessments is that, even if you follow a testing methodology, you may miss stuff. This is because of the way you approach testing: following the same patterns, looking for certain telltale signals from the environment, you bring your own background and intuition to the test. A fellow auditor would bring their own set of patterns and predefined expectations. Combining the two approaches almost always produces interesting results.

As a rule of thumb, the more people trying to uncover issues, the better. Of course there is a limit to this rule (e.g. in a large enough test team, some testers would try to hide in the crowd and not pull their weight), but again that’s subject for another post.

In an ideal world

In an ideal world, everyone in the team will be in the same room throughout the duration of the test. They'll be talking to each other, looking over each other's shoulders when something interesting comes up and writing down everything they find, as they find it, on a whiteboard.

Both the economics and logistics of real-world testing make this scenario highly unlikely except when some very specific circumstances are met. Testers are often based around the country (or the world), clients can't afford the investment of putting together a larger testing team when a smaller one could get the job done, etc. Only in special circumstances, typically when the reputation of either the penetration testing firm (e.g. PCI ASV accreditation) or the client (e.g. new product launch) is at stake, are the conditions met to justify this kind of effort. A strike team is put together to tear the system apart, and testers and target are locked in a room (rules of Thunderdome apply).

The one-man test

More often than not, security tests are performed by a single auditor. This is fine, provided the auditor has the right background for the job.

Even in this scenario it is fairly easy for things to slip through the cracks. You are focussed on investigating a promising issue, then you notice a weird behaviour, and if you don't make a note there and then to look into it later, you will forget about it. The weird behaviour doesn't get investigated. The test finishes and you even forget that you noticed it in the first place. Everybody loses.

Note-taking is a crucial skill: noticing minor issues, adding them to the queue, triaging, and back to square one. Of course, all this while you follow a suitable testing methodology. The devil is in the details, and noticing the stuff that is not covered by the standard methodology is often the key to unlocking some of the more interesting bugs.

I would argue that even one-man teams would benefit from using a collaboration tool that lets them keep the big picture of the engagement (e.g. scope, progress, methodology, notes, findings, attachments, etc.). But in this case, I would even settle for a pen and a notebook. Just make sure nothing slips through the cracks! Of course, you'll also need a cross-cut shredder to dispose of the paper once you are done 😉

Nevertheless, using a collaboration tool would enable you to share your interim findings with other stakeholders (account manager, client POC, etc.) and possibly reduce your reporting time.

The ever changing team

On the opposite side of the spectrum you have ‘fluid team’ tests. We've all been there. On day one, you've got 2 testers that are going to be testing for 2 weeks; then in the afternoon, client requirements change, and now you have 3 testers working for 1 week. On day two, they change again and it's back to 2 testers for 2 weeks, but your original team-mate has been pulled away to perform some specialist testing only he's capable of delivering. With every change in scope and team, you have to make sure everyone is brought onto the same page or you won't be able to make any progress.

If you're keeping track of the project via email, you've got a problem. Every time a new tester joins the team, you've got to forward all the scoping emails, plus all the “I've found X” emails. Every time someone leaves the team, you have to chase them to send more emails with their latest findings. For the team leader, this is a waste of valuable time; time that he can't spend testing.

The alternative, using a collaboration tool, could make things a lot simpler. You receive scoping information and add it to the project. A new tester joins the team? They check the project information in the tool to find out about the scope. Everyone adds their findings as they go along. Suddenly a team member becomes unavailable? No problem, all their findings are already in the tool. A new team member joins halfway through? Check the project page to get up to speed in no time: go through all the issues covered so far, check the methodology to find out what remains to be done, roll up your sleeves and start working.

The report

I have been arguing for the use of a collaboration tool during the engagement, to ensure everyone is on the same page. If everything has developed according to plan, the scoping information was available in the tool at the beginning of the test and everyone has been feeding in their notes and findings as they went along. Now the test is over and it's time to write the report. Whoever has to write it knows that all the information is in a single place; everything that's needed (issues, evidence, screenshots, tool output) can be found there. If our report writer is savvy enough, he will have been keeping an eye on the project page to ensure that the information for all the issues found by each team member was complete, every i was dotted and every t was crossed. And all this can happen before the reporting time even starts.

Consider for a moment the alternative. There is no collaboration tool, progress is shared via email (e.g. “Hey guys, look what I've found!”) and each member of the team keeps a notes.txt file on their laptop. On the last day of the engagement, the report writer receives the notes from each tester: plain text files, Word documents, .zip files with text and screenshots, etc. A significant amount of time will be wasted collating results. Even if everyone provided all the information, there is still a requirement to re-format and re-style everything for the final report. If someone missed something, or if further evidence/details are required, it is almost certain that you won't be able to get them: the system is firewalled off again, the test accounts no longer work, the person that found the issue in the first place is now doing a gig for the government in a bunker somewhere with zero connectivity, etc.

The amount of work required to feed a collaboration tool as you go along with complete information about the issues you uncover is insignificant compared to the task of manually collating the results of N testers (remember the ‘fluid team‘ problem?), reformatting and chasing around the missing bits and pieces.

Defining the requirements for a collaboration tool

We've covered a lot of ground in this post. Hopefully I've managed to highlight some of the merits of using a collaboration tool. The qualities we would like to see in our solution are:

  • Effective sharing: keep things organised, provide a big picture overview of the project: scope, coverage/progress, findings, notes, attachments, etc.
  • Flexible: you need to be able to extend the tool, modify it and adapt it to your needs and to the other tools and systems in your environment. “Silver bullet” solutions that pretend to do everything for everyone out of the box most likely won't fit your needs.
  • Capture all the data needed for the report. If the solution only lets you capture some of the information you’ll need for the report, you will be adding complexity to the workflow. Get all the information at once, while it’s fresh in the tester’s mind, add it to the solution, and forget about it until the report is due.
  • Ideally it should have report generation capabilities (see the sketch after this list). Customisable report templates and the possibility of editing the report yourself after it is generated (e.g. Word vs. PDF) are also a plus.
  • Easy to adopt. To disrupt the testers' workflow as little as possible, something that is easy to use and cross-platform will go a long way towards adoption.
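
To illustrate the “capture everything as you go, render the report later” idea behind the last couple of points, here is a small hypothetical sketch. It is not Dradis and not any particular product's API; all the names are made up. The point is that when findings live as structured records, reporting becomes a rendering step rather than a collation exercise.

    # Hypothetical sketch: structured findings plus a trivial report renderer.
    # Names (Finding, Project, to_report) are made up for illustration only.
    from dataclasses import dataclass, field
    from typing import List

    SEVERITY_ORDER = {"High": 0, "Medium": 1, "Low": 2}

    @dataclass
    class Finding:
        title: str
        severity: str          # e.g. "High", "Medium", "Low"
        description: str
        evidence: str = ""     # requests, tool output, pointers to screenshots...
        remediation: str = ""

    @dataclass
    class Project:
        scope: List[str]
        findings: List[Finding] = field(default_factory=list)

        def to_report(self):
            """Render a very basic plain-text report; a real tool would use editable templates."""
            lines = ["Engagement report", "", "Scope:"]
            lines += ["  - " + item for item in self.scope]
            lines += ["", "Findings:"]
            for f in sorted(self.findings, key=lambda f: SEVERITY_ORDER.get(f.severity, 99)):
                lines += ["", "[%s] %s" % (f.severity, f.title),
                          "  " + f.description,
                          "  Evidence: " + f.evidence,
                          "  Remediation: " + f.remediation]
            return "\n".join(lines)

    # Everyone adds findings as they go; the report becomes a rendering step.
    project = Project(scope=["https://app.example.com", "10.0.0.0/24"])
    project.findings.append(Finding(
        title="Reflected XSS in search parameter",
        severity="Medium",
        description="The 'q' parameter is echoed back without output encoding.",
        evidence="GET /search?q=<script>alert(1)</script>",
        remediation="HTML-encode user-supplied data before rendering it.",
    ))
    print(project.to_report())

With something like this in place, “write the report” stops meaning “chase N testers for their notes” and starts meaning “review what is already in the tool and render it”.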

These are some of the guiding principles that we followed when we created and open sourced the Dradis Framework back in the day. More than 24,000 downloads later, and after Dradis has been included in the BackTrack distro and featured in books like Grey Hat Hacking and Advanced Penetration Testing for Highly-Secure Environments, it looks like we were onto something.

These days, we continue to work hard to help our users collaborate more effectively and our clients to be more competitive.

Every manager or senior team member that tries to push for a collaboration tool in an environment where none is being used is bound to face a degree of pushback. This post provides some of the counter-arguments that can be used to fight it. However, fighting the pushback is a complicated subject that calls for an entire post of its own.