Upcoming in Dradis Pro v1.7: Issues and Evidence

A new release of Dradis Pro is in the making: Dradis Pro v1.7. We continue to evolve our solution based on the feedback we receive from our users.

Starting in Dradis Pro v1.7, we are introducing two new concepts:

  • Issues: these are findings or vulnerabilities. An example would be: “Cross-site scripting”.
  • Evidence: this is where you provide the concrete information / proof-of-concept data for a given instance of the Issue.

For example:

  • The ‘Hackme bank’ application is vulnerable to Cross-site scripting (Issue). There are 7 instances of this issue and here is the information about them (Evidence).
  • The HTTP service on tcp/443 of the 10.0.0.1 host is affected by the Out-of-date Apache Tomcat issue, and so is the tcp/8080 service on 10.0.0.2.

As you can see, the main benefit of this approach is that you get to describe the Issue once and reuse that description.

To continue with our example, we’d have to create the following project structure:
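
Roughly, and with illustrative node labels rather than exact UI text, the tree for that second example would look something like this:

  all issues
  └── Out-of-date Apache Tomcat        (the Issue, described once)
  10.0.0.1
  └── Evidence: HTTP service on tcp/443
  10.0.0.2
  └── Evidence: service on tcp/8080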

Here we would add the Out-of-date Apache Tomcat Issue to the all issues node of the project, and then add the Evidence for each host under the corresponding host node.

By segregating core vulnerability information from the evidence associated with each instance of the issue, we can start doing some powerful things.

Reporting by host, reporting by issue

On the one hand, some penetration testing firms like to structure their reports by finding. They go through the list of issues identified, providing description, mitigation advice, references, etc. and including all the hosts affected by the issue in each instance.

On the other hand, some prefer to structure their report by host. They list all the hosts in-scope for the engagement and describe each issue that affects them.

Of course there are others that provide these two options in the same report: a section where all the issues are described in detail, followed by a host summary where you can quickly see the list of issues affecting a given host.

In order to provide this level of flexibility there needs to be a segregation between the issue details and the instance information.
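
To make the idea concrete, here is a minimal Ruby sketch (the data layout and field names are made up for illustration; this is not how Dradis stores things internally) showing how the same Issue/Evidence records can be pivoted either way:

  # Illustrative only: one Issue described once, Evidence records pointing back to it.
  issues = {
    1 => { title: 'Out-of-date Apache Tomcat' }
  }

  evidence = [
    { issue_id: 1, host: '10.0.0.1', detail: 'tcp/443 HTTP service' },
    { issue_id: 1, host: '10.0.0.2', detail: 'tcp/8080 service' }
  ]

  # Report by issue: each issue followed by all of its instances.
  evidence.group_by { |e| e[:issue_id] }.each do |issue_id, instances|
    puts issues[issue_id][:title]
    instances.each { |e| puts "  #{e[:host]} - #{e[:detail]}" }
  end

  # Report by host: each host followed by the issues that affect it.
  evidence.group_by { |e| e[:host] }.each do |host, instances|
    puts host
    instances.each { |e| puts "  #{issues[e[:issue_id]][:title]} - #{e[:detail]}" }
  end

Because the Issue is described only once, switching between the two layouts is just a matter of grouping the Evidence differently.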

With the introduction of Issues/Evidence in v1.7, we have just opened the door to all this flexibility.

More information

If you are an existing Dradis Pro user, you can already take advantage of all these features without having to wait until the release of v1.7. We have also prepared a step-by-step reporting guide for you:

Reporting by host, reporting by issue

If you are not a user yet, you can read more about cutting your reporting time, putting external tools to work for you (and not against you) and delivering consistent results with our tool. Get a license and start saving yourself some time today.

BSides London 2013 aftermath

BSides London took place last Wednesday the 24th at the Kensington and Chelsea Town Hall near High Street Kensington tube station in London.

I was really looking forward to this year’s edition, as for the first time ever Dradis Pro was a sponsor at a security event. There were a lot of lessons learned on that front alone, but I’ll save them for another post.

It was a really long day. I only finished the slides for the Creating Custom Dradis Framework Plugins workshop around midnight the night before, and I got to the venue by 8am to give the organisers a hand with the preparations. On the bright side, we had a really good turnout at the workshop:

Creating Custom Dradis Framework plugins in action (more pics)

I think that the final head count was around 500 people, both from around the country and from abroad. The downside is that we had to prepare around 500 tote bags with sponsor swag; the upside is that some sponsors provided some really nice goodies 😉

BSides swag by ScotSTS, 7Elements and Dradis Pro

The truth is that running an event such as BSides is a ton of work, and the team do it for free. It doesn’t cost a penny to attend, and you even get a really nice free t-shirt:

BSides London t-shirt

I don’t think people thank the organisers enough. Thanks guys! Not only to the visible faces of the organisation, but also to the rest of the conference goons who make all the little moving parts of the event tick.

As usual in this type of event, it’s easy to let yourself be distracted by the social side of things. I managed to finally catch up with a lot of Dradis Community contributors and Dradis Pro users, and hopefully met a few future ones 😉 I finally put a face to some of the #dc4420 peeps and managed to catch up with some people that I no longer get to see that often.

It always baffles me that after working for a company for the last 5 years you get to meet some of your colleagues at a random security event instead of in the office or at an official company event. I guess that’s the nature of the industry we are in, though. It was also good to catch up with ex-colleagues from previous lives.

Even though the scheduling gods decided I had to miss Marion & Rory’s workshop in the morning, I managed to get myself a WiFi Pineapple after Robin’s, just in time to rush to the main hall to catch the closing ceremony.

WiFi Pineapple kit

And before you realise it, the day is over and you’re having a pint too many at the official BSides after-party…

Should you create your own back-office/collaboration tools?

When an organisation tries to tackle the “collaboration and automated reporting problem”, one of the early decisions to make is whether they should try to build a solution in-house or use an off-the-shelf collaboration tool.

Of course, this is not an easy decision to make and there are a lot of factors involved including:

  • Your firm’s size and resources
  • Cost (in-house != free)
  • The risks involved

But before we begin discussing these factors, it will be useful to get a quick overview of what’s really involved in creating a software tool.

The challenge of building a software tool

What we are talking about here is building a solid back-office tool, something your business can rely on, not a quick Python script to parse Nmap output.

My background is in information security and electrical engineering, but I’ve written a lot of software, from small, fun side projects to successful open source tools and commercial products and services.

I’m neither a software engineering guru nor an expert in software development theory (I can point you in the right direction though), but building a software tool involves a few more steps than just “coding it”. For starters:

  • Capture requirements
  • Design the solution
  • Create a test plan
  • Develop
  • Document
  • Maintain
  • Improve
  • Support your users

The steps will vary a little depending on your choice of software development methodology, but you get the idea.

If one of the steps is missing, the whole thing falls apart. For instance, say someone forgets to ‘document’ a new tool. Whoever wants to use the tool and doesn’t have direct access to the developer (e.g. a new hire 8 months down the line) is stuck. Or if there is no way to track and manage feature requests and improvements, the tool will become outdated pretty quickly.

With this background on what it takes to successfully build a solid software tool, let’s consider some of the factors that should play a role in deciding whether to go the in-house route or not.

Your firm’s size and resources

How many resources can your firm invest in this project? Google can put 100 top-notch developers to work on an in-house collaboration tool for a full quarter and the financial department won’t even notice.

If you are a small pentesting firm, chances are you don’t have much spare time to spend on pet projects. As the team grows, you may be able to work some gaps into the schedule and free up a few resources, though. This could work out. However, you have to consider that not only will you need to find the time to create the initial release of the tool, but you’ll also need to find the resources down the line to maintain, improve and support it. The alternative is to bring a small group of developers onto the payroll to churn out back-office tools (I’ve seen some mid- and large-size security firms successfully pull this off). However, this is a strategic decision that comes with a different set of risks (e.g. how will you keep your devs motivated? What about training/career development for them? Do you have enough back-end tools to write to justify the full salary of a developer every month?).

Along the same lines, if you’re part of the internal security team of an organisation that isn’t focussed on building software, chances are you’ll have plenty on your plate already without having to add software project management and delivery to it.

Cost (in-house != free)

There is often the misconception that because you’re building it in-house, you’re getting it for free. At the end of the day whoever is writing the tool is going to receive the same salary at the end of the month. If you get the tool built at the same time, that’s like printing your own money!

Except… it isn’t. The problem with this line of reasoning is the “at the same time” part. Most likely the author is being paid to perform a different job, something that’s revenue-generating and has an impact on the bottom line. If the author stops delivering this job, all that revenue never materialises.

Over the years, I’ve seen this scenario play out a few times:

Manager: How long is it going take?
Optimistic geek: X hours, Y days tops
Manager: Cool, do it!

What is missing from the picture is that it is not enough to set aside a few hours for “coding it”; you have to allocate time for all the tasks involved in the process. And more often than not, Maintaining and Improving are going to take the lion’s share of the resources required to successfully build the tool (protip: when in doubt estimating a project, see sixtoeightweeks.com).

One of the tasks that really suffers when going the in-house route is Support: if something breaks in an unexpected way, who will fix it? Will this person be available when it breaks, or is there a chance they’ll be on-site (or abroad) for a few weeks before the problem can be looked into?

Your firm’s revenue comes from your client work, not from spending time and resources working on your back-end systems. The fact that you can find some time and resources to build the first release of a given tool doesn’t mean that maintaining, supporting and improving your own back-end tools will make economic sense.

The risks of in-house development

There are a few risks involved in the in-house approach that should be considered. For instance, what happens when your in-house geek, the author of the tool, decides to move on and leaves the company? Can someone maintain and improve the old system or are you back to square one? All the time and resources invested up to that point can be lost if you don’t find a way to continue maintaining the tool.

Different developers have different styles and different preferences for development language, technology stack and even source code management system. Professional developers (those who work for a software vendor developing software as their main occupation) usually agree on a set of technologies and practices to be used for a given project, meaning that new people can be brought on board or leave the team seamlessly. Amateur developers (those who like building stuff but don’t do it as their main occupation) have the same preferences and biases as the pros, but they are happy to go with them without giving them a second thought because they don’t usually have to coordinate with others. Normally, they won’t invest enough time creating documentation or documenting the code because, at the end of the day, they created it from scratch and know it inside out (of course, 6 months down the line, they’ll think it sucks). Unfortunately, this means that the process of handing over or taking ownership of a project created in this way will be a lot more complicated.

When building your own back-end systems you have to think: who is responsible for this tool? Another conversation I’ve seen a few times:

(the original in-house author of the tool just moved on to greener pastures)
Manager: Hey, you like coding, will you take responsibility for this?
Optimistic geek: Sure! But it’s Ruby, I’ll rewrite the entire thing from scratch in Python and we’ll be rolling in no time!
Manager: [sigh]

If you are part of a bigger organisation that can make the long-term strategic commitment to build and maintain the tool then go ahead. If you don’t have all those resources to spare and are relying on your consultants to build and maintain back-end tools, be aware of the risks involved.

Conclusion: why does the in-house approach not always work?

The in-house development cycle of doom:

  1. A requirement for a new back-office tool is identified.
  2. An in-house geek is nominated for the task and knocks something together.
  3. A first version of the tool is deployed and people get on with their business.
  4. Time passes, tweaks are required, suggestions are made, but something else always has priority on the creator’s agenda.
  5. Maybe after a few months, management decides to invest a few days of the creator’s time to work on a new version.

As you can imagine, this process is unlikely to yield optimum results. If building software tools is not a core competency of your business, you may be better served by letting a software development specialist help you out. Let them take care of Maintaining, Improving and Supporting it for you while you focus on delivering value to your clients.

Of course the other side of this coin is that if you decide to use a third-party tool, whoever you end up choosing has to be worthy of your trust:

  • How long have they been in business?
  • How many clients are using their solutions?
  • How responsive is their support team?

These are just some of the highlights though; the topic is deep enough to warrant its own blog post.

tl;dr

Going the in-house route may make sense for larger organisations with deeper pockets. They can either hire an internal development team (or outsource the work and have an internal project manager) or assign one or several in-house geeks to spend time creating and maintaining the tools. But remember: in-house != free.

Smaller teams and those starting up are usually better off with an off-the-shelf solution built by a solid vendor that is flexible and reliable. However, the solution needs to be easy to extend and connect with other tools and systems to avoid any vendor lock-in of your data.

Dradis Pro report templates and testing methodologies for download

Ever wanted to create your own Dradis Pro report templates but didn’t know where to start? Wait no more! A few days ago we introduced the Extras page. From there you can download report templates and testing methodologies. The idea is to showcase all the possibilities supported by our reporting engine and lay the groundwork so our users can build on top of these templates.

The latest addition has been the OWASP Top 10 – 2013rc checklist. It covers the recently released OWASP Top 10 – 2013 and contains 60 checks that you can use to test for all the issues in the new Top 10:

  • A1-Injection
  • A2-Broken Authentication and Session Management
  • A3-Cross-Site Scripting (XSS)
  • A4-Insecure Direct Object References
  • A5-Security Misconfiguration
  • A6-Sensitive Data Exposure
  • A7-Missing Function Level Access Control
  • A8-Cross-Site Request Forgery (CSRF)
  • A9-Using Components with Known Vulnerabilities
  • A10-Unvalidated Redirects and Forwards

Below is a list with a few examples of the Dradis Pro report templates (both Word and HTML) that you can find there:

Advanced Word example

Mix everything together: use Dradis notes for your conclusions, sort your findings by severity, filter, group, make use of document properties, etc.

Dradis Pro Advanced report template: a screenshot showing the advanced word report

A simple report to get you started

Never created a custom Dradis Pro report template before? No problem, start with this basic template to learn about the inner workings of the engine and in no time you’ll have your own custom report template up and running.

Dradis Pro Basic report template: a screenshot showing a detail of a table in the simple report template

A fancy HTML report

Dradis Pro supports a number of report formats including Word 2010 and HTML. In this case we show you how to create a fairly complex HTML report with the list of issues ordered by severity, a bit of JavaScript to auto-colour and auto-link external references, and some awesome charts to nicely show the risk profile of the environment.

Dradis Pro HTML report template: a screenshot of the HTML report template showing a chart for all the issues
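
The auto-linking is done with a few lines of JavaScript inside the HTML template itself; as a language-neutral illustration of the same idea, here is a tiny Ruby sketch (the CVE pattern and the link target are assumptions, adjust them to whatever references you use) that wraps identifiers in links:

  # Illustrative only: turn CVE-style references into links.
  def autolink_references(html)
    html.gsub(/CVE-\d{4}-\d{4,}/) do |ref|
      "<a href=\"https://nvd.nist.gov/vuln/detail/#{ref}\">#{ref}</a>"
    end
  end

  puts autolink_references('Upgrade Apache Struts, see CVE-2013-2251.')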

With the help of these samples, creating your own report template has never been easier. Are you ready to give Dradis Pro a try?

Using testing methodologies to ensure consistent project delivery

It doesn’t matter if you are a freelancer or the Technical Director of a big team: consistency needs to be one of the pillars of your strategy. You need to follow a set of testing methodologies.

But what does consistency mean in the context of security project management? That all projects are delivered to the same high quality standard. Let me repeat that:

Consistency means that all projects are delivered to the same high quality standard

Even though that sounds like a simple goal, there are a few parts to it:

  • All projects: this means for all of your clients, all the time. It shouldn’t matter if the project team was composed of less experienced people or if this is the 100th test you’re running this year for the same client. All projects matter, and nothing will reflect worse on your brand than one of your clients spotting inconsistencies in your approach.
  • The same standard: as soon as you have more than one person in the team, they will have different levels of skill, expertise and ability for each type of engagement. Your goal is to ensure that the process of testing is repeatable enough that each person knows the steps that must be taken in each project type. There are plenty of sources that you can base your own testing methodology upon, including the Open Source Security Testing Methodology Manual or the OWASP Testing Guide (for webapps).
  • High quality: this is not as obvious as it seems. Nobody would think of creating and using a low-quality methodology, but in order for a methodology to be useful you need to ensure it is reviewed and updated periodically. You should keep an eye on the security conference calendar (also a CFP list) and a few industry mailing lists throughout the year and update your methodologies accordingly.

So how do you go about accomplishing these goals?

Building the testing methodology

Store your methodology in a file

We’ve seen this time and again. At some point someone decides that it is time to create or update all the testing methodologies in the organization and time is allocated to create a bunch of Word documents containing the methodologies.

Pros:

  • Easy to get the work done
  • Easy to assign the task of building the methodology
  • Backups are managed by your file sharing solution

Cons:

  • Difficult to keep methodologies up to date
  • Difficult to connect to other tools
  • Where is the latest version of the document?
  • How do you know when a new version is available?
  • How does a new member of the team learn about the location of all the methodologies?
  • How do you prevent different testers/teams from using different versions of the document?

Use a wiki

The next alternative is to store your methodology in a wiki.

Pros:

  • Easy to get started
  • Easy to update content
  • Easy to find the latest version of the methodology
  • Easier to connect to other tools

Cons:

  • Wikis have a tendency to grow uncontrollably and become messy.
  • You need to agree on a template for your methodologies, otherwise all of them will have a slightly different structure.
  • It is somewhat difficult to know everything that’s in the wiki. Keeping it in good shape requires constant care. For instance, adding content requires adding references to it in index pages (sometimes multiple index pages) and categorizing each page so it is easy to find.
  • There is a small overhead for managing the server / wiki software (updates, backups, maintenance, etc.).

Use a tool to manage your testing methodologies

The third alternative is to use a testing methodology management tool like VulnDB HQ, or something you create yourself (warning: creating your own tools will not always save you time/money).

Pros:

  • Unlike wikis, these are purpose-built tools designed with managing testing methodologies in mind: information is well structured.
  • Easy to update content
  • Easy to find the latest version of the methodology
  • Easiest to connect to other tools
  • There is little overhead involved (if using a 3rd party)

Cons:

  • You don’t have absolute control over them (if using a 3rd party).
  • With any custom / purpose-built system, there is always a learning curve.
  • There is strategic risk involved (if using a 3rd party). Can we trust these guys? Will they be in business tomorrow?

Using the testing methodology

Once you have decided on the best way to store and manage your testing methodologies, the next question to address is: how do you make the process of using them painless enough that you know they will be used every time?

Intellectually we understand that all the steps in our methodology should be performed every time. However, unless there is a convenient way for us to do so, we may end up skipping steps or just ignoring the methodology altogether, trusting our good old experience / intuition and just getting on with the job at hand. Along the same lines, in bigger teams it is not enough to say “please guys, make sure everyone is using the methodologies”. Chances are you won’t have the time to verify everyone is using them, so you just have to trust that they will.

Freelancers and technical directors alike should focus their attention on removing barriers to adoption. Make the methodologies so easy to use that you’d be wasting time by not using them.

The format in which your methodologies are stored will play a key part in the adoption process. If your methodologies are in Word documents or text files, you need to keep the methodology open while doing your testing and somehow track your progress. This could be easy if your methodologies are structured in a way that lets you start from the top and follow through. However, pentesting is usually not so linear (I like this convergent intelligence vs divergent intelligence post on the subject). As you go along you will notice things and tick off items located in different sections of the methodology.

Even if you store your methodologies in a wiki, the same problem remains. A solution to the progress tracking problem (provided all your wiki-stored methodologies use a consistent structure) would be to create a tool that extracts the information from the wiki and presents it to the testers in a way they can use (e.g. navigate through the sections, tick off items as progress is made, etc.). Of course, this involves the overhead of creating (and maintaining) the tool. And then again, it depends on how testers are taking their project notes. If they are using something like Notepad or OneNote, they will have to use at least two different windows: one for the notes and one for following the methodology, which isn’t ideal.
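
Just to illustrate the extraction idea from the previous paragraph, here is a back-of-the-envelope Ruby sketch. It assumes the wiki pages follow one consistent convention (‘==’ headings for sections, ‘*’ bullets for checks), which is exactly the kind of structure agreement mentioned earlier:

  # Illustrative only: turn a consistently structured wiki page into a checklist.
  def parse_methodology(wikitext)
    sections = Hash.new { |hash, key| hash[key] = [] }
    current = 'General'
    wikitext.each_line do |raw|
      line = raw.chomp
      if line =~ /\A==\s*(.+?)\s*==\z/
        current = Regexp.last_match(1)
      elsif line =~ /\A\*\s+(.+)\z/
        sections[current] << Regexp.last_match(1)
      end
    end
    sections
  end

  page = "== Authentication ==\n* Test account lockout\n* Test password policy\n"
  parse_methodology(page).each do |section, checks|
    puts section
    checks.each { |check| puts "  [ ] #{check}" }
  end

Even a sketch like this only solves half the problem, of course: the resulting checklist still needs to live next to (or, better, inside) wherever the tester is taking notes.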

In an ideal world you want your methodologies to integrate with the tool you are using for taking project notes. However, as mentioned above, if you are taking your notes in off-the-shelf note-taking applications or text editors, you are going to have a hard time integrating. If you are using a collaboration tool like Dradis Pro or some other purpose-built system, then things will be a lot easier. Chances are these tools can be extended to connect to other external tools.

Now you are onto something.

If you (or your testers) can take notes and follow a testing methodology without having to go back and forth between different tools, it is very likely you will actually follow the testing methodology.

The Ethical Hacker Network interviews Security Roots founder

Daniel Martin (@etdsoft), creator of the Dradis Framework and founder of Security Roots Ltd, was interviewed by Todd Kendall for The Ethical Hacker Network:

Interview: Daniel Martin of Dradisframework.org

Previous press appearances:

New in Dradis Pro v1.6

Today we have pushed a new version of Dradis Professional Edition. This is the result of two months of hard work. It is a shorter release cycle than usual, but there are some good reasons for it. We think it will make our users’ day-to-day work significantly more efficient.

Here are some changes:

  • Improved Word 2010 reporting (more below):
    • The styles you apply in Dradis are kept when generating the report.
    • Easy note filtering and grouping in the report (e.g. list of High-impact findings).
  • New testing methodology support (more below).
  • New Client Manager to group your projects.
  • Fresh look & feel (screenshots).
  • Lots of minor updates:
    • With the new Quick Filter locating clients, projects and users is a breeze!
    • Updated VulnDB HQ plugin to support v2 of the API.
    • Updated to Rails 3.2.8

Improved Word 2010 reporting

Creating complex pentest report templates has never been easier. You just need your copy of Word and a few minutes. Of course we have extensive documentation in our support site, but here are the highlights:

Note styles

Add notes in our WYSIWYG editor and the styles will be kept in the report:

Note filters

Word is the only tool you need to create powerful templates

Get the report without breaking a sweat:

Testing methodologies

This is a game changer. Tracking progress during an engagement is always a daunting task. No matter how experienced you are, if you don’t pay close attention you might be missing something.

Enter our testing methodology support:

You can define as many methodologies as you need (e.g. webapp, wireless, code review, etc.) and you can add them to your projects. For instance, a typical webapp assessment will have a web testing methodology and maybe a web server checks methodology.

Keep track of progress and split tasks amongst team members. Using a standardized testing methodology is the best way to obtain consistent results.

Still not a Dradis Pro user?

These are some of the benefits you are missing out on:

  • Less time writing reports
  • Provide a consistent experience to your clients. Every time.
  • Pro is reliable, up-to-date and comes with quality support

Read more in Why to give Dradis Professional Edition a try?

Create a report in minutes with Dradis Pro and VulnDB HQ

How long did it take you to create your last pentest report? Days? Hours? Sounds like too much effort for something that should be 80% automated!

Let’s see how you can use Dradis Pro and VulnDB HQ to create a pentest report in minutes.

Tracking progress with Dradis Pro

Everybody tracks progress and makes notes while conducting an assessment. However, using Dradis Pro has a few advantages over other methods (e.g. Notepad).

First you can use testing methodologies to define the steps you need to cover and track your progress:

Of course this is useful both when you’re working alone and when you’re part of a team, to ensure there is no overlap.

If everyone is adding their findings to Dradis Pro’s shared repository, generating the report is one click away (keep reading!).

Adding a few findings from your VulnDB account

Say that today is your lucky day: LDAP injection on the login form! You don’t think this one is in your private VulnDB HQ repository, but you search anyway:

Well, it was not in your private repository, but there is an LDAP injection entry in VulnDB HQ’s Public repository that you can use as a baseline. You import it.

You continue with your hack-fu and find a bunch of issues: cross-site scripting, some SQL injection, an Axis2 testing servlet, header injection and a few SSL issues. For each of these, you spend 30 seconds searching VulnDB HQ, importing the issue into your project and tweaking the particulars.

Assign everything to the AdvancedWordExport ready category, and you’re done. Fairly painless, no?

And if Dradis is not your cup of tea (?!) you could always connect your VulnDB HQ account to your own tools using our RESTful API (or the convenient vulndbhq Ruby gem).

Report template

Now, the report. We want a high-quality Word 2010 document that we can easily edit and adapt as time passes.

I won’t get into the nitty-gritty details of template building here (there is a Creating Word reports with DradisReports guide in our support site with step-by-step instructions).

We will use a fairly simple approach. I’ve created a template based on one of Word’s default styles (Home > Styles > Change Style > Formal). Just add the headings you need and a few Content Controls. Here is what ours looks like:

It starts with a table with some information about the project (name, client, dates, team, etc.).

Then the Exec Summary with a Conclusions section (sorry, you’ll have to adjust this with your own conclusions!) and a Summary of Findings list which will contain just the Title of each finding.

Then a Technical Details section that contains issue descriptions for each of the vulnerabilities we’ve identified during the assessment.

Note that you only have to create the template the first time, and then reuse it for every project. The template you see above took me about 10 minutes to create.

One last thing: the properties

Yes, we could add the project specifics like the client name and dates and everything else by hand. However, chances are that your report template is a bit more complex than the one in this example, that you’ll have your client’s name in multiple places, and that some of the other information will also be repeated.

Thankfully we can define document properties from within Dradis Pro (see the DradisReports: using custom document properties guide for more information):

There you go. Now we can re-export and voila, the report is complete:

  • Total reporting time: 1 click.
  • Overhead during the test for importing issues from your VulnDB HQ account: ~30 seconds each?

We rest our case.

Would you like to know more?

We recommend you start with:

VulnDB HQ API v2

A few days ago we released v2 of the API for VulnDB HQ, our platform to manage vulnerability databases.

A lot of work has happened in the background to pave the way to a more stable and comprehensive API. From the consumer perspective we now have a dedicated endpoint for API access (i.e. /api/) and can specify API versions via the Accept HTTP header. You can read all about it in the VulnDB HQ API v2 guide in our support site.
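
As a rough sketch of what a raw API v2 call could look like from Ruby (the host, resource path, media type string and credentials below are placeholders, not the documented interface; the API v2 guide has the real details):

  require 'net/http'
  require 'uri'

  # Illustrative only: placeholder host, path, Accept value and credentials.
  uri = URI('https://vulndbhq.example.com/api/pages')

  Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
    request = Net::HTTP::Get.new(uri.request_uri)
    request['Accept'] = 'application/vnd.vulndbhq; v=2' # pick the API version here
    request.basic_auth('user@example.com', 'secret')
    response = http.request(request)
    puts response.code
    puts response.body
  end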

To make everyone’s life easier, we’ve also open sourced a Ruby client-side library that you can use to integrate VulnDB HQ with your own tools and systems. You can find it on our GitHub page:

https://github.com/securityroots/vulndbhq

We hope you find this useful!

You gotta commit

This answer from Bill Murray really hits the mark:

Bill: You gotta commit. You’ve gotta go out there and improvise and you’ve gotta be completely unafraid to die. You’ve got to be able to take a chance to die. And you have to die lots. You have to die all the time. You’re goin’ out there with just a whisper of an idea. The fear will make you clench up. That’s the fear of dying. When you start and the first few lines don’t grab and people are going like, “What’s this? I’m not laughing and I’m not interested,” then you just put your arms out like this and open way up and that allows your stuff to go out. Otherwise it’s just stuck inside you.

Bill Murray interview in Esquire via nate

When building a product and exposing it to the world, especially if you are a small organisation like us, you have to be unafraid to die. See what works and keep improving it; see what doesn’t, remove it completely and start again.