Author Archives: Daniel Martin

Claude audited Dradis, then used Dradis to report the findings

This is the story of how the Dradis 5.0 release we’d been working on for months got delayed by 48 hours, thanks to Claude. Dradis is a self-hosted pentest reporting and collaboration platform used by cybersecurity teams around the world.

On April 14, 2026, OpenAI announced GPT-5.4-Cyber, a variant of GPT-5 tuned for cybersecurity work. The same day, Dradis 5.0 was scheduled to ship. A week before GPT-5.4-Cyber, Anthropic had announced Mythos Preview, claiming Claude could now identify and exploit zero-day vulnerabilities in every major operating system and web browser.

Two weeks before that, Thomas Ptacek (tptacek) published “Vulnerability research is cooked”, profiling how Nicholas Carlini at Anthropic’s Frontier Red Team runs Claude across every file in a codebase and asks it to find bugs.

Three announcements, sixteen days. Reading them while our next major release sat green and ready to push, the answer was obvious. It would be irresponsible to ship 5.0 without running the same kind of audit on our own code. Forty-eight hours later, eight findings had been triaged and fixed, and 5.0 went out the door.

Key Takeaways

  • Between March 30 and April 14, 2026, three major AI security announcements changed what a reasonable engineering team should try on its own code before a release.
  • We held Dradis 5.0 for 48 hours to run an AI-assisted security audit on the codebase. Eight findings surfaced. Every finding was triaged, fixed, and merged before the release shipped.
  • The brute-force “Claude on every file” approach that Carlini uses is viable but expensive. Building an architectural primer once, then running per-file audits warm, produced sharper findings at a fraction of the cost.
  • We tracked the audit using Dradis itself, organized against the OWASP Top 10 2025 methodology and templates shipped in 5.0. The reporting tool reported on its own vulnerabilities.
  • The 48-hour turnaround was possible because Dradis already runs an AI-assisted code review pipeline as part of the standard PR workflow. Without that infrastructure already in place, the audit would have taken longer or shipped rougher.
  • We can publish the audit this transparently because Dradis is self-hosted and open-source: you can inspect and extend the code, and you can read the fixes in the dradis/dradis-ce PRs.

This applies if

  • You ship a product that handles sensitive data and want to understand what an AI-assisted security audit looks like in practice, not in a vendor pitch.
  • You are evaluating Dradis (or any security platform) for a regulated environment and want to see how the team behind it operates when the stakes are real.
  • You run your own codebase and want to replicate the primer-first audit pattern described here. The practical steps section at the end is sized for that.

Skip this if

  • You are looking for a general introduction to AI in cybersecurity. This is a specific case study about a specific audit on a specific codebase in a specific 48-hour window.
  • You want a comparison of AI coding assistants. We used Claude because it was the tool that fit. The pattern works with other frontier models.

The three announcements that made this inevitable

On March 30, Ptacek’s piece introduced a lot of people to the Carlini method. It’s disarmingly simple. Download a code repository. Write a bash loop. Inside it, run the same Claude Code prompt against every source file:

“Find me an exploitable vulnerability in this project. Start with ${FILE}.”

Ptacek compares it to “a kid in the back seat of a car on a long drive, asking ‘are we there yet?’” The stochasticity is a feature. Each invocation is a slightly different attempt at the same question, and the diffs across runs find things single-shot audits miss.
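In Ruby terms, the loop is tiny. This sketch only builds the prompts; the `claude` invocation in the trailing comment is illustrative, and the exact CLI flags depend on your tooling:

```ruby
# Brute-force "Claude on every file" sketch. Building the prompts is the
# whole trick; running them repeatedly is the point, since each stochastic
# attempt answers the same question slightly differently.
PROMPT = 'Find me an exploitable vulnerability in this project. Start with %s.'

def audit_prompts(root)
  # One prompt per Ruby source file under the audit root.
  Dir.glob(File.join(root, '**', '*.rb')).sort.map { |file| format(PROMPT, file) }
end

# Hypothetical invocation; flags depend on your CLI:
# audit_prompts('app').each { |prompt| system('claude', '-p', prompt) }
```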

A week later, Anthropic announced Mythos Preview. The headline claim was that Claude can now identify and exploit zero-days in every major operating system and every major web browser when directed to do so. The Mythos post included the detail that non-experts have used the model to produce working exploits overnight. That one sat with us for a few days.

Then GPT-5.4-Cyber landed on April 14, the same day our 5.0 release branch was green and ready to push. OpenAI’s announcement framed it as a defensive model with lowered refusal boundaries for legitimate security work. Reverse engineering, vulnerability analysis, malware analysis. Access gated behind the Trusted Access for Cyber program.

Three announcements in sixteen days. We serve the security industry. Our customers use Dradis to hold their most sensitive client data. We ship a major release the same week two of the largest AI labs release models purpose-built for offensive security work. Reading those announcements while the release sat ready, the question answered itself. Not shipping without trying this on our own code first was the only defensible position.

The brute-force approach, and why we didn’t use it

The first instinct was the Carlini method exactly. Bash loop. Every file. Claude Code with the vuln-finding prompt. The Pro codebase narrowed to roughly 660 Ruby files in the audit surface: routed controllers, models, jobs, and libs touching user input. We did the math on Opus pricing against that file count and came out to around $1,000 to $1,600 for a single pass. On Sonnet, $200 to $330. Wall-clock hours per pass, much of it spent re-discovering the same architecture in every cold invocation.

The cost was not the deal-breaker. The time was. We had 48 hours to run the audit and ship, not 48 hours of compute budget. And more importantly, the quality signal was wrong. Every file-scan spent most of its token budget re-learning what authentication looked like in our codebase, where current_project came from, how the CanCanCan ability model routed authorization. Each call arrived at the file cold, rediscovered the scaffolding, then had a few thousand tokens left to reason about the actual file.

So we changed the shape.

The Primer pivot

Instead of 660 cold starts, one warm one. A single Opus session read the routes, the controller hierarchy, the four Warden strategies, the CanCan ability model, the API base controllers, the Personal Access Token scope-enforcement logic, and the project-scoping concerns. That session produced THREAT_PRIMER.md, an architectural orientation document. Its opening paragraph sets the frame for everything after it:

You are auditing the Dradis Pro Rails app for exploitable vulnerabilities. This primer summarizes the auth, authorization, and request-flow scaffolding so you don’t have to rediscover it for every file. Read this first, then focus your investigation on the specific file you were given.

The primer runs about 2,000 tokens. Authentication chains. Authorization model. Sensitive sinks (file uploads, Liquid templates, mass assignment, jobs that deserialize user input). A section called “Patterns that look scary but are intentional”, so the audit wouldn’t waste time re-flagging design decisions like returning 404 instead of 403 on authorization failures as info-hiding.

This is what the primer bought us: subsequent per-file audits started warm. Every scan got the primer in context before it saw the file. Same coverage as Carlini’s approach, sharper findings because each pass arrived knowing the architecture, dramatically lower cost because the structural learning happened once instead of 660 times.
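As a sketch (file names and prompt wording are illustrative, not our exact tooling), the warm variant just prepends the primer to every per-file prompt:

```ruby
# Primer-first variant: pay the structural-learning cost once, then prepend
# the primer to every per-file scan so each pass starts warm.
def warm_audit_prompts(primer_path, files)
  primer = File.read(primer_path) # ~2,000 tokens of auth/authz scaffolding
  files.map do |file|
    "#{primer}\n\nRead the primer above first, then find me an " \
      "exploitable vulnerability in this project. Start with #{file}."
  end
end
```

The structural read happens once; every downstream scan spends its token budget on the file instead of on rediscovering the architecture.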

That pivot alone took the work from “not doable in 48 hours” to “doable in 48 hours, with time left for fixes.”

The eight findings

The audit surfaced eight issues, with severities spread from CVSS High down to Info:

  1. Personal Access Token authentication didn’t check whether the authenticating user was disabled or locked. Every other Warden strategy in the codebase did. The new PAT strategy, shipping for the first time in 5.0, didn’t. Fixed pre-release.
  2. PAT project_id conditions could be bypassed on the direct /api/projects endpoints. The condition system expects a Dradis-Project-Id header; the direct-projects controller reads the project from URL params. A PAT restricted to “project 2 only” could read project 1 by omitting the header. Fixed pre-release.
  3. PAT scope enforcement failed open when a controller’s resource name wasn’t in the allowed list. A new engine forgetting to register its resources would silently grant full access. Flipped to fail-closed. Fixed pre-release.
  4. SubscriptionsController had no authorization checks. Any authenticated user could subscribe to, or enumerate subscribers of, any resource across any project. A real shipped vulnerability in both CE and Pro. The CE-side fix is public at dradis-ce#1563 and the vulnerability report is on our Security Reports page.
  5. Cross-project tag manipulation through an unscoped load_and_authorize_resource. Pro-specific vuln; the controller-shape fix went to CE too for code parity (dradis-ce#1563 covers the refactor that landed alongside the subscriptions fix).
  6. Echo configuration UI was accessible to non-admin users in Pro deployments. Dradis Echo shipped in 4.19 as a separate addon. Dradis 5.0 takes it out of Beta and includes it in the main framework, so this never reached a public version. The admin gate landed pre-release at dradis-ce#1565.
  7. Console job logs not scoped per user. After investigation, accepted as a capability-token design: the job UUID is server-generated via SecureRandom.uuid, 122 bits of entropy, only returned to the initiating user and short-lived. The documentation landed at dradis-ce#1564 so the design intent is recorded in the code where the next person looking at it will see it.
  8. A latent operator-precedence bug in the Comment and InlineThread CanCan ability blocks. Not exploitable under shipped ability rules. Still fixed defensively, because the next engine that adds a can :read ability on a commentable type would have silently opened a hole.
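To make finding 3 concrete, here is a hedged sketch of the fail-open pattern and the fail-closed fix. The names are illustrative, not the actual Dradis scope-enforcement code:

```ruby
# An engine that forgets to register its resources should be denied,
# not silently granted full access. Registry contents are made up.
ALLOWED_SCOPES = { 'issues' => %w[read write], 'nodes' => %w[read] }.freeze

def authorized_fail_open?(resource, action)
  scopes = ALLOWED_SCOPES[resource]
  scopes.nil? || scopes.include?(action) # unknown resource => full access (the bug)
end

def authorized_fail_closed?(resource, action)
  ALLOWED_SCOPES.fetch(resource, []).include?(action) # unknown resource => deny
end
```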

The full public advisories are on the Dradis Security Reports page. CHANGELOG entries are dated and in the 5.0.0 release notes.

Dradis tracking the Dradis audit

Granting Access

It was a great opportunity to test-drive our new Personal Access Token feature, which allows users to create restricted access tokens for integrations or tools. We gave full Project access:

And then narrowed down to a single Project:

And set a 30-day expiration:

Gave the generated PAT to Claude and asked it to test access:

⏺ Good: API works, issue created. Let me delete that test issue and start the real audit.

Choosing the schema

Every finding went into a Dradis project. We chose the OWASP Top 10:2025 schema, which we vibe-coded about a month ago (see dradis/dradis-claude for the SKILL files):

This is the part that still feels absurd when you say it out loud. The OWASP Top 10 2025 kit we were about to ship in Dradis 5.0 (the methodology, the sample project, and the three report templates) was used to run and document the audit on the codebase that was about to ship them. The reporting tool reported on its own vulnerabilities, using its own brand-new methodology content, while the release that introduced that content waited for the audit to finish.

Dogfooding is a cliche. This was the literal form of it. If the kit doesn’t work for organizing an eight-finding audit under time pressure, we don’t ship the kit. The kit worked. The findings are in Dradis. The Dradis findings were used to drive the Dradis fixes. The fixes shipped in Dradis 5.0 alongside the kit.

“You got lucky”

AI-assisted security work has a known failure mode. Models assert things with absolute confidence, including things they have not actually verified. Experienced operators learn to hear the tone and push back before an assertion becomes a design decision.

One concrete example from this audit: While fixing the cross-project tag manipulation issue, Claude asserted that a particular CanCan ability tightening could not be ported from Pro to CE because of a framework limitation. The reasoning sounded plausible. It was plausible. But the reviewer had seen this pattern often enough to flag it.

“Unless you confirm this as a limitation with CanCan, I don’t buy your argument.”

The assertion got tested. It was correct. The exact error, from CanCan’s internal accessible_by when building SQL for an association that doesn’t exist in one of the two repos: NoMethodError: undefined method 'klass' for nil. Real limitation. The fix landed.

The reviewer’s response: “You got lucky.”

That is an important moment in AI-assisted engineering. Not when the model produces output, but when the operator decides whether to trust it.

The model had produced a confident, specific, mechanically-plausible claim, and it happened to be right. It also could have been wrong, and if it had been, we would have shipped a broken CE refactor and a misleading commit message explaining why.

The lesson is not “AI is unreliable.” The lesson is that AI-assisted engineering without an experienced operator calling bluffs on overconfident output is not engineering. It is rolling dice on production code. The same discipline applies whether the output is a bug report, a fix proposal, or an architectural justification.

Trust, then verify. Every single time.

The AI-assisted code review pipeline is why this fit in 48 hours

The 48-hour turnaround was not possible because of Claude alone. It was possible because Dradis already runs an AI-assisted code review pipeline as part of the standard PR workflow. That pipeline reviewed the audit’s own fixes.

One of the PRs in the audit received an automated review from a separate agent. The review flagged a real edge case in the fix: an empty allowlist in a personal access token’s project conditions returned false from the present? check, so the token’s index endpoint still worked while every member action returned 403. Incoherent behavior. The review caught it, the fix was tightened to use Hash#key? instead of present?, two regression tests were added, all 27 examples continued to pass, and the PR merged cleaner than it started.
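The edge case is easy to reproduce in plain Ruby. ActiveSupport’s present? is emulated here with a hypothetical blankish? helper so the sketch is self-contained; the method names are illustrative, not the shipped code:

```ruby
# Stand-in for ActiveSupport's present?/blank? semantics.
def blankish?(value)
  value.nil? || (value.respond_to?(:empty?) && value.empty?)
end

def project_ids_buggy(conditions)
  # An empty allowlist looks "not present", so it reads as no restriction.
  blankish?(conditions[:project_id]) ? nil : conditions[:project_id]
end

def project_ids_fixed(conditions)
  # Key present means a restriction was set, even if the list is empty.
  conditions.key?(:project_id) ? conditions[:project_id] : nil
end
```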

That review happened on an audit fix, by an AI agent reviewing AI-drafted code, and it caught a real bug.

This is what enabled the 48-hour window. The audit found eight things. The fixes needed to be reviewed in the same window. A team relying only on human reviewers to triage eight security PRs in two days produces either careful work or fast work. Not both. The AI-assisted review pipeline, already running against every PR on this codebase, turned “fast enough to ship on time” and “careful enough to ship safely” from competing constraints into complementary ones.

Ongoing AI-assisted audits are now part of our release pipeline. The 48-hour window was a one-time event driven by news. The audit pattern it used is now continuous, running against new code as it goes in, in the same pipeline that already checks everything else.

What this means, and what it doesn’t

What this means is not that AI replaces security engineers. The audit identified eight findings. A human with 20 hours of focused attention on the same codebase would have identified findings too. Possibly a different eight. Possibly overlapping. What AI-assisted audits change is the frequency and breadth. You can do this more often, across more surface area, with cost low enough that it becomes continuous rather than event-driven.

What it doesn’t mean is that the data is safe to send anywhere. This audit ran against a Rails codebase on engineer laptops. The prompts contained code, and the code contained architectural details of a commercial platform. Had this been a customer’s code, under NDA, the same prompts would have been a meaningful data exposure. We’ve covered the use of “Shadow AI” in pentesting before. It is also why Dradis Echo exists: local Ollama, no external API calls, scoped permissions. AI-assisted work on data that cannot leave your environment requires AI that runs inside your environment.

And what it especially does not mean is that your security platform should pretend not to need audits. Every security tool will be audited eventually. The question is whether the vendor audits first or a customer does. The week the two largest AI labs released models purpose-built for offensive security work is the week every security tool vendor should have run one of those models against their own code. We did, in 48 hours, using our own product to track the work. We see it as the baseline a serious tool should meet.

Evaluating Dradis for a regulated environment? The audit is proof of one thing: the code is inspectable, the fixes are public, and the team that writes it will tell you when they find a bug in it. If that matches your procurement criteria, book a demo and we’ll walk through the deployment and audit approach with your team.

Practical next steps if you want to do this on your own code

If you want to run a version of this audit on your codebase, not every piece requires the same tooling we used. The shape generalizes.

  1. Build the primer first. One warm session with a frontier model, focused on architecture. Authentication, authorization, request-flow scaffolding, known-intentional patterns. Stop there. Commit the artifact. Every downstream scan reads it.
  2. Narrow the file list to the audit surface. Routed entry points, jobs that consume user input, serialization boundaries. Not templates, not schema files, not vendored dependencies.
  3. Use a cheaper model for the per-file pass, a stronger one for the primer. The primer is a one-time structural read. The per-file scans are stochastic pattern-matching against the primer’s context.
  4. Track every candidate finding in a reporting tool. Title, affected paths, reproduction, severity. If you use Dradis, the open-sourced OWASP Top 10 2025 kit that shipped in 5.0 has a methodology and templates sized for this shape of work. If you don’t, any structured tracker works. The point is that findings stay in a system where they get triaged, not in a chat log.
  5. Have an operator who will push back on confident-sounding claims. Every asserted framework limitation, every claimed impossibility, every “this can’t be done because,” gets verified. Every single one. That is the job.
  6. Wire the audit into the ongoing pipeline, not an annual event. The 48-hour turnaround was a news-driven intensity burst. The steady-state version is AI-assisted review against every PR, against every new entry point, continuously. That is what catches the next finding before it ships.

FAQ

Did Claude find everything?

No. That’s not the right question. The question is whether AI-assisted audits find things humans miss and vice versa. This audit surfaced eight findings. A human with the same time budget would have found some overlap and some gaps. The value of AI-assisted auditing is breadth and frequency, not replacement.

What model did you use?

A single Opus session produced the primer. The per-file scans used a mix of Opus and Sonnet depending on file size and suspected complexity. Total spend for the audit was well under a single engineer-day of time at equivalent billable rates.

Why publish this? Doesn’t it invite attackers?

Two reasons. First, the fixes shipped before this post did. The findings described here are closed. Second, a serious security posture is not a secret. Customers evaluating Dradis for regulated environments ask specifically for evidence of how we operate internally. Posts like this are the evidence (and of course there is more evidence from independent third parties). A vendor who won’t tell you how they audit their own code is a vendor asking you to trust them without proof.

Is Dradis audited by third parties?

Yes. This post describes an internal AI-assisted audit specific to the 5.0 release. It is not a substitute for third-party penetration testing or code audit. Our customers cannot avoid running their own audits on the code they self-host. It’s in their nature πŸ˜‚

Can I use Echo for this kind of work?

Echo is designed for report-writing assistance (finding descriptions, remediation language, CVSS scoring), not for code audit… yet. The architecture (local Ollama, no external API, scoped permissions) is the same principle, but the model sizes and prompt patterns are different. We used Claude Code and Codex for code audits specifically.

What about the findings that were accepted rather than fixed?

Finding 7 (console logs not scoped per user) was accepted as capability-token design after analysis. The reasoning is documented in the code itself at dradis-ce#1564. The summary: the job UUID is server-generated, random, only returned to the initiating user, and treated as the read capability for the associated logs. Adding row-level scoping would have required a migration and threading user context through 25+ call sites across core and every upload plugin. The cost was not proportionate to the residual risk, and the design intent is now recorded, where a future maintainer will see it. These logs are also garbage-collected daily.
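A minimal sketch of the capability-token idea, with an in-memory store standing in for the real persistence layer (illustrative, not the Dradis schema):

```ruby
require 'securerandom'

# The server-generated job UUID is itself the read capability for the logs.
def start_job(store)
  uuid = SecureRandom.uuid # v4 UUID: 122 bits of entropy
  store[uuid] = []         # log lines; garbage-collected daily in production
  uuid                     # returned only to the initiating user
end

def read_log(store, uuid)
  store[uuid] # possession of the UUID is the authorization to read
end
```

Guessing a valid UUID is infeasible, which is why the residual risk was judged acceptable relative to threading user context through 25+ call sites.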

What we learned that will change how we ship

Three things, all of them boring.

First: the primer-first pattern beats brute force on cost, speed, and output quality. We will use it again, and we will recommend it to teams asking us how we did this.

Second: AI-assisted security code review as a continuous pipeline (not an event) is the thing that made the 48-hour window possible. We had already been running one for general coding, now we’ve added the security review flavour. Teams that want the 48-hour capability need to build theirs before the news arrives, not after.

Third: the OWASP Top 10 2025 kit that shipped in 5.0 (methodology, sample project, three report templates) was battle-tested on an actual eight-finding audit under time pressure before it reached a single customer. That’s a better confidence signal than any launch blog could have delivered.

See how Dradis handles this in practice

The audit turnaround, the OWASP 2025 kit, and the AI-assisted review pipeline are all part of Dradis 5.0. The code is self-hosted, the core is open-source, and the security fixes from this audit are in public CE PRs you can read.
If you’re evaluating platforms for a regulated environment, book a demo and we’ll walk through the deployment architecture and what this kind of audit looks like against your own team’s code. If you’re here for the AI-security angle and want to see how local AI fits into a pentest workflow, that’s the Echo page. Self-hosted, Ollama, no external API.

New in Dradis Pro v2.4

Dradis Professional Edition is a collaboration and automated reporting tool for information security teams that will help you create the same reports, in a fraction of the time.

This month we’re pleased to bring you Dradis Pro v2.4 with some long-requested improvements.

The highlights of Dradis Pro v2.4

  • Project-wide search (see below)
  • UI improvements (see below)
  • Copying of Report Template Properties
  • Word reports
    • Better file extension handling in Windows
  • Minor bug fixing.

A quick video summary of what’s new in this release:

Project search

It is now possible to perform a project-wide full-text search against Evidence, Issues, Nodes and Notes:

A screenshot showing the "All" tab with results for a "DNS" search

A screenshot from the Search results page showing only Node matches

UI improvements

Dradis is used by over 270 teams in 33 countries around the world. When people are using your platform to edit and generate content in languages as varied as Simplified Chinese, Slovenian or Turkish, it becomes very easy to spot and squash internationalisation and character encoding bugs.

With this release we’ve made sure that Tags fully support names encoded in UTF-8:

A screenshot showing a tag in simplified Chinese

Evidence multi-add

It is not uncommon to need to link the same Issue to a number of hosts in your project. We’ve redesigned the UI to make this task a lot simpler:

  • Select the Evidence template you need (or start with a blank slate).
  • Tick off the relevant items from the Existing Hosts list.
  • If needed, paste a list of new IP addresses that will be added to the project and also associated with your Issues.

A screenshot showing the new Add Evidence feature that lets you select existing nodes from a list, or paste a list of IP addresses.

Validate on save

Teams working with Dradis normally need a number of different report templates (e.g. one for vulnerability assessments and one for social engineering). To make it easy for users to remember what information they need to provide on each template, we now validate the content supplied by the user against the individual template’s requirements and present a warning if the content doesn’t match the template’s expectations:

A screenshot showing warnings about missing fields and mismatched values in a recently created issue.

Optimistic locking

Have you ever updated an Issue or Note, only to find out that one of your team mates had been editing the same content? From now on, Dradis will warn you when someone else has modified the content you were working on, so you can be confident you’re always working on the latest version:

A screenshot showing how Dradis detects a modification to the content you were just trying to edit.

Still not using Dradis in your team?

These are some of the benefits you’re missing out on:

Read more about Dradis Pro’s time-saving features, what our users are saying, or if you want to start from the beginning, read the 1-page summary.

New in Dradis Pro v2.3

Dradis Professional Edition is a collaboration and automated reporting tool for information security teams that will help you create the same reports, in a fraction of the time.

This month we’re pleased to bring you Dradis Pro v2.3 with some interesting additions.

The highlights of Dradis Pro v2.3

  • Smart issues table (see below):
    • Filter / search contents
    • Custom columns
    • Show / hide columns
  • Tabbed view for: Issues, Notes and Evidence (see below)
  • Admin > Templates > Reports improvements
  • Admin > Templates > Projects improvements
  • Redesign of empty views: project, issues, methodologies
  • Add-on enhancements
    • Acunetix: better code / syntax parsing
    • OpenVAS: bug fixing
    • Project export: improve SQL efficiency
  • Methodologies module
    • Fix task status handler (tasks w/ special chars)
    • Progressive design enhancements
  • REST/JSON API:
    • New coverage: Notes, Evidence
    • Track API actions in Activity Feed
  • Word reports
    • Image captions (see below)
    • Fix bug w/ special chars in Node labels
  • Security fixes
  • Bugs fixed: #324, #325

Smart issues table

Dradis is used by over 270 teams in 33 countries around the world. Each team has a very different way of structuring their findings. With the new smart issues table, each user can decide what information should be presented on the screen for each project:

UI improvements

A few screenshots of the recent redesigns:

A screenshot of an Issue showing tabs for Information, Evidence and Activity

A screenshot showing the All Issues table with the new controls for filtering and showing/hiding columns.

A screenshot showing the Web Application Hacker's Handbook methodology

Word image captions in action

You can now specify the caption associated with your screenshots so it appears in your reports:

A screenshot showing how to specify the caption for an image

Hover the image to show the associated caption:

A screenshot showing Dradis rendering an image with a caption.

And select a custom Caption style for your Word image captions:

A screenshot showing a Word document with an image and a caption

Still not using Dradis in your team?

These are some of the benefits you’re missing out on:

Read more about Dradis Pro’s time-saving features, what our users are saying, or if you want to start from the beginning, read the 1-page summary.

New in Dradis Pro v2.2

Dradis Professional Edition is a collaboration and automated reporting tool for information security teams that will cut your reporting time in half.

Two short months after the release of Dradis Pro v2.1 in February we’re pleased to bring you Dradis Pro v2.2 which is focused around connectivity and performance.

The highlights of Dradis Pro v2.2

  • Full REST/JSON API coverage (documentation)
  • Performance improvements: Rails 4.2, Ruby 2.2, memory monitoring.
  • Fix bug in Activity Feed of project templates.
  • Add-on enhancements:
    • CSV: export evidence data, fix CLI integration
    • HTML: fix CLI integration
  • Bugs fixed: #204, #319

The REST API

Through the new HTTP JSON API you can securely access all of the application entities, including:

Screenshot showing a GET request to the /clients endpoint

Perform CRUD operations on all application objects through an easy-to-use JSON interface.

Screenshot showing a POST request to the /issues endpoint

Use your favorite language to interact with the data contained in your Dradis environment.
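For example, from Ruby’s standard library alone you can build an authenticated request in a few lines. The path and token-header format below are assumptions for illustration; check the API documentation for your instance’s exact endpoint and token scheme:

```ruby
require 'net/http'
require 'uri'

# Build (but don't send) an authenticated GET against a hypothetical
# /api/issues endpoint of a Dradis instance.
def build_issues_request(base_url, token)
  uri = URI.join(base_url, '/api/issues')
  request = Net::HTTP::Get.new(uri)
  request['Authorization'] = %(Token token="#{token}") # assumed header scheme
  request['Accept'] = 'application/json'
  [uri, request]
end

# uri, request = build_issues_request('https://dradis.example.com', ENV['DRADIS_TOKEN'])
# Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
```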

Performance boost: faster, more responsive interface

Dradis Pro v2.2 also comes with a new version of the Rails framework and a modern version of Ruby. Both upgrades should have a significant impact on the overall performance and snappiness of the app, and they also bring some interesting security features out of the box. Strong parameters and database performance improvements on the Rails front, and garbage collection (GC) of symbols on the Ruby front, are some of the notable changes.

For the nitty gritty details please see the Rails 4.2 release notes and the Ruby 2.2 announcements.

Still not a Dradis user?

These are some of the benefits you’re missing out on:

Read more about Dradis Pro’s time-saving features, what our users are saying, or if you want to start from the beginning, read the 1-page summary.

New in Dradis Pro v2.1

Dradis Professional Edition is a collaboration and automated reporting tool for information security teams that will cut your reporting time in half.

Throughout 2016 we’re aiming to shorten our release cycle, and we’re pleased to bring you Dradis Pro v2.1 with a collection of enhancements that will make your day-to-day life a little bit easier.

The highlights:

  • DB performance improvements.
  • Session timeouts.
  • New add-ons
    • CVSSv3 score calculator.
    • DREAD score calculator.
  • Add-on enhancements:
    • Nessus: add support for compliance checks.
    • Nessus: use Node properties.
    • IssueLibrary: tagging of findings + UI improvements.
    • Rules Engine: rule sorting + UI improvements.

A few screenshots of the release

Screenshot showing the IssueLibrary entries with a badge showing their tags

Tag entries in your IssueLibrary

A screenshot showing each rule with handle bars for easy dragging / moving.

Drag and drop rules to re-order them

A screenshot showing the interface of the new calculator that lets you generate CVSSv3 by choosing the value for each subscore.

Calculate CVSSv3 scores and vectors from within Dradis

A screenshot of a piece of Evidence in Dradis with the Policy Value, the Actual Value and the Compliance Status of the check.

We can parse and export to your report Nessus’ compliance data.

How to upgrade to Dradis Pro v2.1?

Just head over to the release page and follow the instructions:

https://portal.securityroots.com/releases/latest

Still not a Dradis user?

These are some of the benefits you’re missing out on:

Read more about Dradis Pro’s time-saving features, what our users are saying, or if you want to start from the beginning, read the 1-page summary.

New in Dradis Pro v2.0

Dradis Professional Edition is a collaboration and automated reporting tool for information security teams.

Just in time for the new year a fresh release of Dradis Pro is out of the oven. We’re really excited about Dradis Pro v2.0 as it is going to allow you to have a much better understanding of what is going on in all your security assessments.

The highlights:

  • Activity Feed: see what others are doing (see below)
  • Content revisions: track and *diff* edits (see below)
  • REST API: Clients and Projects
  • New Change Value action for the Rules Engine
  • Open support ticket from the app
  • Better issue Tagging support
  • Scheduled DB cleanup
  • DB performance enhancements
  • New add-ons
    • Brakeman Rails security
    • Metasploit Framework
  • Word reports
    • Better handling of screenshots
    • Pre-export validator (see below)
Add .docx / .docm support to CLI generation
    • Report template properties (see below)
  • Plugin enhancements:
    • Acunetix issue identification accuracy
    • LDAP integration
    • NMap CLI bug fixed
    • NTOSpider additional data gathering
    • NTOSpider Plugin Manager bug fix
    • Qualys port and protocol information
  • Security fixes

Bugs fixed: #223, #301, #303, #307b

Dradis v2.0 video summary

The juiciest features in a 1m32s video:

The Activity Feed

The new Activity Feed is displayed on every view of the project. It lets you see who has been working on what (and when).

In the Project Summary page, the feed looks like this:

A screenshot showing different activities with the associated user and data (e.g. Rachel created a note), along with a link to the activity.

The project activity stream.

There is an Activity Feed for issues, evidence, notes and nodes, so nothing will slip through the cracks.

Versioned content

In addition to knowing who did what and when, we’ve taken it one step further: it is now possible to view and compare the changes that were introduced in any piece of content during the lifetime of the project:

A screenshot showing the view comparing the differences between two revisions of the same content.

Comparing two revisions of the same content.

Report template properties and pre-export validator

Finally, a handy feature on the reporting front. Since Dradis doesn’t force you to change the way you write your report, we don’t make any assumptions about how you want to work (trivia fact: Dradis has been used by over 200 teams in 32 countries and dozens of languages). As a result, sometimes there is a small discrepancy between the content in your Dradis project and what your report template is expecting.

For example, say you use High, Medium and Low for risk rating. Maybe in one of the issues somebody made a typo and used Hihg instead of the appropriate spelling. Or say that your template is expecting you to define properties for Project name and Client point of contact but you forgot? Fear not, the new pre-export validator is here to help!

A screenshot showing the different checks the validator is making.

The pre-export validator in action.
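To make the idea concrete, here is a minimal sketch of the kind of checks such a validator performs. This is illustrative only, not Dradis’ actual validator code; the allowed ratings, required properties, and the `validate_for_export` helper are all hypothetical.

```ruby
# Illustrative sketch only -- not Dradis' actual validator.
# It demonstrates the two checks described above: ratings that don't
# match the template's allowed values, and required report properties
# that were never filled in.
ALLOWED_RATINGS = %w[High Medium Low].freeze
REQUIRED_PROPERTIES = ['Project name', 'Client point of contact'].freeze

def validate_for_export(issues:, properties:)
  errors = []

  issues.each do |issue|
    unless ALLOWED_RATINGS.include?(issue[:rating])
      errors << "Issue '#{issue[:title]}': unknown rating '#{issue[:rating]}'"
    end
  end

  REQUIRED_PROPERTIES.each do |name|
    errors << "Missing report property: #{name}" unless properties.key?(name)
  end

  errors
end

puts validate_for_export(
  issues: [{ title: 'Reflected XSS', rating: 'Hihg' }],
  properties: { 'Project name' => 'ACME external test' }
)
# Issue 'Reflected XSS': unknown rating 'Hihg'
# Missing report property: Client point of contact
```

Running every check before export means the typo is caught in Dradis, not by your client reading the final Word document.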

So far we’ve got the following checks, but we’re already working on the next batch:

How to upgrade to Dradis Pro v2.0?

Just head over to the release page and follow the instructions:

https://portal.securityroots.com/releases/latest

Still not a Dradis user?

These are some of the benefits you’re missing out on:

Read more about Dradis Pro’s time-saving features, what our users are saying, or if you want to start from the beginning, read the 1-page summary.

New in Dradis Pro v1.12

Today we’re happy to announce a new release of Dradis Professional Edition: Dradis Pro v1.12. Dradis is a collaboration and automated reporting tool for information security teams.

The highlights:

New Acunetix and NTOSpider connectors
  • Updated Burp and OpenVAS connectors
  • Business Intelligence add-on (see below)
  • Rules Engine add-on (see below)
  • Reporting engine enhancements:
    • Pre-export validator
    • Native support for .docx and .docm
    • IssueCounter control
    • Concurrency enhancements
  • Bugs fixed and feature requests: #128, #131, #141, #145, #152, #184, #189, #197, #201, #205, #207, #212, #216, #232, #238, #239, #254

Rules Engine add-on

Define rules that kick in when you upload the output of a scanner. Akin to your email client’s processing rules, the Rules Engine allows you, among other actions, to:

  • Tag findings based on their fields (e.g. tag as Critical if CVSSv2 is > 9)
  • Merge several findings into a single one (e.g. group all those pesky “missing patches” entries under a single finding)
  • Replace the default description with your own. That’s right, every time Burp finds XSS, you will get a finding with your team’s custom Description / Recommendation for this vulnerability class.
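As a rough mental model, a rule pairs a condition with an action, and every uploaded finding is run through the list. The sketch below is hypothetical (the `Rule` struct, field names, and `apply_rules` helper are ours, not the actual Rules Engine data model), but it captures the email-filter style of processing:

```ruby
# Minimal sketch of filter-style rules over a hash-based finding.
# Hypothetical names throughout -- not the actual Rules Engine model.
Rule = Struct.new(:condition, :action)

RULES = [
  # Tag as Critical if CVSSv2 is greater than 9
  Rule.new(
    ->(f) { f[:cvssv2].to_f > 9 },
    ->(f) { f[:tags] << 'Critical' }
  ),
  # Replace the scanner's boilerplate with the team's own write-up
  Rule.new(
    ->(f) { f[:title] =~ /cross-site scripting/i },
    ->(f) { f[:description] = 'Our standard XSS description...' }
  )
].freeze

def apply_rules(finding)
  RULES.each { |rule| rule.action.call(finding) if rule.condition.call(finding) }
  finding
end

finding = {
  title: 'Cross-Site Scripting',
  cvssv2: 9.3,
  tags: [],
  description: 'Scanner boilerplate'
}
apply_rules(finding)
# finding[:tags] now includes 'Critical' and the description is replaced
```

Like email filters, rules fire in order, which is what makes combinations such as “tag it, then swap in our description” possible.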

A screenshot showing the list of configured rules in this Dradis Pro instance.

Define the rules that will kick in when you upload the output of a scanner.

A screenshot showing a rule definition where two findings (one from Nessus and one from Qualys) will be replaced with the team's own description of the problem.

Sample rule: de-duplicate findings.

A screenshot showing a rule definition where any finding coming from a scanner is replaced with the team's own description in the IssueLibrary

Sample rule: use your own descriptions.

Business Intelligence add-on

Most likely you’re running 100s of projects each year. The Business Intelligence add-on helps you make sense of the wealth of information that is at your fingertips but that most likely you haven’t been tracking. These are some of the questions you will be able to start answering:

  • What do you know about the types of projects you’re running (what percentage is webapps vs infrastructure)?
  • What types of clients are you serving? In what industry?
Which are the most profitable client types?
  • What percentage of your projects is under-scoped or over-scoped?

A screenshot showing the Business Intelligence view with: a list of custom properties for Clients, for Projects and a search facility.

The Business Intelligence dashboard. Define custom properties for Clients and Projects to track business metrics.

New admin layout

Yes, we finally have a layout like it’s 2015 (well, maybe 2013), but it’s a great improvement over our previous bare-bones one. Here are just a couple of quick examples:

A screenshot showing the project selection view inside Dradis Pro.

Project section view.

A screenshot showing the list of users registered in a Dradis Pro instance.

All users registered in the Dradis Pro instance.

How to upgrade to Dradis Pro v1.12?

Just head over to the release page and follow the instructions:

https://portal.securityroots.com/releases/1.12.0

Still not a Dradis user?

These are some of the benefits you’re missing out on:

Read more about Dradis Pro’s time-saving features. Or if you want to start from the beginning, read the 1-page summary.

New in Dradis Pro v1.11

Today we’re happy to announce a new release of Dradis Professional Edition: Dradis Pro v1.11. Dradis is a collaboration and report generation tool for information security teams.

The community of Dradis users is very passionate about their craft and they rely on us to run their infosec practice. We live to make their lives better by taking as much of the grunt work and repetition involved in delivering each project off their plates. Part of that effort also consists of creating great documentation to make the most out of Dradis, and we have two new manuals:

  • Working with projects: covering every module you will use on a day-to-day basis when running a project with Dradis.
  • Custom Word reports: showing you how our flexible reporting engine can be used to adapt your existing report template.

As promised a few months ago, we keep our focus on software quality and continuously raising the bar for ourselves. As a result this release is more about stability, performance, and enhancing existing functionality than it is about introducing flashy new features (not that we’re not working on flashy new features, of course we are, and they’ll blow your socks off when you see them, but they are not part of this release ;)).

Without further ado, the highlights of this release:

  • Performance improvements for really large projects. Running internal assessments with 100s of hosts and 1000s of vulnerabilities is completely painless.
  • Enhancements to the reporting engine:
    • Filter Issues by tag
    • Better screenshot support
    • Better paragraph / text styling detection
    • Better internal formatting (when inside Word tables)
    • Background report generation
  • Onboarding Tour for new users
  • In-project methodology editor
  • Drop old interface support
  • Bugs fixed: #20, #24, #50, #52, #55, #74, #142, #143, #146, #147, #151, #159

How to upgrade to Dradis Pro v1.11?

Just head over to the release page and follow the instructions:

https://portal.securityroots.com/releases/1.11.0

Still not a Dradis user?

These are some of the benefits you’re missing out on:

Read more about Dradis Pro’s time-saving features and pricing. Or if you want to start from the beginning, read the 1-page summary.

Dradis Pro is sponsoring BSides London 2014

Dradis Professional is sponsoring the next edition of the B-Sides London security conference:

http://www.securitybsides.org.uk/

B-Sides London 2014 will be held at the Kensington and Chelsea Town Hall on April 28, 2014 in London, UK.

We’ve put together a page for the event and are raffling a Dradis Pro license, read more at:

http://securityroots.com/dradispro/events/bsideslondon2014.html

Are you planning to attend or want to get in touch? Contact us or ping us on Twitter: @dradispro

New in Dradis Pro v1.10

Today we’re happy to announce a new release of Dradis Professional Edition: Dradis Pro v1.10.

March 2014 has been a great month: first we took part in Corelan Team’s 5th Anniversary party then we attended the first edition of the Rooted Warfare event and now we have a fresh release ready for you (yes, yes, technically we’re not in March any more, but it’s close enough!).

It’s been only 3 months since our last release, but this one is full of action:

  • A more useful Project Summary view (see below).
  • Tag issues and group them by tag.
  • New Project Template manager.
  • Performance improvements to several plugins (Nmap, Word, etc.)
  • Improvements to the management console (see below).
  • Several improvements on the UTF-8 and i18n front.
  • And of course bug fixes, lots of bug fixes
    (#43, #44, #64, #65, #72, #75, #77, #78, #85,… full list)

Let’s take a closer look at some of the most significant enhancements…

Interface improvements

This is what the new Project Summary page looks like:

A screenshot showing the new Project Summary view. Includes an issue chart and a methodology progress meter

All in all, the new Project Summary gives you a nice big picture overview of what is going on with the project. This is great for team leaders and technical directors wanting to keep an eye on the projects across the board. And if the client asks for an update, you’ll have all the information you need in a single screen. Nice and easy.

Let’s delve into the key components of this new summary view.

Finding tagging

First of all, it is now possible to group and tag your findings. You can define your own categories and colors or you can use the default ones, up to you.

When it comes to the actual assigning, a nice drag-and-drop interface makes it a straightforward and intuitive process:

A screenshot showing the interface that allows you to drag issues and drop them into the right category

Track methodology progress

Testing methodology support was introduced some time ago. However, in this release we’re making it a lot easier to keep track of how much progress you and your team have made.

A screenshot showing the new graph that keeps track of your progress in the methodologies of the project.

You can of course create your own testing methodologies. But remember that to help you get started there are quite a few already available in the Resources section of our Users Portal:

http://securityroots.com/dradispro/extras.html

Management console improvements

We have some good news on the Dradis CIC front as well.

There are a few services ticking along in the background to make sure you have a great Dradis Pro experience. Every once in a while, however, you may want to restart some of these services (e.g. you developed a new custom plugin, you made a change to your MySQL config, etc.). Before, you had to roll up your sleeves and prepare for some good old console goodness. Not any more! From now on, it is possible to check the status of the different services and restart them from the web interface itself:

A screenshot of Dradis' Admin Console showing an interface that lets you re-start the different services the app depends on.

How to upgrade to Dradis Pro v1.10

Just head over to the release page and follow the instructions:

https://portal.securityroots.com/releases/1.10.0

Still not a Dradis user?

These are some of the benefits you will get:

Read more about Dradis Pro’s time-saving features.