Self-Hosted AI Pentest Reporting with Dradis Echo

AI-assisted reporting has become a standard line item in pentest management tool comparisons. The question isn't whether AI can help with finding descriptions, remediation steps, and executive summaries. It can.

The question is whether the AI runs on infrastructure you control.

Most implementations route your pentest findings through vendor-controlled cloud infrastructure. For teams without data constraints, that's a reasonable architecture. For regulated-sector teams, government contractors, and anyone operating under client NDAs that prohibit assessment data from leaving the engagement network, it's not an option. These teams aren't late adopters of AI-assisted reporting. They've been architecturally locked out of it.

Echo exists because of that gap. It runs entirely via Ollama on your own hardware. No external API calls. No data leaving your network. No vendor training on your findings. With Dradis 5.0, Echo ships as a core part of the framework, included on all plans at no additional cost.

Key takeaways

  • Echo runs AI-assisted pentest reporting locally via Ollama — no external API calls, no data leaving your network, with scoped permissions that limit what the model can access.
  • Regulated-sector teams, government contractors, and NDA-constrained engagements have been architecturally locked out of AI-assisted reporting because every other implementation routes findings through vendor infrastructure.
  • Echo handles the reporting tasks that consume practitioner time: rewriting tester notes into polished descriptions, expanding remediation steps, generating executive summaries, and extracting structured metadata from verbose notes.
  • Unlike generic AI tools, Echo is context-aware — it knows the finding's severity, affected systems, project methodology, and your team-defined writing standards.
  • Echo is included on all Dradis plans (Assess, Remediate, Enterprise) at no additional cost as of April 2026.
  • The human-in-the-loop workflow means Echo generates suggestions that you review and edit before saving — the practitioner decides, not the model.

Who Echo is for — and who it isn't

Echo solves a problem for three types of teams:

If you're a boutique consultancy (3-7 testers) burning Friday afternoons rewriting finding descriptions and formatting Word documents, Echo cuts the repetitive writing work. Your senior tester's best finding description, refined into a team prompt, becomes the standard every tester produces against.

If you're a team lead standardizing output across testers with varying writing abilities, Echo applies your team-defined prompts consistently across every project. Two testers, same finding, same quality — because the prompts encode your standards, not each tester's interpretation.

If you're a regulated buyer with data residency requirements, Echo is the only AI-assisted reporting option that runs entirely on infrastructure you control. Local Ollama runtime. No external API. Scoped permissions that limit what the model can access within your Dradis instance. Each of these is independently verifiable.

Echo is not for you if:

  • You need a fully autonomous report generator. Echo assists; it doesn't replace the practitioner.
  • You're comfortable routing pentest findings through cloud infrastructure. Cloud-based AI tools work fine for teams without data constraints.
  • You don't have the hardware to run a local LLM. Echo requires a machine capable of running Ollama (current requirements: quad-core CPU, 32GB RAM, 12GB SSD storage — verify against the setup guide for your version).

What Echo actually does in your reporting workflow

Echo handles the reporting tasks that eat practitioner time without adding security value:

Rewrite tester notes into client-ready finding descriptions. A tester drops in "SQLi in login form, parameterized queries not used, session tokens in URL" and Echo produces a polished description with proper structure, context, and language appropriate for the client audience. Review it, edit it, save it.

Expand brief remediation steps into actionable guidance. "Implement parameterized queries" becomes a detailed, client-ready remediation section with specific steps, code patterns to adopt, and common implementation pitfalls. Your team's prompts control the depth and tone.

Generate executive summaries from technical findings. Echo synthesizes across a project's findings to produce an executive summary that frames technical risk in business terms. It turns the "staring at findings trying to write the exec summary" block from hours of drafting into a starting point you refine.

Extract structured metadata from verbose notes. CVSS scores, affected systems, business impact — Echo pulls structured fields from unstructured tester notes so the data populates your report template correctly.
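As a rough illustration of how structured extraction can work with a local Ollama runtime, the sketch below builds a request for Ollama's documented `/api/generate` endpoint using its JSON output mode (`"format": "json"`). The field names and prompt wording are illustrative assumptions, not Echo's actual schema.

```python
# Hypothetical sketch of constrained metadata extraction via Ollama's JSON
# mode. Field names below are illustrative, not Echo's real schema.
FIELDS = ["cvss_score", "affected_systems", "business_impact"]

def extraction_payload(notes: str, model: str = "mistral") -> dict:
    """Build an Ollama /api/generate request that asks for JSON-only output."""
    prompt = (
        "Extract these fields from the tester notes and reply as JSON "
        "with keys " + ", ".join(FIELDS) + ":\n" + notes
    )
    # "format": "json" tells Ollama to constrain the response to valid JSON,
    # which is what lets the extracted fields populate report template slots.
    return {"model": model, "prompt": prompt, "format": "json", "stream": False}
```

Because the model is asked for valid JSON rather than free text, the response can be parsed and mapped onto report template fields without manual cleanup.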

Apply team writing standards consistently. Create prompts that encode your severity language, your remediation depth, your client tone. Every tester, every project, same standard. The prompts are yours — not vendor-defined, not model-default.
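To make the "prompts encode your standards" idea concrete, here is a minimal sketch of a team-defined prompt template. The template text, placeholders, and function name are all hypothetical; the point is that every tester renders the same template, so the model receives the same instructions regardless of who wrote the notes.

```python
# Hypothetical team prompt template encoding house writing standards.
# The wording is illustrative, not an actual Echo prompt.
TEAM_REMEDIATION_PROMPT = (
    "You write remediation guidance for {client}. Use the severity label "
    "'{severity}'. Output a numbered list of 3-6 concrete steps, "
    "no marketing language.\n"
    "Remediation notes: {notes}"
)

def render_prompt(client: str, severity: str, notes: str) -> str:
    """Same template for every tester, so output quality stays consistent."""
    return TEAM_REMEDIATION_PROMPT.format(
        client=client, severity=severity, notes=notes
    )
```

Two testers filing the same finding produce the same instructions to the model, which is what makes the final report read like one voice.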

Why context-awareness matters more than model quality

Generic AI tools receive a finding in isolation. They don't know whether it's a critical on a production banking application or an informational on a development server. They don't know your team uses "High" instead of "Critical" or that your clients expect remediation steps in a numbered list format.

Echo knows context. When it generates a suggestion, it has access to:

  • The finding's severity and affected systems
  • The project methodology and scope
  • Your team-defined prompts and writing standards
  • The structure of your report template

Context-awareness is what makes the difference between a suggestion you throw away and one you edit slightly and ship. A more powerful model without context produces more confident generic output. A smaller model with full context produces suggestions that already match your workflow.

See it in action. Request a walkthrough with your own sample findings to see how Echo handles your team's specific reporting workflow.

The architecture: local Ollama, no external API, scoped permissions

For teams where data routing matters, the architecture details are the product. Here's exactly how Echo works:

Local Ollama runtime. Echo connects to Ollama — an open-source local LLM runtime with 167,000+ GitHub stars, actively maintained, compatible with Llama, Mistral, Gemma, DeepSeek, and other models. Ollama runs on your hardware, inside your network. Switch models without changing your Dradis configuration.

No external API calls. When Echo processes a finding, the data path is: Dradis instance → Ollama (on your network) → back to Dradis. At no point do findings, prompts, or suggestions leave your infrastructure. There is no "phone home," no telemetry on processed content, no vendor-side training on your data.
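The data path above can be sketched against Ollama's documented HTTP API. The snippet below is an assumption-laden illustration of how such an integration might look, not Echo's actual implementation: the endpoint is Ollama's default local address, and the prompt wording and function names are hypothetical.

```python
import json
import urllib.request

# Ollama's default local endpoint -- traffic never leaves your network.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(finding_notes: str, model: str = "llama3") -> dict:
    """Build the payload for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": (
            "Rewrite these tester notes as a client-ready finding "
            "description:\n" + finding_notes
        ),
        "stream": False,  # return one complete response instead of a stream
    }

def suggest(finding_notes: str) -> str:
    """Round trip: Dradis instance -> local Ollama -> back. No external hosts."""
    payload = json.dumps(build_request(finding_notes)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Every hop in that round trip is addressable on your own network, which is what makes the "no external API calls" claim independently verifiable with a packet capture.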

Scoped permissions. Echo's access within Dradis is scoped to the context it needs for the current task. It doesn't have blanket access to all projects, all findings, or all historical data. The model sees what's relevant to the specific suggestion being generated.

This architecture means Echo works in environments where cloud-based AI tools are excluded by policy: air-gapped networks, classified facilities, client sites without internet access, and any engagement where the NDA prohibits data from leaving the assessment network.

Echo is included on all plans

As of April 2026, Echo is included on every Dradis plan at no additional cost:

  • Assess ($79/user/month) — for consultancies doing third-party assessments
  • Remediate ($149/user/month) — for internal teams owning remediation
  • Enterprise ($299/user/month, 5-seat minimum) — for teams with LDAP/SAML, audit logging, and legal/security requirements

Most buyers evaluating AI-assisted reporting assume it's a premium add-on. It isn't. If you have a Dradis subscription, you have Echo.

The Community Edition (free, open-source, GPLv2) supports one project at a time. For teams ready to evaluate Echo alongside multi-project support, custom reporting, and the full Issue Library, request a demo.

The human-in-the-loop model: Echo suggests, you decide

This audience has seen AI produce confident nonsense. The review-before-save workflow isn't a limitation — it's the correct design for security reporting where accuracy has consequences.

Echo generates a suggestion. You see it inline. You edit it — adjust the severity language, add a specific detail from your testing, refine the remediation steps for this client's environment. Then you save it. Or you discard it and write from scratch. The practitioner stays in control of every word that enters the report.

This matters because pentest findings carry real-world consequences. A wrong remediation step, an incorrect severity rating, or a mischaracterized business impact doesn't just look unprofessional — it can misdirect a client's remediation budget. Echo accelerates the writing; the tester owns the accuracy.

Common use cases

Web application assessment report assembly. Your team runs a standard web app engagement, identifies 15-20 findings, and needs a polished report with executive summary by Friday. Echo rewrites tester notes across all findings, expands remediation, and generates the executive summary. The work that used to consume a full afternoon becomes an editing session.

OSCP-style report standardization. For OSCP exam reports and similar structured deliverables, Echo applies consistent formatting, finding structure, and remediation depth across all entries. The methodology is fixed; the writing quality becomes consistent.

Multi-tester engagement consolidation. Three testers contribute findings from different assessment phases. Each has their own writing style and level of detail. Echo applies the same team prompts to normalize output quality, so the final report reads like one person wrote it.

Scanner output enrichment. Import Nessus, Burp, or Nmap output into Dradis, then use Echo to rewrite scanner-generated descriptions into language appropriate for your client audience. Replace vendor-generic finding text with your team's voice.

Practical next steps

  • Verify the architecture fits your requirements: review the Echo documentation for setup details and Ollama compatibility
  • See Echo with your own findings: request a demo and bring sample data from a recent engagement
  • Compare deployment models: see how self-hosted Dradis + Echo compares to cloud-based alternatives on data sovereignty and AI
  • Inspect the code: the Community Edition is open-source on GitHub under GPLv2
  • Check hardware requirements: Echo runs via Ollama, which needs a machine with a quad-core CPU, 32GB RAM, and 12GB SSD storage (verify current requirements in the setup guide)

Frequently asked questions

Does Echo send pentest findings to an external API?

No. Echo runs entirely via Ollama on your own hardware. The data path is Dradis instance to Ollama (on your network) and back. No findings, prompts, or suggestions leave your infrastructure. There is no external API call, no telemetry on processed content, and no vendor-side training on your data.

What LLM models work with Echo?

Echo works with any model available through Ollama, including Llama, Mistral, Gemma, and DeepSeek. You can switch models without changing your Dradis configuration. Ollama is an open-source local LLM runtime with over 167,000 GitHub stars, actively maintained as of April 2026.

Is Echo an additional cost on top of Dradis?

No. As of April 2026, Echo is included on all Dradis plans (Assess at $79/user/month, Remediate at $149/user/month, Enterprise at $299/user/month) at no additional cost. There is no separate AI add-on fee.

Can Echo run in an air-gapped environment?

Yes. Because Echo runs via a local Ollama instance with no external API calls, it works in fully air-gapped environments with no internet connectivity. After initial installation and model download, no network connection is required for ongoing operation.
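For teams that want to enforce the air-gap guarantee at the configuration level, a policy check like the hypothetical sketch below could refuse any LLM endpoint outside loopback or private address space. This is not an Echo feature described in the source; it is one way to verify the property yourself.

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Hypothetical policy check: accept only LLM endpoints on loopback or
# RFC 1918 private address space, rejecting anything internet-routable.
def endpoint_is_local(url: str) -> bool:
    host = urlparse(url).hostname or ""
    if host == "localhost":
        return True
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False  # unresolvable in an air-gapped network -> reject
    return addr.is_loopback or addr.is_private
```

A check like this can run at startup so a misconfigured endpoint fails loudly instead of silently routing findings off-network.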

Does Echo replace the human reviewer?

No. Echo generates suggestions that you review, edit, and approve before they're saved. The practitioner stays in control of every word in the report. Echo accelerates the writing; the tester owns the accuracy. This is a deliberate design choice — pentest findings carry real-world consequences and require human judgment.

What hardware do I need to run Echo?

Echo runs via Ollama, which requires a machine with a quad-core CPU, 32GB RAM, and approximately 12GB of SSD storage. These are the documented requirements as of April 2026 — check the Dradis support guide for your specific version, as requirements may change with different default models.


Ready to see Echo with your own findings? Request a walkthrough — bring sample data from a recent engagement and we'll show you the full workflow on your infrastructure. Or compare deployment models to see how self-hosted AI stacks up against cloud alternatives.
