Echo: Configurable Prompts Built Into Your Workflow

Configurable prompts appear directly in your Dradis workflow. No context switching, no separate tool. Your findings never leave your network and you are not locked into one model.

AI assistance built into the workflow, not bolted on

Echo surfaces prompts directly inside Dradis - trigger a suggestion, review it, save. No separate tool, no copy-pasting, no finding data leaving your network.

Your findings stay on your infrastructure

Most AI writing tools send your text to a third-party API. For pentest findings, that is a non-starter.

Echo runs on your own infrastructure. Prompts surface directly inside Dradis, processed locally via Ollama. Your findings never leave the network. No external API calls, no third-party data handling, no data residency risk.

You are also not locked into one model. Echo's BYOLLM approach means you choose which LLM to run locally and switch as better options become available, without changing your Dradis setup or sending data anywhere new. The intelligence is yours.

Available for on-premise deployment for compliance, regulatory, or clearance requirements.

Read the full architecture deep dive: how Echo runs AI-assisted reporting without sending data to the cloud.
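To make the local-only architecture concrete, here is a minimal sketch of what a request to a locally running Ollama instance looks like. The endpoint and payload shape follow Ollama's documented REST API (`/api/generate` on port 11434); the model name and prompt are placeholders, and this is an illustration, not Echo's actual code.

```python
import json
import urllib.request

# Default endpoint of a local Ollama server - the request never leaves the host.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Assemble the request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def suggest(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the completion."""
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a local Ollama server with the model pulled):
# suggest("mistral", "Summarize: outdated TLS configuration on port 443.")
```

Because the hostname is `localhost`, no finding text crosses the network boundary; swapping models is just a change to the `model` field.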

Diagram showing Ollama and Dradis running locally within organization network

Lean on Echo to reduce reporting time

Echo generates contextual suggestions directly in Dradis - summarize raw scanner output, rewrite tester notes into executive language, or enhance brief remediation advice with detailed steps.

Echo understands your finding's severity, affected systems, and context.

Review the suggestion, edit as needed, and save. All without leaving your workflow or writing custom code.
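One way to picture how a suggestion becomes context-aware is to fold the finding's metadata into the prompt before it reaches the model. This is a hypothetical sketch; the field names (`severity`, `affected`, `text`) are illustrative, not Echo's schema.

```python
def finding_prompt(instruction: str, finding: dict) -> str:
    """Prepend finding metadata so the suggestion is calibrated to the stakes,
    not just the words on screen. Field names here are illustrative."""
    return (
        f"{instruction}\n\n"
        f"Severity: {finding['severity']}\n"
        f"Affected systems: {', '.join(finding['affected'])}\n\n"
        f"Finding text:\n{finding['text']}"
    )

prompt = finding_prompt(
    "Rewrite these tester notes as an executive summary.",
    {
        "severity": "High",
        "affected": ["web-01", "web-02"],
        "text": "SQLi in login form, auth bypass confirmed.",
    },
)
```

The same finding data can feed different instructions (executive summary, remediation detail) without the tester retyping context each time.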

Before and after comparison of finding text enhanced by Echo

Define prompts that match your workflow and keep control with human review

Echo adapts to your writing standards and client expectations. Create context-driven prompts to:

  • Summarize findings for executives.
  • Deepen technical detail for developers.
  • Extract key metadata from long write-ups.

Echo speeds up reporting, but you stay in control:

  • Generate context-aware suggestions for each finding.
  • Review, edit, and approve before saving.
  • Accelerate reporting while improving consistency.
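The review-before-save loop above can be sketched as a simple state gate: a suggestion starts as an unapproved draft, and only an explicit human sign-off lets it touch the finding. This is an illustrative model of the workflow, not Echo's implementation.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str
    approved: bool = False

    def edit(self, new_text: str) -> None:
        self.text = new_text      # reviewer refines the draft in place

    def approve(self) -> None:
        self.approved = True      # explicit human sign-off

def save_to_finding(finding: dict, suggestion: Suggestion) -> None:
    """Only approved suggestions are ever written to the finding."""
    if not suggestion.approved:
        raise ValueError("review and approve the suggestion before saving")
    finding["description"] = suggestion.text

finding = {"description": "original tester notes"}
s = Suggestion("AI-drafted remediation steps")
s.edit("AI-drafted remediation steps, tightened by the reviewer")
s.approve()
save_to_finding(finding, s)
```

The gate makes the control point explicit: the model proposes, the human disposes.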

Combined with Dradis' built-in Quality Assurance workflow, Echo helps your team deliver consistent results every time.

Screenshot of creating and saving custom Echo prompts

How Echo Enhances Your Workflow

Streamline Finding Summaries

Turn raw scanner output or tester notes into polished executive summaries in seconds. Echo understands context and delivers relevant suggestions.

Enhance Remediation Advice

Expand brief remediation steps into detailed, client-ready guidance. Echo contextualizes your findings and fills in the details.

Rewrite for Different Audiences

Generate technical versions for development teams and executive summaries for leadership - all from the same finding data.

Extract Key Details

Have Echo pull out CVSS scores, affected systems, or business impact from verbose notes, speeding up metadata entry.

Enforce Team Standards

Configure Echo prompts to use your approved severity levels, remediation language, and terminology - ensuring consistency across all reports.

Bring Your Own LLM

Choose your LLM via Ollama and switch anytime. Echo works with any compatible model - you own the prompts, the data, and the workflow. The intelligence is yours.

Why Echo suggestions are better than generic AI output

Echo prompts run with full awareness of where you are in Dradis and what you are working on. That context is what separates a useful suggestion from a generic one.

  • Finding context: Echo knows the severity rating, affected systems, and business impact of the finding you are editing - so suggestions are calibrated to the stakes, not just the words on screen.
  • Project context: Prompts run with awareness of the engagement methodology, client standards, and framework requirements - so output fits the report, not just the finding.
  • Team context: Your custom prompts and writing standards are baked in - so every suggestion reflects how your team writes, not a generic AI default.
  • Workflow context: Echo knows where you are in the reporting process - so it helps with what you need next, not something you have already handled.
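The four layers above can be pictured as a stack of prompt fragments merged before the model sees anything. The layer names mirror the list; the keys, ordering, and wording are hypothetical, shown only to illustrate how context composes.

```python
def layered_prompt(instruction: str, layers: dict) -> str:
    """Concatenate context layers (team, project, workflow, finding) ahead of
    the instruction so the model sees the full picture, not just the text."""
    order = ["team", "project", "workflow", "finding"]
    parts = [layers[name] for name in order if name in layers]
    return "\n".join(parts + [instruction])

prompt = layered_prompt(
    "Expand the remediation advice into detailed steps.",
    {
        "team": "House style: active voice, CVSS v3 severity labels.",
        "project": "Engagement follows the OWASP testing methodology.",
        "workflow": "Stage: drafting remediation sections.",
        "finding": "Severity High; affects the payment API.",
    },
)
```

A generic chatbot only ever sees the last line; the preceding layers are what turn a plausible answer into one that fits the report.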

Dradis Echo FAQs

How fast is local processing?

For finding summaries and rewrites, local processing via Ollama is typically fast. Exact latency depends on your server hardware and the model you choose.


Which LLMs work with Echo?

Echo works with any LLM available on Ollama, including Llama 2, Mistral, Neural Chat, and others. Bring your own LLM: choose the model that best fits your performance and quality needs, and switch anytime without changing your Dradis setup.


Should I trust Echo's suggestions?

Echo is a productivity tool that speeds up writing and improves consistency - not a decision-maker. All suggestions require human review before saving. Echo delivers suggestions that you refine and approve.


Can I create my own custom prompts?

Yes. You define and manage custom prompts in Dradis Echo tailored to your team's standards, audience, and use cases. Save and reuse prompts across your entire team. Echo adapts to your workflow and context.

BYOLLM: Bring Your Own LLM. The intelligence is yours.

Request a demo
