Intro to Dradis Echo

Echo is a context engine that understands Dradis finding fields, project data, and your team's standards. It runs locally via Ollama, a local LLM runtime that lets you choose which model powers your suggestions. No external APIs. No cloud processing. No data leaving your network. Echo understands where you are in your workflow and what you're working on, and delivers the right suggestion at the right moment.

Echo comes pre-installed in Dradis as of v5.0. It uses a local Ollama installation to connect Dradis to your preferred LLMs.

Setup

Start the Ollama server and pull one of the models, for example:

ollama serve
ollama pull qwen2.5:14b
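Once the server is running, Echo talks to it over Ollama's local HTTP API (port 11434 by default). As a quick sanity check outside of Dradis, a sketch like this exercises the same generate endpoint; the model name is whichever one you pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST to Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("qwen2.5:14b", "Reply with the word: ready")
# With the server up, the following returns the model's JSON response:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["response"])
```

If the request fails to connect, check that `ollama serve` is still running and that nothing else is bound to port 11434.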

RAM requirements are directly tied to the model's parameter count. Treat the figures below as minimums; extra headroom makes for a smoother experience:

  • 3B–7B models: at least 8 GB RAM
  • 13B–14B models: at least 16 GB RAM
  • 30B–34B models: at least 32 GB RAM
  • 70B models: at least 64 GB RAM
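These figures line up with Ollama's default downloads, which are typically 4-bit quantized, so the weights alone take roughly half a gigabyte per billion parameters; the table leaves room on top for the KV cache, the OS, and Dradis itself. A rough back-of-the-envelope check (an illustration, not an official Dradis formula):

```python
def quantized_weight_gb(params_billion: float, bits_per_weight: int = 4) -> float:
    """Approximate in-memory footprint of the weights alone.

    Ollama's default downloads are typically 4-bit quantized,
    i.e. about 0.5 bytes per parameter.
    """
    return params_billion * bits_per_weight / 8

# The RAM figures above are roughly double these weight sizes,
# leaving headroom for the KV cache, the OS, and Dradis itself:
for params in (7, 14, 34, 70):
    print(f"{params}B -> ~{quantized_weight_gb(params):.1f} GB of weights")
```

For example, a 14B model at 4-bit is about 7 GB of weights, which sits comfortably inside the 16 GB recommendation.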

In addition to RAM, we recommend at a minimum:

  • A quad-core CPU
  • An SSD with at least 12 GB of storage space for the LLM

Configure

Configure Echo with the Ollama server address (http://localhost:11434 for a default local install) and your selected model under Tools → Tool Manager → Configure, in the Echo section.

Usage

Navigate to an Issue and click the Echo tab. From there you’ll be able to Summarize or Reword your Issue content, or generate a cheeky Haiku.


Last updated by Christoffer Bjørk Pedersen on 2026-04-16

