Echo is a context engine that understands Dradis finding fields, project data, and your team's standards. It runs locally via Ollama, an open-source runtime for large language models that lets you choose which model powers your suggestions. No external APIs. No cloud processing. No data leaving your network. Echo understands where you are in your workflow and what you're working on, and delivers the right suggestion at the right moment.
Echo comes pre-installed in Dradis as of v5.0. It uses a local Ollama installation to connect Dradis to your preferred LLMs.
Start the Ollama server, then run one of the models (`ollama run` downloads the model on first use):

```
ollama serve
ollama run qwen2.5:14b
```
Smaller models respond faster but can be less accurate; larger models are slower but should produce higher-quality results.
The RAM requirement is directly tied to the model's parameter count. Always aim for the recommended amount rather than the minimum for a smoother experience:
In addition to RAM, we recommend at a minimum:
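As a rough illustration of how parameter count drives memory use, the sketch below estimates RAM for a locally hosted model. The 0.5 bytes-per-parameter figure (typical of 4-bit quantized weights) and the fixed overhead are assumptions for illustration, not official Dradis or Ollama requirements:

```python
def estimate_ram_gb(params_billions, bytes_per_param=0.5, overhead_gb=2.0):
    """Rough RAM estimate for running a local model.

    bytes_per_param: assumed ~0.5 for 4-bit quantized weights, ~2.0 for fp16.
    overhead_gb: assumed fixed cost for the runtime and context cache.
    """
    return params_billions * bytes_per_param + overhead_gb

# By this rule of thumb, a 4-bit 14B model needs roughly 9 GB of RAM.
print(estimate_ram_gb(14))  # → 9.0
```

This is why the recommended amount matters: the estimate covers only the weights and a modest overhead, and real-world usage (longer contexts, other running applications) pushes actual consumption higher.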
Configure Echo with your Ollama server address and selected model under Tools → Tool Manager → Configure, in the Echo section.
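For reference, a client pointed at that server address talks to Ollama's REST API. The helper below is an illustrative sketch (not Echo's actual code) of the request a summarize-style call might send to Ollama's `/api/generate` endpoint; the default host is Ollama's standard local address:

```python
import json

def build_generate_request(model, prompt, host="http://localhost:11434"):
    """Build the URL and JSON body for Ollama's /api/generate endpoint.

    stream=False requests a single JSON response instead of a token stream.
    """
    url = f"{host}/api/generate"
    payload = {"model": model, "prompt": prompt, "stream": False}
    return url, json.dumps(payload)

# Example: the shape of a summarization request against the model above.
url, body = build_generate_request("qwen2.5:14b", "Summarize this issue: ...")
```

Because everything goes to `localhost` by default, no finding data leaves the machine unless you deliberately point the server address elsewhere.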
Navigate to an Issue and click the Echo tab. From there you’ll be able to Summarize or Reword your Issue content, or generate a cheeky Haiku.
Next help article: Dradis Echo Prompts →
Last updated by Christoffer Bjørk Pedersen on 2026-04-16