Echo is a context engine that understands Dradis finding fields, project data, and your team's standards. It runs locally via Ollama, an open-source runtime for running LLMs on your own hardware, so you choose which model powers your suggestions. No external APIs. No cloud processing. No data leaving your network. Echo understands where you are in your workflow and what you're working on, and delivers the right suggestion at the right moment.
Dradis Echo is currently in beta. The up-to-date setup and configuration guide can be found in Echo's GitHub repository.
Echo uses a local Ollama installation to connect Dradis to your preferred LLMs.
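To sanity-check your setup before pointing Echo at it, you can query Ollama's local REST API directly. The sketch below is an illustrative check, not part of Echo itself: it assumes Ollama is running on its default address (http://localhost:11434) and uses Ollama's standard /api/tags endpoint to list the models you've pulled.

```python
import json
import urllib.request

# Assumes Ollama's default local address; adjust if you changed the port.
OLLAMA_URL = "http://localhost:11434"

def list_local_models(base_url: str = OLLAMA_URL) -> list[str]:
    """Return the names of models available to the local Ollama install.

    Uses Ollama's /api/tags endpoint, which lists downloaded models.
    """
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        data = json.load(resp)
    return [model["name"] for model in data.get("models", [])]

if __name__ == "__main__":
    try:
        models = list_local_models()
    except OSError:
        print("Ollama does not appear to be running on", OLLAMA_URL)
    else:
        print("Models available to Echo:", ", ".join(models) or "(none pulled yet)")
```

If the list comes back empty, pull a model first (for example with `ollama pull`) so Echo has a model to connect to.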
Running a local LLM is resource-intensive. We find this guide to be quite accurate on system requirements. In brief, we recommend at a minimum: