Intro to Dradis Echo

Echo is a context engine that understands Dradis finding fields, project data, and your team's standards. It runs locally via Ollama, an open-source runtime for running large language models on your own hardware, which lets you choose the model that powers your suggestions. No external APIs. No cloud processing. No data leaving your network. Echo understands where you are in your workflow and what you're working on, and delivers the right suggestion at the right moment.

Dradis Echo is currently in beta. The up-to-date guide to setup and configuration is available in Echo's GitHub repository.

Echo uses a local Ollama installation to connect Dradis to your preferred LLMs.

Running a local LLM is resource-intensive. We find this guide to be a reliable reference for system requirements. In brief, we recommend at a minimum:

  • A quad-core CPU
  • 32 GB RAM
  • An SSD for storage, with at least 12 GB of free space for the LLM
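Once Ollama is running, it exposes an HTTP API on your machine that Dradis Echo can talk to. As a rough sketch of what that exchange looks like, the snippet below builds the JSON body for Ollama's `/api/generate` endpoint. The model name and prompt are illustrative assumptions, not values Echo itself uses:

```python
import json

def build_generate_request(model, prompt):
    """Build the JSON body for a request to Ollama's local /api/generate
    endpoint (served by default at http://localhost:11434).
    Model and prompt here are placeholders, not Echo's actual values."""
    return json.dumps({
        "model": model,      # e.g. a model you pulled with `ollama pull`
        "prompt": prompt,    # the text the model should complete
        "stream": False,     # return one complete response, not a stream
    })

body = build_generate_request("llama3", "Summarize this finding: ...")
```

Because everything is served from localhost, the prompt and the model's response never leave your network.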
