Pentest Data Sovereignty: Why Your Reporting Platform Is a Security Decision
A pentest report contains detailed instructions for compromising your systems. Open ports, network topology, authentication bypasses, proof-of-concept payloads, screenshots of successful exploits. Taken together, it is a map that tells an attacker exactly where to walk. When the engagement is over, that map lives somewhere. Where it lives, who can access the infrastructure it sits on, and what happens to it if that infrastructure is breached are questions your security team should be answering with the same rigor they applied to the pentest itself.
This is the definitive resource on pentest data sovereignty: what it means, why it matters more for pentest findings than for most data types, what the architectural options are, and how to evaluate them for your organization.
Key Takeaways
- Pentest findings are not generic business data with some PII mixed in. They are detailed blueprints for compromising the target organization's infrastructure, which makes their governance a security problem, not just a compliance one.
- A cloud-hosted pentest reporting platform that aggregates findings from hundreds or thousands of engagements is a high-value target for sophisticated threat actors. The SolarWinds breach demonstrated that compromising a trusted security vendor once can yield access to thousands of downstream organizations.
- Self-hosted deployment eliminates the vendor-as-target risk entirely: the vendor cannot be breached for data they never received, compelled to hand it over, or subpoenaed for records that do not exist on their infrastructure.
- Open-source code allows any customer, auditor, or regulator to verify that no telemetry, usage data, or findings are transmitted externally. Proprietary self-hosted software can make data residency claims that are difficult to verify without source access.
- Under GDPR Article 28, self-hosted deployment eliminates the data processing relationship entirely. There is no third-party processor, no DPA to negotiate, and no processor compliance to monitor.
- Self-hosted deployment is operationally more demanding than cloud SaaS. Teams without dedicated infrastructure or DevOps capacity may find that a cloud vendor with strong SOC 2, ISO 27001, and a well-structured DPA is the pragmatic choice. The right answer depends on your risk profile, not on a vendor's marketing page.
Pentest data is not normal data
Most data governance conversations treat all sensitive data as roughly equivalent: PII here, financial records there, everything gets encrypted at rest and in transit, access controls applied, DPA signed. Pentest data breaks that model.
A vulnerability assessment of a hospital network produces a document that describes exactly how to access patient records, disable medical device monitoring, and move laterally through the network. A pentest of a power utility produces a map of SCADA system access points, authentication weaknesses, and network segmentation gaps. A web application test of a payment platform produces proof-of-concept exploits for the transaction processing system.
These are not records that need to be "protected" in the same way a customer database needs protection. These are records that, in the wrong hands, enable the specific attacks they describe. The governance question is proportionally more serious: where this data lives is not a policy decision. It is an architectural one.
The vendor-as-target problem
In December 2020, CISA published Advisory AA20-352A documenting the SolarWinds supply chain compromise. A nation-state actor infiltrated SolarWinds' build pipeline and distributed a backdoor through routine software updates to approximately 18,000 customer organizations, including US government agencies, critical infrastructure operators, and major enterprises. The attacker did not target each victim individually. They compromised one trusted security vendor and inherited access to the entire customer base.
MITRE ATT&CK classifies this as T1195.002 — Supply Chain Compromise: Compromise Software Supply Chain and T1199 — Trusted Relationship. It is not a novel technique. It is a documented, repeatable attack pattern that sophisticated actors use when the target set is large and the aggregation point is obvious.
Now apply that pattern to a cloud-hosted pentest reporting platform.
A SaaS pentest tool that serves hundreds of security teams aggregates something unusual: not log data, not email, not source code, but the exact vulnerabilities, attack paths, and exploitation evidence for every organization those teams have tested. Government agencies. Defence contractors. Financial institutions. Healthcare providers. Critical infrastructure operators. If any of those organizations, or any consultancy serving them, uses the platform, the platform holds the map.
The question is not whether any specific cloud pentest vendor has been publicly breached. Absence from public breach records is not evidence of security; sophisticated actors, by definition, prioritize remaining undetected. The relevant question is: what defensive resources can a single SaaS company command against a well-funded nation-state actor motivated by what that company holds?
The reader can complete that arithmetic.
NIST SP 800-161 Rev. 1 (Cybersecurity Supply Chain Risk Management) and the NCSC's supply chain security guidance establish the principle directly: supply chain risk management cannot rely on vendor assurances alone. It requires architectural controls that limit exposure regardless of vendor behaviour.
Three architectures, three risk profiles
Cloud SaaS (vendor-hosted)
Findings are stored on the vendor's cloud infrastructure. Data residency depends on the vendor's hosting provider, data center locations, and DPA terms. Security depends on the vendor's infrastructure security, access controls, patch management, and incident response.
What you get: Zero infrastructure overhead. The vendor handles deployment, updates, backups, and availability. Fastest time to value. Lowest operational burden.
What you accept: The vendor has access to your data. Their staff can access it under their internal access model. Their infrastructure is a target. Their compliance posture can change between audit cycles. A breach at the vendor exposes every customer's findings, not just yours. Cross-border data transfer is possible unless explicitly restricted in the DPA. You are dependent on the vendor's continued operation, pricing stability, and security practices.
When this is appropriate: Teams with limited infrastructure capacity, low regulatory exposure, and findings that would not cause catastrophic harm if exposed. A boutique consultancy testing small business web applications has a different risk profile than a team testing hospital networks.
Self-hosted, proprietary
Findings are stored on infrastructure the customer controls. The vendor has no access to customer data. Security depends on the customer's own infrastructure security.
What you get: Full data residency control. No vendor access to findings. No cross-border transfer risk. The vendor cannot be breached for data they never received.
What you accept: Infrastructure overhead — deployment, updates, backups, and availability are your responsibility. The software is proprietary, so you cannot inspect the code to verify that no telemetry or data exfiltration occurs. You trust the vendor's claims about what the software does, but you cannot independently verify them.
When this is appropriate: Teams that need data residency control but operate in environments where source code audit is not a procurement requirement.
Self-hosted, open-source
Findings are stored on infrastructure the customer controls. The vendor has no access to customer data. The source code is inspectable — any customer, auditor, or regulator can verify the software's behaviour independently.
What you get: Everything self-hosted proprietary provides, plus verifiability. Your security team can audit the code, confirm that no data leaves the environment, and verify that encryption, access controls, and session management work as documented. When an auditor asks "how do you know no data is transmitted externally?", the answer is "we read the code." Regulated buyers in government and defence procurement routinely require source code access as a condition of tooling approval. Open-source satisfies this by default.
What you accept: The same infrastructure overhead as self-hosted proprietary. Open-source does not mean zero-cost — deployment, configuration, and maintenance require resources.
When this is appropriate: Teams that need both data residency control and independent verifiability: regulated enterprises, government agencies, defence contractors, and any organization whose procurement process requires source code access or audit rights.
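A first pass at that "we read the code" review can be automated. The Python sketch below scans a source tree for common outbound-call patterns; the patterns, file extensions, and paths are illustrative assumptions, not an exhaustive audit, and a real review would also cover dependencies, background jobs, and runtime network monitoring.

```python
import re
from pathlib import Path

# Illustrative patterns only -- a real audit should cover the platform's
# actual HTTP clients, DNS lookups, and background-job code paths.
OUTBOUND_PATTERNS = [
    r"Net::HTTP",                            # Ruby standard HTTP client
    r"\bcurl\b",                             # shelling out to curl
    r"requests\.(get|post)",                 # Python requests library
    r"https?://(?!localhost|127\.0\.0\.1)",  # hardcoded external URLs
]

def find_outbound_calls(source_root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, matched text) for every suspect line."""
    hits = []
    combined = re.compile("|".join(OUTBOUND_PATTERNS))
    for path in Path(source_root).rglob("*"):
        if not path.is_file() or path.suffix not in {".rb", ".py", ".js", ".erb"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if combined.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Running this over a checkout and reviewing each hit gives an auditor a concrete starting point for the "no data is transmitted externally" question, rather than a vendor's assurance.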
Dradis has been self-hosted and open-source since 2007. Your pentest findings stay on your infrastructure. Your team can audit every line of code. Deploy on your servers, your private cloud, or fully air-gapped with no internet connection. See how it works for your team.
The GDPR angle: eliminating the processor relationship
Under GDPR Article 28, a data controller must use only data processors that provide sufficient guarantees about technical and organizational data protection measures. For any team using a cloud SaaS pentest tool, this means:
- A Data Processing Agreement (DPA) with the tool vendor
- Ongoing assessment that the processor's security measures remain adequate
- Documentation of the processing purpose, data categories, and retention
- Due diligence on sub-processors (the vendor's hosting provider, CDN, logging services)
- Breach notification chain — the vendor must notify you, and you must notify your supervisory authority, within defined timeframes
This is not onerous for well-run organizations. But it is a friction surface that compounds with every additional processor in the chain.
With self-hosted deployment, there is no data processing relationship for pentest data. The customer is both controller and sole processor. There is no DPA to negotiate, no sub-processor chain to document, no processor breach notification dependency. The GDPR obligation for pentest data reduces to the customer's own internal security measures — which they control entirely.
ENISA's threat landscape analysis (2023) identifies the processor relationship as a primary vector for cascading data exposure across organizations. Eliminating that relationship eliminates the vector.
NIS2 supply chain requirements
NIS2 (EU Directive 2022/2555) adds a supply chain dimension. Article 21(2)(d) requires essential and important entities to assess the security of their direct suppliers and service providers. Article 21(3) requires entities to evaluate each supplier's "secure development procedures." For EU essential entities using cloud-hosted pentest tools, the platform vendor is a supplier whose security must be assessed, documented, and defended during audits.
Self-hosted deployment removes the platform from the Article 21 supply chain assessment scope. Open-source deployment satisfies the Article 21(3) secure development evaluation through direct code inspection.
For the full breakdown of NIS2's specific impact on pentest data residency, see NIS2 and Pentest Data Residency: What Security Teams Need to Know.
Air-gapped and restricted environments
Some environments take the data sovereignty question to its logical conclusion. Classified facilities, defence contractors operating under ITAR, secure government networks, and organizations conducting assessments in client locations without internet access need platforms that function with zero external dependencies.
This means:
- No internet connection required after installation
- No phone-home telemetry, license checking, or update pings
- Offline license activation
- Local user management (no dependency on cloud-based SSO)
- Full functionality with no degradation in air-gapped mode
Cloud SaaS is not an option in these environments by definition. Self-hosted proprietary tools may work if they support fully offline operation — many do not, requiring periodic license checks or update downloads. Self-hosted open-source tools that support air-gapped deployment satisfy the requirement completely: the software runs, the data stays local, and nothing leaves the network.
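The "nothing leaves the network" requirement can be enforced at the host as well as at the perimeter. A default-deny egress policy is one way to do it; the iptables rules below are an illustrative config fragment that assumes an internal network of 10.0.0.0/8 and would need adapting to your topology.

```shell
# Default-deny outbound policy for the reporting host (illustrative sketch).
# Assumes the internal network is 10.0.0.0/8 -- adjust to your topology.
iptables -A OUTPUT -o lo -j ACCEPT                                 # loopback stays open
iptables -A OUTPUT -d 10.0.0.0/8 -j ACCEPT                         # internal traffic only
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT  # replies to inbound
iptables -A OUTPUT -j LOG --log-prefix "egress-blocked: "          # record attempts
iptables -A OUTPUT -j DROP                                         # everything else dropped
```

With rules like these in place, any phone-home attempt by any software on the host shows up in the logs instead of leaving the network, which turns "the vendor says it is offline" into something you can observe.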
Dradis supports fully air-gapped deployment with offline license activation and zero internet connectivity requirements.
The consultancy angle: your clients are starting to ask
If you run a consultancy, the data sovereignty question arrives through your clients. An enterprise client commissioning a pentest increasingly asks: where will findings be stored? Who has access to the infrastructure? What happens to the data after the engagement? What is your data retention policy?
For consultancies using cloud SaaS tools, each of these questions opens a conversation about the vendor's policies, not yours. Your client's security team may not accept the answer. Their procurement process may require findings to remain on infrastructure the client controls, or on infrastructure within a specific jurisdiction.
A self-hosted reporting platform changes these conversations. Your data retention policy is your data retention policy, not the vendor's. Your infrastructure security is auditable by you, not proxied through a SOC 2 report you received from a third party. When the client asks "where is the data?", the answer is specific and verifiable.
For consultancies serving EU clients subject to NIS2, the dynamic compounds. Your client must assess your security practices under Article 21(2)(d). The tools you use to process their findings are part of that assessment. A self-hosted, open-source platform simplifies the chain: your client assesses you, and the tooling assessment stops there because no third-party vendor holds the data.
This applies if / skip this if
This piece is for you if:
- You commission or run penetration tests that produce findings about critical infrastructure, regulated systems, or high-value targets
- Your procurement or vendor risk team is evaluating the security posture of your current pentest tooling
- You have clients or stakeholders asking where pentest findings are stored and who has access
- You operate under regulatory frameworks (NIS2, GDPR, ITAR, FedRAMP, HIPAA) that impose supply chain or data residency requirements
- You need to complete a security questionnaire about your pentest reporting platform and want to understand the architectural options before answering
This piece is less relevant if:
- Your pentest scope is limited to non-critical systems where finding exposure would have minimal operational impact
- Your organization has already completed a vendor risk assessment for your current platform and your auditors have accepted it
- You are looking for a general data sovereignty overview rather than the specific intersection with pentest tooling
What to do next
- Map your pentest data flow. Document where findings are created, stored, processed, and transmitted. Identify every third-party service in that chain. Each one is a link in your supply chain that regulators and auditors can examine.
- Classify your risk profile. Not all pentest data carries the same exposure risk. Findings about a critical infrastructure SCADA system have different governance requirements than findings about a marketing website's contact form. Match your tooling architecture to your highest-risk engagement type.
- Evaluate your vendor's verifiability. Can you independently confirm what the software does with your data? If it is proprietary, you are trusting the vendor's claims. If it is open-source, you can verify them. For regulated environments, this distinction matters during audits.
- Assess your infrastructure readiness. Self-hosted deployment requires infrastructure. If your team has the DevOps capacity to deploy and maintain an on-premise platform, the sovereignty benefit is clear. If not, evaluate whether the operational cost is justified by your risk profile — for some teams, a well-vetted cloud vendor is the pragmatic answer.
- Prepare your supply chain documentation now. Whether your auditors arrive under NIS2, GDPR, or an enterprise client's procurement process, having a documented answer for "where is the data and who can access it?" before the question is asked is the position you want to be in.
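The data-flow mapping in the first step can start as a structured inventory. This Python sketch is a hypothetical example: the pipeline stages and system names are placeholders, not a recommendation, and the point is simply that every third-party step it flags is a supply chain link to document before auditors ask.

```python
from dataclasses import dataclass

@dataclass
class DataFlowStep:
    stage: str          # e.g. "collection", "reporting", "delivery"
    system: str         # where the findings live at this stage
    third_party: bool   # does a vendor control this infrastructure?
    jurisdiction: str   # where that infrastructure is located

# Hypothetical inventory -- replace with your actual engagement workflow.
PIPELINE = [
    DataFlowStep("collection", "tester laptops (full-disk encryption)", False, "EU"),
    DataFlowStep("reporting", "self-hosted reporting instance", False, "EU"),
    DataFlowStep("delivery", "client-controlled SFTP drop", False, "EU"),
    DataFlowStep("ticketing", "cloud issue tracker", True, "US"),
]

def supply_chain_links(pipeline: list[DataFlowStep]) -> list[DataFlowStep]:
    """Every third-party step is a link regulators and auditors can examine."""
    return [step for step in pipeline if step.third_party]

for step in supply_chain_links(PIPELINE):
    print(f"Assess as supplier: {step.stage} -> {step.system} ({step.jurisdiction})")
```

In this made-up inventory, only the cloud issue tracker is flagged: three of the four stages stay on controlled infrastructure, so the vendor assessment scope shrinks to one system.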
Related content
Data sovereignty cluster:
- NIS2 and Pentest Data Residency — what Article 21 specifically requires for pentest tooling and how self-hosted deployment closes the compliance question
- Shadow AI in Pentesting: What Happens When Your Tester Uses ChatGPT With Client Findings — the data leakage risk when testers use external AI with client data
Deployment and architecture:
- Self-Hosted vs. Cloud SaaS — the broader deployment comparison including data sovereignty, operational portability, and vendor independence
- Dradis Deployment Options — air-gapped deployment, private cloud, offline license activation, infrastructure requirements
- Dradis Echo: AI-Assisted Pentest Reporting Without Sending Data to the Cloud — how local AI via Ollama keeps finding data on your infrastructure
Evaluation:
- Best Pentest Report Generators Compared — honest comparison of reporting platforms including data residency architecture for each
- Enterprise Pentest Management — deployment, compliance certifications, and infrastructure control for enterprise teams
- Why Dradis — the conjunction: self-hosted, open-source, compounding knowledge, operational portability
- Pentest Report Generator — the reporting automation case for teams evaluating workflow tools
- Consistency and Standards — how standardized output intersects with data governance (same data, same format, same place)
Your pentest findings deserve the same security rigor as the systems they describe. Dradis is self-hosted, open-source, and runs fully air-gapped. No vendor access to your data. No third-party processing. No residual supply chain risk. Talk to us about your deployment.