
Why Acompli is built for governance, not auto-drafting

When people talk about using LLMs for compliance, the conversation often gets reduced to "speed": generate a DPIA, spit out a RoPA entry, list some risks. That framing misses the real problem. In GDPR work, the output is only valuable if it is defensible: consistent with the project reality, traceable to evidence, and reviewable by accountable owners. The European Data Protection Board's work on privacy risks and mitigations for LLM systems lands on the same theme: you need to understand data flows, identify and evaluate risks, apply mitigation measures, and continuously review and monitor residual risk — in other words, governance.

The first failure mode: input-quality sensitivity

The first failure mode is simple and familiar: garbage in, garbage out. If the initial project description is thin, if key facts are missing, or if the team's terminology is inconsistent, an LLM cannot magically "discover" truth. It can only generate the most plausible continuation of what it has been given. In a compliance context, that can look deceptively professional while still being inaccurate, incomplete, or inconsistent across sections. That's not an AI "bug" so much as a predictable outcome of low-quality inputs meeting a fluent text generator — and it's why Acompli treats the DPIA as a structured workflow, not a single prompt.
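
To make "structured workflow, not a single prompt" concrete, here is a minimal Python sketch of the first gate: drafting stays blocked until a structured intake is complete, so the model never has to guess at missing facts. The field names and checks are illustrative assumptions, not Acompli's actual data model.

```python
# Minimal sketch: gate LLM drafting on a structured intake rather than a
# single free-text prompt. Field names are illustrative, not Acompli's schema.
from dataclasses import dataclass, field, fields
from typing import Optional

@dataclass
class DpiaIntake:
    """The facts a DPIA draft must be grounded in."""
    processing_purpose: Optional[str] = None
    data_categories: list[str] = field(default_factory=list)
    data_subjects: list[str] = field(default_factory=list)
    retention_period: Optional[str] = None
    processors: list[str] = field(default_factory=list)

def missing_fields(intake: DpiaIntake) -> list[str]:
    """Names of fields that are empty or unset."""
    return [f.name for f in fields(intake) if not getattr(intake, f.name)]

def ready_to_draft(intake: DpiaIntake) -> bool:
    """Only hand the intake to the model once every required fact is on record."""
    return not missing_fields(intake)

intake = DpiaIntake(processing_purpose="Employee onboarding checks")
if not ready_to_draft(intake):
    # Ask the project owner for the gaps instead of letting the model guess.
    print("Cannot draft yet; missing:", ", ".join(missing_fields(intake)))
```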

The second failure mode: contextual misalignment

The second failure mode is more subtle: contextual misalignment. Surveys on LLM hallucinations describe how models can produce content that reads well but is not grounded in the underlying context or external evidence. [1] In a DPIA, this shows up when the model fills gaps ("the system does X", "data is retained for Y", "there is a DPA in place") because that is the shape of a good answer — even when those details were never provided. Once that happens, the text can drift away from the actual processing activity and start to reflect a generic template rather than your organisation's reality.
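
As a toy illustration of what a grounding check can look like, the sketch below flags drafted claims that cannot be traced back to anything the team actually provided. The keyword-overlap heuristic is a deliberate simplification (a production system would lean on retrieval or entailment checks), and the example sentences are invented.

```python
# Toy grounding check: flag drafted claims with no support in the provided
# inputs. Keyword overlap stands in for the retrieval/entailment checks a
# production system would use; all sentences below are invented examples.
import re

def content_words(text: str) -> set[str]:
    """Lower-cased words of four letters or more, as a crude content signal."""
    return set(re.findall(r"[a-z]{4,}", text.lower()))

def supported(claim: str, sources: list[str], min_overlap: int = 2) -> bool:
    """A claim counts as grounded if it shares enough content words with a source."""
    claim_words = content_words(claim)
    return any(len(claim_words & content_words(s)) >= min_overlap for s in sources)

sources = ["Personal data is stored in the EU and deleted after 12 months."]
draft_claims = [
    "Data is retained for 12 months and then deleted.",                 # grounded
    "A signed data processing agreement is in place with the vendor.",  # invented
]
for claim in draft_claims:
    status = "OK" if supported(claim, sources) else "FLAG: no supporting input"
    print(f"{status}: {claim}")
```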

The third failure mode: cascading hallucinations

The third failure mode is where the risk becomes operational: compounding errors in workflows. In multi-step automation, one small wrong assumption doesn't stay local. It becomes "context" for the next step, and then the next, and soon the assessment is coherent but built on sand. Research on hallucinations and generation dynamics explicitly calls out how issues during training/inference can lead to cascading errors during text generation. [2] And in practical systems, once an early section is wrong, downstream artefacts (RoPA fields, risk statements, controls, residual risk language) can inherit and amplify that initial mistake — especially when teams start copy/pasting outputs between documents. [3]
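
The sketch below shows the mechanism in miniature: each step's output becomes context for the next, so a single low-confidence inference (say, an assumed retention period) flows straight into the RoPA entry and the risk register unless something stops it. The step names, confidence scores, and threshold are illustrative assumptions, not a description of Acompli's internals.

```python
# Sketch of how one low-confidence assumption compounds across a chain,
# and how a simple gate stops it. Step names, scores, and the threshold
# are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class StepOutput:
    name: str
    text: str
    confidence: float  # e.g. derived from grounding checks or model signals

Step = Callable[[list[StepOutput]], StepOutput]

def run_chain(steps: list[Step], threshold: float = 0.7) -> list[StepOutput]:
    """Feed each step's output to the next, but pause for human review as soon
    as any step falls below the confidence threshold."""
    context: list[StepOutput] = []
    for step in steps:
        out = step(context)
        if out.confidence < threshold:
            print(f"Paused at '{out.name}' (confidence {out.confidence:.2f}); "
                  "route to a reviewer before anything downstream consumes it.")
            return context
        context.append(out)  # only vetted output becomes downstream context
    return context

# Illustrative chain: DPIA section -> RoPA entry -> risk register item.
chain: list[Step] = [
    lambda ctx: StepOutput("dpia_data_flows", "...", 0.91),
    lambda ctx: StepOutput("ropa_entry", "...", 0.55),  # retention period was inferred, not provided
    lambda ctx: StepOutput("risk_register_item", "...", 0.88),
]
run_chain(chain)
```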

Human-in-the-loop by design

This is the core reason Acompli is human-in-the-loop by design. The platform is not trying to replace accountability with automation. Instead, it turns AI into a controlled accelerator inside an end-to-end governance workflow:

  • Accidental context contamination (quality risk): incomplete or inconsistent answers can cause the model to infer details; those inferences then compound across the DPIA → RoPA → risk register chain. Acompli addresses this with knowledge-base grounding (where available), confidence/quality signals, consistency checks, and explicit review gates before anything is published.
  • Defensibility and traceability (governance risk): compliance outputs must be reviewable, auditable, and owned. Acompli's workflow approach aligns with the risk-management mindset: you can see what changed, what is low confidence, what requires sign-off, and what is safe to publish — rather than relying on "trust the model". A minimal sketch of such a review gate follows this list.
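
One way to make "nothing is published without sign-off" concrete is a small state machine over document status, where every path to publication passes through review and a named approver. This is a simplified sketch under assumed states and transition rules, not Acompli's implementation.

```python
# Simplified review-gate state machine: every path to publication passes
# through review and a named approver. States and rules are assumptions
# for illustration.
from enum import Enum
from typing import Optional

class Status(Enum):
    DRAFT = "draft"
    IN_REVIEW = "in_review"
    APPROVED = "approved"
    PUBLISHED = "published"

ALLOWED = {
    (Status.DRAFT, Status.IN_REVIEW),
    (Status.IN_REVIEW, Status.DRAFT),      # reviewer sends it back for rework
    (Status.IN_REVIEW, Status.APPROVED),
    (Status.APPROVED, Status.PUBLISHED),
}

def transition(current: Status, target: Status, reviewer: Optional[str] = None) -> Status:
    """Refuse any move that skips review, and require an accountable owner
    for approval and publication."""
    if (current, target) not in ALLOWED:
        raise ValueError(f"{current.value} -> {target.value} is not allowed")
    if target in (Status.APPROVED, Status.PUBLISHED) and not reviewer:
        raise ValueError("approval and publication need a named reviewer")
    return target

status = Status.DRAFT
status = transition(status, Status.IN_REVIEW)
status = transition(status, Status.APPROVED, reviewer="dpo@example.com")
status = transition(status, Status.PUBLISHED, reviewer="dpo@example.com")
print(status.value)  # published
```

Encoding the allowed transitions explicitly means "publish" simply cannot be reached from a draft: the workflow, not the model, decides what enters the record.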

The bottom line

Put plainly: Acompli exists because compliance teams don't need more generated text — they need a system that prevents plausible nonsense from entering the record, and that stops small uncertainties from turning into large downstream errors. The value isn't that the AI can draft; the value is that Acompli makes AI usable in a regulated documentation lifecycle by enforcing structure, signals, review, and publication control.

References

  1. A Survey on Hallucination in Large Language Models — arXiv, 2023
  2. A comprehensive taxonomy of hallucinations in Large Language Models — arXiv, 2025
  3. Hallucination Mitigation for Retrieval-Augmented Large Language Models — MDPI, 2025