Research · Platform Engineering

Parallel Reasoning for Cross-Domain Compliance Assessments

Most AI systems generate answers sequentially — one prompt in, one answer out. Acompli uses a different pattern. When an assessment requires reasoning across multiple knowledge domains simultaneously, the system splits the generation into parallel domain-specialised paths, each with a filtered context window, then reconciles the outputs into a single coherent response.

The sequential generation problem

A data protection impact assessment is not a single question. It is a structured questionnaire spanning legal analysis, technical architecture, organisational measures, risk evaluation, and data subject rights — often forty or more questions that collectively describe how a processing activity handles personal data.

The naive approach is to send all of this to a single LLM call with a single system prompt. The model gets one persona, one instruction set, one framing. This works — to a point. But the quality ceiling is low, because the generation task is not uniform across question types.

A question about the lawful basis for processing requires legal reasoning grounded in GDPR Article 6. A question about technical security measures requires architectural thinking grounded in the organisation's IT systems. A question about data subject rights requires procedural knowledge about how access requests are handled operationally. These are fundamentally different cognitive tasks, and a single system prompt cannot optimise for all of them simultaneously.

A prompt tuned for legal precision produces stilted, overly cautious answers to operational questions. A prompt tuned for practical operational language undersells the legal analysis. A prompt that tries to do both does neither well.

What forking looks like

Parallel specialised generation splits the process into domain-specific paths. Each fork receives:

  • A tailored system prompt — the persona, tone, and reasoning style optimised for the question type. A legal fork writes with the precision of counsel. A technical fork writes with the confidence of an engineering lead. An operational fork writes in the practical language of a compliance manager.
  • A filtered context window — not the entire knowledge base dump, but the subset most relevant to that fork's questions. The legal fork sees entity records emphasising legal bases, DPA status, and transfer safeguards. The technical fork sees security measures, encryption standards, access controls, and architecture details. The operational fork sees retention schedules, workflow descriptions, and incident response procedures.
  • A domain-specific generation overlay — additional instructions that shape how the model uses its context. The legal fork is instructed to cite specific GDPR articles and never assert a legal basis without KB evidence. The technical fork is instructed to reference specific system names and certifications. The operational fork is instructed to describe processes in present tense with named roles.
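The three fork inputs above can be sketched as a single configuration record per fork. This is an illustrative structure, not Acompli's actual schema; the field and prompt contents are assumptions based on the description above.

```python
from dataclasses import dataclass

@dataclass
class ForkConfig:
    name: str                 # "legal", "technical", or "operational"
    persona_prompt: str       # tailored system prompt for this fork
    context_fields: list      # KB fields emphasised in this fork's window
    overlay: str              # domain-specific generation instructions

# Hypothetical legal-fork configuration
LEGAL_FORK = ForkConfig(
    name="legal",
    persona_prompt="Write with the precision of counsel.",
    context_fields=["legal_basis", "dpa_status", "transfer_safeguards"],
    overlay="Cite specific GDPR articles; never assert a legal basis "
            "without knowledge-base evidence.",
)
```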

Each fork generates answers for its assigned questions independently. The outputs are then reconciled in a merge pass that checks for cross-fork consistency — ensuring that the legal basis stated by the legal fork aligns with the processing purposes described by the operational fork, and that the systems referenced by the technical fork match the entities in the legal fork's answers.
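The fork-then-reconcile flow might look like the following minimal sketch, assuming a routing table of questions per fork. `generate_answers` stands in for a real LLM call with the fork's persona and filtered context, and the merge here is simplified; the real reconciliation pass also checks cross-fork consistency.

```python
from concurrent.futures import ThreadPoolExecutor

def generate_answers(fork_name, questions):
    # Placeholder for an LLM call using this fork's persona prompt
    # and filtered context window.
    return {q: f"[{fork_name}] answer to {q}" for q in questions}

def run_forks(routed):
    # routed: {fork_name: [questions]} — each fork generates independently.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(generate_answers, name, qs)
                   for name, qs in routed.items()}
        return {name: f.result() for name, f in futures.items()}

def reconcile(fork_outputs):
    # Simplified merge; a real pass would also align entity names,
    # retention periods, and legal bases across forks.
    merged = {}
    for answers in fork_outputs.values():
        merged.update(answers)
    return merged

outputs = run_forks({"legal": ["Q1"], "technical": ["Q2"],
                     "operational": ["Q3"]})
assessment = reconcile(outputs)
```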

[Diagram: assessment questions and KB context are classified and forked into legal, technical, and operational paths (each with its own persona and context emphasis), then merged in a reconciliation pass that enforces consistency, terminology, and tone, producing a unified assessment output.]

The three-tier knowledge base

Forking is only useful if each fork gets the right context. The knowledge base is organised into three tiers that feed into forks differently:

Organisation-wide knowledge — IT systems, third parties, locations, policies, and certifications that apply across all assessments. Every fork sees this tier, but each fork's context window emphasises different fields. The legal fork gets legal bases and DPA statuses prominently. The technical fork gets security measures and architecture details prominently. Same entities, different emphasis.

Project-specific knowledge — uploaded documents, data flow diagrams, and configuration details specific to the assessment. These are distributed to forks based on content classification. A network architecture diagram goes to the technical fork. A data sharing agreement goes to the legal fork. A process description goes to the operational fork.

User response context — draft answers, notes, and annotations the user has already provided. These are critical because they represent the user's intent. If a user has drafted a partial answer to a legal question, the legal fork receives that draft as context to enhance rather than replace. If a user has left a note saying "check with IT about the encryption standard," the technical fork receives that note.
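The "same entities, different emphasis" behaviour can be illustrated as a reordering of fields per fork. This is a sketch under assumed entity records and field names; real context windows would be serialised prompts, not dicts.

```python
# Hypothetical organisation-wide KB entity record
ORG_KB = {
    "Workday HRIS": {
        "legal_basis": "Art. 6(1)(b)", "dpa_status": "signed",
        "encryption": "AES-256", "access_control": "SSO + MFA",
        "retention": "7 years",
    },
}

# Fields surfaced prominently per fork (illustrative)
EMPHASIS = {
    "legal": ["legal_basis", "dpa_status"],
    "technical": ["encryption", "access_control"],
    "operational": ["retention"],
}

def build_context(fork, kb):
    # Emphasised fields first, remaining fields after:
    # filtering controls prominence, not access.
    ctx = {}
    for entity, fields in kb.items():
        ordered = {k: fields[k] for k in EMPHASIS[fork] if k in fields}
        ordered.update({k: v for k, v in fields.items() if k not in ordered})
        ctx[entity] = ordered
    return ctx

legal_ctx = build_context("legal", ORG_KB)
```

Note that every fork's context still contains all fields of every entity; only the ordering (and in a real system, the prompt weighting) differs.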

[Diagram: the three knowledge base tiers (organisation-wide systems, policies, and certifications; project-specific documents, diagrams, and configuration; user response drafts, notes, and intent) feed context filtered by question type into the legal, technical, and operational fork contexts. The same entities reach every fork with different field emphasis per fork: all three tiers are available to all forks, and the filtering controls prominence, not access.]

Why "transparent"

The forking is transparent in two senses.

First, to the user: every generated answer carries metadata indicating which fork produced it, which context sources were in that fork's window, and what system prompt variant was used. The user can inspect why an answer reads the way it does — and if they prefer a different tone, they can adjust the fork configuration.
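The per-answer metadata described above might be represented as a small provenance record. The field names here are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class AnswerProvenance:
    fork: str                  # which fork produced the answer
    context_sources: list      # KB entities/documents in that fork's window
    prompt_variant: str        # identifier of the system prompt variant used

# Hypothetical provenance attached to one generated answer
meta = AnswerProvenance(
    fork="legal",
    context_sources=["Workday HRIS", "DPA register"],
    prompt_variant="legal-v2",
)
```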

Second, to the verification layer: when the grounding verifier (Layer 4) checks a generated answer, it knows which fork produced it and which context was available to that fork. A claim that appears ungrounded may simply be grounded in context that was available to a different fork. The verifier checks across all fork contexts before flagging a claim, preventing false negatives caused by context partitioning.
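The cross-fork check can be sketched as follows: a claim is flagged only if it is ungrounded in every fork's context, not just the producing fork's. The contexts and substring matcher are placeholders; real verification would use semantic entailment rather than string matching.

```python
# Hypothetical per-fork context snippets
FORK_CONTEXTS = {
    "legal": {"legal basis: contract", "DPA signed with Workday"},
    "technical": {"Workday uses AES-256 encryption"},
    "operational": {"HR data retained 7 years"},
}

def grounded_anywhere(claim, contexts):
    # Check every fork's context, not only the fork that produced the claim,
    # to avoid false negatives caused by context partitioning.
    return any(claim in snippet
               for ctx in contexts.values() for snippet in ctx)

def verify(claim):
    return "grounded" if grounded_anywhere(claim, FORK_CONTEXTS) else "flagged"
```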

Forking vs. single-pass

The multi-source grounding architecture described in our companion article uses single-pass generation — all questions processed in one LLM call for cross-answer consistency. Forking and single-pass are not contradictory. They operate at different levels:

Single-pass ensures that when the model generates answers, it sees all questions simultaneously and can maintain consistency across them. This prevents contradictions — the same retention period everywhere, the same entity names, the same legal basis.

Forking ensures that different question types get optimised generation contexts and personas. The forks operate within the single-pass framework — the reconciliation step enforces the same cross-answer consistency that single-pass provides, but after each fork has had the benefit of a specialised context.

In practice, the system decides whether to fork based on assessment complexity. A short assessment with 10 questions from a single domain runs as a straight single-pass. A 50-question DPIA spanning legal, technical, and operational domains forks into three paths and reconciles. The decision is automatic based on question classification and count.
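The fork-or-not decision described above reduces to a routing rule over question classification and count. The threshold here is an assumption for illustration; the source states only that the decision is automatic.

```python
def should_fork(questions, min_questions=20):
    # Fork only when the assessment is large enough to benefit
    # and spans more than one domain.
    domains = {q["domain"] for q in questions}
    return len(questions) >= min_questions and len(domains) > 1

short_assessment = [{"domain": "legal"}] * 10
dpia = ([{"domain": "legal"}] * 20
        + [{"domain": "technical"}] * 15
        + [{"domain": "operational"}] * 15)
```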

The reconciliation pass

After all forks complete, a reconciliation pass runs across the combined output. This is a lightweight LLM call that receives all generated answers and checks for:

Terminology alignment — if the legal fork calls it "Workday HRIS" and the technical fork calls it "the HR system," the reconciler normalises to the most specific name used in the knowledge base.

Factual consistency — if the legal fork states a 7-year retention period and the operational fork states 5 years for the same data category, the reconciler flags the conflict and resolves it against the KB record (or flags it for human review if the KB is ambiguous).

Tone smoothing — while each fork writes in its own register, the final assessment should read as a coherent document, not as three documents stitched together. The reconciler adjusts transitions and cross-references so that the output reads as if a single senior author wrote the entire assessment.

Evidence chain validation — the reconciler verifies that evidence sources cited by one fork are consistent with claims made by another. If the technical fork cites ISO 27001 certification for a system, and the legal fork asserts adequate security measures for that same system, the reconciler links these claims to reinforce the evidence chain.
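The factual-consistency check above can be sketched deterministically for a single field: conflicting fork values are resolved against the KB record, or escalated to human review when the KB is ambiguous. The KB keying and status labels are illustrative assumptions.

```python
# Hypothetical KB record: (data category, field) -> authoritative value
KB = {("HR data", "retention"): "7 years"}

def reconcile_field(category, field, fork_values, kb=KB):
    # fork_values: {fork_name: value asserted by that fork}
    values = set(fork_values.values())
    if len(values) == 1:
        return values.pop(), "consistent"
    kb_value = kb.get((category, field))
    if kb_value in values:
        return kb_value, "resolved-from-kb"
    return None, "needs-human-review"

result = reconcile_field(
    "HR data", "retention",
    {"legal": "7 years", "operational": "5 years"},
)
```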

[Diagram: legal, technical, and operational answers flow into the reconciliation pass, which normalises terminology, resolves factual conflicts, smooths tone and transitions, and links cross-fork evidence references, producing a unified coherent assessment that then enters grounding verification (Layer 4).]

The deterministic skill layer

Not everything requires LLM reasoning. Below the forking layer sits a set of deterministic skills — rule-based operations that handle structured, repeatable tasks without LLM involvement.

These include: extracting Article 30 (RoPA) fields from completed answers using pattern matching and field mapping; classifying data categories against a controlled taxonomy; generating risk register entries from identified processing risks; formatting entity references into consistent citation styles; and validating that mandatory assessment fields are populated before submission.
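Two of the skills listed above lend themselves to a minimal sketch: mandatory-field validation and pattern-based extraction of a retention period in the spirit of the RoPA field mapping. Field names and patterns are illustrative, not Acompli's actual rules.

```python
import re

# Illustrative mandatory-field list
MANDATORY_FIELDS = ["processing_purpose", "legal_basis", "retention_period"]

def validate_mandatory(assessment):
    # Rule-based check: no LLM call, fully deterministic.
    return [f for f in MANDATORY_FIELDS if not assessment.get(f, "").strip()]

def extract_retention(answer_text):
    # Pattern-matching extraction of a retention period from free text.
    m = re.search(r"(\d+)\s*(year|month)s?", answer_text, re.IGNORECASE)
    return f"{m.group(1)} {m.group(2)}s" if m else None
```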

Deterministic skills are faster, cheaper, and more predictable than LLM calls. They handle the administrative scaffolding so that the LLM forks can focus on the reasoning tasks that actually require language understanding.

What this means in practice

A DPO using Acompli does not see the forking. They see a set of generated answers that read naturally, cite specific organisational evidence, and maintain consistency across the entire assessment. The answers about legal bases sound like they were written by someone who understands GDPR deeply. The answers about technical measures sound like they were written by someone who knows the organisation's IT architecture. The answers about operational processes sound like they were written by someone who has seen how the team actually works.

That is because, in a sense, they were. Each fork is a specialised reasoning path with the context and instructions to produce that specific kind of answer. The user gets the benefit of three specialists collaborating on a single document — without needing three specialists.