Relvia Labs
Security & Compliance

Security & Compliance

Relvia is built as auditable infrastructure for high-stakes work. Data security, AI-native threat modeling, and operational controls are not features — they are the system.

Compliance posture
SOC 2 Type II
In progress
ISO 27001
In progress
GDPR
Compliant
HIPAA
On request
EU AI Act
Aligned
01 / Data Security

How customer data is stored, scoped, and bounded.

The foundational properties every Relvia deployment inherits without configuration.

D-01

Encryption everywhere

All data is encrypted in transit (TLS 1.3) and at rest (AES-256). Encryption keys are scoped per-tenant and rotated on a fixed cadence.

D-02

Tenant isolation

Customer data is logically isolated per tenant. Retrieval, evaluation, and reporting workloads operate in scoped contexts that cannot cross tenant boundaries.

D-03

No training on customer data

Customer inputs are never used to train upstream foundation models or shared between tenants. Data lifetime is governed by contract, not provider defaults.

D-04

Regional data residency

EU and US data residency options are supported. Workloads can be pinned to specific regions to meet data sovereignty and EU AI Act requirements.

D-05

Data minimization

We collect only what is required to produce the requested output. Inputs are not retained beyond the configured workflow window unless explicitly opted in.

D-06

Configurable retention & deletion

Customers control retention windows per-workflow. Hard delete is available on request and is propagated to all downstream stores within contractual SLAs.

02 / AI Security & Threat Model

Prompt injection, jailbreaks, exfiltration — modeled, not hoped against.

AI systems introduce a class of threats that traditional security frameworks don’t cover. Relvia treats them as first-class infrastructure problems with explicit mitigations, benchmarked per release.

Threat · T-01

Direct prompt injection

An adversarial user crafts an input designed to override the system prompt — getting the agent to ignore instructions, reveal internal context, or take unintended actions.

Mitigation

Strict instruction hierarchy with a privileged system context that is never co-mingled with user input. Inputs are parsed against a structured schema; outputs are validated before being released. Detected override attempts are logged and surfaced to the customer audit trail.

Threat · T-02

Indirect prompt injection (IPI)

Malicious instructions are embedded in third-party content the agent retrieves — a web page, a document, a public dataset — that attempt to hijack the model's behavior at synthesis time.

Mitigation

Retrieved content is treated as data, never as instructions. Sources are wrapped in a quarantine context with explicit role tags. Cross-source corroboration is required for any non-trivial claim. Anomalous retrieval patterns trigger automatic re-evaluation.

Threat · T-03

Jailbreak resistance

Inputs designed to bypass safety controls — encoded instructions, role-play setups, multi-turn social engineering — that aim to elicit policy-violating output.

Mitigation

Multi-model evaluation surfaces disagreement on policy-adjacent outputs. Refusal calibration is benchmarked per-release against a private adversarial suite. High-impact responses pass an independent safety classifier before being returned.

Threat · T-04

Data exfiltration

An adversary attempts to extract sensitive customer data — secrets, internal documents, prior queries — through carefully crafted prompts or tool invocations.

Mitigation

Output scanning for sensitive entities and PII before egress. Hard tenant boundaries on retrieval — no agent can read another tenant's context. Tool calls execute under per-call permission grants, never with standing access to customer stores.

Threat · T-05

Source poisoning

An adversary publishes content engineered to mislead AI research systems — false data, fabricated citations, or coordinated content designed to manufacture consensus.

Mitigation

Source reliability scoring weights provenance, recency, and corroboration before any source contributes to a claim. Multi-source corroboration is required for high-confidence findings. Anomaly detection flags coordinated content patterns for re-evaluation.

Threat · T-06

Tool & agent boundary abuse

An agent is induced to invoke tools with unsafe arguments, escalate privileges, or chain tool calls in ways that violate intended workflow scope.

Mitigation

Each tool is invoked with a per-call permission schema validated against the originating workflow. Tool execution is sandboxed; arguments are validated against type and value constraints. All tool calls produce immutable audit records visible to the customer.

threat-surface.json
relvia/security
{
  "input_layer":      ["instruction_hierarchy", "schema_validation", "rate_limiting"],
  "retrieval_layer":  ["source_quarantine", "role_tagging", "reliability_scoring"],
  "synthesis_layer":  ["multi_model_eval", "claim_grounding", "refusal_calibration"],
  "tool_layer":       ["per_call_permissions", "sandbox_exec", "argument_validation"],
  "output_layer":     ["pii_scanning", "policy_classifier", "schema_enforcement"],
  "audit_layer":      ["immutable_logs", "tenant_visible_trails", "incident_replay"]
}
03 / Operational Security

Internal controls, audit, and incident response.

O-01

Least-privilege access

Internal access is gated by role-based permissions, hardware-backed authentication, and full audit logging. No engineer has standing access to customer data.

O-02

Auditable claim trails

Every claim produced by Relvia is bound to its evidence graph. Customers can replay how a conclusion was generated, by which agent, against which sources.

O-03

Incident response

24/7 on-call rotation with defined escalation paths. Customers are notified of material security incidents within contractual SLAs, with full post-mortem disclosure.

O-04

Sub-processors disclosed

All sub-processors are publicly listed and reviewed before onboarding. Customers receive advance notice of any material change to the sub-processor list.

Data flow

How customer data moves through Relvia.

Every input, retrieval, and generation is a scoped event — logged, attributable, and bounded by retention policy.

Inputs
Encrypted, scoped to tenant, never used for upstream training
Retrieval
Source quarantine, role tagging, reliability scoring per call
Synthesis
Cross-model evaluation; claims bound to evidence graphs
Tool calls
Per-call permission schemas, sandboxed execution
Output
PII scanning, schema validation, policy classification
Storage
Configurable retention windows, hard delete on request
Audit
Immutable, customer-visible trail with replayable evidence
04 / Vulnerability Disclosure

Responsible disclosure.

We work directly with security researchers. If you have identified a potential vulnerability, please report it via the channels below. We acknowledge reports within one business day.

Email
security@relvialabs.ai
PGP fingerprint
A4F1 92D7 3B0E 6C18 — 5DAA 91FE 04C3 712B
Acknowledgement
Within 1 business day
Initial triage
Within 5 business days
Safe harbor
Good-faith research is protected — see policy on request

Need a deeper security review?

We’ll walk your security and legal teams through our architecture, controls, AI threat model, and audit posture.