AI attestation

AI systems produce outputs that influence real decisions: approvals, diagnoses, recommendations, contract terms. The AI Attestation endpoint anchors both the input and the output of each such decision as tamper-evident, cryptographically signed records at the moment of generation.

The record proves what the model received, what it produced, which model version was used, and when — independently verifiable by any third party without contacting Invoance.

Available on Professional and Enterprise plans · No model changes required

What gets recorded

A single call to POST /ai/attestations anchors the full attestation bundle into the immutable ledger.

Input hash

SHA-256 of the prompt or input payload. Proves what the model received — not just what it returned.

Output hash

SHA-256 of the exact AI output bytes at generation time. Any subsequent change to the output produces a different hash.

Model metadata

Model name, provider, and version at time of generation. Anchored as part of the signed payload — not editable after the fact.

Timestamp

Recorded at ingestion and included in the signed payload. Cannot be backdated or adjusted.

Ed25519 signature

The full attestation hash is signed with the tenant's private key. Proves organizational origin and integrity independently.

Public verification link

Every attestation produces a public URL resolvable by any third party — auditor, regulator, or counterparty — without authentication.
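The hash fields above can be computed client-side with standard tooling before anything is sent. A minimal sketch using Python's hashlib; the exact canonicalization Invoance expects for the payload bytes is an assumption here, and the sample prompt and output are illustrative:

```python
import hashlib

def sha256_field(data: bytes) -> str:
    """Hex-encode a SHA-256 digest in the 'sha256:<hex>' form used by the API."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

# Illustrative prompt and output bytes.
prompt = b"Summarize the attached claim history."
output = b"The claimant filed three claims in 2025."

input_hash = sha256_field(prompt)      # proves what the model received
content_hash = sha256_field(output)    # proves what the model produced

# Tamper-evidence: changing a single word of the output yields a different hash.
tampered = output.replace(b"three", b"two")
assert sha256_field(tampered) != content_hash
```

Because the digest is computed over the exact output bytes, any later edit, however small, is detectable by recomputing the hash.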

How it works

Attestation runs alongside your existing AI pipeline. No model changes. No routing changes. One API call per output you want anchored.

1. Generate your AI output as normal

response = openai.chat.completions.create(
  model="gpt-4o",
  messages=[{"role": "user", "content": prompt}]
)

2. Submit attestation to Invoance

POST /ai/attestations
{
  "attestation_type": "output",
  "input_hash":       "sha256:<hash_of_prompt>",
  "content_hash":     "sha256:<hash_of_response>",
  "model":            "gpt-4o",
  "model_provider":   "openai",
  "model_version":    "2025-01-01",
  "signer_label":     "policy-engine"
}

3. Receive immutable proof

{
  "attestation_id": "att_01HX…",
  "hash":           "9f3a...c21e",
  "signature":      "ed25519:7b1c...a4f9",
  "anchored_at":    "2026-02-25T14:22:01Z",
  "verify_url":     "https://invoance.com/verify/att_01HX…"
}

No model changes required. Attestation is a parallel operation. Your AI pipeline continues unchanged. Invoance adds the proof layer without touching inference.
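The three steps above can be combined into a small client helper. A minimal sketch, assuming prompt and output text are hashed as UTF-8 bytes; the base URL https://api.invoance.com and the bearer-token auth header are illustrative assumptions, not confirmed endpoint details:

```python
import hashlib
import json
import urllib.request

def build_attestation(prompt: str, output: str, model: str,
                      provider: str, version: str) -> dict:
    """Assemble the POST /ai/attestations body from raw prompt and output text."""
    h = lambda s: "sha256:" + hashlib.sha256(s.encode("utf-8")).hexdigest()
    return {
        "attestation_type": "output",
        "input_hash": h(prompt),
        "content_hash": h(output),
        "model": model,
        "model_provider": provider,
        "model_version": version,
        "signer_label": "policy-engine",
    }

def submit(payload: dict, api_key: str) -> dict:
    # Base URL and bearer-token auth are assumptions for illustration.
    req = urllib.request.Request(
        "https://api.invoance.com/ai/attestations",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because only hashes leave your infrastructure, the prompt and output text themselves never need to be sent to Invoance.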

What attestation proves and does not prove

Attestation proves

The exact output the model produced at a specific time.
The exact input the model received.
Which model version was used.
The output has not been altered since anchoring.
The record was issued by the stated organization.

Outside the scope of attestation

Whether the output is accurate or correct.
Whether the model's decision was the right one.
Legal admissibility in any specific jurisdiction.
Whether the input itself was truthful or complete.
Human intent behind the query.

Attestation is a technical guarantee, not a legal or factual one. What it provides — an immutable, independently verifiable record of what happened — is exactly what is absent from most AI systems today.
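The "has not been altered since anchoring" guarantee is checkable by anyone holding the original bytes: recompute the hash and compare it to the anchored record. A sketch using only the standard library (verifying the Ed25519 signature additionally requires the tenant's public key and is omitted here; the sample output text is illustrative):

```python
import hashlib

def matches_anchor(original_bytes: bytes, anchored_content_hash: str) -> bool:
    """Return True if the content bytes still match the anchored SHA-256."""
    recomputed = "sha256:" + hashlib.sha256(original_bytes).hexdigest()
    return recomputed == anchored_content_hash

# Illustrative output and its anchored hash (as it would appear in the record).
output = b"Credit approved at 6.1% APR."
anchored = "sha256:" + hashlib.sha256(output).hexdigest()

assert matches_anchor(output, anchored)
assert not matches_anchor(b"Credit approved at 5.1% APR.", anchored)
```

Note that the check says nothing about whether the 6.1% figure was correct, only that this exact output existed unaltered at anchoring time.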
Quantify your unmitigated exposure

Unverified AI outputs become undefended liabilities when challenged. Estimate your organization's exposure based on event volume and sector.

Exposure model

At scale, unverified events become undefended liabilities

Every automated decision, approval, or system action your organization produces is a potential audit point. Without cryptographic proof at the moment of creation, each one depends on trust when challenged. Adjust your volume and sector to see what that exposure looks like.

Sector: (selectable)
Monthly events: 10,000 (adjustable from 1,000 to 500,000)

Unverified exposure: $504.0M
Estimated annual liability without proof infrastructure

Invoance annual cost: $18K
Business plan, all events anchored

Coverage ratio: 28,019:1
Proof coverage per dollar spent

Calculation breakdown
Annual events: 120,000
Challenge rate: 0.1%
Every $1 of Invoance covers $28,019 of exposure
Methodology

Assumes 0.1% of recorded events may be subject to dispute, audit, or regulatory review. Claim values reflect published industry averages for contested records.

Figures are illustrative and do not constitute legal or financial advice.
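The displayed figures can be reproduced from the stated assumptions. A sketch of the arithmetic; the roughly $4.2M average value per contested record is inferred from the rounded figures shown above, not a published Invoance constant, and the page's 28,019:1 ratio comes from unrounded inputs:

```python
monthly_events = 10_000
annual_events = monthly_events * 12         # 120,000 events per year
challenge_rate = 0.001                      # 0.1% of events disputed or audited
avg_claim_value = 4_200_000                 # assumed average per contested record

challenged = annual_events * challenge_rate # 120 contested events per year
exposure = challenged * avg_claim_value     # $504,000,000 estimated liability
annual_cost = 18_000                        # Business plan, all events anchored
coverage_ratio = exposure / annual_cost     # ~28,000:1 with these rounded inputs
```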

When attestation matters

AI outputs are challenged when they cause harm, deny services, or produce costly mistakes. In every case, the question is identical: can you prove what your system produced, unchanged, at a specific time?

Healthcare

AI-assisted clinical recommendation challenged 18 months later.

Without attestation

Internal logs and screenshots presented. Opposing expert contests chain of custody.

With attestation

Attestation record shows exact model output, version, and timestamp. Tamper-evident by construction.

Financial services

Regulator audits 50,000 AI credit decisions for post-hoc adjustment.

Without attestation

Database exports produced. Auditor requires additional attestation from engineering team.

With attestation

Every decision has an immutable attestation with hash, signature, and public verification URL.

Legal tech

Dispute over AI-drafted contract clause — opposing party claims human intervention.

Without attestation

No mechanism to prove when the AI generated the clause versus when it was edited.

With attestation

Attestation anchors the AI output at generation time, independent of document history.

HR tech

Employment discrimination claim over AI hiring recommendations.

Without attestation

Original AI outputs reconstructed from memory and logs. Credibility contested.

With attestation

Original AI recommendations cryptographically anchored before human review.

Regulatory context

Several regulatory frameworks now require or strongly imply that AI systems operating in high-stakes domains maintain auditable records of inputs, outputs, and model behavior. Attestation is not a legal conclusion — consult qualified counsel for your specific obligations.

EU AI Act · 2026 enforcement

High-risk AI systems must maintain logs of inputs and outputs for post-market monitoring and regulatory audit.

FDA AI/ML guidance · SaMD requirements

Software as a Medical Device using AI must support transparency and traceability of model decisions.

FRE 902(14) · Federal Rules of Evidence

Electronically stored records are self-authenticating when generated and stored in the regular course of business with process integrity controls.

EEOC AI guidance · Employment law

Employers using AI in hiring are responsible for demonstrating the absence of discriminatory outcomes, including the ability to produce original AI outputs.

Fair Lending / ECOA · Financial regulation

Adverse action notices require that AI-driven credit decisions can be explained and the original decision preserved.

AI attestation uses the same cryptographic primitives as all Invoance records — SHA-256, Ed25519, append-only Postgres — extended with model metadata, input binding, and AI-specific proof fields.
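An append-only ledger is commonly made tamper-evident by chaining each record's hash to its predecessor, so rewriting any historical entry invalidates every later one. A sketch of that primitive only; Invoance's actual ledger schema is not public, and the record contents here are illustrative:

```python
import hashlib
import json

def chain_hash(prev_hash: str, record: dict) -> str:
    """Hash a record together with the previous entry's hash."""
    body = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(prev_hash.encode("utf-8") + body).hexdigest()

GENESIS = "0" * 64
ledger = []
prev = GENESIS
for record in [{"attestation_id": "att_1"}, {"attestation_id": "att_2"}]:
    prev = chain_hash(prev, record)
    ledger.append((record, prev))

# Tampering with the first record breaks the chain for every later entry.
forged_first = chain_hash(GENESIS, {"attestation_id": "att_X"})
assert forged_first != ledger[0][1]
assert chain_hash(forged_first, {"attestation_id": "att_2"}) != ledger[1][1]
```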