AI systems produce outputs that influence real decisions — approvals, diagnoses, recommendations, contract terms. The AI Attestation endpoint anchors those inputs and outputs as tamper-evident, cryptographically signed records at the moment of generation.
The record proves what the model received, what it produced, which model version was used, and when — independently verifiable by any third party without contacting Invoance.
Available on Professional and Enterprise plans · No model changes required
A single call to POST /ai/attestations anchors the full attestation bundle into the immutable ledger.
SHA-256 of the prompt or input payload. Proves what the model received — not just what it returned.
SHA-256 of the exact AI output bytes at generation time. Any subsequent change to the output produces a different hash.
Model name, provider, and version at time of generation. Anchored as part of the signed payload — not editable after the fact.
Recorded at ingestion and included in the signed payload. Cannot be backdated or adjusted.
The full attestation hash is signed with the tenant's private key. Proves organizational origin and integrity independently.
Every attestation produces a public URL resolvable by any third party — auditor, regulator, or counterparty — without authentication.
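As a minimal sketch of the two hash fields described above: both are plain SHA-256 digests computed over the exact bytes of the prompt and the output. The helper below uses Python's standard `hashlib`; the `sha256:` prefix follows the request format shown later on this page, and the sample strings are illustrative.

```python
import hashlib

def sha256_hex(data: str) -> str:
    # Hash the exact UTF-8 bytes, so any later change to the
    # content produces a different digest.
    return "sha256:" + hashlib.sha256(data.encode("utf-8")).hexdigest()

# Illustrative prompt and model output:
prompt = "Summarize the attached claim history."
model_output = "The claim history shows three events in 2025."

input_hash = sha256_hex(prompt)          # -> "input_hash" field
content_hash = sha256_hex(model_output)  # -> "content_hash" field
```

Because only digests leave your system, the prompt and output themselves never need to be shared with Invoance or with a verifier.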
Attestation runs alongside your existing AI pipeline. No model changes. No routing changes. One API call per output you want anchored.
response = openai.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": prompt}]
)

POST /ai/attestations
{
"attestation_type": "output",
"input_hash": "sha256:<hash_of_prompt>",
"content_hash": "sha256:<hash_of_response>",
"model": "gpt-4o",
"model_provider": "openai",
"model_version": "2025-01-01",
"signer_label": "policy-engine"
}

{
"attestation_id": "att_01HX…",
"hash": "9f3a...c21e",
"signature": "ed25519:7b1c...a4f9",
"anchored_at": "2026-02-25T14:22:01Z",
"verify_url": "https://invoance.com/verify/att_01HX…"
}

No model changes required. Attestation is a parallel operation. Your AI pipeline continues unchanged. Invoance adds the proof layer without touching inference.
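The end-to-end flow can be sketched in a few lines: hash the prompt and output, then POST the body shown above. This is a hedged sketch, not the official SDK; the base URL and `Authorization` header name are assumptions, and only the body fields come from the example request on this page.

```python
import hashlib
import json
import urllib.request

API_URL = "https://api.invoance.com/ai/attestations"  # assumed base URL

def build_attestation_body(prompt: str, output: str) -> dict:
    # Mirrors the request body shown above; the hashes bind both
    # the model's input and its exact output bytes.
    def h(s: str) -> str:
        return "sha256:" + hashlib.sha256(s.encode("utf-8")).hexdigest()
    return {
        "attestation_type": "output",
        "input_hash": h(prompt),
        "content_hash": h(output),
        "model": "gpt-4o",
        "model_provider": "openai",
        "model_version": "2025-01-01",
        "signer_label": "policy-engine",
    }

def attest(prompt: str, output: str, api_key: str) -> dict:
    # One POST per output you want anchored. Bearer auth is assumed.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_attestation_body(prompt, output)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

Because the call is independent of inference, it can run on a background worker or queue; a transient attestation failure never blocks the model response itself.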
Unverified AI outputs become undefended liabilities when challenged. Estimate your organization's exposure based on event volume and sector.
At scale, unverified events become undefended liabilities
Every automated decision, approval, or system action your organization produces is a potential audit point. Without cryptographic proof at the moment of creation, each one depends on trust when challenged. Adjust your volume and sector to see what that exposure looks like.
Assumes 0.1% of recorded events may be subject to dispute, audit, or regulatory review. Claim values reflect published industry averages for contested records.
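The calculator's arithmetic is straightforward; a sketch of the estimate, using the 0.1% dispute-rate assumption stated above. The average claim value is a placeholder you would replace with your sector's published figure.

```python
def estimated_exposure(events_per_year: int,
                       avg_claim_value: float,
                       dispute_rate: float = 0.001) -> float:
    # dispute_rate of 0.1% mirrors the assumption stated above;
    # avg_claim_value is a hypothetical sector-specific input.
    return events_per_year * dispute_rate * avg_claim_value

# e.g. 1,000,000 events/year at a hypothetical $50,000 average claim:
# 1,000,000 * 0.001 * 50,000 = 50,000,000.0
exposure = estimated_exposure(1_000_000, 50_000)
```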
Figures are illustrative and do not constitute legal or financial advice.
AI outputs are challenged when they cause harm, deny services, or produce costly mistakes. In every case, the question is identical: can you prove what your system produced, unchanged, at a specific time?
Scenario: AI-assisted clinical recommendation challenged 18 months later.
Without attestation: Internal logs and screenshots presented. Opposing expert contests chain of custody.
With attestation: Attestation record shows exact model output, version, and timestamp. Tamper-evident by construction.

Scenario: Regulator audits 50,000 AI credit decisions for post-hoc adjustment.
Without attestation: Database exports produced. Auditor requires additional attestation from engineering team.
With attestation: Every decision has an immutable attestation with hash, signature, and public verification URL.

Scenario: Dispute over an AI-drafted contract clause; opposing party claims human intervention.
Without attestation: No mechanism to prove when the AI generated the clause versus when it was edited.
With attestation: Attestation anchors the AI output at generation time, independent of document history.

Scenario: Employment discrimination claim over AI hiring recommendations.
Without attestation: Original AI outputs reconstructed from memory and logs. Credibility contested.
With attestation: Original AI recommendations cryptographically anchored before human review.
Several regulatory frameworks now require or strongly imply that AI systems operating in high-stakes domains maintain auditable records of inputs, outputs, and model behavior. Attestation is not a legal conclusion — consult qualified counsel for your specific obligations.
2026 enforcement: High-risk AI systems must maintain logs of inputs and outputs for post-market monitoring and regulatory audit.
SaMD requirements: Software as a Medical Device using AI must support transparency and traceability of model decisions.
Federal Rules of Evidence: Electronically stored records are self-authenticating when generated and stored in the regular course of business with process integrity controls.
Employment law: Employers using AI in hiring are responsible for demonstrating the absence of discriminatory outcomes, including the ability to produce original AI outputs.
Financial regulation: Adverse action notices require that AI-driven credit decisions can be explained and the original decision preserved.
AI attestation uses the same cryptographic primitives as all Invoance records — SHA-256, Ed25519, append-only Postgres — extended with model metadata, input binding, and AI-specific proof fields.
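A third party holding the tenant's published public key can check a record offline. The sketch below uses the widely available `cryptography` package; it assumes raw-byte key and signature encodings, which is an assumption about the format, not a documented Invoance detail.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def verify_attestation(public_key_bytes: bytes,
                       signature: bytes,
                       attestation_hash: bytes) -> bool:
    # Checks the Ed25519 signature over the attestation hash.
    # Raw 32-byte key / 64-byte signature encodings are assumed here.
    key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        key.verify(signature, attestation_hash)
        return True
    except InvalidSignature:
        return False
```

A valid signature proves both origin (only the tenant's private key could have produced it) and integrity (any change to the attested hash invalidates it), with no call back to Invoance.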