AI Governance·9 min read·May 6, 2026

How to Prove Your AI Did What It Said: A Developer's Guide to Verifiable AI Outputs

By Adeola Okunola, Founder, Invoance

Your AI's output is gone the moment it returns. Logs aren't proof. This guide shows how to attach a cryptographic receipt to every model call in three lines of code, producing a public URL that anyone can verify, no Invoance account required.

Your AI's output is gone the moment it returns

Most teams treat AI outputs like ephemeral function returns. The model responds, the response goes to the user, maybe a row gets written to a database, and that is the entire trail. When a customer disputes an answer six months later, when a regulator asks what your model produced for user 4f9a on March 14 at 2:47 PM, or when your legal team needs to defend a decision in a deposition, that database row is your evidence.

The problem is that database rows can be edited, deleted, or corrupted, accidentally or otherwise. There is nothing in the row that proves it has not been touched since the AI generated it. There is nothing that proves your timestamp column was not bulk-updated by a migration last quarter. There is nothing that proves the input column matches the prompt the model actually saw.

Your logs are a story you are asking someone to take on faith. That works until somebody has a reason to challenge them. The first thing opposing counsel, an auditor, or a regulator will do is question the chain of custody. Without a cryptographic anchor, you have no answer to that question.

AI attestation closes that gap. At generation time, you submit a small payload describing the call. The attestation service hashes the input, hashes the output, signs the whole thing with your tenant's private Ed25519 key, and writes it to an append-only ledger. You get back an attestation ID and a public verification URL. From that point on, anyone can verify what happened, without trusting you, your database, or even Invoance.

What attestation actually proves (and what it does not)

Precision matters here, because half the confusion in this category comes from people overclaiming what cryptographic proof gives you.

Attestation proves five specific things. First, the exact output the model produced at the moment of attestation. Second, the exact input the model received. Third, which model and version produced it. Fourth, that the record has not been altered since it was signed. Fifth, that the record was issued by your organization, identified by your tenant's public key.

Attestation does not prove that the output was correct, that the model's reasoning was sound, that the input was truthful, or that any human acted on the output appropriately. It does not establish legal admissibility on its own, although it produces exactly the kind of self-authenticating record that Federal Rule of Evidence 902(14) is designed for.

The distinction matters because attestation is a technical guarantee, not an editorial one. What it gives you is the evidentiary foundation everything else rests on. Without it, every downstream argument about your AI's behavior starts from "trust our logs." With it, every downstream argument starts from a signed receipt that any third party can verify in under a second.

Key insight. Attestation proves what your model said. It does not prove the model was right. Those are different problems, and conflating them is how organizations end up overpromising on governance.

Your first attestation in three lines

The Invoance Node SDK ships with a single ingest method. You instantiate the client (which reads INVOANCE_API_KEY from the environment), pass the input, output, and model metadata, and you receive an attestation record with an ID, a payload hash, and a created-at timestamp.

This call sits alongside your existing inference pipeline. It does not modify your prompts, your model, or your response shape. It does not add latency to your AI response: the attestation runs after generation, in parallel with your own logging. It is one network call.

Node SDK — first attestation
import { InvoanceClient } from "invoance";

const client = new InvoanceClient();
// Reads INVOANCE_API_KEY=invoance_live_... from env

// 1. Run your model however you do today
const userPrompt = "Summarize this contract clause";
const modelOutput = await yourLLM.complete(userPrompt);

// 2. Attest the input/output pair
const att = await client.attestations.ingest({
  type: "output",
  input: userPrompt,
  output: modelOutput,
  modelProvider: "openai",
  modelName: "gpt-4o",
  modelVersion: "2025-01-01",
  subject: { userId: "u_42", sessionId: "sess_4f9a" },
});

console.log(att.attestation_id);
// → "att_01HXY..."

What you get back

The response includes the attestation ID, the SHA-256 hashes of the input, output, and combined payload, the timestamp the record was sealed, and a status field that lets the SDK distinguish a fresh write from an idempotent retry. Every field on the response is significant.

The attestation_id is the handle you store alongside your own database row, so the next time anyone asks about that specific decision, you can hand them the attestation. The input_hash and output_hash are what a verifier compares against when they want to confirm the content has not changed since it was signed. The payload_hash is the canonical hash that was actually signed, and it is what the public verification endpoint validates against your tenant's published public key.

The status field is worth highlighting for anyone running at-least-once delivery patterns: if you submit the same payload twice, the second response returns status: "duplicate" instead of "accepted", and your client gets back the original attestation. The system will not silently double-write. This means you can wire the call into a retry loop without worrying about ledger pollution.

Response shape — POST /v1/ai/attestations → 201 Created
{
  "attestation_id": "att_01HXY7K3M9P2QV5R8WNBC4DAE6",
  "created_at": "2026-05-06T14:47:13.482Z",
  "input_hash":   "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
  "output_hash":  "2c624232cdd221771294dfbb310aca000a0df6ac8b66b696d90ef06fdefb7a72",
  "payload_hash": "ef2d127de37b942baad06145e54b0c619a1f22327b2ebbcfbec78f5564afe39d",
  "status": "accepted"
}
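The dedup behavior described above can be pictured with a local simulation. This is not the Invoance implementation (the real service defines its own payload canonicalization and ledger storage); the `ledger`, `payload_hash`, and `ingest` names below are hypothetical stand-ins that sketch why hashing a canonical payload makes retries safe:

```python
import hashlib
import json

ledger: dict = {}  # stand-in for the append-only ledger

def payload_hash(record: dict) -> str:
    # Sort keys so the same logical payload always serializes, and
    # therefore hashes, identically (a simplifying assumption here).
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def ingest(record: dict) -> dict:
    h = payload_hash(record)
    if h in ledger:
        # Same payload seen before: return the original attestation,
        # flagged as a duplicate, instead of writing a second row.
        return {**ledger[h], "status": "duplicate"}
    attestation = {"attestation_id": "att_" + h[:12], "payload_hash": h}
    ledger[h] = attestation
    return {**attestation, "status": "accepted"}

first = ingest({"input": "Summarize this clause", "output": "The clause states..."})
retry = ingest({"input": "Summarize this clause", "output": "The clause states..."})
print(first["status"], retry["status"])  # accepted duplicate
```

The key design point is that the idempotency key is derived from the payload itself, so the client does not need to generate or persist one before retrying.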

How a third party verifies it independently

This is the part that separates real attestation from "we have logs and we promise nobody changed them." When you give an auditor, a customer, opposing counsel, or a regulator the attestation ID, they hit a public endpoint (no API key, no Invoance account) and receive the full proof bundle.

The verification endpoint returns the original record (input hash, output hash, payload hash, signature, model metadata, timestamp), the public key that signed it, and a structured signature-verification result. They can also POST a content hash to the verify subroute and the service will tell them whether the content they hold matches what was sealed. If a customer disputes an output, you both compute the SHA-256 of the disputed text, send it to /verify, and the result is unambiguous.

The signature is verifiable offline as well. Each tenant has its own Ed25519 keypair, and the tenant's public key is embedded inline in every proof bundle returned by the verification endpoint, so a verifier never needs a second round trip. If a verifier wants to pin the key independently, they can fetch it directly by the tenant's verified domain at GET /keys/{domain}, which returns the base64url-encoded public key, algorithm, and key ID. Either way, anyone with the record and the public key can verify the Ed25519 signature using any standard library, without contacting Invoance at all. That is the point. The proof does not depend on us being online, in business, or trusted.

Public verification — no auth required
# Anyone can hit this. No API key. No account.
curl https://api.invoance.com/v1/proof/ai/att_01HXY7K3M9P2QV5R8WNBC4DAE6

# Or check whether a specific piece of content matches what was sealed:
curl -X POST https://api.invoance.com/v1/proof/ai/att_01HXY.../verify \
  -H "Content-Type: application/json" \
  -d '{"content_hash": "2c624232cdd221771294dfbb310aca000a0df6ac8b66b696d90ef06fdefb7a72"}'

# Response:
# {
#   "match": true,
#   "signature_valid": true,
#   "signed_by": "tenant_org_invoance",
#   "signed_at": "2026-05-06T14:47:13.482Z"
# }
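The offline check itself needs nothing beyond a standard Ed25519 library. The sketch below simulates the flow end to end with a locally generated keypair using Python's `cryptography` package; in production the private key never leaves Invoance's backend, and a verifier holds only the public key from the proof bundle or from GET /keys/{domain}:

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Simulated tenant keypair; a real verifier would load the public key
# from the proof bundle rather than ever holding the private half.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The canonical payload hash is what the service actually signs
# (the JSON bytes here are an illustrative placeholder).
payload_hash = hashlib.sha256(b'{"input_hash":"...","output_hash":"..."}').digest()
signature = private_key.sign(payload_hash)

# Verification: no network, no Invoance. Raises InvalidSignature on tampering.
public_key.verify(signature, payload_hash)
print("signature valid")

try:
    public_key.verify(signature, hashlib.sha256(b"tampered payload").digest())
except InvalidSignature:
    print("tampered payload rejected")
```

Any Ed25519 implementation (libsodium, Go's crypto/ed25519, Node's crypto.verify) runs the same check against the same public key.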
See it in action
  • AI attestation SDK reference: live, runnable code for creating and fetching AI attestations in Python and Node, with the request and response shapes used in this post.
  • Fetch a tenant's public key: each tenant has its own Ed25519 keypair. GET /keys/{domain} returns the base64url public key for offline verification.

Same flow, Python edition

For ML and data teams who live in Python, the SDK exposes the same surface. The async client is the recommended path for production pipelines because attestation is an I/O-bound network call; blocking your inference loop on it would waste capacity. Use it inside an asyncio.gather() if you are attesting batches.

The field names switch from camelCase to snake_case to match Python conventions, but the semantics are identical. The same ingest call, the same response shape, the same verification URL. If your stack is mixed (Python inference, Node application server), you can attest from one and verify from the other without coordination.

Python SDK — async pattern
import asyncio
from invoance import InvoanceClient

async def attest_completion(prompt: str, completion: str, user_id: str):
    async with InvoanceClient() as client:  # reads INVOANCE_API_KEY
        att = await client.attestations.ingest(
            attestation_type="output",
            input=prompt,
            output=completion,
            model_provider="openai",
            model_name="gpt-4o",
            model_version="2025-01-01",
            subject={"user_id": user_id, "session_id": "sess_4f9a"},
        )
        return att.attestation_id

asyncio.run(attest_completion(
    "Summarize this contract clause",
    "The clause states...",
    "u_42",
))
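For batches, the pattern mentioned above is plain asyncio.gather. In this sketch the SDK call is replaced by a local stub coroutine (`attest_stub` is hypothetical, not part of the SDK) so the concurrency shape is visible without credentials or a network:

```python
import asyncio

async def attest_stub(prompt: str, completion: str) -> str:
    # Stand-in for client.attestations.ingest(...); the real call is
    # one I/O-bound network round trip per attestation.
    await asyncio.sleep(0)
    return f"att_stub_{abs(hash((prompt, completion))) % 10_000:04d}"

async def attest_batch(pairs: list[tuple[str, str]]) -> list[str]:
    # gather issues every ingest concurrently, so a batch of N costs
    # roughly one round trip of latency rather than N sequential ones.
    return await asyncio.gather(*(attest_stub(p, c) for p, c in pairs))

ids = asyncio.run(attest_batch([
    ("Summarize clause A", "Clause A states..."),
    ("Summarize clause B", "Clause B states..."),
    ("Summarize clause C", "Clause C states..."),
]))
print(len(ids))  # 3
```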

The questions developers ask first

On latency: the attestation call is independent of your inference path. Run it after you return the response to the user, or fire-and-forget into a queue. The end-user latency cost is zero if you do not block on it. The wall-clock cost of the call itself is typically under 100 ms.
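The fire-and-forget shape can be sketched like this, again with the SDK call stubbed (`attest_stub` and `handle_request` are illustrative names, not SDK API). The user's response returns immediately while the attestation settles in a background task; keep a reference to the task so it is not garbage-collected mid-flight:

```python
import asyncio

async def attest_stub(prompt: str, completion: str) -> str:
    await asyncio.sleep(0.01)  # stands in for the network round trip
    return "att_01HXYEXAMPLE"

async def handle_request(prompt: str) -> tuple[str, asyncio.Task]:
    completion = f"model answer to: {prompt}"
    # Schedule attestation in the background; do not await it here.
    task = asyncio.create_task(attest_stub(prompt, completion))
    return completion, task  # respond to the user immediately

async def main() -> None:
    completion, task = await handle_request("Summarize this contract clause")
    print(completion)            # user-visible response, no attestation latency
    attestation_id = await task  # completes later, off the critical path
    print(attestation_id)

asyncio.run(main())
```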

On PII: the attestation service stores hashes of the input and output by default, not the plaintext. If you want plaintext stored alongside the proof for replay (so auditors can read what the model said, not just verify the hash), opt in per attestation. If you do not, the only way the original content can be recovered is from your own systems, and the attestation only proves what hash was sealed.
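Because only hashes are sealed by default, a content dispute reduces to recomputing a digest. A minimal sketch of that comparison (the `sha256_hex` helper is illustrative; both parties must agree on the exact bytes, so normalize to UTF-8 before hashing):

```python
import hashlib

def sha256_hex(text: str) -> str:
    # Hash the exact UTF-8 bytes; any whitespace or encoding
    # difference produces a completely different digest.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

sealed_output_hash = sha256_hex("The clause states...")  # from the attestation

print(sha256_hex("The clause states...") == sealed_output_hash)              # True
print(sha256_hex("The clause states something else") == sealed_output_hash)  # False
```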

On batching: high-throughput pipelines attest hundreds of outputs per second by submitting in parallel. The ingest endpoint is idempotent on the payload-hash level, so duplicate submissions are deduplicated server-side and you can retry freely under at-least-once semantics.

On cost: the Builder tier covers small dev workloads for free, the Growth tier covers most production AI startups, and Compliance and Enterprise tiers cover regulated workloads with retention guarantees, dedicated signing keys, and audit support. See the pricing page for current limits.

On where the signature lives: each tenant has its own Ed25519 keypair. The signature is generated server-side using your tenant's private key, which is encrypted at rest with the platform master key and never leaves the backend. Your tenant's public key is embedded inline in every proof bundle returned by the verification endpoint, and is also fetchable independently by your verified domain at GET /keys/{domain}, so any third party can validate signatures with a standard Ed25519 library.

What this looks like to a compliance team

If you are a developer and a colleague from compliance, GRC, or legal is reading this over your shoulder, here is the version they care about.

Under the EU AI Act, high-risk AI systems are required to maintain logs of inputs and outputs sufficient for post-market monitoring and regulatory review. AI attestation produces those logs in a form that is independently verifiable, satisfying not only the existence requirement but the integrity requirement that traditional logging tends to fall short on.

Under ISO 42001, the new AI management system standard, organizations must demonstrate accountability for AI system behavior through auditable records. Attestation provides the technical mechanism that auditors can sample directly, instead of relying on screenshots and database exports of unknown provenance.

Under the NIST AI Risk Management Framework, the Measure and Manage functions both call for evidence-grade records of AI system behavior. Cryptographic attestation maps cleanly to those functions.

Under Federal Rule of Evidence 902(14) in the United States, electronic records authenticated by a process of digital identification, such as hash verification, are self-authenticating. Ed25519-signed attestations with publicly verifiable signatures are designed for exactly this provision.

The point is that attestation is not a separate compliance project. It is one API call your engineering team makes, and the resulting evidentiary record is consumable by every downstream framework you are subject to.

Key insight. Compliance teams: ask your engineers what percentage of high-stakes AI calls currently produce a signed, externally verifiable record. If the answer is zero, that is the gap. Attestation closes it without changing the AI pipeline.

From signup to your first attestation in five minutes

The activation path is short by design. Sign up at the dashboard, create an organization, and your tenant's signing keys are generated automatically on first use. Generate an API key from the API Keys section. Install the SDK in your project (npm install invoance, or pip install invoance). Set INVOANCE_API_KEY in your environment. Make the call.

The first attestation you make against your free tenant returns a real attestation ID, a real signature, and a real public verification URL. The same URL pattern that backs Compliance and Enterprise customers backs the Builder tier. The infrastructure is identical; only the limits scale with the plan.

If you are evaluating this for a regulated or high-volume use case, the public verification URL is the asset to share with internal auditors and external counsel before you commit. Hand them an attestation_id from a test run, let them verify it themselves, and the trust question collapses into a technical demo. That is usually enough to unblock procurement.

Anchor every AI input and output as tamper-evident proof at generation time: one API call, no model changes.

Adeola Okunola

Founder, Invoance

About the author

I'm Adeola, founder of Invoance. I've spent most of my engineering life building systems where everything is provable. Invoance is what happens when you turn that obsession into infrastructure other people can use. Most "audit trails" can be quietly edited after the fact, which makes them stories, not proof. Most people use "evidence" and "proof" interchangeably. They aren't the same thing. I write here about audit integrity, AI attestation, and the gap between documenting controls and proving outcomes.


