Documentation
TopNetworks API
The neutral intelligence layer for AI agents. Live health monitoring for 52 AI providers — plus decision tools, pricing data, and trust primitives. No signup. No API key for free endpoints. Paid endpoints use x402 micropayments: pay per call in USDC on Base.
Providers
52
Endpoints
16
Poll rate
every 10m
Free tier
12 endpoints
All Endpoints
16 endpoints across 4 groups. Free endpoints need no auth. Paid endpoints use x402 — your agent pays per call in USDC on Base.
| Endpoint | Method | Price | Group | Use case |
|---|---|---|---|---|
| /api/v1/health | GET | Free | Live Status | Live status for all 52 providers. No auth. Start here. |
| /api/v1/health/premium | GET | $0.001 | Live Status | Uptime %, p95 latency, incidents, trend, degradation flag. |
| /api/v1/freshness | GET | $0.0005 | Live Status | Data freshness, drift score, latency trend per provider. |
| /api/v1/latency | GET | $0.0005 | Live Status | p50/p95/p99 percentiles + TTFT estimate. 1h/6h/24h windows. |
| /api/v1/failover | GET | Free | Decision Tools | Ordered failover chain when primary fails. Scored by status + latency. |
| /api/v1/recommend | GET | Free | Decision Tools | Ranked operational alternatives by task type. |
| /api/v1/incidents | GET | Free | Decision Tools | De-duplicated outage + degradation feed. Up to 168h history. |
| /api/v1/cost-estimate | GET | Free | Decision Tools | Pre-flight token cost estimate with cache breakdown + cheaper alternatives. |
| /api/v1/pricing | GET | Free | Provider Data | Token, image, TTS, STT, embedding pricing across all providers. |
| /api/v1/models | GET | Free | Provider Data | Model registry: context window, capabilities, knowledge cutoff. |
| /api/v1/rate-limits | GET | Free | Provider Data | RPM, RPD, TPM limits per provider and tier. |
| /api/v1/benchmarks | GET | Free | Provider Data | MMLU, HumanEval, MATH, GPQA, MGSM benchmark scores. |
| /api/v1/register | POST | $0.001 | Trust & Identity | Register an agent output contract. Returns a verifiable ID. |
| /api/v1/verify/{id} | GET | $0.001 | Trust & Identity | Verify a contract exists. Optional integrity hash match. |
| /api/v1/sign | POST | Free | Trust & Identity | Sign an input payload hash. Get a tamper-evident HMAC receipt. |
| /api/v1/validate/{id} | GET | Free | Trust & Identity | Re-derive HMAC server-side to verify the receipt was not tampered with. |
Status Codes
200 OK
Success. Parse the response body.
400 Bad Request
Missing or invalid params. Check the error field for details.
402 Payment Required
x402 endpoint — see the accepts array for payment instructions.
404 Not Found
Unknown provider ID or resource. Check spelling against the health endpoint.
500 Server Error
Retry with exponential backoff. Our polling cron runs independently of the API, so the API recovers fast.
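A minimal retry sketch for 500s (the helper names are ours, not part of the API): delays double per attempt up to a cap, and client errors are not retried.

```javascript
// Delays for N retries: base * 2^i, capped. e.g. backoffDelays(4) -> [500, 1000, 2000, 4000]
function backoffDelays(attempts, baseMs = 500, capMs = 8000) {
  return Array.from({ length: attempts }, (_, i) =>
    Math.min(baseMs * 2 ** i, capMs)
  )
}

async function fetchWithRetry(url, attempts = 4) {
  for (const delayMs of [0, ...backoffDelays(attempts - 1)]) {
    if (delayMs > 0) await new Promise(r => setTimeout(r, delayMs))
    const res = await fetch(url)
    if (res.status < 500) return res // success or client error: don't retry
  }
  throw new Error(`Still failing after ${attempts} attempts: ${url}`)
}
```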
x402 Payments
Paid endpoints use the x402 protocol. Hit the endpoint without a payment header → get a 402 Payment Required with instructions. Pay in USDC on Base L2 using any x402-compatible wallet. Settlement takes ~2 seconds. No API keys, no accounts, no monthly bills.
Network
Base (L2)
Currency
USDC
Wallet
0x8De7…2f34
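For illustration, here's how an agent might read payment terms out of a 402 body before paying. `paymentTerms` is a hypothetical helper; the field names follow the 402 response shown in the curl example, and `maxAmountRequired` is in USDC atomic units (USDC has 6 decimals).

```javascript
// Pull the first payment offer out of a 402 response body and
// convert the atomic-unit amount to dollars (USDC: 6 decimals).
function paymentTerms(body402) {
  const offer = body402.accepts[0]
  return {
    payTo: offer.payTo,
    network: offer.network,
    amountUsd: Number(offer.maxAmountRequired) / 1e6,
  }
}

const terms = paymentTerms({
  x402Version: 1,
  accepts: [{
    scheme: "exact", network: "base",
    maxAmountRequired: "1000",
    payTo: "0x4e22ea2467C51EAED5dd70b1122E73D0007E3d50",
    asset: "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913",
  }],
})
console.log(terms.amountUsd) // 0.001
```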
curl -i https://topnetworks.com/api/v1/health/premium
# HTTP/1.1 402 Payment Required
# { "x402Version": 1, "accepts": [{
# "scheme": "exact", "network": "base",
# "maxAmountRequired": "1000",
# "payTo": "0x4e22ea2467C51EAED5dd70b1122E73D0007E3d50",
# "asset": "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913"
# }]}

Live Status

Health · Free
The core endpoint. Live status for all 52 monitored AI providers, polled every 10 minutes. No auth required. Returns operational, degraded, outage, or unknown for each provider, plus a global summary. Start here.
| Field | Type | Description |
|---|---|---|
| timestamp | string | ISO 8601 timestamp the response was generated. |
| providers | object | Map of provider_id → health object. |
| .status | string | One of "operational", "degraded", "outage", "unknown". |
| .last_checked | string \| null | ISO timestamp of last successful poll. |
| .response_time_ms | number \| null | Time (ms) to fetch provider status page from our poller. Not your API latency. |
| summary.operational | number | Count of operational providers. |
| summary.degraded | number | Count of degraded providers (slow but not down). |
| summary.outage | number | Count of providers with active outage. |
| summary.unknown | number | Count of providers with unknown status (not yet polled or stale). |
curl -s https://topnetworks.com/api/v1/health | jq '.summary'

Freshness Oracle · x402 · $0.0005
Answers one question: can I trust this provider's status right now? Returns data age, a drift score (0 = rock solid → 1 = highly variable), and the direction of latency change. Half the price of the premium endpoint. Designed for agents making failover decisions.
| Field | Type | Description |
|---|---|---|
| fresh | boolean | true if last poll was within 12 minutes. |
| age_seconds | number \| null | Seconds since last successful poll. |
| drift_score | number | 0 (rock solid) → 1 (highly variable). CV of last 10 response times. |
| trend | string | One of "improving", "stable", "degrading", "unknown". |
| latency_trend_ms | number \| null | ms delta: recent avg minus prior avg. Positive = getting slower. |
| avg_response_ms | number \| null | Average response time over last 10 checks. |
| fresh_threshold_seconds | number | Max age before fresh=false. Currently 720 (12 min). |
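As a sketch, an agent might gate routing decisions on a freshness response like this — `trustStatus` and the thresholds are illustrative, not part of the API; field names match the table above.

```javascript
// Only trust a provider's reported status when the data is fresh,
// stable (low drift), and not actively degrading.
function trustStatus(f, maxDrift = 0.3) {
  if (!f.fresh) return false                 // stale poll: don't rely on it
  if (f.drift_score > maxDrift) return false // too variable to trust
  return f.trend !== "degrading"
}

console.log(trustStatus({ fresh: true, drift_score: 0.04, trend: "stable" })) // true
console.log(trustStatus({ fresh: true, drift_score: 0.8, trend: "stable" }))  // false
```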
curl -i "https://topnetworks.com/api/v1/freshness?provider=openai"
# HTTP/1.1 402 Payment Required — pay $0.0005 USDC on Base, then:
# { "provider": "openai", "fresh": true, "age_seconds": 42,
# "drift_score": 0.04, "trend": "stable", "avg_response_ms": 118 }

Latency · x402 · $0.0005
Real latency percentiles from live polling data — p50, p95, p99, and average over a 1h, 6h, or 24h window. Includes a TTFT estimate for LLM providers and uptime % for the window. Use this to set smart timeouts. A provider can be “operational” but have p95 at 12 seconds — that breaks streaming.
provider (required): provider ID. e.g. openai
window: 1h · 6h · 24h — lookback window (default: 1h)
p50_ms: Median response time
p95_ms: 95th percentile — set timeouts here
p99_ms: 99th percentile tail latency
ttft_estimate_ms: TTFT estimate (LLM only, ~40% of avg)
trend: improving | stable | degrading | unknown
uptime_pct_in_window: Uptime % over the window
total_checks: Sample count used
// "Before I set my timeout, what is OpenAI's p95 right now?"
const data = await pay(
"https://topnetworks.com/api/v1/latency?provider=openai&window=1h"
).then(r => r.json())
// { p50_ms: 312, p95_ms: 1840, p99_ms: 4200, avg_ms: 480,
// ttft_estimate_ms: 192, trend: "stable", uptime_pct_in_window: 100 }
const timeout = Math.round(data.p95_ms * 1.2) // p95 + 20% buffer
console.log("Use timeout:", timeout, "ms") // 2208ms

Decision Tools

Failover Chain · Free
When your primary provider goes down, this returns an ordered list of alternatives — ranked by live status, latency, and task-type capability. Each entry includes a reason, live status, response time, and pricing so your agent makes an informed switch. Every multi-agent system hand-rolls this logic; call this instead.
primary (required): the failing provider ID. e.g. openai
task (optional): llm · image · embedding · speech · search · video · code · agent
limit (optional): max alternatives (1–10, default 5)
max_cost_per_1m (optional): max input price per 1M tokens USD (budget filter)
const { failover_chain } = await fetch(
"https://topnetworks.com/api/v1/failover?primary=openai&task=llm&max_cost_per_1m=2.00"
).then(r => r.json())
// failover_chain: [
// { provider_id: "anthropic", status: "operational", score: 88,
// response_time_ms: 390, input_per_1m_usd: 3.00,
// reason: "Operational — 20% more expensive than OpenAI" },
// { provider_id: "groq", status: "operational", score: 85,
// response_time_ms: 180, input_per_1m_usd: 0.59,
// reason: "Operational and fast — 76% cheaper than OpenAI" },
// ]
for (const p of failover_chain) {
if (p.status === 'operational') {
reroute(p.provider_id) // try in order
break
}
}

Recommend · Free
Ranked operational alternatives by task type. Given a task and an optional exclusion list, returns the best available providers right now — scored by status, latency, and free-tier availability. Use this for initial routing decisions; use Failover when a specific primary fails.
task: llm · image · embedding · speech · search · video · code · agent
avoid: Comma-separated provider IDs to exclude. e.g. avoid=openai,anthropic
limit: Max results (1–20, default 5)
free_only: true — only providers with a free tier
const { recommendations } = await fetch(
"https://topnetworks.com/api/v1/recommend?task=llm&avoid=openai&limit=3"
).then(r => r.json())
// [{ id: "anthropic", name: "Anthropic", status: "operational",
// response_time_ms: 98, score: 90, reason: "Operational and fast" },
// { id: "groq", ... }, ...]
const fallback = recommendations[0]?.id // "anthropic"

Incidents · Free
De-duplicated outage and degradation feed across all 52 providers. Consecutive bad checks are collapsed into single incidents with duration and an ongoing flag. Poll this to know the global state of AI infrastructure at a glance.
hours: Lookback window 1–168 (default: 24)
severity: outage · degraded · all (default)
provider: Filter to one provider. e.g. provider=openai
const { incidents, summary } = await fetch(
"https://topnetworks.com/api/v1/incidents?hours=24"
).then(r => r.json())
// summary: { total_incidents: 2, outages: 1, degraded: 1, ongoing: 1 }
// incidents[0]: { provider_id: "scaleway", severity: "degraded",
// started_at: "2026-03-15T...", duration_minutes: 45, ongoing: true }

Cost Estimate · Free
Pre-flight token cost estimation. Pass provider, input tokens, output tokens, and optional cached tokens — get a full cost breakdown in USD/USDC plus up to 3 cheaper alternatives from the same category. Agents running long autonomous tasks need to budget before they commit.
provider (required): provider ID. e.g. openai
input_tokens (required): number of input tokens
output_tokens (required): number of output tokens
cached_tokens (optional): cached input tokens (50% discount applied)
model (optional): partial model name filter. e.g. model=haiku
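You can sanity-check the arithmetic locally. A sketch using the gpt-4o rates from the example response ($2.50 input / $10.00 output per 1M tokens, 50% cache discount) — `estimateCost` is an illustrative helper, not an SDK function.

```javascript
// Cached input tokens are billed at the discounted input rate;
// uncached input and output tokens at their full per-1M rates.
function estimateCost({ inputTokens, outputTokens, cachedTokens = 0 },
                      { inputPer1m, outputPer1m, cacheDiscountPct }) {
  const uncached = inputTokens - cachedTokens
  const inputCost = (uncached / 1e6) * inputPer1m
  const cachedCost = (cachedTokens / 1e6) * inputPer1m * (1 - cacheDiscountPct / 100)
  const outputCost = (outputTokens / 1e6) * outputPer1m
  return inputCost + cachedCost + outputCost
}

const total = estimateCost(
  { inputTokens: 10000, outputTokens: 2000, cachedTokens: 5000 },
  { inputPer1m: 2.50, outputPer1m: 10.00, cacheDiscountPct: 50 },
)
console.log(total) // ≈ 0.03875, matching estimated_total_usd in the example response
```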
const data = await fetch(
"https://topnetworks.com/api/v1/cost-estimate" +
"?provider=openai&input_tokens=10000&output_tokens=2000&cached_tokens=5000"
).then(r => r.json())
// {
// model: "gpt-4o",
// rates: { input_per_1m_usd: 2.50, output_per_1m_usd: 10.00, cache_discount_pct: 50 },
// breakdown: { input_cost_usd: 0.0125, cached_cost_usd: 0.00625, output_cost_usd: 0.02 },
// estimated_total_usd: 0.038750,
// cache_savings_usd: 0.00625,
// cheaper_alternatives: [
// { provider_id: "groq", input_per_1m_usd: 0.59,
// estimated_total_usd: 0.009900, savings_pct: 74 }
// ]
// }

Provider Data

Pricing · Free
Unified pricing across all 52 providers — input/output per 1M tokens, per-image, TTS character rate, STT per-minute, embedding rate. Filter by task to compare within a category.
provider: Filter to one provider
task: llm · embedding · image · speech · video · search · code · agent
free_only: true — only providers with a free tier
compare: true — token-priced models only, sorted cheapest first
const { pricing } = await fetch(
"https://topnetworks.com/api/v1/pricing?task=llm&compare=true"
).then(r => r.json())
// Sorted cheapest input first:
// [{ provider_id: "deepinfra", model: "llama-3.3-70b",
// pricing: { input_per_1m_tokens: 0.23, output_per_1m_tokens: 0.40 } },
// { provider_id: "groq", ... }, ...]

Models · Free
Model capability registry. Filter by task, required capability (vision, function calling, JSON mode), or minimum context window. Use this to pick the right model before a task, not after hitting a capability error.
provider: Filter to one provider
task: llm · embedding · image · speech · video · code · multimodal
vision: true — vision-capable models only
function_calling: true — function calling required
json_mode: true — JSON mode required
min_context: Minimum context window in tokens. e.g. 128000
const { models } = await fetch(
"https://topnetworks.com/api/v1/models?task=llm&vision=true&min_context=128000"
).then(r => r.json())
// [{ provider_id: "google-gemini", model_id: "gemini-2.0-flash",
// context_window: 1048576, max_output_tokens: 8192,
// capabilities: { vision: true, function_calling: true, json_mode: true },
// knowledge_cutoff: "2025-01" }, ...]

Rate Limits · Free
Published rate limits per provider and tier — RPM, RPD, TPM, TPD, concurrent requests. Consult this before starting burst tasks to avoid wasted compute on 429s.
provider: Filter to one provider
tier: free · tier1 · tier2 · standard · paid (partial match)
free_only: true — free tier limits only
min_rpm: Minimum RPM required. e.g. min_rpm=100
const { rate_limits } = await fetch(
"https://topnetworks.com/api/v1/rate-limits?provider=openai"
).then(r => r.json())
// [{ tier: "free", limits: { requests_per_minute: 3, tokens_per_minute: 40000 } },
// { tier: "tier1", limits: { requests_per_minute: 500, tokens_per_minute: 200000 } }]

Benchmarks · Free
Public benchmark scores across models — MMLU, HumanEval, MATH, GPQA Diamond, MGSM, HellaSwag. Sort by any metric or pass a task type to auto-select the most relevant benchmark.
task: coding · math · reasoning · general · multilingual · commonsense
sort_by: mmlu · humaneval · math · gpqa · mgsm · hellaswag
provider: Filter to one provider
limit: Max results (1–50, default 20)
const { benchmarks, meta } = await fetch(
"https://topnetworks.com/api/v1/benchmarks?task=coding&limit=5"
).then(r => r.json())
// meta.sorted_by: "humaneval"
// benchmarks[0]: { provider_id: "openai", model_id: "o3",
// scores: { humaneval: 98.0, mmlu: 96.7, math: 97.0 } }

Trust & Identity

Agent Contract Registry · x402 · $0.001
A neutral inter-agent contract standard. Register your output schema + an integrity hash and receive a verifiable ID. Any other agent can verify that contract before trusting your output — solving the orphaned task problem.
/api/v1/register: Register an agent output contract. Returns a unique ID and verify_url.
/api/v1/verify/{id}: Verify a registered contract. Optional ?hash= to confirm integrity hash matches.
import { wrapFetchWithPayment } from 'x402-fetch'
import { createHash } from 'crypto'
const output = JSON.stringify({ result: 'the answer', confidence: 0.98 })
const hash = createHash('sha256').update(output).digest('hex')
// Step 1: register the contract — $0.001
const { id, verify_url } = await pay('https://topnetworks.com/api/v1/register', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
agent_id: 'my-agent-v2',
schema_version: '1.0.0',
integrity_hash: hash,
tags: ['llm', 'structured-output'],
}),
}).then(r => r.json())
// Step 2: another agent verifies before trusting output — $0.001
const { valid, hash_match } = await pay(`${verify_url}?hash=${hash}`).then(r => r.json())
// { valid: true, hash_match: true, verified_count: 1 }

Input Provenance · Free
Sign the input you acted on, not just the output you produced. Get a tamper-evident HMAC receipt that any downstream agent can verify. Solves the trust inversion problem — a perfectly-audited decision on poisoned data is worse than an unaudited one on clean data.
/api/v1/sign: Sign an input payload hash. Returns a UUID receipt with HMAC-SHA256 signature.
/api/v1/validate/{id}: Validate a receipt. Re-derives HMAC server-side to prove no tampering.
import { createHash } from 'crypto'
const inputData = await fetchInputFromUpstream()
const hash = createHash('sha256').update(JSON.stringify(inputData)).digest('hex')
// Agent A signs before acting
const receipt = await fetch('https://topnetworks.com/api/v1/sign', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ signer_id: 'pipeline-agent-v3', payload_hash: hash }),
}).then(r => r.json())
// { id: "...", signature: "...", validate_url: "..." }
// Pass receipt.id + hash downstream alongside your output
const output = { result: processInput(inputData), provenance_id: receipt.id, payload_hash: hash }
// Agent B validates before trusting the output
const { valid, signature_valid, hash_match } = await fetch(
`https://topnetworks.com/api/v1/validate/${output.provenance_id}?hash=${output.payload_hash}`
).then(r => r.json())
if (!valid || !signature_valid || !hash_match) throw new Error('Provenance check failed')
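For intuition, here's roughly what sign and validate do server-side — a sketch only. The message format (`signerId:payloadHash`) and secret handling are assumptions; the real server's internals are not documented here.

```javascript
import { createHmac, timingSafeEqual } from 'crypto'

const SECRET = 'server-side-secret' // never leaves the server

// Derive an HMAC-SHA256 signature over the signer ID and payload hash.
function sign(signerId, payloadHash) {
  return createHmac('sha256', SECRET)
    .update(`${signerId}:${payloadHash}`)
    .digest('hex')
}

// Re-derive the HMAC from the stored fields and compare in constant time.
// Any change to signer ID, payload hash, or signature breaks the match.
function validate(signerId, payloadHash, signature) {
  const expected = sign(signerId, payloadHash)
  return timingSafeEqual(Buffer.from(expected), Buffer.from(signature))
}

console.log(validate('pipeline-agent-v3', 'abc123',
  sign('pipeline-agent-v3', 'abc123'))) // true
```

Because only the server knows the secret, no client can forge a receipt that re-derives correctly — that is what makes the receipt tamper-evident.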