
TopNetworks API

The neutral intelligence layer for AI agents. Live health monitoring for 52 AI providers — plus decision tools, pricing data, and trust primitives. No signup. No API key for free endpoints. Paid endpoints use x402 micropayments: pay per call in USDC on Base.

Providers: 52 · Endpoints: 16 · Poll rate: every 10m · Free tier: 11 endpoints

All Endpoints

16 endpoints across 4 groups. Free endpoints need no auth. Paid endpoints use x402 — your agent pays per call in USDC on Base.

| Endpoint | Method | Price | Group | Use case |
| --- | --- | --- | --- | --- |
| /api/v1/health | GET | Free | Live Status | Live status for all 52 providers. No auth. Start here. |
| /api/v1/health/premium | GET | $0.001 | Live Status | Uptime %, p95 latency, incidents, trend, degradation flag. |
| /api/v1/freshness | GET | $0.0005 | Live Status | Data freshness, drift score, latency trend per provider. |
| /api/v1/latency | GET | $0.0005 | Live Status | p50/p95/p99 percentiles + TTFT estimate. 1h/6h/24h windows. |
| /api/v1/failover | GET | Free | Decision Tools | Ordered failover chain when primary fails. Scored by status + latency. |
| /api/v1/recommend | GET | Free | Decision Tools | Ranked operational alternatives by task type. |
| /api/v1/incidents | GET | Free | Decision Tools | De-duplicated outage + degradation feed. Up to 168h history. |
| /api/v1/cost-estimate | GET | Free | Decision Tools | Pre-flight token cost estimate with cache breakdown + cheaper alternatives. |
| /api/v1/pricing | GET | Free | Provider Data | Token, image, TTS, STT, embedding pricing across all providers. |
| /api/v1/models | GET | Free | Provider Data | Model registry: context window, capabilities, knowledge cutoff. |
| /api/v1/rate-limits | GET | Free | Provider Data | RPM, RPD, TPM limits per provider and tier. |
| /api/v1/benchmarks | GET | Free | Provider Data | MMLU, HumanEval, MATH, GPQA, MGSM benchmark scores. |
| /api/v1/register | POST | $0.001 | Trust & Identity | Register an agent output contract. Returns a verifiable ID. |
| /api/v1/verify/{id} | GET | $0.001 | Trust & Identity | Verify a contract exists. Optional integrity hash match. |
| /api/v1/sign | POST | Free | Trust & Identity | Sign an input payload hash. Get a tamper-evident HMAC receipt. |
| /api/v1/validate/{id} | GET | Free | Trust & Identity | Re-derive HMAC server-side to verify the receipt was not tampered with. |

Status Codes

200 OK: Success. Parse the response body.

400 Bad Request: Missing or invalid params. Check the error field for details.

402 Payment Required: x402 endpoint — see the accepts array for payment instructions.

404 Not Found: Unknown provider ID or resource. Check spelling against the health endpoint.

500 Server Error: Retry with exponential backoff. Our polling cron runs independently, so the API recovers fast.
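The retry guidance for 500s can be sketched as a simple delay schedule. This is an illustrative helper, not part of the API; the base and cap values are arbitrary.

```javascript
// Sketch: exponential backoff schedule for retrying 500s (illustrative values).
function backoffDelays(attempts, baseMs = 250, capMs = 8000) {
  // Double the delay on each attempt, capped; add jitter in production.
  return Array.from({ length: attempts }, (_, i) => Math.min(capMs, baseMs * 2 ** i))
}

console.log(backoffDelays(5)) // [ 250, 500, 1000, 2000, 4000 ]
```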

x402 Payments

Paid endpoints use the x402 protocol. Hit the endpoint without a payment header → get a 402 Payment Required with instructions. Pay in USDC on Base L2 using any x402-compatible wallet. Settlement takes ~2 seconds. No API keys, no accounts, no monthly bills.

Network: Base (L2) · Currency: USDC · Wallet: 0x8De7…2f34

x402 — paid endpoint integration
curl -i https://topnetworks.com/api/v1/health/premium
# HTTP/1.1 402 Payment Required
# { "x402Version": 1, "accepts": [{
#     "scheme": "exact", "network": "base",
#     "maxAmountRequired": "1000",
#     "payTo": "0x4e22ea2467C51EAED5dd70b1122E73D0007E3d50",
#     "asset": "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913"
# }]}
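A client can pull the payment instructions out of that 402 body before paying. The sample object below mirrors the response above; `maxAmountRequired` is assumed to be denominated in atomic USDC units (6 decimals), so 1000 units is $0.001.

```javascript
// Sketch: extract payment instructions from a parsed 402 body.
// Sample data mirrors the 402 response above; not a live call.
const body402 = {
  x402Version: 1,
  accepts: [{
    scheme: "exact",
    network: "base",
    maxAmountRequired: "1000", // 1000 atomic units at 6 decimals = $0.001
    payTo: "0x4e22ea2467C51EAED5dd70b1122E73D0007E3d50",
    asset: "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913",
  }],
}

const offer = body402.accepts[0]
const usd = Number(offer.maxAmountRequired) / 1e6 // USDC has 6 decimals
console.log(`Pay $${usd} USDC on ${offer.network} to ${offer.payTo}`)
```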

Live Status


Health · Free

The core endpoint. Live status for all 52 monitored AI providers, polled every 10 minutes. No auth required. Returns operational, degraded, outage, or unknown for each provider, plus a global summary. Start here.

GET https://topnetworks.com/api/v1/health · No auth
| Field | Type | Description |
| --- | --- | --- |
| timestamp | string | ISO 8601 timestamp the response was generated. |
| providers | object | Map of provider_id → health object. |
| .status | string | "operational" \| "degraded" \| "outage" \| "unknown" |
| .last_checked | string \| null | ISO timestamp of last successful poll. |
| .response_time_ms | number \| null | Time (ms) to fetch provider status page from our poller. Not your API latency. |
| summary.operational | number | Count of operational providers. |
| summary.degraded | number | Count of degraded providers (slow but not down). |
| summary.outage | number | Count of providers with active outage. |
| summary.unknown | number | Count of providers with unknown status (not yet polled or stale). |
GET /api/v1/health
curl -s https://topnetworks.com/api/v1/health | jq '.summary'
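Once parsed, the response is easy to scan for trouble. A sketch over sample data in the documented shape (not a live response):

```javascript
// Sketch: list providers that need attention from a parsed /health body.
// `res` is sample data in the documented shape, not a live response.
const res = {
  timestamp: "2026-03-15T12:00:00Z",
  providers: {
    openai:   { status: "operational", last_checked: "2026-03-15T11:58:00Z", response_time_ms: 120 },
    scaleway: { status: "degraded",    last_checked: "2026-03-15T11:58:00Z", response_time_ms: 3400 },
  },
  summary: { operational: 1, degraded: 1, outage: 0, unknown: 0 },
}

const unhealthy = Object.entries(res.providers)
  .filter(([, p]) => p.status !== "operational")
  .map(([id, p]) => `${id}: ${p.status}`)
console.log(unhealthy) // [ 'scaleway: degraded' ]
```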

Health Premium · x402 · $0.001

Enhanced health data per provider — historical uptime %, p95 latency, recent incident count, rate limit risk level, and the direction of change (improving / stable / degrading). Designed for agents that need confidence, not just a green dot.

- uptime_pct: 30-day uptime percentage
- avg_response_ms: average poll response time
- p95_response_ms: 95th-percentile response time
- recent_incidents: outages/degradations in last 7 days
- rate_limit_risk: "low" | "medium" | "high"
- trend: "improving" | "stable" | "degrading"
- degraded: true if operational but avg latency > 3s

GET /api/v1/health/premium · $0.001 USDC
curl -i https://topnetworks.com/api/v1/health/premium
# HTTP/1.1 402 Payment Required
# { "x402Version": 1, "accepts": [{
#     "scheme": "exact", "network": "base",
#     "maxAmountRequired": "1000",
#     "payTo": "0x4e22ea2467C51EAED5dd70b1122E73D0007E3d50",
#     "asset": "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913"
# }]}
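After payment, a typical consumer gates routing on these richer fields. A sketch over sample values; the thresholding logic is the agent's choice, not prescribed by the API:

```javascript
// Sketch: decide whether to route away from a primary provider.
// `health` holds sample values in the premium-field shape documented above.
const health = {
  uptime_pct: 99.2,
  p95_response_ms: 3800,
  recent_incidents: 2,
  rate_limit_risk: "medium",
  trend: "degrading",
  degraded: true,
}

const avoidPrimary =
  health.degraded || health.trend === "degrading" || health.rate_limit_risk === "high"
console.log(avoidPrimary ? "pick a fallback" : "stay on primary") // "pick a fallback"
```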

Freshness Oracle · x402 · $0.0005

Answers one question: can I trust this provider's status right now? Returns data age, a drift score (0 = rock solid → 1 = highly variable), and the direction of latency change. Half the price of the premium endpoint. Designed for agents making failover decisions.

| Field | Type | Description |
| --- | --- | --- |
| fresh | boolean | true if last poll was within 12 minutes. |
| age_seconds | number \| null | Seconds since last successful poll. |
| drift_score | number | 0 (rock solid) → 1 (highly variable). CV of last 10 response times. |
| trend | string | "improving" \| "stable" \| "degrading" \| "unknown" |
| latency_trend_ms | number \| null | ms delta: recent avg minus prior avg. Positive = getting slower. |
| avg_response_ms | number \| null | Average response time over last 10 checks. |
| fresh_threshold_seconds | number | Max age before fresh=false. Currently 720 (12 min). |
GET /api/v1/freshness?provider={id} · $0.0005 USDC
curl -i "https://topnetworks.com/api/v1/freshness?provider=openai"
# HTTP/1.1 402 Payment Required — pay $0.0005 USDC on Base, then:
# { "provider": "openai", "fresh": true, "age_seconds": 42,
#   "drift_score": 0.04, "trend": "stable", "avg_response_ms": 118 }
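A failover agent might combine these fields into a single trust gate before acting on a status reading. The thresholds below are illustrative only:

```javascript
// Sketch: trust gate over the freshness fields. Thresholds are illustrative.
function trustworthy(f) {
  return f.fresh && f.drift_score < 0.3 && f.trend !== "degrading"
}

console.log(trustworthy({ fresh: true, drift_score: 0.04, trend: "stable" })) // true
console.log(trustworthy({ fresh: true, drift_score: 0.6, trend: "stable" }))  // false
```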

Latency · x402 · $0.0005

Real latency percentiles from live polling data — p50, p95, p99, and average over a 1h, 6h, or 24h window. Includes a TTFT estimate for LLM providers and uptime % for the window. Use this to set smart timeouts. A provider can be “operational” but have p95 at 12 seconds — that breaks streaming.

Parameters:

- provider (required): provider ID, e.g. openai
- window (optional): 1h · 6h · 24h lookback window (default: 1h)

Response fields:

- p50_ms: median response time
- p95_ms: 95th percentile — set timeouts here
- p99_ms: 99th percentile tail latency
- ttft_estimate_ms: TTFT estimate (LLM only, ~40% of avg)
- trend: improving | stable | degrading | unknown
- uptime_pct_in_window: uptime % over the window
- total_checks: sample count used

GET /api/v1/latency?provider=openai&window=1h · $0.0005 USDC
// "Before I set my timeout, what is OpenAI's p95 right now?"
// `pay` is an x402-paying fetch wrapper (e.g. wrapFetchWithPayment from x402-fetch)
const data = await pay(
  "https://topnetworks.com/api/v1/latency?provider=openai&window=1h"
).then(r => r.json())
// { p50_ms: 312, p95_ms: 1840, p99_ms: 4200, avg_ms: 480,
//   ttft_estimate_ms: 192, trend: "stable", uptime_pct_in_window: 100 }

const timeout = Math.round(data.p95_ms * 1.2)  // p95 + 20% buffer
console.log("Use timeout:", timeout, "ms")       // 2208ms

Decision Tools


Failover Chain · Free

When your primary provider goes down, this returns an ordered list of alternatives — ranked by live status, latency, and task-type capability. Each entry includes a reason, live status, response time, and pricing so your agent makes an informed switch. Every multi-agent system hand-rolls this logic; call this instead.

- primary (required): the failing provider ID, e.g. openai
- task (optional): llm · image · embedding · speech · search · video · code · agent
- limit (optional): max alternatives (1–10, default 5)
- max_cost_per_1m (optional): max input price per 1M tokens USD (budget filter)

GET /api/v1/failover?primary=openai&task=llm
const { failover_chain } = await fetch(
  "https://topnetworks.com/api/v1/failover?primary=openai&task=llm&max_cost_per_1m=2.00"
).then(r => r.json())

// failover_chain: [
//   { provider_id: "anthropic", status: "operational", score: 88,
//     response_time_ms: 390, input_per_1m_usd: 3.00,
//     reason: "Operational — 20% more expensive than OpenAI" },
//   { provider_id: "groq", status: "operational", score: 85,
//     response_time_ms: 180, input_per_1m_usd: 0.59,
//     reason: "Operational and fast — 76% cheaper than OpenAI" },
// ]

for (const p of failover_chain) {
  if (p.status === 'operational') {
    reroute(p.provider_id)  // try in order
    break
  }
}

Recommend · Free

Ranked operational alternatives by task type. Given a task and an optional exclusion list, returns the best available providers right now — scored by status, latency, and free-tier availability. Use this for initial routing decisions; use Failover when a specific primary fails.

- task: llm · image · embedding · speech · search · video · code · agent
- avoid: comma-separated provider IDs to exclude, e.g. avoid=openai,anthropic
- limit: max results (1–20, default 5)
- free_only: true — only providers with a free tier

GET /api/v1/recommend?task=llm&avoid=openai&limit=3
const { recommendations } = await fetch(
  "https://topnetworks.com/api/v1/recommend?task=llm&avoid=openai&limit=3"
).then(r => r.json())

// [{ id: "anthropic", name: "Anthropic", status: "operational",
//    response_time_ms: 98, score: 90, reason: "Operational and fast" },
//  { id: "groq", ... }, ...]

const fallback = recommendations[0]?.id  // "anthropic"

Incidents · Free

De-duplicated outage and degradation feed across all 52 providers. Consecutive bad checks are collapsed into single incidents with duration and an ongoing flag. Poll this to know the global state of AI infrastructure at a glance.

- hours: lookback window 1–168 (default: 24)
- severity: outage · degraded · all (default)
- provider: filter to one provider, e.g. provider=openai

GET /api/v1/incidents?hours=24
const { incidents, summary } = await fetch(
  "https://topnetworks.com/api/v1/incidents?hours=24"
).then(r => r.json())

// summary: { total_incidents: 2, outages: 1, degraded: 1, ongoing: 1 }
// incidents[0]: { provider_id: "scaleway", severity: "degraded",
//   started_at: "2026-03-15T...", duration_minutes: 45, ongoing: true }
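One common use is holding dispatch while an outage is ongoing. A sketch over sample feed entries in the shape shown above:

```javascript
// Sketch: hold dispatch while any outage is ongoing (sample feed entries).
const incidents = [
  { provider_id: "scaleway", severity: "degraded", duration_minutes: 45, ongoing: true },
  { provider_id: "openai", severity: "outage", duration_minutes: 12, ongoing: false },
]

const activeOutages = incidents.filter(i => i.severity === "outage" && i.ongoing)
console.log(activeOutages.length === 0 ? "dispatch" : "hold") // "dispatch"
```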

Cost Estimate · Free

Pre-flight token cost estimation. Pass provider, input tokens, output tokens, and optional cached tokens — get a full cost breakdown in USD/USDC plus up to 3 cheaper alternatives from the same category. Agents running long autonomous tasks need to budget before they commit.

- provider (required): provider ID, e.g. openai
- input_tokens (required): number of input tokens
- output_tokens (required): number of output tokens
- cached_tokens (optional): cached input tokens (50% discount applied)
- model (optional): partial model name filter, e.g. model=haiku

GET /api/v1/cost-estimate?provider=openai&input_tokens=10000&output_tokens=2000&cached_tokens=5000
const data = await fetch(
  "https://topnetworks.com/api/v1/cost-estimate" +
  "?provider=openai&input_tokens=10000&output_tokens=2000&cached_tokens=5000"
).then(r => r.json())

// {
//   model: "gpt-4o",
//   rates: { input_per_1m_usd: 2.50, output_per_1m_usd: 10.00, cache_discount_pct: 50 },
//   breakdown: { input_cost_usd: 0.0125, cached_cost_usd: 0.00625, output_cost_usd: 0.02 },
//   estimated_total_usd: 0.038750,
//   cache_savings_usd: 0.00625,
//   cheaper_alternatives: [
//     { provider_id: "groq", input_per_1m_usd: 0.59,
//       estimated_total_usd: 0.009900, savings_pct: 74 }
//   ]
// }
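The breakdown arithmetic can be reproduced locally from the published rates, which is handy for sanity-checking or offline budgeting. A sketch that assumes the 50% cache discount applies to cached input tokens, as in the response above:

```javascript
// Sketch: reproduce the endpoint's cost arithmetic from its published rates.
// Assumes the cache discount applies to cached input tokens, per the response above.
function estimateCost(rates, inputTokens, outputTokens, cachedTokens = 0) {
  const billedInput = inputTokens - cachedTokens
  const input = (billedInput * rates.input_per_1m_usd) / 1e6
  const cached =
    (cachedTokens * rates.input_per_1m_usd * (1 - rates.cache_discount_pct / 100)) / 1e6
  const output = (outputTokens * rates.output_per_1m_usd) / 1e6
  return { input, cached, output, total: input + cached + output }
}

const est = estimateCost(
  { input_per_1m_usd: 2.50, output_per_1m_usd: 10.00, cache_discount_pct: 50 },
  10000, 2000, 5000,
)
console.log(est.total.toFixed(6)) // "0.038750", matches estimated_total_usd above
```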

Provider Data


Pricing · Free

Unified pricing across all 52 providers — input/output per 1M tokens, per-image, TTS character rate, STT per-minute, embedding rate. Filter by task to compare within a category.

- provider: filter to one provider
- task: llm · embedding · image · speech · video · search · code · agent
- free_only: true — only providers with a free tier
- compare: true — token-priced models only, sorted cheapest first

GET /api/v1/pricing?task=llm&compare=true
const { pricing } = await fetch(
  "https://topnetworks.com/api/v1/pricing?task=llm&compare=true"
).then(r => r.json())
// Sorted cheapest input first:
// [{ provider_id: "deepinfra", model: "llama-3.3-70b",
//    pricing: { input_per_1m_tokens: 0.23, output_per_1m_tokens: 0.40 } },
//  { provider_id: "groq", ... }, ...]

Models · Free

Model capability registry. Filter by task, required capability (vision, function calling, JSON mode), or minimum context window. Use this to pick the right model before a task, not after hitting a capability error.

- provider: filter to one provider
- task: llm · embedding · image · speech · video · code · multimodal
- vision: true — vision-capable models only
- function_calling: true — function calling required
- json_mode: true — JSON mode required
- min_context: minimum context window in tokens, e.g. 128000

GET /api/v1/models?task=llm&vision=true&min_context=128000
const { models } = await fetch(
  "https://topnetworks.com/api/v1/models?task=llm&vision=true&min_context=128000"
).then(r => r.json())
// [{ provider_id: "google-gemini", model_id: "gemini-2.0-flash",
//    context_window: 1048576, max_output_tokens: 8192,
//    capabilities: { vision: true, function_calling: true, json_mode: true },
//    knowledge_cutoff: "2025-01" }, ...]

Rate Limits · Free

Published rate limits per provider and tier — RPM, RPD, TPM, TPD, concurrent requests. Consult this before starting burst tasks to avoid wasted compute on 429s.

- provider: filter to one provider
- tier: free · tier1 · tier2 · standard · paid (partial match)
- free_only: true — free tier limits only
- min_rpm: minimum RPM required, e.g. min_rpm=100

GET /api/v1/rate-limits?provider=openai
const { rate_limits } = await fetch(
  "https://topnetworks.com/api/v1/rate-limits?provider=openai"
).then(r => r.json())
// [{ tier: "free", limits: { requests_per_minute: 3, tokens_per_minute: 40000 } },
//  { tier: "tier1", limits: { requests_per_minute: 500, tokens_per_minute: 200000 } }]
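To avoid 429s on burst tasks, an agent can turn a published RPM limit into a minimum spacing between requests. A sketch using the free-tier numbers from the sample response above:

```javascript
// Sketch: convert a published RPM limit into minimum spacing between requests.
const freeTier = { requests_per_minute: 3, tokens_per_minute: 40000 } // sample values
const delayMs = Math.ceil(60000 / freeTier.requests_per_minute)
console.log(delayMs) // 20000, i.e. wait 20s between requests on this tier
```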

Benchmarks · Free

Public benchmark scores across models — MMLU, HumanEval, MATH, GPQA Diamond, MGSM, HellaSwag. Sort by any metric or pass a task type to auto-select the most relevant benchmark.

- task: coding · math · reasoning · general · multilingual · commonsense
- sort_by: mmlu · humaneval · math · gpqa · mgsm · hellaswag
- provider: filter to one provider
- limit: max results (1–50, default 20)

GET /api/v1/benchmarks?task=coding&limit=5
const { benchmarks, meta } = await fetch(
  "https://topnetworks.com/api/v1/benchmarks?task=coding&limit=5"
).then(r => r.json())
// meta.sorted_by: "humaneval"
// benchmarks[0]: { provider_id: "openai", model_id: "o3",
//   scores: { humaneval: 98.0, mmlu: 96.7, math: 97.0 } }

Trust & Identity


Agent Contract Registry · x402 · $0.001

A neutral inter-agent contract standard. Register your output schema + an integrity hash and receive a verifiable ID. Any other agent can verify that contract before trusting your output — solving the orphaned task problem.

POST /api/v1/register

Register an agent output contract. Returns a unique ID and verify_url.

- agent_id (required)
- schema_version (required)
- integrity_hash (required — SHA-256 hex)
- description (optional)
- tags[] (optional)
- ttl_hours (optional)

GET /api/v1/verify/{id}

Verify a registered contract. Optional ?hash= to confirm the integrity hash matches.

- id (path — UUID)
- hash (query — optional SHA-256)
Register → verify flow · $0.002 total ($0.001 register + $0.001 verify)
import { wrapFetchWithPayment } from 'x402-fetch'
import { createHash } from 'crypto'

// Wrap fetch so 402 responses are paid automatically. `walletClient` is
// assumed to be your funded Base wallet client (e.g. from viem).
const pay = wrapFetchWithPayment(fetch, walletClient)

const output = JSON.stringify({ result: 'the answer', confidence: 0.98 })
const hash = createHash('sha256').update(output).digest('hex')

// Step 1: register the contract — $0.001
const { id, verify_url } = await pay('https://topnetworks.com/api/v1/register', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    agent_id: 'my-agent-v2',
    schema_version: '1.0.0',
    integrity_hash: hash,
    tags: ['llm', 'structured-output'],
  }),
}).then(r => r.json())

// Step 2: another agent verifies before trusting output — $0.001
const { valid, hash_match } = await pay(`${verify_url}?hash=${hash}`).then(r => r.json())
// { valid: true, hash_match: true, verified_count: 1 }

Input Provenance · Free

Sign the input you acted on, not just the output you produced. Get a tamper-evident HMAC receipt that any downstream agent can verify. Solves the trust inversion problem — a perfectly-audited decision on poisoned data is worse than an unaudited one on clean data.

POST /api/v1/sign

Sign an input payload hash. Returns a UUID receipt with HMAC-SHA256 signature.

- signer_id (required)
- payload_hash (required — SHA-256 hex)
- metadata (optional)
- ttl_hours (optional)

GET /api/v1/validate/{id}

Validate a receipt. Re-derives HMAC server-side to prove no tampering.

- id (path — UUID)
- hash (query — optional SHA-256)
Sign → validate flow
import { createHash } from 'crypto'

const inputData = await fetchInputFromUpstream()
const hash = createHash('sha256').update(JSON.stringify(inputData)).digest('hex')

// Agent A signs before acting
const receipt = await fetch('https://topnetworks.com/api/v1/sign', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ signer_id: 'pipeline-agent-v3', payload_hash: hash }),
}).then(r => r.json())
// { id: "...", signature: "...", validate_url: "..." }

// Pass receipt.id + hash downstream alongside your output
const output = { result: processInput(inputData), provenance_id: receipt.id, payload_hash: hash }

// Agent B validates before trusting the output
const { valid, signature_valid, hash_match } = await fetch(
  `https://topnetworks.com/api/v1/validate/${output.provenance_id}?hash=${output.payload_hash}`
).then(r => r.json())

if (!valid || !signature_valid || !hash_match) throw new Error('Provenance check failed')