
TopNetworks API

The neutral intelligence layer for AI agents. Live health monitoring for 52 AI providers — plus decision tools, pricing data, trust primitives, and an MCP server. No signup. No API key. No rate limits. Completely free. (One endpoint requires x402: /api/v1/health/premium)

Providers: 52
Endpoints: 75
Poll rate: every 10m
Free tier: 74 endpoints

All Endpoints

75 endpoints across 10 groups. 74 are completely free — no auth, no API key. One premium endpoint (/api/v1/health/premium) uses x402 micropayments.

Live Status

GET /api/v1/health · Free · Live status for all 52 providers. No auth. Start here.
GET /api/v1/health/premium · $0.001 · Uptime %, p95 latency, incidents, trend, degradation flag.
GET /api/v1/freshness · Free · Data freshness, drift score, latency trend per provider.
GET /api/v1/latency · Free · p50/p95/p99 percentiles + TTFT estimate. 1h/6h/24h windows.
GET /api/v1/changelog · Free · Status change feed. Detect provider transitions over time.

Decision Tools

GET /api/v1/pick · Free · Smart single-call router. Best provider for your task in one request.
GET /api/v1/failover · Free · Ordered failover chain when primary fails. Scored by status + latency.
GET /api/v1/recommend · Free · Ranked operational alternatives by task type.
GET /api/v1/compare · Free · Head-to-head provider comparison across all metrics.
GET /api/v1/cheapest · Free · Budget optimizer. Cheapest provider meeting your quality bar.
GET /api/v1/incidents · Free · De-duplicated outage + degradation feed. Up to 168h history.
GET /api/v1/cost-estimate · Free · Pre-flight token cost estimate with cache breakdown + cheaper alternatives.

Provider Data

GET /api/v1/pricing · Free · Token, image, TTS, STT, embedding pricing across all providers.
GET /api/v1/models · Free · Model registry: context window, capabilities, knowledge cutoff.
GET /api/v1/rate-limits · Free · RPM, RPD, TPM limits per provider and tier.
GET /api/v1/benchmarks · Free · MMLU, HumanEval, MATH, GPQA, MGSM benchmark scores.
GET /api/v1/quota-check · Free · Rate limit feasibility check. Will your planned usage get 429d?

Trust & Identity

POST /api/v1/attest · Free · Attest that an output was produced by a specific model. Returns attestation_id + verify URL.
POST /api/v1/handoff · Free · Record an agent-to-agent task handoff. Auditable chain of custody.

Webhooks

POST/GET/DELETE /api/v1/webhooks · Free · Subscribe to real-time status change notifications via webhook callbacks.

Developer Tools

GET /api/v1/openapi.json · Free · OpenAPI 3.1 spec. Machine-readable for auto-generating clients.
POST /api/v1/mcp · Free · MCP server. JSON-RPC 2.0 for agent framework integration.
GET /api/v1/sdk-support · Free · Official SDK availability per provider/language. Package names, repo URLs, OpenAI-compat flag.
GET /api/v1/changelog/api · Free · Cross-provider API changelog — new models, deprecations, pricing changes, breaking changes.

Model Intelligence

GET /api/v1/function-calling · Free · Per-provider/model function calling support — parallel calls, max tools, forced mode, tool_choice options.
GET /api/v1/deprecations · Free · Model deprecation & sunset tracker. Announced EOL dates, replacement models. Filter by status.
GET /api/v1/max-output-tokens · Free · Max output tokens per model, sorted descending. Filter by provider, min tokens, task type.
GET /api/v1/logprob-support · Free · Log probability support per provider/model. Essential for confidence scoring in agent pipelines.
GET /api/v1/embedding-quality · Free · MTEB benchmark scores for embedding models — retrieval, clustering, reranking, STS task types.
GET /api/v1/resolve-alias · Free · Resolve model alias (e.g. gpt-4, claude-sonnet-4-5) to the current pinned snapshot model ID.
GET /api/v1/json-mode · Free · JSON output mode support per provider/model — json_object, strict schema enforcement, workarounds.
GET /api/v1/model-versions · Free · Version history, release dates, pinning, and breaking changes per model alias.
GET /api/v1/websocket-support · Free · WebSocket vs SSE streaming support per provider. Endpoints, auth methods, multiplexing.
GET /api/v1/streaming-latency · Free · TTFT and throughput benchmarks per provider/model. Curated from artificialanalysis.ai.
GET /api/v1/context-window · Free · Advertised vs effective context window sizes per model. Recommended max fill % and degradation notes.
GET /api/v1/thinking-support · Free · Extended thinking/reasoning mode support — param names, pricing, visibility, budget configuration.
GET /api/v1/multimodal · Free · Input/output modality matrix — text, image, audio, video, PDF per model with limits and formats.
GET /api/v1/structured-output · Free · JSON schema enforcement — strict mode, constrained decoding, supported schema features, failure modes.

Cost & Batch

GET /api/v1/prompt-caching · Free · Prompt caching support, TTL, and savings per provider. Up to 90% cost reduction on repeated system prompts.
GET /api/v1/batch-api · Free · Batch API availability, discount %, max batch size, and typical turnaround time per provider.
GET /api/v1/fine-tuning · Free · Fine-tuning availability, supported models, methods (LoRA/full/DPO), cost, and constraints.
GET /api/v1/audio-pricing · Free · STT and TTS pricing comparison — price per minute (STT) or per 1k chars (TTS). Realtime support flag.
GET /api/v1/reranking · Free · Reranking API availability, models, price per 1k queries, max documents. Essential for RAG pipelines.
GET /api/v1/task-cost · Free · Rank ALL providers by estimated cost for a task type + token count. Cheapest first.
GET /api/v1/caching-granularity · Free · Caching mechanics — cacheable elements, min tokens, TTL, automatic vs explicit, savings %.
GET /api/v1/free-tier · Free · Detailed free tier breakdown — permanent vs trial credit, caps, included models, changelog.
POST /api/v1/token-estimate · Free · Estimate token count for text across tokenizer families. ±10% accuracy, max 50k chars.
GET /api/v1/cost-forecast · Free · Project daily/weekly/monthly costs for a usage pattern. Includes cache savings projections.

Agent Intelligence

GET /api/v1/agent-protocols · Free · Which agent protocols (MCP, A2A, ACP, ANP, OAP) each provider supports and at what compliance level.
GET /api/v1/knowledge-cutoff · Free · Training data cutoff date per model. Essential for routing time-sensitive tasks away from stale models.
GET /api/v1/tool-call-format · Free · Exact message role/content structure required for tool calls and tool results per provider.
GET /api/v1/streaming-protocols · Free · SSE vs WebSocket streaming details, supported events, end signals, and known quirks per provider.
GET /api/v1/output-reproducibility · Free · Seed parameter support and deterministic output guarantees — essential for testing and compliance.
GET /api/v1/native-tools · Free · Built-in tools each provider offers natively — web search, code interpreter, image generation, and more.
GET /api/v1/model-task-fit · Free · Curated task suitability scores per model (code, reasoning, RAG, math, etc.). TopNetworks composite score.
GET /api/v1/pii-handling · Free · Native PII detection and redaction capabilities per provider. Types, configurability, audit trail.
GET /api/v1/context-compression · Free · Native context compression support — methods, auto-compression, max ratio per provider.
GET /api/v1/security-certifications · Free · Security certifications per provider — SOC2, ISO27001, HIPAA, FedRAMP, PCI-DSS, HITRUST.
GET /api/v1/semantic-caching · Free · Semantic similarity-based caching support — caching type, hit rates, TTL, scope per provider.
GET /api/v1/mcp-support · Free · Provider MCP client/server compliance — distinct from TopNetworks' own MCP server.
GET /api/v1/model-lifecycle · Free · Model lifecycle stage — GA, beta, preview, soft/hard deprecated, sunset. Replacement model tracking.
GET /api/v1/delegation-support · Free · Secure agent delegation semantics — time-bound, auditable, revocable authority transfer between agents.
GET /api/v1/prompt-moderation · Free · Provider native input-side prompt moderation and injection detection before inference.

Trust & Compliance

GET /api/v1/compliance · Free · SOC2, HIPAA, ISO27001, GDPR certifications per provider. DPA and BAA availability.
GET /api/v1/data-retention · Free · Prompt logging policies, retention periods, opt-out options, and ZDR availability per provider.
GET /api/v1/sla · Free · Published uptime SLA guarantees — uptime %, credit terms, tier required. Separate from observed uptime.
GET /api/v1/overflow-behaviour · Free · What happens when context limit is exceeded — error, silent truncation, or sliding window. Per provider/model.
GET /api/v1/openai-compat · Free · OpenAI-compatible API matrix — base URLs, compatible endpoints, known quirks. Drop-in replacement flag.
GET /api/v1/error-codes · Free · Cross-provider error taxonomy. Maps native errors to standard categories with retry guidance.
GET /api/v1/rate-limit-recovery · Free · 429 recovery guide — retry headers, reset window semantics, backoff strategy per provider.
GET /api/v1/regions · Free · Inference regions per provider. EU availability, per-model region matrices.
GET /api/v1/uptime-history · Free · Daily uptime % timeseries (7/30/90 day) from live health polling data.
GET /api/v1/guardrails · Free · Content filtering config per provider — categories, strictness, disableable, false positive risk.
GET /api/v1/rate-limit-status · Free · Live congestion from polling data — response time trends, error rates, congestion level.
GET /api/v1/migration-guide · Free · Provider switch guide — param mapping, missing features, auth changes, gotchas.

Status Codes

200 OK: Success. Parse the response body.
400 Bad Request: Missing or invalid params. Check the error field for details.
402 Payment Required: x402 endpoint — see the accepts array for payment instructions.
404 Not Found: Unknown provider ID or resource. Check spelling against the health endpoint.
500 Server Error: Retry with exponential backoff. Our polling cron runs independently, so the API recovers fast.
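The table above maps directly onto client logic. A minimal sketch, assuming nothing beyond the documented status codes — `fetchWithRetry` and its parameters are illustrative names, not part of the API; `fetchFn` is injectable so the retry logic can be exercised without the network:

```javascript
// Sketch of the status-code guidance above. Retries only on 500,
// with exponential backoff; client errors surface immediately.
async function fetchWithRetry(url, { fetchFn = fetch, retries = 3, baseDelayMs = 250 } = {}) {
  for (let attempt = 0; ; attempt++) {
    const res = await fetchFn(url)
    if (res.status === 200) return res.json()  // 200: parse the body
    if (res.status === 402) throw new Error('402: x402 payment required — inspect the accepts array')
    if (res.status === 400 || res.status === 404) {
      throw new Error(`${res.status}: client error — fix the request, do not retry`)
    }
    if (attempt >= retries) throw new Error(`giving up after ${attempt + 1} attempts`)
    // 500: exponential backoff — 250ms, 500ms, 1s, ...
    await new Promise(r => setTimeout(r, baseDelayMs * 2 ** attempt))
  }
}
```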

x402 — Premium Endpoint

Almost everything is free. The one exception is /api/v1/health/premium — enhanced analytics including uptime %, p95 latency, and rate limit risk. It uses the x402 protocol: pay $0.001 USDC on Base L2 per call. No signup, no accounts, no monthly bills. Settlement in ~2 seconds.

Network: Base (L2)
Currency: USDC
Price: $0.001/call

GET /api/v1/health/premium · $0.001 USDC
curl -i https://topnetworks.com/api/v1/health/premium
# HTTP/1.1 402 Payment Required
# { "x402Version": 1, "accepts": [{
#     "scheme": "exact", "network": "base",
#     "maxAmountRequired": "1000",
#     "payTo": "0x4e22ea2467C51EAED5dd70b1122E73D0007E3d50",
#     "asset": "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913"
# }]}
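Before paying, an agent can inspect the 402 challenge body shown above. A minimal sketch — `readX402Challenge` is a hypothetical helper, completing settlement requires an actual x402 client, and the conversion assumes `maxAmountRequired` is in USDC base units (6 decimals, so "1000" = $0.001):

```javascript
// Hypothetical helper: read the x402 payment challenge (shape as shown above).
function readX402Challenge(body) {
  const offer = body.accepts && body.accepts[0]
  if (!offer) throw new Error('no x402 payment option offered')
  return {
    network: offer.network,  // e.g. "base"
    payTo: offer.payTo,      // recipient address
    asset: offer.asset,      // USDC contract on Base
    amountUsd: Number(offer.maxAmountRequired) / 1e6,  // assumes 6-decimal USDC units
  }
}
```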

Live Status


Health · Free

The core endpoint. Live status for all 52 monitored AI providers, polled every 10 minutes. No auth required. Returns operational, degraded, outage, or unknown for each provider, plus a global summary. Start here.

GET https://topnetworks.com/api/v1/health · No auth
timestamp (string): ISO 8601 timestamp the response was generated.
providers (object): Map of provider_id → health object.
.status (string): "operational" | "degraded" | "outage" | "unknown"
.last_checked (string | null): ISO timestamp of last successful poll.
.response_time_ms (number | null): Time (ms) to fetch provider status page from our poller. Not your API latency.
summary.operational (number): Count of operational providers.
summary.degraded (number): Count of degraded providers (slow but not down).
summary.outage (number): Count of providers with active outage.
summary.unknown (number): Count of providers with unknown status (not yet polled or stale).
GET /api/v1/health
curl -s https://topnetworks.com/api/v1/health | jq '.summary'
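To act on the response rather than just print the summary, a small sketch that pulls out every provider currently in trouble — the `unhealthyProviders` helper is illustrative, but the response shape matches the field table above:

```javascript
// List providers that are not operational, from a /api/v1/health response.
function unhealthyProviders(health) {
  return Object.entries(health.providers)
    .filter(([, p]) => p.status === 'outage' || p.status === 'degraded')
    .map(([id, p]) => ({ id, status: p.status }))
}

// const health = await fetch('https://topnetworks.com/api/v1/health').then(r => r.json())
// unhealthyProviders(health)  // e.g. [{ id: 'scaleway', status: 'degraded' }]
```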

Health Premium · x402 · $0.001

Enhanced health data per provider — historical uptime %, p95 latency, recent incident count, rate limit risk level, and the direction of change (improving / stable / degrading). Designed for agents that need confidence, not just a green dot.

uptime_pct: 30-day uptime percentage
avg_response_ms: Average poll response time
p95_response_ms: 95th-percentile response time
recent_incidents: Outages/degradations in last 7 days
rate_limit_risk: "low" | "medium" | "high"
trend: "improving" | "stable" | "degrading"
degraded: true if operational but avg latency > 3s

GET /api/v1/health/premium · $0.001 USDC
curl -i https://topnetworks.com/api/v1/health/premium
# HTTP/1.1 402 Payment Required
# { "x402Version": 1, "accepts": [{
#     "scheme": "exact", "network": "base",
#     "maxAmountRequired": "1000",
#     "payTo": "0x4e22ea2467C51EAED5dd70b1122E73D0007E3d50",
#     "asset": "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913"
# }]}
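Once paid, the premium fields support exactly the "confidence, not just a green dot" decision described above. A minimal routing gate as a sketch — the field names match the list above, but the thresholds (99% uptime, zero recent incidents) are illustrative policy, not API guidance:

```javascript
// Sketch: gate routing decisions on premium health fields.
function trustProvider(p) {
  if (p.degraded || p.trend === 'degrading') return false   // slow or getting worse
  if (p.rate_limit_risk === 'high') return false            // likely to 429
  return p.uptime_pct >= 99.0 && p.recent_incidents === 0   // illustrative thresholds
}
```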

Freshness Oracle · Free

Answers one question: can I trust this provider's status right now? Returns data age, a drift score (0 = rock solid → 1 = highly variable), and the direction of latency change. Designed for agents making failover decisions.

fresh (boolean): true if last poll was within 12 minutes.
age_seconds (number | null): Seconds since last successful poll.
drift_score (number): 0 (rock solid) → 1 (highly variable). CV of last 10 response times.
trend (string): "improving" | "stable" | "degrading" | "unknown"
latency_trend_ms (number | null): ms delta: recent avg minus prior avg. Positive = getting slower.
avg_response_ms (number | null): Average response time over last 10 checks.
fresh_threshold_seconds (number): Max age before fresh=false. Currently 720 (12 min).
GET /api/v1/freshness?provider={id} · Free
curl "https://topnetworks.com/api/v1/freshness?provider=openai"
# { "provider": "openai", "fresh": true, "age_seconds": 42,
#   "drift_score": 0.04, "trend": "stable", "avg_response_ms": 118 }

const f = await fetch("https://topnetworks.com/api/v1/freshness?provider=openai").then(r => r.json())
if (!f.fresh) console.warn(`Data is ${f.age_seconds}s old — stale`)
if (f.drift_score > 0.3) console.warn("Latency volatile — consider failover")
console.log(f.trend)  // "stable" | "improving" | "degrading"

Latency · Free

Real latency percentiles from live polling data — p50, p95, p99, and average over a 1h, 6h, or 24h window. Includes a TTFT estimate for LLM providers and uptime % for the window. Use this to set smart timeouts. A provider can be “operational” but have p95 at 12 seconds — that breaks streaming.

Parameters:

provider (required): provider ID. e.g. openai
window: 1h · 6h · 24h — lookback window (default: 1h)

Response fields:

p50_ms: Median response time
p95_ms: 95th percentile — set timeouts here
p99_ms: 99th percentile tail latency
ttft_estimate_ms: TTFT estimate (LLM only, ~40% of avg)
trend: improving | stable | degrading | unknown
uptime_pct_in_window: Uptime % over the window
total_checks: Sample count used

GET /api/v1/latency?provider=openai&window=1h · Free
// "Before I set my timeout, what is OpenAI's p95 right now?"
const data = await fetch(
  "https://topnetworks.com/api/v1/latency?provider=openai&window=1h"
).then(r => r.json())
// { p50_ms: 312, p95_ms: 1840, p99_ms: 4200, avg_ms: 480,
//   ttft_estimate_ms: 192, trend: "stable", uptime_pct_in_window: 100 }

const timeout = Math.round(data.p95_ms * 1.2)  // p95 + 20% buffer
console.log("Use timeout:", timeout, "ms")       // 2208ms

Changelog · Free

Status change feed across all providers. Detects transitions (operational→degraded, degraded→outage, etc.) over a configurable lookback window. Use as an AI infrastructure RSS feed — poll daily to know what changed.

days: Lookback window 1–30 (default: 7)
provider: Filter to one provider. e.g. provider=openai

GET /api/v1/changelog?days=7 · Free
const { entries } = await fetch(
  "https://topnetworks.com/api/v1/changelog?days=7"
).then(r => r.json())

// entries: [
//   { date: "2026-03-21", provider_id: "anthropic", type: "status_change",
//     description: "Status changed from operational to degraded",
//     details: { from: "operational", to: "degraded" } },
//   { date: "2026-03-20", provider_id: "openai", ... },
// ]

const outages = entries.filter(e => e.details?.to === 'outage')  // optional chaining: not every entry type has details
console.log(`${outages.length} outages in the last 7 days`)

Decision Tools


Pick (Smart Router) · Free

One call replaces querying multiple endpoints. Combines live status, latency, pricing, model capabilities, and rate limits to return the single best provider for your task — plus 2 runners-up. Use this as your default routing decision.

task (required): llm · image · embedding · speech · search · video · code · agent
budget (optional): max USD per 1M input tokens
min_context (optional): minimum context window in tokens
needs (optional): comma-separated: vision,function_calling,json_mode
avoid (optional): comma-separated provider IDs to exclude
free_only (optional): true to only return free tier providers

GET /api/v1/pick?task=llm&min_context=128000&needs=vision · Free
const { pick, runners_up } = await fetch(
  "https://topnetworks.com/api/v1/pick?task=llm&min_context=128000&needs=vision"
).then(r => r.json())

// pick: {
//   provider_id: "google-gemini", model: "gemini-2.0-flash",
//   status: "operational", response_time_ms: 98,
//   input_per_1m_usd: 0.10, context_window: 1048576,
//   capabilities: { vision: true, function_calling: true, json_mode: true },
//   score: 105, reason: "Operational, fastest, $0.1/1M input, 1048K context, free tier available"
// }

const provider = pick.provider_id  // "google-gemini"
const fallback = runners_up[0]?.provider_id  // next best option

Compare · Free

Head-to-head comparison of 2–10 providers. Get status, latency, pricing, benchmarks, rate limits, and models side-by-side in one call. Perfect for reports, dashboards, or agents evaluating options.

providers (required): comma-separated provider IDs (2–10)
metrics (optional): comma-separated: status,latency,price,benchmarks,rate_limits,models (default: all)

GET /api/v1/compare?providers=openai,anthropic,deepseek · Free
const { comparison } = await fetch(
  "https://topnetworks.com/api/v1/compare?providers=openai,anthropic,deepseek&metrics=status,price,benchmarks"
).then(r => r.json())

// comparison: {
//   openai: {
//     name: "OpenAI", status: "operational", response_time_ms: 197,
//     pricing: { model: "gpt-4o", input_per_1m_usd: 2.50, output_per_1m_usd: 10.00 },
//     benchmarks: { model: "o3", mmlu: 96.7, humaneval: 98, math: 97 }
//   },
//   anthropic: { ... },
//   deepseek: { ... }
// }

// Find cheapest operational provider
const cheapest = Object.entries(comparison)
  .filter(([, v]) => v.status === "operational" && v.pricing)
  .sort(([, a], [, b]) => a.pricing.input_per_1m_usd - b.pricing.input_per_1m_usd)[0]

Cheapest · Free

Budget optimizer with optional quality floor. Find the cheapest operational provider for a task, optionally filtered by minimum benchmark score. Agents running long autonomous loops need to minimize cost without sacrificing quality.

task (required): llm · image · embedding · speech · search · video · code · agent
input_tokens (optional): tokens for cost calc (default: 1M)
output_tokens (optional): output tokens (default: 500K)
min_quality (optional): format metric:score, e.g. mmlu:85
limit (optional): max results 1–20, default 5

GET /api/v1/cheapest?task=llm&min_quality=mmlu:85 · Free
const { results } = await fetch(
  "https://topnetworks.com/api/v1/cheapest?task=llm&min_quality=mmlu:85&limit=3"
).then(r => r.json())

// results: [
//   { provider_id: "deepseek", model: "deepseek-v3", status: "operational",
//     estimated_cost_usd: 0.82, input_per_1m_usd: 0.27,
//     quality_score: { mmlu: 88.5 }, free_tier: true },
//   { provider_id: "groq", model: "llama-3.3-70b", estimated_cost_usd: 0.985, ... },
// ]

const cheapestOperational = results.find(r => r.status === "operational")
console.log(`Use ${cheapestOperational.provider_id} at $${cheapestOperational.estimated_cost_usd}/1M tokens`)

Failover Chain · Free

When your primary provider goes down, this returns an ordered list of alternatives — ranked by live status, latency, and task-type capability. Each entry includes a reason, live status, response time, and pricing so your agent makes an informed switch. Every multi-agent system hand-rolls this logic; call this instead.

primary (required): the failing provider ID. e.g. openai
task (optional): llm · image · embedding · speech · search · video · code · agent
limit (optional): max alternatives (1–10, default 5)
max_cost_per_1m (optional): max input price per 1M tokens USD (budget filter)

GET /api/v1/failover?primary=openai&task=llm
const { failover_chain } = await fetch(
  "https://topnetworks.com/api/v1/failover?primary=openai&task=llm&max_cost_per_1m=2.00"
).then(r => r.json())

// failover_chain: [
//   { provider_id: "anthropic", status: "operational", score: 88,
//     response_time_ms: 390, input_per_1m_usd: 3.00,
//     reason: "Operational — 20% more expensive than OpenAI" },
//   { provider_id: "groq", status: "operational", score: 85,
//     response_time_ms: 180, input_per_1m_usd: 0.59,
//     reason: "Operational and fast — 76% cheaper than OpenAI" },
// ]

for (const p of failover_chain) {
  if (p.status === 'operational') {
    reroute(p.provider_id)  // try in order
    break
  }
}

Recommend · Free

Ranked operational alternatives by task type. Given a task and an optional exclusion list, returns the best available providers right now — scored by status, latency, and free-tier availability. Use this for initial routing decisions; use Failover when a specific primary fails.

task: llm · image · embedding · speech · search · video · code · agent
avoid: comma-separated provider IDs to exclude. e.g. avoid=openai,anthropic
limit: max results (1–20, default 5)
free_only: true — only providers with a free tier

GET /api/v1/recommend?task=llm&avoid=openai&limit=3
const { recommendations } = await fetch(
  "https://topnetworks.com/api/v1/recommend?task=llm&avoid=openai&limit=3"
).then(r => r.json())

// [{ id: "anthropic", name: "Anthropic", status: "operational",
//    response_time_ms: 98, score: 90, reason: "Operational and fast" },
//  { id: "groq", ... }, ...]

const fallback = recommendations[0]?.id  // "anthropic"

Incidents · Free

De-duplicated outage and degradation feed across all 52 providers. Consecutive bad checks are collapsed into single incidents with duration and an ongoing flag. Poll this to know the global state of AI infrastructure at a glance.

hours: Lookback window 1–168 (default: 24)
severity: outage · degraded · all (default)
provider: Filter to one provider. e.g. provider=openai

GET /api/v1/incidents?hours=24
const { incidents, summary } = await fetch(
  "https://topnetworks.com/api/v1/incidents?hours=24"
).then(r => r.json())

// summary: { total_incidents: 2, outages: 1, degraded: 1, ongoing: 1 }
// incidents[0]: { provider_id: "scaleway", severity: "degraded",
//   started_at: "2026-03-15T...", duration_minutes: 45, ongoing: true }

Cost Estimate · Free

Pre-flight token cost estimation. Pass provider, input tokens, output tokens, and optional cached tokens — get a full cost breakdown in USD/USDC plus up to 3 cheaper alternatives from the same category. Agents running long autonomous tasks need to budget before they commit.

provider (required): provider ID. e.g. openai
input_tokens (required): number of input tokens
output_tokens (required): number of output tokens
cached_tokens (optional): cached input tokens (50% discount applied)
model (optional): partial model name filter. e.g. model=haiku

GET /api/v1/cost-estimate?provider=openai&input_tokens=10000&output_tokens=2000&cached_tokens=5000
const data = await fetch(
  "https://topnetworks.com/api/v1/cost-estimate" +
  "?provider=openai&input_tokens=10000&output_tokens=2000&cached_tokens=5000"
).then(r => r.json())

// {
//   model: "gpt-4o",
//   rates: { input_per_1m_usd: 2.50, output_per_1m_usd: 10.00, cache_discount_pct: 50 },
//   breakdown: { input_cost_usd: 0.0125, cached_cost_usd: 0.00625, output_cost_usd: 0.02 },
//   estimated_total_usd: 0.038750,
//   cache_savings_usd: 0.00625,
//   cheaper_alternatives: [
//     { provider_id: "groq", input_per_1m_usd: 0.59,
//       estimated_total_usd: 0.009900, savings_pct: 74 }
//   ]
// }

Provider Data


Pricing · Free

Unified pricing across all 52 providers — input/output per 1M tokens, per-image, TTS character rate, STT per-minute, embedding rate. Filter by task to compare within a category.

provider: Filter to one provider
task: llm · embedding · image · speech · video · search · code · agent
free_only: true — only providers with a free tier
compare: true — token-priced models only, sorted cheapest first

GET /api/v1/pricing?task=llm&compare=true
const { pricing } = await fetch(
  "https://topnetworks.com/api/v1/pricing?task=llm&compare=true"
).then(r => r.json())
// Sorted cheapest input first:
// [{ provider_id: "deepinfra", model: "llama-3.3-70b",
//    pricing: { input_per_1m_tokens: 0.23, output_per_1m_tokens: 0.40 } },
//  { provider_id: "groq", ... }, ...]

Models · Free

Model capability registry. Filter by task, required capability (vision, function calling, JSON mode), or minimum context window. Use this to pick the right model before a task, not after hitting a capability error.

provider: Filter to one provider
task: llm · embedding · image · speech · video · code · multimodal
vision: true — vision-capable models only
function_calling: true — function calling required
json_mode: true — JSON mode required
min_context: Minimum context window in tokens. e.g. 128000

GET /api/v1/models?task=llm&vision=true&min_context=128000
const { models } = await fetch(
  "https://topnetworks.com/api/v1/models?task=llm&vision=true&min_context=128000"
).then(r => r.json())
// [{ provider_id: "google-gemini", model_id: "gemini-2.0-flash",
//    context_window: 1048576, max_output_tokens: 8192,
//    capabilities: { vision: true, function_calling: true, json_mode: true },
//    knowledge_cutoff: "2025-01" }, ...]

Rate Limits · Free

Published rate limits per provider and tier — RPM, RPD, TPM, TPD, concurrent requests. Consult this before starting burst tasks to avoid wasted compute on 429s.

provider

Filter to one provider

tier

free · tier1 · tier2 · standard · paid (partial match)

free_only

true — free tier limits only

min_rpm

Minimum RPM required. e.g. min_rpm=100

GET /api/v1/rate-limits?provider=openai
const { rate_limits } = await fetch(
  "https://topnetworks.com/api/v1/rate-limits?provider=openai"
).then(r => r.json())
// [{ tier: "free", limits: { requests_per_minute: 3, tokens_per_minute: 40000 } },
//  { tier: "tier1", limits: { requests_per_minute: 500, tokens_per_minute: 200000 } }]

Benchmarks · Free

Public benchmark scores across models — MMLU, HumanEval, MATH, GPQA Diamond, MGSM, HellaSwag. Sort by any metric or pass a task type to auto-select the most relevant benchmark.

task: coding · math · reasoning · general · multilingual · commonsense
sort_by: mmlu · humaneval · math · gpqa · mgsm · hellaswag
provider: Filter to one provider
limit: Max results (1–50, default 20)

GET /api/v1/benchmarks?task=coding&limit=5
const { benchmarks, meta } = await fetch(
  "https://topnetworks.com/api/v1/benchmarks?task=coding&limit=5"
).then(r => r.json())
// meta.sorted_by: "humaneval"
// benchmarks[0]: { provider_id: "openai", model_id: "o3",
//   scores: { humaneval: 98.0, mmlu: 96.7, math: 97.0 } }

Quota Check · Free

Will your planned workload get rate-limited? Check before you start. Returns a safe / tight / exceeds verdict and suggests alternatives if your plan doesn't fit. No one else does this.

provider (required): provider ID
tier (optional): free · tier1 · standard · paid (default: free)
planned_rpm (optional): planned requests per minute
planned_tpm (optional): planned tokens per minute

GET /api/v1/quota-check?provider=groq&planned_rpm=25&planned_tpm=5000 · Free
const data = await fetch(
  "https://topnetworks.com/api/v1/quota-check?provider=groq&planned_rpm=25&planned_tpm=5000"
).then(r => r.json())

// {
//   provider: "groq", tier: "free",
//   planned: { rpm: 25, tpm: 5000 },
//   limits: { rpm: 30, rpd: 14400, tpm: 6000, tpd: 500000 },
//   verdict: "tight",
//   details: {
//     rpm: { status: "tight", usage_pct: 83, headroom: 5 },
//     tpm: { status: "tight", usage_pct: 83, headroom: 1000 }
//   },
//   suggestions: [
//     { provider_id: "cerebras", tier: "free", rpm: 30, tpm: 60000 }
//   ]
// }

if (data.verdict === 'exceeds') {
  console.log("Switch to:", data.suggestions[0]?.provider_id)
}

Trust & Identity


Output Attestation · Free

Attest that a specific output was produced by a specific model at a specific time. TopNetworks stores a SHA-256 hash (never the raw output) and returns a verifiable attestation ID. Lightweight audit trail for agent-produced content.

POST /api/v1/attest · Attest agent output. Returns attestation_id + verify URL.

Parameters: provider (required) · model (required) · output (text — hashed server-side) · payload_hash (or pre-hash) · agent_id (optional)
Attest → verify flow · Free
// Agent attests its output
const { attestation_id, verify_url } = await fetch('https://topnetworks.com/api/v1/attest', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    provider: 'openai',
    model: 'gpt-4o',
    output: 'The capital of France is Paris.',
    agent_id: 'my-agent-v1',
  }),
}).then(r => r.json())
// { attestation_id: "a1b2c3...", verify_url: "https://topnetworks.com/api/v1/validate/a1b2c3..." }

Agent Task Handoff · Free

Record agent-to-agent task transfers with an auditable chain of custody. Context is hashed for privacy. Agent IDs are self-declared — low friction, high auditability.

POST /api/v1/handoff · Record a task handoff. Returns handoff_id + verify URL.

Parameters: from_agent (required) · to_agent (required) · task_id (optional) · context (text — hashed)
Handoff → verify flow · Free
// Orchestrator hands task to researcher
const { handoff_id, verify_url } = await fetch('https://topnetworks.com/api/v1/handoff', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    from_agent: 'orchestrator-v1',
    to_agent: 'researcher-v2',
    task_id: 'task-abc-123',
    context: 'Research Q1 2026 AI market trends.',
  }),
}).then(r => r.json())
// { handoff_id: "h1b2c3...", verify_url: "https://topnetworks.com/api/v1/validate/h1b2c3..." }

Webhooks


Webhook Subscriptions · Free

Get notified in real-time when provider status changes. Subscribe a callback URL to receive POST requests when providers go down, recover, or degrade. No polling required — TopNetworks pushes events to you. Supports HMAC signature verification, provider filtering, and auto-expiry.

POST /api/v1/webhooks · Subscribe — register a callback URL + events
GET /api/v1/webhooks?id=... · Status — check subscription stats + health
DELETE /api/v1/webhooks?id=... · Unsubscribe — deactivate a subscription

Available Events

provider.down: Provider entered outage state
provider.up: Provider recovered to operational
provider.degraded: Provider entered degraded state
incident.new: New incident started (outage or degradation)
incident.resolved: Provider recovered from incident

Subscribe Parameters (POST body)

callback_url (required): HTTPS URL to receive webhook POSTs
events (required): array of event types to subscribe to
providers (optional): filter to specific provider IDs. null = all
secret (optional): HMAC secret for X-TopNetworks-Signature verification
expires_in_hours (optional): auto-expire subscription after N hours
metadata (optional): arbitrary metadata stored with subscription

Subscribe → receive events · Free
// Step 1: Subscribe to OpenAI + Anthropic outages
const sub = await fetch("https://topnetworks.com/api/v1/webhooks", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    callback_url: "https://your-agent.com/hooks/topnetworks",
    events: ["provider.down", "provider.up", "incident.new"],
    providers: ["openai", "anthropic"],
    secret: "your-hmac-secret",
    expires_in_hours: 720,  // 30 days
  })
}).then(r => r.json())

console.log(sub.subscription.id)  // UUID — save this to manage later

// Your endpoint receives POSTs like:
// Headers:
//   X-TopNetworks-Event: provider.down
//   X-TopNetworks-Signature: a1b2c3...  (HMAC-SHA256 with your secret)
// Body:
// { "event": "provider.down", "timestamp": "...",
//   "payload": { "provider_id": "openai", "previous_status": "operational",
//     "current_status": "outage", "description": "..." } }

// Verify the HMAC signature (use a constant-time compare)
import { createHmac, timingSafeEqual } from 'crypto'
const expected = createHmac('sha256', 'your-hmac-secret').update(rawBody).digest('hex')
const received = headers['x-topnetworks-signature'] ?? ''
if (expected.length !== received.length ||
    !timingSafeEqual(Buffer.from(expected), Buffer.from(received))) throw new Error('Bad sig')

// Check subscription health
const status = await fetch(sub.verify_url).then(r => r.json())
console.log(status.subscription.fire_count)

// Unsubscribe
await fetch(sub.manage_url, { method: "DELETE" })

Reliability Notes

  • Webhooks fire within seconds of a status change being detected by the 10-minute poll cycle.
  • After 10 consecutive delivery failures, the subscription auto-deactivates.
  • Use GET to monitor fire_count, fail_count, and last_error.
  • Expired subscriptions are automatically deactivated.
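
Given the auto-deactivation rules above, a client can decide locally when a subscription needs re-creating. A sketch — the `active` and `expires_at` field names on the GET status response are assumptions:

```javascript
// Decide whether a subscription should be re-created, per the rules above.
// Field names (active, expires_at) are assumed from the GET status response.
function needsResubscribe(sub, now = Date.now()) {
  if (!sub.active) return true                                          // auto-deactivated
  if (sub.expires_at && Date.parse(sub.expires_at) <= now) return true  // expired
  return false
}

console.log(needsResubscribe({ active: false }))                          // true
console.log(needsResubscribe({ active: true, expires_at: "2000-01-01" })) // true
console.log(needsResubscribe({ active: true }))                           // false
```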

Developer Tools


OpenAPI Spec · Free

Machine-readable OpenAPI 3.1 specification covering all 75 endpoints. Use this to auto-generate clients, import into Postman, or register with API directories.

GET /api/v1/openapi.json · Free
// Fetch the spec
const spec = await fetch("https://topnetworks.com/api/v1/openapi.json").then(r => r.json())
console.log(spec.info.title)    // "TopNetworks API"
console.log(Object.keys(spec.paths).length)  // 60

// Import into any OpenAPI client generator
// npx openapi-generator-cli generate -i https://topnetworks.com/api/v1/openapi.json -g python -o ./client

MCP Server · Free

Model Context Protocol server for agent framework integration. Exposes 34 TopNetworks tools via JSON-RPC 2.0. Works with Claude, ChatGPT, Cursor, VS Code, and any MCP-compatible agent.

topnetworks_health

Live status for all providers

topnetworks_pick

Smart single-call routing

topnetworks_failover

Failover chain

topnetworks_compare

Head-to-head comparison

topnetworks_cheapest

Budget optimizer

topnetworks_recommend

Ranked recommendations

topnetworks_incidents

Incident feed

topnetworks_latency

Latency percentiles

topnetworks_freshness

Data freshness check

topnetworks_changelog

Status change feed

topnetworks_cost_estimate

Single-request cost

topnetworks_cost_forecast

Budget forecasting

topnetworks_task_cost

Rank providers by cost

topnetworks_pricing

Unified pricing data

topnetworks_models

Model capability registry

topnetworks_benchmarks

Benchmark scores

topnetworks_rate_limits

Published rate limits

topnetworks_quota_check

Rate limit feasibility

topnetworks_rate_limit_status

Live congestion levels

topnetworks_context_window

Effective context sizes

topnetworks_thinking_support

Reasoning mode per model

topnetworks_multimodal

Input/output modalities

topnetworks_structured_output

JSON schema enforcement

topnetworks_function_calling

Tool use support

topnetworks_json_mode

JSON output mode

topnetworks_deprecations

Model sunset tracker

topnetworks_max_output_tokens

Max output per model

topnetworks_resolve_alias

Alias → pinned model

topnetworks_prompt_caching

Caching support & savings

topnetworks_error_codes

Error taxonomy & retries

topnetworks_openai_compat

OpenAI-compatible matrix

topnetworks_overflow_behaviour

Silent truncation check

topnetworks_guardrails

Content filtering config

topnetworks_migration_guide

Provider switch guide

POST /api/v1/mcp · JSON-RPC 2.0 · Free
// Initialize
const init = await fetch("https://topnetworks.com/api/v1/mcp", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "initialize", params: {} })
}).then(r => r.json())
// { jsonrpc: "2.0", id: 1, result: { protocolVersion: "2024-11-05", capabilities: { tools: {} }, serverInfo: { name: "topnetworks" } } }

// List tools
const tools = await fetch("https://topnetworks.com/api/v1/mcp", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ jsonrpc: "2.0", id: 2, method: "tools/list", params: {} })
}).then(r => r.json())
// { result: { tools: [{ name: "topnetworks_health", ... }, { name: "topnetworks_pick", ... }, ...] } }

// Call a tool
const result = await fetch("https://topnetworks.com/api/v1/mcp", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    jsonrpc: "2.0", id: 3, method: "tools/call",
    params: { name: "topnetworks_pick", arguments: { task: "llm", needs: "vision" } }
  })
}).then(r => r.json())
// { result: { content: [{ type: "text", text: "{ pick: { provider_id: \"google-gemini\", ... } }" }] } }

SDK Support · Free

Official SDK availability per provider, by language. Returns package names, whether official or community-maintained, repo URLs, and whether the provider supports OpenAI-compatible SDKs. Prevents routing to providers with no client library support for your stack.

provider

Filter to one provider

language

Filter by language. e.g. python, node, go, rust

official_only

true — only official SDKs

openai_compat_only

true — only providers usable via OpenAI SDK

GET /api/v1/sdk-support?language=go&official_only=true · Free
const { sdk_support, meta } = await fetch(
  "https://topnetworks.com/api/v1/sdk-support?language=go&official_only=true"
).then(r => r.json())

// meta: { available_languages: ["python","node","go","java","rust","dotnet","dart","swift"] }
// sdk_support[0]: {
//   provider_id: "openai", openai_compat_sdk: false,
//   sdk_count: 1, languages: ["go"],
//   sdks: [{ language: "go", package_name: "openai-go",
//     official: true, repo_url: "https://github.com/openai/openai-go" }]
// }

const goProviders = sdk_support.map(r => r.provider_id)
console.log("Providers with official Go SDK:", goProviders)

API Changelog · Free

Cross-provider API changelog — new model releases, deprecations, pricing changes, feature additions, breaking changes, and rate limit updates. Like an RSS feed for AI infrastructure. Different from /changelog (status transitions) — this tracks API surface changes.

provider

Filter to one provider

type

new_model · deprecation · pricing_change · feature_added · breaking_change · rate_limit_change

impact

high · medium · low

days

Lookback window in days (default: 30, max: 90)

limit

Max entries to return (default: 50)

GET /api/v1/changelog/api?impact=high&days=30 · Free
const { changelog, meta } = await fetch(
  "https://topnetworks.com/api/v1/changelog/api?impact=high&days=30"
).then(r => r.json())

// meta: { by_type: { new_model: 3, deprecation: 1, breaking_change: 1 }, by_impact: { high: 5 } }
// changelog[0]: {
//   date: "2026-03-20", provider_id: "anthropic",
//   change_type: "new_model", impact: "high",
//   title: "Claude Opus 4.6 released",
//   description: "Improved agentic capabilities and adaptive thinking.",
//   affected_models: ["claude-opus-4.6"],
//   source_url: "https://docs.anthropic.com/en/docs/about-claude/models"
// }

const breaking = changelog.filter(e => e.change_type === "breaking_change")
if (breaking.length) console.warn("⚠️ Breaking changes:", breaking.map(e => e.title))

Model Intelligence


Function Calling · Free

Per-provider and per-model function calling support — whether tool calls are supported at all, whether parallel calls are allowed, how many tools can be specified per request, and what tool_choice options are available. Essential before building a tool-using agent.

provider

Filter to one provider

parallel_only

true — only providers supporting parallel tool calls

supported_only

true — only providers where tool_call_supported=true

GET /api/v1/function-calling?supported_only=true · Free
const { function_calling } = await fetch(
  "https://topnetworks.com/api/v1/function-calling?supported_only=true"
).then(r => r.json())

// function_calling[0]: {
//   provider_id: "openai", provider_name: "OpenAI",
//   tool_call_supported: true, parallel_calls_supported: true,
//   max_tools: 128, forced_mode: true,
//   tool_choice_options: ["auto", "none", "required", "function"],
//   notes: "Streaming function calls supported. Parallel since gpt-4-turbo."
// }

const parallel = function_calling.filter(r => r.parallel_calls_supported)
console.log(`${parallel.length} providers support parallel function calls`)

JSON Mode · Free

JSON output support matrix per provider and model. Covers json_object mode, strict schema enforcement, the enforcement method (native vs prompt-based), and whether a workaround is required. Know before unparseable JSON breaks your pipeline in prod.

provider

Filter to one provider

model

Partial model name match. e.g. model=gpt-4o

schema_enforcement_only

true — strict schema enforcement only

json_mode_only

true — json_object mode supported

GET /api/v1/json-mode?schema_enforcement_only=true · Free
const { json_mode, meta } = await fetch(
  "https://topnetworks.com/api/v1/json-mode?schema_enforcement_only=true"
).then(r => r.json())

// meta: { schema_enforcement_count: 8, json_object_count: 22, no_native_support_count: 3 }
// json_mode[0]: {
//   provider_id: "openai", model_id: "gpt-4o",
//   json_object_supported: true, schema_enforcement_supported: true,
//   strict_mode_supported: true, enforcement_method: "native",
//   workaround_required: false, notes: "Use response_format: {type: 'json_schema'}"
// }

// Find providers that need a workaround
const workarounds = json_mode.filter(r => r.workaround_required)
console.log(`${workarounds.length} models need a prompt workaround for JSON`)

Streaming Latency · Free

TTFT (time-to-first-token) and throughput benchmarks from published third-party data — not live polling. Median and P90 TTFT in ms, median inter-token delay, and tokens/second throughput. Use /latency for live observed p95 from our own polling; use this for cross-model streaming comparison.

provider

Filter to one provider

model

Partial model name match

sort

ttft (default) · tpt — sort ascending by median TTFT or inter-token delay

GET /api/v1/streaming-latency?sort=ttft · Free
const { streaming_latency, meta } = await fetch(
  "https://topnetworks.com/api/v1/streaming-latency?sort=ttft"
).then(r => r.json())

// meta.benchmark_source: "artificialanalysis.ai + provider docs"
// streaming_latency[0]: {
//   provider_id: "groq", provider_name: "Groq",
//   model_id: "llama-3.3-70b", model_name: "Llama 3.3 70B",
//   median_ttft_ms: 218, p90_ttft_ms: 480,
//   median_tpt_ms: 14, throughput_tokens_per_sec: 285,
//   benchmark_source: "artificialanalysis.ai", measured_at: "2026-Q1"
// }

// Fastest TTFT for streaming chat
const fastest = streaming_latency[0]
console.log(`Fastest: ${fastest.provider_id} at ${fastest.median_ttft_ms}ms TTFT`)

Deprecations · Free

Model deprecation and sunset tracker. Sourced from official provider announcements — announced EOL dates, replacement model IDs, and current lifecycle status. Filter by status to get a list of all models currently in the danger zone before they break your agent.

provider

Filter to one provider

status

active · warning · deprecated · sunset

active

No known deprecation planned

warning

Deprecation announced — still live

deprecated

EOL announced, still responds

sunset

No longer available

GET /api/v1/deprecations · Free
const { deprecations, summary } = await fetch(
  "https://topnetworks.com/api/v1/deprecations"
).then(r => r.json())

// summary: { active: 84, warning: 6, deprecated: 12, sunset: 4 }
// deprecations[0]: {
//   provider_id: "openai", model_id: "gpt-4-0314",
//   status: "deprecated", announced_at: "2023-06-13",
//   sunset_date: "2024-06-13", replacement_model: "gpt-4o",
//   notes: "Use gpt-4o. Price is 50% lower."
// }

const risky = deprecations.filter(r => r.status === 'warning' || r.status === 'deprecated')
console.log(`${risky.length} models at risk of breaking your agent`)

Max Output Tokens · Free

Maximum output tokens per response across all models, sorted descending. Distinct from context window — a model can have a 200K context but only 4096 output tokens. Always check both when planning long-form generation tasks.

provider

Filter to one provider

min_output

Minimum max_output_tokens required. e.g. min_output=32000

task

Filter by task type. e.g. task=llm

limit

Max results 1–100 (default 50)

GET /api/v1/max-output-tokens?min_output=32000&task=llm · Free
const { max_output_tokens } = await fetch(
  "https://topnetworks.com/api/v1/max-output-tokens?min_output=32000&task=llm"
).then(r => r.json())

// max_output_tokens[0]: {
//   provider_id: "anthropic", model_id: "claude-opus-4-5",
//   max_output_tokens: 64000, context_window: 200000,
//   output_to_context_ratio: 0.32,
//   notes: "64k output requires extended thinking mode."
// }

// Find models that can write a full book in one call
const longForm = max_output_tokens.filter(m => m.max_output_tokens >= 32000)
console.log(`${longForm.length} models support 32k+ output tokens`)

Logprob Support · Free

Log probability support per provider and model. Logprobs enable per-token confidence scoring, uncertainty quantification, and calibrated output filtering in agent pipelines. Use this to pick a provider that supports logprobs: true before building self-reflective agents.

provider

Filter to one provider

supported_only

true — only models where logprob_supported=true

GET /api/v1/logprob-support?supported_only=true · Free
const { logprob_support, meta } = await fetch(
  "https://topnetworks.com/api/v1/logprob-support?supported_only=true"
).then(r => r.json())

// meta: { supported_count: 18, total: 52 }
// logprob_support[0]: {
//   provider_id: "openai", provider_name: "OpenAI",
//   logprob_supported: true, max_logprobs: 20,
//   top_logprobs_param: "top_logprobs",
//   notes: "Available on all chat completion models. Pass logprobs: true."
// }

console.log(`${meta.supported_count}/${meta.total} providers support logprobs`)

Embedding Quality · Free

MTEB benchmark scores for embedding models, sourced from the Hugging Face MTEB leaderboard. Filter by task type — retrieval, clustering, reranking, or STS — to pick the best embedding model for your RAG use case. Sorted descending by score.

provider

Filter to one provider

task_type

retrieval · clustering · reranking · sts

min_score

Minimum MTEB score. e.g. min_score=60

limit

Max results 1–100 (default 20)

GET /api/v1/embedding-quality?task_type=retrieval&limit=5 · Free
const { embedding_quality } = await fetch(
  "https://topnetworks.com/api/v1/embedding-quality?task_type=retrieval&limit=5"
).then(r => r.json())

// embedding_quality[0]: {
//   provider_id: "openai", provider_name: "OpenAI",
//   model_id: "text-embedding-3-large",
//   task_type: "retrieval", mteb_score: 62.3,
//   dimensions: 3072, notes: "Supports Matryoshka dimension reduction."
// }

// Pick best retrieval embedding for a RAG pipeline
const best = embedding_quality[0]
console.log(`Use ${best.model_id} — MTEB retrieval: ${best.mteb_score}`)

Resolve Alias · Free

Resolve a model alias (e.g. gpt-4o, claude-sonnet-4-5) to the current pinned snapshot model ID. Aliases with auto_updates=true may silently point to a different model version in the future — pin to a dated ID for production stability.

alias

Alias to resolve. e.g. alias=gpt-4o — omit to list all known aliases

provider

Filter to one provider

GET /api/v1/resolve-alias?alias=claude-sonnet-4-5 · Free
// Resolve a single alias
const result = await fetch(
  "https://topnetworks.com/api/v1/resolve-alias?alias=claude-sonnet-4-5"
).then(r => r.json())
// {
//   provider_id: "anthropic", alias: "claude-sonnet-4-5",
//   current_pinned_version: "claude-sonnet-4-5-20251001",
//   auto_updates: true, release_date: "2025-10-01",
//   notes: "Alias is auto-updated by Anthropic. Pin snapshot for production."
// }

// List all auto-updating aliases (production risk)
const { aliases } = await fetch(
  "https://topnetworks.com/api/v1/resolve-alias"
).then(r => r.json())
const risky = aliases.filter(a => a.auto_updates)
console.log(`${risky.length} aliases auto-update — check your hardcoded model IDs`)
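
One way to act on this in production is to pin model IDs at request time. A sketch — the map is built from the response fields shown above, and the fallback-to-alias behavior is a design choice, not API behavior:

```javascript
// Build an alias → pinned-snapshot map once, then pin model IDs at call time.
// `aliases` has the shape returned by GET /api/v1/resolve-alias (no params).
function buildPinMap(aliases) {
  const map = {}
  for (const a of aliases) {
    if (a.auto_updates && a.current_pinned_version) map[a.alias] = a.current_pinned_version
  }
  return map
}

const pinMap = buildPinMap([
  { alias: "claude-sonnet-4-5", current_pinned_version: "claude-sonnet-4-5-20251001", auto_updates: true },
])
// At request time: use the pinned snapshot if one is known, else the alias as-is.
const model = pinMap["claude-sonnet-4-5"] ?? "claude-sonnet-4-5"
console.log(model)  // claude-sonnet-4-5-20251001
```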

Model Versions · Free

Full version history per model alias — release dates, whether the alias can be pinned to a specific snapshot, and a list of breaking changes between versions. Use this to audit whether a model update could break a prompt that was working last month.

provider

Filter to one provider

model

Partial alias match. e.g. model=gpt-4o

pinnable_only

true — only aliases that can be pinned to a snapshot

has_breaking_changes

true — only aliases with known breaking changes

GET /api/v1/model-versions?has_breaking_changes=true · Free
const { model_versions } = await fetch(
  "https://topnetworks.com/api/v1/model-versions?has_breaking_changes=true"
).then(r => r.json())

// model_versions[0]: {
//   provider_id: "openai", alias: "gpt-4",
//   alias_auto_updates: true,
//   current_pinned_version: "gpt-4-0613",
//   pinnable: true,
//   versions: [
//     { version_id: "gpt-4-0125-preview", release_date: "2024-01-25" },
//     { version_id: "gpt-4-0613",         release_date: "2023-06-13" },
//   ],
//   breaking_changes: ["System prompt handling changed in 0125-preview"],
//   lifecycle_policy_url: "https://platform.openai.com/docs/deprecations"
// }

const breaking = model_versions.filter(m => m.breaking_changes.length > 0)
console.log(`${breaking.length} aliases have known breaking changes between versions`)

WebSocket Support · Free

WebSocket vs SSE streaming support per provider. Returns endpoints, authentication methods, multiplexing support, and use case (realtime audio, chat, etc.). Not all providers that support “streaming” use WebSockets — most use SSE. Check before building a realtime voice pipeline.

provider

Filter to one provider

websocket_only

true — only providers with websocket_supported=true

category

Filter by use case. e.g. category=realtime_audio

GET /api/v1/websocket-support?websocket_only=true · Free
const { websocket_support, meta } = await fetch(
  "https://topnetworks.com/api/v1/websocket-support?websocket_only=true"
).then(r => r.json())

// meta: { websocket_count: 4, sse_only_count: 48 }
// websocket_support[0]: {
//   provider_id: "openai", websocket_supported: true,
//   streaming_method: "websocket", also_supports_sse: true,
//   use_case: "realtime_audio",
//   websocket_endpoint: "wss://api.openai.com/v1/realtime",
//   supported_models: ["gpt-4o-realtime-preview"],
//   auth_method: "bearer_token", multiplexing_supported: false,
//   notes: "Realtime API. Not available on standard chat endpoint."
// }

console.log(`Only ${meta.websocket_count} providers support WebSocket streaming`)

Context Window · Free

Advertised vs effective (tested) context window sizes per model. Models often advertise larger contexts than they can reliably use — recall degrades past a threshold. Returns recommended max fill percentage and degradation notes. One of the most-asked questions in agent routing.

provider

Filter to one provider

model

Search by model ID (partial match)

min_context

Minimum context tokens required

effective_only

true — filter by effective (not advertised) context

GET /api/v1/context-window?min_context=200000 · Free
const { context_windows, meta } = await fetch(
  "https://topnetworks.com/api/v1/context-window?min_context=200000"
).then(r => r.json())

// Sorted by effective context, descending
// context_windows[0]: {
//   provider_id: "google-gemini", model_id: "gemini-2.5-pro",
//   advertised_context: 1048576, effective_context: 500000,
//   recommended_max_fill: 0.50, usable_tokens: 250000,
//   degradation_note: "1M advertised. Strong to ~500k, noticeable recall loss beyond."
// }

// Don't trust advertised — use effective
const safe = context_windows.filter(m => m.effective_context >= 200000)
console.log(`${safe.length} models with 200k+ effective context`)
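
To turn recommended_max_fill into a concrete per-model token budget (field names taken from the example response above):

```javascript
// Derive a usable-token budget from the fields shown above.
// recommended_max_fill is applied to the effective (not advertised) context.
function usableBudget(m) {
  return Math.floor(m.effective_context * m.recommended_max_fill)
}

const gemini = { effective_context: 500000, recommended_max_fill: 0.50 }
console.log(usableBudget(gemini))  // 250000 — matches usable_tokens above
```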

Thinking Support · Free

Extended thinking / reasoning mode support per model. Every provider implements thinking differently — Anthropic uses budget_tokens, OpenAI uses reasoning_effort, DeepSeek has it always-on. Returns parameter names, pricing, visibility, and whether the budget is configurable.

provider

Filter to one provider

model

Search by model ID (partial match)

supported_only

true — only models with thinking support

visible_thinking

true — only models where thinking tokens are visible

budget_configurable

true — only models where thinking budget is controllable

GET /api/v1/thinking-support?supported_only=true · Free
const { thinking_support, meta } = await fetch(
  "https://topnetworks.com/api/v1/thinking-support?supported_only=true"
).then(r => r.json())

// meta: { thinking_capable: 11, visible_thinking: 8, budget_configurable: 7, always_on: 5 }
// thinking_support[0]: {
//   provider_id: "anthropic", model_id: "claude-opus-4",
//   thinking_supported: true, thinking_visible: true,
//   thinking_param: "thinking.type",
//   thinking_cost_per_1m: 60.00, budget_configurable: true,
//   budget_param: "thinking.budget_tokens (or adaptive)",
//   default_behavior: "opt_in",
//   notes: "Adaptive thinking recommended for Opus 4+."
// }

const cheapThinking = thinking_support
  .filter(m => m.thinking_cost_per_1m && m.thinking_cost_per_1m < 5)
  .sort((a, b) => a.thinking_cost_per_1m - b.thinking_cost_per_1m)
console.log("Cheapest reasoning:", cheapThinking[0]?.model_id)

Multimodal · Free

Input/output modality matrix per model — which models accept images, audio, video, or PDF as input, and which can generate images or audio as output. Includes max image count, size limits, and supported formats. Filter by modality to find vision-capable or audio-capable models instantly.

provider

Filter to one provider

model

Search by model ID

input_type

text · image · audio · video · pdf

output_type

text · image · audio

GET /api/v1/multimodal?input_type=image · Free
const { multimodal, meta } = await fetch(
  "https://topnetworks.com/api/v1/multimodal?input_type=image"
).then(r => r.json())

// meta: { vision_capable: 12, audio_input: 3, video_input: 3, pdf_input: 5 }
// multimodal[0]: {
//   provider_id: "google-gemini", model_id: "gemini-2.5-pro",
//   inputs: ["text","image","audio","video","pdf"],
//   outputs: ["text","image","audio"],
//   max_images: 3600, max_image_size_mb: 20,
//   supported_image_formats: ["png","jpeg","gif","webp","heic","heif"]
// }

const pdfCapable = multimodal.filter(m => m.inputs.includes("pdf"))
console.log(`${pdfCapable.length} models accept native PDF input`)

Structured Output · Free

JSON schema enforcement beyond basic JSON mode. Returns whether a model supports strict schema enforcement, constrained decoding, what schema features are supported, and what happens on schema violation. Different from /json-mode (outputs JSON) — this is about guaranteed schema adherence. Critical for agent tool pipelines.

provider

Filter to one provider

model

Search by model ID

strict_only

true — only models with strict enforcement

constrained_decoding

true — only models using constrained decoding

schema_supported

true — only models that accept a JSON schema

GET /api/v1/structured-output?strict_only=true · Free
const { structured_output, meta } = await fetch(
  "https://topnetworks.com/api/v1/structured-output?strict_only=true"
).then(r => r.json())

// meta: { strict_enforcement: 8, constrained_decoding: 5, no_structured_output: 3 }
// structured_output[0]: {
//   provider_id: "openai", model_id: "gpt-4o",
//   json_schema_supported: true, strict_enforcement: true,
//   constrained_decoding: true,
//   response_format_param: "response_format.type=json_schema",
//   supported_schema_features: ["object","array","enum","anyOf","nested","required"],
//   failure_mode: "error",
//   notes: "Best structured output support. Strict mode uses constrained decoding."
// }

// Which models will SILENTLY return invalid JSON?
const dangerous = structured_output.filter(m => m.failure_mode === "silent_invalid")
console.log("Avoid for structured tasks:", dangerous.map(m => m.model_id))

Cost & Batch


Prompt Caching · Free

Prompt caching support, TTL, and savings per provider. Up to 90% cost reduction on repeated system prompts. Not all providers implement it the same way — some require explicit markup, others cache automatically. Know the mechanics before assuming caching is active.

provider

Filter to one provider

supported_only

true — only providers where caching_supported=true

caching_supported

Whether provider supports prompt caching

cache_ttl_minutes

How long cache lives (minutes)

savings_pct

Calculated % discount on cached tokens

cacheable_elements

What can be cached: system prompt, tools, messages

requires_explicit_markup

Whether cache_control must be added to prompt

uncached_input_price_per_mtoken

Full price per 1M input tokens

cached_input_price_per_mtoken

Discounted price per 1M cached tokens

GET /api/v1/prompt-caching?supported_only=true · Free
const { prompt_caching, meta } = await fetch(
  "https://topnetworks.com/api/v1/prompt-caching?supported_only=true"
).then(r => r.json())

// meta: { supported_count: 6, total: 52 }
// prompt_caching[0]: {
//   provider_id: "anthropic", provider_name: "Anthropic",
//   caching_supported: true, cache_ttl_minutes: 5,
//   savings_pct: 90,  // cached tokens cost 10% of full price
//   cacheable_elements: ["system", "tools", "messages"],
//   requires_explicit_markup: true,
//   uncached_input_price_per_mtoken: 3.00,
//   cached_input_price_per_mtoken: 0.30
// }

// Estimate savings from caching a 4k-token system prompt
const anthropic = prompt_caching.find(r => r.provider_id === 'anthropic')
const saving = 4000 / 1e6 * anthropic.uncached_input_price_per_mtoken * (anthropic.savings_pct / 100)
console.log(`Save $${(saving * 1000).toFixed(4)} per 1000 requests with caching`)

Batch API · Free

Batch API availability, discount percentage, max batch size, and typical turnaround time per provider. Batch APIs trade latency for a ~50% cost reduction — essential for offline evaluations, dataset generation, and any non-urgent workload at scale.

provider

Filter to one provider

available_only

true — only providers where batch_api_available=true

GET /api/v1/batch-api?available_only=true · Free
const { batch_api, meta } = await fetch(
  "https://topnetworks.com/api/v1/batch-api?available_only=true"
).then(r => r.json())

// meta: { available_count: 5, total: 52 }
// batch_api[0]: {
//   provider_id: "openai", provider_name: "OpenAI",
//   batch_api_available: true,
//   discount_pct: 50,
//   max_batch_size: 50000,
//   typical_turnaround_hours: 24,
//   supported_endpoints: ["/v1/chat/completions", "/v1/embeddings"],
//   notes: "Results expire after 29 days. Use for evals, not real-time."
// }

// Pick cheapest batch provider
const best = batch_api.sort((a, b) => (b.discount_pct ?? 0) - (a.discount_pct ?? 0))[0]
console.log(`${best.provider_id} gives ${best.discount_pct}% batch discount`)

Fine Tuning · Free

Fine-tuning availability, supported models, training methods (LoRA, full, supervised, DPO), per-token training cost, and constraints per provider. Filter by method to find providers supporting the training approach you need.

provider

Filter to one provider

available_only

true — only providers where available=true

method

lora · full · supervised · dpo

GET /api/v1/fine-tuning?available_only=true · Free
const { fine_tuning, meta } = await fetch(
  "https://topnetworks.com/api/v1/fine-tuning?available_only=true"
).then(r => r.json())

// meta: { available_count: 7, total: 52 }
// fine_tuning[0]: {
//   provider_id: "openai", provider_name: "OpenAI",
//   available: true,
//   methods: ["supervised", "dpo", "reinforcement"],
//   supported_models: ["gpt-4o-mini", "gpt-4o", "gpt-3.5-turbo"],
//   training_cost_per_1m_tokens: 25.00,
//   inference_cost_multiplier: 1.0,
//   max_training_examples: null,
//   notes: "Hyperparameter tuning available. Supports function calling fine-tuning."
// }

// Find cheapest fine-tuning option
const cheapest = fine_tuning.sort((a, b) =>
  (a.training_cost_per_1m_tokens ?? 999) - (b.training_cost_per_1m_tokens ?? 999)
)[0]
console.log(`Cheapest: ${cheapest.provider_id} at $${cheapest.training_cost_per_1m_tokens}/1M tokens`)

Audio Pricing · Free

STT and TTS pricing comparison across all providers. STT prices in $/audio-minute, TTS prices in $/1k characters. Includes realtime streaming support flag — critical for voice agent pipelines. Sorted STT-first, then cheapest within type.

provider

Filter to one provider

type

stt · tts · both (default: both)

realtime_only

true — only providers supporting realtime streaming audio

free_only

true — only providers with a free tier for audio

GET /api/v1/audio-pricing?type=stt · Free
const { audio_pricing, meta } = await fetch(
  "https://topnetworks.com/api/v1/audio-pricing?type=stt"
).then(r => r.json())

// meta: { stt_count: 8, tts_count: 12, realtime_count: 3 }
// audio_pricing[0] (STT, cheapest):
// { provider_id: "groq", type: "stt", model: "whisper-large-v3-turbo",
//   price_per_minute: 0.0002, realtime_supported: false,
//   free_tier: "unlimited (rate limited)", notes: "Whisper-based, ~15x faster than OpenAI" }
// audio_pricing[N] (TTS):
// { provider_id: "openai", type: "tts", model: "tts-1",
//   price_per_1k_chars: 0.015, realtime_supported: false,
//   free_tier: null }

// Find cheapest STT for a transcription pipeline
const stt = audio_pricing.filter(r => r.type === 'stt')
console.log(`Cheapest STT: ${stt[0].provider_id} at $${stt[0].price_per_minute}/min`)

Reranking · Free

Reranking API availability, supported models, price per 1k queries, max documents per call, and multilingual support. Reranking is a second-pass scoring step that dramatically improves RAG retrieval accuracy — use this to find the cheapest provider before your retrieval pipeline goes to prod.

provider

Filter to one provider

available_only

true — only providers where reranking_available=true

multilingual_only

true — only providers with multilingual reranking

GET /api/v1/reranking?available_only=true · Free
const { reranking, meta } = await fetch(
  "https://topnetworks.com/api/v1/reranking?available_only=true"
).then(r => r.json())

// meta: { available_count: 5, total: 52 }
// reranking[0]: {
//   provider_id: "cohere", provider_name: "Cohere",
//   reranking_available: true,
//   models: ["rerank-english-v3.0", "rerank-multilingual-v3.0"],
//   price_per_1k_queries: 2.00, max_documents: 1000,
//   supports_multilingual: true,
//   notes: "Industry standard. Also available via Azure / AWS."
// }

// Compare reranking costs for a RAG pipeline doing 100k queries/day
reranking.sort((a, b) => (a.price_per_1k_queries ?? 999) - (b.price_per_1k_queries ?? 999))
const cheapest = reranking[0]
console.log(`${cheapest.provider_id}: $${(cheapest.price_per_1k_queries / 1000 * 100000).toFixed(2)}/day`)

Task Cost · Free

Rank all providers by estimated cost for a given task type and token count — cheapest first. Unlike Cost Estimate (single provider), this sweeps the entire catalog so you always know if you're leaving money on the table. Supports chat, embedding, image, audio, and search tasks.

task_type

required — chat · embedding · image · audio · search

input_tokens

Input tokens (default 10000)

output_tokens

Output tokens for chat (default 2000)

cached_tokens

Cached input tokens at 50% discount (default 0)

limit

Max results 1–50 (default 10)

free_only

true — only providers with a free tier

GET /api/v1/task-cost?task_type=chat&input_tokens=50000&output_tokens=5000&limit=5 · Free
const { providers_ranked, meta } = await fetch(
  "https://topnetworks.com/api/v1/task-cost" +
  "?task_type=chat&input_tokens=50000&output_tokens=5000&limit=5"
).then(r => r.json())

// meta.cheapest_provider: "deepseek"
// providers_ranked[0]: {
//   rank: 1, provider_id: "deepseek", model: "deepseek-v3",
//   estimated_cost_usd: 0.017750,
//   input_cost_usd: 0.013500, output_cost_usd: 0.004250,
//   input_per_1m_usd: 0.27, output_per_1m_usd: 0.85,
//   pct_more_expensive_than_cheapest: 0,
//   free_tier: true
// }
// providers_ranked[4]: { pct_more_expensive_than_cheapest: 1240, ... }

console.log(`Cheapest: ${providers_ranked[0].provider_id} — $${providers_ranked[0].estimated_cost_usd.toFixed(6)}`)
console.log(`Most expensive is ${providers_ranked[4].pct_more_expensive_than_cheapest}% pricier`)

Caching GranularityFree

Detailed caching mechanics per provider — what can be cached (system prompt, tools, messages, images), whether elements cache independently, minimum token block sizes, TTL options, and whether explicit markup is required. Understand the exact caching rules before implementing to avoid silent cache misses.

provider

Filter to one provider

supports_caching

true — only providers where caching_supported=true

GET /api/v1/caching-granularity?supports_caching=true · Free
const { caching_granularity } = await fetch(
  "https://topnetworks.com/api/v1/caching-granularity?supports_caching=true"
).then(r => r.json())

// caching_granularity[0]: {
//   provider_id: "anthropic",
//   caching_supported: true, caching_type: "explicit",
//   cacheable_elements: ["system", "tools", "user_messages", "images"],
//   elements_cache_independently: true,
//   min_tokens_per_block: 1024,
//   max_cache_breakpoints: 4,
//   ttl_options_seconds: [300, 3600],
//   default_ttl_seconds: 300,
//   cached_price_pct_of_input: 10,  // 10% = 90% savings
//   requires_explicit_markup: true,
//   markup_method: "cache_control: { type: 'ephemeral' }",
//   notes: "Min 1024 tokens per cacheable block. Images count towards token limit."
// }
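The rules above can drive a pre-flight check before you add cache breakpoints. A minimal sketch, assuming a record shaped like the response comment (the sample values are illustrative, not live API data):

```javascript
// Decide whether a prompt block is worth caching under a provider's rules.
// `record` mirrors the caching_granularity shape documented above.
function cacheDecision(record, estimatedTokens) {
  if (!record.caching_supported) return { cache: false, reason: "caching not supported" };
  if (estimatedTokens < record.min_tokens_per_block) {
    return { cache: false, reason: `below ${record.min_tokens_per_block}-token minimum` };
  }
  return {
    cache: true,
    // cached_price_pct_of_input: 10 means cached reads bill at 10% of the
    // normal input rate, i.e. 90% savings on cache hits
    savings_pct: 100 - record.cached_price_pct_of_input,
    markup: record.requires_explicit_markup ? record.markup_method : null,
  };
}

// Illustrative record matching the documented shape
const anthropic = {
  caching_supported: true, min_tokens_per_block: 1024,
  cached_price_pct_of_input: 10, requires_explicit_markup: true,
  markup_method: "cache_control: { type: 'ephemeral' }",
};

console.log(cacheDecision(anthropic, 500).reason);        // block too small to cache
console.log(cacheDecision(anthropic, 2000).savings_pct);  // → 90
```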

Free TierFree

Detailed free tier breakdown per provider — permanent free tier vs trial credits, whether a credit card is required, monthly token and request caps, and included models. Use this before sending a new project to a paid API. Many providers have generous free tiers that cover prototyping entirely.

provider

Filter to one provider

permanent_only

true — only permanently free tiers (no expiry)

has_free_tier

true · false — filter by free tier existence

GET /api/v1/free-tier?permanent_only=true · Free
const { free_tiers, meta } = await fetch(
  "https://topnetworks.com/api/v1/free-tier?permanent_only=true"
).then(r => r.json())

// meta: { permanent_free_count: 14, trial_credit_count: 18, no_free_access_count: 20 }
// free_tiers[0]: {
//   provider_id: "google-gemini", provider_name: "Google Gemini",
//   has_free_tier: true, tier_type: "permanent",
//   requires_credit_card: false,
//   included_models: ["gemini-2.0-flash", "gemini-1.5-flash"],
//   monthly_token_cap: null,  // rate-limited but no hard cap
//   daily_request_cap: 1500, monthly_request_cap: null,
//   trial_credit_usd: null, trial_credit_recurring: false,
//   rate_limits_apply: true,
//   rate_limits_url: "/api/v1/rate-limits?provider=google-gemini&tier=free",
//   notes: "Flash free tier is generous. Pro models require billing."
// }

const noCard = free_tiers.filter(r => !r.requires_credit_card)
console.log(`${noCard.length} providers offer free access with no credit card`)

Token EstimateFree

Estimate token count for a text string across different provider tokenizer families. Returns ±10% approximate counts. Max 50,000 characters. POST endpoint — send text in the body. Optionally target a specific provider for a single estimate instead of all tokenizers.

textrequired

required — text to estimate (max 50k chars)

provider

optional — specific provider for targeted estimate

model

optional — model ID for tokenizer selection

POST /api/v1/token-estimate · Free
const data = await fetch("https://topnetworks.com/api/v1/token-estimate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ text: "Your prompt text here...", provider: "openai" })
}).then(r => r.json())

// Single provider estimate:
// { estimate: { provider_id: "openai", tokenizer: "o200k_base (GPT-4o/o3)",
//   estimated_tokens: 7, confidence: "approximate",
//   token_range: { low: 6, high: 8 } },
//   input: { char_count: 27, word_count: 5 } }

// Without provider: returns all tokenizer estimates
// { estimates: [{ tokenizer: "o200k", estimated_tokens: 7, providers: ["openai"] }, ...],
//   summary: { min_tokens: 6, max_tokens: 8, avg_tokens: 7 } }

Cost ForecastFree

Project daily, weekly, monthly, and yearly costs across providers for a given usage pattern. Different from /cost-estimate (single request cost) — this forecasts ongoing budget impact. Includes prompt caching savings projections when cache_hit_rate is specified.

requests_per_day

Requests per day (default: 100)

avg_input_tokens

Average input tokens per request (default: 2000)

avg_output_tokens

Average output tokens per request (default: 500)

cache_hit_rate

0–1 — cache hit ratio for savings projection (default: 0)

task

llm · embedding · image · audio · all (default: llm)

limit

Max providers to return (default: 10)

GET /api/v1/cost-forecast?requests_per_day=500&avg_input_tokens=4000&avg_output_tokens=1000&cache_hit_rate=0.3 · Free
const { forecasts, usage_profile } = await fetch(
  "https://topnetworks.com/api/v1/cost-forecast?requests_per_day=500&avg_input_tokens=4000&avg_output_tokens=1000&cache_hit_rate=0.3"
).then(r => r.json())

// Ranked cheapest first
// forecasts[0]: {
//   provider_id: "deepseek", model: "DeepSeek V3",
//   daily_cost: 0.27, weekly_cost: 1.89,
//   monthly_cost: 8.10, yearly_cost: 98.55,
//   cost_breakdown: { daily_input_cost: 0.16, daily_output_cost: 0.11, cache_savings_per_day: 0.04 },
//   pricing: { input_per_1m: 0.27, output_per_1m: 1.10, caching_available: true }
// }

console.log(`Cheapest: $${forecasts[0].monthly_cost}/month on ${forecasts[0].provider_id}`)

Trust & Compliance


ComplianceFree

SOC 2, HIPAA, ISO 27001, GDPR, PCI-DSS, and FedRAMP certifications per provider. DPA and BAA availability. Use this to short-list providers before an enterprise security review — filter by the certification that matters for your use case.

provider

Filter to one provider

certification

soc2 · hipaa · iso27001 · gdpr · pci-dss · fedramp

hipaa

true — only providers with HIPAA BAA available

gdpr

true — only GDPR-compliant providers

GET /api/v1/compliance?hipaa=true · Free
const { compliance, meta } = await fetch(
  "https://topnetworks.com/api/v1/compliance?hipaa=true"
).then(r => r.json())

// meta: { gdpr_compliant_count: 28, hipaa_baa_count: 9, soc2_ii_count: 19 }
// compliance[0]: {
//   provider_id: "openai", provider_name: "OpenAI",
//   certifications: ["soc2", "hipaa", "gdpr"],
//   soc2_type: "II",
//   hipaa_baa_available: true,
//   gdpr_compliant: true,
//   dpa_available: true,
//   iso27001: false,
//   pci_dss: false,
//   fedramp: false,
//   notes: "BAA available under Enterprise plan. Zero data retention configurable."
// }

// Filter for EU healthcare use case
const euHealth = compliance.filter(r => r.hipaa_baa_available && r.gdpr_compliant)
console.log(`${euHealth.length} providers suitable for EU healthcare data`)

Data RetentionFree

Prompt logging policies, retention periods, opt-out options, and Zero Data Retention (ZDR) availability per provider. Some providers train on API data by default. Know this before routing sensitive prompts — especially in legal, healthcare, or finance contexts.

provider

Filter to one provider

zdr_available

true — only providers offering Zero Data Retention

no_training

true — only providers that do not train on API data

GET /api/v1/data-retention?no_training=true · Free
const { data_retention, meta } = await fetch(
  "https://topnetworks.com/api/v1/data-retention?no_training=true"
).then(r => r.json())

// meta: { zdr_available_count: 8, trains_on_data_count: 3 }
// data_retention[0]: {
//   provider_id: "anthropic", provider_name: "Anthropic",
//   prompt_logging: "30_days",
//   retention_period_days: 30,
//   opt_out_available: true,
//   opt_out_method: "account_setting",
//   zero_data_retention_available: true,
//   zdr_plan_required: "enterprise",
//   trained_on_api_data: false,
//   notes: "Prompts not used for training by default. ZDR on Enterprise plan."
// }

const noLog = data_retention.filter(r => !r.trained_on_api_data && r.opt_out_available)
console.log(`${noLog.length} providers don't train on data + offer opt-out`)

SLAFree

Published uptime SLA guarantees per provider — contractual uptime %, credit terms, and which tier the SLA applies to. Note: observed uptime from /health/premium often differs from contractual guarantees. This endpoint covers what's promised; use live health data for what's actually happening.

provider

Filter to one provider

sla_available_only

true — only providers with a published SLA

min_uptime

Minimum guaranteed uptime %. e.g. min_uptime=99.9

GET /api/v1/sla?min_uptime=99.9 · Free
const { sla, meta } = await fetch(
  "https://topnetworks.com/api/v1/sla?sla_available_only=true&min_uptime=99.9"
).then(r => r.json())

// meta: { sla_available_count: 12 }
// sla[0]: {
//   provider_id: "openai", provider_name: "OpenAI",
//   sla_available: true,
//   guaranteed_uptime_pct: 99.9,
//   credit_pct_per_hour_outage: 10,
//   max_credit_pct: 30,
//   tier_required: "enterprise",
//   sla_url: "https://openai.com/enterprise",
//   notes: "SLA applies to API. Consumer products have no SLA."
// }

// Cross-check SLA guarantee vs observed uptime
const { sla: slaRecord } = await fetch("/api/v1/sla?provider=openai").then(r => r.json())
const { overall_uptime_pct } = await fetch("/api/v1/uptime-history?provider=openai&period=30d").then(r => r.json())
console.log(`Promised: ${slaRecord[0].guaranteed_uptime_pct}% | Observed: ${overall_uptime_pct}%`)

Overflow BehaviourFree

What happens when the context limit is exceeded — the provider throws an error, silently truncates, or applies a sliding window. Silent truncation without warning is a common source of hard-to-debug agent failures where the model appears to forget earlier context mid-task.

provider

Filter to one provider

behaviour

error · truncate · sliding_window

error

Returns 400/413 — you know immediately

truncate

Cuts oldest content — may or may not warn

sliding_window

Automatically drops oldest messages silently

GET /api/v1/overflow-behaviour · Free
const { overflow_behaviour, summary } = await fetch(
  "https://topnetworks.com/api/v1/overflow-behaviour"
).then(r => r.json())

// summary: {
//   error: 31, truncate: 14, sliding_window: 7,
//   silent_truncators: ["mistral", "together-ai"]  // truncate with no warning_provided
// }
// overflow_behaviour[0]: {
//   provider_id: "openai", overflow_behaviour: "error",
//   warning_provided: true,
//   error_code: "context_length_exceeded",
//   notes: "Returns 400 with max_tokens hint. Use tiktoken to count before sending."
// }

// Find dangerous silent truncators for a long-context agent
const silent = overflow_behaviour.filter(r =>
  r.overflow_behaviour === 'truncate' && !r.warning_provided
)
console.warn(`AVOID for long context: ${silent.map(r => r.provider_id).join(', ')}`)

OpenAI CompatFree

OpenAI-compatible API matrix — which providers expose an OpenAI-compatible base URL, what endpoints they support, known quirks, and whether they are a true drop-in replacement. Use this before migrating an OpenAI integration to an alternative provider.

provider

Filter to one provider

compatible_only

true — only OpenAI-compatible providers

drop_in_only

true — only true drop-in replacements (minimal code change)

GET /api/v1/openai-compat?drop_in_only=true · Free
const { openai_compat, meta } = await fetch(
  "https://topnetworks.com/api/v1/openai-compat?drop_in_only=true"
).then(r => r.json())

// meta: { compatible_count: 24, drop_in_count: 11 }
// openai_compat[0]: {
//   provider_id: "groq", provider_name: "Groq",
//   openai_compatible: true,
//   drop_in_replacement: true,
//   base_url: "https://api.groq.com/openai/v1",
//   compatible_endpoints: ["/chat/completions", "/models"],
//   known_quirks: ["No streaming function calls", "tool_choice 'required' not supported"],
//   notes: "Change base_url + model name. Most OpenAI SDK code works unchanged."
// }

// Migrate from OpenAI in one line
const groq = openai_compat.find(r => r.provider_id === 'groq')
// openai.baseURL = groq.base_url  → done

Error CodesFree

Cross-provider error taxonomy. Maps provider-native error types and codes to standard categories (rate_limit, auth, context_length, content_filter, etc.) with retry guidance — whether to retry, recommended backoff in ms, and max retries. Write one error handler, handle all providers.

provider

Filter to one provider

category

Standard category. e.g. rate_limit · auth · context_length · content_filter

retryable_only

true — only retryable errors

http_status

Filter by HTTP status. e.g. http_status=429

GET /api/v1/error-codes?category=rate_limit · Free
const { error_codes, meta } = await fetch(
  "https://topnetworks.com/api/v1/error-codes?category=rate_limit&retryable_only=true"
).then(r => r.json())

// meta: { categories: ["rate_limit","auth","context_length","content_filter","server_error","timeout"] }
// error_codes[0]: {
//   provider_id: "openai", http_status: 429,
//   provider_error_type: "rate_limit_exceeded",
//   provider_error_code: "rate_limit_exceeded",
//   standard_category: "rate_limit",
//   retryable: true,
//   recommended_backoff_ms: 1000,
//   max_retries: 5,
//   resolution: "Exponential backoff. Check x-ratelimit-reset-requests header.",
//   notes: "Different limits for tokens vs requests."
// }

// Generic retry handler for any provider
function shouldRetry(error, provider) {
  const codes = error_codes.filter(c =>
    c.provider_id === provider && c.http_status === error.status
  )
  return codes.some(c => c.retryable)
}

Rate Limit RecoveryFree

429 recovery guide per provider — which response headers tell you when to retry, whether the window is sliding or fixed, the recommended backoff strategy, base and max delay in ms, and whether to add jitter. Every provider's 429 behaviour is slightly different; implement this once, correctly.

provider

Filter to one provider (returns all providers if omitted)

GET /api/v1/rate-limit-recovery?provider=openai · Free
const { rate_limit_recovery } = await fetch(
  "https://topnetworks.com/api/v1/rate-limit-recovery?provider=openai"
).then(r => r.json())

// rate_limit_recovery[0]: {
//   provider_id: "openai",
//   retry_after_header: "retry-after",
//   retry_after_format: "seconds",
//   reset_headers: ["x-ratelimit-reset-requests", "x-ratelimit-reset-tokens"],
//   reset_header_format: "iso8601",
//   rpm_window_type: "sliding",
//   tpm_window_type: "sliding",
//   recommended_strategy: "exponential_backoff_with_jitter",
//   base_delay_ms: 1000,
//   max_delay_ms: 60000,
//   jitter: true,
//   notes: "Read x-ratelimit-reset-requests for exact reset time. Always add jitter."
// }

// Build a correct retry loop from the data
const { base_delay_ms, max_delay_ms, jitter } = rate_limit_recovery[0]
const delay = attempt => Math.min(base_delay_ms * 2 ** attempt, max_delay_ms)
const withJitter = ms => ms * (0.5 + Math.random() * 0.5)
// retryDelay = jitter ? withJitter(delay(attempt)) : delay(attempt)
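The pieces above assemble into a complete retry wrapper. A sketch under the documented field names — `callFn` stands for any function returning a fetch-style response, and the Retry-After handling assumes the seconds format reported for this provider:

```javascript
// Full retry loop built from a rate_limit_recovery record: exponential
// backoff capped at max_delay_ms, optional jitter, honouring Retry-After.
async function withRetry(callFn, recovery, maxAttempts = 5) {
  const { base_delay_ms, max_delay_ms, jitter, retry_after_header } = recovery;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await callFn();
    if (res.status !== 429) return res;
    // Prefer the server's own hint when present (documented format: seconds)
    const hinted = res.headers?.get?.(retry_after_header);
    let ms = hinted
      ? Number(hinted) * 1000
      : Math.min(base_delay_ms * 2 ** attempt, max_delay_ms);
    if (jitter) ms *= 0.5 + Math.random() * 0.5;
    await new Promise(resolve => setTimeout(resolve, ms));
  }
  throw new Error("rate limited: retries exhausted");
}
```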

RegionsFree

Inference regions per provider — which geographic regions are available, which models are available in each region, and EU data residency availability. Essential for GDPR compliance, data sovereignty requirements, and latency optimization by routing to a closer inference region.

provider

Filter to one provider

region

Filter by geography. e.g. region=eu · region=us · region=ap

model

Partial model name — only providers with that model in a region

eu_only

true — only providers with EU inference regions

GET /api/v1/regions?eu_only=true · Free
const { regions, meta } = await fetch(
  "https://topnetworks.com/api/v1/regions?eu_only=true"
).then(r => r.json())

// meta: { eu_available_count: 18, us_available_count: 42, ap_available_count: 22 }
// regions[0]: {
//   provider_id: "azure-openai", provider_name: "Azure OpenAI",
//   inference_regions: [
//     { region_id: "swedencentral", name: "Sweden Central",
//       geography: "eu", models: ["gpt-4o", "gpt-4o-mini"] },
//     { region_id: "westeurope",    name: "West Europe", geography: "eu", models: null }
//   ],
//   geographies: ["us", "eu", "ap"],
//   eu_available: true,
//   notes: "Model availability varies by region. Check Azure portal for current availability."
// }

// Find EU-only providers for GDPR-sensitive workloads
const euOnly = regions.filter(r => r.geographies.length === 1 && r.eu_available)
console.log(`${euOnly.length} providers with EU inference only`)

Uptime HistoryFree

Daily uptime % timeseries from live health polling data — 7, 30, or 90 day windows. Returns a per-day breakdown with check counts and a cross-reference against the provider's published SLA target. Use this to make data-driven decisions about which providers belong in your primary vs fallback slot.

providerrequired

required — provider ID. e.g. provider=openai

period

7d · 30d · 90d (default: 30d)

overall_uptime_pct

Weighted uptime % for the entire period

sla_target

Contractual SLA % (from /api/v1/sla)

sla_met

true if observed uptime >= SLA target

incidents_in_period

Days with at least one outage check

timeline[].date

ISO date (YYYY-MM-DD)

timeline[].uptime_pct

Daily uptime percentage

timeline[].total_checks

Number of polls on that day

timeline[].outage_checks

Number of failed checks on that day

GET /api/v1/uptime-history?provider=openai&period=30d · Free
const data = await fetch(
  "https://topnetworks.com/api/v1/uptime-history?provider=openai&period=30d"
).then(r => r.json())

// {
//   provider: "openai", period: "30d", granularity: "daily",
//   overall_uptime_pct: 99.4,
//   sla_target: 99.9, sla_met: false,
//   incidents_in_period: 2,
//   timeline: [
//     { date: "2026-03-24", uptime_pct: 100, total_checks: 144, outage_checks: 0 },
//     { date: "2026-03-23", uptime_pct: 93.8, total_checks: 144, outage_checks: 9 },
//     ...
//   ]
// }

if (!data.sla_met) {
  console.warn(`${data.provider} is below its ${data.sla_target}% SLA — observed: ${data.overall_uptime_pct}%`)
}
// Plot the timeline for a dashboard
const worstDay = [...data.timeline].sort((a, b) => a.uptime_pct - b.uptime_pct)[0]  // copy first — sort mutates
console.log(`Worst day: ${worstDay.date} at ${worstDay.uptime_pct}% uptime`)

GuardrailsFree

Content filtering and safety configuration per provider. Returns filter categories, strictness levels, whether filters are configurable or disableable, and false-positive risk rating. Essential for agents that handle medical, security research, or other content that triggers aggressive safety filters on some providers.

provider

Filter to one provider

configurable_only

true — only providers with configurable filters

can_disable

true — only providers where filters can be disabled

category

Filter by category. e.g. category=violence

GET /api/v1/guardrails?can_disable=true · Free
const { guardrails, meta } = await fetch(
  "https://topnetworks.com/api/v1/guardrails?can_disable=true"
).then(r => r.json())

// meta: { can_disable: 3, no_filters: 4, high_false_positive: 1 }
// guardrails[0]: {
//   provider_id: "google-gemini",
//   safety_filters_enabled: true, configurable: true,
//   filter_categories: ["harassment","hate_speech","sexually_explicit","dangerous_content"],
//   strictness_levels: ["BLOCK_NONE","BLOCK_ONLY_HIGH","BLOCK_MEDIUM_AND_ABOVE","BLOCK_LOW_AND_ABOVE"],
//   can_disable: true, false_positive_risk: "high",
//   notes: "Most aggressive default filters. BLOCK_NONE available..."
// }
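For agents handling filter-sensitive content, the useful query is "which providers can I tune down?". A sketch over records shaped like the response above (sample rows are illustrative; `BLOCK_NONE` matching assumes the Gemini-style level names shown):

```javascript
// Shortlist providers for content that trips safety filters (e.g. medical or
// security-research text): no filters, disableable filters, or a BLOCK_NONE level.
function filterTolerant(rows) {
  return rows.filter(r =>
    !r.safety_filters_enabled ||
    r.can_disable ||
    (r.strictness_levels ?? []).includes("BLOCK_NONE")
  );
}

const rows = [
  { provider_id: "google-gemini", safety_filters_enabled: true, can_disable: true,
    strictness_levels: ["BLOCK_NONE", "BLOCK_ONLY_HIGH"] },
  { provider_id: "strict-provider", safety_filters_enabled: true, can_disable: false,
    strictness_levels: ["high", "medium"] },
];
console.log(filterTolerant(rows).map(r => r.provider_id));  // → [ 'google-gemini' ]
```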

Rate Limit StatusFree

Live congestion and rate-limit pressure derived from our health polling data. Returns response time trends, error rates, and a congestion level (low/moderate/high/critical) per provider. Different from /rate-limits (published limits) — this tells you how busy a provider is right now.

provider

Filter to one provider

GET /api/v1/rate-limit-status · Free
const { rate_limit_status } = await fetch(
  "https://topnetworks.com/api/v1/rate-limit-status"
).then(r => r.json())

// Sorted by congestion severity (worst first)
// rate_limit_status[0]: {
//   provider_id: "deepseek", congestion: "high",
//   avg_response_time_ms: 4200, p95_response_time_ms: 8100,
//   response_time_trend: 0.35, error_rate_1h: 0.08,
//   recommendation: "Expect slower responses — consider alternatives"
// }

const avoid = rate_limit_status
  .filter(r => r.congestion === "critical" || r.congestion === "high")
  .map(r => r.provider_id)
console.log("Avoid:", avoid)  // Use with /api/v1/pick?avoid=...

Migration GuideFree

When switching from one provider to another — parameter name mapping, missing features, auth differences, response format changes, and gotchas. Requires at least one of the from or to parameters. Essential for failover implementations and planned provider migrations.

from

Source provider ID

to

Target provider ID

max_difficulty

drop_in · easy · moderate · hard

drop_in_only

true — only drop-in replacements

GET /api/v1/migration-guide?from=openai&to=deepseek · Free
const { migration_guides } = await fetch(
  "https://topnetworks.com/api/v1/migration-guide?from=openai&to=deepseek"
).then(r => r.json())

// migration_guides[0]: {
//   from_provider: "openai", to_provider: "deepseek",
//   difficulty: "drop_in", openai_compat: true,
//   param_changes: [{ from: "base_url=api.openai.com", to: "base_url=api.deepseek.com" }],
//   missing_features: ["vision (V3 only)","audio","structured outputs (R1)"],
//   auth_change: "Same Authorization: Bearer format. Different API key.",
//   gotchas: [
//     "DeepSeek-R1 does NOT support JSON mode — use V3 for structured output",
//     "R1 thinking tokens counted toward context window",
//     "Rate limits are lower than OpenAI"
//   ]
// }
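Before committing to a migration, check missing_features against what your app actually uses. A sketch assuming the guide shape above; the substring matching is a simplification — the real feature strings are free-form:

```javascript
// Return the subset of required features the target provider lacks.
// Case-insensitive substring match against the guide's missing_features.
function migrationBlockers(guide, requiredFeatures) {
  return requiredFeatures.filter(feature =>
    guide.missing_features.some(m => m.toLowerCase().includes(feature.toLowerCase()))
  );
}

const guide = {
  difficulty: "drop_in",
  missing_features: ["vision (V3 only)", "audio", "structured outputs (R1)"],
};
console.log(migrationBlockers(guide, ["audio", "streaming"]));  // → [ 'audio' ]
```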

Agent ProtocolsFree

Which agent interoperability protocols each provider supports — MCP, A2A, ACP, ANP, OAP, and custom. Includes support level (native/partial/planned/none), protocol version, and docs. Essential for building multi-provider agent pipelines.

provider

Filter by provider ID

protocol

MCP · A2A · ACP · ANP · OAP · custom

support_level

native · partial · planned · none

GET /api/v1/agent-protocols?protocol=MCP&support_level=native · Free
const { data } = await fetch(
  "https://topnetworks.com/api/v1/agent-protocols?protocol=MCP&support_level=native"
).then(r => r.json())

// data[0]: {
//   provider_id: "anthropic", provider_name: "Anthropic",
//   protocol: "MCP", support_level: "native",
//   version: "2025-03-26",
//   notes: "MCP creator. Native MCP client in API and Claude desktop.",
//   docs_url: "https://docs.anthropic.com/en/docs/build-with-claude/mcp"
// }

Knowledge CutoffFree

Training data cutoff dates per model. Filter by date range to find models with the freshest training data. A null cutoff_date means the model uses live search (e.g., Perplexity Sonar). Dates are approximate for most providers.

provider

Filter by provider ID

model

Filter by model ID (partial match)

after

Only models with cutoff after YYYY-MM-DD

before

Only models with cutoff before YYYY-MM-DD

GET /api/v1/knowledge-cutoff?after=2024-12-01 · Free
const { data } = await fetch(
  "https://topnetworks.com/api/v1/knowledge-cutoff?after=2024-12-01"
).then(r => r.json())

// data[0]: {
//   provider_id: "anthropic", model_id: "claude-opus-4",
//   cutoff_date: "2025-03-01", cutoff_approximate: true,
//   notes: "Most recent Anthropic cutoff. Training through early 2025.",
//   docs_url: "https://docs.anthropic.com/en/docs/about-claude/models"
// }
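One common use of this data is picking the model with the freshest knowledge from a result set. A sketch over rows shaped like the response above (sample values are illustrative):

```javascript
// Pick the model with the most recent training data. A null cutoff_date
// means live search, so such a model wins outright.
function freshestModel(rows) {
  const live = rows.find(r => r.cutoff_date === null);
  if (live) return live;
  // ISO dates (YYYY-MM-DD) sort correctly as strings
  return rows.slice().sort((a, b) => b.cutoff_date.localeCompare(a.cutoff_date))[0];
}

const rows = [
  { model_id: "claude-opus-4", cutoff_date: "2025-03-01" },
  { model_id: "gpt-4o", cutoff_date: "2024-10-01" },
];
console.log(freshestModel(rows).model_id);  // → claude-opus-4
```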

Tool Call FormatFree

Exact message role/content structure required for tool calls and tool results per provider. Anthropic uses the user role for tool results; OpenAI uses a dedicated tool role; Google uses a function role. Critical for building provider-agnostic tool-use pipelines.

provider

Filter by provider ID

model

Filter by model ID (partial match)

GET /api/v1/tool-call-format?provider=anthropic · Free
const { data } = await fetch(
  "https://topnetworks.com/api/v1/tool-call-format?provider=anthropic"
).then(r => r.json())

// data[0]: {
//   provider_id: "anthropic", model_id: null,
//   role_order: ["user","assistant","user"],
//   tool_call_role: "assistant", tool_result_role: "user",
//   tool_result_content_key: "content",
//   parallel_tool_calls: true, strict_role_ordering: true,
//   tool_schema_format: "anthropic-v1",
//   notes: "Tool results sent as user messages with type:tool_result blocks."
// }
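The role and content-key fields make it possible to assemble tool-result messages generically. A sketch assuming the record shape above — the inner block shape (type/tool_use_id keys) follows the Anthropic convention and is illustrative; verify each provider's exact payload in its own docs:

```javascript
// Build a tool-result message from the documented format fields, so the
// same pipeline code adapts to each provider's role conventions.
function toolResultMessage(format, toolUseId, result) {
  return {
    role: format.tool_result_role,
    [format.tool_result_content_key]: [
      { type: "tool_result", tool_use_id: toolUseId, content: result },
    ],
  };
}

const anthropicFormat = { tool_result_role: "user", tool_result_content_key: "content" };
const msg = toolResultMessage(anthropicFormat, "toolu_123", "72°F and sunny");
console.log(msg.role);  // → user
```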

Streaming ProtocolsFree

SSE vs WebSocket streaming details per provider — supported events, stream end signals, realtime API availability, and known quirks. Groq has the fastest SSE; OpenAI and Azure have full WebSocket realtime APIs. AWS Bedrock uses a binary EventStream protocol.

provider

Filter by provider ID

protocol

SSE · WebSocket — filter by supported protocol

GET /api/v1/streaming-protocols?protocol=WebSocket · Free
const { data, meta } = await fetch(
  "https://topnetworks.com/api/v1/streaming-protocols?protocol=WebSocket"
).then(r => r.json())

// meta.websocket_count: 3
// data[0]: {
//   provider_id: "openai", primary_protocol: "both",
//   sse_supported: true, websocket_supported: true,
//   supported_events: ["content.delta","message.stop","tool_calls.delta",...],
//   stream_end_signal: "[DONE]", realtime_api: true,
//   known_quirks: "Realtime API uses WebSocket for voice/audio."
// }

Output ReproducibilityFree

Seed parameter support for deterministic outputs. Only some providers expose a seed parameter, and none guarantee strong determinism — most offer "best effort." OpenAI and Azure expose system_fingerprint to detect when backend changes break reproducibility.

provider

Filter by provider ID

deterministic_only

true — only providers with seed + non-none guarantee

GET /api/v1/output-reproducibility?deterministic_only=true · Free
const { data } = await fetch(
  "https://topnetworks.com/api/v1/output-reproducibility?deterministic_only=true"
).then(r => r.json())

// data[0]: {
//   provider_id: "openai", seed_supported: true, seed_param: "seed",
//   deterministic_guarantee: "best_effort",
//   same_hardware_required: true, system_fingerprint_exposed: true,
//   known_variability_factors: ["model updates","infrastructure changes","different system_fingerprint"],
//   notes: "When seed is set and system_fingerprint matches, outputs are highly reproducible."
// }
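Since determinism is best-effort, the practical check is whether two same-seed runs are even comparable. A sketch assuming the record shape above; `resp1`/`resp2` stand in for raw API responses carrying a system_fingerprint field:

```javascript
// Flag when two same-seed responses are not comparable because the
// backend changed underneath you.
function comparableRuns(repro, resp1, resp2) {
  if (!repro.seed_supported) return { comparable: false, reason: "no seed parameter" };
  if (repro.system_fingerprint_exposed &&
      resp1.system_fingerprint !== resp2.system_fingerprint) {
    return { comparable: false, reason: "system_fingerprint changed (backend update)" };
  }
  return { comparable: true };
}

const repro = { seed_supported: true, system_fingerprint_exposed: true };
console.log(comparableRuns(repro,
  { system_fingerprint: "fp_a1" }, { system_fingerprint: "fp_b2" }));
```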

Native ToolsFree

Built-in tools providers offer natively — web search, code interpreter, image generation, file search, computer use. Includes pricing, compatible models, and availability (GA/beta/preview). Perplexity and xAI have native web search with no separate charge.

provider

Filter by provider ID

tool_type

web_search · code_interpreter · image_generation · file_search · computer_use · memory · other

model

Filter by compatible model

GET /api/v1/native-tools?tool_type=web_search · Free
const { data } = await fetch(
  "https://topnetworks.com/api/v1/native-tools?tool_type=web_search"
).then(r => r.json())

// data[0]: {
//   provider_id: "perplexity", tool_name: "web_search",
//   tool_type: "web_search",
//   compatible_models: ["sonar","sonar-pro","sonar-reasoning","sonar-reasoning-pro"],
//   separate_pricing: false,
//   pricing_notes: "Web search is intrinsic to all Sonar models — no additional charge",
//   availability: "ga"
// }

Model Task FitFree

Curated task suitability scores (0–100) per model across 10 task types. Tiers: elite (90+), strong (75–89), capable (60–74), limited (<60). Use this to quickly find the best model for a specific workload — or filter by min_score to surface viable options.

provider

Filter by provider ID

model

Filter by model ID (partial match)

task

code_generation · reasoning · summarization · creative_writing · tool_use · structured_extraction · rag_qa · math · instruction_following · long_context

min_score

Minimum score (0–100)

tier

elite · strong · capable · limited

GET /api/v1/model-task-fit?task=reasoning&tier=elite · Free
const { data } = await fetch(
  "https://topnetworks.com/api/v1/model-task-fit?task=reasoning&tier=elite"
).then(r => r.json())

// data sorted by score desc:
// [
//   { provider_id: "openai", model_id: "o3", task: "reasoning", score: 98, tier: "elite" },
//   { provider_id: "deepseek", model_id: "deepseek-r1", task: "reasoning", score: 96, tier: "elite" },
//   { provider_id: "anthropic", model_id: "claude-opus-4", task: "reasoning", score: 95, tier: "elite" },
//   { provider_id: "google-gemini", model_id: "gemini-2.5-pro", task: "reasoning", score: 94, tier: "elite" },
// ]
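A natural follow-up is choosing a primary model plus a fallback from the score-sorted rows. A sketch over the shape above (sample scores are illustrative):

```javascript
// Pick a primary model and a fallback for a task, both above a score floor.
// Assumes rows are already sorted by score descending, as documented.
function pickWithFallback(rows, minScore = 90) {
  const viable = rows.filter(r => r.score >= minScore);
  return { primary: viable[0] ?? null, fallback: viable[1] ?? null };
}

const rows = [
  { provider_id: "openai", model_id: "o3", score: 98 },
  { provider_id: "deepseek", model_id: "deepseek-r1", score: 96 },
  { provider_id: "anthropic", model_id: "claude-opus-4", score: 95 },
];
const { primary, fallback } = pickWithFallback(rows, 96);
console.log(primary.model_id, fallback.model_id);  // → o3 deepseek-r1
```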

PII HandlingFree

Native PII detection and redaction capabilities per provider. Only AWS Bedrock Guardrails provides comprehensive native PII detection + redaction. Azure Content Safety offers partial detection. All other providers require external tooling (e.g., Microsoft Presidio).

provider

Filter by provider ID

detection_only

true — only providers with detection

redaction

true — only providers with redaction

GET /api/v1/pii-handling?redaction=true · Free
const { data } = await fetch(
  "https://topnetworks.com/api/v1/pii-handling?redaction=true"
).then(r => r.json())

// data[0]: {
//   provider_id: "aws-bedrock", pii_detection_supported: true,
//   pii_redaction_supported: true,
//   supported_pii_types: ["name","email","phone","address","ssn","credit_card",...],
//   detection_in_prompt: true, detection_in_response: true,
//   audit_trail_supported: true, configurable: true,
//   notes: "Amazon Bedrock Guardrails: comprehensive PII detection + redaction."
// }

Context CompressionFree

Native context compression and management capabilities. Google Gemini offers Context Caching with configurable TTL. Anthropic and OpenAI have prompt caching (exact-match prefix). No provider offers automatic summarization-based compression — use MemGPT, Letta, or LangChain externally.

provider

Filter by provider ID

native_only

true — only providers with native compression

GET /api/v1/context-compression?native_only=true · Free
const { data, meta } = await fetch(
  "https://topnetworks.com/api/v1/context-compression?native_only=true"
).then(r => r.json())

// meta.native_compression_count: 1
// data[0]: {
//   provider_id: "google-gemini", native_compression: true,
//   compression_methods: ["key_extraction"],
//   prompt_caching_as_compression: true, auto_compression: true,
//   ttl_configurable: true,
//   notes: "Context Caching API allows caching large stable context chunks."
// }

Security CertificationsFree

Security certifications per provider — SOC2 Type2, ISO27001, HIPAA, GDPR, FedRAMP, PCI DSS, CSA STAR, HITRUST. Azure OpenAI and AWS Bedrock hold the most certifications. DeepSeek has unknown compliance status. Filter by certification to find HIPAA-eligible or FedRAMP-authorized providers.

provider

Filter by provider ID

certification

SOC2_Type2 · ISO27001 · HIPAA · GDPR · FedRAMP · PCI_DSS · CSA_STAR · HITRUST

status

certified · in_progress · not_applicable · unknown

GET /api/v1/security-certifications?certification=HIPAA&status=certified · Free
const { data, summary } = await fetch(
  "https://topnetworks.com/api/v1/security-certifications?certification=HIPAA&status=certified"
).then(r => r.json())

// data[0]: {
//   provider_id: "aws-bedrock", certification: "HIPAA", status: "certified",
//   scope: "AWS Bedrock covered under AWS HIPAA BAA"
// }
// summary: [{ provider_id: "azure-openai", certified_count: 7, certifications: [...] }, ...]

Semantic CachingFree

Semantic similarity-based caching support per provider. As of 2026-03, no major LLM provider offers native semantic caching — it must be implemented via middleware (GPTCache, LangChain, Redis + embeddings). Most providers with caching use exact-match prefix matching only.

provider

Filter by provider ID

semantic_only

true — only providers with semantic caching (currently empty)

GET /api/v1/semantic-caching?provider=openai · Free
const { data, meta } = await fetch(
  "https://topnetworks.com/api/v1/semantic-caching?provider=openai"
).then(r => r.json())

// meta.note: "As of 2026-03, no major LLM provider offers native semantic caching."
// data[0]: {
//   provider_id: "openai", semantic_caching_supported: false,
//   caching_type: "exact_match", similarity_threshold_configurable: false,
//   cache_scope: "org", ttl_configurable: false,
//   notes: "Prompt caching uses prefix matching (exact_match on token prefix). No semantic similarity."
// }
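Because semantic caching must live in middleware, the core pattern is a cache keyed by embedding rather than by exact string. A toy sketch, not any provider's API — `embed` is a deterministic stand-in for a real embeddings call, and the threshold is illustrative:

```javascript
// Stand-in embedding: hashes characters into a normalized 8-dim vector.
// In practice you would call a real embeddings API here.
const embed = (text) => {
  const v = new Array(8).fill(0);
  for (let i = 0; i < text.length; i++) v[i % 8] += text.charCodeAt(i);
  const norm = Math.hypot(...v);
  return v.map((x) => x / norm);
};
const cosine = (a, b) => a.reduce((s, x, i) => s + x * b[i], 0);

class SemanticCache {
  constructor(threshold = 0.95) {
    this.threshold = threshold;
    this.entries = [];
  }
  get(prompt) {
    const q = embed(prompt);
    const hit = this.entries.find((e) => cosine(q, e.vec) >= this.threshold);
    return hit ? hit.answer : null;
  }
  set(prompt, answer) {
    this.entries.push({ vec: embed(prompt), answer });
  }
}

const cache = new SemanticCache(0.99);
cache.get("What is the capital of France?"); // → null (empty cache)
cache.set("What is the capital of France?", "Paris");
cache.get("What is the capital of France?"); // → "Paris"
```

GPTCache and the LangChain cache integrations implement this same lookup with production embeddings and a vector store in place of the linear scan.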

MCP SupportFree

Provider MCP (Model Context Protocol) client/server compliance. Anthropic created MCP and has the most complete implementation. OpenAI added MCP client support via Responses API in 2025. Google focuses on A2A and has MCP planned. AWS Bedrock and Azure have MCP client support in their agent frameworks.

provider

Filter by provider ID

server

true — only providers with MCP server support

client

true — only providers with MCP client support

GET /api/v1/mcp-support?client=true · Free
const { data, meta } = await fetch(
  "https://topnetworks.com/api/v1/mcp-support?client=true"
).then(r => r.json())

// meta.mcp_client_count: 5
// data[0]: {
//   provider_id: "anthropic", mcp_server_supported: true, mcp_client_supported: true,
//   mcp_version: "2025-03-26",
//   hosted_mcp_servers: ["filesystem","github","postgres","puppeteer","brave-search","google-maps"],
//   custom_mcp_server_support: true,
//   notes: "MCP creators. Native client in Claude API."
// }
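To find providers that can sit on either side of an MCP connection, filter the response client-side. A small sketch (the helper name and inline sample are illustrative; field names follow the documented shape):

```javascript
// Keep providers that support MCP as both server and client.
const fullMcp = (data) =>
  data
    .filter((p) => p.mcp_server_supported && p.mcp_client_supported)
    .map((p) => p.provider_id);

fullMcp([
  { provider_id: "anthropic", mcp_server_supported: true, mcp_client_supported: true },
  { provider_id: "openai", mcp_server_supported: false, mcp_client_supported: true },
]); // → ["anthropic"]
```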

Model LifecycleFree

Current lifecycle stage per model — experimental, preview, beta, GA, soft deprecated, hard deprecated, or sunset. Use deprecated_only to find models that need migration. Use active_only to filter to production-ready models with SLA eligibility.

provider

Filter by provider ID

model

Filter by model ID (partial match)

stage

experimental · preview · beta · ga · soft_deprecated · hard_deprecated · sunset

deprecated_only

true — only soft/hard deprecated and sunset

active_only

true — only experimental/preview/beta/ga

GET /api/v1/model-lifecycle?deprecated_only=true · Free
const { data, meta } = await fetch(
  "https://topnetworks.com/api/v1/model-lifecycle?deprecated_only=true"
).then(r => r.json())

// meta: { deprecated_count: 8, sunset_count: 3 }
// data[0]: {
//   provider_id: "openai", model_id: "gpt-4-turbo",
//   stage: "soft_deprecated", launch_date: "2023-11-06",
//   deprecation_date: "2025-04-01",
//   replacement_model: "gpt-4o", sla_eligible: false,
//   notes: "Replaced by gpt-4o. Migrate at your earliest convenience."
// }
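A common use of the deprecated list is building a migration lookup from old model IDs to their replacements. A sketch assuming the documented response shape (the sunset entry in the sample is illustrative):

```javascript
// Map deprecated model IDs to their replacement, skipping models with none.
function migrationMap(data) {
  return Object.fromEntries(
    data
      .filter((m) => m.replacement_model)
      .map((m) => [m.model_id, m.replacement_model])
  );
}

const deprecated = [
  { provider_id: "openai", model_id: "gpt-4-turbo", stage: "soft_deprecated", replacement_model: "gpt-4o" },
  { provider_id: "openai", model_id: "old-sunset-model", stage: "sunset", replacement_model: null },
];
migrationMap(deprecated); // → { "gpt-4-turbo": "gpt-4o" }
```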

Delegation SupportFree

Secure agent delegation semantics per provider. Google (A2A) and AWS Bedrock (IAM) offer the most robust delegation — time-bound, auditable, and revocable. Anthropic and OpenAI offer partial delegation via MCP and the Assistants API, respectively. Most providers have no delegation protocol.

provider

Filter by provider ID

protocol

Filter by protocol name (partial match)

GET /api/v1/delegation-support?provider=aws-bedrock · Free
const { data } = await fetch(
  "https://topnetworks.com/api/v1/delegation-support?provider=aws-bedrock"
).then(r => r.json())

// data[0]: {
//   provider_id: "aws-bedrock", delegation_supported: true,
//   protocol: "IAM + Bedrock Agents", time_bound: true,
//   auditable: true, revocable: true,
//   notes: "IAM roles + Bedrock Agents chaining. Time-bound via STS AssumeRole. CloudTrail audit."
// }
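The two properties the endpoint tracks — time-bound and revocable — reduce to a small invariant. A toy illustration of the semantics (not AWS's API; STS AssumeRole and CloudTrail provide the production equivalents):

```javascript
// Each grant carries an expiry; checks fail after expiry or explicit revocation.
const grants = new Map();

function delegate(agentId, ttlMs) {
  grants.set(agentId, Date.now() + ttlMs); // time-bound
}
function canAct(agentId) {
  const expiry = grants.get(agentId);
  return expiry !== undefined && Date.now() < expiry;
}
function revoke(agentId) {
  grants.delete(agentId); // revocable
}

delegate("sub-agent-1", 60_000);
canAct("sub-agent-1"); // → true
revoke("sub-agent-1");
canAct("sub-agent-1"); // → false
```

Auditability — the third property — would mean logging every `delegate`, `canAct`, and `revoke` call, which is what CloudTrail does for IAM.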

Prompt ModerationFree

Provider-native input-side prompt moderation applied before inference. AWS Bedrock Guardrails and Azure Prompt Shields offer prompt injection detection. OpenAI has a separate, free /moderations API. Google Gemini has configurable SafetySettings per category. Groq and most other inference providers have no native moderation.

provider

Filter by provider ID

injection_detection

true — only providers with prompt injection detection

configurable

true — only providers with configurable moderation

GET /api/v1/prompt-moderation?injection_detection=true · Free
const { data, meta } = await fetch(
  "https://topnetworks.com/api/v1/prompt-moderation?injection_detection=true"
).then(r => r.json())

// meta.injection_detection_count: 2
// data[0]: {
//   provider_id: "aws-bedrock", input_moderation_supported: true,
//   prompt_injection_detection: true,
//   supported_categories: ["hate","insults","sexual","violence","prompt_attack",...],
//   moderation_action: ["block","flag"],
//   configurable: true, separate_moderation_api: false,
//   latency_impact: "medium",
//   notes: "Bedrock Guardrails: Prompt Attack detection for injection."
// }
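Since most providers have no native moderation, a routing layer typically checks these flags to decide whether to add its own moderation pass. A sketch assuming the documented response fields (the helper name is illustrative):

```javascript
// A provider needs an external moderation layer when it lacks either
// native input moderation or prompt injection detection.
function needsExternalModeration(entry) {
  return !(entry.input_moderation_supported && entry.prompt_injection_detection);
}

needsExternalModeration({
  provider_id: "aws-bedrock",
  input_moderation_supported: true,
  prompt_injection_detection: true,
}); // → false

needsExternalModeration({
  provider_id: "groq",
  input_moderation_supported: false,
  prompt_injection_detection: false,
}); // → true
```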