Helicone
Agent · FREE TIER · LLM observability, caching, and cost tracking for agents
Operational
All systems responding normally
Last checked 09/04/2026, 6:41:55 pm
417ms response
Uptime
100.00%
Avg Latency
489ms
P95 Latency
605ms
Fastest
387ms
Checks
150
Response Time
Last 60 checks

💰 Pricing
Growth: $20/mo · Team: $200/mo · 10k requests/month free
⚡ Rate Limits
Proxy layer. Rate limits depend on underlying provider. 10k requests/month cap on free.
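Because Helicone sits in front of your LLM provider as a proxy, integration is typically just a base-URL swap plus an auth header on each request. A minimal sketch of building that request config in Python; the gateway URL (`oai.helicone.ai`) and header names (`Helicone-Auth`, `Helicone-Cache-Enabled`) are assumptions based on Helicone's documented OpenAI proxy and should be verified against current docs:

```python
# Sketch: route OpenAI-style requests through Helicone's proxy layer.
# Gateway URL and Helicone-* header names are assumptions -- check Helicone's docs.

HELICONE_BASE_URL = "https://oai.helicone.ai/v1"  # assumed gateway endpoint


def helicone_config(provider_key: str, helicone_key: str, cache: bool = True) -> dict:
    """Build the base URL and headers for proxying provider calls via Helicone.

    The underlying provider key still authenticates the actual LLM call;
    Helicone only observes, caches, and tracks cost in between.
    """
    headers = {
        # Standard provider auth passes through untouched:
        "Authorization": f"Bearer {provider_key}",
        # Helicone's own auth header identifies your Helicone project (assumed name):
        "Helicone-Auth": f"Bearer {helicone_key}",
    }
    if cache:
        # Opt into Helicone's response caching (assumed header name):
        headers["Helicone-Cache-Enabled"] = "true"
    return {"base_url": HELICONE_BASE_URL, "headers": headers}


cfg = helicone_config("sk-provider-key", "sk-helicone-key")
print(cfg["base_url"])
```

Note that because rate limits pass through to the underlying provider, a 429 from this proxy may originate from either Helicone's free-tier cap or the provider itself.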
🤖 Models (1)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| Observability — LLM observability and caching (not an LLM itself) | llm | — | — | — | — |
Other Agent Providers
E2B
Secure cloud sandboxes for AI agents — code execution
Browserbase
Cloud browser infrastructure for AI agents — Playwright/Puppeteer
LangSmith
LLM observability, tracing, and evaluation platform
Portkey AI
AI gateway — routing, fallbacks, guardrails, observability
Vapi
Voice AI infrastructure for agents — real-time phone calls
Composio
100+ tool integrations for AI agents — GitHub, Slack, Gmail