SiliconFlow Inference (FREE TIER)
High-performance open-source model API — DeepSeek, Qwen
Operational
All systems responding normally
Last checked 06/04/2026, 12:23:14 am (1947ms response)
Uptime History: 100.00% uptime (2026-04-03 to Today)
| Uptime | Avg Latency | P95 Latency | Fastest | Checks |
|---|---|---|---|---|
| 100.00% | 1695ms | 1997ms | 1076ms | 150 |
Response Time (last 60 checks): 1076ms min, 1695ms avg, 2242ms max
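As a sketch of how stats like the avg and P95 above are derived, the snippet below applies a nearest-rank percentile to the 15 recent-check latencies listed further down this page. Note the dashboard's own figures come from a larger 60-check window, so the numbers here differ; the method, not the values, is the point.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    ranked = sorted(samples)
    k = math.ceil(p / 100 * len(ranked))
    return ranked[max(k - 1, 0)]

# Latency samples (ms) taken from the 15 recent checks on this page
latencies = [1947, 1794, 1524, 1453, 1679, 1647, 1898,
             1422, 1501, 1706, 1727, 1561, 1850, 1414, 1700]

avg = sum(latencies) / len(latencies)
p95 = percentile(latencies, 95)
print(f"avg={avg:.0f}ms p95={p95}ms min={min(latencies)}ms max={max(latencies)}ms")
```

With only 15 samples, nearest-rank P95 lands on the single slowest check (1947ms); over the 60-check window the dashboard uses, the estimate is less sensitive to one outlier.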
💰 Pricing
deepseek-v3 (FREE)
Input: $0.27/1M tokens · Output: $1.10/1M tokens
Strong for CJK tasks
Free tier available
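The listed rates translate into per-request cost as a simple linear function of token counts. A minimal estimator, using only the prices shown above:

```python
PRICE_IN_PER_M = 0.27   # USD per 1M input tokens (from the pricing above)
PRICE_OUT_PER_M = 1.10  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one deepseek-v3 request at the listed paid rates."""
    return (input_tokens * PRICE_IN_PER_M
            + output_tokens * PRICE_OUT_PER_M) / 1_000_000

# e.g. a 10k-token prompt with a 2k-token completion
print(f"${estimate_cost(10_000, 2_000):.4f}")  # → $0.0049
```

At these rates output tokens cost roughly 4x input tokens, so long completions dominate the bill even for large prompts.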
⚡ Rate Limits
Free tier: 30 RPM, 60,000 TPM. Paid: up to 500 RPM.
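Staying under a dual requests-per-minute and tokens-per-minute cap is easiest with a client-side sliding-window limiter. A sketch for the free-tier limits above (the class and its interface are illustrative, not part of any SDK; `now` is injectable to keep the logic testable):

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window limiter for the free tier: 30 requests and 60,000 tokens per minute."""
    def __init__(self, rpm=30, tpm=60_000, window=60.0):
        self.rpm, self.tpm, self.window = rpm, tpm, window
        self.events = deque()  # (timestamp, tokens) for requests inside the window

    def _prune(self, now):
        # Drop events older than the window
        while self.events and now - self.events[0][0] >= self.window:
            self.events.popleft()

    def wait_time(self, tokens, now=None):
        """Seconds to wait before a request of `tokens` tokens fits both limits."""
        now = time.monotonic() if now is None else now
        self._prune(now)
        delay = 0.0
        # Request-count limit: wait for the oldest event to age out
        if len(self.events) >= self.rpm:
            delay = max(delay, self.events[0][0] + self.window - now)
        # Token-budget limit: wait until enough old events expire
        used = sum(t for _, t in self.events)
        if used + tokens > self.tpm:
            running = used
            for ts, t in self.events:
                delay = max(delay, ts + self.window - now)
                running -= t
                if running + tokens <= self.tpm:
                    break
        return max(delay, 0.0)

    def record(self, tokens, now=None):
        """Call after each successful request with the token count actually used."""
        self.events.append((time.monotonic() if now is None else now, tokens))
```

Typical use: `sleep(limiter.wait_time(est_tokens))`, send the request, then `limiter.record(actual_tokens)` from the response's usage field.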
🤖 Models (1)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| DeepSeek V3 (high-performance open-source API; strong for CJK) | llm | 128k | — | ✅ | ✅ |
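The table's tools and JSON columns map to standard fields of an OpenAI-style chat completions payload. A sketch of building such a request for this model; the model ID shown is an assumption for illustration (SiliconFlow uses namespaced IDs like `deepseek-ai/DeepSeek-V3`, but confirm against the provider's docs), and nothing is actually sent:

```python
import json

MODEL = "deepseek-ai/DeepSeek-V3"  # assumed model ID; verify in provider docs

def build_chat_request(prompt: str, json_mode: bool = False) -> dict:
    """Build an OpenAI-style chat completions payload for the model above."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }
    if json_mode:
        # The JSON column above indicates structured-output support
        payload["response_format"] = {"type": "json_object"}
    return payload

req = build_chat_request("Summarize DeepSeek V3 in one sentence.", json_mode=True)
print(json.dumps(req, indent=2))
```

POSTing this body to an OpenAI-compatible `/chat/completions` endpoint with a bearer token is then the same as for any other such provider.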
Recent Checks
Showing last 15

| Time | Status | Response |
|---|---|---|
| 6 Apr, 12:23 am | Operational | 1947ms |
| 6 Apr, 12:03 am | Operational | 1794ms |
| 5 Apr, 11:42 pm | Operational | 1524ms |
| 5 Apr, 11:18 pm | Operational | 1453ms |
| 5 Apr, 10:48 pm | Operational | 1679ms |
| 5 Apr, 10:23 pm | Operational | 1647ms |
| 5 Apr, 10:00 pm | Operational | 1898ms |
| 5 Apr, 09:49 pm | Operational | 1422ms |
| 5 Apr, 09:31 pm | Operational | 1501ms |
| 5 Apr, 09:14 pm | Operational | 1706ms |
| 5 Apr, 08:56 pm | Operational | 1727ms |
| 5 Apr, 08:38 pm | Operational | 1561ms |
| 5 Apr, 08:20 pm | Operational | 1850ms |
| 5 Apr, 08:03 pm | Operational | 1414ms |
| 5 Apr, 07:46 pm | Operational | 1700ms |
Other Inference Providers
- Groq: LPU inference — fastest tokens per second on the market
- Cerebras: wafer-scale chip inference — 1,000+ tokens/sec
- Together AI: open-source model inference — Llama, Mixtral, FLUX
- Fireworks AI: fast open-model inference — FireFunction, Llama, Mixtral
- OpenRouter: unified API across 200+ models — route by price or speed
- Hugging Face: serverless inference API — 100k+ open models on demand