One SDK. Smart routing across OpenAI, Anthropic, Gemini, and Llama via Groq. Automatic failover. Cost optimisation. Zero lock-in.
```python
from uniq import AI

ai = AI(api_key="uq_••••••••••••••••")

response = ai.complete(
    "Explain transformer architecture",
    model="auto",  # routes to best provider
)
print(response.text)
# ✓ Provider : anthropic → claude-sonnet-4-6
# ✓ Latency  : 1.24s │ Tokens: 342 │ $0.0014
```

Platform
Built for developers who want the simplest possible interface to the most powerful AI models — without sacrificing control.
Automatically classifies your prompt — code, math, analysis, translation — and routes to the model with the strongest performance for that task. No config required.
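The routing engine itself isn't shown here, but the idea can be sketched in plain Python. Every keyword table, model name, and score below is invented for illustration; this is not the actual uniq classifier:

```python
# Illustrative task-based routing: classify a prompt by keyword,
# then pick the model with the strongest score for that task.

TASK_KEYWORDS = {
    "code": ("function", "bug", "refactor", "python"),
    "math": ("integral", "equation", "prove", "solve"),
    "translation": ("translate", "french", "japanese"),
}

# Hypothetical per-task model rankings (higher = stronger).
MODEL_SCORES = {
    "code": {"openai:gpt-4o": 0.92, "anthropic:claude-sonnet": 0.95},
    "math": {"openai:gpt-4o": 0.90, "anthropic:claude-sonnet": 0.88},
    "translation": {"gemini:1.5-pro": 0.93, "openai:gpt-4o": 0.89},
    "general": {"groq:llama-3.1-70b": 0.85, "openai:gpt-4o": 0.90},
}

def classify(prompt: str) -> str:
    text = prompt.lower()
    for task, words in TASK_KEYWORDS.items():
        if any(w in text for w in words):
            return task
    return "general"

def route(prompt: str) -> str:
    scores = MODEL_SCORES[classify(prompt)]
    return max(scores, key=scores.get)

print(route("Refactor this Python function"))  # anthropic:claude-sonnet
```

A production classifier would use an embedding model rather than keywords, but the shape of the decision is the same: task label in, best-scoring model out.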
If a provider goes down, starts rate-limiting, or times out, the request is instantly retried against the next healthy provider in your fallback chain — fully transparent to your app.
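The mechanics of a fallback chain are simple enough to sketch in a few lines of plain Python (the provider callables and exception type here are stand-ins, not the SDK's real interface):

```python
# Illustrative failover loop: try each provider in order; on a
# transient failure, fall through to the next one in the chain.

class ProviderDown(Exception):
    pass

def complete_with_failover(prompt, chain):
    """chain: ordered list of (name, callable) pairs."""
    errors = {}
    for name, call in chain:
        try:
            return name, call(prompt)
        except (ProviderDown, TimeoutError) as exc:
            errors[name] = exc  # record, then try the next provider
    raise RuntimeError(f"all providers failed: {errors}")

def flaky(prompt):
    raise ProviderDown("503 Service Unavailable")

def healthy(prompt):
    return f"answer to: {prompt}"

provider, text = complete_with_failover(
    "hi", [("openai", flaky), ("anthropic", healthy)]
)
print(provider, text)  # anthropic answer to: hi
```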
Routes to the cheapest model capable of handling the request. The routing engine balances latency, quality, and per-token cost in real time across all configured providers.
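"Cheapest model capable of handling the request" reduces to a filter-then-minimize step. A minimal sketch, with entirely made-up prices and quality scores:

```python
# Cheapest-capable selection: keep models above a quality floor,
# then take the lowest per-token price among them.

MODELS = [
    # (name, quality score 0-1, USD per 1K tokens) — invented figures
    ("groq:llama-3.1-8b",       0.70, 0.00005),
    ("openai:gpt-4o-mini",      0.82, 0.00015),
    ("anthropic:claude-sonnet", 0.95, 0.00300),
]

def cheapest_capable(min_quality: float) -> str:
    eligible = [m for m in MODELS if m[1] >= min_quality]
    if not eligible:
        raise ValueError("no model meets the quality floor")
    return min(eligible, key=lambda m: m[2])[0]

print(cheapest_capable(0.80))  # openai:gpt-4o-mini
print(cheapest_capable(0.90))  # anthropic:claude-sonnet
```

The real engine additionally weighs live latency, but the cost dimension works exactly like this: quality is a constraint, price is the objective.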
Every call is logged with provider, model, token counts, latency, and estimated cost. Query summaries by day or month, or page through the full history — all via REST API.
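What a per-day rollup of those call logs looks like can be sketched locally (the log fields below mirror the ones described, but the record shapes are illustrative, not the API's wire format):

```python
# Aggregate per-call log records into per-day usage summaries.
from collections import defaultdict

calls = [
    {"day": "2025-06-01", "provider": "openai",    "tokens": 320, "cost": 0.0012},
    {"day": "2025-06-01", "provider": "anthropic", "tokens": 512, "cost": 0.0021},
    {"day": "2025-06-02", "provider": "openai",    "tokens": 128, "cost": 0.0005},
]

def daily_summary(calls):
    out = defaultdict(lambda: {"requests": 0, "tokens": 0, "cost": 0.0})
    for c in calls:
        day = out[c["day"]]
        day["requests"] += 1
        day["tokens"] += c["tokens"]
        day["cost"] += c["cost"]
    return dict(out)

summary = daily_summary(calls)
print(summary["2025-06-01"]["tokens"])  # 832
```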
Generate scoped `uq_*` keys, revoke compromised credentials instantly, and track per-key request volumes — all without restarting or redeploying your application.
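The bookkeeping behind scoped keys — mint, count, revoke — can be sketched with an in-memory store (the class and its methods are hypothetical, shown only to make the lifecycle concrete):

```python
# Illustrative key lifecycle: create a scoped uq_* key, count requests
# against it, and revoke it instantly without any restart.
import secrets

class KeyStore:
    def __init__(self):
        self._keys = {}  # key -> {"scope", "requests", "active"}

    def create(self, scope: str) -> str:
        key = "uq_" + secrets.token_hex(16)
        self._keys[key] = {"scope": scope, "requests": 0, "active": True}
        return key

    def authorize(self, key: str) -> bool:
        rec = self._keys.get(key)
        if rec is None or not rec["active"]:
            return False
        rec["requests"] += 1  # per-key request volume tracking
        return True

    def revoke(self, key: str) -> None:
        self._keys[key]["active"] = False

store = KeyStore()
k = store.create("read-only")
print(store.authorize(k))  # True
store.revoke(k)
print(store.authorize(k))  # False
```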
A typed Python SDK with sync and async clients, context-manager support, and exponential-backoff retries baked in. Works exactly like you'd expect from a mature platform SDK.
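Exponential-backoff retries, as baked into the SDK, follow a standard pattern worth seeing once (this is a generic sketch, not the SDK's internal code; delays are shortened for the demo):

```python
# Retry a transient failure with doubling delays: 0.01s, 0.02s, 0.04s...
import time

def with_retries(fn, attempts=4, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError
    return "ok"

print(with_retries(flaky))  # ok — succeeded on the third attempt
```

Production implementations usually add random jitter to the delay so that many clients retrying at once don't stampede the provider in lockstep.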
Pricing
Start free. Scale when you're ready. No hidden fees, no egress charges.
For side projects and exploration.
For developers shipping real products.
For teams with production workloads.
For large-scale, compliance-critical deployments.
All plans include API key management and the full Python SDK. Prices in USD. Cancel anytime.