# AI Providers — Deep Comparison
Pick the right AI provider for the job. VORTΞXHQ supports 7 backends, each with different strengths, costs, and privacy profiles.
## Built-in providers
| Provider | Best for | Privacy |
|---|---|---|
| Vortex (built-in) | Zero-config first-time experience. | Routed via vortexhq.dev — encrypted in transit. |
| OpenAI | Tool use, vision, large context. | Your OpenAI key, calls go directly. |
| Anthropic | Long-form reasoning, code editing. | Your Anthropic key. |
| DeepSeek | Cost-effective code generation. | Your DeepSeek key. |
| Groq | Ultra-low latency inference. | Your Groq key. |
| Ollama | 100% local, offline-friendly. | Stays on your machine. |
| Custom | Any OpenAI-compatible endpoint (LM Studio, vLLM, OpenRouter). | Whatever you configure. |
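For the Custom provider, you point VORTΞXHQ at any OpenAI-compatible base URL. A minimal sketch, assuming hypothetical setting keys (the `vortex.ai.custom.*` names are illustrative, not confirmed) and LM Studio's default local server on port 1234:

```json
{
  "vortex.ai.provider": "custom",
  "vortex.ai.custom.baseUrl": "http://localhost:1234/v1",
  "vortex.ai.custom.apiKey": "lm-studio",
  "vortex.ai.custom.model": "qwen2.5-coder-7b-instruct"
}
```

The same shape works for vLLM or OpenRouter by swapping the base URL, key, and model name.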
## Per-feature provider override
Set a default provider, then override it for specific features (e.g. Anthropic for SQL Agent, Groq for inline completions, Ollama for embeddings).
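A default-plus-overrides setup might look like the following sketch. The key names (`vortex.ai.provider`, `vortex.ai.overrides`, and the feature identifiers) are assumptions for illustration, not confirmed settings:

```json
{
  "vortex.ai.provider": "openai",
  "vortex.ai.overrides": {
    "sqlAgent": "anthropic",
    "inlineCompletions": "groq",
    "embeddings": "ollama"
  }
}
```

Any feature not listed under the overrides falls back to the default provider.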
## Tool-calling support
Agents work best with providers that support native tool calling (OpenAI, Anthropic, Vortex). Local models can still drive agents via prompt-based tool emulation — see Local Runtime & Embeddings.
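The core of prompt-based tool emulation is simple: describe the available tools in the system prompt, ask the model to reply with a JSON object when it wants to call one, then parse that JSON out of the plain-text reply. A minimal sketch (the tool name, prompt wording, and function names here are hypothetical, not VORTΞXHQ's actual implementation):

```python
import json
import re

# Hypothetical tool registry: name -> human-readable description.
TOOLS = {
    "run_sql": 'Execute a SQL query. Arguments: {"query": string}',
}

def build_system_prompt(tools: dict) -> str:
    """Describe the tools and the expected JSON call format."""
    lines = [
        "You may call a tool by replying with ONLY a JSON object:",
        '{"tool": "<name>", "arguments": {...}}',
        "Available tools:",
    ]
    lines += [f"- {name}: {desc}" for name, desc in tools.items()]
    return "\n".join(lines)

def parse_tool_call(reply: str):
    """Return (tool, arguments) if the reply contains a valid JSON
    tool call, else None. Tolerates surrounding prose or code fences."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if not match:
        return None
    try:
        obj = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    if isinstance(obj, dict) and obj.get("tool") in TOOLS:
        return obj["tool"], obj.get("arguments", {})
    return None

# A model reply wrapped in a code fence still parses:
reply = '```json\n{"tool": "run_sql", "arguments": {"query": "SELECT 1"}}\n```'
print(parse_tool_call(reply))  # → ('run_sql', {'query': 'SELECT 1'})
```

Native tool calling is more reliable because the provider enforces the output schema; with emulation, the parser must tolerate malformed or chatty replies, which is why `parse_tool_call` returns `None` rather than raising.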