
AI Providers — Deep Comparison

Pick the right AI provider for the job. VORTΞXHQ supports 7 backends, each with different strengths, costs, and privacy profiles.

Built-in providers

| Provider | Best for | Privacy |
| --- | --- | --- |
| Vortex (built-in) | Zero-config first-time experience. | Routed via vortexhq.dev — encrypted in transit. |
| OpenAI | Tool use, vision, large context. | Your OpenAI key, calls go directly. |
| Anthropic | Long-form reasoning, code editing. | Your Anthropic key. |
| DeepSeek | Cost-effective code generation. | Your DeepSeek key. |
| Groq | Ultra-low latency inference. | Your Groq key. |
| Ollama | 100% local, offline-friendly. | Stays on your machine. |
| Custom | Any OpenAI-compatible endpoint (LM Studio, vLLM, OpenRouter). | Whatever you configure. |
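For the Custom provider, any OpenAI-compatible backend differs only in its base URL and API key. A minimal sketch of what such a request looks like (the localhost URL is LM Studio's common default port, and the model name is illustrative — check your own setup):

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, messages: list):
    """Build the URL, headers, and JSON body for an OpenAI-compatible
    /v1/chat/completions call (LM Studio, vLLM, OpenRouter, etc.)."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body

# Pointing the same helper at a local LM Studio server:
url, headers, body = build_chat_request(
    "http://localhost:1234",   # LM Studio's default port
    "not-needed-locally",      # local servers often ignore the key
    "qwen2.5-coder",           # whatever model you have loaded
    [{"role": "user", "content": "Hello"}],
)
```

Because the wire format is shared, switching between a hosted provider and a local one is just a change of `base_url` and `api_key`.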

Per-feature provider override

Set a default provider, then override it for specific features (e.g. Anthropic for SQL Agent, Groq for inline completions, Ollama for embeddings).
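One way to picture the resolution order — override if set, otherwise fall back to the default (the feature names and dict shape here are illustrative, not VORTΞXHQ's actual settings schema):

```python
# Hypothetical provider settings: one default plus per-feature overrides.
SETTINGS = {
    "default_provider": "openai",
    "overrides": {
        "sql_agent": "anthropic",
        "inline_completion": "groq",
        "embeddings": "ollama",
    },
}

def provider_for(feature: str, settings: dict = SETTINGS) -> str:
    """Return the per-feature override if one exists, else the default."""
    return settings["overrides"].get(feature, settings["default_provider"])
```

Any feature without an explicit override (say, chat) silently uses the default provider.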

Tool-calling support

Agents work best with providers that support native tool calling (OpenAI, Anthropic, Vortex). Local models can still drive agents via prompt-based tool emulation — see Local Runtime & Embeddings.
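Prompt-based tool emulation roughly means instructing the model to emit its tool call as plain JSON, then parsing and dispatching it yourself. A minimal sketch with a hard-coded model reply standing in for real inference (the prompt wording and tool name are illustrative):

```python
import json

# A tiny tool registry; run_sql here is a stub for illustration.
TOOLS = {
    "run_sql": lambda query: f"executed: {query}",
}

SYSTEM_PROMPT = (
    "You have these tools: run_sql(query). "
    'To call one, reply with ONLY JSON: {"tool": "<name>", "args": {...}}'
)

def dispatch(model_reply: str) -> str:
    """Parse the model's JSON tool call and invoke the matching tool."""
    call = json.loads(model_reply)
    return TOOLS[call["tool"]](**call["args"])

# Simulated reply from a local model that followed SYSTEM_PROMPT:
reply = '{"tool": "run_sql", "args": {"query": "SELECT 1"}}'
result = dispatch(reply)  # "executed: SELECT 1"
```

Native tool calling does the same job but with schema enforcement on the provider side, which is why it tends to be more reliable for agents.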

