# AI Engine
VortexHQ integrates 7 AI providers across every module to generate code, queries, commands, docs, and summaries. It also includes an AI Command Center and a floating chat panel.
## Supported Providers
| Provider | Default Model | API Key |
|---|---|---|
| Vortex (Built-in) | auto | Not required — uses your Vortex account |
| OpenAI | gpt-4o | Required |
| Anthropic | claude-sonnet-4-20250514 | Required |
| DeepSeek | deepseek-chat | Required |
| Groq | llama-3.3-70b-versatile | Required |
| Ollama (local) | llama3 | Not required — runs locally |
| Custom | gpt-4o (configurable) | Required — any OpenAI-compatible endpoint |
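The Custom provider accepts any OpenAI-compatible endpoint, meaning a server that exposes the standard `POST /v1/chat/completions` route. A minimal sketch of how such a request could be assembled (the base URL, key, and helper function here are illustrative placeholders, not VortexHQ internals):

```python
import json

def build_chat_request(base_url, api_key, model, prompt,
                       temperature=0.3, max_tokens=2048):
    """Assemble an OpenAI-compatible chat completion request.

    Returns (url, headers, body) without sending anything; the
    payload shape follows the public OpenAI chat completions format.
    """
    url = base_url.rstrip("/") + "/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    })
    return url, headers, body

# Hypothetical self-hosted endpoint:
url, headers, body = build_chat_request(
    "http://localhost:8080", "sk-example", "gpt-4o", "Explain this regex")
```

Because the defaults above mirror the Temperature (0.3) and Max Tokens (2,048) settings described under Configuration, the same request shape works against OpenAI, DeepSeek, Groq, or a local Ollama server in OpenAI-compatibility mode.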
## Configuration
Go to Settings → AI Engine to configure your provider. Each provider can be tuned with:
- API Key — Stored securely (base64-encoded in encrypted config)
- Model — Override the default model
- Base URL — Override for self-hosted or custom endpoints
- Temperature — 0.0 to 1.0 (default: 0.3)
- Max Tokens — 256 to 8,192 (default: 2,048; step: 256)
- Test Connection — Verify your configuration works
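The temperature and token bounds above imply some simple input validation: temperature clamps to the 0.0–1.0 range, and max tokens snaps to a 256-token step between 256 and 8,192. A sketch of that validation (the function names are illustrative, not VortexHQ's actual settings code):

```python
def clamp_temperature(value, lo=0.0, hi=1.0):
    """Clamp temperature into the supported 0.0-1.0 range."""
    return max(lo, min(hi, value))

def snap_max_tokens(value, lo=256, hi=8192, step=256):
    """Round max tokens to the nearest 256-token step, then clamp
    into the supported 256-8,192 range."""
    snapped = round(value / step) * step
    return max(lo, min(hi, snapped))
```

For example, a slider value of 1000 snaps up to 1024 (the nearest multiple of 256), and anything above 8,192 is capped at 8,192.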
## AI Command Center (Home)
The Home screen is VortexHQ's AI command center with 40+ quick actions organized by category:
- General — Explain code, write documentation, convert between formats, generate regex, translate text
- Email — Summarize emails, check for phishing, analyze headers
- API — Generate API requests, create full clusters, analyze responses
- SSH — Generate shell commands, explain errors, create scripts
- SQL — Generate queries, optimize performance, explain execution plans
- FTP — Generate config files, deployment scripts, permission setups
- Tasks — Break down projects, suggest next steps, add tasks by description
Type a prompt and VortexHQ intelligently routes it to the appropriate module with full context awareness. Responses accumulate in a chat history that persists for the session.
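One way such routing could work is a keyword score over the categories above. A toy sketch (the keyword lists and scoring are illustrative assumptions, not the actual routing logic):

```python
# Illustrative keyword lists per module; VortexHQ's real router
# is not documented here, so this is only a plausible sketch.
MODULE_KEYWORDS = {
    "Email": ["email", "inbox", "phishing", "headers"],
    "SQL":   ["query", "sql", "select", "execution plan"],
    "SSH":   ["shell", "terminal", "ssh", "script"],
    "Tasks": ["task", "project", "next steps"],
}

def route_prompt(prompt):
    """Pick the module whose keywords best match the prompt;
    fall back to General when nothing matches."""
    text = prompt.lower()
    scores = {
        module: sum(kw in text for kw in kws)
        for module, kws in MODULE_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "General"
```

A prompt like "Check this email for phishing" would score highest for the Email module, while an unmatched prompt falls through to the General actions.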
## Floating Chat (Cmd+K)
Press Cmd+K from any module to open the universal AI floating chat panel. Features:
- Context-aware — The AI knows which module you're in and can access relevant data (current email, query, terminal output, etc.)
- Fuzzy command matching — Type naturally and VortexHQ matches to the right AI action
- Action execution — Results can trigger module actions (switch view, create task, generate request)
- Token usage display — Shows token count for each request
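Fuzzy command matching of the kind described above can be approximated with standard string similarity. A sketch using Python's `difflib` (the action names are examples, not VortexHQ's real action list, and the cutoff is an arbitrary assumption):

```python
import difflib

# Example action names only; the real list has 40+ entries.
ACTIONS = ["Summarize email", "Generate SQL query",
           "Explain code", "Create task", "Generate regex"]

def match_action(user_input, cutoff=0.4):
    """Return the closest known AI action, or None if nothing is close."""
    lowered = [a.lower() for a in ACTIONS]
    hits = difflib.get_close_matches(
        user_input.lower(), lowered, n=1, cutoff=cutoff)
    if not hits:
        return None
    return ACTIONS[lowered.index(hits[0])]
```

This tolerates typos, so an input like "sumarize emial" still resolves to the "Summarize email" action.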
## Per-Module AI Capabilities
| Module | Capabilities |
|---|---|
| Email | Summarize emails (action items, tone, dates), interactive Q&A chat about email content |
| API Client | Generate single requests from descriptions, generate entire clusters from descriptions, generate clusters from project source code (auto-expands Route::resource), analyze API responses, auto-generate endpoint documentation |
| SSH Terminal | Natural language → shell commands, explain terminal output and errors |
| SQL Client | Natural language → SQL queries (MySQL/PostgreSQL/SQLite-aware with schema context), explain and optimize queries |
| FTP / SFTP | File analysis ("Ask AI about this file"), generate config files (.htaccess, nginx.conf, .env), deployment assistance |
| PHP REPL | Ghost-text inline completions as you type (with 200+ Laravel autocomplete entries), PHP code generation from descriptions |
| Tasks | Break projects into 3–8 sub-tasks, suggest next tasks based on context, add tasks from natural language descriptions |
| General | General text/code explanation, format conversion, documentation generation |
## Token Limits
When using the Vortex (Built-in) provider, AI usage is metered with a 3-window token system (daily, weekly, monthly). The backend automatically caps tokens at 4,000 per request. Requests using your own API key (OpenAI, Anthropic, etc.) bypass all VortexHQ limits entirely.
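The three-window metering described above can be sketched as a simple budget check. Only the 4,000-token per-request cap comes from the text; the window budget values and the class itself are illustrative assumptions, not the backend implementation:

```python
PER_REQUEST_CAP = 4000  # backend cap per request (from the docs)

class TokenMeter:
    """Track Vortex (Built-in) usage against daily/weekly/monthly budgets."""

    def __init__(self, daily, weekly, monthly):
        self.limits = {"daily": daily, "weekly": weekly, "monthly": monthly}
        self.used = {"daily": 0, "weekly": 0, "monthly": 0}

    def allow(self, requested):
        """Cap the request at 4,000 tokens, then grant it only if every
        window still has budget; returns tokens granted (0 = denied)."""
        tokens = min(requested, PER_REQUEST_CAP)
        if any(self.used[w] + tokens > self.limits[w] for w in self.limits):
            return 0
        for w in self.used:
            self.used[w] += tokens
        return tokens
```

Requests made with your own API key would simply skip this check, matching the bypass behavior described above.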