
AI Engine

VortexHQ integrates seven AI providers across every module, so you can generate code, queries, commands, docs, and summaries from anywhere in the app. It also includes an AI Command Center and a floating chat.

Supported Providers

| Provider | Default Model | API Key |
| --- | --- | --- |
| Vortex (Built-in) | auto | Not required — uses your Vortex account |
| OpenAI | gpt-4o | Required |
| Anthropic | claude-sonnet-4-20250514 | Required |
| DeepSeek | deepseek-chat | Required |
| Groq | llama-3.3-70b-versatile | Required |
| Ollama (local) | llama3 | Not required — runs locally |
| Custom | gpt-4o (configurable) | Required — any OpenAI-compatible endpoint |

Configuration

Go to Settings → AI Engine to configure your provider. Each provider can be tuned with:

  • API Key — Stored securely (base64-encoded in encrypted config)
  • Model — Override the default model
  • Base URL — Override for self-hosted or custom endpoints
  • Temperature — 0 to 1.0 (default: 0.3)
  • Max Tokens — 256 to 8,192 (default: 2,048; step: 256)
  • Test Connection — Verify your configuration works
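As a rough illustration of how these settings fit together, the sketch below assembles an OpenAI-compatible chat-completions request from a hypothetical settings object. The field names, the `/chat/completions` path, and the payload shape follow the common OpenAI-compatible convention, not VortexHQ's actual internals:

```python
import base64

# Hypothetical settings object mirroring the fields in Settings → AI Engine.
settings = {
    "api_key": base64.b64encode(b"sk-example-key").decode(),  # stored base64-encoded
    "model": "gpt-4o",
    "base_url": "https://api.openai.com/v1",  # override for self-hosted endpoints
    "temperature": 0.3,   # default
    "max_tokens": 2048,   # default; valid range 256–8,192 in steps of 256
}

def build_chat_request(prompt: str, settings: dict) -> tuple[str, dict, dict]:
    """Assemble an OpenAI-compatible chat-completions request from the settings."""
    url = settings["base_url"].rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": "Bearer " + base64.b64decode(settings["api_key"]).decode(),
        "Content-Type": "application/json",
    }
    payload = {
        "model": settings["model"],
        "messages": [{"role": "user", "content": prompt}],
        "temperature": settings["temperature"],
        "max_tokens": settings["max_tokens"],
    }
    return url, headers, payload

url, headers, payload = build_chat_request("Explain this regex: ^\\d{4}$", settings)
```

Because every provider in the table above (except Anthropic's native API) speaks this same wire format, pointing Base URL at a self-hosted endpoint is usually all the Custom provider needs.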

AI Command Center (Home)

The Home screen is VortexHQ's AI command center with 40+ quick actions organized by category:

  • General — Explain code, write documentation, convert between formats, generate regex, translate text
  • Email — Summarize emails, check for phishing, analyze headers
  • API — Generate API requests, create full clusters, analyze responses
  • SSH — Generate shell commands, explain errors, create scripts
  • SQL — Generate queries, optimize performance, explain execution plans
  • FTP — Generate config files, deployment scripts, permission setups
  • Tasks — Break down projects, suggest next steps, add tasks by description

Type a prompt and VortexHQ routes it to the appropriate module with full context awareness. Responses accumulate in a chat history that persists for the duration of the session.
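To make the routing idea concrete, here is a minimal keyword-scoring sketch. VortexHQ's real router is AI-driven and context-aware; the category names come from the list above, but the keyword lists and scoring are illustrative assumptions:

```python
# Hypothetical keyword lists per category; the real router uses AI, not keywords.
CATEGORY_KEYWORDS = {
    "Email": ["email", "phishing", "inbox", "headers"],
    "SQL": ["sql", "query", "select", "database"],
    "SSH": ["shell", "ssh", "terminal", "script"],
    "API": ["api", "request", "endpoint", "cluster"],
    "Tasks": ["task", "project", "todo"],
}

def route_prompt(prompt: str) -> str:
    """Pick the category whose keywords best match the prompt (fallback: General)."""
    words = prompt.lower().split()
    scores = {
        cat: sum(1 for kw in kws if kw in words)
        for cat, kws in CATEGORY_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "General"
```

For example, "check this email for phishing" would score highest for Email, while a prompt matching no category falls through to General.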

Floating Chat (Cmd+K)

Press Cmd+K from any module to open the universal AI floating chat panel. Features:

  • Context-aware — The AI knows which module you're in and can access relevant data (current email, query, terminal output, etc.)
  • Fuzzy command matching — Type naturally and VortexHQ matches to the right AI action
  • Action execution — Results can trigger module actions (switch view, create task, generate request)
  • Token usage display — Shows token count for each request

Per-Module AI Capabilities

| Module | Capabilities |
| --- | --- |
| Email | Summarize emails (action items, tone, dates), interactive Q&A chat about email content |
| API Client | Generate single requests from descriptions, generate entire clusters from descriptions, generate clusters from project source code (auto-expands Route::resource), analyze API responses, auto-generate endpoint documentation |
| SSH Terminal | Natural language → shell commands, explain terminal output and errors |
| SQL Client | Natural language → SQL queries (MySQL/PostgreSQL/SQLite-aware with schema context), explain and optimize queries |
| FTP / SFTP | File analysis ("Ask AI about this file"), generate config files (.htaccess, nginx.conf, .env), deployment assistance |
| PHP REPL | Ghost-text inline completions as you type (with 200+ Laravel autocomplete entries), PHP code generation from descriptions |
| Tasks | Break projects into 3–8 sub-tasks, suggest next tasks based on context, add tasks from natural language descriptions |
| General | General text/code explanation, format conversion, documentation generation |
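The "schema context" mentioned for the SQL Client can be pictured as bundling the dialect and table definitions into the prompt sent to the model. This is a hypothetical sketch; VortexHQ's actual prompt format is internal:

```python
# Hypothetical sketch of dialect- and schema-aware prompt assembly for NL→SQL.
def build_sql_prompt(question: str, dialect: str, schema: dict[str, list[str]]) -> str:
    schema_lines = "\n".join(
        f"- {table}({', '.join(columns)})" for table, columns in schema.items()
    )
    return (
        f"Dialect: {dialect}\n"
        f"Schema:\n{schema_lines}\n"
        f"Task: write a single {dialect} query.\n"
        f"Question: {question}"
    )

prompt = build_sql_prompt(
    "How many orders were placed last month?",
    "PostgreSQL",
    {"orders": ["id", "customer_id", "placed_at"],
     "customers": ["id", "name"]},
)
```

Including the dialect and schema up front is what lets the model emit queries against your real tables instead of guessing column names.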

Token Limits

When using the Vortex (Built-in) provider, AI usage is metered with a 3-window token system (daily, weekly, monthly). The backend automatically caps tokens at 4,000 per request. Requests using your own API key (OpenAI, Anthropic, etc.) bypass all VortexHQ limits entirely.
