
Local AI Runtime & Embeddings

Run agents and search your codebase entirely offline with a local Ollama runtime and a built-in vector index.

Local Agent Runtime

VORTΞXHQ wraps Ollama (or any compatible local server) with a tool-calling shim, so agents can call tools even with small local models that lack native tool support. Configure the context window, temperature, and per-model routing in Settings → AI → Local Runtime.
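To illustrate what such a shim does, here is a minimal sketch: small models are prompted to emit tool calls as fenced JSON, and the shim parses them back out of the reply. The function name, JSON shape, and `search_code` tool are illustrative assumptions, not VORTΞXHQ's actual API.

```python
import json
import re

def parse_tool_call(reply: str):
    """Extract a JSON tool call from a model reply.

    Small local models often lack native tool-calling, so a shim asks
    them to emit calls as fenced JSON and parses that back out.
    Returns (tool_name, arguments), or None if the reply is plain text.
    """
    match = re.search(r"```json\s*(\{.*?\})\s*```", reply, re.DOTALL)
    if not match:
        return None
    try:
        call = json.loads(match.group(1))
    except json.JSONDecodeError:
        return None
    if "tool" not in call:
        return None
    return call["tool"], call.get("arguments", {})

reply = 'Let me check.\n```json\n{"tool": "search_code", "arguments": {"query": "vector store"}}\n```'
call = parse_tool_call(reply)
```

A reply with no fenced JSON simply returns `None`, so plain-text answers pass through untouched.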

Workspace Indexing

Index a folder once and the agent can search it semantically from then on. Embeddings are computed locally with the configured embedding model and stored in an encrypted vector store inside ~/.vortex/.

  • Per-project ignore patterns (.vortexignore).
  • Incremental re-indexing on file changes.
  • Hybrid search — keyword + vector with rerank.
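The hybrid search step above can be sketched as a weighted blend of keyword overlap and vector similarity. This is an illustrative toy, not VORTΞXHQ's implementation: the vectors are hand-written stand-ins for real embeddings, and the `alpha` weight and scoring functions are assumptions.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, text):
    # Fraction of query tokens that appear in the document.
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_search(query, query_vec, docs, alpha=0.5):
    """Rank docs by alpha * keyword score + (1 - alpha) * vector score."""
    scored = []
    for doc in docs:
        score = (alpha * keyword_score(query, doc["text"])
                 + (1 - alpha) * cosine(query_vec, doc["vec"]))
        scored.append((score, doc["text"]))
    return [text for _, text in sorted(scored, reverse=True)]

# Toy corpus with hand-written 2-d "embeddings".
docs = [
    {"text": "vector store internals", "vec": [1.0, 0.0]},
    {"text": "keyboard shortcuts", "vec": [0.0, 1.0]},
]
ranked = hybrid_search("vector store", [1.0, 0.0], docs)
```

The keyword term keeps exact identifier matches from being drowned out by fuzzy semantic neighbors, which is the usual motivation for hybrid search over pure vector retrieval.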

Privacy

Local runtime + local embeddings = nothing leaves your machine. Ideal for regulated industries, air-gapped environments, or simply faster iteration.

Tip: Use a fast model (llama3.1:8b, qwen2.5-coder:7b) for inline tasks, and a slower one (qwen2.5:32b, deepseek-coder:33b) for deep agent runs.
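The tip above amounts to a small routing rule. A sketch, assuming a hypothetical `pick_model` helper and a token threshold chosen for illustration; the model tags follow Ollama's naming convention and should match whatever is pulled locally:

```python
def pick_model(task_type: str, context_tokens: int = 0) -> str:
    """Route fast models to inline tasks and a larger model to deep runs.

    Also escalates to the larger model when the prompt is big, since
    small models degrade on long contexts. The 8000-token threshold is
    an illustrative assumption, not a product default.
    """
    if task_type == "agent" or context_tokens > 8000:
        return "qwen2.5:32b"
    return "qwen2.5-coder:7b"
```

In practice this mapping lives in Settings → AI → Local Runtime as per-model routing; the function just makes the trade-off explicit.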
