VexNet is an AI agent that reads, writes, searches, browses, and builds other agents. Runs locally with Ollama. Chat via CLI or Telegram. No cloud required.
Installs Python (if needed), Ollama, AI models, and VexNet. Takes 2-5 minutes.
Run in PowerShell as Administrator. Installs Ollama, AI models, and VexNet.
Manual install. You'll need to set up Ollama separately (see setup guide below).
Explore codebases, write new files, make precise edits. Understands project structure and conventions.
Execute shell commands, run tests, build projects. Graduated security controls so you stay in charge.
Web search, fetch pages, and full browser control via headless Chromium for complex research tasks.
Dynamically creates specialist agents for complex tasks. A scraper agent, a docs agent, a test agent — on demand.
Chat with Vex from your phone. Group monitoring with intelligent interjections. Persistent conversation memory.
Uses Ollama for local AI. Your data never leaves your machine. No API keys needed to get started.
Vex has a unique personality that evolves over time. Remembers users, learns preferences, and grows curious about you.
4-level autonomy system, secret redaction, prompt injection detection, workspace sandboxing, and full audit logs.
Ollama runs AI models locally on your machine. The installer handles this automatically, but if you need to install manually:
```
curl -fsSL https://ollama.com/install.sh | sh
```

VexNet needs a language model and an embedding model:
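Once Ollama is available, models are fetched with `ollama pull`. A minimal sketch — the exact model names VexNet's installer uses aren't stated here, so both names below are assumptions; substitute your own:

```shell
# Assumed model names -- check the VexNet docs for the defaults the installer
# actually pulls. The guard keeps this copy-pasteable on machines where
# Ollama is not installed yet.
if command -v ollama >/dev/null 2>&1; then
  ollama pull llama3.2          # a language model (one of the alternatives listed below)
  ollama pull nomic-embed-text  # an embedding model (assumed name)
  status="pulled"
else
  status="ollama-missing"
  echo "Ollama not found; run the install script above first."
fi
```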
You can use any Ollama model. Popular alternatives:
`llama3.2`, `mistral`, `qwen2.5`.
Update the model name in your config file after installing.
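As a sketch, switching to one of those alternatives might look like this in `vex.toml` — the table and key names here are assumptions, since the real config schema isn't documented in this section:

```toml
# Hypothetical keys -- consult VexNet's config reference for the real names.
[model]
name = "mistral"   # switched from the default after running `ollama pull mistral`
```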
Use the one-line installer above, or install with pip:
Open a terminal and run:
That's it! Vex will launch an interactive session. Type a task and watch it work.
Chat with Vex from your phone:
1. Message @BotFather on Telegram, send `/newbot`, and follow the prompts to create your bot.
2. Export the token it gives you:

```
export TELEGRAM_BOT_TOKEN=your_token_here
```

3. Start VexNet in Telegram mode:

```
vex --telegram
```
VexNet looks for config in ./vex.toml or ~/.vex/config.toml.
Key settings:
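A sketch of what a `~/.vex/config.toml` might contain — every table and key below is an assumption for illustration, not a documented name; check the project's config reference:

```toml
# Hypothetical config sketch -- key names are assumptions, not documented.
[model]
name = "llama3.2"                        # which Ollama model to use

[security]
autonomy_level = 2                       # one of the 4 autonomy levels mentioned above

[telegram]
bot_token_env = "TELEGRAM_BOT_TOKEN"     # env var holding the bot token
```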