MindStudio, on your machine
Connect local AI models, edit interfaces and scripts in your IDE, and ship changes instantly — no cloud inference required.
macOS / Linux:
curl -fsSL https://msagent.ai/install-tunnel.sh | bash
npm:
npm install -g @mindstudio-ai/local-model-tunnel
Windows (PowerShell):
irm https://msagent.ai/install-tunnel.ps1 | iex
One command. Full local workflow.
The tunnel is a lightweight background process. It discovers local models, maintains a secure connection to MindStudio Cloud, and routes inference requests to your machine. It also watches your local files for interface and script changes.
Start your provider
Launch Ollama, LM Studio, Stable Diffusion WebUI, or ComfyUI as you normally would.
ollama serve
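The other providers launch just as simply. These are each tool's own standard commands (assuming default installs), not tunnel commands:
lms server start      # LM Studio's bundled CLI server
./webui.sh --api      # AUTOMATIC1111 / Forge, run from the WebUI directory
python main.py        # ComfyUI, run from the ComfyUI directory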
Run the tunnel
One command starts the tunnel process in the background.
mindstudio-local
Models are discovered
The tunnel auto-detects available models from all running providers.
✓ Detected ollama: llama3.2, mistral
✓ Detected lmstudio: phi-4
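If an expected model is missing, you can query a provider directly to see what it is serving. Ollama, for example, lists its local models over its own HTTP API:
curl http://localhost:11434/api/tags    # Ollama's model list endpoint, independent of the tunnel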
Requests route locally
When MindStudio runs a task with a local model, the request travels to your machine and results stream back.
Run the tunnel
The same command that connects your models also starts the file watcher.
mindstudio-local
Edit in your IDE
Open your MindStudio app’s interface HTML/CSS/JS and scripts in your favorite editor.
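Any save to a watched file triggers a sync. As a minimal sketch (the file name here is illustrative, not part of MindStudio):
# append a style tweak to a watched interface file; the watcher picks it up on save
echo 'body { background: #111; }' >> ~/my-app/interface/styles.css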
Changes hot-reload
Saved changes sync to your MindStudio app instantly — no manual deploy step.
✓ Watching: ~/my-app/interface/
→ Change detected, syncing…
✓ Synced in 240ms
Ship when ready
Once you’re happy with the result, your changes are already live in MindStudio.
Your models, your hardware
The tunnel supports four local AI providers out of the box. If it runs on your machine, it works in MindStudio.
Ollama (text)
Any model in your Ollama library — Llama, Mistral, Gemma, Phi, and more.
localhost:11434
LM Studio (text)
Models loaded in LM Studio’s local server with full API compatibility.
localhost:1234
Stable Diffusion WebUI (image)
AUTOMATIC1111 or Forge WebUI — your checkpoints, LoRAs, and workflows.
localhost:7860
ComfyUI (video)
ComfyUI server with video workflow nodes for local video generation.
localhost:8188
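Before starting the tunnel, you can check which providers are actually up by probing the default ports above. A plain-curl sketch, nothing tunnel-specific:
for port in 11434 1234 7860 8188; do
  # curl exits nonzero on connection refused, so each provider reports up or down
  if curl -s -o /dev/null "http://localhost:$port"; then
    echo "port $port: reachable"
  else
    echo "port $port: not running"
  fi
done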
Your data never leaves your machine
All inference runs on your hardware. All editing happens in your IDE. The tunnel only streams results to MindStudio Cloud — your data stays where it belongs.
Data stays local
Sensitive prompts and outputs never transit third-party servers. Inference runs entirely on your machine.
Bring your own GPU
Use the hardware you already own — Mac M-series chips, high-VRAM workstations, or bare-metal servers.
Any model, any format
Not limited to cloud-hosted models. Run fine-tunes, LoRAs, custom checkpoints — whatever your provider supports (see the Ollama sketch below).
Real-time dev workflow
Edit interfaces and scripts locally, see changes instantly in MindStudio. No deploy steps, no waiting.
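Picking up the "any model, any format" point above: a local GGUF fine-tune can be registered with Ollama and then appears to the tunnel like any stock model. A sketch with illustrative file and model names:
echo 'FROM ./my-finetune.gguf' > Modelfile    # Modelfile pointing at a hypothetical local GGUF
ollama create my-finetune -f Modelfile        # register it with Ollama
ollama run my-finetune 'hello'                # sanity-check before the tunnel discovers it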
Start building locally in minutes
Open source. MIT licensed. macOS, Linux, and Windows.
macOS / Linux:
curl -fsSL https://msagent.ai/install-tunnel.sh | bash
npm:
npm install -g @mindstudio-ai/local-model-tunnel
Windows (PowerShell):
irm https://msagent.ai/install-tunnel.ps1 | iex