Local development toolkit

MindStudio, on your machine

Connect local AI models, edit interfaces and scripts in your IDE, and ship changes instantly — no cloud dependency required.

macOS / Linux: curl -fsSL https://msagent.ai/install-tunnel.sh | bash
npm (any platform): npm install -g @mindstudio-ai/local-model-tunnel
Windows (PowerShell): irm https://msagent.ai/install-tunnel.ps1 | iex
Local Models Run Ollama, LM Studio, Stable Diffusion & ComfyUI through MindStudio
Local Development Edit interfaces and scripts in your IDE with live hot-reload

One command. Full local workflow.

The tunnel is a lightweight background process. It discovers local models, maintains a secure connection to MindStudio Cloud, and routes inference requests to your machine. It also watches your local files for interface and script changes.

Local Models
01

Start your provider

Launch Ollama, LM Studio, Stable Diffusion WebUI, or ComfyUI as you normally would.

ollama serve
02

Run the tunnel

One command starts the tunnel process in the background.

mindstudio-local
03

Models are discovered

The tunnel auto-detects available models from all running providers.

Detected ollama: llama3.2, mistral
Detected lmstudio: phi-4
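Under the hood, discovery amounts to querying each provider's local API. A minimal sketch, assuming default ports and using Ollama's `/api/tags` and LM Studio's OpenAI-compatible `/v1/models` endpoints (the tunnel's actual probing logic may differ):

```shell
# List models from each local provider; prints a fallback
# message if a provider is not running on its default port.
list_local_models() {
  echo "ollama:"
  curl -s http://localhost:11434/api/tags || echo "  not running"
  echo "lmstudio:"
  curl -s http://localhost:1234/v1/models || echo "  not running"
}
list_local_models
```

With Ollama up, the first call returns JSON naming every pulled model, which is what the "Detected ollama: …" line reflects.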
04

Requests route locally

When MindStudio runs a task with a local model, the request travels to your machine and results stream back.
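Concretely, a routed request resolves to a call against the provider's local API on your machine. A hedged illustration using Ollama's `/api/generate` endpoint (requires Ollama running with `llama3.2` pulled; MindStudio's internal request shape may differ):

```shell
# One-off local inference against Ollama's generate endpoint.
# "stream": false returns a single JSON object instead of a token stream.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Say hello in one word.", "stream": false}' \
  || echo "ollama is not running on port 11434"
```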

Local Development
01

Run the tunnel

The same command that connects your models also starts the file watcher.

mindstudio-local
02

Edit in your IDE

Open your MindStudio app’s interface HTML/CSS/JS and scripts in your favorite editor.

03

Changes hot-reload

Saved changes sync to your MindStudio app instantly — no manual deploy step.

Watching: ~/my-app/interface/
Change detected, syncing…
Synced in 240ms
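The watcher's job can be sketched with a simple checksum poll. This is only an illustration of the concept, not how mindstudio-local is actually implemented:

```shell
#!/usr/bin/env bash
# Conceptual sketch of a file watcher: checksum the watched tree
# and report when it changes. mindstudio-local handles this for you;
# the sync step itself is represented here by an echo.

# Checksum every file under a directory into one comparable string.
snapshot() { find "$1" -type f -exec cksum {} + 2>/dev/null | sort; }

# Block until the tree under $1 changes once, then "sync".
watch_once() {
  local dir="$1" last now
  last=$(snapshot "$dir")
  while now=$(snapshot "$dir"); [ "$now" = "$last" ]; do
    sleep 1
  done
  echo "Change detected, syncing…"
}
```

Running `watch_once ~/my-app/interface` would print the sync message on the first saved change.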
04

Ship when ready

By the time you’re happy with the result, your changes are already live in MindStudio — there is no separate deploy step.

Your models, your hardware

The tunnel supports four local AI providers out of the box. If it runs on your machine, it works in MindStudio.

Ollama (text)
Any model in your Ollama library — Llama, Mistral, Gemma, Phi, and more.
localhost:11434

LM Studio (text)
Models loaded in LM Studio’s local server with full API compatibility.
localhost:1234

Stable Diffusion WebUI (image)
AUTOMATIC1111 or Forge WebUI — your checkpoints, LoRAs, and workflows.
localhost:7860

ComfyUI (video)
ComfyUI server with video workflow nodes for local video generation.
localhost:8188
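You can check which of the four providers are currently reachable with a quick probe of the default ports listed above (the provider names here are just labels for this sketch):

```shell
# Probe each provider's default local port and print its status.
check_providers() {
  local entry name port
  for entry in ollama:11434 lmstudio:1234 sdwebui:7860 comfyui:8188; do
    name=${entry%%:*}; port=${entry##*:}
    if curl -s --max-time 1 -o /dev/null "http://localhost:${port}/"; then
      echo "${name}: reachable on port ${port}"
    else
      echo "${name}: not running"
    fi
  done
}
check_providers
```

Plain `curl -s` (without `-f`) is used deliberately: any HTTP response, even a 404, proves something is listening on the port.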

Your data never leaves your machine

All inference runs on your hardware. All editing happens in your IDE. The tunnel only streams results to MindStudio Cloud — your data stays where it belongs.

01

Data stays local

Sensitive prompts and outputs never transit external servers. Inference runs entirely on your machine.

02

Bring your own GPU

Use the hardware you already own — Mac M-series chips, high-VRAM workstations, or bare-metal servers.

03

Any model, any format

Not limited to cloud-hosted models. Run fine-tunes, LoRAs, custom checkpoints — whatever your provider supports.

04

Real-time dev workflow

Edit interfaces and scripts locally, see changes instantly in MindStudio. No deploy steps, no waiting.

Start building locally in minutes

Open source. MIT licensed. macOS, Linux, and Windows.

macOS / Linux: curl -fsSL https://msagent.ai/install-tunnel.sh | bash
npm (any platform): npm install -g @mindstudio-ai/local-model-tunnel
Windows (PowerShell): irm https://msagent.ai/install-tunnel.ps1 | iex