Run a Local LLM

Run AI models privately in your browser using WebGPU.


Note: The first load requires downloading the model weights (1–4 GB), so please be patient.

Running LLMs Locally with WebGPU

This tool uses WebAssembly and WebGPU to run AI models directly on your graphics card, from within your browser. No data leaves your device.
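For illustration, here is a minimal sketch of this pattern using the open-source WebLLM library (@mlc-ai/web-llm). The model ID and progress handling are assumptions for the example; this tool's internals may differ.

```ts
// Sketch of in-browser inference with WebLLM (illustrative only).
// Weights are fetched once, cached by the browser, and all
// generation runs on the local GPU via WebGPU.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  // Model ID is illustrative; WebLLM ships a list of prebuilt models.
  const engine = await CreateMLCEngine("Llama-3.2-1B-Instruct-q4f16_1-MLC", {
    initProgressCallback: (report) => console.log(report.text),
  });

  // OpenAI-style chat API; everything executes on-device.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Hello from my browser!" }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```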

System Requirements

You need a modern browser with WebGPU support (Chrome/Edge 113+) and a GPU with at least 4 GB of VRAM for decent performance.
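You can verify support before loading anything with the standard WebGPU API. A sketch in TypeScript (assumes WebGPU type definitions such as the @webgpu/types package are available):

```ts
// Verify WebGPU support and acquire a GPU adapter before
// downloading any model weights.
async function checkWebGPUSupport(): Promise<boolean> {
  if (!("gpu" in navigator)) {
    console.warn("WebGPU is not available; use Chrome/Edge 113+ or equivalent.");
    return false;
  }
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    console.warn("No compatible GPU adapter found.");
    return false;
  }
  // Browsers do not expose VRAM directly; maxBufferSize is only a
  // rough hint of how large a single weight buffer can be.
  console.log("maxBufferSize:", adapter.limits.maxBufferSize);
  return true;
}
```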

Frequently Asked Questions

Is this really private?

Yes. The model weights are downloaded to your browser cache, and all computation happens on your local hardware.
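You can inspect the local footprint yourself with the standard Storage API. A small sketch (the exact figures depend on which models you have loaded):

```ts
// Estimate how much on-device storage the cached weights occupy.
// This data never leaves the machine; clearing site data removes it.
const { usage = 0, quota = 0 } = await navigator.storage.estimate();
console.log(
  `Site storage: ${(usage / 1e9).toFixed(2)} GB used of ` +
  `${(quota / 1e9).toFixed(2)} GB available`
);
```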

Why is the first load slow?

The browser needs to download the model weights (approximately 1–4 GB). Subsequent visits load them from the local cache, which is much faster.
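A sketch of the caching pattern that makes repeat visits fast, using the standard Cache API (the cache name and URL are hypothetical; this tool's own caching may differ):

```ts
// Download model weights through the Cache API so repeat visits
// read from local disk instead of the network.
async function fetchModelWeights(url: string): Promise<ArrayBuffer> {
  const cache = await caches.open("model-weights-v1"); // hypothetical name
  let response = await cache.match(url);
  if (!response) {
    // First visit: download and store a copy (a Response body
    // can only be read once, hence the clone).
    response = await fetch(url);
    await cache.put(url, response.clone());
  }
  return response.arrayBuffer();
}
```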