# Ollama-webui
Created: [[2024_01_15]] 12:35
Tags: #selfhost #Software #Docker
I have really been interested in local LLMs because I'm concerned about our data being used for training (which is similar to why I don't use Google).
So I have been using two tools lately that essentially let me host my own ChatGPT-style interface locally.
![[ollama-webui_interface.png]]
I have been using ollama for a while now as an easy way to run local models on my M1 Max MacBook Pro with 32GB of RAM. Llama was good, but the larger model did not fit into my memory. Recently, though, Mistral came out with the Mixtral model, which I have quantized to fit within my specs, and I have been loving it. Totally speculatively, in my day-to-day usage I find it to be about as useful as ChatGPT 3.5 (the currently free version).
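For reference, getting a quantized Mixtral build onto the machine is just a pull and a run from the terminal. This is a minimal sketch, with the caveat that the exact tag below is my assumption about the ollama library's naming; pick whichever quantization actually fits in 32GB of RAM:

```sh
# Pull a quantized Mixtral variant from the ollama library.
# The tag is an assumption -- check the tags on the library page
# and choose a quantization level that fits your RAM.
ollama pull mixtral:8x7b-instruct-v0.1-q3_K_M

# Chat with it interactively from the command line.
ollama run mixtral:8x7b-instruct-v0.1-q3_K_M
```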
Before, I was running ollama directly and prompting from the command line, which is straightforward and all, but having a UI that you can open in the middle of your browsing is a much more convenient and attractive interface. I'm very excited to have this setup.
## Technical Setup
I am running ollama via the macOS app directly on my machine, and I have ollama-webui running on my (separate) macOS server. By default the webui hits ollama on the machine it runs on, but the server does not have enough RAM for some models, so I have pointed it at my MacBook instead (the two machines are on the same network). To make that work, I had to start ollama with `OLLAMA_HOST=10.0.0.x ollama serve` so that it binds to that network interface and is reachable from the other machine. The server runs ollama-webui with a Docker Compose file like this:
```yaml
version: '3.8'

services:
  ollama-webui:
    image: ghcr.io/ollama-webui/ollama-webui:main
    container_name: ollama-webui
    volumes:
      - ollama-webui:/app/backend/data
    restart: unless-stopped
    extra_hosts:
      - host.docker.internal:host-gateway

volumes:
  ollama-webui:
```
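One thing the file above glosses over is how the webui reaches ollama on my MacBook and how I reach the webui itself from a browser. Here is a minimal sketch of the two additions that go under the `ollama-webui:` service; note that `OLLAMA_API_BASE_URL` and container port 8080 are my assumptions about how the ollama-webui image is wired, so double-check the project's README:

```yaml
    # Additions under services -> ollama-webui in the compose file above.
    ports:
      # Serve the webui on the server's port 3000 (assumed container port 8080).
      - "3000:8080"
    environment:
      # Point the webui at the ollama instance on my MacBook rather than localhost.
      # 10.0.0.x is the MacBook's LAN address; 11434 is ollama's default port.
      # The variable name is an assumption based on the ollama-webui docs.
      - OLLAMA_API_BASE_URL=http://10.0.0.x:11434/api
```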
So it ends up looking something like this:
![[ollama_diagram.excalidraw.svg]]
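To sanity-check that the `OLLAMA_HOST` binding worked, a quick request from the server against ollama's API on the MacBook is enough (11434 is ollama's default port, and the IP is the same placeholder as above):

```sh
# Run from the macOS server: lists the models the MacBook's ollama is serving.
curl http://10.0.0.x:11434/api/tags
```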