Posted: 2024-03-13 16:21:36 by Alasdair Keyes
I've been playing about with local LLM (Large Language Model) AIs.
I knocked together this docker-compose.yml
file to help people get started with Ollama and the Open-WebUI front-end, so you can have the "joy" of AI, but locally.
It's available as a Gitlab snippet at https://gitlab.com/-/snippets/3687211, or you can copy and paste it from below.
---
# Created by Alasdair Keyes (https://www.akeyes.co.uk)
# * `docker-compose up`
# * Visit http://127.0.0.1:3000 to create account and login
# * Click 'Select a model'
# * Enter the name of the model to use; click the link on the page to see the full list. `llama2` or `llama2-uncensored` are suitable first choices.
# * Chat
version: '3'
services:
  ollama:
    image: "ollama/ollama"
    volumes:
      - ollama-data:/root/.ollama
    # Uncomment ports to allow access to ollama API from the host
    # ports:
    #   - "127.0.0.1:11434:11434"
  open-webui:
    image: "ghcr.io/open-webui/open-webui:main"
    depends_on:
      - ollama
    ports:
      - "127.0.0.1:3000:8080"
    environment:
      - "OLLAMA_BASE_URL=http://ollama:11434"
    volumes:
      - open-webui-data:/app/backend/data

volumes:
  ollama-data:
  open-webui-data:
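Once the stack is up, you can also pull and manage models from the command line rather than through the web UI. A quick sketch, assuming you're using the newer `docker compose` plugin (swap in `docker-compose` for the standalone binary) and the `ollama` service name from the file above:

# Pull a model inside the running ollama container
docker compose exec ollama ollama pull llama2

# List the models downloaded so far
docker compose exec ollama ollama list

Models pulled this way show up in Open-WebUI's model selector too, since both talk to the same Ollama instance.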
If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz