Step 1: Install Docker Desktop on your machine (if you haven’t already)
Visit https://www.docker.com/products/docker-desktop/ and grab the installer for your OS.
Why use Docker? Instead of wasting time writing 3 separate articles that handle the nuances of each OS (macOS, Linux, or Windows), we use one source of truth — a universal environment defined in a Dockerfile — to get everything set up and running the same way everywhere.
^ This is a simplified explanation. I may revisit this when I have a better answer.
Step 2: In your terminal, paste and enter this command to get Ollama installed on your system:
docker pull ollama/ollama
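If you want a quick sanity check that the download worked, you can list your local images and look for the ollama/ollama repository:

docker images ollama/ollama

You should see a row with the image name, tag, and size. If the list is empty, re-run the pull command above.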
Step 3: Now that the image is downloaded, let’s start a container by entering this:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
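This starts Ollama’s server and exposes its API on port 11434, but the container doesn’t ship with a model yet. Here’s a sketch of pulling a model and chatting with it — llama3 is just an example model name, and any model from the Ollama library works the same way:

# Open an interactive chat session inside the running container.
# Ollama downloads the model on first use, so the first run may take a while.
docker exec -it ollama ollama run llama3

# Or hit the HTTP API directly from your host machine:
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "hey!"
}'

Type your messages at the prompt (or POST them to the API) and the model responds entirely offline.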
Boom! You can now communicate with one of the smartest LLMs on the planet OFFLINE, right on your computer. Welcome to the club. Perhaps you’ll ask something more informative than just ‘hey!’
Bonus Points! Want a ChatGPT-style web interface on top of your local Ollama? Run Open WebUI alongside it:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
Then open http://localhost:3000 in your browser, select the Ollama model you pulled above, and chat away!
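Both containers run in the background (that’s the -d flag), so they’ll keep using memory until you stop them. When you’re done for the day, a quick way to shut them down and bring them back later:

# Stop both containers (your models and chats persist in the named volumes)
docker stop open-webui ollama

# Start them again whenever you want to pick up where you left off
docker start ollama open-webui

Because the data lives in the ollama and open-webui volumes, stopping and restarting the containers won’t lose your downloaded models or chat history.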