AI & Self-Hosting

Running DeepSeek R1 on My Laptop CPU (No GPU)

Running DeepSeek on a CPU with Ollama and Docker

Note: everything written here was generated with LLMs (OpenAI and DeepSeek).


Introduction to DeepSeek

DeepSeek R1 is a family of open large language models built for reasoning and natural language tasks, and like most LLMs it is usually run on GPUs to accelerate computation. However, not everyone has access to high-performance GPUs, and the smaller DeepSeek models run acceptably on CPU-only systems. In this blog post, I'll demonstrate how to run DeepSeek on a self-hosted server, specifically an 11th Gen Intel i5 laptop CPU. We'll use Ollama to manage and serve the model and Docker for containerized deployment, keeping the setup simple and reproducible. Whether you're exploring AI for personal projects or lightweight applications, this guide will help you make the most of your hardware.


Installing Docker on Linux, macOS, and Windows

Docker is a powerful tool for containerization, making it easy to run and deploy applications in isolated environments. Here's how to install Docker on the three major operating systems.


1. Installing Docker on Linux

For Ubuntu, Debian, and similar distributions:

Step 1: Update your system

Bash
sudo apt update
sudo apt upgrade -y

Step 2: Install required dependencies

Bash
sudo apt install -y ca-certificates curl gnupg

Step 3: Add Docker's official GPG key and repository

Bash
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Step 4: Install Docker

Bash
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Step 5: Start Docker and enable it on boot

Bash
sudo systemctl start docker
sudo systemctl enable docker

Step 6: Verify installation

Bash
docker --version

2. Installing Docker on macOS

Step 1: Download Docker Desktop

  • Visit the Docker Desktop for macOS page.
  • Download the appropriate installer for Intel or Apple Silicon (M1/M2) chips.

Step 2: Install Docker

  • Open the downloaded .dmg file.
  • Drag the Docker icon into the Applications folder.

Step 3: Start Docker

  • Launch Docker from the Applications folder.
  • Follow the on-screen instructions to complete the setup.

Step 4: Verify installation

Open a terminal and run:

Bash
docker --version

3. Installing Docker on Windows

Step 1: Download Docker Desktop

  • Visit the Docker Desktop for Windows page.
  • Download the installer.

Step 2: Install Docker

  • Run the downloaded .exe file.
  • Follow the installation wizard.
  • During the installation, ensure the option Enable WSL 2 features is selected (required for Windows 10/11).

Step 3: Start Docker

  • Launch Docker Desktop from the Start Menu.
  • Sign in with your Docker Hub account or create one (optional).

Step 4: Verify installation

Open PowerShell or Command Prompt and run:

PowerShell
docker --version

Post-Installation Tips

Add Your User to the Docker Group (Linux):

Bash
sudo usermod -aG docker $USER

Log out and back in to apply changes.

Test Docker Installation:

Run a test container:

Bash
docker run hello-world

Verify Docker Compose (included as a plugin in recent Docker installations):

Bash
docker compose version

Install a Frontend for the LLM

After setting up Docker and Ollama, install a frontend like Chatbox.ai or Open WebUI for a user-friendly chat interface.

Open-WebUI Interface (screenshot)
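As an alternative to separate docker run commands, Ollama and Open WebUI can be started together with Docker Compose. The sketch below is a minimal, unofficial example: the service and volume names are my own choices, while ollama/ollama and ghcr.io/open-webui/open-webui:main are the images published by each project, and OLLAMA_BASE_URL is the variable Open WebUI reads to locate the Ollama server.

```yaml
services:
  ollama:
    image: ollama/ollama            # official Ollama image, serves on port 11434
    volumes:
      - ollama:/root/.ollama        # persist downloaded models between restarts
    ports:
      - "11434:11434"
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # point the UI at the ollama service
    ports:
      - "3000:8080"                 # UI reachable at http://localhost:3000
    depends_on:
      - ollama
volumes:
  ollama:
```

Start both with docker compose up -d, then open http://localhost:3000 in a browser.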


Installing Ollama

Ollama is a tool for running large language models (LLMs) locally. It simplifies model management and allows running advanced AI models on your hardware.

1. Installing Ollama on macOS

Here's how to install Ollama on macOS:

Install Ollama via Homebrew:

Bash
brew install ollama

Start the Ollama service:

Bash
ollama serve

Verify Installation:

Bash
ollama --version

2. Installing Ollama on Linux and Windows

Ollama also supports Linux and Windows natively. On Linux, use the official install script:

Bash
curl -fsSL https://ollama.com/install.sh | sh

On Windows, download and run the installer from the Ollama official site. Ollama can also run inside Docker using the official ollama/ollama image.


Downloading and Running Different DeepSeek LLMs

Once Ollama is installed, you can easily install and run models like DeepSeek.

1. Install a Model

To install a model, use the ollama run command. This will pull the model if it's not already downloaded. For example, to install and run a DeepSeek model:

Bash
ollama run deepseek-r1:8b

(Replace 8b with the desired model size)

2. List Available Models

To see all installed models:

Bash
ollama list

3. Run a Model

To use a specific installed model:

Bash
ollama run deepseek-r1:8b

4. Managing Models

Delete a Model: If you need to remove a model to free up space:

Bash
ollama rm deepseek-r1:8b

5. Testing and Using DeepSeek LLMs

After running a model, you can chat with it directly in the terminal: type a prompt and press Enter to see the model's response. Type /bye to exit the session.
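Beyond the interactive terminal, Ollama also exposes a REST API on port 11434, which is handy for scripting. The sketch below uses only the Python standard library and assumes a local Ollama server with the model already pulled; ask and build_generate_request are helper names of my own, while /api/generate is Ollama's documented endpoint.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default address

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    # stream=False asks for one complete JSON reply instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return its reply."""
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, something like print(ask("deepseek-r1:8b", "Explain Docker in one sentence.")) prints the model's answer.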

DeepSeek Models Available

DeepSeek provides multiple models optimized for various tasks. Common versions include:

Bash
# 1.5B version (smallest):
ollama run deepseek-r1:1.5b

# 8B version:
ollama run deepseek-r1:8b

# 14B version:
ollama run deepseek-r1:14b

# 32B version:
ollama run deepseek-r1:32b

# 70B version (biggest/smartest):
ollama run deepseek-r1:70b


Conclusion

In conclusion, running DeepSeek on an 11th Gen Intel i5 laptop CPU proves to be a practical option for lightweight AI workloads. With the 8B model, the system generates roughly 1.5-2 words per second, which is workable for small-scale applications. While inference pushes the CPU to around 80-90% utilization, performance is stable and reliable, demonstrating that even modest hardware can run advanced language models when paired with tools like Ollama and Docker.


Further Reading

For a comparison of the available DeepSeek models, see: https://huggingface.co/deepseek-ai/DeepSeek-V3


Tags: diy, 2025, devops, homeserver, docker