Installation

scmd works offline by default, using llama.cpp and local Qwen models. This guide covers installation on all supported platforms and the methods available for each.

Quick Install

Choose the installation method that works best for you:

Homebrew

The easiest way to install on macOS and Linux:

# Add the scmd tap
brew tap scmd/tap

# Install scmd
brew install scmd

# Verify installation
scmd --version

# Install llama.cpp for offline usage
brew install llama.cpp

Homebrew automatically:

  • Installs the binary to your PATH
  • Adds shell completions for bash, zsh, and fish
  • Manages updates via brew upgrade scmd
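
To see exactly what the formula installed (the binary plus completion files), you can list its contents:

brew list scmd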

npm

Works on any platform with Node.js:

# Install globally
npm install -g scmd-cli

# Verify installation
scmd --version

# Install llama.cpp for offline usage
# macOS:
brew install llama.cpp
# Linux: build from source (see below)

The npm package:

  • Downloads the correct binary for your platform
  • Automatically adds scmd to your PATH
  • Works on macOS, Linux, and Windows
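
If scmd isn't found after installing, a quick way to check where npm placed the binary (generic npm diagnostics, nothing scmd-specific):

# Global install prefix; binaries land in <prefix>/bin
npm prefix -g

# Confirm the package is installed globally
npm ls -g scmd-cli

# Confirm the shell resolves scmd
command -v scmd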

Install Script

Universal installer for Unix-like systems:

# Using curl (recommended)
curl -fsSL https://scmd.sh/install.sh | bash

# Using wget
wget -qO- https://scmd.sh/install.sh | bash

# Verify installation
scmd --version

The install script:

  • Auto-detects your OS and architecture
  • Verifies checksums for security
  • Installs to /usr/local/bin or ~/.local/bin
  • Sets up shell completions

Custom installation:

# Install to custom directory
curl -fsSL https://scmd.sh/install.sh | SCMD_INSTALL_DIR=$HOME/bin bash

# Install without sudo (user-local)
curl -fsSL https://scmd.sh/install.sh | SCMD_NO_SUDO=true bash

# Install specific version
curl -fsSL https://scmd.sh/install.sh | SCMD_VERSION=v1.0.0 bash

Linux Packages

Native packages for Debian, Red Hat, and Alpine:

Debian/Ubuntu (apt):

# Download and install
wget https://github.com/scmd/scmd/releases/latest/download/scmd_VERSION_linux_amd64.deb
sudo dpkg -i scmd_VERSION_linux_amd64.deb

# Or use apt for dependency resolution
sudo apt install ./scmd_VERSION_linux_amd64.deb

# Verify
scmd --version

Red Hat/Fedora/CentOS (rpm):

# Download and install
wget https://github.com/scmd/scmd/releases/latest/download/scmd_VERSION_linux_amd64.rpm

# Fedora/RHEL 8+
sudo dnf install scmd_VERSION_linux_amd64.rpm

# CentOS 7/RHEL 7
sudo yum install scmd_VERSION_linux_amd64.rpm

# Verify
scmd --version

Alpine Linux (apk):

wget https://github.com/scmd/scmd/releases/latest/download/scmd_VERSION_linux_amd64.apk
sudo apk add --allow-untrusted scmd_VERSION_linux_amd64.apk

Linux packages include:

  • Binary in /usr/bin/scmd
  • Shell completions (bash, zsh, fish)
  • Integration with system package manager
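
To confirm what a package installed, you can list its files with the system package manager:

# Debian/Ubuntu
dpkg -L scmd

# Fedora/RHEL
rpm -ql scmd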

Pre-built Binaries

Download pre-built binaries from GitHub:

  1. Visit GitHub Releases

  2. Download the archive for your platform:
     • macOS (Intel): scmd_VERSION_macOS_amd64.tar.gz
     • macOS (Apple Silicon): scmd_VERSION_macOS_arm64.tar.gz
     • Linux (x64): scmd_VERSION_linux_amd64.tar.gz
     • Linux (ARM64): scmd_VERSION_linux_arm64.tar.gz
     • Windows (x64): scmd_VERSION_windows_amd64.zip

  3. Extract and install:

    # macOS/Linux
    tar -xzf scmd_VERSION_macOS_arm64.tar.gz
    sudo mv scmd /usr/local/bin/

    # Windows (PowerShell)
    Expand-Archive scmd_VERSION_windows_amd64.zip
    # Add the extracted folder to PATH

  4. Verify checksums (recommended):

    wget https://github.com/scmd/scmd/releases/download/v1.0.0/checksums.txt
    shasum -a 256 -c checksums.txt 2>&1 | grep scmd

Build from Source

For developers or custom builds:

# Prerequisites: Go 1.24 or later

# Clone the repository
git clone https://github.com/scmd/scmd
cd scmd

# Build using Makefile
make build

# Install to /usr/local/bin
sudo make install

# Or install to $GOPATH/bin
make install-go

# Or build with Go directly
go build -o scmd ./cmd/scmd

# Verify
./scmd --version
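
If the module path matches the GitHub repository (an assumption, not confirmed here), a clone-free install with the Go toolchain may also work:

# Installs to $GOPATH/bin (or $HOME/go/bin); the path assumes the module is named after the repo
go install github.com/scmd/scmd/cmd/scmd@latest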

Prerequisites

llama.cpp (for offline usage)

scmd requires llama.cpp for offline inference:

macOS:

brew install llama.cpp

# Verify
which llama-server
llama-server --version

Linux:

# Ubuntu/Debian - from package manager (if available)
sudo apt install llama-cpp

# Or build from source (recommended for latest version)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build && cd build

# For NVIDIA GPU support (newer llama.cpp releases use -DGGML_CUDA=ON instead)
cmake .. -DLLAMA_CUDA=ON

# For CPU only, run instead:
# cmake ..

cmake --build . --config Release
sudo cp bin/llama-server /usr/local/bin/

# Verify
which llama-server
llama-server --version

Windows:

Download pre-built binaries or build from source:

  1. Visit llama.cpp releases
  2. Download the Windows binaries
  3. Add the extracted folder to your PATH (see the PowerShell example below)
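
For example, to make the extracted binaries visible to PowerShell (the folder below is illustrative; substitute wherever you extracted them):

# Prepend for the current session only
$env:Path = "C:\llama.cpp\bin;$env:Path"

# Or persist it for your user account
[Environment]::SetEnvironmentVariable("Path", "C:\llama.cpp\bin;" + [Environment]::GetEnvironmentVariable("Path", "User"), "User")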

Or build with CMake:

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build
cd build
cmake ..
cmake --build . --config Release

# Binaries land in build/bin (build/bin/Release with the Visual Studio generator);
# add that folder to PATH as shown above

Post-Installation

1. Verify Installation

# Check scmd version
scmd --version

# Check available backends
scmd backends

Expected output:

Available backends:

✓ llamacpp     qwen2.5-1.5b
✗ ollama       (not running)
✗ openai       (not configured)

2. First Run (Model Download)

On first use, scmd will automatically download the default model (~1.0GB):

scmd /explain "what is a channel in Go?"

Output:

[INFO] First run detected
[INFO] Downloading qwen2.5-1.5b model (1.0 GB)...
[INFO] Progress: ████████████████████ 100%
[INFO] Model downloaded to ~/.scmd/models/qwen2.5-1.5b-q4_k_m.gguf
[INFO] Starting llama-server...

A channel in Go is a typed conduit through which you can send
and receive values with the channel operator <-...

3. Set Up Shell Completions (Optional)

Enable tab completion for scmd commands:

Bash:

# Generate completion script
scmd completion bash > /tmp/scmd-completion.bash

# Install for current user
mkdir -p ~/.bash_completion.d
mv /tmp/scmd-completion.bash ~/.bash_completion.d/scmd

# Or install system-wide (sudo must apply to the write, not just the command)
scmd completion bash | sudo tee /etc/bash_completion.d/scmd > /dev/null

# Reload
source ~/.bashrc

Zsh:

# Enable the completion system
echo "autoload -U compinit; compinit" >> ~/.zshrc

# Install completion
scmd completion zsh > "${fpath[1]}/_scmd"

# Reload
source ~/.zshrc

Fish:

# Install completion
scmd completion fish > ~/.config/fish/completions/scmd.fish

# Reload
source ~/.config/fish/config.fish

Directory Structure

scmd uses the following directory structure:

~/.scmd/
├── config.yaml          # Configuration file
├── slash.yaml           # Slash command mappings
├── repos.json           # Repository list
├── models/              # Downloaded GGUF models
│   ├── qwen2.5-1.5b-q4_k_m.gguf
│   └── qwen2.5-3b-q4_k_m.gguf
├── commands/            # Installed command specs
│   ├── git-commit.yaml
│   └── explain.yaml
└── cache/               # Cached manifests
    └── official/
        └── manifest.yaml

XDG Base Directory Support

scmd respects XDG environment variables if set:

export XDG_CONFIG_HOME=~/.config
export XDG_DATA_HOME=~/.local/share
export XDG_CACHE_HOME=~/.cache

# scmd will use:
# - $XDG_CONFIG_HOME/scmd/ for config
# - $XDG_DATA_HOME/scmd/ for models and data
# - $XDG_CACHE_HOME/scmd/ for cache

Or customize the data directory:

export SCMD_DATA_DIR=/path/to/custom/dir
scmd /explain "test"

Updating scmd

Homebrew:

brew upgrade scmd

npm:

npm update -g scmd-cli

Install script:

# Re-run the install script
curl -fsSL https://scmd.sh/install.sh | bash

Linux packages:

# Download the new package from GitHub Releases, then reinstall:
# Debian/Ubuntu
sudo apt install ./scmd_VERSION_linux_amd64.deb

# Fedora/RHEL
sudo dnf install scmd_VERSION_linux_amd64.rpm

From source:

cd scmd
git pull origin main
make build
sudo make install

Uninstalling scmd

Homebrew:

brew uninstall scmd

# Remove data (optional)
rm -rf ~/.scmd

npm:

npm uninstall -g scmd-cli

# Remove data (optional)
rm -rf ~/.scmd

Install script:

# Use the uninstall script
curl -fsSL https://scmd.sh/uninstall.sh | bash

# Or manually
sudo rm /usr/local/bin/scmd
rm -rf ~/.scmd

Linux packages:

# Debian/Ubuntu
sudo apt remove scmd

# Fedora/RHEL
sudo dnf remove scmd

# Remove data (optional)
rm -rf ~/.scmd

Troubleshooting

For detailed troubleshooting, see the Troubleshooting Guide.

Common Issues

Command not found: scmd

Issue: Shell can't find the scmd binary.

Solution: Add scmd's installation directory to PATH:

# For ~/.local/bin
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc

# For Homebrew on Apple Silicon
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

llama-server not found

Issue: Offline functionality requires llama.cpp.

Solution: Install llama.cpp (see the Prerequisites section above).
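
To confirm whether your shell can see the binary:

command -v llama-server || echo "llama-server not on PATH"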

Model download failed

Issue: Network issues or firewall restrictions.

Solution:

# Check network
curl -I https://huggingface.co

# Manually download model
scmd models pull qwen2.5-1.5b

# Use alternative model
scmd models pull qwen3-4b

Permission denied

Issue: Binary is not executable.

Solution:

chmod +x /path/to/scmd
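
To inspect the current permissions before changing them:

ls -l "$(command -v scmd)"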

Optional: Additional LLM Backends

Ollama

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull qwen2.5-coder:1.5b

# Start Ollama server
ollama serve

# Use with scmd
scmd -b ollama /explain main.go

OpenAI

# Set API key
export OPENAI_API_KEY=sk-...

# Use with scmd
scmd -b openai -m gpt-4o-mini /review code.py

Together.ai (Free Tier Available)

# Get API key from https://together.ai
export TOGETHER_API_KEY=...

# Use with scmd
scmd -b together /explain main.go

Groq (Free Tier Available)

# Get API key from https://groq.com
export GROQ_API_KEY=gsk_...

# Use with scmd
scmd -b groq -m llama-3.1-8b-instant /review code.py

Getting Help

For a detailed installation guide with platform-specific instructions and advanced options, see INSTALL.md in the repository.