Install this package:
emerge -a sci-ml/ollama-bin
If the package is masked, you can unmask it using the autounmask tool or standard emerge options:
autounmask sci-ml/ollama-bin
Alternatively, let emerge write the required keyword changes for you, then review and merge them with dispatch-conf (or etc-update):
emerge --autounmask-write -a sci-ml/ollama-bin
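If you prefer to keep the keyword change explicit, the same result can be done by hand. A minimal sketch, assuming the standard Portage directory layout (the file name `ollama-bin` under package.accept_keywords is an arbitrary choice):

```shell
# Accept the testing keyword for this package only (run as root).
echo "sci-ml/ollama-bin ~amd64" >> /etc/portage/package.accept_keywords/ollama-bin

# Then install as usual.
emerge -a sci-ml/ollama-bin
```

This scopes the unmask to a single package instead of loosening ACCEPT_KEYWORDS globally.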
| Version | EAPI | Keywords | Slot |
|---|---|---|---|
| 0.20.5 | 8 | ~amd64 ~arm64 | 0 |
<pkgmetadata>
	<maintainer type="person">
		<email>lucascs@proton.me</email>
		<name>Lucas C.S.</name>
	</maintainer>
	<longdescription lang="en">
		Ollama is a tool for running large language models (LLMs) locally on
		your machine. It provides a simple interface to download, run, and
		manage models like Llama 3.2, Mistral, Gemma, and many others.

		This is a binary distribution package that installs pre-built binaries
		from the official Ollama releases. The binaries are provided under the
		MIT license and include GPU acceleration support for both NVIDIA (CUDA)
		and AMD (ROCm) graphics cards.

		Key features:
		- Easy model management with pull, push, and create commands
		- Built-in API server for programmatic access
		- GPU acceleration support (CUDA and ROCm)
		- Efficient memory management with automatic model loading/unloading
		- Support for multiple models and concurrent requests
		- Compatible with OpenAI API format

		Models are stored in /var/lib/ollama and can range from 2GB
		(3B parameters) to 40GB+ (70B parameters) in size. GPU acceleration
		significantly improves inference speed but requires compatible hardware.

		Security Note: This package installs pre-compiled binaries. Security
		hardening features (ASLR, PIE, stack protections) depend on upstream's
		build configuration. The service runs as a dedicated 'ollama' user with
		restricted permissions for defense in depth.
	</longdescription>
	<slots>
		<subslots>
			Package does not use subslots as it is a self-contained binary
			distribution. All dependencies are either bundled or runtime-only.
		</subslots>
	</slots>
	<use>
		<flag name="cuda">
			Enable NVIDIA CUDA GPU acceleration support. Requires compatible
			NVIDIA GPU (compute capability 6.0+, Pascal architecture or newer)
			and nvidia-cuda-toolkit. Significantly improves inference
			performance for large models.
		</flag>
		<flag name="rocm">
			Enable AMD ROCm GPU acceleration support. Requires compatible AMD
			GPU (Radeon RX 6000 series or newer, or Radeon VII) and ROCm
			libraries. May require HSA_OVERRIDE_GFX_VERSION environment
			variable for optimal compatibility. This is experimental and not
			all GPU models are supported.
		</flag>
	</use>
	<upstream>
		<changelog>https://github.com/ollama/ollama/releases</changelog>
		<doc lang="en">https://github.com/ollama/ollama/tree/main/docs</doc>
		<bugs-to>https://github.com/ollama/ollama/issues</bugs-to>
		<remote-id type="github">ollama/ollama</remote-id>
	</upstream>
</pkgmetadata>
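The metadata above mentions a built-in API server. Assuming the daemon is already running with stock Ollama defaults (it listens on localhost port 11434), a quick smoke test might look like this; the model name is just an example from the description above:

```shell
# Download a model, then query the local API endpoint once.
ollama pull llama3.2
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Why is the sky blue?", "stream": false}'
```

Expect a JSON response containing the generated text; if the connection is refused, the ollama service is not running.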
Manage flags for this package:
euse -i <flag> -p sci-ml/ollama-bin   # show information about the flag
euse -E <flag> -p sci-ml/ollama-bin   # enable the flag for this package
euse -D <flag> -p sci-ml/ollama-bin   # disable the flag for this package
| Flag | Description | 0.20.5 |
|---|---|---|
| cuda | Enable NVIDIA CUDA GPU acceleration support. Requires compatible NVIDIA GPU (compute capability 6.0+, Pascal architecture or newer) and nvidia-cuda-toolkit. Significantly improves inference performance for large models. | ✓ |
| rocm | Enable AMD ROCm GPU acceleration support. Requires compatible AMD GPU (Radeon RX 6000 series or newer, or Radeon VII) and ROCm libraries. May require HSA_OVERRIDE_GFX_VERSION environment variable for optimal compatibility. This is experimental and not all GPU models are supported. | ✓ |
| systemd | Support systemd | ✓ |
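Besides euse, a flag can be enabled by editing package.use directly. A sketch for an NVIDIA card, assuming the standard Portage layout (the file name `ollama-bin` is an arbitrary choice; use `rocm` instead of `cuda` for AMD):

```shell
# Enable CUDA acceleration for this package only (run as root).
echo "sci-ml/ollama-bin cuda" >> /etc/portage/package.use/ollama-bin

# Rebuild/reinstall so the flag change takes effect.
emerge -a sci-ml/ollama-bin
```

Per the flag descriptions above, cuda requires a Pascal-or-newer NVIDIA GPU with nvidia-cuda-toolkit, while rocm is experimental and may need HSA_OVERRIDE_GFX_VERSION set.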
| Type | File | Size | Versions |
|---|---|---|---|
| DIST | ollama-bin-0.20.5-amd64.tar.zst | 2051850319 bytes (≈1.9 GiB) | 0.20.5 |
| DIST | ollama-bin-0.20.5-arm64.tar.zst | 1324342896 bytes (≈1.2 GiB) | 0.20.5 |
| DIST | ollama-bin-0.20.5-rocm.tar.zst | 1039578711 bytes (≈0.97 GiB) | 0.20.5 |