Install this package:
emerge -a sci-ml/ollama
If the package is masked, you can unmask it using the autounmask tool or standard emerge options:
autounmask sci-ml/ollama
Or alternatively:
emerge --autounmask-write -a sci-ml/ollama
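With `--autounmask-write`, Portage stages the required keyword/unmask entries under `/etc/portage` but does not apply them by itself; they must be reviewed and merged before the install can proceed. A typical session might look like this (the exact staged keyword line depends on your architecture):

```shell
# Stage the required keyword/USE changes (asks for confirmation first)
emerge --autounmask-write -a sci-ml/ollama

# Review and merge the staged changes into /etc/portage/package.accept_keywords
# (etc-update works here as well)
dispatch-conf

# Re-run the install now that the package is unmasked
emerge -a sci-ml/ollama
```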
| Version | EAPI | Keywords | Slot |
|---|---|---|---|
| 0.20.5 | 8 |  | 0 |
<pkgmetadata>
  <maintainer type="project">
    <email>bentoo@protonmail.com</email>
    <name>Bentoo Project</name>
  </maintainer>
  <longdescription lang="en">
    Ollama is a tool for running large language models locally. It supports
    models like Llama 3, Mistral, Gemma, and many others. Ollama bundles model
    weights, configuration, and data into a single package, defined by a
    Modelfile, and optimizes setup and configuration details, including GPU
    usage.
  </longdescription>
  <use>
    <flag name="blas">Enable BLAS acceleration for CPU inference</flag>
    <flag name="cuda">Enable NVIDIA CUDA GPU acceleration</flag>
    <flag name="mkl">Use Intel MKL instead of generic BLAS</flag>
    <flag name="rocm">Enable AMD ROCm/HIP GPU acceleration</flag>
    <flag name="vulkan">Enable Vulkan GPU acceleration</flag>
  </use>
  <upstream>
    <doc>https://github.com/ollama/ollama/blob/main/docs/README.md</doc>
    <bugs-to>https://github.com/ollama/ollama/issues</bugs-to>
    <remote-id type="github">ollama/ollama</remote-id>
  </upstream>
</pkgmetadata>
Manage flags for this package:
euse -i <flag> -p sci-ml/ollama
euse -E <flag> -p sci-ml/ollama
euse -D <flag> -p sci-ml/ollama
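`euse` edits the Portage configuration for you; the same per-package result can be achieved by adding an entry to `/etc/portage/package.use` by hand. A minimal sketch, assuming you want CUDA acceleration with a BLAS fallback (the flag selection here is illustrative, not a recommendation):

```shell
# /etc/portage/package.use/ollama
# Enable NVIDIA CUDA GPU acceleration plus BLAS-accelerated CPU inference
sci-ml/ollama cuda blas
```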
| Flag | Description | 0.20.5 |
|---|---|---|
| blas | Enable BLAS acceleration for CPU inference | ✓ |
| blis | Use sci-libs/blis as the BLAS backend | ✓ |
| cuda | Enable NVIDIA CUDA GPU acceleration | ✓ |
| flexiblas | Use sci-libs/flexiblas as the BLAS backend | ✓ |
| mkl | Use Intel MKL instead of generic BLAS | ✓ |
| openblas | Use sci-libs/openblas as the BLAS backend | ✓ |
| rocm | Enable AMD ROCm/HIP GPU acceleration | ✓ |
| vulkan | Enable Vulkan GPU acceleration | ✓ |
| Type | File | Size | Versions |
|---|---|---|---|
| DIST | ollama-0.20.5.gh.tar.gz | 26470263 bytes (≈25.2 MiB) | 0.20.5 |
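The distfile above can be pre-fetched and checked before a full build; Portage verifies the tarball's size and checksums against the ebuild's Manifest as part of the fetch:

```shell
# Download the source tarball without building anything;
# Portage validates it against the Manifest automatically
emerge --fetchonly -a sci-ml/ollama
```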