Install this package:
emerge -a sci-misc/llama-cpp
If the package is masked, you can unmask it with the standalone autounmask tool:
autounmask sci-misc/llama-cpp
Alternatively, let emerge write the required configuration changes itself (then review and accept them with dispatch-conf or etc-update):
emerge --autounmask-write -a sci-misc/llama-cpp
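The keyword change can also be made by hand. A minimal sketch, assuming an ~amd64 system and the conventional per-package file layout under /etc/portage (the file name is arbitrary):

```shell
# Accept ~arch (testing) keywords for llama-cpp manually
mkdir -p /etc/portage/package.accept_keywords
echo "sci-misc/llama-cpp ~amd64" >> /etc/portage/package.accept_keywords/llama-cpp
```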
<pkgmetadata>
  <maintainer type="person">
    <email>zl29ah@gmail.com</email>
    <name>Sergey Alirzaev</name>
  </maintainer>
  <use>
    <flag name="blis">Build a BLIS backend</flag>
    <flag name="flexiblas">Build a FlexiBLAS backend</flag>
    <flag name="rocm">Build a HIP (ROCm) backend</flag>
    <flag name="hip">Build a HIP (ROCm) backend</flag>
    <flag name="wmma">Use rocWMMA to enhance flash attention performance</flag>
    <flag name="openblas">Build an OpenBLAS backend</flag>
    <flag name="opencl">Build an OpenCL backend, so far only works on Adreno and Intel GPUs</flag>
    <flag name="openssl">Use openssl to support HTTPS</flag>
  </use>
  <upstream>
    <remote-id type="github">ggml-org/llama.cpp</remote-id>
  </upstream>
</pkgmetadata>
Manage USE flags for this package with euse (from app-portage/gentoolkit):
euse -i <flag> -p sci-misc/llama-cpp   # show information about a flag
euse -E <flag> -p sci-misc/llama-cpp   # enable a flag
euse -D <flag> -p sci-misc/llama-cpp   # disable a flag
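euse edits the Portage configuration for you; the same per-package result can be pinned by hand in package.use. A sketch, assuming you want the OpenBLAS backend and curl support (the flag choice is illustrative):

```shell
# Set per-package USE flags for llama-cpp
mkdir -p /etc/portage/package.use
echo "sci-misc/llama-cpp openblas curl" >> /etc/portage/package.use/llama-cpp

# Rebuild so the changed flags take effect
emerge --changed-use -a sci-misc/llama-cpp
```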
| Flag | Description | 9999 | 0_pre8838 |
|---|---|---|---|
| blis | Build a BLIS backend | ✓ | ✓ |
| cuda | Enable NVIDIA CUDA toolkit support | ✓ | ✓ |
| curl | Use curl for HTTP, HTTPS and FTP backends | ✓ | ✓ |
| examples | Install examples, usually source code | ✓ | ✓ |
| flexiblas | Build a FlexiBLAS backend | ✓ | ✓ |
| openblas | Build an OpenBLAS backend | ✓ | ✓ |
| opencl | Build an OpenCL backend; so far only works on Adreno and Intel GPUs | ✓ | ✓ |
| openmp | Build support for OpenMP (parallel computing) | ⊕ | ⊕ |
| openssl | Use openssl to support HTTPS | ✓ | ✓ |
| rocm | Build a HIP (ROCm) backend | ✓ | ✓ |
| vulkan | Build a Vulkan backend | ✓ | ✓ |
| wmma | Use rocWMMA to enhance flash attention performance | ✓ | ✓ |

(✓ = available, ⊕ = enabled by default)
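Before building, you can preview which of the flags above will actually be active. A sketch using standard Portage and gentoolkit commands:

```shell
# Pretend + verbose: show the USE flags emerge would build with
emerge -pv sci-misc/llama-cpp

# Or inspect the package's flags with gentoolkit's equery
equery uses sci-misc/llama-cpp
```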
| Type | File | Size | Versions |
|---|---|---|---|
| DIST | llama-cpp-0_pre8838.tar.gz | 33811834 bytes | 0_pre8838 |
| Type | File | Size |
|---|---|---|
| DIST | ggml-org_models_tinyllamas_stories15M-q4_0-99dd1a73db5a37100bd4ae633f4cfce6560e1567.gguf | 19077344 bytes |
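Once installed, the package provides the upstream llama.cpp binaries. A minimal usage sketch, assuming llama-cli is on PATH and you have a GGUF model on disk (the model path below is a placeholder, not shipped by the package):

```shell
# Generate up to 64 tokens from a prompt with a local GGUF model
llama-cli -m /path/to/model.gguf -p "Hello, world" -n 64
```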