sci-ml/ollama (sinustrom)

Install

Install this package:

emerge -a sci-ml/ollama

If the package is masked, you can unmask it with the autounmask tool or with emerge's standard options:

autounmask sci-ml/ollama

Alternatively:

emerge --autounmask-write -a sci-ml/ollama
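
Both available versions are keyworded ~amd64 (testing), so on a stable amd64 system a minimal manual alternative to --autounmask-write is to accept the keyword yourself (a sketch; the file name under package.accept_keywords is arbitrary):

echo "sci-ml/ollama ~amd64" >> /etc/portage/package.accept_keywords/ollama

Then re-run emerge -a sci-ml/ollama as usual.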

Package Information

Description:
Get up and running with Llama 3, Mistral, Gemma, and other language models.
Homepage:
https://ollama.com
License:
MIT

Versions

Version EAPI Keywords Slot
0.17.7 8 ~amd64 0
0.16.1 8 ~amd64 0

Metadata

Maintainers

Upstream

Raw Metadata XML
<pkgmetadata>
	<maintainer type="person">
		<email>negril.nx+gentoo@gmail.com</email>
		<name>Paul Zander</name>
	</maintainer>
	<maintainer type="person">
		<email>zoltan@sinustrom.info</email>
		<name>Zoltan Puskas</name>
	</maintainer>
	<use>
		<flag name="cuda">Enable NVIDIA CUDA support</flag>
		<flag name="mkl">Use <pkg>sci-libs/mkl</pkg> for blas, lapack and sparse blas routines</flag>
		<flag name="rocm">Enable ROCm gpu computing support</flag>
		<flag name="cuda_sm_75">Compile for compute capability 7.5</flag>
		<flag name="cuda_sm_80">Compile for compute capability 8.0</flag>
		<flag name="cuda_sm_86">Compile for compute capability 8.6</flag>
		<flag name="cuda_sm_87">Compile for compute capability 8.7</flag>
		<flag name="cuda_sm_89">Compile for compute capability 8.9</flag>
		<flag name="cuda_sm_90">Compile for compute capability 9.0</flag>
		<flag name="cuda_sm_100">Compile for compute capability 10.0</flag>
		<flag name="cuda_sm_103">Compile for compute capability 10.3</flag>
		<flag name="cuda_sm_110">Compile for compute capability 11.0</flag>
		<flag name="cuda_sm_120">Compile for compute capability 12.0</flag>
		<flag name="cuda_sm_121">Compile for compute capability 12.1</flag>
	</use>
	<upstream>
		<remote-id type="github">ollama/ollama</remote-id>
	</upstream>
</pkgmetadata>

USE Flags

Manage flags for this package with euse: euse -i <flag> -p sci-ml/ollama shows a flag's status, euse -E <flag> -p sci-ml/ollama enables it, and euse -D <flag> -p sci-ml/ollama disables it.
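
USE flags can also be set directly in /etc/portage/package.use. As a sketch, assuming an NVIDIA GPU with compute capability 8.6 (e.g. an RTX 30-series card):

echo "sci-ml/ollama cuda cuda_sm_86" >> /etc/portage/package.use/ollama

Pick the cuda_sm_* flag that matches your GPU's compute capability.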

Flag Description 0.17.7 0.16.1
(⚠️ marks flags that have no description in the package's metadata.xml.)
avx ⚠️
avx2 ⚠️
avx512_vnni ⚠️
avx512f ⚠️
avx512vbmi ⚠️
avx_vnni ⚠️
blas ⚠️
bmi2 ⚠️
cuda Enable NVIDIA CUDA support
cuda_sm_100 Compile for compute capability 10.0
cuda_sm_103 Compile for compute capability 10.3
cuda_sm_110 Compile for compute capability 11.0
cuda_sm_120 Compile for compute capability 12.0
cuda_sm_121 Compile for compute capability 12.1
cuda_sm_75 Compile for compute capability 7.5
cuda_sm_80 Compile for compute capability 8.0
cuda_sm_86 Compile for compute capability 8.6
cuda_sm_87 Compile for compute capability 8.7
cuda_sm_89 Compile for compute capability 8.9
cuda_sm_90 Compile for compute capability 9.0
f16c ⚠️
fma3 ⚠️
mkl Use <pkg>sci-libs/mkl</pkg> for blas, lapack and sparse blas routines
rocm Enable ROCm gpu computing support
sse4_2 ⚠️
vulkan ⚠️

Files

Manifest

Type File Size Versions
DIST ollama-0.16.1-deps.tar.xz 53044084 bytes 0.16.1
DIST ollama-0.16.1.gh.tar.gz 23110002 bytes 0.16.1
DIST ollama-0.17.7-deps.tar.xz 85522252 bytes 0.17.7
DIST ollama-0.17.7.gh.tar.gz 23221312 bytes 0.17.7