sci-ml/ollama (bentoo)

Package Information

Description:
Ollama is a tool for running large language models locally. It supports models like Llama 3, Mistral, Gemma, and many others. Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile, and optimizes setup and configuration details, including GPU usage.
Homepage:
https://ollama.com
License:
MIT
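
The Modelfile mentioned in the description is Ollama's declarative format for packaging a model together with its parameters and prompt. A minimal sketch, assuming the llama3 base model is available from the Ollama library (the model name and parameter values are illustrative only, not part of this package):

    # Modelfile: illustrative example, not shipped with this package
    FROM llama3
    PARAMETER temperature 0.7
    SYSTEM """You are a concise assistant."""

A model built from it can then be created and started with:

    ollama create my-assistant -f Modelfile
    ollama run my-assistant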

Versions

Version EAPI Keywords Slot
0.18.0 8 ~amd64 0
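
The only available version carries the ~amd64 keyword, i.e. it is marked testing on amd64 and must be keyword-unmasked before installation. A minimal sketch, assuming the bentoo overlay is already enabled on the system (the file name under package.accept_keywords is arbitrary):

    # accept the testing keyword for this package
    echo "sci-ml/ollama ~amd64" >> /etc/portage/package.accept_keywords/ollama
    # review and install
    emerge --ask sci-ml/ollama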

Metadata

Raw Metadata XML
<pkgmetadata>
	<maintainer type="project">
		<email>bentoo@protonmail.com</email>
		<name>Bentoo Project</name>
	</maintainer>
	<longdescription lang="en">
		Ollama is a tool for running large language models locally.
		It supports models like Llama 3, Mistral, Gemma, and many others.
		Ollama bundles model weights, configuration, and data into a single
		package, defined by a Modelfile, and optimizes setup and configuration
		details, including GPU usage.
	</longdescription>
	<use>
		<flag name="blas">Enable BLAS acceleration for CPU inference</flag>
		<flag name="cuda">Enable NVIDIA CUDA GPU acceleration</flag>
		<flag name="mkl">Use Intel MKL instead of generic BLAS</flag>
		<flag name="rocm">Enable AMD ROCm/HIP GPU acceleration</flag>
		<flag name="vulkan">Enable Vulkan GPU acceleration</flag>
	</use>
	<upstream>
		<doc>https://github.com/ollama/ollama/blob/main/docs/README.md</doc>
		<bugs-to>https://github.com/ollama/ollama/issues</bugs-to>
		<remote-id type="github">ollama/ollama</remote-id>
	</upstream>
</pkgmetadata>

USE Flags

Flag       Description                                      0.18.0
blas       Enable BLAS acceleration for CPU inference
blis       ⚠️ no description in metadata.xml
cuda       Enable NVIDIA CUDA GPU acceleration
flexiblas  ⚠️ no description in metadata.xml
mkl        Use Intel MKL instead of generic BLAS
openblas   ⚠️ no description in metadata.xml
rocm       Enable AMD ROCm/HIP GPU acceleration
vulkan     Enable Vulkan GPU acceleration
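
The acceleration backend is selected through these USE flags in the usual Portage way. A sketch of a package.use entry, assuming an NVIDIA system; the commented lines show the AMD and CPU alternatives, and the exact allowed combinations depend on the ebuild's REQUIRED_USE constraints:

    # /etc/portage/package.use/ollama
    sci-ml/ollama cuda
    # sci-ml/ollama rocm           # AMD GPU via ROCm/HIP instead
    # sci-ml/ollama blas mkl       # CPU inference with Intel MKL as the BLAS provider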

Files

Manifest

Type  File                        Size            Versions

Unmatched Entries

Type  File                        Size
DIST  ollama-0.18.0-deps.tar.xz   84650388 bytes
DIST  ollama-0.18.0.gh.tar.gz     23267563 bytes
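
After installation, the daemon is started and models are fetched with the upstream CLI. A sketch, assuming the ebuild installs an OpenRC service named ollama; starting the server manually with ollama serve works regardless:

    # start the server (or run: ollama serve)
    rc-service ollama start
    # pull a model and open an interactive chat
    ollama run llama3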