<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE pkgmetadata SYSTEM "https://www.gentoo.org/dtd/metadata.dtd">
<pkgmetadata>
	<maintainer type="project">
		<email>bentoo@protonmail.com</email>
		<name>Bentoo Project</name>
	</maintainer>
	<longdescription lang="en">
		Ollama is a tool for running large language models locally.
		It supports models like Llama 3, Mistral, Gemma, and many others.
		Ollama bundles model weights, configuration, and data into a single
		package, defined by a Modelfile, and optimizes setup and configuration
		details, including GPU usage.
	</longdescription>
	<use>
		<flag name="blas">Enable BLAS acceleration for CPU inference</flag>
		<flag name="cuda">Enable NVIDIA CUDA GPU acceleration</flag>
		<flag name="mkl">Use Intel MKL instead of generic BLAS</flag>
		<flag name="rocm">Enable AMD ROCm/HIP GPU acceleration</flag>
		<flag name="vulkan">Enable Vulkan GPU acceleration</flag>
	</use>
	<upstream>
		<doc>https://github.com/ollama/ollama/blob/main/docs/README.md</doc>
		<bugs-to>https://github.com/ollama/ollama/issues</bugs-to>
		<remote-id type="github">ollama/ollama</remote-id>
	</upstream>
</pkgmetadata>