sci-ml/lm-eval

Install

Install this package:

emerge -a sci-ml/lm-eval

If the package is masked or keyworded (this version is keyworded ~amd64), you can have the required Portage entries written for you. Either use the standalone autounmask tool:

autounmask sci-ml/lm-eval

or let emerge write the entries itself:

emerge --autounmask-write -a sci-ml/lm-eval

Afterwards, apply the pending configuration changes with dispatch-conf (or etc-update) and re-run the emerge command.
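Alternatively, since the only gate on this version is the testing keyword (~amd64), a single manual entry under /etc/portage/package.accept_keywords is enough; the file name below is arbitrary:

```shell
# /etc/portage/package.accept_keywords/lm-eval
sci-ml/lm-eval ~amd64
```

With that in place, the plain `emerge -a sci-ml/lm-eval` from above succeeds without autounmask.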

Package Information

Description:
A framework for evaluating language models (lm-evaluation-harness)
Homepage:
https://github.com/EleutherAI/lm-evaluation-harness https://pypi.org/project/lm-eval/
License:
MIT
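Once installed, the harness provides an `lm_eval` console script. A minimal invocation might look like the following; the model and task names here are purely illustrative, and the command needs the package installed plus network access to download the model and task data, so it is a sketch rather than something to run as-is:

```shell
# Evaluate a small Hugging Face model on a single task.
lm_eval --model hf \
        --model_args pretrained=EleutherAI/pythia-160m \
        --tasks lambada_openai \
        --batch_size 8
```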

Versions

Version EAPI Keywords Slot
0.4.11 8 ~amd64 0

Metadata

Maintainers

Ivan S. Titov <iohann.s.titov@gmail.com>

Upstream

pypi: lm-eval
github: EleutherAI/lm-evaluation-harness

Raw Metadata XML
<pkgmetadata>
	<maintainer type="person">
		<email>iohann.s.titov@gmail.com</email>
		<name>Ivan S. Titov</name>
	</maintainer>
	<use>
		<flag name="api">Wire dependencies for OpenAI/HF Inference / generic HTTP API model backends</flag>
		<flag name="math">Enable math-grading tasks (minerva_math, leaderboard math, hendrycks_math, etc.) — parses LaTeX answers and verifies symbolic equality between predicted and ground-truth solutions</flag>
		<flag name="sentencepiece">Pull sci-ml/sentencepiece for tasks that tokenise via SentencePiece</flag>
		<flag name="statsmodels">Pull dev-python/statsmodels for the discrim_eval task family</flag>
		<flag name="vllm">Wire dev-python/vllm for the vLLM model backend</flag>
	</use>
	<upstream>
		<remote-id type="pypi">lm-eval</remote-id>
		<remote-id type="github">EleutherAI/lm-evaluation-harness</remote-id>
	</upstream>
</pkgmetadata>

USE Flags

Manage flags for this package with euse (from app-portage/gentoolkit):

euse -i <flag> -p sci-ml/lm-eval   # show information about a flag
euse -E <flag> -p sci-ml/lm-eval   # enable a flag
euse -D <flag> -p sci-ml/lm-eval   # disable a flag

Flag	Description (all flags available in 0.4.11)
api Wire dependencies for OpenAI/HF Inference / generic HTTP API model backends
math Enable math-grading tasks (minerva_math, leaderboard math, hendrycks_math, etc.) — parses LaTeX answers and verifies symbolic equality between predicted and ground-truth solutions
sentencepiece Pull sci-ml/sentencepiece for tasks that tokenise via SentencePiece
statsmodels Pull dev-python/statsmodels for the discrim_eval task family
vllm Wire dev-python/vllm for the vLLM model backend
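To enable the optional backends at build time, set the corresponding USE flags in /etc/portage/package.use before emerging; the selection below is just an example:

```shell
# /etc/portage/package.use/lm-eval
# Example: enable HTTP-API model backends and the math-grading tasks
sci-ml/lm-eval api math
```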

Manifest

Unmatched Entries

Type	File	Size
DIST	lm_eval-0.4.11.tar.gz	3246509 bytes