Install this package:

```shell
emerge -a sci-ml/fastflowlm
```

If the package is masked, you can unmask it using the `autounmask` tool or standard emerge options:

```shell
autounmask sci-ml/fastflowlm
```

Or alternatively:

```shell
emerge --autounmask-write -a sci-ml/fastflowlm
```
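If the package is only keyworded rather than hard-masked, the usual manual alternative to `--autounmask-write` is a one-line entry under `/etc/portage/package.accept_keywords`. This is a sketch of the standard Portage convention; the file name `fastflowlm` is arbitrary, and `~amd64` assumes an amd64 system:

```
# /etc/portage/package.accept_keywords/fastflowlm
sci-ml/fastflowlm ~amd64
```

After adding the entry, re-run `emerge -a sci-ml/fastflowlm`.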
```xml
<pkgmetadata>
  <maintainer type="person">
    <email>iohann.s.titov@gmail.com</email>
    <name>Ivan S. Titov</name>
  </maintainer>
  <longdescription lang="en">
    FastFlowLM (FLM) is a lightweight LLM inference runtime purpose-built for
    AMD Ryzen AI NPUs (XDNA2 architecture). It provides an Ollama-style CLI
    and an OpenAI-compatible server API for running language models entirely
    on the NPU, with no GPU or CPU compute required. Supported hardware:
    Ryzen AI 300-series (Strix Point, Strix Halo), 400-series (Gorgon Point),
    and Z2 Extreme. XDNA1 (Ryzen AI 7000/8000) is NOT supported. The
    orchestration code and CLI are MIT-licensed. NPU compute kernels (xclbins)
    are proprietary binaries, free for commercial use under $10M annual
    company revenue.
  </longdescription>
  <upstream>
    <doc>https://fastflowlm.com/docs/</doc>
    <bugs-to>https://github.com/FastFlowLM/FastFlowLM/issues</bugs-to>
    <remote-id type="github">FastFlowLM/FastFlowLM</remote-id>
  </upstream>
</pkgmetadata>
```
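The metadata above says the runtime exposes an OpenAI-compatible server API. As a sketch of what "OpenAI-compatible" implies, the snippet below builds and posts a standard `/v1/chat/completions` request. The port `11434`, the model tag `llama3.2:1b`, and the helper names are assumptions following Ollama-style conventions, not details confirmed by this page; check the upstream docs at https://fastflowlm.com/docs/ for the actual defaults.

```python
import json
from urllib import request

# Assumed endpoint: /v1/chat/completions is the standard OpenAI route;
# the host/port below is an Ollama-style guess, not from this package's docs.
FLM_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Return a minimal OpenAI-compatible chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def send_chat(url: str, body: dict) -> dict:
    """POST the request; only works once an flm server is running locally."""
    req = request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Print the request body; sending it requires a live server.
    body = build_chat_request("llama3.2:1b", "Hello from the NPU")
    print(json.dumps(body, indent=2))
```

Because the API follows the OpenAI schema, existing OpenAI client libraries should also work by pointing their base URL at the local server.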
| Type | File | Size (bytes) | Versions |
|---|---|---|---|
| DIST | fastflowlm-0.9.39.tar.gz | 118784841 | 0.9.39 |
| DIST | fastflowlm-0.9.40.tar.gz | 119072212 | 0.9.40 |
| DIST | fastflowlm-0.9.41.tar.gz | 119071744 | 0.9.41 |
| DIST | msgpack-c-092bc69b6e815980bce7808595c914dd3a29f905.tar.gz | 476163 | 0.9.39, 0.9.40, 0.9.41 |
| DIST | sentencepiece-11051e3b73b3a6222a52acd720e39805dc7545ab.tar.gz | 13487870 | 0.9.39, 0.9.40, 0.9.41 |
| DIST | tokenizers-cpp-34885cfd7b9ef27b859c28a41e71413dd31926f5.tar.gz | 39707 | 0.9.41 |
| DIST | tokenizers-cpp-acbdc5a27ae01ba74cda756f94da698d40f11dfe.tar.gz | 39758 | 0.9.39, 0.9.40 |