Install this package:
emerge -a sci-ml/caffe2
If the package is masked, you can unmask it using the autounmask tool or standard emerge options:
autounmask sci-ml/caffe2
Or alternatively:
emerge --autounmask-write -a sci-ml/caffe2
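When run with --autounmask-write, emerge writes the required entries under /etc/portage, which must then be accepted with dispatch-conf (or etc-update). The equivalent entries can also be written by hand; a sketch (the file names and the ~amd64 keyword are illustrative, use your architecture's keyword):

```
# /etc/portage/package.accept_keywords/caffe2 — accept the testing keyword (illustrative)
sci-ml/caffe2 ~amd64

# /etc/portage/package.unmask/caffe2 — only needed if the package is hard-masked via package.mask
sci-ml/caffe2
```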
<pkgmetadata>
  <maintainer type="person">
    <email>tupone@gentoo.org</email>
    <name>Tupone Alfredo</name>
  </maintainer>
  <use>
    <flag name="cusparselt">Use the CUDA/HIP Sparse Matrix Multiplication</flag>
    <flag name="distributed">Support distributed applications</flag>
    <flag name="fbgemm">Use <pkg>sci-ml/FBGEMM</pkg></flag>
    <flag name="flash">Enable flash attention</flag>
    <flag name="gloo">Use <pkg>sci-ml/gloo</pkg></flag>
    <flag name="kineto">Use <pkg>sci-ml/kineto</pkg> profiling library</flag>
    <flag name="memefficient">Enable mem efficient attention</flag>
    <flag name="mimalloc">Use <pkg>dev-libs/mimalloc</pkg> as replacement for system malloc</flag>
    <flag name="mkl">Use <pkg>sci-libs/mkl</pkg> for blas, lapack and sparse blas routines</flag>
    <flag name="nccl">Use <pkg>dev-libs/rccl</pkg> (NCCL compatible) backend for distributed operations</flag>
    <flag name="nnpack">Use <pkg>sci-ml/NNPACK</pkg></flag>
    <flag name="numpy">Add support for math operations through numpy</flag>
    <flag name="onednn">Use <pkg>sci-ml/oneDNN</pkg></flag>
    <flag name="openblas">Use <pkg>sci-libs/openblas</pkg> for blas routines</flag>
    <flag name="openmp">Use OpenMP for parallel code</flag>
    <flag name="qnnpack">Use QNNPACK</flag>
    <flag name="rocm">Enable ROCm gpu computing support</flag>
    <flag name="xnnpack">Use <pkg>sci-ml/XNNPACK</pkg></flag>
  </use>
  <upstream>
    <remote-id type="github">pytorch/pytorch</remote-id>
  </upstream>
</pkgmetadata>
Manage flags for this package:
euse -i <flag> -p sci-ml/caffe2    (show the flag's description and current state)
euse -E <flag> -p sci-ml/caffe2    (enable the flag)
euse -D <flag> -p sci-ml/caffe2    (disable the flag)
| Flag | Description | 2.11.0-r3 | 2.10.0-r6 |
|---|---|---|---|
| cuda | Enable NVIDIA CUDA support (computation on GPU) | ✓ | ✓ |
| cusparselt | Use the CUDA/HIP Sparse Matrix Multiplication | ✓ | ✓ |
| distributed | Support distributed applications | ✓ | ✓ |
| fbgemm | Use sci-ml/FBGEMM | ✓ | ✓ |
| flash | Enable flash attention | ✓ | ✓ |
| gloo | Use sci-ml/gloo | ✓ | ✓ |
| kineto | Use the sci-ml/kineto profiling library | ✓ | ✗ |
| memefficient | Enable memory-efficient attention | ✓ | ✓ |
| mimalloc | Use dev-libs/mimalloc as a replacement for the system malloc | ✓ | ✓ |
| mkl | Use sci-libs/mkl for BLAS, LAPACK and sparse BLAS routines | ✓ | ✓ |
| mpi | Add MPI (Message Passing Interface) layer to the apps that support it | ✓ | ✓ |
| nccl | Use dev-libs/rccl (NCCL-compatible) backend for distributed operations | ✓ | ✓ |
| nnpack | Use sci-ml/NNPACK | ✓ | ✓ |
| numpy | Add support for math operations through NumPy | ⊕ | ⊕ |
| onednn | Use sci-ml/oneDNN | ✓ | ✓ |
| openblas | Use sci-libs/openblas for BLAS routines | ✓ | ✓ |
| opencl | Enable OpenCL support (computation on GPU) | ✓ | ✓ |
| openmp | Use OpenMP for parallel code | ✓ | ✓ |
| qnnpack | Use QNNPACK | ✓ | ✓ |
| rocm | Enable ROCm GPU computing support | ✓ | ✓ |
| xnnpack | Use sci-ml/XNNPACK | ✓ | ✓ |
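Besides euse, USE flags can be set persistently in /etc/portage/package.use. A sketch, with a purely illustrative flag selection rather than a recommended configuration:

```
# /etc/portage/package.use/caffe2 — illustrative selection, adjust to your hardware
sci-ml/caffe2 openblas openmp numpy -cuda -rocm
```

After editing the file, re-run emerge -a sci-ml/caffe2 so the package is rebuilt with the new flag set.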
| Type | File | Size |
|---|---|---|
| DIST | composable_kernel-7fe50dc3.tar.gz | 5380728 bytes |
| DIST | flash-attention-2.7.4.gh.tar.gz | 5841323 bytes |
| DIST | pytorch-2.10.0.tar.gz | 62555251 bytes |
| DIST | pytorch-2.11.0.tar.gz | 63504636 bytes |