Thursday, November 21, 2024
A high-throughput and memory-efficient inference and serving engine for LLMs
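This tagline matches vLLM's project description. Assuming the engine in question is vLLM, a minimal offline-inference sketch might look like the following; the model id is only an illustrative example, and any Hugging Face model supported by vLLM could stand in for it.

```python
from vllm import LLM, SamplingParams

# Load a small model for demonstration; any supported Hugging Face
# model id works here (this one is just an example).
llm = LLM(model="facebook/opt-125m")

# Sampling parameters control decoding behavior.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() batches prompts internally, which is where the
# high-throughput scheduling comes in.
outputs = llm.generate(["The capital of France is"], sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```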