Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration.
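
Below is a minimal sketch of what that integration looks like via cgo: a Go program that links against a locally built `libllama` and initializes the llama.cpp backend. The include/library paths (`llama.cpp/include`, `llama.cpp/ggml/include`, `llama.cpp/build/...`) and link flags are assumptions that depend on how and where you build llama.cpp, and the C API signatures (e.g. `llama_backend_init`) have changed between llama.cpp versions, so adjust to the headers you actually compile against.

```go
package main

/*
// Assumed layout: llama.cpp checked out and built next to this file.
// Adjust -I and -L paths and the library list to match your build.
#cgo CFLAGS: -I${SRCDIR}/llama.cpp/include -I${SRCDIR}/llama.cpp/ggml/include
#cgo LDFLAGS: -L${SRCDIR}/llama.cpp/build/src -L${SRCDIR}/llama.cpp/build/ggml/src -lllama -lggml -lstdc++ -lm
#include "llama.h"
*/
import "C"

import "fmt"

func main() {
	// Initialize the llama.cpp backend; this is where compute backends
	// (CPU, CUDA, Metal, Vulkan, ...) built into libllama are set up.
	// Recent llama.cpp versions take no arguments here; older releases
	// accepted a NUMA flag, so check the llama.h you build against.
	C.llama_backend_init()
	defer C.llama_backend_free()

	fmt.Println("llama.cpp backend initialized")
	// From here a real application would load a GGUF model, create a
	// context, tokenize a prompt, and run decoding via the same C API.
}
```

A typical workflow (again, paths are illustrative): build llama.cpp with `cmake -B build && cmake --build build --config Release`, enabling whichever acceleration backend your hardware supports at configure time, then `go build` in the directory containing the Go file so cgo can find the headers and libraries.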