Go with your own intelligence - Go applications that embed llama.cpp directly for local LLM inference with hardware acceleration.