Show HN: Ergonomically call LLM in bulk from CLI
thatjoeoverthr
I've found myself repeatedly writing little scripts to make bulk calls to LLMs for various tasks; for example, running some analysis over a large list of records.
There are a few "gotchas" to doing this. For example, some service providers enforce rate limits, and some models will not reliably return JSON when you ask for it.
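To illustrate the two gotchas, here is a minimal sketch (not the tool's actual implementation) of a call wrapper that spaces out requests and retries when the model's output isn't valid JSON. `call_model` is a hypothetical stand-in for whatever API client you use:

```python
import json
import time

def call_with_retries(call_model, prompt, max_retries=3, min_interval=1.0):
    """Call a model function, enforcing a minimum interval between
    attempts and retrying when the response is not valid JSON.
    `call_model` is a hypothetical stand-in for a real API client."""
    last_call = 0.0
    for attempt in range(max_retries):
        # Crude rate limit: wait until min_interval has elapsed.
        wait = min_interval - (time.monotonic() - last_call)
        if wait > 0:
            time.sleep(wait)
        last_call = time.monotonic()
        raw = call_model(prompt)
        try:
            return json.loads(raw)  # accept only parseable JSON
        except json.JSONDecodeError:
            continue  # malformed output: try again
    raise ValueError(f"no valid JSON after {max_retries} attempts")
```

A real tool would likely use a token-bucket limiter and provider-specific backoff, but the retry-until-parseable loop is the core idea.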
So, I've written a command for this.
What I've tried to do here is let the user break up prompts and configuration as they see fit.
For example, you can have a single prompt file that includes the API key, rate limit, and other settings all together, or split these across multiple files, keep some parts local, or override individual parameters.
This makes it easy to share settings between tasks while keeping prompts in simple, committable files of narrow scope.
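The layering described above can be sketched as a simple left-to-right merge of config files, where later files override earlier keys. This is an assumption about one way to implement it, not the tool's actual mechanism; the file names are made up:

```python
import json
from pathlib import Path

def load_layered_config(*paths):
    """Merge JSON config files left to right; later files override
    earlier keys. A shared, committable file can hold the model and
    rate limit, while a local uncommitted file adds the API key."""
    merged = {}
    for path in paths:
        merged.update(json.loads(Path(path).read_text()))
    return merged

# e.g. load_layered_config("shared.json", "local.json")
```

Keeping the secret-bearing file last means it both supplies the API key and wins any conflicts with the shared defaults.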
I hope this can be of use to someone. Thanks for reading.