# Bash One-Liners for LLMs - justine.lol

Synced: [[2023_12_14]] 2:30 PM
Last Highlighted: [[2023_12_14]]
Tags: [[ai]] [[AI]]

![rw-book-cover](https://justine.lol/oneliners/llamafile.png)

## Highlights

[[2023_12_14]] [View Highlight](https://read.readwise.io/read/01hhjq45zp06v26cgpdprjdf8d)

> What if you could get LLaVA to rename all those files? You can control text generation by imposing language constraints in Backus-Naur Form. That restricts which tokens can be selected when generating the output. For example, the flag `--grammar 'root ::= "yes" | "no"'` will force the LLM to only print `"yes\n"` or `"no\n"` and then `exit()`. We can adapt that for generating safe filenames with lowercase letters and underscores as follows:

[[2023_12_14]] [View Highlight](https://read.readwise.io/read/01hhkhnvdhdp7vfdn5vg5nmzvp)

> The tricks I'm using here are as follows:
> 1. By not using any English in my prompt, I avoid the possibility of the LLM explaining to me like I'm 5 what the code is doing.
> 2. By using markdown code block syntax, I greatly increase the chance that the LLM will generate a "reverse prompt" token (specified via the `-r` flag) that'll help the LLM `exit()` at an appropriate moment. Otherwise, it'll potentially go on forever. For additional assurance that auto-complete will halt, consider passing the `-n 100` flag to limit the response to 100 tokens.
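
The first highlight describes constraining LLaVA's output with a BNF grammar so it can only emit a safe filename. A minimal sketch of how that could be wired into a rename loop is below; the llamafile name, image glob, prompt wording, and grammar are assumptions for illustration, not the article's exact command.

```bash
# Hypothetical sketch: rename images using a grammar-constrained LLaVA.
# The grammar limits output to lowercase letters and underscores only,
# so the result is always a filesystem-safe name.
for f in *.jpg; do
  name=$(./llava-v1.5-7b-q4.llamafile \
           --image "$f" \
           --temp 0 \
           --grammar 'root ::= [a-z_]+' \
           -p 'Suggest a short descriptive filename for this image:')
  mv -n -- "$f" "${name}.jpg"   # -n: never overwrite an existing file
done
```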
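
The second highlight's halting tricks (a prompt that is pure code with no English, a markdown code fence so the model naturally produces a closing fence to stop on via `-r`, and a hard `-n 100` token cap as backup) might look something like the sketch below. The model file and the code being completed are placeholders, not the article's actual example.

```bash
# Hypothetical sketch: code auto-completion that halts cleanly.
# The prompt opens a markdown code block containing only code (no English),
# -r stops generation when the model emits the closing fence, and
# -n 100 caps the response at 100 tokens in case it never does.
./mistral-7b.llamafile \
    --temp 0 \
    -n 100 \
    -r '```' \
    -e -p '```c\nchar *strcpy(char *dst, const char *src) {\n'
```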