Simon Willison’s Weblog

Subscribe

llm cmd undo last git commit—a new plugin for LLM

26th March 2024

I just released a neat new plugin for my LLM command-line tool: llm-cmd. It lets you run a command to generate a further terminal command, review and edit that command, then hit <enter> to execute it or <ctrl-c> to cancel.

This is an alpha release. It’s a very dangerous piece of software! Do not use this unless you are fluent in the terminal and confident that you understand what it’s doing for you and what could go wrong. I take no responsibility if you accidentally delete all of your files with this tool.

To try this out, you’ll need my LLM tool installed:

brew install llm # 'pipx install llm' works too
llm keys set openai
<paste in your OpenAI API key>

Now install the new plugin:

llm install llm-cmd

To run the new command, type llm cmd and then type what you want to do.

Here’s an example of how to use it:

llm cmd show the first three lines of every file in this directory

I ran this just now and it gave me the following:

head -n 3 *

Crucially, it will not execute that command directly. It pre-populates your terminal with the command, and you can edit it before hitting <enter> to run it (or cancel with <ctrl-c>).

Here’s an animated GIF demo showing it in action:

[Animated GIF demo: running "llm cmd show the first three lines of every file in this directory" produces head -n 3 *, which is then executed to show the first lines of LICENSE, Pipfile, README.md, pyproject.toml and llm_cmd.py. A follow-up "llm cmd show filename and first three lines of every file here" produces find . -maxdepth 1 -type f -exec sh -c 'echo "{}" && head -n 3 "{}" && echo' \; which prints each filename followed by its first three lines.]

It has a couple of options: you can add -m gpt-4 to run against a different model (it defaults to the default model configured for LLM), and you can pass -s/--system to set a different system prompt to change how it behaves.
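For example, to run the same request against GPT-4 with a custom system prompt, the invocation would look something like this (adjust the model ID to whatever you have configured):

llm cmd -m gpt-4 -s 'Reply with a single safe shell command, nothing else' show the first three lines of every file in this directory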

The default system prompt took a little bit of iteration. Here’s the instruction it passes to the model, along with your input:

Return only the command to be executed as a raw string, no string delimiters wrapping it, no yapping, no markdown, no fenced code blocks, what you return will be passed to subprocess.check_output() directly.

For example, if the user asks: undo last git commit

You return only: git reset --soft HEAD~1

I really like “no yapping” as a brief way to encourage a model not to include lengthy explanatory text!

So far I’ve tried this prompt against gpt-3.5-turbo, GPT-4, Claude 3 Opus and Claude 3 Haiku (using my llm-claude-3 plugin) and it gave me the desired result for all four models. I’m sure there are many ways this could be improved, and those improvements likely vary depending on the model you’re using. That’s why this is an alpha!
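If you’re curious what that looks like as code, here’s a rough sketch (not the plugin’s actual implementation) of how the system prompt and your input could be combined using LLM’s Python API:

import llm

SYSTEM_PROMPT = (
    "Return only the command to be executed as a raw string, no string "
    "delimiters wrapping it, no yapping, no markdown, no fenced code blocks, "
    "what you return will be passed to subprocess.check_output() directly.\n"
    "For example, if the user asks: undo last git commit\n"
    "You return only: git reset --soft HEAD~1"
)

def generate_command(prompt, model_id="gpt-3.5-turbo"):
    # Ask the model for a single shell command, guided by the system prompt
    model = llm.get_model(model_id)
    response = model.prompt(prompt, system=SYSTEM_PROMPT)
    return response.text().strip()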

Built with assistance from ChatGPT/GPT-4

The key feature that enables this plugin is the ability to populate the user’s terminal with text that they can edit before they execute it.

I didn’t know how to do this, so I asked GPT-4:

Write a Python function interactive_exec(“ls -lah”)

It makes that string available for the user to edit in the terminal—and when they hit enter it executes the command in their shell

It didn’t give me what I wanted at first (it suggested just using input("Edit command: ")), so I asked a follow-up:

Show more options, I’m not sure that would let them directly edit the existing text

And it spat this out, which was exactly what I wanted:

import readline
import subprocess

def interactive_exec(command):
    # Set the initial text for the input
    readline.set_startup_hook(lambda: readline.insert_text(command))
    try:
        edited_command = input("Edit command: ")
        output = subprocess.check_output(edited_command, shell=True, stderr=subprocess.STDOUT)
        print(output.decode())
    except subprocess.CalledProcessError as e:
        print(f"Command failed with error: {e.output.decode()}")
    finally:
        # Remove the startup hook to avoid affecting future inputs
        readline.set_startup_hook(None)

# Example usage
interactive_exec("ls -lah")

I tracked down the documentation for readline.set_startup_hook() and readline.insert_text() to make sure I understood what they were doing. This turned out to be the exact thing I needed to know.

llm cmd undo last git commit

This is my favourite example, because I’ve been using Git for 15+ years and I still can’t ever remember the exact command for this.

Here’s what llm cmd spits out:

$ llm cmd undo last git commit
git reset --soft HEAD~1

It should always get this one right, because it’s the example I provided in the system prompt!
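In case you can never remember it either: the --soft flag moves the branch back one commit but keeps that commit’s changes staged, so nothing is lost. The similar-looking --hard variant would discard them:

git reset --soft HEAD~1   # undo the last commit, keep its changes staged
git reset --hard HEAD~1   # undo the last commit and discard its changes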