Theme:
  • productivity
  • ai

Introducing: magic-cli

Guy Waldman
July 16, 2024
Time to read: 4 minutes

#Problem statement

The command line is a superpower every developer should harness.
For me, it has always been the "source of truth", even when alternative GUI applications exist for various tasks.
My main issues with command line interfaces are:
  1. Non-intuitive: CLI tools are often not intuitive to use, frequently for historical reasons
  2. Context switches: I frequently switch between operating systems, and even between shells on the same device, so remembering the right commands on each system is a challenge. Aliases and functions shared across devices via dotfiles help, but the mental overhead of recalling the right command for each system remains.
  3. Arguments: Many CLI tools take a lot of arguments, and sometimes even the order of the arguments matters (case in point: ffmpeg, which is notoriously difficult to use)
This is a problem that LLMs, despite some of their limitations, can help solve.
Fig terminal (now part of Amazon Q) and GitHub Copilot for CLI have been helpful in accelerating my workflow on the CLI, despite not being perfect (inherently so due to the nature of LLMs), and I wanted to take this idea further.
The main things I wanted from these types of tools are:
  1. Cross-platform: Should run on most platforms and shells
  2. Execution: Should be able to execute commands it suggests in the context of my current shell session (with a confirmation before doing so, to avoid accidentally running commands I didn't mean to)
  3. Local inference: Should support text completions with a local LLM such as ollama or llama.cpp
  4. Great UX: Should be easy to use and have a great UX
Introducing magic-cli, which aims to solve these gripes.

#What's Magic CLI?

magic-cli is a command line utility to make you a magician in the terminal.
It started as a hobby project I hacked on during a vacation I took a little while ago, and it's something I think could be useful to others.
It is built in Rust (see the Why Rust? section below) and supports local and remote LLM providers (at the time of this writing, ollama and OpenAI).

#How does it work?

The basic suggest command accepts a prompt in natural language and asks the LLM (local or remote, depending on the user configuration) to suggest a command to execute.
The user can then revise the command until they are content with the answer, and either copy it to the clipboard or execute it (again depending on the user configuration).
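As a rough illustration of that flow (not the actual magic-cli source; the LLM call is mocked out, and the function names here are made up for the sketch), the suggest-then-confirm loop might look like this in Rust:

```rust
// Illustrative sketch of a suggest flow: ask an LLM for a command,
// then gate execution behind an explicit confirmation.
// The real tool delegates to a configured provider (e.g. ollama or OpenAI);
// here the provider call is a hard-coded stand-in.
fn mock_llm_suggest(prompt: &str) -> String {
    // Stand-in for a request to the configured LLM provider.
    if prompt.contains("largest files") {
        "du -ah . | sort -rh | head -n 10".to_string()
    } else {
        format!("echo 'no suggestion for: {prompt}'")
    }
}

/// Only execute if the user explicitly answered "y" (case-insensitive).
fn confirmed(answer: &str) -> bool {
    answer.trim().eq_ignore_ascii_case("y")
}

fn main() {
    let suggestion = mock_llm_suggest("show the 10 largest files");
    println!("Suggested command: {suggestion}");
    // In a real tool this answer would come from stdin; hard-coded here.
    if confirmed("y") {
        println!("(would execute: {suggestion})");
    }
}
```

The confirmation gate is the important part: since LLM output is unreliable, nothing runs without the user approving the exact command first.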
The more advanced ask command (experimental) accepts a "task" in natural language and the LLM will attempt to answer it, asking the user to run various commands if more context is required. The loop continues until the user is satisfied with the answer.
In addition, there is a search command which accepts an expression that should be similar to a previous command in your shell history, and performs a semantic search across your history to find the closest match.
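Under the hood, that kind of search is classic embedding similarity: embed the query and each history entry as vectors, then rank by cosine similarity. A minimal sketch (with toy bag-of-words vectors standing in for a real embedding model, and hypothetical function names):

```rust
// Toy "embedding": a binary bag-of-words vector over a small vocabulary.
// A real implementation would call an embedding model instead.
fn embed(text: &str, vocab: &[&str]) -> Vec<f32> {
    vocab
        .iter()
        .map(|w| if text.contains(w) { 1.0 } else { 0.0 })
        .collect()
}

// Cosine similarity between two vectors; 0.0 for zero-length vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

// Return the history entry most similar to the query.
fn best_match<'a>(query: &str, history: &[&'a str], vocab: &[&str]) -> &'a str {
    let q = embed(query, vocab);
    *history
        .iter()
        .max_by(|a, b| {
            cosine(&q, &embed(a, vocab))
                .partial_cmp(&cosine(&q, &embed(b, vocab)))
                .unwrap()
        })
        .unwrap()
}

fn main() {
    let vocab = ["docker", "git", "push", "compose", "log"];
    let history = ["git push origin main", "docker compose up -d", "tail -f app.log"];
    println!("{}", best_match("bring up docker compose", &history, &vocab));
}
```

The payoff over plain substring search is that the query only needs to be *similar* to the past command, not an exact prefix or fragment of it.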

#Why Rust?

A lot of projects use "powered by Rust" as a PR move that emphasizes strong performance and memory safety guarantees, but truth be told, I just really like Rust, especially for building CLI tools.
There are major drawbacks to choosing Rust for this, sure:
  1. Higher entry barrier for contributions, as it's a less popular language with a steep learning curve
  2. Rust's complexity, compared to higher-level languages, usually pays off for software with significant performance and security requirements, but for a tool like this it may be somewhat overkill
  3. Iteration and development time can be slower, especially for newer Rustaceans
An obvious choice here would be Python, which is a popular choice for working with LLMs and has a great ecosystem for it.
That goes especially since most tasks are IO-bound rather than CPU-bound (the code does not perform inference itself, but rather delegates to the configured LLM), so choosing a lower-level, performant language has diminishing returns.
However, Rust's solid ecosystem and language robustness empower me much more than other languages do.
"If it compiles, it works" generally proves correct in my workflows with Rust.
That's anecdotal, of course, but I strongly recommend watching this talk by Jon Gjengset:
It's a bit old by now, but still relevant.
I have also developed a project for LLM orchestration in Rust, which hasn't really seen much love recently: guywaldman/orch

#Try it now!

You can install magic-cli in a variety of ways (even with Homebrew). Check out the installation instructions in the repo: guywaldman/magic-cli
Please try it out and let me know what you think!
