
# finish

AI-powered shell completion that runs 100% on your machine.

One command and your terminal learns what you type next.

## Install

```bash
# On Ubuntu/Debian, first ensure python3-venv is installed
sudo apt-get install python3-venv

# Then run the installer
curl -sSL https://git.appmodel.nl/tour/finish/raw/branch/main/docs/install.sh | bash
source ~/.bashrc   # or ~/.zshrc

# Optional: for the "plato" setup, point plato.lan at the host in /etc/hosts
echo "192.168.1.74    plato.lan" | sudo tee -a /etc/hosts
```

Press `Alt+\` after typing any command to get intelligent completions: no cloud, no data leaves your machine, instant results.

## How it works

  1. Captures your current directory, recent history, env vars, and available tools
  2. Analyzes your intent and builds a context-aware prompt
  3. Queries your local LLM (LM Studio, Ollama, or any OpenAI-compatible endpoint)
  4. Returns 2-5 ranked completions with an interactive picker
  5. Results are cached for instant replay
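Step 3 can be sketched with a plain `curl` call. This is not finish's internal code, just an illustration of what a request to a local Ollama endpoint looks like, following Ollama's documented `/api/chat` format; the model name and prompt contents are placeholders:

```shell
# Build a chat payload like the one finish might send to a local LLM.
payload=$(cat <<'EOF'
{
  "model": "llama3:latest",
  "stream": false,
  "messages": [
    {"role": "system", "content": "Suggest shell command completions."},
    {"role": "user", "content": "cwd: ~/project\nlast: git add -A\ninput: git commit"}
  ]
}
EOF
)
echo "$payload"
# With Ollama running locally, send it:
# curl -s http://localhost:11434/api/chat -d "$payload"
```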

## Usage

Type a command, then press `Alt+\`:

```bash
# Natural language commands
show gpu status                     # → nvidia-smi
resolve title of website google.com # → curl -s https://google.com | grep -oP '<title>\K[^<]+'
make file about dogs                # → echo "About Dogs" > dogs.md

# Partial commands
git commit                          # → git commit -m "..."
docker run                          # → docker run -it --rm ubuntu bash
find large files                    # → find . -type f -size +100M
```

Navigate with ↑↓, press Enter to accept, Esc to cancel.

## Configure

View current configuration:

```bash
finish config
```

Edit configuration file:

```bash
nano ~/.finish/finish.json
```
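After editing, a quick way to confirm the file is still valid JSON. This check is a suggestion, not a built-in `finish` command; it uses only the Python standard library, shown on a temp copy so it also works before install:

```shell
# Validate a finish-style config file with python3's built-in json.tool.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{"provider": "ollama", "model": "llama3:latest"}
EOF
if python3 -m json.tool "$tmp" >/dev/null; then
  echo "valid JSON"
fi
rm -f "$tmp"
# For the real file: python3 -m json.tool ~/.finish/finish.json >/dev/null && echo ok
```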

### Configuration Examples

#### Local Ollama

```json
{
  "provider": "ollama",
  "model": "llama3:latest",
  "endpoint": "http://localhost:11434/api/chat",
  "temperature": 0.0,
  "api_prompt_cost": 0.0,
  "api_completion_cost": 0.0,
  "max_history_commands": 20,
  "max_recent_files": 20,
  "cache_size": 100
}
```

#### LM Studio

```json
{
  "provider": "lmstudio",
  "model": "dolphin3.0-llama3.1-8b@q4_k_m",
  "endpoint": "http://localhost:1234/v1/chat/completions",
  "temperature": 0.0,
  "api_prompt_cost": 0.0,
  "api_completion_cost": 0.0,
  "max_history_commands": 20,
  "max_recent_files": 20,
  "cache_size": 100
}
```

#### OpenAI (or compatible API)

```json
{
  "provider": "lmstudio",
  "model": "gpt-4",
  "endpoint": "https://api.openai.com/v1/chat/completions",
  "api_key": "sk-...",
  "temperature": 0.0,
  "api_prompt_cost": 0.03,
  "api_completion_cost": 0.06,
  "max_history_commands": 20,
  "max_recent_files": 20,
  "cache_size": 100
}
```
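All three examples share the same shape, so a small one-liner can read any field without opening an editor. This helper is my suggestion, not part of finish; it uses python3 since `jq` may not be installed, demonstrated on an inline copy (point it at `~/.finish/finish.json` once installed):

```shell
# Read fields from a finish-style JSON config on stdin.
cfg='{"provider": "ollama", "model": "llama3:latest", "temperature": 0.0}'
printf '%s' "$cfg" | python3 -c 'import json,sys; c=json.load(sys.stdin); print(c["provider"], c["model"])'
# → ollama llama3:latest
```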

## Requirements

- Python 3.7+
- Bash ≥ 4 or Zsh ≥ 5
- A running local LLM (Ollama, LM Studio, etc.) or API access

Python dependencies (installed automatically):

- httpx
- prompt_toolkit
- rich

## Commands

```bash
finish install         # Set up the Alt+\ keybinding
finish config          # Show current configuration
finish command "text"  # Test completions manually
```

## Advanced

### Debug mode

```bash
export FINISH_DEBUG=1
finish command "your command here"
cat ~/.finish/finish.log
```

### Clear cache

```bash
rm -rf ~/.finish/cache/*.json
```
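Before wiping, it can be worth seeing how much is actually cached. The cache path comes from this README; the commands below are standard and tolerate a missing directory:

```shell
# Count cached completions and show the cache directory's size, if any.
ls ~/.finish/cache/*.json 2>/dev/null | wc -l
du -sh ~/.finish/cache 2>/dev/null || echo "no cache directory"
```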

## License

BSD 2-Clause.
