# finish
AI-powered shell completion that runs 100% on your machine.
One command and your terminal learns what you type next.
## Install
```bash
# On Ubuntu/Debian, first ensure python3-venv is installed
sudo apt-get install python3-venv
# Then run the installer
curl -sSL https://git.appmodel.nl/tour/finish/raw/branch/main/docs/install.sh | bash
source ~/.bashrc   # or ~/.zshrc

# For the plato special, add plato.lan to /etc/hosts:
echo "192.168.1.74 plato.lan" | sudo tee -a /etc/hosts
```

Press Alt+\ after typing any command to get intelligent completions: no cloud, no data leaks, instant results.
## How it works
- Captures your current directory, recent history, env vars, and available tools
- Analyzes your intent and builds a context-aware prompt
- Queries your local LLM (LM Studio, Ollama, or any OpenAI-compatible endpoint)
- Returns 2-5 ranked completions with an interactive picker
- Results are cached for instant replay
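The context-gathering and prompt-building steps above can be sketched in a few lines of Python. The field names and prompt layout here are illustrative assumptions, not finish's actual internals:

```python
import os
import json

def build_context(history, max_history_commands=20):
    """Collect shell context for the prompt (hypothetical field names)."""
    return {
        "cwd": os.getcwd(),
        "history": history[-max_history_commands:],  # most recent commands first
        "shell": os.environ.get("SHELL", "/bin/bash"),
    }

def build_payload(partial_command, context, model="llama3:latest"):
    """Build an OpenAI-compatible chat request asking for ranked completions."""
    system = (
        "You complete shell commands. Context:\n" + json.dumps(context)
        + "\nReturn 2-5 completions, one per line, best first."
    )
    return {
        "model": model,
        "temperature": 0.0,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": partial_command},
        ],
    }

payload = build_payload("git commit", build_context(["git add -A"]))
print(payload["messages"][1]["content"])  # → git commit
```

The same payload shape works against Ollama, LM Studio, or any other OpenAI-compatible chat endpoint, which is why a single provider abstraction covers all three configs below.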
## Usage

Type a command, then press Alt+\:

```bash
# Natural language commands
show gpu status                        # → nvidia-smi
resolve title of website google.com    # → curl -s https://google.com | grep -oP '<title>\K[^<]+'
make file about dogs                   # → echo "About Dogs" > dogs.md

# Partial commands
git commit          # → git commit -m "..."
docker run          # → docker run -it --rm ubuntu bash
find large files    # → find . -type f -size +100M
```

Navigate with ↑↓, press Enter to accept, Esc to cancel.
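Before the picker can show the 2-5 ranked suggestions, they have to be parsed out of the model's reply. A minimal sketch of that step, assuming the model was asked for one completion per line (chatty models often add list markers or backticks, so those are stripped):

```python
def parse_completions(reply_text, max_items=5):
    """Split a model reply into ranked completion candidates.

    Assumes a one-completion-per-line reply format; strips list
    markers and stray backticks, and de-duplicates while keeping order.
    """
    items = []
    for line in reply_text.splitlines():
        line = line.strip().lstrip("-*0123456789. ").strip("`")
        if line and line not in items:
            items.append(line)
    return items[:max_items]

reply = '1. git commit -m "..."\n2. git commit --amend\n'
print(parse_completions(reply))
# → ['git commit -m "..."', 'git commit --amend']
```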
## Configure

View the current configuration:

```bash
finish config
```

Edit the configuration file:

```bash
nano ~/.finish/finish.json
```
### Configuration Examples

#### Local Ollama

```json
{
  "provider": "ollama",
  "model": "llama3:latest",
  "endpoint": "http://localhost:11434/api/chat",
  "temperature": 0.0,
  "api_prompt_cost": 0.0,
  "api_completion_cost": 0.0,
  "max_history_commands": 20,
  "max_recent_files": 20,
  "cache_size": 100
}
```
#### LM Studio

```json
{
  "provider": "lmstudio",
  "model": "dolphin3.0-llama3.1-8b@q4_k_m",
  "endpoint": "http://localhost:1234/v1/chat/completions",
  "temperature": 0.0,
  "api_prompt_cost": 0.0,
  "api_completion_cost": 0.0,
  "max_history_commands": 20,
  "max_recent_files": 20,
  "cache_size": 100
}
```
#### OpenAI (or compatible API)

```json
{
  "provider": "lmstudio",
  "model": "gpt-4",
  "endpoint": "https://api.openai.com/v1/chat/completions",
  "api_key": "sk-...",
  "temperature": 0.0,
  "api_prompt_cost": 0.03,
  "api_completion_cost": 0.06,
  "max_history_commands": 20,
  "max_recent_files": 20,
  "cache_size": 100
}
```
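A config like the examples above could be loaded and sanity-checked with a few lines of Python. The required-key list here mirrors the examples, not a documented schema, and the defaults are the values shown above:

```python
import json

# Keys every example config above includes (assumed required)
REQUIRED_KEYS = {"provider", "model", "endpoint"}

def load_config(path):
    """Load a ~/.finish/finish.json-style config and check required keys."""
    with open(path) as f:
        cfg = json.load(f)
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise ValueError(f"config missing keys: {sorted(missing)}")
    # Fall back to the defaults seen in the examples above
    cfg.setdefault("temperature", 0.0)
    cfg.setdefault("max_history_commands", 20)
    cfg.setdefault("max_recent_files", 20)
    cfg.setdefault("cache_size", 100)
    return cfg
```

Note that omitting any of the required keys raises immediately, which is friendlier than a confusing HTTP error at completion time.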
## Requirements
- Python 3.7+
- Bash ≥4 or Zsh ≥5
- A local LLM running (Ollama, LM Studio, etc.) or API access
Python dependencies (installed automatically):
- httpx
- prompt_toolkit
- rich
## Commands

```bash
finish install          # Set up Alt+\ keybinding
finish config           # Show current configuration
finish command "text"   # Test completions manually
```
## Advanced

### Debug mode

```bash
export FINISH_DEBUG=1
finish command "your command here"
cat ~/.finish/finish.log
```

### Clear cache

```bash
rm -rf ~/.finish/cache/*.json
```
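Since the cache is just `*.json` files under `~/.finish/cache/`, a plausible scheme is one file per prompt, keyed by a hash. This sketch is a guess at how such a cache could work, not finish's actual layout; the hashing and filename scheme are hypothetical:

```python
import hashlib
import json
import os

CACHE_DIR = os.path.expanduser("~/.finish/cache")  # path from the docs above

def cache_path(prompt):
    """Derive a cache filename from the prompt (hashing scheme is hypothetical)."""
    digest = hashlib.sha256(prompt.encode()).hexdigest()[:16]
    return os.path.join(CACHE_DIR, f"{digest}.json")

def cached_completions(prompt):
    """Return cached completions for a prompt, or None on a cache miss."""
    path = cache_path(prompt)
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return None

def store_completions(prompt, completions):
    """Persist completions so repeat presses of Alt+\\ replay instantly."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(cache_path(prompt), "w") as f:
        json.dump(completions, f)
```

Deleting the `*.json` files, as shown above, is then always safe: the worst case is one extra round-trip to the model.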
## License

BSD 2-Clause.