# finish.sh
AI-powered shell completion that runs 100% on your machine.
One command and your terminal learns what you type next.
## Install
```bash
curl -sSL https://git.appmodel.nl/tour/finish/raw/branch/main/docs/install.sh | bash
source ~/.bashrc # or ~/.zshrc
```
Press `Tab` twice on any partial command and finish.sh suggests the rest—no cloud, no data leak, no latency.
## How it works
1. Captures your current directory, recent shell history, and environment variables.
2. Builds a concise prompt for a local LLM (LM-Studio, Ollama, or any OpenAI-compatible endpoint).
3. Returns ranked completions in <200 ms, cached for instant replay.
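Steps 1 and 2 can be sketched as a small shell function. This is illustrative only, not finish.sh's actual internals; the model name and prompt wording are placeholders, and it assumes `jq` (already a requirement) is on your `PATH`:

```bash
# Build an OpenAI-style chat payload from the current shell context.
# Hypothetical sketch -- finish.sh's real prompt format may differ.
build_prompt() {
  local partial="$1"
  local cwd hist
  cwd=$(pwd)
  hist=$(tail -n 5 ~/.bash_history 2>/dev/null || true)
  jq -n --arg cwd "$cwd" --arg hist "$hist" --arg cmd "$partial" \
    '{model: "codellama:13b",
      messages: [
        {role: "system",
         content: "Complete the shell command. Reply with the command only."},
        {role: "user",
         content: "cwd: \($cwd)\nrecent history:\n\($hist)\npartial: \($cmd)"}
      ]}'
}

# The payload would then be POSTed to the configured endpoint, e.g.:
#   curl -s "$ENDPOINT" -H 'Content-Type: application/json' \
#     -d "$(build_prompt 'docker ')" | jq -r '.choices[0].message.content'
build_prompt "docker "
```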
## Use
```bash
docker <Tab><Tab> # → docker run -it --rm ubuntu bash
git commit <Tab><Tab> # → git commit -m "feat: add finish.sh"
# large files <Tab><Tab> # → find . -type f -size +100M
```
Dry-run mode:
```bash
finish --dry-run "tar czf backup.tar.gz"
```
## Configure
```bash
finish config set endpoint http://plato.lan:11434/v1/chat/completions
finish config set model codellama:13b
finish model # interactive picker
```
## Providers
| Provider  | Auth | URL                         | Notes   |
|-----------|------|-----------------------------|---------|
| LM-Studio | none | `http://localhost:1234/v1`  | default |
| Ollama    | none | `http://localhost:11434`    |         |
| OpenAI    | key  | `https://api.openai.com/v1` |         |
Add others by editing `~/.finish/config`.
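The config file's schema isn't documented here; assuming a simple shell-style `key=value` layout (key names below are hypothetical, check your own `~/.finish/config` for the real ones), a custom provider entry might look like:

```bash
# Hypothetical ~/.finish/config entry for a self-hosted endpoint.
# Key names are illustrative; the actual schema may differ.
endpoint=http://plato.lan:11434/v1/chat/completions
model=codellama:13b
api_key=        # leave empty for keyless local providers
```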
## Commands
```bash
finish install # hook into shell
finish remove # uninstall
finish clear # wipe cache & logs
finish usage # tokens & cost
```
## Requirements
Bash ≥4 or Zsh ≥5, curl, jq, bc.
Optional: bash-completion.
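A convenience snippet (not part of finish.sh) to verify the prerequisites above before installing:

```bash
# Report any missing tools; warns on bash < 4 (zsh users can ignore that check).
check_requirements() {
  local missing=0 cmd
  for cmd in curl jq bc; do
    command -v "$cmd" >/dev/null 2>&1 || { echo "missing: $cmd"; missing=1; }
  done
  if [ -n "${BASH_VERSION:-}" ] && [ "${BASH_VERSINFO[0]}" -lt 4 ]; then
    echo "bash >= 4 required (found $BASH_VERSION)"
    missing=1
  fi
  return "$missing"
}

if check_requirements; then echo "all requirements met"; fi
```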
## License
BSD 2-Clause.