# finish

AI-powered shell completion that runs 100% on your machine.

One command and your terminal learns what you type next.

## Install

```bash
# On Ubuntu/Debian, first ensure python3-venv is installed
sudo apt-get install python3-venv

# Then run the installer
curl -sSL https://git.appmodel.nl/tour/finish/raw/branch/main/docs/install.sh | bash
source ~/.bashrc  # or ~/.zshrc

# Optional: if your LLM host is reachable as plato.lan, map it in /etc/hosts:
echo "192.168.1.74 plato.lan" | sudo tee -a /etc/hosts
```

Press **Alt+\\** after typing any command to get intelligent completions: no cloud, no data leaks, instant results.

## How it works

1. Captures your current directory, recent history, environment variables, and available tools
2. Analyzes your intent and builds a context-aware prompt
3. Queries your local LLM (LM Studio, Ollama, or any OpenAI-compatible endpoint)
4. Returns 2-5 ranked completions with an interactive picker
5. Caches results for instant replay

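Step 1 can be pictured as plain shell. The variable names below are illustrative, not finish internals; the real tool collects richer data than this sketch:

```shell
# Hypothetical sketch of the context gathered in step 1
ctx_cwd=$(pwd)                          # current directory
ctx_shell=$(basename "${SHELL:-/bin/sh}")  # active shell
ctx_tools=$(command -v git docker 2>/dev/null | tr '\n' ' ')  # available tools
printf 'cwd=%s shell=%s tools=%s\n' "$ctx_cwd" "$ctx_shell" "$ctx_tools"
```
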
## Usage

Type a command, then press **Alt+\\**:

```bash
# Natural language commands
show gpu status                      # → nvidia-smi
resolve title of website google.com  # → curl -s https://google.com | grep -oP '<title>\K[^<]+'
make file about dogs                 # → echo "About Dogs" > dogs.md

# Partial commands
git commit        # → git commit -m "..."
docker run        # → docker run -it --rm ubuntu bash
find large files  # → find . -type f -size +100M
```

Navigate with ↑↓, press Enter to accept, Esc to cancel.

## Configure

View current configuration:

```bash
finish config
```

Edit configuration file:

```bash
nano ~/.finish/finish.json
```

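After editing, it helps to confirm the file still parses as JSON before finish reads it. This check uses only standard Python tooling; the path matches the one above:

```shell
# Sanity-check that ~/.finish/finish.json is still valid JSON
cfg="$HOME/.finish/finish.json"
if [ -f "$cfg" ]; then
  python3 -m json.tool "$cfg" >/dev/null && echo "config OK" || echo "config has a JSON error"
else
  echo "no config file yet"
fi
```
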
## Configuration Examples

### Local Ollama

```json
{
  "provider": "ollama",
  "model": "llama3:latest",
  "endpoint": "http://localhost:11434/api/chat",
  "temperature": 0.0,
  "api_prompt_cost": 0.0,
  "api_completion_cost": 0.0,
  "max_history_commands": 20,
  "max_recent_files": 20,
  "cache_size": 100
}
```

### LM Studio

```json
{
  "provider": "lmstudio",
  "model": "dolphin3.0-llama3.1-8b@q4_k_m",
  "endpoint": "http://localhost:1234/v1/chat/completions",
  "temperature": 0.0,
  "api_prompt_cost": 0.0,
  "api_completion_cost": 0.0,
  "max_history_commands": 20,
  "max_recent_files": 20,
  "cache_size": 100
}
```

### OpenAI (or compatible API)

Note that `provider` stays `"lmstudio"`: that provider targets the OpenAI-compatible chat completions API, which OpenAI itself serves. Only the endpoint, model, API key, and cost settings change.

```json
{
  "provider": "lmstudio",
  "model": "gpt-4",
  "endpoint": "https://api.openai.com/v1/chat/completions",
  "api_key": "sk-...",
  "temperature": 0.0,
  "api_prompt_cost": 0.03,
  "api_completion_cost": 0.06,
  "max_history_commands": 20,
  "max_recent_files": 20,
  "cache_size": 100
}
```

## Requirements

- Python 3.7+
- Bash ≥4 or Zsh ≥5
- A local LLM running (Ollama, LM Studio, etc.) or API access

Python dependencies (installed automatically):

- httpx
- prompt_toolkit
- rich

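Before setting up, you can check whether a local LLM endpoint is already listening. The port below is Ollama's default (an assumption; adjust it for LM Studio or another server):

```shell
# Probe Ollama's default port for a listening LLM endpoint
if curl -s --max-time 2 http://localhost:11434/api/tags >/dev/null; then
  echo "local LLM endpoint reachable"
else
  echo "no local LLM endpoint detected"
fi
```
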
## Commands

```bash
finish install        # Set up Alt+\ keybinding
finish config         # Show current configuration
finish command "text" # Test completions manually
```

## Advanced

### Debug mode

```bash
export FINISH_DEBUG=1
finish command "your command here"
cat ~/.finish/finish.log
```

### Clear cache

```bash
rm -rf ~/.finish/cache/*.json
```

## License

BSD 2-Clause.