Usage Guide
Installation
Install and set up finish:
finish install
source ~/.bashrc
Configuration
View current settings:
finish config
Change settings:
finish config set temperature 0.5
finish config set endpoint http://localhost:1234/v1/chat/completions
finish config set model your-model-name
Reset to defaults:
finish config reset
Model Selection
Select a model interactively:
finish model
Use arrow keys to navigate, Enter to select, or 'q' to quit.
Basic Commands
Show help:
finish --help
Test completions without caching:
finish command "your command here"
Preview the prompt sent to the model:
finish command --dry-run "your command here"
View system information:
finish system
Usage Statistics
Check your usage stats and costs:
finish usage
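The cost figures reported by finish usage are derived from the api_prompt_cost and api_completion_cost settings described under Configuration File. A minimal sketch of that arithmetic (the rates below are made-up examples, not real defaults):

```python
# Per-token rates -- hypothetical example values; the real ones come from
# api_prompt_cost and api_completion_cost in ~/.finish/config.
api_prompt_cost = 0.000002      # cost per input (prompt) token
api_completion_cost = 0.000004  # cost per output (completion) token

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated cost of one request: each token class billed at its rate."""
    return (prompt_tokens * api_prompt_cost
            + completion_tokens * api_completion_cost)

# 1000 input tokens + 500 output tokens at the example rates above:
print(estimate_cost(1000, 500))  # 0.004
```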
Cache Management
Clear cached completions and logs:
finish clear
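The cache_dir and cache_size settings (see Configuration File) imply a completion cache capped at a fixed number of entries. A rough in-memory sketch of that behavior, with clear() playing the role of finish clear (the class and its methods are illustrative assumptions, not finish's actual code, which caches to disk):

```python
from collections import OrderedDict

class CompletionCache:
    """Illustrative size-capped cache: newest entries kept, oldest evicted."""

    def __init__(self, cache_size: int):
        self.cache_size = cache_size
        self._items = OrderedDict()  # prompt -> cached completion

    def put(self, prompt: str, completion: str) -> None:
        self._items[prompt] = completion
        self._items.move_to_end(prompt)          # mark as most recent
        while len(self._items) > self.cache_size:
            self._items.popitem(last=False)      # evict the oldest entry

    def get(self, prompt):
        return self._items.get(prompt)

    def clear(self) -> None:
        """Drop every cached completion, like `finish clear`."""
        self._items.clear()
```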
Enable/Disable
Temporarily disable:
finish disable
Re-enable:
finish enable
Uninstallation
Remove finish completely:
finish remove
Using finish
Once enabled, just type a command and press Tab twice to get suggestions:
git <TAB><TAB>
docker <TAB><TAB>
find <TAB><TAB>
Natural language also works:
# convert video to mp4 <TAB><TAB>
# list all processes using port 8080 <TAB><TAB>
# compress this folder to tar.gz <TAB><TAB>
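The examples above suggest that a leading # is what distinguishes a natural-language request from a partial shell command. A one-line sketch of that check (the helper is hypothetical, not part of finish itself):

```python
def is_natural_language(line: str) -> bool:
    """Heuristic implied by the examples above: a line beginning with '#'
    is treated as a natural-language request; anything else is a partial
    shell command to complete. (Hypothetical helper for illustration.)"""
    return line.lstrip().startswith("#")
```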
Configuration File
The config file is located at ~/.finish/config and contains:
provider: Model provider (lmstudio, ollama)
model: Model name
temperature: Response randomness (0.0 - 1.0)
endpoint: API endpoint URL
api_prompt_cost: Cost per input token
api_completion_cost: Cost per output token
max_history_commands: Number of recent commands to include in context
max_recent_files: Number of recent files to include in context
cache_dir: Directory for cached completions
cache_size: Maximum number of cached items
log_file: Path to usage log file
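If you want to read these settings from a script, a minimal parser might look like the following. This assumes a simple one-setting-per-line "key value" layout; the actual on-disk format of ~/.finish/config is not documented here, so treat this as a sketch:

```python
def parse_config(text: str) -> dict:
    """Parse an assumed 'key value' config format into a dict.
    Blank lines and '#' comments are skipped. The format itself is an
    assumption -- check your own ~/.finish/config before relying on it."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(" ")
        config[key] = value.strip()
    return config

sample = """\
provider lmstudio
temperature 0.5
endpoint http://localhost:1234/v1/chat/completions
"""
cfg = parse_config(sample)
```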