Compare commits

19 Commits
1.0 ... main

Author SHA1 Message Date
mike
909d5ad345 update_path
Some checks failed
Docker Build and Push / build-and-push (push) Failing after 9s
Tests / test (bash) (push) Failing after 14s
Tests / test (zsh) (push) Failing after 8s
Tests / lint (push) Successful in 7s
Tests / docker (push) Successful in 5s
2025-12-11 17:00:00 +01:00
mike
2a0375b5bb update_path
Some checks failed
Tests / test (bash) (push) Failing after 51s
Tests / test (zsh) (push) Failing after 12s
Tests / lint (push) Successful in 7s
Tests / docker (push) Successful in 5s
Docker Build and Push / build-and-push (push) Failing after 11m46s
2025-12-11 16:46:02 +01:00
mike
2e2de4773b update_path
Some checks failed
Docker Build and Push / build-and-push (push) Failing after 10s
Tests / test (bash) (push) Failing after 8s
Tests / test (zsh) (push) Failing after 8s
Tests / lint (push) Successful in 7s
Tests / docker (push) Successful in 4s
2025-12-11 16:42:56 +01:00
mike
47b90a2e90 update_path
Some checks failed
Docker Build and Push / build-and-push (push) Failing after 9s
Tests / test (bash) (push) Failing after 8s
Tests / test (zsh) (push) Failing after 10s
Tests / lint (push) Successful in 8s
Tests / docker (push) Successful in 5s
2025-12-11 16:39:22 +01:00
mike
66034eda34 Crazy! Added support for ALT+\
Some checks failed
Docker Build and Push / build-and-push (push) Failing after 10s
Tests / test (bash) (push) Failing after 8s
Tests / test (zsh) (push) Failing after 8s
Tests / lint (push) Successful in 7s
Tests / docker (push) Successful in 6s
2025-12-11 16:23:40 +01:00
mike
54d265067e update_path
Some checks failed
Docker Build and Push / build-and-push (push) Failing after 10s
Tests / test (bash) (push) Failing after 8s
Tests / test (zsh) (push) Failing after 8s
Tests / lint (push) Successful in 8s
Tests / docker (push) Successful in 5s
2025-12-11 16:10:34 +01:00
mike
607592466a Crazy! Added support for ALT+\
Some checks failed
Docker Build and Push / build-and-push (push) Failing after 10s
Tests / test (bash) (push) Failing after 11s
Tests / test (zsh) (push) Failing after 8s
Tests / lint (push) Successful in 7s
Tests / docker (push) Successful in 5s
2025-12-11 15:58:55 +01:00
mike
b9ceaecc3d message
Some checks failed
Docker Build and Push / build-and-push (push) Failing after 9s
Tests / test (bash) (push) Failing after 9s
Tests / test (zsh) (push) Failing after 9s
Tests / lint (push) Successful in 9s
Tests / docker (push) Successful in 5s
2025-12-11 13:56:14 +01:00
mike
f4ef534e3f Crazy! Added support for ALT+\
Some checks failed
Docker Build and Push / build-and-push (push) Failing after 10s
Tests / test (bash) (push) Failing after 9s
Tests / test (zsh) (push) Failing after 9s
Tests / lint (push) Successful in 7s
Tests / docker (push) Successful in 5s
2025-12-11 12:50:16 +01:00
mike
91d4592272 -init py- 2025-12-11 12:34:09 +01:00
mike
f906e6666a -init-
Some checks failed
Docker Build and Push / build-and-push (push) Failing after 10s
Tests / test (bash) (push) Failing after 9s
Tests / test (zsh) (push) Failing after 10s
Tests / lint (push) Successful in 8s
Tests / docker (push) Successful in 6s
2025-12-11 09:14:52 +01:00
mike
3cb1b57ac5 all 2025-12-11 08:48:13 +01:00
mike
9237ad48fe update
Some checks failed
Docker Build and Push / build-and-push (push) Failing after 10s
Tests / test (bash) (push) Successful in 9s
Tests / test (zsh) (push) Successful in 8s
Tests / lint (push) Successful in 8s
Tests / docker (push) Successful in 4s
2025-12-11 08:30:07 +01:00
mike
074a2b1f5f update
Some checks failed
Docker Build and Push / build-and-push (push) Failing after 24s
Tests / test (bash) (push) Successful in 10s
Tests / test (zsh) (push) Successful in 10s
Tests / lint (push) Successful in 9s
Tests / docker (push) Successful in 19s
2025-12-11 08:27:03 +01:00
mike
3f9acb08ec -init- 2025-12-11 01:53:06 +01:00
mike
56bc1cb21b -init- 2025-12-11 01:21:53 +01:00
mike
43a408d962 -init- 2025-12-11 01:21:46 +01:00
ac343050a8 typo fix
Some checks failed
Docker Build and Push / build-and-push (push) Failing after 12s
Tests / test (bash) (push) Failing after 17s
Tests / test (zsh) (push) Failing after 16s
Tests / lint (push) Successful in 9s
Tests / docker (push) Successful in 7s
2025-12-02 11:40:38 +01:00
c6e5f14433 Stream response
Some checks failed
Docker Build and Push / build-and-push (push) Failing after 12s
Tests / test (bash) (push) Failing after 17s
Tests / test (zsh) (push) Failing after 16s
Tests / lint (push) Successful in 9s
Tests / docker (push) Successful in 7s
2025-12-02 11:21:58 +01:00
20 changed files with 1053 additions and 1089 deletions


@@ -56,7 +56,7 @@ jobs:
 ### Quick Install (Linux/macOS)
 ```bash
-curl -sSL https://git.appmodel.nl/Tour/finish/main/docs/install.sh | bash
+curl -sSL https://git.appmodel.nl/tour/finish/main/docs/install.sh | bash
 ```
 ### Homebrew (macOS)

ARCHITECTURE.md (new file, 72 lines)

@@ -0,0 +1,72 @@
User Input (Tab×2 or CLI)
┌───────────────────────────────────────┐
│ Entry Points │
├───────────────────────────────────────┤
│ • _finishsh() - Tab completion │
│ • command_command() - CLI mode │
└───────────────┬───────────────────────┘
┌───────────────────────────────────────┐
│ Configuration Layer │
├───────────────────────────────────────┤
│ acsh_load_config() │
│ • Read ~/.finish/config │
│ • Set ACSH_PROVIDER, ACSH_ENDPOINT │
│ • Set ACSH_ACTIVE_API_KEY (if needed) │
└───────────────┬───────────────────────┘
┌───────────────────────────────────────┐
│ Cache Layer │
├───────────────────────────────────────┤
│ • Check cache_dir/acsh-{hash}.txt │
│ • Return cached completions if exists │
└───────────────┬───────────────────────┘
↓ (cache miss)
┌───────────────────────────────────────┐
│ Context Builder │
├───────────────────────────────────────┤
│ _build_prompt() │
│ • Command history (sanitized) │
│ • Terminal info (env vars, cwd) │
│ • Recent files (ls -ld) │
│ • Help message (cmd --help) │
└───────────────┬───────────────────────┘
┌───────────────────────────────────────┐
│ Payload Builder │
├───────────────────────────────────────┤
│ _build_payload() │
│ • Format: {model, messages, temp} │
│ • Provider-specific options: │
│ - Ollama: {format:"json"} │
│ - LMStudio: {stream:true} │
│ - OpenAI: {response_format} │
└───────────────┬───────────────────────┘
┌───────────────────────────────────────┐
│ LLM Provider Layer │
├───────────────────────────────────────┤
│ openai_completion() │
│ • curl to endpoint │
│ • Ollama: no auth header │
│ • Others: Authorization header if key │
│ • Parse response (JSON/streaming) │
└───────────────┬───────────────────────┘
┌───────────────────────────────────────┐
│ Response Parser │
├───────────────────────────────────────┤
│ • Extract completions array │
│ • Fallback parsing for non-JSON │
│ • Filter command-like lines │
└───────────────┬───────────────────────┘
┌───────────────────────────────────────┐
│ Output Layer │
├───────────────────────────────────────┤
│ • Write to cache │
│ • Log usage (tokens, cost) │
│ • Return COMPREPLY array (completion) │
│ • or stdout (CLI mode) │
└───────────────────────────────────────┘
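The same flow, condensed into a runnable sketch. This is illustrative only: it assumes a local Ollama endpoint, `jq`, and a simplified prompt; the real implementation is `src/finish.py`.

```bash
#!/usr/bin/env bash
# Sketch of the pipeline above: cache lookup -> prompt -> provider call -> parse.
# Assumes Ollama on localhost and jq installed; the prompt and paths are illustrative.
line="git comm"
mkdir -p "$HOME/.finish/cache"
key=$(printf '%s' "$line" | md5sum | cut -d' ' -f1)      # cache key derived from the input line
cache="$HOME/.finish/cache/acsh-$key.txt"

if [ -f "$cache" ]; then
  cat "$cache"                                           # cache hit: return stored completions
else
  payload=$(jq -n --arg model "llama3:latest" --arg prompt "Suggest completions for: $line" \
    '{model: $model, stream: false, format: "json",
      messages: [{role: "user", content: $prompt}]}')    # Ollama-specific: format:"json", no auth header
  curl -s http://localhost:11434/api/chat -d "$payload" \
    | jq -r '.message.content' \
    | jq -r '.completions[]' \
    | tee "$cache"                                       # cache miss: store and return
fi
```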


@@ -26,7 +26,7 @@ RUN apt-get update && \
 WORKDIR /opt/finish
 # Copy application files
-COPY finish.sh /usr/bin/finish
+COPY src/finish.sh /usr/bin/finish
 COPY README.md LICENSE ./
 # Make script executable


@@ -1,34 +0,0 @@
# finish.sh Test Container
FROM ubuntu:22.04
LABEL maintainer="finish contributors"
LABEL description="Test environment for finish.sh"
ENV DEBIAN_FRONTEND=noninteractive
# Install dependencies including BATS for testing
RUN apt-get update && \
apt-get install -y \
bash \
bash-completion \
curl \
wget \
jq \
bc \
vim \
git \
bats \
&& apt-get clean && \
rm -rf /var/lib/apt/lists/*
WORKDIR /opt/finish
# Copy all files
COPY . .
# Make scripts executable
RUN chmod +x finish.sh
# Run tests by default
ENTRYPOINT ["bats"]
CMD ["tests"]


@@ -1,365 +0,0 @@
# Publishing Guide for finish.sh
This guide explains how to publish and distribute finish.sh through various channels.
## Overview
finish.sh can be distributed through multiple channels:
1. **Direct installation script** - One-line curl install
2. **Debian/Ubuntu packages** - APT repository
3. **Homebrew** - macOS package manager
4. **Docker** - Container images
5. **GitHub Releases** - Direct downloads
## Prerequisites
Before publishing, ensure:
- [ ] All tests pass (`./run_tests.sh`)
- [ ] Version numbers are updated in all files
- [ ] CHANGELOG.md is updated
- [ ] Documentation is complete and accurate
## Version Management
Update version numbers in these files:
1. `finish.sh` - Line 34: `export ACSH_VERSION=0.5.0`
2. `debian/changelog` - Add new entry
3. `homebrew/finish.rb` - Line 6: `version "0.5.0"`
4. `docs/install.sh` - Line 7: `ACSH_VERSION="v0.5.0"`
5. `Dockerfile` - Line 6: `LABEL version="0.5.0"`
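A quick consistency check before tagging, sketched against the locations listed above (paths and grep patterns are assumptions tied to this layout):

```bash
#!/usr/bin/env bash
# Sketch: confirm the version strings listed above agree before cutting a release.
want="0.5.0"
grep -q "ACSH_VERSION=${want}" finish.sh               || echo "finish.sh: version mismatch"
grep -q "version \"${want}\"" homebrew/finish.rb       || echo "homebrew/finish.rb: version mismatch"
grep -q "ACSH_VERSION=\"v${want}\"" docs/install.sh    || echo "docs/install.sh: version mismatch"
grep -q "version=\"${want}\"" Dockerfile               || echo "Dockerfile: version mismatch"
head -n 1 debian/changelog | grep -q "${want}"         || echo "debian/changelog: version mismatch"
```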
## 1. Direct Installation Script
The install script at `docs/install.sh` enables one-line installation:
```bash
curl -sSL https://git.appmodel.nl/Tour/finish/raw/branch/main/docs/install.sh | bash
```
### Setup:
- Host the script on a reliable server or GitHub/GitLab
- Ensure the URL is accessible and supports HTTPS
- Test the installation on clean systems
### Testing:
```bash
# Test in Docker
docker run --rm -it ubuntu:22.04 bash
curl -sSL YOUR_URL/install.sh | bash
```
## 2. Debian/Ubuntu Packages
### Building the Package
```bash
# Install build tools
sudo apt-get install debhelper devscripts
# Build the package
dpkg-buildpackage -us -uc -b
# This creates:
# ../finish_0.5.0-1_all.deb
```
### Testing the Package
```bash
# Install locally
sudo dpkg -i ../finish_*.deb
sudo apt-get install -f # Fix dependencies
# Test installation
finish --help
finish install
```
### Creating an APT Repository
#### Option A: Using Packagecloud (Recommended for beginners)
1. Sign up at https://packagecloud.io
2. Create a repository
3. Upload your .deb file:
```bash
gem install package_cloud
package_cloud push yourname/finish/ubuntu/jammy ../finish_*.deb
```
Users install with:
```bash
curl -s https://packagecloud.io/install/repositories/yourname/finish/script.deb.sh | sudo bash
sudo apt-get install finish
```
#### Option B: Using GitHub Pages + reprepro
1. Create a repository structure:
```bash
mkdir -p apt-repo/{conf,pool,dists}
```
2. Create `apt-repo/conf/distributions`:
```
Origin: finish.sh
Label: finish
Codename: stable
Architectures: all
Components: main
Description: APT repository for finish.sh
```
3. Add packages:
```bash
reprepro -b apt-repo includedeb stable ../finish_*.deb
```
4. Host on GitHub Pages or your server
Users install with:
```bash
echo "deb https://your-domain.com/apt-repo stable main" | sudo tee /etc/apt/sources.list.d/finish.list
curl -fsSL https://your-domain.com/apt-repo/key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install finish
```
## 3. Homebrew Formula
### Creating a Homebrew Tap
1. Create a GitHub repository: `homebrew-finish`
2. Add the formula to `Formula/finish.rb`
3. Create a release tarball and calculate SHA256:
```bash
git archive --format=tar.gz --prefix=finish-0.5.0/ v0.5.0 > finish-0.5.0.tar.gz
sha256sum finish-0.5.0.tar.gz
```
4. Update the formula with the correct URL and SHA256
5. Test the formula:
```bash
brew install --build-from-source ./homebrew/finish.rb
brew test finish
brew audit --strict finish
```
### Publishing the Tap
Users can install with:
```bash
brew tap yourname/finish
brew install finish
```
Or directly:
```bash
brew install yourname/finish/finish
```
## 4. Docker Images
### Building Images
```bash
# Production image
docker build -t finish:0.5.0 -t finish:latest .
# Test image
docker build -f Dockerfile.test -t finish:test .
```
### Publishing to GitHub Container Registry
The GitHub Actions workflow automatically publishes Docker images when you push a tag:
```bash
git tag -a v0.5.0 -m "Release v0.5.0"
git push origin v0.5.0
```
Images will be available at:
```
ghcr.io/appmodel/finish:0.5.0
ghcr.io/appmodel/finish:latest
```
### Publishing to Docker Hub
```bash
# Login
docker login
# Tag and push
docker tag finish:latest yourname/finish:0.5.0
docker tag finish:latest yourname/finish:latest
docker push yourname/finish:0.5.0
docker push yourname/finish:latest
```
## 5. GitHub Releases
### Automated Release Process
The `.github/workflows/release.yml` workflow automatically:
1. Creates release artifacts when you push a tag
2. Builds Debian packages
3. Publishes Docker images
4. Creates a GitHub release with downloads
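Once the tag pipeline finishes, a quick spot-check of the artifacts might look like the sketch below; the image name follows the examples in this guide, while the release download URL and asset name are assumptions:

```bash
# Sketch: spot-check release artifacts after the tag pipeline completes.
ver=0.5.0
docker pull ghcr.io/appmodel/finish:${ver}
docker run --rm ghcr.io/appmodel/finish:${ver} finish --help   # assumes the image puts `finish` on PATH
# Release URL pattern and asset name below are assumptions; adjust to your release page.
curl -sLO "https://git.appmodel.nl/Tour/finish/releases/download/v${ver}/finish_${ver}-1_all.deb"
dpkg-deb -I "finish_${ver}-1_all.deb"                          # inspect package metadata before installing
```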
### Manual Release Process
1. Create a release tarball:
```bash
git archive --format=tar.gz --prefix=finish-0.5.0/ v0.5.0 > finish-0.5.0.tar.gz
```
2. Create release on GitHub:
- Go to Releases → Draft a new release
- Tag: `v0.5.0`
- Title: `finish.sh v0.5.0`
- Upload: tarball and .deb file
- Write release notes
## Release Checklist
Before releasing a new version:
- [ ] Update all version numbers
- [ ] Update CHANGELOG.md
- [ ] Run all tests
- [ ] Test installation script
- [ ] Build and test Debian package
- [ ] Test Docker images
- [ ] Update documentation
- [ ] Create git tag
- [ ] Push tag to trigger CI/CD
- [ ] Verify GitHub release created
- [ ] Verify Docker images published
- [ ] Update Homebrew formula with new SHA256
- [ ] Test installation from all sources
- [ ] Announce release (if applicable)
## Distribution URLs
After publishing, users can install from:
**Quick Install:**
```bash
curl -sSL https://git.appmodel.nl/Tour/finish/raw/branch/main/docs/install.sh | bash
```
**Homebrew:**
```bash
brew tap appmodel/finish
brew install finish
```
**APT (if configured):**
```bash
sudo apt-get install finish
```
**Docker:**
```bash
docker pull ghcr.io/appmodel/finish:latest
```
**Direct Download:**
```
https://git.appmodel.nl/Tour/finish/releases/latest
```
## Troubleshooting
### Debian Package Issues
**Problem:** Package won't install
```bash
# Check dependencies
dpkg-deb -I finish_*.deb
# Test installation
sudo dpkg -i finish_*.deb
sudo apt-get install -f
```
**Problem:** Lintian warnings
```bash
lintian finish_*.deb
# Fix issues in debian/ files
```
### Homebrew Issues
**Problem:** Formula audit fails
```bash
brew audit --strict ./homebrew/finish.rb
# Fix issues reported
```
**Problem:** Installation fails
```bash
brew install --verbose --debug ./homebrew/finish.rb
```
### Docker Issues
**Problem:** Build fails
```bash
docker build --no-cache -t finish:test .
```
**Problem:** Image too large
```bash
# Use multi-stage builds
# Remove unnecessary files
# Combine RUN commands
```
## Continuous Integration
The project includes three GitHub Actions workflows:
1. **test.yml** - Runs on every push/PR
- Runs BATS tests
- Tests installation
- Runs shellcheck
- Tests Docker builds
2. **docker.yml** - Builds and publishes Docker images
- Triggered by tags and main branch
- Publishes to GitHub Container Registry
3. **release.yml** - Creates releases
- Triggered by version tags
- Builds all artifacts
- Creates GitHub release
- Publishes packages
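The same checks can be reproduced locally before pushing; a sketch, assuming shellcheck, BATS, and Docker are installed:

```bash
# Roughly mirrors test.yml locally; tool availability is assumed.
shellcheck finish.sh               # lint
bats tests/                        # BATS test suite
./run_tests.sh                     # project test runner
docker build -t finish:ci-local .  # confirm the production image still builds
```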
## Support and Documentation
- Update the README.md with installation instructions
- Maintain CHANGELOG.md with version history
- Create issue templates for bug reports and feature requests
- Set up discussions for community support
## Security Considerations
- Always use HTTPS for installation scripts
- Sign Debian packages with GPG
- Use checksums for release artifacts
- Regularly update dependencies
- Monitor for security vulnerabilities
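For the checksum and signing steps, a minimal sketch (assumes a GPG key is already configured):

```bash
# Generate and verify release checksums and a detached GPG signature.
sha256sum finish-0.5.0.tar.gz > SHA256SUMS
gpg --armor --detach-sign finish-0.5.0.tar.gz     # writes finish-0.5.0.tar.gz.asc
sha256sum -c SHA256SUMS                           # verify checksums
gpg --verify finish-0.5.0.tar.gz.asc finish-0.5.0.tar.gz
```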
## Next Steps
After initial publication:
1. Monitor GitHub issues for user feedback
2. Set up analytics if desired
3. Create documentation site
4. Announce on relevant forums/communities
5. Consider submitting to package manager directories

README.md (355 lines changed)

@@ -1,300 +1,149 @@
# finish.sh ```markdown
# finish
Command-line completion powered by local LLMs. Press `Tab` twice and get intelligent suggestions based on your terminal context. AI-powered shell completion that runs 100% on your machine.
## Quick Start One command and your terminal learns what you type next.
### One-Line Install (Recommended) ## Install
```bash ```bash
curl -sSL https://git.appmodel.nl/Tour/finish/raw/branch/main/docs/install.sh | bash # On Ubuntu/Debian, first ensure python3-venv is installed
sudo apt-get install python3-venv
# Then run the installer
curl -sSL https://git.appmodel.nl/tour/finish/raw/branch/main/docs/install.sh | bash
source ~/.bashrc # or ~/.zshrc
# Optional (plato-specific setup): add plato.lan to /etc/hosts
echo "192.168.1.74 plato.lan" | sudo tee -a /etc/hosts
``` ```
### Manual Installation Press **Alt+\\** after typing any command to get intelligent completions—no cloud, no data leak, instant results.
## How it works
1. Captures your current directory, recent history, env vars, and available tools
2. Analyzes your intent and builds a context-aware prompt
3. Queries your local LLM (LM Studio, Ollama, or any OpenAI-compatible endpoint)
4. Returns 2-5 ranked completions with an interactive picker
5. Results are cached for instant replay
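The same pipeline can also be driven directly from the CLI (both entry points are defined in `src/finish.py`):

```bash
finish command "show gpu status"          # run the full pipeline and print the chosen completion
finish --readline-complete "git comm"     # what the Alt+\ binding invokes under the hood
```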
## Usage
Type a command, then press **Alt+\\**:
```bash ```bash
git clone https://git.appmodel.nl/Tour/finish.git finish # Natural language commands
cd finish show gpu status # → nvidia-smi
./finish.sh install resolve title of website google.com # → curl -s https://google.com | grep -oP '<title>\K[^<]+'
source ~/.bashrc make file about dogs # → echo "About Dogs" > dogs.md
# Partial commands
git commit # → git commit -m "..."
docker run # → docker run -it --rm ubuntu bash
find large files # → find . -type f -size +100M
``` ```
### Other Installation Methods Navigate with ↑↓, press Enter to accept, Esc to cancel.
See [Installation](#installation) below for package managers, Docker, and more options. ## Configure
## What It Does View current configuration:
finish.sh enhances your terminal with context-aware command suggestions. It analyzes:
- Current directory and recent files
- Command history
- Environment variables
- Command help output
Then generates relevant completions using a local LLM.
## Features
**Local by Default**
Runs with LM-Studio on localhost. No external API calls, no data leaving your machine.
**Context-Aware**
Understands what you're doing based on your environment and history.
**Cached**
Stores recent queries locally for instant responses.
**Flexible**
Switch between different models and providers easily.
## Configuration
View current settings:
```bash ```bash
finish config finish config
``` ```
Change the endpoint or model: Edit configuration file:
```bash ```bash
finish config set endpoint http://192.168.1.100:1234/v1/chat/completions nano ~/.finish/finish.json
finish config set model your-model-name
``` ```
Select a model interactively: ## Configuration Examples
```bash ### Local Ollama
finish model ```json
{
"provider": "ollama",
"model": "llama3:latest",
"endpoint": "http://localhost:11434/api/chat",
"temperature": 0.0,
"api_prompt_cost": 0.0,
"api_completion_cost": 0.0,
"max_history_commands": 20,
"max_recent_files": 20,
"cache_size": 100
}
``` ```
## Usage ### LM Studio
```json
Type a command and press `Tab` twice: {
"provider": "lmstudio",
```bash "model": "dolphin3.0-llama3.1-8b@q4_k_m",
docker <TAB><TAB> "endpoint": "http://localhost:1234/v1/chat/completions",
git commit <TAB><TAB> "temperature": 0.0,
ffmpeg <TAB><TAB> "api_prompt_cost": 0.0,
"api_completion_cost": 0.0,
"max_history_commands": 20,
"max_recent_files": 20,
"cache_size": 100
}
``` ```
Natural language works too: ### OpenAI (or compatible API)
```json
```bash {
# find large files <TAB><TAB> "provider": "lmstudio",
# compress to zip <TAB><TAB> "model": "gpt-4",
``` "endpoint": "https://api.openai.com/v1/chat/completions",
"api_key": "sk-...",
Test it without executing: "temperature": 0.0,
"api_prompt_cost": 0.03,
```bash "api_completion_cost": 0.06,
finish command "your command" "max_history_commands": 20,
finish command --dry-run "your command" "max_recent_files": 20,
``` "cache_size": 100
}
## Installation
### Quick Install (Linux/macOS)
The fastest way to get started:
```bash
curl -sSL https://git.appmodel.nl/Tour/finish/raw/branch/main/docs/install.sh | bash
source ~/.bashrc # or ~/.zshrc for zsh
finish model
```
### Package Managers
#### Homebrew (macOS)
```bash
brew tap appmodel/finish
brew install finish
finish install
source ~/.bashrc
```
#### APT (Debian/Ubuntu)
Download the `.deb` package from [releases](https://git.appmodel.nl/Tour/finish/releases):
```bash
sudo dpkg -i finish_*.deb
sudo apt-get install -f # Install dependencies
finish install
source ~/.bashrc
```
### Docker
Run finish.sh in a container:
```bash
# Build the image
docker build -t finish .
# Run interactively
docker run -it finish
# Or use docker-compose
docker-compose up -d finish
docker-compose exec finish bash
```
Inside the container:
```bash
finish install
source ~/.bashrc
finish model # Configure your LLM endpoint
```
### From Source
```bash
git clone https://git.appmodel.nl/Tour/finish.git finish
cd finish
chmod +x finish.sh
sudo ln -s $PWD/finish.sh /usr/local/bin/finish
finish install
source ~/.bashrc
``` ```
## Requirements ## Requirements
- Bash 4.0+ or Zsh 5.0+ - Python 3.7+
- curl - Bash ≥4 or Zsh ≥5
- jq - A local LLM running (Ollama, LM Studio, etc.) or API access
- bc
- bash-completion (recommended)
- LM-Studio or Ollama (for LLM inference)
## Directory Structure Python dependencies (installed automatically):
- httpx
``` - prompt_toolkit
~/.finish/ - rich
├── config # Configuration file
├── finish.log # Usage log
└── cache/ # Cached completions
```
## Commands ## Commands
```bash ```bash
finish install # Set up finish finish install # Set up Alt+\ keybinding
finish remove # Uninstall finish config # Show current configuration
finish config # Show/edit configuration finish command "text" # Test completions manually
finish model # Select model
finish enable # Enable completions
finish disable # Disable completions
finish clear # Clear cache and logs
finish usage # Show usage statistics
finish system # Show system information
finish --help # Show help
``` ```
## Providers ## Advanced
Currently supports:
- **LM-Studio** (default) - Local models via OpenAI-compatible API
- **Ollama** - Local models via Ollama API
Add custom providers by editing `finish.sh` and adding entries to `_finish_modellist`.
## Development
### Local Development
Clone and link for development:
### Debug mode
```bash ```bash
git clone https://git.appmodel.nl/Tour/finish.git finish export FINISH_DEBUG=1
cd finish finish command "your command here"
ln -s $PWD/finish.sh $HOME/.local/bin/finish cat ~/.finish/finish.log
finish install
``` ```
### Running Tests ### Clear cache
```bash ```bash
# Install BATS testing framework rm -rf ~/.finish/cache/*.json
sudo apt-get install bats # Ubuntu/Debian
# Run tests
./run_tests.sh
# Or use Docker
docker build -f Dockerfile.test -t finish:test .
docker run --rm finish:test
``` ```
### Building Packages
#### Debian Package
```bash
# Install build dependencies
sudo apt-get install debhelper devscripts
# Build the package
dpkg-buildpackage -us -uc -b
# Package will be created in parent directory
ls ../*.deb
```
#### Docker Images
```bash
# Production image
docker build -t finish:latest .
# Test image
docker build -f Dockerfile.test -t finish:test .
# Using docker-compose
docker-compose build
```
## Distribution & Publishing
### Creating a Release
1. Update version in `finish.sh` (ACSH_VERSION)
2. Update version in `debian/changelog`
3. Update version in `homebrew/finish.rb`
4. Create and push a git tag:
```bash
git tag -a v0.5.0 -m "Release version 0.5.0"
git push origin v0.5.0
```
This triggers the GitHub Actions workflow which:
- Builds release artifacts
- Creates Debian packages
- Publishes Docker images to GitHub Container Registry
- Creates a GitHub release with downloadable assets
### Homebrew Formula
After creating a release, update the Homebrew formula:
1. Calculate SHA256 of the release tarball:
```bash
curl -sL https://git.appmodel.nl/Tour/finish/archive/v0.5.0.tar.gz | sha256sum
```
2. Update `homebrew/finish.rb` with the new SHA256 and version
3. Submit to your Homebrew tap or the main Homebrew repository
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License ## License
BSD 2-Clause License. See LICENSE file. BSD 2-Clause.
```


@@ -1,201 +0,0 @@
# Publication Setup Summary
This document summarizes the publication setup completed for finish.sh.
## Files Created
### 1. Installation Script
- **`docs/install.sh`** - Enhanced one-line installer with:
- Dependency checking
- Shell detection (bash/zsh)
- Color-coded output
- Error handling
- PATH management
- Usage: `curl -sSL <URL>/install.sh | bash`
### 2. Debian Package Structure
Complete Debian packaging in `debian/` directory:
- **`control`** - Package metadata and dependencies
- **`rules`** - Build instructions
- **`changelog`** - Version history
- **`copyright`** - License information
- **`compat`** - Debhelper compatibility level
- **`postinst`** - Post-installation script
- **`source/format`** - Package format specification
Build command: `dpkg-buildpackage -us -uc -b`
### 3. Homebrew Formula
- **`homebrew/finish.rb`** - Homebrew formula for macOS
- Includes dependencies, installation steps, and caveats
- Update SHA256 after creating releases
### 4. Docker Configuration
- **`Dockerfile`** - Production container image
- **`Dockerfile.test`** - Testing container with BATS
- **`docker-compose.yml`** - Multi-service Docker setup
- **`.dockerignore`** - Excludes unnecessary files from builds
### 5. GitHub Actions Workflows
CI/CD automation in `.github/workflows/`:
- **`test.yml`** - Runs tests on every push/PR
- **`release.yml`** - Creates releases and builds packages
- **`docker.yml`** - Builds and publishes Docker images
### 6. Documentation
- **`PUBLISHING.md`** - Comprehensive publishing guide
- **`README.md`** - Updated with installation methods
- **`.gitignore`** - Updated to exclude build artifacts
## Distribution Channels
### 1. Quick Install (Recommended)
```bash
curl -sSL https://git.appmodel.nl/Tour/finish/raw/branch/main/docs/install.sh | bash
```
### 2. Debian/Ubuntu Package
Users can install the `.deb` file:
```bash
sudo dpkg -i finish_*.deb
sudo apt-get install -f
```
Or from an APT repository (when configured):
```bash
sudo apt-get install finish
```
### 3. Homebrew (macOS)
```bash
brew tap closedloop-technologies/finish
brew install finish
```
### 4. Docker
```bash
docker pull ghcr.io/closedloop-technologies/finish:latest
docker run -it ghcr.io/closedloop-technologies/finish:latest
```
### 5. From Source
```bash
git clone https://git.appmodel.nl/Tour/finish.git
cd finish
./finish.sh install
```
## Next Steps
### Immediate Actions
1. **Test the installation script** on clean systems
2. **Build and test the Debian package**:
```bash
dpkg-buildpackage -us -uc -b
sudo dpkg -i ../finish_*.deb
```
3. **Test Docker builds**:
```bash
docker build -t finish:test .
docker run -it finish:test
```
### For First Release
1. **Update version numbers** in:
- `finish.sh` (line 34)
- `debian/changelog`
- `homebrew/finish.rb`
- `docs/install.sh`
- `Dockerfile`
2. **Create release tarball**:
```bash
git archive --format=tar.gz --prefix=finish-0.5.0/ v0.5.0 > finish-0.5.0.tar.gz
```
3. **Calculate SHA256** for Homebrew:
```bash
sha256sum finish-0.5.0.tar.gz
```
4. **Create and push tag**:
```bash
git tag -a v0.5.0 -m "Release v0.5.0"
git push origin v0.5.0
```
This triggers GitHub Actions to build everything automatically.
### Setting Up Package Repositories
#### APT Repository Options:
1. **Packagecloud** (easiest):
- Sign up at https://packagecloud.io
- Upload .deb file
- Users get one-line install
2. **GitHub Pages** (free):
- Use `reprepro` to create repository
- Host on GitHub Pages
- Users add your repository to sources
See `PUBLISHING.md` for detailed instructions.
#### Homebrew Tap:
1. Create GitHub repo: `homebrew-finish`
2. Add formula to `Formula/finish.rb`
3. Users install with: `brew tap yourname/finish && brew install finish`
## Testing Checklist
Before releasing:
- [ ] Test install script on Ubuntu 22.04
- [ ] Test install script on Debian 12
- [ ] Test install script on macOS
- [ ] Build Debian package successfully
- [ ] Install and test Debian package
- [ ] Build Docker image successfully
- [ ] Run BATS tests in Docker
- [ ] Test Homebrew formula (if applicable)
- [ ] Verify all version numbers match
- [ ] Test GitHub Actions workflows
## Automation
The GitHub Actions workflows handle:
- **On every push/PR**: Run tests, lint code, build Docker images
- **On version tag push**: Build packages, create release, publish Docker images
This means once set up, releasing a new version is as simple as:
```bash
git tag -a v0.5.1 -m "Release v0.5.1"
git push origin v0.5.1
```
Everything else happens automatically!
## Support Channels
Consider setting up:
- GitHub Issues for bug reports
- GitHub Discussions for Q&A
- Documentation website (GitHub Pages)
- Discord/Slack community (optional)
## Resources
- Debian Packaging Guide: https://www.debian.org/doc/manuals/maint-guide/
- Homebrew Formula Cookbook: https://docs.brew.sh/Formula-Cookbook
- GitHub Actions Documentation: https://docs.github.com/en/actions
- Docker Best Practices: https://docs.docker.com/develop/dev-best-practices/
## Conclusion
Your project is now fully set up for publication with:
- ✅ One-line installation script
- ✅ Debian/Ubuntu package support
- ✅ Homebrew formula for macOS
- ✅ Docker containers
- ✅ Automated CI/CD with GitHub Actions
- ✅ Comprehensive documentation
All distribution channels are ready. Follow the "Next Steps" section above to test and publish your first release!

USAGE.md (deleted file, 140 lines)

@@ -1,140 +0,0 @@
# Usage Guide
## Installation
Install and set up finish:
```bash
finish install
source ~/.bashrc
```
## Configuration
View current settings:
```bash
finish config
```
Change settings:
```bash
finish config set temperature 0.5
finish config set endpoint http://plato.lan:1234/v1/chat/completions
finish config set model your-model-name
```
Reset to defaults:
```bash
finish config reset
```
## Model Selection
Select a model interactively:
```bash
finish model
```
Use arrow keys to navigate, Enter to select, or 'q' to quit.
## Basic Commands
Show help:
```bash
finish --help
```
Test completions without caching:
```bash
finish command "your command here"
```
Preview the prompt sent to the model:
```bash
finish command --dry-run "your command here"
```
View system information:
```bash
finish system
```
## Usage Statistics
Check your usage stats and costs:
```bash
finish usage
```
## Cache Management
Clear cached completions and logs:
```bash
finish clear
```
## Enable/Disable
Temporarily disable:
```bash
finish disable
```
Re-enable:
```bash
finish enable
```
## Uninstallation
Remove finish completely:
```bash
finish remove
```
## Using finish
Once enabled, just type a command and press `Tab` twice to get suggestions:
```bash
git <TAB><TAB>
docker <TAB><TAB>
find <TAB><TAB>
```
Natural language also works:
```bash
# convert video to mp4 <TAB><TAB>
# list all processes using port 8080 <TAB><TAB>
# compress this folder to tar.gz <TAB><TAB>
```
## Configuration File
The config file is located at `~/.finish/config` and contains:
- `provider`: Model provider (lmstudio, ollama)
- `model`: Model name
- `temperature`: Response randomness (0.0 - 1.0)
- `endpoint`: API endpoint URL
- `api_prompt_cost`: Cost per input token
- `api_completion_cost`: Cost per output token
- `max_history_commands`: Number of recent commands to include in context
- `max_recent_files`: Number of recent files to include in context
- `cache_dir`: Directory for cached completions
- `cache_size`: Maximum number of cached items
- `log_file`: Path to usage log file
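For illustration only, a config written with these keys might look like the sketch below; the exact on-disk syntax is not shown in this guide, so treat the format as an assumption:

```bash
# Hypothetical ~/.finish/config contents; key names come from the list above,
# but the real file's syntax may differ.
cat > ~/.finish/config <<'EOF'
provider: lmstudio
model: your-model-name
temperature: 0.0
endpoint: http://localhost:1234/v1/chat/completions
api_prompt_cost: 0.0
api_completion_cost: 0.0
max_history_commands: 20
max_recent_files: 20
cache_dir: ~/.finish/cache
cache_size: 100
log_file: ~/.finish/finish.log
EOF
```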

debian/control (vendored, 4 lines changed)

@@ -4,8 +4,8 @@ Priority: optional
 Maintainer: finish.sh Contributors <noreply@finish.sh>
 Build-Depends: debhelper (>= 10)
 Standards-Version: 4.5.0
-Homepage: https://git.appmodel.nl/Tour/finish/finish
-Vcs-Git: https://git.appmodel.nl/Tour/finish.git
+Homepage: https://git.appmodel.nl/tour/finish/finish
+Vcs-Git: https://git.appmodel.nl/tour/finish.git
 Package: finish
 Architecture: all

debian/copyright (vendored, 4 lines changed)

@@ -1,7 +1,7 @@
 Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
 Upstream-Name: finish
-Upstream-Contact: https://git.appmodel.nl/Tour/finish/finish
-Source: https://git.appmodel.nl/Tour/finish
+Upstream-Contact: https://git.appmodel.nl/tour/finish/finish
+Source: https://git.appmodel.nl/tour/finish
 Files: *
 Copyright: 2024-2025 finish.sh Contributors


@@ -1,12 +1,12 @@
#!/bin/sh #!/bin/sh
# finish.sh installer # finish.sh installer
# Usage: curl -sSL https://git.appmodel.nl/Tour/finish/raw/branch/main/docs/install.sh | bash # Usage: curl -sSL https://git.appmodel.nl/tour/finish/raw/branch/main/docs/install.sh | bash
set -e set -e
ACSH_VERSION="v0.5.0" ACSH_VERSION="v0.6.0"
BRANCH_OR_VERSION=${1:-main} BRANCH_OR_VERSION=${1:-main}
REPO_URL="https://git.appmodel.nl/Tour/finish/raw/branch" REPO_URL="https://git.appmodel.nl/tour/finish/raw/branch"
# Colors # Colors
RED='\033[0;31m' RED='\033[0;31m'
@@ -97,11 +97,9 @@ main() {
SHELL_TYPE=$(detect_shell) SHELL_TYPE=$(detect_shell)
case "$SHELL_TYPE" in case "$SHELL_TYPE" in
zsh) zsh)
SCRIPT_NAME="finish.zsh"
RC_FILE="$HOME/.zshrc" RC_FILE="$HOME/.zshrc"
;; ;;
bash) bash)
SCRIPT_NAME="finish.sh"
RC_FILE="$HOME/.bashrc" RC_FILE="$HOME/.bashrc"
;; ;;
*) *)
@@ -112,6 +110,29 @@ main() {
echo "Detected shell: $SHELL_TYPE" echo "Detected shell: $SHELL_TYPE"
# Check Python and venv
echo "Checking Python installation..."
if ! command -v python3 > /dev/null 2>&1; then
echo_error "Python 3 is required but not found"
echo "Install it with:"
echo " Ubuntu/Debian: sudo apt-get install python3 python3-pip python3-venv"
echo " macOS: brew install python3"
exit 1
fi
echo_green "✓ Python 3 found: $(python3 --version)"
# Check if venv module is available
if ! python3 -m venv --help > /dev/null 2>&1; then
echo_error "python3-venv module is not installed"
echo "Install it with:"
echo " Ubuntu/Debian: sudo apt-get install python3-venv"
echo " CentOS/RHEL: sudo yum install python3-venv"
echo " macOS: (included with python3)"
exit 1
fi
echo_green "✓ python3-venv available"
echo ""
# Check dependencies # Check dependencies
echo "Checking dependencies..." echo "Checking dependencies..."
if ! check_dependencies; then if ! check_dependencies; then
@@ -121,6 +142,11 @@ main() {
echo "" echo ""
# Determine installation location # Determine installation location
# Prefer local user install at ~/.local/bin, create it if missing
if [ ! -d "$HOME/.local/bin" ]; then
mkdir -p "$HOME/.local/bin"
fi
if [ -d "$HOME/.local/bin" ]; then if [ -d "$HOME/.local/bin" ]; then
INSTALL_LOCATION="$HOME/.local/bin/finish" INSTALL_LOCATION="$HOME/.local/bin/finish"
# Add to PATH if not already there # Add to PATH if not already there
@@ -134,26 +160,89 @@ main() {
elif [ -w "/usr/local/bin" ]; then elif [ -w "/usr/local/bin" ]; then
INSTALL_LOCATION="/usr/local/bin/finish" INSTALL_LOCATION="/usr/local/bin/finish"
else else
echo_error "Cannot write to /usr/local/bin and $HOME/.local/bin doesn't exist" echo_error "Cannot determine installation location"
echo "Please create $HOME/.local/bin or run with sudo" echo "Please ensure either $HOME/.local/bin exists and is writable, or run with sudo for /usr/local/bin"
exit 1 exit 1
fi fi
# Create directory if needed # Create directories if needed
mkdir -p "$(dirname "$INSTALL_LOCATION")" mkdir -p "$(dirname "$INSTALL_LOCATION")"
mkdir -p "$HOME/.venvs"
# Download script # Download Python script
echo "Downloading finish.sh..." echo "Downloading finish.py..."
URL="$REPO_URL/$BRANCH_OR_VERSION/$SCRIPT_NAME" URL="$REPO_URL/$BRANCH_OR_VERSION/src/finish.py"
if ! download_file "$URL" "$INSTALL_LOCATION"; then if ! download_file "$URL" "$INSTALL_LOCATION"; then
echo_error "Failed to download from $URL" echo_error "Failed to download from $URL"
exit 1 exit 1
fi fi
# Download requirements.txt
echo "Downloading requirements.txt..."
TEMP_REQ="/tmp/finish_requirements.txt"
REQ_URL="$REPO_URL/$BRANCH_OR_VERSION/requirements.txt"
if ! download_file "$REQ_URL" "$TEMP_REQ"; then
echo_error "Failed to download requirements.txt from $REQ_URL"
exit 1
fi
# Create virtualenv and install dependencies
echo "Creating virtual environment..."
VENV_PATH="$HOME/.venvs/finish"
if [ -d "$VENV_PATH" ]; then
echo "Virtual environment already exists, removing old one..."
rm -rf "$VENV_PATH"
fi
if ! python3 -m venv "$VENV_PATH"; then
echo_error "Failed to create virtual environment"
echo "Make sure python3-venv is installed:"
echo " Ubuntu/Debian: sudo apt-get install python3-venv"
exit 1
fi
echo_green "✓ Virtual environment created at $VENV_PATH"
echo "Installing Python dependencies..."
if ! "$VENV_PATH/bin/pip" install --quiet --upgrade pip; then
echo_error "Failed to upgrade pip"
exit 1
fi
if ! "$VENV_PATH/bin/pip" install --quiet -r "$TEMP_REQ"; then
echo_error "Failed to install dependencies"
cat "$TEMP_REQ"
exit 1
fi
echo_green "✓ Dependencies installed"
rm -f "$TEMP_REQ"
# Update shebang to use venv python (compatible with macOS and Linux)
if sed --version >/dev/null 2>&1; then
# GNU sed (Linux)
sed -i "1s|.*|#!$VENV_PATH/bin/python3|" "$INSTALL_LOCATION"
else
# BSD sed (macOS)
sed -i '' "1s|.*|#!$VENV_PATH/bin/python3|" "$INSTALL_LOCATION"
fi
# Verify shebang was updated correctly
SHEBANG=$(head -n 1 "$INSTALL_LOCATION")
if [ "$SHEBANG" != "#!$VENV_PATH/bin/python3" ]; then
echo_error "Failed to update shebang. Got: $SHEBANG"
exit 1
fi
chmod +x "$INSTALL_LOCATION" chmod +x "$INSTALL_LOCATION"
echo_green "✓ Installed to $INSTALL_LOCATION" echo_green "✓ Installed to $INSTALL_LOCATION"
echo "" echo ""
# Ensure current shell PATH includes the install dir for immediate use
INSTALL_DIR="$(dirname "$INSTALL_LOCATION")"
case ":$PATH:" in
*":$INSTALL_DIR:"*) ;;
*) export PATH="$INSTALL_DIR:$PATH" ;;
esac
# Check bash-completion # Check bash-completion
if [ "$SHELL_TYPE" = "bash" ]; then if [ "$SHELL_TYPE" = "bash" ]; then
if ! command -v _init_completion > /dev/null 2>&1; then if ! command -v _init_completion > /dev/null 2>&1; then
@@ -171,8 +260,18 @@ main() {
fi fi
fi fi
# Run finish install # Test that finish works
echo "Running finish installation..." echo "Testing installation..."
if ! "$INSTALL_LOCATION" --version > /dev/null 2>&1; then
echo_error "finish command failed to execute"
echo "Testing with direct python call..."
"$VENV_PATH/bin/python3" "$INSTALL_LOCATION" --version
exit 1
fi
echo_green "✓ finish command works"
# Run finish install to set up keybinding
echo "Setting up shell keybinding..."
if "$INSTALL_LOCATION" install; then if "$INSTALL_LOCATION" install; then
echo "" echo ""
echo_green "==========================================" echo_green "=========================================="
@@ -183,12 +282,20 @@ main() {
echo " 1. Reload your shell configuration:" echo " 1. Reload your shell configuration:"
echo " source $RC_FILE" echo " source $RC_FILE"
echo "" echo ""
echo " 2. Select a language model:" echo " 2. Configure your LLM endpoint:"
echo " finish model" echo " finish config"
echo "" echo ""
echo " 3. Start using by pressing Tab twice after any command" echo " 3. Edit config if needed:"
echo " nano ~/.finish/finish.json"
echo "" echo ""
echo "Documentation: https://git.appmodel.nl/Tour/finish" echo " 4. Start using by pressing Alt+\\ after typing a command"
echo ""
echo "Example:"
echo " Type: show gpu status"
echo " Press: Alt+\\"
echo " Select completion with ↑↓ arrows, Enter to accept"
echo ""
echo "Documentation: https://git.appmodel.nl/tour/finish"
else else
echo_error "Installation failed. Please check the output above." echo_error "Installation failed. Please check the output above."
exit 1 exit 1

finish.iml (new file, 12 lines)

@@ -0,0 +1,12 @@
<?xml version="1.0" encoding="UTF-8"?>
<module type="WEB_MODULE" version="4">
<component name="NewModuleRootManager" inherit-compiler-output="true">
<exclude-output />
<content url="file://$MODULE_DIR$">
<excludeFolder url="file://$MODULE_DIR$/.git" />
<excludeFolder url="file://$MODULE_DIR$/tests" />
</content>
<orderEntry type="inheritedJdk" />
<orderEntry type="sourceFolder" forTests="false" />
</component>
</module>


@@ -1,7 +1,7 @@
 class finishSh < Formula
   desc "LLM-powered command-line finishtion"
-  homepage "https://git.appmodel.nl/Tour/finish"
-  url "https://git.appmodel.nl/Tour/finish/archive/v0.5.0.tar.gz"
+  homepage "https://git.appmodel.nl/tour/finish"
+  url "https://git.appmodel.nl/tour/finish/archive/v0.5.0.tar.gz"
   sha256 "" # Update with actual SHA256 of the release tarball
   license "BSD-2-Clause"
   version "0.5.0"
@@ -36,7 +36,7 @@ class finishSh < Formula
       Ollama: https://ollama.ai
       For more information, visit:
-      https:/git.appmodel.nl/Tour/finish
+      https:/git.appmodel.nl/tour/finish
     EOS
   end

requirements.txt (new file, 3 lines)

@@ -0,0 +1,3 @@
httpx
prompt_toolkit
rich

run_tests.sh (13 lines changed, Normal file → Executable file)

@@ -1,3 +1,12 @@
-#!/bin/bash
+#!/usr/bin/env bash
+set -euo pipefail
-bats tests/
+# Always run from the directory where this script lives
+SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
+cd "$SCRIPT_DIR"
+# Execute the tests
+bash ./tests/test_finish.sh
+# Also try the local Ollama end-to-end test (skips if Ollama not running)
+bash ./tests/test_finish_ollama.sh

src/finish.py (new executable file, 442 lines)

@@ -0,0 +1,442 @@
#!/usr/bin/env python3
"""
finish.py AI shell completions that never leave your machine.
"""
from __future__ import annotations
import hashlib
import argparse, asyncio, json, os, re, shlex, subprocess, sys, time
from pathlib import Path
from typing import List, Dict, Optional
import httpx
from prompt_toolkit import ANSI, Application
from prompt_toolkit.buffer import Buffer
from prompt_toolkit.key_binding import KeyBindings
from prompt_toolkit.layout import (
ConditionalContainer, FormattedTextControl, HSplit, Layout, Window
)
from prompt_toolkit.widgets import Label
from rich.console import Console
from rich.spinner import Spinner
VERSION = "0.5.0"
CFG_DIR = Path.home()/".finish"
CFG_FILE = CFG_DIR/"finish.json"
CACHE_DIR = CFG_DIR/"cache"
LOG_FILE = CFG_DIR/"finish.log"
console = Console()
# --------------------------------------------------------------------------- #
# Config
# --------------------------------------------------------------------------- #
DEFAULT_CFG = {
"provider": "ollama",
"model": "llama3:latest",
"endpoint": "http://localhost:11434/api/chat",
"temperature": 0.0,
"api_prompt_cost": 0.0,
"api_completion_cost": 0.0,
"max_history_commands": 20,
"max_recent_files": 20,
"cache_size": 100,
}
def cfg() -> dict:
if not CFG_FILE.exists():
CFG_DIR.mkdir(exist_ok=True)
CFG_FILE.write_text(json.dumps(DEFAULT_CFG, indent=2))
return json.loads(CFG_FILE.read_text())
# --------------------------------------------------------------------------- #
# Context builders
# --------------------------------------------------------------------------- #
def _sanitise_history() -> str:
hist = subprocess.check_output(["bash", "-ic", "history"]).decode()
# scrub tokens / hashes
for pat in [
r"\b[0-9a-f]{32,40}\b", # long hex
r"\b[A-Za-z0-9-]{36}\b", # uuid
r"\b[A-Za-z0-9]{16,40}\b", # api-keyish
]:
hist = re.sub(pat, "REDACTED", hist)
return "\n".join(hist.splitlines()[-cfg()["max_history_commands"]:])
def _recent_files() -> str:
try:
files = subprocess.check_output(
["find", ".", "-maxdepth", "1", "-type", "f", "-printf", "%T@ %p\n"],
stderr=subprocess.DEVNULL,
).decode()
return "\n".join(sorted(files.splitlines(), reverse=True)[: cfg()["max_recent_files"]])
except Exception:
return ""
def _installed_tools() -> str:
"""Check for commonly used tools that might be relevant"""
tools = ["curl", "wget", "jq", "grep", "awk", "sed", "python3", "node", "docker", "git", "nvcc", "nvidia-smi"]
available = []
for tool in tools:
try:
subprocess.run(["which", tool], capture_output=True, timeout=0.5, check=True)
available.append(tool)
except Exception:
pass
return ", ".join(available) if available else "standard shell tools"
def _get_context_info() -> dict:
"""Gather comprehensive shell context"""
return {
"user": os.getenv("USER", "unknown"),
"pwd": os.getcwd(),
"home": os.getenv("HOME", ""),
"hostname": os.getenv("HOSTNAME", os.getenv("HOST", "localhost")),
"shell": os.getenv("SHELL", "bash"),
"tools": _installed_tools(),
}
def build_prompt(user_input: str) -> str:
"""Build an enhanced prompt with better context and instructions"""
c = cfg()
ctx = _get_context_info()
# Analyze user intent
user_words = user_input.lower().split()
intent_hints = []
if any(word in user_words for word in ["resolve", "get", "fetch", "download", "curl", "wget"]):
intent_hints.append("The user wants to fetch/download data")
if any(word in user_words for word in ["website", "url", "http", "html", "title"]):
intent_hints.append("The user is working with web content")
if any(word in user_words for word in ["parse", "extract", "grep", "find"]):
intent_hints.append("The user wants to extract/parse data")
if any(word in user_words for word in ["create", "make", "new", "write", "edit"]):
intent_hints.append("The user wants to create or edit files")
if any(word in user_words for word in ["file", "document", "text", "story", "about"]):
intent_hints.append("The user is working with files/documents")
if any(word in user_words for word in ["install", "setup", "configure", "apt", "pip"]):
intent_hints.append("The user wants to install or configure software")
if any(word in user_words for word in ["gpu", "nvidia", "cuda", "memory", "cpu"]):
intent_hints.append("The user wants system/hardware information")
intent_context = "\n".join(f"- {hint}" for hint in intent_hints) if intent_hints else "- General command completion"
prompt = f"""You are an intelligent bash completion assistant. Analyze the user's intent and provide practical, executable commands.
USER INPUT: {user_input}
DETECTED INTENT:
{intent_context}
SHELL CONTEXT:
- User: {ctx['user']}
- Working directory: {ctx['pwd']}
- Hostname: {ctx['hostname']}
- Available tools: {ctx['tools']}
RECENT COMMAND HISTORY:
{_sanitise_history()}
INSTRUCTIONS:
1. Understand what the user wants to accomplish (not just the literal words)
2. Generate 2-5 practical bash commands that achieve the user's goal
3. Prefer common tools (curl, wget, grep, awk, sed, jq, python)
4. For file creation: use echo with heredoc, cat, or nano/vim
5. For web tasks: use curl with headers, pipe to grep/sed/awk for parsing
6. For complex parsing: suggest one-liners with proper escaping
7. Commands should be copy-paste ready and executable
8. Order by most likely to least likely intent
EXAMPLES:
- "resolve title of website example.com" → curl -s https://example.com | grep -oP '<title>\\K[^<]+'
- "show gpu status" → nvidia-smi
- "download json from api.com" → curl -s https://api.com/data | jq .
- "make a new file story.txt" → nano story.txt
- "create file and write hello" → echo "hello" > file.txt
- "write about dogs to file" → cat > dogs.txt << 'EOF' (then Ctrl+D to finish)
OUTPUT FORMAT (JSON only, no other text):
{{"completions":["command1", "command2", "command3"]}}"""
return prompt
# --------------------------------------------------------------------------- #
# LLM call
# --------------------------------------------------------------------------- #
async def llm_complete(prompt: str) -> List[str]:
c = cfg()
payload = {
"model": c["model"],
"temperature": c["temperature"],
"messages": [
{"role": "system", "content": "You are a helpful bash completion assistant."},
{"role": "user", "content": prompt},
],
"response_format": {"type": "text"},
}
if c["provider"] == "ollama":
payload["format"] = "json"
payload["stream"] = False
headers = {"Content-Type": "application/json"}
if c.get("api_key"):
headers["Authorization"] = f"Bearer {c['api_key']}"
try:
async with httpx.AsyncClient(timeout=30) as client:
resp = await client.post(c["endpoint"], headers=headers, json=payload)
resp.raise_for_status()
body = resp.json()
except httpx.ConnectError as e:
console.print(f"[red]Error:[/] Cannot connect to {c['endpoint']}")
console.print(f"[yellow]Check that your LLM server is running and endpoint is correct.[/]")
console.print(f"[yellow]Run 'finish config' to see current config.[/]")
return []
except Exception as e:
console.print(f"[red]Error:[/] {e}")
return []
# extract content
if c["provider"] == "ollama":
raw = body["message"]["content"]
else:
raw = body["choices"][0]["message"]["content"]
# Debug logging
if os.getenv("FINISH_DEBUG"):
LOG_FILE.parent.mkdir(exist_ok=True)
with open(LOG_FILE, "a") as f:
f.write(f"\n=== {time.strftime('%Y-%m-%d %H:%M:%S')} ===\n")
f.write(f"Raw response:\n{raw}\n")
# try json first - use strict=False to be more lenient
try:
result = json.loads(raw, strict=False)["completions"]
if os.getenv("FINISH_DEBUG"):
with open(LOG_FILE, "a") as f:
f.write(f"Parsed completions: {result}\n")
return result
except Exception as e:
if os.getenv("FINISH_DEBUG"):
with open(LOG_FILE, "a") as f:
f.write(f"JSON parse failed: {e}\n")
f.write(f"Attempting manual extraction...\n")
# Try to extract JSON array manually using regex
try:
match = re.search(r'"completions"\s*:\s*\[(.*?)\]', raw, re.DOTALL)
if match:
# Extract commands from the array - handle escaped quotes
array_content = match.group(1)
# Find all quoted strings
commands = re.findall(r'"([^"\\]*(?:\\.[^"\\]*)*)"', array_content)
if commands:
if os.getenv("FINISH_DEBUG"):
with open(LOG_FILE, "a") as f:
f.write(f"Manual extraction succeeded: {commands}\n")
return commands[:5]
except Exception as e2:
if os.getenv("FINISH_DEBUG"):
with open(LOG_FILE, "a") as f:
f.write(f"Manual extraction failed: {e2}\n")
# fallback: grep command-like lines
fallback = [ln for ln in raw.splitlines() if re.match(r"^(ls|cd|find|cat|grep|echo|mkdir|rm|cp|mv|pwd|chmod|chown|nano|vim|touch)\b", ln)][:5]
if os.getenv("FINISH_DEBUG"):
with open(LOG_FILE, "a") as f:
f.write(f"Fallback completions: {fallback}\n")
return fallback
# --------------------------------------------------------------------------- #
# TUI picker
# --------------------------------------------------------------------------- #
async def select_completion(completions: List[str]) -> Optional[str]:
if not completions:
return None
if len(completions) == 1:
return completions[0]
# If not in an interactive terminal, return first completion
if not sys.stdin.isatty():
return completions[0]
# Import here to avoid issues when not needed
from prompt_toolkit.input import create_input
from prompt_toolkit.output import create_output
kb = KeyBindings()
current = 0
def get_text():
return [
("", "Select completion (↑↓ navigate, Enter accept, Esc cancel)\n\n"),
*[
("[SetCursorPosition]" if i == current else "", f"{c}\n")
for i, c in enumerate(completions)
],
]
@kb.add("up")
def _up(event):
nonlocal current
current = (current - 1) % len(completions)
@kb.add("down")
def _down(event):
nonlocal current
current = (current + 1) % len(completions)
@kb.add("enter")
def _accept(event):
event.app.exit(result=completions[current])
@kb.add("escape")
def _cancel(event):
event.app.exit(result=None)
control = FormattedTextControl(get_text)
# Force output to /dev/tty to avoid interfering with bash command substitution
try:
output = create_output(stdout=open('/dev/tty', 'w'))
input_obj = create_input(stdin=open('/dev/tty', 'r'))
except Exception:
# Fallback to first completion if tty not available
return completions[0]
app = Application(
layout=Layout(HSplit([Window(control, height=len(completions) + 2)])),
key_bindings=kb,
mouse_support=False,
erase_when_done=True,
output=output,
input=input_obj,
)
return await app.run_async()
# --------------------------------------------------------------------------- #
# Cache
# --------------------------------------------------------------------------- #
def cached(key: str) -> Optional[List[str]]:
if not CACHE_DIR.exists():
CACHE_DIR.mkdir(parents=True)
f = CACHE_DIR / f"{key}.json"
if f.exists():
return json.loads(f.read_text())
return None
def store_cache(key: str, completions: List[str]) -> None:
f = CACHE_DIR / f"{key}.json"
f.write_text(json.dumps(completions))
# LRU eviction
all_files = sorted(CACHE_DIR.glob("*.json"), key=lambda p: p.stat().st_mtime)
for old in all_files[: -cfg()["cache_size"]]:
old.unlink(missing_ok=True)
# --------------------------------------------------------------------------- #
# Main entry
# --------------------------------------------------------------------------- #
async def complete_line(line: str) -> Optional[str]:
key = hashlib.md5(line.encode()).hexdigest()
comps = cached(key)
if comps is None:
# Write to /dev/tty if available to avoid interfering with command substitution
try:
status_console = Console(file=open('/dev/tty', 'w'), stderr=False)
except Exception:
status_console = console
with status_console.status("[green]Thinking…[/]"):
comps = await llm_complete(build_prompt(line))
store_cache(key, comps)
return await select_completion(comps)
# --------------------------------------------------------------------------- #
# CLI
# --------------------------------------------------------------------------- #
def install_keybinding():
r"""Inject Bash binding Alt+\ -> finish"""
rc = Path.home()/".bashrc"
marker = "# finish.py key-binding"
# Use bind -x to allow modifying READLINE_LINE
snippet = f'''{marker}
_finish_complete() {{
local result
result=$(finish --readline-complete "$READLINE_LINE" 2>/dev/null)
if [[ -n "$result" ]]; then
READLINE_LINE="$result"
READLINE_POINT=${{#READLINE_LINE}}
fi
}}
bind -x '"\\e\\\\": _finish_complete'
'''
text = rc.read_text() if rc.exists() else ""
if marker in text:
return
rc.write_text(text + "\n" + snippet)
console.print("[green]Key-binding installed (Alt+\\)[/] restart your shell.")
def main():
# Handle readline-complete flag before argparse (for bash bind -x)
if "--readline-complete" in sys.argv:
idx = sys.argv.index("--readline-complete")
line = sys.argv[idx + 1] if idx + 1 < len(sys.argv) else ""
if line.strip():
choice = asyncio.run(complete_line(line))
if choice:
print(choice)
return
# Legacy flag support
if sys.argv[-1] == "--accept-current-line":
line = os.environ.get("READLINE_LINE", "")
if line.strip():
choice = asyncio.run(complete_line(line))
if choice:
sys.stdout.write(f"\x1b]0;\a")
sys.stdout.write(f"\x1b[2K\r")
sys.stdout.write(choice)
sys.stdout.flush()
return
parser = argparse.ArgumentParser(prog="finish", description="AI shell completions")
parser.add_argument("--version", action="version", version=VERSION)
sub = parser.add_subparsers(dest="cmd")
sub.add_parser("install", help="add Alt+\\ key-binding to ~/.bashrc")
sub.add_parser("config", help="show current config")
p = sub.add_parser("command", help="simulate double-tab")
p.add_argument("words", nargs="*", help="partial command")
p.add_argument("--dry-run", action="store_true", help="show prompt only")
args = parser.parse_args()
if args.cmd == "install":
install_keybinding()
return
if args.cmd == "config":
console.print_json(json.dumps(cfg()))
return
if args.cmd == "command":
line = " ".join(args.words)
if args.dry_run:
console.print(build_prompt(line))
return
choice = asyncio.run(complete_line(line))
if choice:
# Check if we're in an interactive shell
if os.isatty(sys.stdout.fileno()):
console.print(f"[dim]Selected:[/] [cyan]{choice}[/]")
console.print(f"\n[yellow]Tip:[/] Use [bold]Alt+\\[/] keybinding for seamless completion!")
else:
print(choice)
return
if len(sys.argv) == 1:
parser.print_help()
return
if __name__ == "__main__":
try:
main()
except KeyboardInterrupt:
pass

finish.sh → src/finish.sh (109 lines changed, Normal file → Executable file)

@@ -1,6 +1,6 @@
#!/bin/bash #!/bin/bash
############################################################################### ###############################################################################
# Enhanced Error Handling # # -ver 1- Enhanced Error Handling #
############################################################################### ###############################################################################
error_exit() { error_exit() {
@@ -224,11 +224,14 @@ $prompt"
"LMSTUDIO") "LMSTUDIO")
payload=$(echo "$base_payload" | jq '. + { payload=$(echo "$base_payload" | jq '. + {
max_tokens: -1, max_tokens: -1,
stream: false stream: true
}') }')
;; ;;
*) *)
payload=$(echo "$base_payload" | jq '. + {response_format: {type: "json_object"}}') # Default OpenAI-compatible providers increasingly expect
# response_format.type to be either "text" or a json_schema.
# Use "text" for maximum compatibility to avoid 400 errors.
payload=$(echo "$base_payload" | jq '. + {response_format: {type: "text"}}')
;; ;;
esac esac
echo "$payload" echo "$payload"
@@ -250,22 +253,28 @@ log_request() {
created=$(echo "$response_body" | jq -r ".created // $created") created=$(echo "$response_body" | jq -r ".created // $created")
api_cost=$(echo "$prompt_tokens_int * $ACSH_API_PROMPT_COST + $completion_tokens_int * $ACSH_API_COMPLETION_COST" | bc) api_cost=$(echo "$prompt_tokens_int * $ACSH_API_PROMPT_COST + $completion_tokens_int * $ACSH_API_COMPLETION_COST" | bc)
log_file=${ACSH_LOG_FILE:-"$HOME/.finish/finish.log"} log_file=${ACSH_LOG_FILE:-"$HOME/.finish/finish.log"}
mkdir -p "$(dirname "$log_file")" 2>/dev/null
echo "$created,$user_input_hash,$prompt_tokens_int,$completion_tokens_int,$api_cost" >> "$log_file" echo "$created,$user_input_hash,$prompt_tokens_int,$completion_tokens_int,$api_cost" >> "$log_file"
} }
openai_completion() { openai_completion() {
local content status_code response_body default_user_input user_input api_key payload endpoint timeout attempt max_attempts local content status_code response_body default_user_input user_input api_key payload endpoint timeout attempt max_attempts
local log_file debug_log local log_file debug_log
# Ensure configuration (provider, endpoint, etc.) is loaded before checks
acsh_load_config
log_file=${ACSH_LOG_FILE:-"$HOME/.finish/finish.log"}
debug_log="$HOME/.finish/debug.log"
endpoint=${ACSH_ENDPOINT:-"http://plato.lan:1234/v1/chat/completions"} endpoint=${ACSH_ENDPOINT:-"http://plato.lan:1234/v1/chat/completions"}
timeout=${ACSH_TIMEOUT:-30} timeout=${ACSH_TIMEOUT:-30}
default_user_input="Write two to six most likely commands given the provided information" default_user_input="Write two to six most likely commands given the provided information"
user_input=${*:-$default_user_input} user_input=${*:-$default_user_input}
log_file=${ACSH_LOG_FILE:-"$HOME/.finish/finish.log"}
debug_log="$HOME/.finish/debug.log"
# Only check for API key if not using local providers that don't require it
if [[ -z "$ACSH_ACTIVE_API_KEY" && ${ACSH_PROVIDER^^} != "OLLAMA" && ${ACSH_PROVIDER^^} != "LMSTUDIO" ]]; then if [[ -z "$ACSH_ACTIVE_API_KEY" && ${ACSH_PROVIDER^^} != "OLLAMA" && ${ACSH_PROVIDER^^} != "LMSTUDIO" ]]; then
echo_error "ACSH_ACTIVE_API_KEY not set. Please set it with: export ${ACSH_PROVIDER^^}_API_KEY=<your-api-key>" echo_error "ACSH_ACTIVE_API_KEY not set. Please set it with: export ${ACSH_PROVIDER^^}_API_KEY=<your-api-key>"
return return 1
fi fi
api_key="${ACSH_ACTIVE_API_KEY}" api_key="${ACSH_ACTIVE_API_KEY}"
payload=$(_build_payload "$user_input") payload=$(_build_payload "$user_input")
@@ -281,16 +290,73 @@ openai_completion() {
 max_attempts=2
 attempt=1
+local stream_enabled=false
+# Check if streaming is enabled in payload
+if echo "$payload" | jq -e '.stream == true' > /dev/null 2>&1; then
+stream_enabled=true
+fi
 while [ $attempt -le $max_attempts ]; do
-if [[ "${ACSH_PROVIDER^^}" == "OLLAMA" ]]; then
-response=$(\curl -s -m "$timeout" -w "\n%{http_code}" "$endpoint" --data "$payload")
+if [[ "$stream_enabled" == true ]]; then
+# Streaming mode - collect chunks and display in real-time
+local temp_file=$(mktemp)
+local stream_content=""
+if [[ "${ACSH_PROVIDER^^}" == "OLLAMA" ]]; then
+\curl -s -m "$timeout" -w "\n%{http_code}" "$endpoint" --data "$payload" -N > "$temp_file"
+else
+if [[ -n "$api_key" ]]; then
+\curl -s -m "$timeout" -w "\n%{http_code}" "$endpoint" \
+-H "Content-Type: application/json" \
+-H "Authorization: Bearer $api_key" \
+-d "$payload" -N > "$temp_file"
+else
+\curl -s -m "$timeout" -w "\n%{http_code}" "$endpoint" \
+-H "Content-Type: application/json" \
+-d "$payload" -N > "$temp_file"
+fi
+fi
+status_code=$(tail -n1 "$temp_file")
+response_body=$(sed '$d' "$temp_file")
+# Parse SSE stream and extract content
+while IFS= read -r line; do
+if [[ "$line" =~ ^data:\ (.+)$ ]]; then
+chunk_data="${BASH_REMATCH[1]}"
+if [[ "$chunk_data" != "[DONE]" ]]; then
+chunk_content=$(echo "$chunk_data" | jq -r '.choices[0].delta.content // empty' 2>/dev/null)
+if [[ -n "$chunk_content" && "$chunk_content" != "null" ]]; then
+stream_content+="$chunk_content"
+fi
+fi
+fi
+done < <(echo "$response_body")
+# Store accumulated content as response_body for later processing
+# Use jq to properly escape the content string
+response_body=$(jq -n --arg content "$stream_content" '{choices: [{message: {content: $content}}]}')
+rm -f "$temp_file"
 else
-response=$(\curl -s -m "$timeout" -w "\n%{http_code}" "$endpoint" \
--H "Content-Type: application/json" \
--d "$payload")
+# Non-streaming mode (original behavior)
+if [[ "${ACSH_PROVIDER^^}" == "OLLAMA" ]]; then
+response=$(\curl -s -m "$timeout" -w "\n%{http_code}" "$endpoint" --data "$payload")
+else
+if [[ -n "$api_key" ]]; then
+response=$(\curl -s -m "$timeout" -w "\n%{http_code}" "$endpoint" \
+-H "Content-Type: application/json" \
+-H "Authorization: Bearer $api_key" \
+-d "$payload")
+else
+response=$(\curl -s -m "$timeout" -w "\n%{http_code}" "$endpoint" \
+-H "Content-Type: application/json" \
+-d "$payload")
+fi
+fi
+status_code=$(echo "$response" | tail -n1)
+response_body=$(echo "$response" | sed '$d')
 fi
-status_code=$(echo "$response" | tail -n1)
-response_body=$(echo "$response" | sed '$d')
 # Debug logging
 echo "Response status: $status_code" >> "$debug_log"
@@ -316,8 +382,8 @@ openai_completion() {
 500) echo_error "Internal Server Error: An unexpected error occurred on the API server." ;;
 *) echo_error "Unknown Error: Unexpected status code $status_code received. Response: $response_body" ;;
 esac
-return
+return 1
 fi
 if [[ "${ACSH_PROVIDER^^}" == "OLLAMA" ]]; then
 content=$(echo "$response_body" | jq -r '.message.content')
@@ -347,7 +413,7 @@ openai_completion() {
if [[ -z "$completions" ]]; then if [[ -z "$completions" ]]; then
echo_error "Failed to parse completions from API response. Check $debug_log for details." echo_error "Failed to parse completions from API response. Check $debug_log for details."
return return 1
fi fi
echo -n "$completions" echo -n "$completions"
@@ -400,13 +466,6 @@ _finishsh() {
 if [[ ${#COMPREPLY[@]} -eq 0 && $COMP_TYPE -eq 63 ]]; then
 local completions user_input user_input_hash
 acsh_load_config
-if [[ -z "$ACSH_ACTIVE_API_KEY" && ${ACSH_PROVIDER^^} != "OLLAMA" && ${ACSH_PROVIDER^^} != "LMSTUDIO" ]]; then
-local provider_key="${ACSH_PROVIDER}_API_KEY"
-provider_key=$(echo "$provider_key" | tr '[:lower:]' '[:upper:]')
-echo_error "${provider_key} is not set. Please set it using: export ${provider_key}=<your-api-key> or disable finish via: finish disable"
-echo
-return
-fi
 if [[ -n "${COMP_WORDS[*]}" ]]; then
 command="${COMP_WORDS[0]}"
 if [[ -n "$COMP_CWORD" && "$COMP_CWORD" -lt "${#COMP_WORDS[@]}" ]]; then
@@ -486,7 +545,7 @@ show_help() {
echo " clear Clear cache and log files" echo " clear Clear cache and log files"
echo " --help Show this help message" echo " --help Show this help message"
echo echo
echo "Submit issues at: https://git.appmodel.nl/Tour/finish/issues" echo "Submit issues at: https://git.appmodel.nl/tour/finish/issues"
} }
is_subshell() { is_subshell() {
@@ -664,7 +723,7 @@ acsh_load_config() {
 install_command() {
 local bashrc_file="$HOME/.bashrc" finish_setup="source finish enable" finish_cli_setup="complete -F _finishsh_cli finish"
 if ! command -v finish &>/dev/null; then
-echo_error "finish.sh not in PATH. Follow install instructions at https://git.appmodel.nl/Tour/finish"
+echo_error "finish.sh not in PATH. Follow install instructions at https://git.appmodel.nl/tour/finish"
 return
 fi
 if [[ ! -d "$HOME/.finish" ]]; then


@@ -1,43 +0,0 @@
#!/usr/bin/env bats
setup() {
# Install finish.sh and run testing against the main branch
wget -qO- https://git.appmodel.nl/Tour/finish/raw/branch/main/docs/install.sh | bash -s -- main
# Source bashrc to make sure finish is available in the current session
source ~/.bashrc
# Configure for local LM-Studio
finish config set provider lmstudio
finish config set endpoint http://plato.lan:1234/v1/chat/completions
finish config set model darkidol-llama-3.1-8b-instruct-1.3-uncensored_gguf:2
}
teardown() {
# Remove finish.sh installation
finish remove -y
}
@test "which finish returns something" {
run which finish
[ "$status" -eq 0 ]
[ -n "$output" ]
}
@test "finish returns a string containing finish.sh (case insensitive)" {
run finish
[ "$status" -eq 0 ]
[[ "$output" =~ [Aa]utocomplete\.sh ]]
}
@test "finish config should have lmstudio provider" {
run finish config
[ "$status" -eq 0 ]
[[ "$output" =~ lmstudio ]]
}
@test "finish command 'ls # show largest files' should return something" {
run finish command "ls # show largest files"
[ "$status" -eq 0 ]
[ -n "$output" ]
}

82
tests/test_finish.sh Executable file

@@ -0,0 +1,82 @@
#!/usr/bin/env bash
set -euo pipefail
# ----- tiny test framework -----
pass() { printf "✔ %s\n" "$1"; }
fail() { printf "✘ %s\n" "$1"; exit 1; }
assert_ok() {
if [[ $1 -ne 0 ]]; then fail "$2"; fi
}
assert_nonempty() {
if [[ -z "$1" ]]; then fail "$2"; fi
}
assert_contains() {
if [[ ! "$1" =~ $2 ]]; then fail "$3"; fi
}
# Ensure output does not contain any of the common error phrases.
assert_no_common_errors() {
local output="$1"
local -a patterns=(
"ACSH_ACTIVE_API_KEY not set"
"Bad Request"
"Unauthorized"
"Too Many Requests"
"Internal Server Error"
"Unknown Error"
"Failed to parse completions"
"SyntaxError"
"ERROR:"
)
for pat in "${patterns[@]}"; do
if grep -qE "$pat" <<<"$output"; then
printf "Test failed: output contains error pattern '%s'\nFull output follows:\n%s\n" "$pat" "$output"
exit 1
fi
done
}
# --------------------------------
echo "=== SETUP ==="
bash ./docs/install.sh main
source ~/.bashrc || true  # tolerate non-zero status under set -euo pipefail
finish config set provider lmstudio
finish config set endpoint http://plato.lan:1234/v1/chat/completions
finish config set model darkidol-llama-3.1-8b-instruct-1.3-uncensored_gguf:2
# -------------------------------- TESTS --------------------------------
# 1) which finish should return something
out=$(which finish 2>&1) && code=0 || code=$?  # capture the status without tripping set -e
assert_ok $code "which finish should exit 0"
assert_nonempty "$out" "which finish returned empty output"
pass "which finish returns path"
# 2) finish output should contain 'finish.sh'
out=$(finish 2>&1) && code=0 || code=$?
assert_ok $code "finish should exit 0"
assert_contains "$out" "finish\.sh" "finish output does not contain finish.sh"
pass "finish outputs reference to finish.sh"
# 3) config should contain lmstudio
out=$(finish config 2>&1) && code=0 || code=$?
assert_ok $code "finish config should exit 0"
assert_contains "$out" "lmstudio" "finish config missing lmstudio provider"
pass "finish config contains lmstudio"
# 4) finish command should run
out=$(finish command "ls # show largest files" 2>&1) && code=0 || code=$?
assert_ok $code "finish command did not exit 0"
assert_nonempty "$out" "finish command returned empty output"
assert_no_common_errors "$out"
pass "finish command executed and returned output"
# ------------------------------- CLEANUP --------------------------------
echo "=== CLEANUP ==="
finish remove -y || true
echo "All tests passed."

112
tests/test_finish_ollama.sh Executable file

@@ -0,0 +1,112 @@
#!/usr/bin/env bash
set -euo pipefail
# ----- tiny test framework -----
pass() { printf "✔ %s\n" "$1"; }
fail() { printf "✘ %s\n" "$1"; exit 1; }
assert_ok() {
if [[ $1 -ne 0 ]]; then fail "$2"; fi
}
assert_nonempty() {
if [[ -z "$1" ]]; then fail "$2"; fi
}
assert_contains() {
if [[ ! "$1" =~ $2 ]]; then fail "$3"; fi
}
# Ensure output does not contain any of the common error phrases.
assert_no_common_errors() {
local output="$1"
local -a patterns=(
"ACSH_ACTIVE_API_KEY not set"
"Bad Request"
"Unauthorized"
"Too Many Requests"
"Internal Server Error"
"Unknown Error"
"Failed to parse completions"
"SyntaxError"
"ERROR:"
)
for pat in "${patterns[@]}"; do
if grep -qE "$pat" <<<"$output"; then
printf "Test failed: output contains error pattern '%s'\nFull output follows:\n%s\n" "$pat" "$output"
exit 1
fi
done
}
# --------------------------------
echo "=== OLLAMA TEST: SETUP ==="
# Always run from the repo root for predictable paths
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR/.."
# Install finish (adds to PATH and shell rc files)
bash ./docs/install.sh main
# Load the updated shell rc so `finish` is available on PATH in this session
if [[ -f ~/.bashrc ]]; then
# shellcheck disable=SC1090
source ~/.bashrc || true
fi
# ---- Check prerequisites for local Ollama ----
OLLAMA_URL=${OLLAMA_URL:-"http://localhost:11434"}
echo "Checking Ollama at $OLLAMA_URL ..."
if ! curl -fsS --max-time 2 "$OLLAMA_URL/api/version" >/dev/null 2>&1; then
echo "SKIP: Ollama is not reachable at $OLLAMA_URL. Start it with: 'ollama serve' and ensure a model is pulled (e.g., 'ollama pull llama3')."
exit 0
fi
# Try to discover an installed model; prefer the first listed
MODEL=$(curl -fsS "$OLLAMA_URL/api/tags" | jq -r '.models[0].name // empty' || echo "")
if [[ -z "$MODEL" ]]; then
echo "SKIP: No local Ollama models found. Pull one first, e.g.: 'ollama pull llama3:latest'"
exit 0
fi
echo "Using Ollama model: $MODEL"
# -------------------------------- TESTS --------------------------------
# 1) configure finish to use local Ollama
finish config set provider ollama
finish config set endpoint "$OLLAMA_URL/api/chat"
finish config set model "$MODEL"
# 2) which finish should return something
out=$(which finish 2>&1) && code=0 || code=$?  # capture the status without tripping set -e
assert_ok $code "which finish should exit 0"
assert_nonempty "$out" "which finish returned empty output"
pass "which finish returns path"
# 3) finish output should contain 'finish.sh'
out=$(finish 2>&1) && code=0 || code=$?
assert_ok $code "finish should exit 0"
assert_contains "$out" "finish\.sh" "finish output does not contain finish.sh"
pass "finish outputs reference to finish.sh"
# 4) config should mention ollama
out=$(finish config 2>&1) && code=0 || code=$?
assert_ok $code "finish config should exit 0"
assert_contains "$out" "ollama" "finish config missing ollama provider"
pass "finish config contains ollama"
# 5) finish command should run and return output
out=$(finish command "ls # show largest files" 2>&1) && code=0 || code=$?
assert_ok $code "finish command did not exit 0"
assert_nonempty "$out" "finish command returned empty output"
assert_no_common_errors "$out"
pass "finish command executed and returned output (ollama)"
# ------------------------------- CLEANUP --------------------------------
echo "=== OLLAMA TEST: CLEANUP ==="
finish remove -y || true
echo "Ollama test passed."