The scraper uses TEXT IDs like "A7-40063-2" but DatabaseService was creating
BIGINT columns, causing PRIMARY KEY constraint failures on the server.
Changes:
- auction_id: BIGINT -> TEXT PRIMARY KEY
- lot_id: BIGINT -> TEXT PRIMARY KEY
- sale_id: BIGINT -> TEXT
- Added UNIQUE constraints on URLs
- Added migration script (fix-schema.sql)
This fixes the "UNIQUE constraint failed: auctions.auction_id" errors
and allows bid data to populate correctly on the server.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Former-commit-id: 12c3a732e4
Database Schema Fix Instructions
Problem
The server database was created with BIGINT primary keys for auction_id and lot_id, but the scraper uses TEXT IDs like "A7-40063-2". This causes PRIMARY KEY constraint failures.
Root Cause
- Local DB: auction_id TEXT PRIMARY KEY, lot_id TEXT PRIMARY KEY
- Server DB (old): auction_id BIGINT PRIMARY KEY, lot_id BIGINT PRIMARY KEY
- Scraper data: uses TEXT IDs like "A7-40063-2", "A1-34732-49"
This mismatch prevents the scraper from inserting data, resulting in zero bids showing in the UI.
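For reference, a minimal sketch of the corrected definitions. Only auction_id, lot_id, and sale_id are confirmed by the change list; the url column name is an assumption (the list only says "UNIQUE constraints on URLs"), and the full schema lives in DatabaseService.java:
-- Corrected key columns: TEXT, so IDs like "A7-40063-2" fit (sketch, not the full schema)
CREATE TABLE auctions (
    auction_id TEXT PRIMARY KEY,
    url        TEXT UNIQUE    -- column name assumed
    -- ...remaining auction columns...
);
CREATE TABLE lots (
    lot_id     TEXT PRIMARY KEY,
    sale_id    TEXT,
    auction_id TEXT,
    url        TEXT UNIQUE    -- column name assumed
    -- ...remaining lot columns...
);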
Solution
Step 1: Backup Server Database
ssh tour@192.168.1.149
cd /mnt/okcomputer/output
cp cache.db cache.db.backup.$(date +%Y%m%d_%H%M%S)
Step 2: Upload Fix Script
From your local machine:
scp C:\vibe\auctiora\fix-schema.sql tour@192.168.1.149:/tmp/
Step 3: Stop the Application
ssh tour@192.168.1.149
cd /path/to/docker/compose # wherever docker-compose.yml is located
docker-compose down
Step 4: Apply Schema Fix
ssh tour@192.168.1.149
cd /mnt/okcomputer/output
sqlite3 cache.db < /tmp/fix-schema.sql
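The repository's fix-schema.sql is the source of truth. If you ever need to reconstruct it, keep in mind that SQLite cannot change a column's type with ALTER TABLE, so the usual pattern is a rename / create / copy / drop rebuild. A minimal sketch under that assumption, with column lists abbreviated:
-- Sketch of a rebuild-style migration (abbreviated; the shipped script has the full column lists)
BEGIN TRANSACTION;

ALTER TABLE auctions RENAME TO auctions_old;
CREATE TABLE auctions (
    auction_id TEXT PRIMARY KEY
    -- ...remaining auction columns...
);
INSERT INTO auctions (auction_id)
    SELECT CAST(auction_id AS TEXT) FROM auctions_old;
DROP TABLE auctions_old;

-- Repeat the same rebuild for lots, with lot_id TEXT PRIMARY KEY, sale_id TEXT, auction_id TEXT.

COMMIT;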
Step 5: Verify Schema
sqlite3 cache.db "PRAGMA table_info(auctions);"
# Should show: auction_id TEXT PRIMARY KEY
sqlite3 cache.db "PRAGMA table_info(lots);"
# Should show: lot_id TEXT PRIMARY KEY, sale_id TEXT, auction_id TEXT
# Check data integrity
sqlite3 cache.db "SELECT COUNT(*) FROM auctions;"
sqlite3 cache.db "SELECT COUNT(*) FROM lots;"
Step 6: Rebuild Application with Fixed Schema
# Rebuild the application with the fixed DatabaseService.java
cd C:\vibe\auctiora
./mvnw clean package -DskipTests
# Copy new JAR to server
scp target/quarkus-app/quarkus-run.jar tour@192.168.1.149:/path/to/app/
Step 7: Restart Application
ssh tour@192.168.1.149
cd /path/to/docker/compose
docker-compose up -d
Step 8: Verify Fix
# Check logs for successful imports
docker-compose logs -f --tail=100
# Should see:
# ✓ Imported XXX auctions
# ✓ Imported XXXXX lots
# No more "PRIMARY KEY constraint failed" errors
# Check UI at http://192.168.1.149:8081/
# Should now show:
# - Lots with Bids: > 0
# - Total Bid Value: > €0.00
# - Average Bid: > €0.00
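As an optional database-side cross-check of those UI numbers (the current_bid column name below is only a guess for illustration; substitute whatever bid column DatabaseService.java actually writes):
# Column name is a placeholder for the real bid column
sqlite3 cache.db "SELECT COUNT(*) FROM lots WHERE current_bid > 0;"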
Alternative: Quick Fix Without Downtime
If you can't afford an extended maintenance window, move the mismatched database aside and let the application rebuild it:
ssh tour@192.168.1.149
cd /mnt/okcomputer/output
mv cache.db cache.db.old
docker-compose restart
# The app will create a new database with correct schema
# Wait for scraper to re-populate data (may take 10-15 minutes)
Files Changed
- DatabaseService.java - Fixed schema definitions (auction_id, lot_id, sale_id now TEXT)
- fix-schema.sql - SQL migration script to fix the existing database
- SCHEMA_FIX_INSTRUCTIONS.md - This file
Testing Locally
Before deploying to server, test locally:
cd C:\vibe\auctiora
./mvnw clean test
# All tests should pass with new schema
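For an extra, framework-independent sanity check that TEXT IDs round-trip cleanly, assuming the sqlite3 CLI is installed locally (the one-column table is a throwaway stand-in for the real schema):
# Throwaway database to prove a TEXT primary key accepts the scraper's IDs
sqlite3 schema-smoke.db "CREATE TABLE auctions (auction_id TEXT PRIMARY KEY);"
sqlite3 schema-smoke.db "INSERT INTO auctions VALUES ('A7-40063-2');"
sqlite3 schema-smoke.db "SELECT auction_id FROM auctions;"
# Should print: A7-40063-2
rm schema-smoke.db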