Fix database schema: Change auction_id and lot_id from BIGINT to TEXT
The scraper uses TEXT IDs like "A7-40063-2" but DatabaseService was creating BIGINT columns, causing PRIMARY KEY constraint failures on the server.

Changes:
- auction_id: BIGINT -> TEXT PRIMARY KEY
- lot_id: BIGINT -> TEXT PRIMARY KEY
- sale_id: BIGINT -> TEXT
- Added UNIQUE constraints on URLs
- Added migration script (fix-schema.sql)

This fixes the "UNIQUE constraint failed: auctions.auction_id" errors and allows bid data to populate correctly on the server.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
SCHEMA_FIX_INSTRUCTIONS.md (new file, 113 lines)
@@ -0,0 +1,113 @@
# Database Schema Fix Instructions

## Problem

The server database was created with `BIGINT` primary keys for `auction_id` and `lot_id`, but the scraper uses TEXT IDs like "A7-40063-2". This causes PRIMARY KEY constraint failures.

## Root Cause

- Local DB: `auction_id TEXT PRIMARY KEY`, `lot_id TEXT PRIMARY KEY`
- Server DB (old): `auction_id BIGINT PRIMARY KEY`, `lot_id BIGINT PRIMARY KEY`
- Scraper data: uses TEXT IDs like "A7-40063-2", "A1-34732-49"

This mismatch prevents the scraper from inserting data, resulting in zero bids showing in the UI.
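
The failure mode is subtle because SQLite's type affinity lets a `BIGINT` column silently store a non-numeric ID as TEXT, so mixed ID types can coexist until a key comparison or a Java-side long parse goes wrong. A minimal illustration (the `demo` table is hypothetical, not from the codebase):

```bash
# SQLite keeps a value it cannot losslessly convert to integer as TEXT,
# even in a column declared BIGINT (INTEGER affinity).
sqlite3 :memory: <<'SQL'
CREATE TABLE demo (id BIGINT PRIMARY KEY);
INSERT INTO demo VALUES ('A7-40063-2');  -- accepted, stored with typeof 'text'
INSERT INTO demo VALUES (40063);         -- stored with typeof 'integer'
SELECT id, typeof(id) FROM demo ORDER BY typeof(id);
SQL
```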

## Solution

### Step 1: Backup Server Database

```bash
ssh tour@192.168.1.149
cd /mnt/okcomputer/output
cp cache.db cache.db.backup.$(date +%Y%m%d_%H%M%S)
```
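
Before going further, it can be worth confirming the backup copy is readable (an extra check, not part of the original procedure; path as in Step 1):

```bash
# Sanity-check the newest backup before touching the live database.
BACKUP=$(ls -t /mnt/okcomputer/output/cache.db.backup.* | head -n 1)
sqlite3 "$BACKUP" "PRAGMA integrity_check;"  # prints "ok" for a healthy file
```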

### Step 2: Upload Fix Script

From your local machine:

```bash
scp C:\vibe\auctiora\fix-schema.sql tour@192.168.1.149:/tmp/
```

### Step 3: Stop the Application

```bash
ssh tour@192.168.1.149
cd /path/to/docker/compose  # wherever docker-compose.yml is located
docker-compose down
```

### Step 4: Apply Schema Fix

```bash
ssh tour@192.168.1.149
cd /mnt/okcomputer/output
sqlite3 cache.db < /tmp/fix-schema.sql
```
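
Since fix-schema.sql drops and recreates tables, a more defensive variant is to wrap the whole script in one transaction, so a mid-script failure leaves the database untouched (a sketch; it assumes the script contains no BEGIN/COMMIT of its own, which the version in this commit does not):

```bash
# Apply the migration atomically: -bail stops at the first error, and the
# uncommitted transaction rolls back when the connection closes.
{ echo "BEGIN;"; cat /tmp/fix-schema.sql; echo "COMMIT;"; } \
  | sqlite3 -bail /mnt/okcomputer/output/cache.db
```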

### Step 5: Verify Schema

```bash
sqlite3 cache.db "PRAGMA table_info(auctions);"
# Should show: auction_id TEXT PRIMARY KEY

sqlite3 cache.db "PRAGMA table_info(lots);"
# Should show: lot_id TEXT PRIMARY KEY, sale_id TEXT, auction_id TEXT

# Check data integrity
sqlite3 cache.db "SELECT COUNT(*) FROM auctions;"
sqlite3 cache.db "SELECT COUNT(*) FROM lots;"
```

### Step 6: Rebuild Application with Fixed Schema

```bash
# Build new image with fixed DatabaseService.java
cd C:\vibe\auctiora
./mvnw clean package -DskipTests

# Copy new JAR to server
scp target/quarkus-app/quarkus-run.jar tour@192.168.1.149:/path/to/app/
```

### Step 7: Restart Application

```bash
ssh tour@192.168.1.149
cd /path/to/docker/compose
docker-compose up -d
```

### Step 8: Verify Fix

```bash
# Check logs for successful imports
docker-compose logs -f --tail=100

# Should see:
# ✓ Imported XXX auctions
# ✓ Imported XXXXX lots
# No more "PRIMARY KEY constraint failed" errors

# Check UI at http://192.168.1.149:8081/
# Should now show:
# - Lots with Bids: > 0
# - Total Bid Value: > €0.00
# - Average Bid: > €0.00
```
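
The UI numbers in Step 8 derive from the lots data, so a direct query is a quick cross-check (column names as in the schema in fix-schema.sql below):

```bash
# Confirm bid data is flowing: count lots with at least one recorded bid.
sqlite3 /mnt/okcomputer/output/cache.db \
  "SELECT COUNT(*) FROM lots WHERE bid_count > 0 AND current_bid IS NOT NULL;"
```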

## Alternative: Quick Fix Without Downtime

If you can't afford the downtime of a full migration, delete the corrupted database and let the application rebuild it:

```bash
ssh tour@192.168.1.149
cd /mnt/okcomputer/output
mv cache.db cache.db.old
docker-compose restart

# The app will create a new database with the correct schema.
# Wait for the scraper to re-populate data (may take 10-15 minutes).
```

## Files Changed

1. `DatabaseService.java` - Fixed schema definitions (`auction_id`, `lot_id`, `sale_id` now TEXT)
2. `fix-schema.sql` - SQL migration script to fix an existing database
3. `SCHEMA_FIX_INSTRUCTIONS.md` - This file

## Testing Locally

Before deploying to the server, test locally:

```bash
cd C:\vibe\auctiora
./mvnw clean test
# All tests should pass with the new schema
```
fix-schema.sql (new file, 128 lines)
@@ -0,0 +1,128 @@
-- Schema Fix Script for Server Database
-- Migrates auction_id and lot_id from BIGINT to TEXT to match the scraper format.
-- The scraper uses TEXT IDs like "A7-40063-2" but DatabaseService.java was creating BIGINT columns.

-- Step 1: Backup existing data
CREATE TABLE IF NOT EXISTS auctions_backup AS SELECT * FROM auctions;
CREATE TABLE IF NOT EXISTS lots_backup AS SELECT * FROM lots;
CREATE TABLE IF NOT EXISTS images_backup AS SELECT * FROM images;

-- Step 2: Drop existing tables (children first; SQLite has no DROP ... CASCADE)
DROP TABLE IF EXISTS images;
DROP TABLE IF EXISTS lots;
DROP TABLE IF EXISTS auctions;

-- Step 3: Recreate auctions table with TEXT primary key (matching scraper format)
CREATE TABLE auctions (
    auction_id TEXT PRIMARY KEY,
    title TEXT NOT NULL,
    location TEXT,
    city TEXT,
    country TEXT,
    url TEXT NOT NULL UNIQUE,
    type TEXT,
    lot_count INTEGER DEFAULT 0,
    closing_time TEXT,
    discovered_at INTEGER
);

-- Step 4: Recreate lots table with TEXT primary key (matching scraper format)
CREATE TABLE lots (
    lot_id TEXT PRIMARY KEY,
    sale_id TEXT,
    auction_id TEXT,
    title TEXT,
    description TEXT,
    manufacturer TEXT,
    type TEXT,
    year INTEGER,
    category TEXT,
    current_bid REAL,
    currency TEXT DEFAULT 'EUR',
    url TEXT UNIQUE,
    closing_time TEXT,
    closing_notified INTEGER DEFAULT 0,
    starting_bid REAL,
    minimum_bid REAL,
    status TEXT,
    brand TEXT,
    model TEXT,
    attributes_json TEXT,
    first_bid_time TEXT,
    last_bid_time TEXT,
    bid_velocity REAL,
    bid_increment REAL,
    year_manufactured INTEGER,
    condition_score REAL,
    condition_description TEXT,
    serial_number TEXT,
    damage_description TEXT,
    followers_count INTEGER DEFAULT 0,
    estimated_min_price REAL,
    estimated_max_price REAL,
    lot_condition TEXT,
    appearance TEXT,
    estimated_min REAL,
    estimated_max REAL,
    next_bid_step_cents INTEGER,
    condition TEXT,
    category_path TEXT,
    city_location TEXT,
    country_code TEXT,
    bidding_status TEXT,
    packaging TEXT,
    quantity INTEGER,
    vat REAL,
    buyer_premium_percentage REAL,
    remarks TEXT,
    reserve_price REAL,
    reserve_met INTEGER,
    view_count INTEGER,
    bid_count INTEGER,
    viewing_time TEXT,
    pickup_date TEXT,
    location TEXT,
    scraped_at TEXT,
    FOREIGN KEY (auction_id) REFERENCES auctions(auction_id),
    FOREIGN KEY (sale_id) REFERENCES auctions(auction_id)
);

-- Step 5: Recreate images table
CREATE TABLE images (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    lot_id TEXT,
    url TEXT,
    local_path TEXT,
    labels TEXT,
    processed_at INTEGER,
    downloaded INTEGER DEFAULT 0,
    FOREIGN KEY (lot_id) REFERENCES lots(lot_id)
);

-- Step 6: Create bid_history table if it doesn't exist
CREATE TABLE IF NOT EXISTS bid_history (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    lot_id TEXT,
    bid_amount REAL,
    bid_time TEXT,
    is_autobid INTEGER DEFAULT 0,
    bidder_id TEXT,
    bidder_number INTEGER,
    FOREIGN KEY (lot_id) REFERENCES lots(lot_id)
);

-- Step 7: Restore data from the backups
-- NOTE: SELECT * requires each backup table to have the same column count and
-- order as its recreated table; if the old schema had fewer columns, list the
-- shared columns explicitly instead.
INSERT OR IGNORE INTO auctions SELECT * FROM auctions_backup;
INSERT OR IGNORE INTO lots SELECT * FROM lots_backup;
INSERT OR IGNORE INTO images SELECT * FROM images_backup;

-- Step 8: Create indexes for performance
CREATE INDEX IF NOT EXISTS idx_auctions_country ON auctions(country);
CREATE INDEX IF NOT EXISTS idx_lots_sale_id ON lots(sale_id);
CREATE INDEX IF NOT EXISTS idx_lots_auction_id ON lots(auction_id);
CREATE INDEX IF NOT EXISTS idx_images_lot_id ON images(lot_id);

-- Step 9: Clean up backup tables (optional - uncomment once the restore is verified)
-- DROP TABLE auctions_backup;
-- DROP TABLE lots_backup;
-- DROP TABLE images_backup;
@@ -40,14 +40,15 @@ public class DatabaseService {
 	stmt.execute("PRAGMA synchronous=NORMAL");
 
 	// Auctions table (populated by external scraper)
+	// auction_id is TEXT to match scraper format (e.g., "A7-40063-2")
 	stmt.execute("""
 		CREATE TABLE IF NOT EXISTS auctions (
-			auction_id BIGINT PRIMARY KEY,
+			auction_id TEXT PRIMARY KEY,
 			title TEXT NOT NULL,
 			location TEXT,
 			city TEXT,
 			country TEXT,
-			url TEXT NOT NULL,
+			url TEXT NOT NULL UNIQUE,
 			type TEXT,
 			lot_count INTEGER DEFAULT 0,
 			closing_time TEXT,
@@ -55,10 +56,12 @@ public class DatabaseService {
 		)""");
 
 	// Lots table (populated by external scraper)
+	// lot_id and sale_id are TEXT to match scraper format (e.g., "A1-34732-49")
 	stmt.execute("""
 		CREATE TABLE IF NOT EXISTS lots (
-			lot_id BIGINT PRIMARY KEY,
-			sale_id BIGINT,
+			lot_id TEXT PRIMARY KEY,
+			sale_id TEXT,
+			auction_id TEXT,
 			title TEXT,
 			description TEXT,
 			manufacturer TEXT,
@@ -67,18 +70,20 @@ public class DatabaseService {
 			category TEXT,
 			current_bid REAL,
 			currency TEXT,
-			url TEXT,
+			url TEXT UNIQUE,
 			closing_time TEXT,
 			closing_notified INTEGER DEFAULT 0,
-			FOREIGN KEY (sale_id) REFERENCES auctions(auction_id)
+			FOREIGN KEY (sale_id) REFERENCES auctions(auction_id),
+			FOREIGN KEY (auction_id) REFERENCES auctions(auction_id)
 		)""");
 
 	// Images table (populated by external scraper with URLs and local_path)
 	// This process only adds labels via object detection
+	// lot_id is TEXT to match scraper format
 	stmt.execute("""
 		CREATE TABLE IF NOT EXISTS images (
 			id INTEGER PRIMARY KEY AUTOINCREMENT,
-			lot_id INTEGER,
+			lot_id TEXT,
 			url TEXT,
 			local_path TEXT,
 			labels TEXT,