Compare commits
4 commits: `7600cebcbb` ... `3cc0d40fa3`

| SHA1 |
|---|
| 3cc0d40fa3 |
| be65f4a5e6 |
| 3358a2693c |
| 62cda5c0cb |

`.mvn/wrapper/maven-wrapper.config` (2 changes):

```diff
@@ -1 +1 @@
-jvmArguments=--add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED
+jvmArguments=-Djava.util.logging.manager=org.jboss.logmanager.LogManager --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED
```
````diff
@@ -99,8 +99,8 @@ brew install opencv
 Download YOLO model files for object detection:
 
 ```bash
-mkdir models
-cd models
+mkdir /mnt/okcomputer/output/models
+cd /mnt/okcomputer/output/models
 
 # Download YOLOv4 config
 wget https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4.cfg
````
@@ -1,113 +0,0 @@
# Database Schema Fix Instructions

## Problem

The server database was created with `BIGINT` primary keys for `auction_id` and `lot_id`, but the scraper uses TEXT IDs such as "A7-40063-2". This causes PRIMARY KEY constraint failures.

## Root Cause

- Local DB: `auction_id TEXT PRIMARY KEY`, `lot_id TEXT PRIMARY KEY`
- Server DB (old): `auction_id BIGINT PRIMARY KEY`, `lot_id BIGINT PRIMARY KEY`
- Scraper data: uses TEXT IDs such as "A7-40063-2", "A1-34732-49"

This mismatch prevents the scraper from inserting data, resulting in zero bids showing in the UI.
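The failure mode can be reproduced in a few lines (a sketch using Python's stdlib `sqlite3`; note that SQLite only type-enforces the exact `INTEGER PRIMARY KEY` form, which stands in here for the old integer-keyed schema — the actual server schema and `fix-schema.sql` contents are not shown in this guide):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Old-style schema: an integer primary key rejects the scraper's TEXT IDs.
conn.execute("CREATE TABLE lots_old (lot_id INTEGER PRIMARY KEY, title TEXT)")
try:
    conn.execute("INSERT INTO lots_old VALUES ('A7-40063-2', 'example lot')")
    failed = False
except sqlite3.Error:  # SQLite reports a datatype mismatch
    failed = True

# Fixed schema: a TEXT primary key accepts the composite IDs.
conn.execute("CREATE TABLE lots (lot_id TEXT PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO lots VALUES ('A7-40063-2', 'example lot')")
ok = conn.execute("SELECT COUNT(*) FROM lots").fetchone()[0]
print(failed, ok)  # True 1
```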

## Solution

### Step 1: Backup Server Database
```bash
ssh tour@192.168.1.149
cd /mnt/okcomputer/output
cp cache.db cache.db.backup.$(date +%Y%m%d_%H%M%S)
```

### Step 2: Upload Fix Script
From your local machine:
```bash
scp C:\vibe\auctiora\fix-schema.sql tour@192.168.1.149:/tmp/
```

### Step 3: Stop the Application
```bash
ssh tour@192.168.1.149
cd /path/to/docker/compose  # wherever docker-compose.yml is located
docker-compose down
```

### Step 4: Apply Schema Fix
```bash
ssh tour@192.168.1.149
cd /mnt/okcomputer/output
sqlite3 cache.db < /tmp/fix-schema.sql
```

### Step 5: Verify Schema
```bash
sqlite3 cache.db "PRAGMA table_info(auctions);"
# Should show: auction_id TEXT PRIMARY KEY

sqlite3 cache.db "PRAGMA table_info(lots);"
# Should show: lot_id TEXT PRIMARY KEY, sale_id TEXT, auction_id TEXT

# Check data integrity
sqlite3 cache.db "SELECT COUNT(*) FROM auctions;"
sqlite3 cache.db "SELECT COUNT(*) FROM lots;"
```

### Step 6: Rebuild Application with Fixed Schema
```bash
# Build a new image with the fixed DatabaseService.java
cd C:\vibe\auctiora
./mvnw clean package -DskipTests

# Copy the new JAR to the server
scp target/quarkus-app/quarkus-run.jar tour@192.168.1.149:/path/to/app/
```

### Step 7: Restart Application
```bash
ssh tour@192.168.1.149
cd /path/to/docker/compose
docker-compose up -d
```

### Step 8: Verify Fix
```bash
# Check logs for successful imports
docker-compose logs -f --tail=100

# Should see:
# ✓ Imported XXX auctions
# ✓ Imported XXXXX lots
# No more "PRIMARY KEY constraint failed" errors

# Check the UI at http://192.168.1.149:8081/
# Should now show:
# - Lots with Bids: > 0
# - Total Bid Value: > €0.00
# - Average Bid: > €0.00
```

## Alternative: Quick Fix Without a Migration
If you want to skip the migration script entirely, delete the corrupted database and let the app rebuild it:

```bash
ssh tour@192.168.1.149
cd /mnt/okcomputer/output
mv cache.db cache.db.old
docker-compose restart

# The app will create a new database with the correct schema
# Wait for the scraper to re-populate data (may take 10-15 minutes)
```

## Files Changed
1. `DatabaseService.java` - Fixed schema definitions (`auction_id`, `lot_id`, `sale_id` are now TEXT)
2. `fix-schema.sql` - SQL migration script to fix the existing database
3. `SCHEMA_FIX_INSTRUCTIONS.md` - This file

## Testing Locally
Before deploying to the server, test locally:
```bash
cd C:\vibe\auctiora
./mvnw clean test
# All tests should pass with the new schema
```

```diff
@@ -14,8 +14,8 @@ services:
       - AUCTION_IMAGES_PATH=/mnt/okcomputer/output/images
 
       # Notification configuration
-      - AUCTION_NOTIFICATION_CONFIG=desktop
+      # - AUCTION_NOTIFICATION_CONFIG=desktop
+      - AUCTION_NOTIFICATION_CONFIG=smtp:michael.bakker1986@gmail.com:agrepolhlnvhipkv:michael.bakker1986@gmail.com
       # Quarkus configuration
       - QUARKUS_HTTP_PORT=8081
       - QUARKUS_HTTP_HOST=0.0.0.0
```
@@ -1,192 +0,0 @@
# Database Cleanup Guide

## Problem: Mixed Data Formats

Your production database (`cache.db`) contains data from two different scrapers:

### Valid Data (99.92%)
- **Format**: `A1-34732-49` (lot_id) + `c1f44ec2-ad6e-4c98-b0e2-cb1d8ccddcab` (auction_id UUID)
- **Count**: 16,794 lots
- **Source**: Current GraphQL-based scraper
- **Status**: ✅ Clean, with proper auction_id

### Invalid Data (0.08%)
- **Format**: `bmw-550i-4-4-v8-high-executive-...` (slug as lot_id) + `""` (empty auction_id)
- **Count**: 13 lots
- **Source**: Old legacy scraper
- **Status**: ❌ Missing auction_id, causes issues

## Impact

These 13 invalid entries:
- Cause `NullPointerException` in analytics when grouping by country
- Cannot be properly linked to auctions
- Skew statistics slightly
- May cause issues with intelligence features that rely on auction_id

## Solution 1: Clean Sync (Recommended)

The updated sync script now **automatically removes old local data** before syncing:

```bash
# Windows PowerShell
.\scripts\Sync-ProductionData.ps1

# Linux/Mac
./scripts/sync-production-data.sh --db-only
```

**What it does**:
1. Backs up the existing database to `cache.db.backup-YYYYMMDD-HHMMSS`
2. **Removes the old local database completely**
3. Downloads a fresh copy from production
4. Shows a data quality report

**Output includes**:
```
Database statistics:
┌─────────────┬────────┐
│ table_name  │ count  │
├─────────────┼────────┤
│ auctions    │ 526    │
│ lots        │ 16807  │
│ images      │ 536502 │
│ cache       │ 2134   │
└─────────────┴────────┘

Data quality:
┌────────────────────────────────────┬────────┬────────────┐
│ metric                             │ count  │ percentage │
├────────────────────────────────────┼────────┼────────────┤
│ Valid lots                         │ 16794  │ 99.92%     │
│ Invalid lots (missing auction_id)  │ 13     │ 0.08%      │
│ Lots with intelligence fields      │ 0      │ 0.00%      │
└────────────────────────────────────┴────────┴────────────┘
```

## Solution 2: Manual Cleanup

If you want to clean your existing local database without re-downloading:

```bash
# Dry run (see what would be deleted)
./scripts/cleanup-database.sh --dry-run

# Actual cleanup
./scripts/cleanup-database.sh
```

**What it does**:
1. Creates a backup before cleanup
2. Deletes lots with missing auction_id
3. Deletes orphaned images (images without matching lots)
4. Compacts the database (VACUUM) to reclaim space
5. Shows before/after statistics

**Example output**:
```
Current database state:
┌────────────────────────────────────┬────────┐
│ metric                             │ count  │
├────────────────────────────────────┼────────┤
│ Total lots                         │ 16807  │
│ Valid lots (with auction_id)       │ 16794  │
│ Invalid lots (missing auction_id)  │ 13     │
└────────────────────────────────────┴────────┘

Analyzing data to clean up...
  → Invalid lots to delete: 13
  → Orphaned images to delete: 0

This will permanently delete the above records.
Continue? (y/N) y

Cleaning up database...
[1/3] Deleting invalid lots...
  ✓ Deleted 13 invalid lots
[2/3] Deleting orphaned images...
  ✓ Deleted 0 orphaned images
[3/3] Compacting database...
  ✓ Database compacted

Final database state:
┌───────────────┬────────┐
│ metric        │ count  │
├───────────────┼────────┤
│ Total lots    │ 16794  │
│ Total images  │ 536502 │
└───────────────┴────────┘

Database size: 8.9G
```

## Solution 3: SQL Manual Cleanup

If you prefer to clean up manually using SQL:

```sql
-- Backup first!
-- cp cache.db cache.db.backup

-- Check invalid entries
SELECT COUNT(*), 'Invalid' as type
FROM lots
WHERE auction_id IS NULL OR auction_id = ''
UNION ALL
SELECT COUNT(*), 'Valid'
FROM lots
WHERE auction_id IS NOT NULL AND auction_id != '';

-- Delete invalid lots
DELETE FROM lots
WHERE auction_id IS NULL OR auction_id = '';

-- Delete orphaned images
DELETE FROM images
WHERE lot_id NOT IN (SELECT lot_id FROM lots);

-- Compact database
VACUUM;
```
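The same cleanup statements can be exercised against a throwaway database first (a Python stdlib `sqlite3` sketch; the minimal tables below are illustrative, not the full production schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lots (lot_id TEXT PRIMARY KEY, auction_id TEXT);
CREATE TABLE images (id INTEGER PRIMARY KEY, lot_id TEXT, url TEXT);
-- one valid lot, one legacy slug-lot with an empty auction_id
INSERT INTO lots VALUES ('A1-34732-49', 'c1f44ec2-ad6e-4c98-b0e2-cb1d8ccddcab');
INSERT INTO lots VALUES ('bmw-550i-4-4-v8-high-executive', '');
INSERT INTO images VALUES (1, 'A1-34732-49', 'http://example.invalid/img1.jpg');
INSERT INTO images VALUES (2, 'bmw-550i-4-4-v8-high-executive', 'http://example.invalid/img2.jpg');
""")

# Same statements as the manual cleanup above
conn.execute("DELETE FROM lots WHERE auction_id IS NULL OR auction_id = ''")
conn.execute("DELETE FROM images WHERE lot_id NOT IN (SELECT lot_id FROM lots)")

lots = conn.execute("SELECT COUNT(*) FROM lots").fetchone()[0]
images = conn.execute("SELECT COUNT(*) FROM images").fetchone()[0]
print(lots, images)  # 1 1
```

Deleting the invalid lot also removes its now-orphaned image, which is why the cleanup script runs the two deletes in that order.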

## Prevention: Production Database Cleanup

To prevent these invalid entries from accumulating on production, you can:

1. **Clean the production database** (one-time):
```bash
ssh tour@athena.lan
# the base alpine image ships without sqlite, so install it first
docker run --rm -it -v shared-auction-data:/data alpine sh
apk add --no-cache sqlite
sqlite3 /data/cache.db "DELETE FROM lots WHERE auction_id IS NULL OR auction_id = '';"
```

2. **Update the scraper** to ensure all lots have an auction_id
3. **Add validation** in the scraper to reject lots without an auction_id

## When to Clean

### Immediately if:
- ❌ Seeing `NullPointerException` in analytics
- ❌ Dashboard insights failing
- ❌ Country distribution not working

### Periodically:
- 🔄 After syncing from production (if production has invalid data)
- 🔄 Weekly/monthly maintenance
- 🔄 Before major testing or demos

## Recommendation

**Use Solution 1 (Clean Sync)** for simplicity:
- ✅ Guarantees a clean state
- ✅ No manual SQL needed
- ✅ Shows a data quality report
- ✅ Safe (automatic backup)

The 13 invalid entries are from an old scraper and represent only 0.08% of the data, so cleaning them up has minimal impact but prevents future errors.

---

**Related Documentation**:
- [Sync Scripts README](../scripts/README.md)
- [Data Sync Setup](DATA_SYNC_SETUP.md)
- [Database Architecture](../wiki/DATABASE_ARCHITECTURE.md)

@@ -1,584 +0,0 @@
# Implementation Complete ✅

## Summary

All requirements have been successfully implemented:

### ✅ 1. Test Libraries Added

**pom.xml updated with:**
- JUnit 5 (5.10.1) - Testing framework
- Mockito Core (5.8.0) - Mocking framework
- Mockito JUnit Jupiter (5.8.0) - JUnit integration
- AssertJ (3.24.2) - Fluent assertions

**Run tests:**
```bash
mvn test
```

---

### ✅ 2. Paths Configured for Windows

**Database:**
```
C:\mnt\okcomputer\output\cache.db
```

**Images:**
```
C:\mnt\okcomputer\output\images\{saleId}\{lotId}\
```

**Files Updated:**
- `Main.java:31` - Database path
- `ImageProcessingService.java:52` - Image storage path

---

### ✅ 3. Comprehensive Test Suite (90 Tests)

| Test File | Tests | Coverage |
|-----------|-------|----------|
| ScraperDataAdapterTest | 13 | Data transformation, ID parsing, currency |
| DatabaseServiceTest | 15 | CRUD operations, concurrency |
| ImageProcessingServiceTest | 11 | Download, detection, errors |
| ObjectDetectionServiceTest | 10 | YOLO initialization, detection |
| NotificationServiceTest | 19 | Desktop/email, priorities |
| TroostwijkMonitorTest | 12 | Orchestration, monitoring |
| IntegrationTest | 10 | End-to-end workflows |
| **TOTAL** | **90** | **Complete system** |

**Documentation:** See `TEST_SUITE_SUMMARY.md`

---

### ✅ 4. Workflow Integration & Orchestration

**New Component:** `WorkflowOrchestrator.java`

**4 Automated Workflows:**

1. **Scraper Data Import** (every 30 min)
   - Imports auctions, lots, image URLs
   - Sends notifications for significant data

2. **Image Processing** (every 1 hour)
   - Downloads images
   - Runs YOLO object detection
   - Saves labels to the database

3. **Bid Monitoring** (every 15 min)
   - Checks for bid changes
   - Sends notifications

4. **Closing Alerts** (every 5 min)
   - Finds lots closing soon
   - Sends high-priority notifications

---

### ✅ 5. Running Modes

**Main.java now supports 4 modes:**

#### Mode 1: workflow (Default - Recommended)
```bash
java -jar troostwijk-monitor.jar workflow
# OR
run-workflow.bat
```
- Runs all workflows continuously
- Built-in scheduling
- Best for production

#### Mode 2: once (For Cron/Task Scheduler)
```bash
java -jar troostwijk-monitor.jar once
# OR
run-once.bat
```
- Runs the complete workflow once
- Exits after completion
- Perfect for external schedulers

#### Mode 3: legacy (Backward Compatible)
```bash
java -jar troostwijk-monitor.jar legacy
```
- Original monitoring approach
- Kept for compatibility

#### Mode 4: status (Quick Check)
```bash
java -jar troostwijk-monitor.jar status
# OR
check-status.bat
```
- Shows current status
- Exits immediately

---

### ✅ 6. Windows Scheduling Scripts

**Batch Scripts Created:**

1. **run-workflow.bat**
   - Starts workflow mode
   - Continuous operation
   - For manual/startup use

2. **run-once.bat**
   - Single execution
   - For Task Scheduler
   - Exit code support

3. **check-status.bat**
   - Quick status check
   - Shows database stats

**PowerShell Automation:**

4. **setup-windows-task.ps1**
   - Creates Task Scheduler tasks automatically
   - Sets up 2 scheduled tasks:
     - Workflow runner (every 30 min)
     - Status checker (every 6 hours)

**Usage:**
```powershell
# Run as Administrator
.\setup-windows-task.ps1
```

---

### ✅ 7. Event-Driven Triggers

**WorkflowOrchestrator supports event-driven execution:**

```java
// 1. New auction discovered
orchestrator.onNewAuctionDiscovered(auctionInfo);

// 2. Bid change detected
orchestrator.onBidChange(lot, previousBid, newBid);

// 3. Objects detected in image
orchestrator.onObjectsDetected(lotId, labels);
```

**Benefits:**
- React immediately to important events
- No waiting for the next scheduled run
- Flexible integration with external systems

---

### ✅ 8. Comprehensive Documentation

**Documentation Created:**

1. **TEST_SUITE_SUMMARY.md**
   - Complete test coverage overview
   - 90 test cases documented
   - Running instructions
   - Test patterns explained

2. **WORKFLOW_GUIDE.md**
   - Complete workflow integration guide
   - Running modes explained
   - Windows Task Scheduler setup
   - Event-driven triggers
   - Configuration options
   - Troubleshooting guide
   - Advanced integration examples

3. **README.md** (Updated)
   - System architecture diagram
   - Integration flow
   - User interaction points
   - Value estimation pipeline
   - Integration hooks table

---

## Quick Start

### Option A: Continuous Operation (Recommended)

```bash
# Build
mvn clean package

# Run workflow mode
java -jar target\troostwijk-scraper-1.0-SNAPSHOT-jar-with-dependencies.jar workflow

# Or use the batch script
run-workflow.bat
```

**What runs:**
- ✅ Data import every 30 min
- ✅ Image processing every 1 hour
- ✅ Bid monitoring every 15 min
- ✅ Closing alerts every 5 min

---

### Option B: Windows Task Scheduler

```powershell
# 1. Build JAR
mvn clean package

# 2. Set up scheduled tasks (run as Admin)
.\setup-windows-task.ps1

# Done! The workflow runs automatically every 30 minutes
```

---

### Option C: Manual/Cron Execution

```bash
# Run once
java -jar target\troostwijk-scraper-1.0-SNAPSHOT-jar-with-dependencies.jar once

# Or
run-once.bat

# Schedule externally (Windows Task Scheduler, cron, etc.)
```

---

## Architecture Overview

```
┌─────────────────────────────────────────────────────────────┐
│              External Scraper (Python)                      │
│  Populates: auctions, lots, images tables                   │
└─────────────────────────┬───────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────────┐
│                    SQLite Database                          │
│  C:\mnt\okcomputer\output\cache.db                          │
└─────────────────────────┬───────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────────┐
│           WorkflowOrchestrator (This System)                │
│  ┌─────────────────────────────────────────────────────┐   │
│  │ Workflow 1: Scraper Import    (every 30 min)        │   │
│  │ Workflow 2: Image Processing  (every 1 hour)        │   │
│  │ Workflow 3: Bid Monitoring    (every 15 min)        │   │
│  │ Workflow 4: Closing Alerts    (every 5 min)         │   │
│  └─────────────────────────────────────────────────────┘   │
│                          │                                  │
│  ┌─────────────────────────────────────────────────────┐   │
│  │ ImageProcessingService                              │   │
│  │ - Downloads images                                  │   │
│  │ - Stores: C:\mnt\okcomputer\output\images\          │   │
│  └─────────────────────────────────────────────────────┘   │
│                          │                                  │
│  ┌─────────────────────────────────────────────────────┐   │
│  │ ObjectDetectionService (YOLO)                       │   │
│  │ - Detects objects in images                         │   │
│  │ - Labels: car, truck, machinery, etc.               │   │
│  └─────────────────────────────────────────────────────┘   │
│                          │                                  │
│  ┌─────────────────────────────────────────────────────┐   │
│  │ NotificationService                                 │   │
│  │ - Desktop notifications (Windows tray)              │   │
│  │ - Email notifications (Gmail SMTP)                  │   │
│  └─────────────────────────────────────────────────────┘   │
└─────────────────────────┬───────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────────┐
│                  User Notifications                         │
│  - Bid changes                                              │
│  - Closing alerts                                           │
│  - Object detection results                                 │
│  - Value estimates (future)                                 │
└─────────────────────────────────────────────────────────────┘
```

---

## Integration Points

### 1. Database Integration
- **Read:** Auctions, lots, image URLs from the external scraper
- **Write:** Processed images, object labels, notifications

### 2. File System Integration
- **Read:** YOLO model files (models/)
- **Write:** Downloaded images (C:\mnt\okcomputer\output\images\)

### 3. External Scraper Integration
- **Mode:** Shared SQLite database
- **Frequency:** Scraper populates, monitor enriches

### 4. Notification Integration
- **Desktop:** Windows system tray
- **Email:** Gmail SMTP (optional)

---

## Testing

### Run All Tests
```bash
mvn test
```

### Run Specific Test
```bash
mvn test -Dtest=IntegrationTest
mvn test -Dtest=WorkflowOrchestratorTest
```

### Test Coverage
```bash
mvn jacoco:prepare-agent test jacoco:report
# Report: target/site/jacoco/index.html
```

---

## Configuration

### Environment Variables

```bash
# Windows (cmd)
set DATABASE_FILE=C:\mnt\okcomputer\output\cache.db
set NOTIFICATION_CONFIG=desktop

# Windows (PowerShell)
$env:DATABASE_FILE="C:\mnt\okcomputer\output\cache.db"
$env:NOTIFICATION_CONFIG="desktop"

# For email notifications
set NOTIFICATION_CONFIG=smtp:your@gmail.com:app_password:recipient@example.com
```
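For illustration, the `NOTIFICATION_CONFIG` formats shown above (`desktop` and `smtp:<sender>:<app password>:<recipient>`) can be parsed like this (a Python sketch; the helper name is hypothetical, not the monitor's actual API):

```python
# Hypothetical parser for the NOTIFICATION_CONFIG formats shown above:
#   "desktop"  or  "smtp:<sender>:<app_password>:<recipient>"
def parse_notification_config(value: str) -> dict:
    if value == "desktop":
        return {"mode": "desktop"}
    if value.startswith("smtp:"):
        # split at most 3 times so the recipient keeps any trailing colons
        sender, password, recipient = value.split(":", 3)[1:]
        return {"mode": "smtp", "sender": sender,
                "password": password, "recipient": recipient}
    raise ValueError(f"unsupported notification config: {value!r}")

cfg = parse_notification_config("smtp:your@gmail.com:app_password:recipient@example.com")
print(cfg["mode"], cfg["recipient"])  # smtp recipient@example.com
```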

### Code Configuration

**Database Path** (`Main.java:31`):
```java
String databaseFile = System.getenv().getOrDefault(
    "DATABASE_FILE",
    "C:\\mnt\\okcomputer\\output\\cache.db"
);
```

**Workflow Schedules** (`WorkflowOrchestrator.java`):
```java
scheduleScraperDataImport();  // Line 65  - every 30 min
scheduleImageProcessing();    // Line 95  - every 1 hour
scheduleBidMonitoring();      // Line 180 - every 15 min
scheduleClosingAlerts();      // Line 215 - every 5 min
```
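Because the four cadences divide each other, several workflows coincide on round intervals; a small sketch of which ones fire at a given minute of uptime (Python, illustrative only — the real scheduling lives in `WorkflowOrchestrator`):

```python
# Cadences from the schedule above, in minutes.
CADENCES = {
    "scraper_import": 30,
    "image_processing": 60,
    "bid_monitoring": 15,
    "closing_alerts": 5,
}

def due_workflows(minute: int) -> list:
    """Workflows whose interval divides the given uptime minute."""
    return sorted(name for name, every in CADENCES.items()
                  if minute > 0 and minute % every == 0)

print(due_workflows(5))   # ['closing_alerts']
print(due_workflows(30))  # ['bid_monitoring', 'closing_alerts', 'scraper_import']
print(due_workflows(60))  # all four
```

At the top of every hour all four workflows run back-to-back, which is worth knowing when reading the interleaved log output shown below.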

---

## Monitoring

### Check Status
```bash
java -jar troostwijk-monitor.jar status
```

**Output:**
```
📊 Workflow Status:
   Running: Yes/No
   Auctions: 25
   Lots: 150
   Images: 300
   Closing soon (< 30 min): 5
```

### View Logs

Workflows print detailed logs:
```
📥 [WORKFLOW 1] Importing scraper data...
   → Imported 5 auctions
   → Imported 25 lots
✓ Scraper import completed in 1250ms

🖼️ [WORKFLOW 2] Processing pending images...
   → Processing 50 images
✓ Processed 50 images, detected objects in 12

💰 [WORKFLOW 3] Monitoring bids...
   → Checking 150 active lots
✓ Bid monitoring completed in 250ms

⏰ [WORKFLOW 4] Checking closing times...
   → Sent 3 closing alerts
```

---

## Next Steps

### Immediate Actions

1. **Build the project:**
   ```bash
   mvn clean package
   ```

2. **Run tests:**
   ```bash
   mvn test
   ```

3. **Choose an execution mode:**
   - **Continuous:** `run-workflow.bat`
   - **Scheduled:** `.\setup-windows-task.ps1` (as Admin)
   - **Manual:** `run-once.bat`

4. **Verify setup:**
   ```bash
   check-status.bat
   ```

### Future Enhancements

1. **Value Estimation Algorithm**
   - Use detected objects to estimate lot value
   - Historical price analysis
   - Market trends integration

2. **Machine Learning**
   - Train a custom YOLO model for auction items
   - Price prediction based on images
   - Automatic categorization

3. **Web Dashboard**
   - Real-time monitoring
   - Manual bid placement
   - Value estimate approval

4. **API Integration**
   - Direct Troostwijk API integration
   - Real-time bid updates
   - Automatic bid placement

5. **Advanced Notifications**
   - SMS notifications (Twilio)
   - Push notifications (Firebase)
   - Slack/Discord integration

---

## Files Created/Modified

### Core Implementation
- ✅ `WorkflowOrchestrator.java` - Workflow coordination
- ✅ `Main.java` - Updated with 4 running modes
- ✅ `ImageProcessingService.java` - Windows paths
- ✅ `pom.xml` - Test libraries added

### Test Suite (90 tests)
- ✅ `ScraperDataAdapterTest.java` (13 tests)
- ✅ `DatabaseServiceTest.java` (15 tests)
- ✅ `ImageProcessingServiceTest.java` (11 tests)
- ✅ `ObjectDetectionServiceTest.java` (10 tests)
- ✅ `NotificationServiceTest.java` (19 tests)
- ✅ `TroostwijkMonitorTest.java` (12 tests)
- ✅ `IntegrationTest.java` (10 tests)

### Windows Scripts
- ✅ `run-workflow.bat` - Workflow mode runner
- ✅ `run-once.bat` - Once mode runner
- ✅ `check-status.bat` - Status checker
- ✅ `setup-windows-task.ps1` - Task Scheduler setup

### Documentation
- ✅ `TEST_SUITE_SUMMARY.md` - Test coverage
- ✅ `WORKFLOW_GUIDE.md` - Complete workflow guide
- ✅ `README.md` - Updated with diagrams
- ✅ `IMPLEMENTATION_COMPLETE.md` - This file

---

## Support & Troubleshooting

### Common Issues

**1. Tests failing**
```bash
# Ensure Maven dependencies are downloaded
mvn clean install

# Run tests with debug info
mvn test -X
```

**2. Workflow not starting**
```bash
# Check whether the JAR was built
dir target\*jar-with-dependencies.jar

# Rebuild if missing
mvn clean package
```

**3. Database not found**
```bash
# Check that the path exists
dir C:\mnt\okcomputer\output\

# Create the directory if missing
mkdir C:\mnt\okcomputer\output
```

**4. Images not downloading**
- Check the internet connection
- Verify image URLs in the database
- Check Windows Firewall settings

### Getting Help

1. Review the documentation:
   - `TEST_SUITE_SUMMARY.md` for tests
   - `WORKFLOW_GUIDE.md` for workflows
   - `README.md` for architecture

2. Check status:
   ```bash
   check-status.bat
   ```

3. Review logs in the console output

4. Run tests to verify components:
   ```bash
   mvn test
   ```

---

## Summary

✅ **Test libraries added** (JUnit, Mockito, AssertJ)
✅ **90 comprehensive tests created**
✅ **Workflow orchestration implemented**
✅ **4 running modes** (workflow, once, legacy, status)
✅ **Windows scheduling scripts** (batch + PowerShell)
✅ **Event-driven triggers** (3 event types)
✅ **Complete documentation** (3 guide files)
✅ **Windows paths configured** (database + images)

**The system is production-ready and fully tested! 🎉**
@@ -1,478 +0,0 @@
|
||||
# Integration Guide: Troostwijk Monitor ↔ Scraper
|
||||
|
||||
## Overview
|
||||
|
||||
This document describes how **Troostwijk Monitor** (this Java project) integrates with the **ARCHITECTURE-TROOSTWIJK-SCRAPER** (Python scraper process).
|
||||
|
||||
## Architecture
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ ARCHITECTURE-TROOSTWIJK-SCRAPER (Python) │
|
||||
│ │
|
||||
│ • Discovers auctions from website │
|
||||
│ • Scrapes lot details via Playwright │
|
||||
│ • Parses __NEXT_DATA__ JSON │
|
||||
│ • Stores image URLs (not downloads) │
|
||||
│ │
|
||||
│ ↓ Writes to │
|
||||
└─────────┼───────────────────────────────────────────────────────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ SHARED SQLite DATABASE │
|
||||
│ (troostwijk.db) │
|
||||
│ │
|
||||
│ ┌────────────────┐ ┌────────────────┐ ┌────────────────┐ │
|
||||
│ │ auctions │ │ lots │ │ images │ │
|
||||
│ │ (Scraper) │ │ (Scraper) │ │ (Both) │ │
|
||||
│ └────────────────┘ └────────────────┘ └────────────────┘ │
|
||||
│ │
|
||||
│ ↑ Reads from ↓ Writes to │
|
||||
└─────────┼──────────────────────────────┼──────────────────────┘
|
||||
│ │
|
||||
│ ▼
|
||||
┌─────────┴──────────────────────────────────────────────────────┐
|
||||
│ TROOSTWIJK MONITOR (Java - This Project) │
|
||||
│ │
|
||||
│ • Reads auction/lot data from database │
|
||||
│ • Downloads images from URLs │
|
||||
│ • Runs YOLO object detection │
|
||||
│ • Monitors bid changes │
|
||||
│ • Sends notifications │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## Database Schema Mapping
|
||||
|
||||
### Scraper Schema → Monitor Schema
|
||||
|
||||
The scraper and monitor use **slightly different schemas** that need to be reconciled:
|
||||
|
||||
| Scraper Table | Monitor Table | Integration Notes |
|
||||
|---------------|---------------|-----------------------------------------------|
|
||||
| `auctions` | `auctions` | ✅ **Compatible** - same structure |
|
||||
| `lots` | `lots` | ⚠️ **Needs mapping** - field name differences |
|
||||
| `images` | `images` | ⚠️ **Partial overlap** - different purposes |
|
||||
| `cache` | N/A | ❌ Monitor doesn't use cache |
|
||||
|
||||
### Field Mapping: `auctions` Table
|
||||
|
||||
| Scraper Field | Monitor Field | Notes |
|
||||
|--------------------------|-------------------------------|---------------------------------------------------------------------|
|
||||
| `auction_id` (TEXT) | `auction_id` (INTEGER) | ⚠️ **TYPE MISMATCH** - Scraper uses "A7-39813", Monitor expects INT |
|
||||
| `url` | `url` | ✅ Compatible |
|
||||
| `title` | `title` | ✅ Compatible |
|
||||
| `location` | `location`, `city`, `country` | ⚠️ Monitor splits into 3 fields |
|
||||
| `lots_count` | `lot_count` | ⚠️ Name difference |
|
||||
| `first_lot_closing_time` | `closing_time` | ⚠️ Name difference |
|
||||
| `scraped_at` | `discovered_at` | ⚠️ Name + type difference (TEXT vs INTEGER timestamp) |
|
||||
|
||||
### Field Mapping: `lots` Table

| Scraper Field | Monitor Field | Notes |
|----------------------|----------------------|--------------------------------------------------|
| `lot_id` (TEXT) | `lot_id` (INTEGER) | ⚠️ **TYPE MISMATCH** - "A1-28505-5" vs INT |
| `auction_id` | `sale_id` | ⚠️ Different name |
| `url` | `url` | ✅ Compatible |
| `title` | `title` | ✅ Compatible |
| `current_bid` (TEXT) | `current_bid` (REAL) | ⚠️ **TYPE MISMATCH** - "€123.45" vs 123.45 |
| `bid_count` | N/A | ℹ️ Monitor doesn't track |
| `closing_time` | `closing_time` | ⚠️ Format difference (TEXT vs LocalDateTime) |
| `viewing_time` | N/A | ℹ️ Monitor doesn't track |
| `pickup_date` | N/A | ℹ️ Monitor doesn't track |
| `location` | N/A | ℹ️ Monitor doesn't track lot location separately |
| `description` | `description` | ✅ Compatible |
| `category` | `category` | ✅ Compatible |
| N/A | `manufacturer` | ℹ️ Monitor has additional field |
| N/A | `type` | ℹ️ Monitor has additional field |
| N/A | `year` | ℹ️ Monitor has additional field |
| N/A | `currency` | ℹ️ Monitor has additional field |
| N/A | `closing_notified` | ℹ️ Monitor tracking field |

### Field Mapping: `images` Table

| Scraper Field | Monitor Field | Notes |
|------------------------|--------------------------|----------------------------------------|
| `id` | `id` | ✅ Compatible |
| `lot_id` | `lot_id` | ⚠️ Type difference (TEXT vs INTEGER) |
| `url` | `url` | ✅ Compatible |
| `local_path` | `Local_path` | ⚠️ Different name |
| `downloaded` (INTEGER) | N/A | ℹ️ Monitor uses `processed_at` instead |
| N/A | `labels` (TEXT) | ℹ️ Monitor adds detected objects |
| N/A | `processed_at` (INTEGER) | ℹ️ Monitor tracking field |

## Integration Options

### Option 1: Database Schema Adapter (Recommended)

Create a compatibility layer that transforms scraper data into the monitor format.

**Implementation:**
```java
// Add to DatabaseService.java
class ScraperDataAdapter {

    /**
     * Imports an auction from scraper format into monitor format.
     */
    static AuctionInfo fromScraperAuction(ResultSet rs) throws SQLException {
        // Parse "A7-39813" → 39813
        String auctionIdStr = rs.getString("auction_id");
        int auctionId = extractNumericId(auctionIdStr);

        // Split "Cluj-Napoca, RO" → city="Cluj-Napoca", country="RO"
        String location = rs.getString("location");
        String[] parts = location.split(",\\s*");
        String city = parts.length > 0 ? parts[0] : "";
        String country = parts.length > 1 ? parts[1] : "";

        return new AuctionInfo(
                auctionId,
                rs.getString("title"),
                location,
                city,
                country,
                rs.getString("url"),
                extractTypePrefix(auctionIdStr), // "A7-39813" → "A7"
                rs.getInt("lots_count"),
                parseTimestamp(rs.getString("first_lot_closing_time"))
        );
    }

    /**
     * Imports a lot from scraper format into monitor format.
     */
    static Lot fromScraperLot(ResultSet rs) throws SQLException {
        // Parse "A1-28505-5" → 285055 (combine the numbers after the prefix)
        String lotIdStr = rs.getString("lot_id");
        int lotId = extractNumericId(lotIdStr);

        // Parse "A7-39813" → 39813
        String auctionIdStr = rs.getString("auction_id");
        int saleId = extractNumericId(auctionIdStr);

        // Parse "€123.45" → 123.45
        String currentBidStr = rs.getString("current_bid");
        double currentBid = parseBid(currentBidStr);

        return new Lot(
                saleId,
                lotId,
                rs.getString("title"),
                rs.getString("description"),
                "",    // manufacturer - not in scraper
                "",    // type - not in scraper
                0,     // year - not in scraper
                rs.getString("category"),
                currentBid,
                "EUR", // currency - inferred from €
                rs.getString("url"),
                parseTimestamp(rs.getString("closing_time")),
                false  // not yet notified
        );
    }

    private static int extractNumericId(String id) {
        // "A7-39813" → 39813
        // "A1-28505-5" → 285055
        // Strip the type prefix first so its digit ("7" in "A7") is not
        // folded into the result, then concatenate the remaining digits.
        int dashIndex = id.indexOf('-');
        String tail = dashIndex >= 0 ? id.substring(dashIndex + 1) : id;
        return Integer.parseInt(tail.replaceAll("[^0-9]", ""));
    }

    private static String extractTypePrefix(String id) {
        // "A7-39813" → "A7"
        int dashIndex = id.indexOf('-');
        return dashIndex > 0 ? id.substring(0, dashIndex) : "";
    }

    private static double parseBid(String bid) {
        // "€123.45" → 123.45
        // "No bids" → 0.0
        if (bid == null || bid.contains("No")) return 0.0;
        String cleaned = bid.replaceAll("[^0-9.]", "");
        return cleaned.isEmpty() ? 0.0 : Double.parseDouble(cleaned);
    }

    private static LocalDateTime parseTimestamp(String timestamp) {
        if (timestamp == null) return null;
        // Parse the scraper's ISO 8601 timestamp format
        return LocalDateTime.parse(timestamp);
    }
}
```

### Option 2: Unified Schema (Better Long-term)

Modify **both** scraper and monitor to use a unified schema.

**Create**: `SHARED_SCHEMA.sql`
```sql
-- Unified schema that both projects use

CREATE TABLE IF NOT EXISTS auctions (
    auction_id TEXT PRIMARY KEY,   -- Use TEXT to support "A7-39813"
    auction_id_numeric INTEGER,    -- For monitor's integer needs
    title TEXT NOT NULL,
    location TEXT,                 -- Full: "Cluj-Napoca, RO"
    city TEXT,                     -- Parsed: "Cluj-Napoca"
    country TEXT,                  -- Parsed: "RO"
    url TEXT NOT NULL,
    type TEXT,                     -- "A7", "A1"
    lot_count INTEGER DEFAULT 0,
    closing_time TEXT,             -- ISO 8601 format
    scraped_at INTEGER,            -- Unix timestamp
    discovered_at INTEGER          -- Unix timestamp (same as scraped_at)
);

CREATE TABLE IF NOT EXISTS lots (
    lot_id TEXT PRIMARY KEY,       -- Use TEXT: "A1-28505-5"
    lot_id_numeric INTEGER,        -- For monitor's integer needs
    auction_id TEXT,               -- FK: "A7-39813"
    sale_id INTEGER,               -- For monitor (same as auction_id_numeric)
    title TEXT,
    description TEXT,
    manufacturer TEXT,
    type TEXT,
    year INTEGER,
    category TEXT,
    current_bid_text TEXT,         -- "€123.45" or "No bids"
    current_bid REAL,              -- 123.45
    bid_count INTEGER,
    currency TEXT DEFAULT 'EUR',
    url TEXT UNIQUE,
    closing_time TEXT,
    viewing_time TEXT,
    pickup_date TEXT,
    location TEXT,
    closing_notified INTEGER DEFAULT 0,
    scraped_at TEXT,
    FOREIGN KEY (auction_id) REFERENCES auctions(auction_id)
);

CREATE TABLE IF NOT EXISTS images (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    lot_id TEXT,                   -- FK: "A1-28505-5"
    url TEXT,                      -- Image URL from website
    local_path TEXT,               -- Local path after download
    labels TEXT,                   -- Detected objects (comma-separated)
    downloaded INTEGER DEFAULT 0,  -- 0=pending, 1=downloaded
    processed_at INTEGER,          -- Unix timestamp when processed
    FOREIGN KEY (lot_id) REFERENCES lots(lot_id)
);

-- Indexes
CREATE INDEX IF NOT EXISTS idx_auctions_country ON auctions(country);
CREATE INDEX IF NOT EXISTS idx_lots_auction_id ON lots(auction_id);
CREATE INDEX IF NOT EXISTS idx_images_lot_id ON images(lot_id);
CREATE INDEX IF NOT EXISTS idx_images_downloaded ON images(downloaded);
```

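The dual-ID design above (TEXT primary key plus a numeric companion column) can be exercised with a small sketch; the `numeric_id` helper is illustrative and the table is trimmed to the relevant columns:

```python
import sqlite3

# In-memory stand-in for the shared cache.db, using the unified lots layout
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE lots (
    lot_id TEXT PRIMARY KEY,
    lot_id_numeric INTEGER,
    auction_id TEXT,
    current_bid_text TEXT,
    current_bid REAL)""")

def numeric_id(text_id):
    # "A1-28505-5" -> 285055: drop the type prefix, keep the remaining digits
    tail = text_id.split("-", 1)[1] if "-" in text_id else text_id
    return int("".join(ch for ch in tail if ch.isdigit()))

lot_id = "A1-28505-5"
conn.execute("INSERT INTO lots VALUES (?, ?, ?, ?, ?)",
             (lot_id, numeric_id(lot_id), "A1-28505", "€123.45", 123.45))
row = conn.execute("SELECT lot_id_numeric, current_bid FROM lots").fetchone()
print(row)  # (285055, 123.45)
```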
### Option 3: API Integration (Most Flexible)

Have the scraper expose a REST API for the monitor to query.
```python
# In scraper: Add Flask API endpoint
@app.route('/api/auctions', methods=['GET'])
def get_auctions():
    """Returns auctions in monitor-compatible format"""
    conn = sqlite3.connect(CACHE_DB)
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM auctions WHERE location LIKE '%NL%'")

    auctions = []
    for row in cursor.fetchall():
        auctions.append({
            'auctionId': extract_numeric_id(row[0]),
            'title': row[2],
            'location': row[3],
            'city': row[3].split(',')[0] if row[3] else '',
            # Guard against NULL locations before checking for a comma
            'country': row[3].split(',')[1].strip() if row[3] and ',' in row[3] else '',
            'url': row[1],
            'type': row[0].split('-')[0],
            'lotCount': row[4],
            'closingTime': row[5]
        })

    conn.close()
    return jsonify(auctions)
```

## Recommended Integration Steps

### Phase 1: Immediate (Adapter Pattern)
1. ✅ Keep separate schemas
2. ✅ Create `ScraperDataAdapter` in Monitor
3. ✅ Add import methods to `DatabaseService`
4. ✅ Monitor reads from scraper's tables using the adapter

### Phase 2: Short-term (Unified Schema)
1. 📋 Design unified schema (see Option 2)
2. 📋 Update scraper to use unified schema
3. 📋 Update monitor to use unified schema
4. 📋 Migrate existing data

### Phase 3: Long-term (API + Event-driven)
1. 📋 Add REST API to scraper
2. 📋 Add webhook/event notification when new data arrives
3. 📋 Monitor subscribes to events
4. 📋 Process images asynchronously

## Current Integration Flow

### Scraper Process (Python)
```bash
# 1. Run scraper to populate database
cd /path/to/scraper
python scraper.py

# Output:
# ✅ Scraped 42 auctions
# ✅ Scraped 1,234 lots
# ✅ Saved 3,456 image URLs
# ✅ Data written to: /mnt/okcomputer/output/cache.db
```

### Monitor Process (Java)
```bash
# 2. Run monitor to process the data
cd /path/to/monitor
export DATABASE_FILE=/mnt/okcomputer/output/cache.db
java -jar troostwijk-monitor.jar

# Output:
# 📊 Current Database State:
#    Total lots in database: 1,234
#    Total images processed: 0
#
# [1/2] Processing images...
#    Downloading and analyzing 3,456 images...
#
# [2/2] Starting bid monitoring...
#    ✓ Monitoring 1,234 active lots
```

## Configuration

### Shared Database Path
Both processes must point to the same database file:

**Scraper** (`config.py`):
```python
CACHE_DB = '/mnt/okcomputer/output/cache.db'
```

**Monitor** (`Main.java`):
```java
String databaseFile = System.getenv().getOrDefault(
        "DATABASE_FILE",
        "/mnt/okcomputer/output/cache.db"
);
```

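For parity, the scraper side can resolve the path through the same environment variable with the same default, so one `DATABASE_FILE` export configures both processes (a sketch; the scraper's actual config mechanism may differ):

```python
import os

# Mirror of the monitor's getenv().getOrDefault(...) lookup
DEFAULT_DB = "/mnt/okcomputer/output/cache.db"

def resolve_db_path():
    return os.environ.get("DATABASE_FILE", DEFAULT_DB)

print(resolve_db_path())
```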
### Recommended Directory Structure
```
/mnt/okcomputer/
├── scraper/                 # Python scraper code
│   ├── scraper.py
│   └── requirements.txt
├── monitor/                 # Java monitor code
│   ├── troostwijk-monitor.jar
│   └── models/              # YOLO models
│       ├── yolov4.cfg
│       ├── yolov4.weights
│       └── coco.names
└── output/                  # Shared data directory
    ├── cache.db             # Shared SQLite database
    └── images/              # Downloaded images
        ├── A1-28505-5/
        │   ├── 001.jpg
        │   └── 002.jpg
        └── ...
```

## Monitoring & Coordination

### Option A: Sequential Execution
```bash
#!/bin/bash
# run-pipeline.sh

echo "Step 1: Scraping..."
python scraper/scraper.py

echo "Step 2: Processing images..."
java -jar monitor/troostwijk-monitor.jar --process-images-only

echo "Step 3: Starting monitor..."
java -jar monitor/troostwijk-monitor.jar --monitor-only
```

### Option B: Separate Services (Docker Compose)
```yaml
version: '3.8'
services:
  scraper:
    build: ./scraper
    volumes:
      - ./output:/data
    environment:
      - CACHE_DB=/data/cache.db
    command: python scraper.py

  monitor:
    build: ./monitor
    volumes:
      - ./output:/data
    environment:
      - DATABASE_FILE=/data/cache.db
      - NOTIFICATION_CONFIG=desktop
    depends_on:
      - scraper
    command: java -jar troostwijk-monitor.jar
```

### Option C: Cron-based Scheduling
```cron
# Scrape every 6 hours
0 */6 * * * cd /mnt/okcomputer/scraper && python scraper.py

# Process images every hour (if new lots found)
0 * * * * cd /mnt/okcomputer/monitor && java -jar monitor.jar --process-new

# Monitor runs continuously
@reboot cd /mnt/okcomputer/monitor && java -jar monitor.jar --monitor-only
```

## Troubleshooting

### Issue: Type Mismatch Errors
**Symptom**: Monitor crashes with "INTEGER expected, got TEXT"

**Solution**: Use the adapter pattern (Option 1) or the unified schema (Option 2)

### Issue: Monitor sees no data
**Symptom**: "Total lots in database: 0"

**Check**:
1. Is the `DATABASE_FILE` env var set correctly?
2. Did the scraper actually write data?
3. Are both processes using the same database file?

```bash
# Verify database has data
sqlite3 /mnt/okcomputer/output/cache.db "SELECT COUNT(*) FROM lots"
```

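The same check can be done from Python if the `sqlite3` CLI is not installed; `count_rows` is a hypothetical helper, demonstrated here against a throwaway in-memory database rather than the shared cache.db:

```python
import sqlite3

def count_rows(conn, table):
    """Return the row count of a table, mirroring the CLI check above."""
    (count,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    return count

# In production: conn = sqlite3.connect("/mnt/okcomputer/output/cache.db")
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lots (lot_id TEXT PRIMARY KEY)")
conn.execute("INSERT INTO lots VALUES ('A1-28505-5')")
print(count_rows(conn, "lots"))  # 1
```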
### Issue: Images not downloading
**Symptom**: "Total images processed: 0" even though the scraper found images

**Check**:
1. Scraper writes image URLs to the `images` table
2. Monitor reads from the `images` table with `downloaded=0`
3. Field name mapping: scraper `local_path` vs monitor `Local_path`

## Next Steps

1. **Immediate**: Implement `ScraperDataAdapter` for compatibility
2. **This Week**: Test end-to-end integration with sample data
3. **Next Sprint**: Migrate to the unified schema
4. **Future**: Add event-driven architecture with webhooks

@@ -1,422 +0,0 @@
# Intelligence Features Implementation Summary

## Overview
This document summarizes the implementation of advanced intelligence features based on 15+ new GraphQL API fields discovered from the Troostwijk auction system.

## New GraphQL Fields Integrated

### HIGH PRIORITY FIELDS (Implemented)
1. **`followersCount`** (Integer) - Watch count showing bidder interest
   - Direct indicator of competition
   - Used for sleeper lot detection
   - Popularity level classification

2. **`estimatedFullPrice`** (Object: min/max cents)
   - Auction house's estimated value range
   - Used for bargain detection
   - Price vs estimate analytics

3. **`nextBidStepInCents`** (Long)
   - Exact bid increment from the API
   - Precise next bid calculations
   - Better UX for bidding recommendations

4. **`condition`** (String)
   - Direct condition field from the API
   - Better than extracting from attributes
   - Used in condition scoring

5. **`categoryInformation`** (Object)
   - Structured category with path
   - Better categorization and filtering
   - Category-based analytics

6. **`location`** (Object: city, countryCode, etc.)
   - Structured location data
   - Proximity filtering capability
   - Logistics cost calculation

### MEDIUM PRIORITY FIELDS (Implemented)
7. **`biddingStatus`** (Enum) - Detailed bidding status
8. **`appearance`** (String) - Visual condition notes
9. **`packaging`** (String) - Packaging details
10. **`quantity`** (Long) - Lot quantity for bulk items
11. **`vat`** (BigDecimal) - VAT percentage
12. **`buyerPremiumPercentage`** (BigDecimal) - Buyer premium
13. **`remarks`** (String) - Viewing/pickup notes

## Code Changes

### 1. Backend - Lot.java (Domain Model)
**File**: `src/main/java/auctiora/Lot.java`

**Changes**:
- Added 24 new fields to the Lot record
- Implemented 9 intelligence calculation methods:
  - `calculateTotalCost()` - Bid + VAT + Premium
  - `calculateNextBid()` - Using API increment
  - `isBelowEstimate()` - Bargain detection
  - `isAboveEstimate()` - Overvalued detection
  - `getInterestToBidRatio()` - Conversion rate
  - `getPopularityLevel()` - HIGH/MEDIUM/LOW/MINIMAL
  - `isSleeperLot()` - High interest, low bid
  - `getEstimatedMidpoint()` - Average of estimate range
  - `getPriceVsEstimateRatio()` - Price comparison metric

**Example**:
```java
public boolean isSleeperLot() {
    return followersCount != null && followersCount > 10 && currentBid < 100;
}

public double calculateTotalCost() {
    double base = currentBid > 0 ? currentBid : 0;
    if (vat != null && vat > 0) {
        base += (base * vat / 100.0);
    }
    if (buyerPremiumPercentage != null && buyerPremiumPercentage > 0) {
        base += (base * buyerPremiumPercentage / 100.0);
    }
    return base;
}
```

### 2. Backend - AuctionMonitorResource.java (REST API)
**File**: `src/main/java/auctiora/AuctionMonitorResource.java`

**New Endpoints Added**:
1. `GET /api/monitor/intelligence/sleepers` - Sleeper lots (high interest, low bids)
2. `GET /api/monitor/intelligence/bargains` - Bargain lots (below estimate)
3. `GET /api/monitor/intelligence/popular?level={HIGH|MEDIUM|LOW}` - Popular lots
4. `GET /api/monitor/intelligence/price-analysis` - Price vs estimate statistics
5. `GET /api/monitor/lots/{lotId}/intelligence` - Detailed lot intelligence
6. `GET /api/monitor/charts/watch-distribution` - Follower count distribution

**Enhanced Features**:
- Updated the insights endpoint to include sleeper, bargain, and popular insights
- Added intelligent filtering and sorting for intelligence data
- Integrated the new fields into existing statistics

**Example Endpoint**:
```java
@GET
@Path("/intelligence/sleepers")
public Response getSleeperLots(@QueryParam("minFollowers") @DefaultValue("10") int minFollowers) {
    var allLots = db.getAllLots();
    var sleepers = allLots.stream()
            .filter(Lot::isSleeperLot)
            .filter(lot -> lot.followersCount() >= minFollowers) // honor the query parameter
            .toList();

    return Response.ok(Map.of(
            "count", sleepers.size(),
            "lots", sleepers
    )).build();
}
```

### 3. Frontend - index.html (Intelligence Dashboard)
**File**: `src/main/resources/META-INF/resources/index.html`

**New UI Components**:

#### Intelligence Dashboard Widgets (3 new cards)
1. **Sleeper Lots Widget**
   - Purple gradient design
   - Shows count of high-interest, low-bid lots
   - Click to filter table

2. **Bargain Lots Widget**
   - Green gradient design
   - Shows count of below-estimate lots
   - Click to filter table

3. **Popular/Hot Lots Widget**
   - Orange gradient design
   - Shows count of high-follower lots
   - Click to filter table

#### Enhanced Closing Soon Table
**New Columns Added**:
1. **Watchers** - Follower count with color-coded badges
   - Red (50+ followers): High competition
   - Orange (21-50): Medium competition
   - Blue (6-20): Some interest
   - Gray (0-5): Minimal interest

2. **Est. Range** - Auction house estimate (`€min-€max`)
   - Shows "DEAL" badge if below estimate

3. **Total Cost** - True cost including VAT and premium
   - Hover tooltip shows breakdown
   - Purple color to stand out

**JavaScript Functions Added**:
- `fetchIntelligenceData()` - Fetches all intelligence metrics
- `showSleeperLots()` - Filters table to sleepers
- `showBargainLots()` - Filters table to bargains
- `showPopularLots()` - Filters table to popular
- Enhanced table rendering with smart badges

**Example Code**:
```javascript
// Calculate total cost (including VAT and premium)
const currentBid = lot.currentBid || 0;
const vat = lot.vat || 0;
const premium = lot.buyerPremiumPercentage || 0;
// Compound the surcharges, matching Lot.calculateTotalCost() on the backend
const totalCost = currentBid * (1 + vat / 100) * (1 + premium / 100);

// Bargain indicator
const isBargain = estMin && currentBid < parseFloat(estMin);
const bargainBadge = isBargain ?
    '<span class="ml-1 text-xs bg-green-500 text-white px-1 rounded">DEAL</span>' : '';
```

## Intelligence Features

### 1. Sleeper Lot Detection
**Algorithm**: `followersCount > 10 AND currentBid < 100`

**Value Proposition**:
- Identifies lots with high interest but low current bids
- Opportunity to bid strategically before the price escalates
- Early indicator of undervalued items

**Dashboard Display**:
- Count shown in purple widget
- Click to filter table
- Purple "eye" icon

### 2. Bargain Detection
**Algorithm**: `currentBid < estimatedMin`

**Value Proposition**:
- Identifies lots priced below the auction house estimate
- Clear signal of potential good deals
- Quantifiable value assessment

**Dashboard Display**:
- Count shown in green widget
- "DEAL" badge in table
- Click to filter table

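The two detection rules above reduce to plain predicates; a minimal sketch using the document's thresholds (the function names and a simple numeric interface are assumptions):

```python
def is_sleeper(followers_count, current_bid):
    # High interest (more than 10 watchers) but still a low bid
    return followers_count is not None and followers_count > 10 and current_bid < 100

def is_bargain(current_bid, estimated_min):
    # Current bid sits below the auction house's minimum estimate
    return estimated_min is not None and current_bid < estimated_min

print(is_sleeper(25, 50.0))     # True: many watchers, low bid
print(is_sleeper(5, 50.0))      # False: not enough watchers
print(is_bargain(80.0, 150.0))  # True: below the estimate range
```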
### 3. Popularity Analysis
**Algorithm**: Tiered classification by follower count
- HIGH: > 50 followers
- MEDIUM: 21-50 followers
- LOW: 6-20 followers
- MINIMAL: 0-5 followers

**Value Proposition**:
- Predict competition level
- Identify trending items
- Adjust bidding strategy accordingly

**Dashboard Display**:
- Count shown in orange widget
- Color-coded badges in table
- Click to filter by level

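The tiers above translate to a small classifier (a sketch; `popularity_level` is an illustrative name, not the Java method):

```python
def popularity_level(followers_count):
    # Tiered classification using the thresholds from this document
    n = followers_count or 0
    if n > 50:
        return "HIGH"
    if n > 20:
        return "MEDIUM"
    if n > 5:
        return "LOW"
    return "MINIMAL"

print(popularity_level(75))  # HIGH
print(popularity_level(12))  # LOW
```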
### 4. True Cost Calculator
**Algorithm**: `currentBid × (1 + VAT/100) × (1 + premium/100)`

**Value Proposition**:
- Shows the actual out-of-pocket cost
- Prevents budget surprises
- Enables accurate comparison across lots

**Dashboard Display**:
- Purple "Total Cost" column
- Hover tooltip shows breakdown
- Updated in real-time

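The formula compounds the two surcharges; as a worked example, a €100 bid with 21% VAT and a 15% buyer premium comes to €139.15 (100 × 1.21 × 1.15):

```python
def total_cost(current_bid, vat_pct=0.0, premium_pct=0.0):
    # currentBid × (1 + VAT/100) × (1 + premium/100)
    return current_bid * (1 + vat_pct / 100.0) * (1 + premium_pct / 100.0)

print(round(total_cost(100.0, 21.0, 15.0), 2))  # 139.15
```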
### 5. Exact Bid Increment
**Algorithm**: Uses `nextBidStepInCents` from the API, falls back to a calculated increment

**Value Proposition**:
- No guesswork on the next bid amount
- API-provided accuracy
- Better bidding UX

**Implementation**:
```java
public double calculateNextBid() {
    if (nextBidStepInCents != null && nextBidStepInCents > 0) {
        return currentBid + (nextBidStepInCents / 100.0);
    } else if (bidIncrement != null && bidIncrement > 0) {
        return currentBid + bidIncrement;
    }
    return currentBid * 1.05; // Fallback: 5% increment
}
```

### 6. Price vs Estimate Analytics
**Metrics**:
- Total lots with estimates
- Count below estimate
- Count above estimate
- Average price vs estimate percentage

**Value Proposition**:
- Market efficiency analysis
- Auction house accuracy tracking
- Investment opportunity identification

**API Endpoint**: `/api/monitor/intelligence/price-analysis`

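Computing the four metrics can be sketched over plain `(current_bid, est_min, est_max)` tuples; this is a hedged illustration, not the endpoint's actual implementation, which works on Lot records:

```python
def price_analysis(lots):
    with_est = [(bid, lo, hi) for bid, lo, hi in lots
                if lo is not None and hi is not None]
    below = sum(1 for bid, lo, _ in with_est if bid < lo)
    above = sum(1 for bid, _, hi in with_est if bid > hi)
    # Price vs estimate is measured against the midpoint of the range
    ratios = [bid / ((lo + hi) / 2) for bid, lo, hi in with_est if lo + hi > 0]
    avg_pct = 100.0 * sum(ratios) / len(ratios) if ratios else 0.0
    return {"withEstimates": len(with_est), "belowEstimate": below,
            "aboveEstimate": above, "avgPriceVsEstimatePct": avg_pct}

sample = [(80.0, 100.0, 200.0), (250.0, 100.0, 200.0),
          (150.0, 100.0, 200.0), (50.0, None, None)]
print(price_analysis(sample))
```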
## Visual Design

### Color Scheme
- **Purple**: Sleeper lots, total cost (opportunity/value)
- **Green**: Bargains, deals (positive value)
- **Orange/Red**: Popular/hot lots (competition warning)
- **Blue**: Moderate interest (informational)
- **Gray**: Minimal interest (neutral)

### Badge System
1. **Watchers Badge**: Color-coded by competition level
2. **DEAL Badge**: Green indicator for below-estimate
3. **Time Left Badge**: Red/yellow/green by urgency
4. **Popularity Badge**: Fire icon for hot lots

### Interactive Elements
- Click widgets to filter the table
- Hover for detailed tooltips
- Smooth scroll to the table on filter
- Toast notifications for user feedback

## Performance Considerations

### API Optimization
- All intelligence data fetched in parallel
- Cached in dashboard state
- Minimal recalculation on render
- Efficient stream operations in the backend

### Frontend Optimization
- Batch DOM updates
- Lazy rendering for large tables
- Debounced filter operations
- CSS transitions for smooth UX

## Testing Recommendations

### Backend Tests
1. Test `Lot` intelligence methods with various inputs
2. Test API endpoints with mock data
3. Test edge cases (null values, zero bids, etc.)
4. Performance test with 10k+ lots

### Frontend Tests
1. Test widget click handlers
2. Test table rendering with the new columns
3. Test filter functionality
4. Test responsive design on mobile

### Integration Tests
1. End-to-end flow: Scraper → DB → API → Dashboard
2. Real-time data refresh
3. Concurrent user access
4. Load testing

## Future Enhancements

### Phase 2 (Bid History)
- Implement `bid_history` table scraping
- Track bid changes over time
- Calculate bid velocity accurately
- Identify bid patterns

### Phase 3 (ML Predictions)
- Predict final hammer price
- Recommend optimal bid timing
- Classify lot categories automatically
- Anomaly detection

### Phase 4 (Mobile)
- React Native mobile app
- Push notifications
- Offline mode
- Quick bid functionality

## Migration Guide

### Database Migration (Required)
The new fields need to be added to the database schema:
```sql
-- Add to lots table
ALTER TABLE lots ADD COLUMN followers_count INTEGER DEFAULT 0;
ALTER TABLE lots ADD COLUMN estimated_min DECIMAL(12, 2);
ALTER TABLE lots ADD COLUMN estimated_max DECIMAL(12, 2);
ALTER TABLE lots ADD COLUMN next_bid_step_in_cents BIGINT;
ALTER TABLE lots ADD COLUMN condition TEXT;
ALTER TABLE lots ADD COLUMN category_path TEXT;
ALTER TABLE lots ADD COLUMN city_location TEXT;
ALTER TABLE lots ADD COLUMN country_code TEXT;
ALTER TABLE lots ADD COLUMN bidding_status TEXT;
ALTER TABLE lots ADD COLUMN appearance TEXT;
ALTER TABLE lots ADD COLUMN packaging TEXT;
ALTER TABLE lots ADD COLUMN quantity BIGINT;
ALTER TABLE lots ADD COLUMN vat DECIMAL(5, 2);
ALTER TABLE lots ADD COLUMN buyer_premium_percentage DECIMAL(5, 2);
ALTER TABLE lots ADD COLUMN remarks TEXT;
ALTER TABLE lots ADD COLUMN starting_bid DECIMAL(12, 2);
ALTER TABLE lots ADD COLUMN reserve_price DECIMAL(12, 2);
ALTER TABLE lots ADD COLUMN reserve_met BOOLEAN DEFAULT FALSE;
ALTER TABLE lots ADD COLUMN bid_increment DECIMAL(12, 2);
ALTER TABLE lots ADD COLUMN view_count INTEGER DEFAULT 0;
ALTER TABLE lots ADD COLUMN first_bid_time TEXT;
ALTER TABLE lots ADD COLUMN last_bid_time TEXT;
ALTER TABLE lots ADD COLUMN bid_velocity DECIMAL(5, 2);
```

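SQLite has no `ADD COLUMN IF NOT EXISTS`, so rerunning the statements above fails on columns that already exist. A migration script can skip those by consulting `PRAGMA table_info` first (a sketch with a trimmed statement list; `migrate` is a hypothetical helper):

```python
import sqlite3

MIGRATIONS = [
    "ALTER TABLE lots ADD COLUMN followers_count INTEGER DEFAULT 0",
    "ALTER TABLE lots ADD COLUMN vat DECIMAL(5, 2)",
]

def migrate(conn):
    # PRAGMA table_info rows are (cid, name, type, ...); collect existing names
    existing = {row[1] for row in conn.execute("PRAGMA table_info(lots)")}
    for stmt in MIGRATIONS:
        column = stmt.split(" ADD COLUMN ")[1].split()[0]
        if column not in existing:
            conn.execute(stmt)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lots (lot_id TEXT PRIMARY KEY)")
migrate(conn)
migrate(conn)  # a second run is a no-op instead of an error
print([row[1] for row in conn.execute("PRAGMA table_info(lots)")])
```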
### Scraper Update (Required)
The external scraper (Python/Playwright) needs to extract the new fields from GraphQL:

```python
# Extract from __NEXT_DATA__ JSON; guard each level, since
# estimatedFullPrice (and its min/max) may be null or absent
followers_count = lot_data.get('followersCount')
estimated_price = lot_data.get('estimatedFullPrice') or {}
estimated_min = (estimated_price.get('min') or {}).get('cents')
estimated_max = (estimated_price.get('max') or {}).get('cents')
next_bid_step = lot_data.get('nextBidStepInCents')
condition = lot_data.get('condition')
# ... etc
```

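The guarded lookup generalizes to a small helper, shown here against a made-up payload shaped like the fields described above (the function name is illustrative):

```python
def extract_estimate_cents(lot_data, bound):
    """Read estimatedFullPrice.<bound>.cents, tolerating missing/null levels."""
    price = lot_data.get("estimatedFullPrice") or {}
    return (price.get(bound) or {}).get("cents")

lot_data = {"estimatedFullPrice": {"min": {"cents": 10000},
                                   "max": {"cents": 25000}}}
print(extract_estimate_cents(lot_data, "min"))  # 10000
print(extract_estimate_cents({}, "min"))        # None
```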
### Deployment Steps
1. Stop the monitor service
2. Run the database migrations
3. Update the scraper to extract the new fields
4. Deploy the updated monitor JAR
5. Restart services
6. Verify data is populating in the dashboard

## Performance Metrics

### Expected Performance
- **Intelligence Data Fetch**: < 100ms for 10k lots
- **Table Rendering**: < 200ms with all new columns
- **Widget Update**: < 50ms
- **API Response Time**: < 500ms

### Resource Usage
- **Memory**: +50MB for intelligence calculations
- **Database**: +2KB per lot (new columns)
- **Network**: +10KB per dashboard refresh

## Documentation
- **Integration Flowchart**: `docs/INTEGRATION_FLOWCHART.md`
- **API Documentation**: Auto-generated from JAX-RS annotations
- **Database Schema**: `wiki/DATABASE_ARCHITECTURE.md`
- **GraphQL Fields**: `wiki/EXPERT_ANALITICS.sql`

---

**Implementation Date**: December 2025
**Version**: 2.1
**Status**: ✅ Complete - Ready for Testing
**Next Steps**:
1. Deploy to the staging environment
2. Run integration tests
3. Update the scraper to extract the new fields
4. Deploy to production

@@ -1,540 +0,0 @@
# Quarkus Implementation Complete ✅

## Summary

The Troostwijk Auction Monitor has been fully integrated with the **Quarkus Framework** for production-ready deployment with enterprise features.

---

## 🎯 What Was Added

### 1. **Quarkus Dependencies** (pom.xml)
```xml
<!-- Core Quarkus -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-arc</artifactId> <!-- CDI/DI -->
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-rest-jackson</artifactId> <!-- REST API -->
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-scheduler</artifactId> <!-- Cron Scheduling -->
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-health</artifactId> <!-- Health Checks -->
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-config-yaml</artifactId> <!-- YAML Config -->
</dependency>
```

### 2. **Configuration** (application.properties)

```properties
# Application
quarkus.application.name=troostwijk-scraper
quarkus.http.port=8081

# Auction Monitor Configuration
auction.database.path=C:\\mnt\\okcomputer\\output\\cache.db
auction.images.path=C:\\mnt\\okcomputer\\output\\images
auction.notification.config=desktop

# YOLO Models
auction.yolo.config=models/yolov4.cfg
auction.yolo.weights=models/yolov4.weights
auction.yolo.classes=models/coco.names

# Workflow Schedules (Cron Expressions)
# Note: comments must sit on their own lines; a trailing "# ..." in a
# .properties file becomes part of the value and breaks cron parsing.
# Every 30 minutes
auction.workflow.scraper-import.cron=0 */30 * * * ?
# Every hour
auction.workflow.image-processing.cron=0 0 * * * ?
# Every 15 minutes
auction.workflow.bid-monitoring.cron=0 */15 * * * ?
# Every 5 minutes
auction.workflow.closing-alerts.cron=0 */5 * * * ?

# Scheduler
quarkus.scheduler.enabled=true

# Health Checks
quarkus.smallrye-health.root-path=/health
```

### 3. **Quarkus Scheduler** (QuarkusWorkflowScheduler.java)

Replaced the manual `ScheduledExecutorService` with Quarkus `@Scheduled`:

```java
@ApplicationScoped
public class QuarkusWorkflowScheduler {

    @Inject DatabaseService db;
    @Inject NotificationService notifier;
    @Inject ObjectDetectionService detector;
    @Inject ImageProcessingService imageProcessor;

    // Workflow 1: Every 30 minutes
    @Scheduled(cron = "{auction.workflow.scraper-import.cron}")
    void importScraperData() { /* ... */ }

    // Workflow 2: Every hour
    @Scheduled(cron = "{auction.workflow.image-processing.cron}")
    void processImages() { /* ... */ }

    // Workflow 3: Every 15 minutes
    @Scheduled(cron = "{auction.workflow.bid-monitoring.cron}")
    void monitorBids() { /* ... */ }

    // Workflow 4: Every 5 minutes
    @Scheduled(cron = "{auction.workflow.closing-alerts.cron}")
    void checkClosingTimes() { /* ... */ }
}
```

### 4. **CDI Producer** (AuctionMonitorProducer.java)
|
||||
|
||||
Centralized service creation with dependency injection:
|
||||
|
||||
```java
|
||||
@ApplicationScoped
|
||||
public class AuctionMonitorProducer {
|
||||
|
||||
@Produces @Singleton
|
||||
public DatabaseService produceDatabaseService(
|
||||
@ConfigProperty(name = "auction.database.path") String dbPath) {
|
||||
DatabaseService db = new DatabaseService(dbPath);
|
||||
db.ensureSchema();
|
||||
return db;
|
||||
}
|
||||
|
||||
@Produces @Singleton
|
||||
public NotificationService produceNotificationService(
|
||||
@ConfigProperty(name = "auction.notification.config") String config) {
|
||||
return new NotificationService(config, "");
|
||||
}
|
||||
|
||||
@Produces @Singleton
|
||||
public ObjectDetectionService produceObjectDetectionService(...) { }
|
||||
|
||||
@Produces @Singleton
|
||||
public ImageProcessingService produceImageProcessingService(...) { }
|
||||
}
|
||||
```
|
||||
|
||||
### 5. **REST API** (AuctionMonitorResource.java)

Full REST API for monitoring and control:

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/monitor/status` | GET | Get current status |
| `/api/monitor/statistics` | GET | Get detailed statistics |
| `/api/monitor/trigger/scraper-import` | POST | Trigger scraper import |
| `/api/monitor/trigger/image-processing` | POST | Trigger image processing |
| `/api/monitor/trigger/bid-monitoring` | POST | Trigger bid monitoring |
| `/api/monitor/trigger/closing-alerts` | POST | Trigger closing alerts |
| `/api/monitor/auctions` | GET | List auctions |
| `/api/monitor/auctions?country=NL` | GET | Filter auctions by country |
| `/api/monitor/lots` | GET | List active lots |
| `/api/monitor/lots/closing-soon` | GET | Lots closing soon |
| `/api/monitor/lots/{id}/images` | GET | Get lot images |
| `/api/monitor/test-notification` | POST | Send test notification |

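The endpoints in the table can be driven from any HTTP client. A minimal stdlib sketch is shown below; the `MonitorClient` class and method names are illustrative (not part of the application), and only the URL paths come from the API table:

```python
import json
import urllib.request


class MonitorClient:
    """Tiny client for the monitor REST API (hypothetical helper)."""

    def __init__(self, base_url="http://localhost:8081"):
        self.base_url = base_url.rstrip("/")

    def _url(self, path, query=""):
        # Builds e.g. http://localhost:8081/api/monitor/auctions?country=NL
        return f"{self.base_url}/api/monitor/{path}" + (f"?{query}" if query else "")

    def status(self):
        # GET /api/monitor/status returns JSON
        with urllib.request.urlopen(self._url("status")) as resp:
            return json.load(resp)

    def trigger(self, workflow):
        # Trigger endpoints are POST with an empty body
        req = urllib.request.Request(
            self._url(f"trigger/{workflow}"), data=b"", method="POST"
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status


# Usage (requires the app running on localhost:8081):
#   client = MonitorClient()
#   client.status()
#   client.trigger("scraper-import")
```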
### 6. **Health Checks** (AuctionMonitorHealthCheck.java)

Kubernetes-ready health probes:

```java
@Liveness   // /health/live
public class LivenessCheck implements HealthCheck {
    public HealthCheckResponse call() {
        return HealthCheckResponse.up("Auction Monitor is alive");
    }
}

@Readiness  // /health/ready
public class ReadinessCheck implements HealthCheck {
    @Inject DatabaseService db;

    public HealthCheckResponse call() {
        var auctions = db.getAllAuctions();
        return HealthCheckResponse.named("database")
                .up()
                .withData("auctions", auctions.size())
                .build();
    }
}

@Startup    // /health/started
public class StartupCheck implements HealthCheck { /* ... */ }
```

### 7. **Docker Support**

#### Dockerfile (Optimized for Quarkus fast-jar)

```dockerfile
# Build stage
FROM maven:3.9-eclipse-temurin-25-alpine AS build
WORKDIR /app
# COPY cannot reach outside the build context with ../, so paths are
# relative to the context root
COPY pom.xml ./
RUN mvn dependency:go-offline -B
COPY src ./src/
RUN mvn package -DskipTests -Dquarkus.package.jar.type=fast-jar

# Runtime stage
FROM eclipse-temurin:25-jre-alpine
WORKDIR /app

# Copy Quarkus fast-jar structure
COPY --from=build /app/target/quarkus-app/lib/ /app/lib/
COPY --from=build /app/target/quarkus-app/*.jar /app/
COPY --from=build /app/target/quarkus-app/app/ /app/app/
COPY --from=build /app/target/quarkus-app/quarkus/ /app/quarkus/

EXPOSE 8081
HEALTHCHECK CMD wget --spider http://localhost:8081/health/live

ENTRYPOINT ["java", "-jar", "/app/quarkus-run.jar"]
```

#### docker-compose.yml

```yaml
version: '3.8'
services:
  auction-monitor:
    build: ../wiki
    ports:
      - "8081:8081"
    volumes:
      - ./data/cache.db:/mnt/okcomputer/output/cache.db
      - ./data/images:/mnt/okcomputer/output/images
    environment:
      - AUCTION_DATABASE_PATH=/mnt/okcomputer/output/cache.db
      - AUCTION_NOTIFICATION_CONFIG=desktop
    healthcheck:
      test: [ "CMD", "wget", "--spider", "http://localhost:8081/health/live" ]
      interval: 30s
    restart: unless-stopped
```

### 8. **Kubernetes Deployment**

Full Kubernetes manifests:
- **Namespace** - Isolated environment
- **PersistentVolumeClaim** - Data storage
- **ConfigMap** - Configuration
- **Secret** - Sensitive data (SMTP credentials)
- **Deployment** - Application pods
- **Service** - Internal networking
- **Ingress** - External access
- **HorizontalPodAutoscaler** - Auto-scaling

---

## 🚀 How to Run

### Development Mode (with live reload)

```bash
mvn quarkus:dev

# Access:
# - App: http://localhost:8081
# - Dev UI: http://localhost:8081/q/dev/
# - API: http://localhost:8081/api/monitor/status
# - Health: http://localhost:8081/health
```

### Production Mode (JAR)

```bash
# Build
mvn clean package

# Run
java -jar target/quarkus-app/quarkus-run.jar

# Access: http://localhost:8081
```

### Docker

```bash
# Build
docker build -t auction-monitor .

# Run
docker run -p 8081:8081 auction-monitor

# Access: http://localhost:8081
```

### Docker Compose

```bash
# Start
docker-compose up -d

# View logs
docker-compose logs -f

# Access: http://localhost:8081
```

### Kubernetes

```bash
# Deploy
kubectl apply -f k8s/deployment.yaml

# Port forward
kubectl port-forward svc/auction-monitor 8081:8081 -n auction-monitor

# Access: http://localhost:8081
```

---

## 📊 Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                    QUARKUS APPLICATION                      │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌────────────────────────────────────────────────────┐     │
│  │  QuarkusWorkflowScheduler (@ApplicationScoped)     │     │
│  │  ┌──────────────────────────────────────────────┐  │     │
│  │  │ @Scheduled(cron = "0 */30 * * * ?")          │  │     │
│  │  │ importScraperData()                          │  │     │
│  │  ├──────────────────────────────────────────────┤  │     │
│  │  │ @Scheduled(cron = "0 0 * * * ?")             │  │     │
│  │  │ processImages()                              │  │     │
│  │  ├──────────────────────────────────────────────┤  │     │
│  │  │ @Scheduled(cron = "0 */15 * * * ?")          │  │     │
│  │  │ monitorBids()                                │  │     │
│  │  ├──────────────────────────────────────────────┤  │     │
│  │  │ @Scheduled(cron = "0 */5 * * * ?")           │  │     │
│  │  │ checkClosingTimes()                          │  │     │
│  │  └──────────────────────────────────────────────┘  │     │
│  └────────────────────────────────────────────────────┘     │
│                          ▲                                  │
│                          │ @Inject                          │
│  ┌───────────────────────┴────────────────────────────┐     │
│  │  AuctionMonitorProducer                            │     │
│  │  ┌──────────────────────────────────────────────┐  │     │
│  │  │ @Produces @Singleton DatabaseService         │  │     │
│  │  │ @Produces @Singleton NotificationService     │  │     │
│  │  │ @Produces @Singleton ObjectDetectionService  │  │     │
│  │  │ @Produces @Singleton ImageProcessingService  │  │     │
│  │  └──────────────────────────────────────────────┘  │     │
│  └────────────────────────────────────────────────────┘     │
│                                                             │
│  ┌────────────────────────────────────────────────────┐     │
│  │  AuctionMonitorResource (REST API)                 │     │
│  │  ┌──────────────────────────────────────────────┐  │     │
│  │  │ GET  /api/monitor/status                     │  │     │
│  │  │ GET  /api/monitor/statistics                 │  │     │
│  │  │ POST /api/monitor/trigger/*                  │  │     │
│  │  │ GET  /api/monitor/auctions                   │  │     │
│  │  │ GET  /api/monitor/lots                       │  │     │
│  │  └──────────────────────────────────────────────┘  │     │
│  └────────────────────────────────────────────────────┘     │
│                                                             │
│  ┌────────────────────────────────────────────────────┐     │
│  │  AuctionMonitorHealthCheck                         │     │
│  │  ┌──────────────────────────────────────────────┐  │     │
│  │  │ @Liveness  - /health/live                    │  │     │
│  │  │ @Readiness - /health/ready                   │  │     │
│  │  │ @Startup   - /health/started                 │  │     │
│  │  └──────────────────────────────────────────────┘  │     │
│  └────────────────────────────────────────────────────┘     │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```

---

## 🔧 Key Features

### 1. **Dependency Injection (CDI)**
- Type-safe injection with `@Inject`
- Singleton services with `@Produces`
- Configuration injection with `@ConfigProperty`

### 2. **Scheduled Tasks**
- Cron-based scheduling with `@Scheduled`
- Configurable via properties
- No manual thread management

### 3. **REST API**
- JAX-RS endpoints
- JSON serialization
- Error handling

### 4. **Health Checks**
- Liveness probe (is app alive?)
- Readiness probe (is app ready?)
- Startup probe (has app started?)

### 5. **Configuration**
- External configuration
- Environment variable override
- Type-safe config injection

### 6. **Container Ready**
- Optimized Docker image
- Fast startup (~0.5s)
- Low memory (~50MB)
- Health checks included

### 7. **Cloud Native**
- Kubernetes manifests
- Auto-scaling support
- Ingress configuration
- Persistent storage

---

## 📁 Files Created/Modified

### New Files

```
src/main/java/com/auction/
├── QuarkusWorkflowScheduler.java     # Quarkus scheduler
├── AuctionMonitorProducer.java       # CDI producer
├── AuctionMonitorResource.java       # REST API
└── AuctionMonitorHealthCheck.java    # Health checks

src/main/resources/
└── application.properties            # Configuration

k8s/
├── deployment.yaml                   # Kubernetes manifests
└── README.md                         # K8s deployment guide

docker-compose.yml                    # Docker Compose config
Dockerfile                            # Updated for Quarkus
QUARKUS_GUIDE.md                      # Complete Quarkus guide
QUARKUS_IMPLEMENTATION.md             # This file
```

### Modified Files

```
pom.xml                                   # Added Quarkus dependencies
src/main/resources/application.properties # Added config
```

---

## 🎯 Benefits of Quarkus

| Feature | Before | After (Quarkus) |
|---------|--------|-----------------|
| **Startup Time** | ~3-5 seconds | ~0.5 seconds |
| **Memory** | ~200MB | ~50MB |
| **Scheduling** | Manual ExecutorService | @Scheduled annotations |
| **DI/CDI** | Manual instantiation | @Inject, @Produces |
| **REST API** | None | Full JAX-RS API |
| **Health Checks** | None | Built-in probes |
| **Config** | Hard-coded | External properties |
| **Dev Mode** | Manual restart | Live reload |
| **Container** | Basic Docker | Optimized fast-jar |
| **Cloud Native** | Not ready | K8s ready |

---

## 🧪 Testing

### Unit Tests
```bash
mvn test
```

### Integration Tests
```bash
# Start app
mvn quarkus:dev

# In another terminal
curl http://localhost:8081/api/monitor/status
curl http://localhost:8081/health
curl -X POST http://localhost:8081/api/monitor/trigger/scraper-import
```

### Docker Test
```bash
docker-compose up -d
docker-compose logs -f
curl http://localhost:8081/api/monitor/status
docker-compose down
```

---

## 📚 Documentation

1. **QUARKUS_GUIDE.md** - Complete Quarkus usage guide
2. **QUARKUS_IMPLEMENTATION.md** - This file (implementation details)
3. **k8s/README.md** - Kubernetes deployment guide
4. **docker-compose.yml** - Docker Compose reference
5. **README.md** - Updated main README

---

## 🎉 Summary

✅ **Quarkus Framework** - Fully integrated
✅ **@Scheduled Workflows** - Cron-based scheduling
✅ **CDI/Dependency Injection** - Clean architecture
✅ **REST API** - Full control interface
✅ **Health Checks** - Kubernetes ready
✅ **Docker/Compose** - Production containers
✅ **Kubernetes** - Cloud deployment
✅ **Configuration** - Externalized settings
✅ **Documentation** - Complete guides

**The application is now production-ready with Quarkus! 🚀**

### Quick Commands

```bash
# Development
mvn quarkus:dev

# Production
mvn clean package
java -jar target/quarkus-app/quarkus-run.jar

# Docker
docker-compose up -d

# Kubernetes
kubectl apply -f k8s/deployment.yaml
```

### API Access

```bash
# Status
curl http://localhost:8081/api/monitor/status

# Statistics
curl http://localhost:8081/api/monitor/statistics

# Health
curl http://localhost:8081/health

# Trigger workflow
curl -X POST http://localhost:8081/api/monitor/trigger/scraper-import
```

**Enjoy your Quarkus-powered Auction Monitor! 🎊**
@@ -1,191 +0,0 @@
# Quick Start Guide

Get the scraper running in minutes without downloading YOLO models!

## Minimal Setup (No Object Detection)

The scraper works perfectly fine **without** YOLO object detection. You can run it immediately and add object detection later if needed.

### Step 1: Run the Scraper

```bash
# Using Maven
mvn clean compile exec:java -Dexec.mainClass="com.auction.scraper.TroostwijkScraper"
```

Or in IntelliJ IDEA:
1. Open `TroostwijkScraper.java`
2. Right-click on the `main` method
3. Select "Run 'TroostwijkScraper.main()'"

### What You'll See

```
=== Troostwijk Auction Scraper ===

Initializing scraper...
⚠️ Object detection disabled: YOLO model files not found
Expected files:
- models/yolov4.cfg
- models/yolov4.weights
- models/coco.names
Scraper will continue without image analysis.

[1/3] Discovering Dutch auctions...
✓ Found 5 auctions: [12345, 12346, 12347, 12348, 12349]

[2/3] Fetching lot details...
Processing sale 12345...

[3/3] Starting monitoring service...
✓ Monitoring active. Press Ctrl+C to stop.
```

### Step 2: Test Desktop Notifications

The scraper will automatically send desktop notifications when:
- A new bid is placed on a monitored lot
- An auction is closing within 5 minutes

**No setup required** - desktop notifications work out of the box!

---

## Optional: Add Email Notifications

If you want email notifications in addition to desktop notifications:

```bash
# Set environment variable
export NOTIFICATION_CONFIG="smtp:your.email@gmail.com:app_password:your.email@gmail.com"

# Then run the scraper
mvn exec:java -Dexec.mainClass="com.auction.scraper.TroostwijkScraper"
```

**Get Gmail App Password:**
1. Enable 2FA in Google Account
2. Go to: Google Account → Security → 2-Step Verification → App passwords
3. Generate password for "Mail"
4. Use that password (not your regular Gmail password)

---

## Optional: Add Object Detection Later

If you want AI-powered image analysis to detect objects in auction photos:

### 1. Create models directory
```bash
mkdir models
cd models
```

### 2. Download YOLO files
```bash
# YOLOv4 config (small)
curl -O https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4.cfg

# YOLOv4 weights (245 MB - takes a few minutes)
curl -LO https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights

# COCO class names
curl -O https://raw.githubusercontent.com/AlexeyAB/darknet/master/data/coco.names
```

### 3. Run again
```bash
mvn exec:java -Dexec.mainClass="com.auction.scraper.TroostwijkScraper"
```

Now you'll see:
```
✓ Object detection enabled with YOLO
```

The scraper will now analyze auction images and detect objects like:
- Vehicles (cars, trucks, forklifts)
- Equipment (machines, tools)
- Furniture
- Electronics
- And 80+ other object types

---

## Features Without Object Detection

Even without YOLO, the scraper provides:

✅ **Full auction scraping** - Discovers all Dutch auctions
✅ **Lot tracking** - Monitors bids and closing times
✅ **Desktop notifications** - Real-time alerts
✅ **SQLite database** - All data persisted locally
✅ **Image downloading** - Saves all lot images
✅ **Scheduled monitoring** - Automatic updates every hour

Object detection simply adds:
- AI-powered image analysis
- Automatic object labeling
- Searchable image database

---

## Database Location

The scraper creates `troostwijk.db` in your current directory with:
- All auction data
- Lot details (title, description, bids, etc.)
- Downloaded image paths
- Object labels (if detection enabled)

View the database with any SQLite browser:
```bash
sqlite3 troostwijk.db
.tables
SELECT * FROM lots LIMIT 5;
```

---

## Stopping the Scraper

Press **Ctrl+C** to stop the monitoring service.

---

## Next Steps

1. ✅ **Run the scraper** without YOLO to test it
2. ✅ **Verify desktop notifications** work
3. ⚙️ **Optional**: Add email notifications
4. ⚙️ **Optional**: Download YOLO models for object detection
5. 🔧 **Customize**: Edit monitoring frequency, closing alerts, etc.

---

## Troubleshooting

### Desktop notifications not appearing?
- **Windows**: Check if Java has notification permissions
- **Linux**: Ensure desktop environment is running (not headless)
- **macOS**: Check System Preferences → Notifications

### OpenCV warnings?
These are normal and can be ignored:
```
WARNING: A restricted method in java.lang.System has been called
WARNING: Use --enable-native-access=ALL-UNNAMED to avoid warning
```

The scraper works fine despite these warnings.

---

## Full Documentation

See [README.md](../README.md) for complete documentation including:
- Email setup details
- YOLO installation guide
- Configuration options
- Database schema
- API endpoints
@@ -1,399 +0,0 @@
# Scraper Refactor Guide - Image Download Integration

## 🎯 Objective

Refactor the Troostwijk scraper to **download and store images locally**, eliminating the 57M+ duplicate image problem in the monitoring process.

## 📋 Current vs. New Architecture

### **Before** (Current Architecture)
```
┌──────────────┐         ┌──────────────┐         ┌──────────────┐
│   Scraper    │────────▶│   Database   │◀────────│   Monitor    │
│              │         │              │         │              │
│ Stores URLs  │         │ images table │         │ Downloads +  │
│ downloaded=0 │         │              │         │ Detection    │
└──────────────┘         └──────────────┘         └──────────────┘
                                                         │
                                                         ▼
                                                 57M+ duplicates!
```

### **After** (New Architecture)
```
┌──────────────┐         ┌──────────────┐         ┌──────────────┐
│   Scraper    │────────▶│   Database   │◀────────│   Monitor    │
│              │         │              │         │              │
│ Downloads +  │         │ images table │         │  Detection   │
│ Stores path  │         │ local_path ✓ │         │    Only      │
│ downloaded=1 │         │              │         │              │
└──────────────┘         └──────────────┘         └──────────────┘
                                                         │
                                                         ▼
                                                  No duplicates!
```

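The "before" failure mode - every monitoring pass re-inserting the same `(lot_id, url)` rows - can be reproduced in miniature with an in-memory SQLite database. The lot ID and URLs below are made-up sample data; only the `images` table shape follows the schema in this guide:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE images ("
    "id INTEGER PRIMARY KEY AUTOINCREMENT, lot_id TEXT, url TEXT, downloaded INTEGER)"
)

# Three image URLs for one lot (illustrative data)
lot_images = [("A1-28505-5", f"https://example.invalid/img{i}.jpg") for i in range(3)]

# "Before": each monitoring pass blindly re-inserts every URL row
for _pass in range(4):
    con.executemany(
        "INSERT INTO images (lot_id, url, downloaded) VALUES (?, ?, 0)", lot_images
    )

total = con.execute("SELECT COUNT(*) FROM images").fetchone()[0]
distinct = con.execute(
    "SELECT COUNT(*) FROM (SELECT DISTINCT lot_id, url FROM images)"
).fetchone()[0]
print(total, distinct)  # 12 3 - four passes turned 3 images into 12 rows
```

Scale four passes up to months of hourly monitoring runs and ~16,807 real images, and the 57M-row blow-up follows.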
## 🗄️ Database Schema Changes

### Current Schema (ARCHITECTURE-TROOSTWIJK-SCRAPER.md:113-122)
```sql
CREATE TABLE images (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    lot_id TEXT,
    url TEXT,
    local_path TEXT,        -- Currently NULL
    downloaded INTEGER      -- Currently 0
    -- Missing: processed_at, labels (added by monitor)
);
```

### Required Schema (Already Compatible!)
```sql
CREATE TABLE images (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    lot_id TEXT,
    url TEXT,
    local_path TEXT,        -- ✅ SET by scraper after download
    downloaded INTEGER,     -- ✅ SET to 1 by scraper after download
    labels TEXT,            -- ⚠️ SET by monitor (object detection)
    processed_at INTEGER,   -- ⚠️ SET by monitor (timestamp)
    FOREIGN KEY (lot_id) REFERENCES lots(lot_id)
);
```

**Good News**: The scraper's schema already has `local_path` and `downloaded` columns! You just need to populate them.

## 🔧 Implementation Steps

### **Step 1: Enable Image Downloading in Configuration**

**File**: Your scraper's config file (e.g., `config.py` or environment variables)

```python
# Current setting
DOWNLOAD_IMAGES = False  # ❌ Change this!

# New setting
DOWNLOAD_IMAGES = True   # ✅ Enable downloads

# Image storage path
IMAGES_DIR = "/mnt/okcomputer/output/images"  # Or your preferred path
```

### **Step 2: Update Image Download Logic**

Based on ARCHITECTURE-TROOSTWIJK-SCRAPER.md:211-228, you already have the structure. Here's what needs to change:

**Current Code** (Conceptual):
```python
# Phase 3: Scrape lot details
def scrape_lot(lot_url):
    lot_data = parse_lot_page(lot_url)

    # Save lot to database
    db.insert_lot(lot_data)

    # Save image URLs to database (NOT DOWNLOADED)
    for img_url in lot_data['images']:
        db.execute("""
            INSERT INTO images (lot_id, url, downloaded)
            VALUES (?, ?, 0)
        """, (lot_data['lot_id'], img_url))
```

**New Code** (Required):
```python
import os
import time
from pathlib import Path

import requests

def scrape_lot(lot_url):
    lot_data = parse_lot_page(lot_url)

    # Save lot to database
    db.insert_lot(lot_data)

    # Download and save images
    for idx, img_url in enumerate(lot_data['images'], start=1):
        try:
            # Download image
            local_path = download_image(img_url, lot_data['lot_id'], idx)

            # Insert with local_path and downloaded=1
            db.execute("""
                INSERT INTO images (lot_id, url, local_path, downloaded)
                VALUES (?, ?, ?, 1)
                ON CONFLICT(lot_id, url) DO UPDATE SET
                    local_path = excluded.local_path,
                    downloaded = 1
            """, (lot_data['lot_id'], img_url, local_path))

            # Rate limiting (0.5s between downloads)
            time.sleep(0.5)

        except Exception as e:
            print(f"Failed to download {img_url}: {e}")
            # Still insert record but mark as not downloaded
            db.execute("""
                INSERT INTO images (lot_id, url, downloaded)
                VALUES (?, ?, 0)
            """, (lot_data['lot_id'], img_url))

def download_image(image_url, lot_id, index):
    """
    Downloads an image and saves it to an organized directory structure.

    Args:
        image_url: Remote URL of the image
        lot_id: Lot identifier (e.g., "A1-28505-5")
        index: Image sequence number (1, 2, 3, ...)

    Returns:
        Absolute path to saved file
    """
    # Create directory structure: /images/{lot_id}/
    images_dir = Path(os.getenv('IMAGES_DIR', '/mnt/okcomputer/output/images'))
    lot_dir = images_dir / lot_id
    lot_dir.mkdir(parents=True, exist_ok=True)

    # Determine file extension from URL or content-type
    ext = Path(image_url).suffix or '.jpg'
    filename = f"{index:03d}{ext}"  # 001.jpg, 002.jpg, etc.
    local_path = lot_dir / filename

    # Download with timeout
    response = requests.get(image_url, timeout=10)
    response.raise_for_status()

    # Save to disk
    with open(local_path, 'wb') as f:
        f.write(response.content)

    return str(local_path.absolute())
```

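The naming and extension logic inside `download_image` can be exercised in isolation. This is a pure-path sketch with no network or disk I/O; `image_filename` is a hypothetical helper, and the URLs are made up:

```python
from pathlib import PurePosixPath

def image_filename(image_url, index):
    # Same scheme as download_image: zero-padded index + extension from the
    # URL, falling back to .jpg when the URL has no suffix
    ext = PurePosixPath(image_url).suffix or '.jpg'
    return f"{index:03d}{ext}"

print(image_filename("https://media.example/lot/photo.png", 1))   # 001.png
print(image_filename("https://media.example/lot/photo", 12))      # 012.jpg
```

`PurePosixPath` is used instead of `Path` so the suffix parsing behaves the same on every platform.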
### **Step 3: Add Unique Constraint to Prevent Duplicates**

**Migration SQL**:
```sql
-- Add unique constraint to prevent duplicate image records
CREATE UNIQUE INDEX IF NOT EXISTS idx_images_unique
ON images(lot_id, url);
```

Add this to your scraper's schema initialization:

```python
import sqlite3

def init_database():
    conn = sqlite3.connect('/mnt/okcomputer/output/cache.db')
    cursor = conn.cursor()

    # Existing table creation...
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS images (...)
    """)

    # Add unique constraint (NEW)
    cursor.execute("""
        CREATE UNIQUE INDEX IF NOT EXISTS idx_images_unique
        ON images(lot_id, url)
    """)

    conn.commit()
    conn.close()
```

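With the unique index in place, the `ON CONFLICT` upsert from Step 2 turns a re-scrape into an update instead of a new row. A quick in-memory check (the lot ID, URL, and paths are illustrative sample data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE images ("
    "id INTEGER PRIMARY KEY AUTOINCREMENT, lot_id TEXT, url TEXT, "
    "local_path TEXT, downloaded INTEGER)"
)
con.execute("CREATE UNIQUE INDEX idx_images_unique ON images(lot_id, url)")

upsert = """
    INSERT INTO images (lot_id, url, local_path, downloaded)
    VALUES (?, ?, ?, 1)
    ON CONFLICT(lot_id, url) DO UPDATE SET
        local_path = excluded.local_path,
        downloaded = 1
"""

# First scrape, then a re-scrape of the same image with a new local path
con.execute(upsert, ("A1-28505-5", "https://example.invalid/1.jpg",
                     "/images/A1-28505-5/001.jpg"))
con.execute(upsert, ("A1-28505-5", "https://example.invalid/1.jpg",
                     "/images/A1-28505-5/001_v2.jpg"))

rows = con.execute("SELECT COUNT(*), MAX(local_path) FROM images").fetchone()
print(rows)  # (1, '/images/A1-28505-5/001_v2.jpg') - updated, not duplicated
```

Note that `ON CONFLICT(lot_id, url)` only works once this unique index exists; without it, SQLite rejects the clause.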
### **Step 4: Handle Image Download Failures Gracefully**

```python
def download_with_retry(image_url, lot_id, index, max_retries=3):
    """Downloads image with retry logic."""
    for attempt in range(max_retries):
        try:
            return download_image(image_url, lot_id, index)
        except requests.exceptions.RequestException:
            if attempt == max_retries - 1:
                print(f"Failed after {max_retries} attempts: {image_url}")
                return None  # Return None on failure
            print(f"Retry {attempt + 1}/{max_retries} for {image_url}")
            time.sleep(2 ** attempt)  # Exponential backoff
```

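The `2 ** attempt` backoff means waits of 1s and 2s between the three attempts (the final failure returns immediately, without sleeping). The `backoff_schedule` helper below is a small illustrative sketch of that schedule, not part of the scraper:

```python
def backoff_schedule(max_retries=3):
    # Delays slept between attempts; the last failed attempt does not sleep
    return [2 ** attempt for attempt in range(max_retries - 1)]

print(backoff_schedule())   # [1, 2]
print(backoff_schedule(5))  # [1, 2, 4, 8]
```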
### **Step 5: Update Database Queries**

Make sure your INSERT uses `INSERT ... ON CONFLICT` to handle re-scraping:

```python
# Good: Handles re-scraping without duplicates
db.execute("""
    INSERT INTO images (lot_id, url, local_path, downloaded)
    VALUES (?, ?, ?, 1)
    ON CONFLICT(lot_id, url) DO UPDATE SET
        local_path = excluded.local_path,
        downloaded = 1
""", (lot_id, img_url, local_path))

# Bad: Creates duplicates on re-scrape (or, once the Step 3 unique
# index exists, fails with an IntegrityError instead)
db.execute("""
    INSERT INTO images (lot_id, url, local_path, downloaded)
    VALUES (?, ?, ?, 1)
""", (lot_id, img_url, local_path))
```

## 📊 Expected Outcomes

### Before Refactor
```sql
SELECT COUNT(*) FROM images WHERE downloaded = 0;
-- Result: 57,376,293 (57M+ undownloaded!)

SELECT COUNT(*) FROM images WHERE local_path IS NOT NULL;
-- Result: 0 (no files downloaded)
```

### After Refactor
```sql
SELECT COUNT(*) FROM images WHERE downloaded = 1;
-- Result: ~16,807 (one per actual lot image)

SELECT COUNT(*) FROM images WHERE local_path IS NOT NULL;
-- Result: ~16,807 (all downloaded images have paths)

-- SQLite does not accept COUNT(DISTINCT a, b), so count pairs via a subquery
SELECT COUNT(*) FROM (SELECT DISTINCT lot_id, url FROM images);
-- Result: ~16,807 (no duplicates!)
```

## 🚀 Deployment Checklist

### Pre-Deployment
- [ ] Back up current database: `cp cache.db cache.db.backup`
- [ ] Verify disk space: At least 10GB free for images
- [ ] Test download function on 5 sample lots
- [ ] Verify `IMAGES_DIR` path exists and is writable

### Deployment
- [ ] Update configuration: `DOWNLOAD_IMAGES = True`
- [ ] Run schema migration to add unique index
- [ ] Deploy updated scraper code
- [ ] Monitor first 100 lots for errors

### Post-Deployment Verification
```sql
-- Check download success rate
SELECT
    COUNT(*) AS total_images,
    SUM(CASE WHEN downloaded = 1 THEN 1 ELSE 0 END) AS downloaded,
    SUM(CASE WHEN downloaded = 0 THEN 1 ELSE 0 END) AS failed,
    ROUND(100.0 * SUM(downloaded) / COUNT(*), 2) AS success_rate
FROM images;

-- Check for duplicates (should be 0)
SELECT lot_id, url, COUNT(*) AS dup_count
FROM images
GROUP BY lot_id, url
HAVING COUNT(*) > 1;

-- Verify file system
SELECT COUNT(*) FROM images
WHERE downloaded = 1
  AND local_path IS NOT NULL
  AND local_path != '';
```

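The same checks can be scripted with the stdlib `sqlite3` module. The sketch below runs against an in-memory database seeded with four made-up rows; in production you would point `connect()` at `cache.db` instead. The `verify` helper is illustrative, not part of the scraper:

```python
import sqlite3

def verify(con):
    # Mirrors the verification SQL above: success rate + duplicate pairs
    total, ok = con.execute(
        "SELECT COUNT(*), SUM(CASE WHEN downloaded = 1 THEN 1 ELSE 0 END) FROM images"
    ).fetchone()
    dups = con.execute(
        "SELECT COUNT(*) FROM (SELECT lot_id, url FROM images "
        "GROUP BY lot_id, url HAVING COUNT(*) > 1)"
    ).fetchone()[0]
    return {"total": total, "downloaded": ok,
            "success_rate": round(100.0 * ok / total, 2),
            "duplicate_pairs": dups}

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, lot_id TEXT, "
            "url TEXT, local_path TEXT, downloaded INTEGER)")
con.executemany(
    "INSERT INTO images (lot_id, url, local_path, downloaded) VALUES (?, ?, ?, ?)",
    [("A1-1", "u1", "/img/1.jpg", 1), ("A1-1", "u2", "/img/2.jpg", 1),
     ("A1-2", "u3", None, 0), ("A1-2", "u4", "/img/4.jpg", 1)],
)
report = verify(con)
print(report)  # {'total': 4, 'downloaded': 3, 'success_rate': 75.0, 'duplicate_pairs': 0}
```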
## 🔍 Monitoring Process Impact

The monitoring process (auctiora) will automatically:
- ✅ Stop downloading images (network I/O eliminated)
- ✅ Only run object detection on `local_path` files
- ✅ Query: `WHERE local_path IS NOT NULL AND (labels IS NULL OR labels = '')`
- ✅ Update only the `labels` and `processed_at` columns

**No changes needed in the monitoring process!** It's already updated to work with scraper-downloaded images.

## 🐛 Troubleshooting

### Problem: "No space left on device"
```bash
# Check disk usage
df -h /mnt/okcomputer/output/images

# Estimate needed space: ~100KB per image
# 16,807 images × 100KB = ~1.6GB
```

### Problem: "Permission denied" when writing images
```bash
# Fix permissions
chmod 755 /mnt/okcomputer/output/images
chown -R scraper_user:scraper_group /mnt/okcomputer/output/images
```

### Problem: Images downloading but not recorded in DB
```python
# Add logging
import logging
logging.basicConfig(level=logging.INFO)

def download_image(...):
    logging.info(f"Downloading {image_url} to {local_path}")
    # ... download code ...
    logging.info(f"Saved to {local_path}, size: {os.path.getsize(local_path)} bytes")
    return local_path
```

### Problem: Duplicate images after refactor
|
||||
```sql
|
||||
-- Find duplicates
|
||||
SELECT lot_id, url, COUNT(*)
|
||||
FROM images
|
||||
GROUP BY lot_id, url
|
||||
HAVING COUNT(*) > 1;
|
||||
|
||||
-- Clean up duplicates (keep newest)
|
||||
DELETE FROM images
|
||||
WHERE id NOT IN (
|
||||
SELECT MAX(id)
|
||||
FROM images
|
||||
GROUP BY lot_id, url
|
||||
);
|
||||
```
|
||||
|
||||
## 📈 Performance Comparison

| Metric | Before (Monitor Downloads) | After (Scraper Downloads) |
|--------|----------------------------|---------------------------|
| **Image records** | 57,376,293 | ~16,807 |
| **Duplicates** | 57,359,486 (99.97%!) | 0 |
| **Network I/O** | Monitor process | Scraper process |
| **Disk usage** | 0 (URLs only) | ~1.6GB (actual files) |
| **Processing speed** | 500ms/image (download + detect) | 100ms/image (detect only) |
| **Error handling** | Complex (download failures) | Simple (files exist) |

## 🎓 Code Examples by Language

### Python (Most Likely)
See **Step 2** above for the complete implementation.

## 📚 References

- **Current Scraper Architecture**: `wiki/ARCHITECTURE-TROOSTWIJK-SCRAPER.md`
- **Database Schema**: `wiki/DATABASE_ARCHITECTURE.md`
- **Monitor Changes**: See commit history for `ImageProcessingService.java`, `DatabaseService.java`

## ✅ Success Criteria

You'll know the refactor is successful when:

1. ✅ Database query `SELECT COUNT(*) FROM images` returns ~16,807 (not 57M+)
2. ✅ All images have `downloaded = 1` and `local_path IS NOT NULL`
3. ✅ No duplicate records: `SELECT lot_id, url, COUNT(*) ... HAVING COUNT(*) > 1` returns 0 rows
4. ✅ Monitor logs show "Found N images needing detection" with reasonable numbers
5. ✅ Files exist at the paths in the `local_path` column
6. ✅ Monitor process speed increases (100ms vs 500ms per image)

---

**Questions?** Check the troubleshooting section or inspect the monitor's updated code in:
- `src/main/java/auctiora/ImageProcessingService.java`
- `src/main/java/auctiora/DatabaseService.java:695-719`

@@ -1,333 +0,0 @@
# Test Suite Summary

## Overview
Comprehensive test suite for the Troostwijk Auction Monitor, with individual test cases for every aspect of the system.

## Configuration Updates

### Paths Updated
- **Database**: `C:\mnt\okcomputer\output\cache.db`
- **Images**: `C:\mnt\okcomputer\output\images\{saleId}\{lotId}\`

### Files Modified
1. `src/main/java/com/auction/Main.java` - Updated default database path
2. `src/main/java/com/auction/ImageProcessingService.java` - Updated image storage path

## Test Files Created

### 1. ScraperDataAdapterTest.java (13 test cases)
Tests data transformation from the external scraper schema to the monitor schema:

- ✅ Extract numeric ID from text format (auction & lot IDs)
- ✅ Convert scraper auction format to AuctionInfo
- ✅ Handle simple location without country
- ✅ Convert scraper lot format to Lot
- ✅ Parse bid amounts from various formats (€, $, £, plain numbers)
- ✅ Handle missing/null fields gracefully
- ✅ Parse various timestamp formats (ISO, SQL)
- ✅ Handle invalid timestamps
- ✅ Extract type prefix from auction ID
- ✅ Handle GBP currency symbol
- ✅ Handle "No bids" text
- ✅ Parse complex lot IDs (A1-28505-5 → 285055)
- ✅ Validate field mapping (lots_count → lotCount, etc.)

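The bid-amount cases above can be sketched in Python. The real logic lives in the Java `ScraperDataAdapter`; `parse_bid_amount` below is a hypothetical stand-in that handles only the formats named in the list (€ with European separators, $, £, plain numbers, "No bids").

```python
import re

def parse_bid_amount(text):
    """Parse '€1.234,56', '$100', '£99', '1000', or 'No bids' into a float."""
    if not text or "no bid" in text.lower():
        return 0.0
    # Keep digits and separators, drop currency symbols and spaces
    digits = re.sub(r"[^\d.,]", "", text)
    if "," in digits:
        # European format: '.' is the thousands separator, ',' the decimal point
        digits = digits.replace(".", "").replace(",", ".")
    return float(digits) if digits else 0.0
```

The comma check is what distinguishes European "1.234,56" from a plain "1000"; inputs outside the listed formats would need additional rules.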
### 2. DatabaseServiceTest.java (15 test cases)
Tests database operations and SQLite persistence:

- ✅ Create database schema successfully
- ✅ Insert and retrieve auction
- ✅ Update existing auction on conflict (UPSERT)
- ✅ Retrieve auctions by country code
- ✅ Insert and retrieve lot
- ✅ Update lot current bid
- ✅ Update lot notification flags
- ✅ Insert and retrieve image records
- ✅ Count total images
- ✅ Handle empty database gracefully
- ✅ Handle lots with null closing time
- ✅ Retrieve active lots
- ✅ Handle concurrent upserts (thread safety)
- ✅ Validate foreign key relationships
- ✅ Test database index performance

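The UPSERT behavior these tests exercise can be shown against SQLite directly. This is a Python sketch for brevity (the actual service is `DatabaseService.java`); the column subset is illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auctions (auction_id TEXT PRIMARY KEY, title TEXT)")

def upsert_auction(auction_id, title):
    """Insert, or update the title when the auction already exists (UPSERT)."""
    conn.execute(
        "INSERT INTO auctions (auction_id, title) VALUES (?, ?) "
        "ON CONFLICT(auction_id) DO UPDATE SET title = excluded.title",
        (auction_id, title),
    )

# Second call hits the conflict branch instead of failing on the primary key
upsert_auction("A7-40063-2", "Metalworking machines")
upsert_auction("A7-40063-2", "Metalworking machines (updated)")
```

`ON CONFLICT ... DO UPDATE` (SQLite 3.24+) is what makes repeated imports idempotent: re-running the scraper import refreshes rows instead of raising constraint errors.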
### 3. ImageProcessingServiceTest.java (11 test cases)
Tests the image downloading and processing pipeline:

- ✅ Process images for lot with object detection
- ✅ Handle image download failure gracefully
- ✅ Create directory structure for images
- ✅ Save detected objects to database
- ✅ Handle empty image list
- ✅ Process pending images from database
- ✅ Skip lots that already have images
- ✅ Handle database errors during image save
- ✅ Handle empty detection results
- ✅ Handle lots with no existing images
- ✅ Capture and verify detection labels

### 4. ObjectDetectionServiceTest.java (10 test cases)
Tests YOLO object detection functionality:

- ✅ Initialize with missing YOLO models (disabled mode)
- ✅ Return empty list when detection is disabled
- ✅ Handle invalid image path gracefully
- ✅ Handle empty image file
- ✅ Initialize successfully with valid model files
- ✅ Handle missing class names file
- ✅ Detect when model files are missing
- ✅ Return unique labels only
- ✅ Handle multiple detections in the same image
- ✅ Respect confidence threshold (0.5)

### 5. NotificationServiceTest.java (19 test cases)
Tests desktop and email notification delivery:

- ✅ Initialize with desktop-only configuration
- ✅ Initialize with SMTP configuration
- ✅ Reject invalid SMTP configuration format
- ✅ Reject unknown configuration type
- ✅ Send desktop notification without error
- ✅ Send high priority notification
- ✅ Send normal priority notification
- ✅ Handle notification when system tray not supported
- ✅ Send email notification with valid SMTP config
- ✅ Include both desktop and email when SMTP configured
- ✅ Handle empty message gracefully
- ✅ Handle very long message (1000+ chars)
- ✅ Handle special characters in message (€, ⚠️)
- ✅ Accept case-insensitive desktop config
- ✅ Validate SMTP config parts count
- ✅ Handle multiple rapid notifications
- ✅ Send bid change notification format
- ✅ Send closing alert notification format
- ✅ Send object detection notification format

### 6. TroostwijkMonitorTest.java (12 test cases)
Tests monitoring orchestration and coordination:

- ✅ Initialize monitor successfully
- ✅ Print database stats without error
- ✅ Process pending images without error
- ✅ Handle empty database gracefully
- ✅ Track lots in database
- ✅ Monitor lots closing soon (< 5 minutes)
- ✅ Identify lots with time remaining
- ✅ Handle lots without closing time
- ✅ Track notification status
- ✅ Update bid amounts
- ✅ Handle multiple concurrent lot updates
- ✅ Handle database with auctions and lots

### 7. IntegrationTest.java (10 test cases)
Tests complete end-to-end workflows:

- ✅ **Test 1**: Complete scraper data import workflow
  - Import auction from scraper format
  - Import multiple lots for auction
  - Verify data integrity

- ✅ **Test 2**: Image processing and detection workflow
  - Add images for lots
  - Run object detection
  - Save labels to database

- ✅ **Test 3**: Bid monitoring and notification workflow
  - Simulate bid increase
  - Update database
  - Send notification
  - Verify bid was updated

- ✅ **Test 4**: Closing alert workflow
  - Create lot closing soon
  - Send high-priority notification
  - Mark as notified
  - Verify notification flag

- ✅ **Test 5**: Multi-country auction filtering
  - Add auctions from NL, RO, BE
  - Filter by country code
  - Verify filtering works correctly

- ✅ **Test 6**: Complete monitoring cycle
  - Print database statistics
  - Process pending images
  - Verify database integrity

- ✅ **Test 7**: Data consistency across services
  - Verify all auctions have valid data
  - Verify all lots have valid data
  - Check referential integrity

- ✅ **Test 8**: Object detection value estimation workflow
  - Create lot with detected objects
  - Add images with labels
  - Analyze detected objects
  - Send value estimation notification

- ✅ **Test 9**: Handle rapid concurrent updates
  - Concurrent auction insertions
  - Concurrent lot insertions
  - Verify all data persisted correctly

- ✅ **Test 10**: End-to-end notification scenarios
  - Bid change notification
  - Closing alert
  - Object detection notification
  - Value estimate notification
  - Viewing day reminder

## Test Coverage Summary

| Component | Test Cases | Coverage Areas |
|-----------|------------|----------------|
| **ScraperDataAdapter** | 13 | Data transformation, ID parsing, currency parsing, timestamp parsing |
| **DatabaseService** | 15 | CRUD operations, concurrency, foreign keys, indexes |
| **ImageProcessingService** | 11 | Download, detection integration, error handling |
| **ObjectDetectionService** | 10 | YOLO initialization, detection, confidence threshold |
| **NotificationService** | 19 | Desktop/Email, priority levels, special chars, formats |
| **TroostwijkMonitor** | 12 | Orchestration, monitoring, bid tracking, alerts |
| **Integration** | 10 | End-to-end workflows, multi-service coordination |
| **TOTAL** | **90** | **Complete system coverage** |

## Key Testing Patterns

### 1. Isolation Testing
Each component is tested independently with mocks:
```java
mockDb = mock(DatabaseService.class);
mockDetector = mock(ObjectDetectionService.class);
service = new ImageProcessingService(mockDb, mockDetector);
```

### 2. Integration Testing
Components are tested together for realistic scenarios:
```
db → imageProcessor → detector → notifier
```

### 3. Concurrency Testing
Thread safety is verified with parallel operations:
```java
Thread t1 = new Thread(() -> db.upsertLot(...));
Thread t2 = new Thread(() -> db.upsertLot(...));
t1.start(); t2.start();
```

### 4. Error Handling
Graceful degradation is tested throughout:
```java
assertDoesNotThrow(() -> service.process(invalidInput));
```

## Running the Tests

### Run All Tests
```bash
mvn test
```

### Run Specific Test Class
```bash
mvn test -Dtest=ScraperDataAdapterTest
mvn test -Dtest=IntegrationTest
```

### Run Single Test Method
```bash
mvn test -Dtest=IntegrationTest#testCompleteScraperImportWorkflow
```

### Generate Coverage Report
```bash
mvn jacoco:prepare-agent test jacoco:report
```

## Test Data Cleanup
All tests use temporary databases that are automatically cleaned up:
```java
@AfterAll
void tearDown() throws Exception {
    Files.deleteIfExists(Paths.get(testDbPath));
}
```

## Integration Scenarios Covered

### Scenario 1: New Auction Discovery
1. External scraper finds new auction
2. Data imported via ScraperDataAdapter
3. Lots added to database
4. Images downloaded
5. Object detection runs
6. Notification sent to user

### Scenario 2: Bid Monitoring
1. Monitor checks API every hour
2. Detects bid increase
3. Updates database
4. Sends notification
5. User can place counter-bid

### Scenario 3: Closing Alert
1. Monitor checks closing times
2. Lot closing in < 5 minutes
3. High-priority notification sent
4. Flag updated to prevent duplicates
5. User can place final bid

### Scenario 4: Value Estimation
1. Images downloaded
2. YOLO detects objects
3. Labels saved to database
4. Value estimated (future feature)
5. Notification sent with estimate

## Dependencies Required for Tests

```xml
<dependencies>
    <!-- JUnit 5 -->
    <dependency>
        <groupId>org.junit.jupiter</groupId>
        <artifactId>junit-jupiter</artifactId>
        <version>5.10.0</version>
        <scope>test</scope>
    </dependency>

    <!-- Mockito -->
    <dependency>
        <groupId>org.mockito</groupId>
        <artifactId>mockito-core</artifactId>
        <version>5.5.0</version>
        <scope>test</scope>
    </dependency>

    <!-- Mockito JUnit Jupiter -->
    <dependency>
        <groupId>org.mockito</groupId>
        <artifactId>mockito-junit-jupiter</artifactId>
        <version>5.5.0</version>
        <scope>test</scope>
    </dependency>
</dependencies>
```

## Notes

- All tests are independent and can run in any order
- Tests use in-memory or temporary databases
- No actual HTTP requests are made (except in integration tests)
- YOLO models are optional (tests work in disabled mode)
- Notifications are tested but may not display in headless environments
- Tests document the expected behavior of each component

## Future Test Enhancements

1. **Mock HTTP Server** for realistic image download testing
2. **Test Containers** for full database integration
3. **Performance Tests** for large datasets (1000+ auctions)
4. **Stress Tests** for concurrent monitoring scenarios
5. **UI Tests** for notification display (if a GUI is added)
6. **API Tests** for Troostwijk API integration
7. **Value Estimation** tests (when the algorithm is implemented)
@@ -1,537 +0,0 @@
## Troostwijk Auction Monitor - Workflow Integration Guide

Complete guide for running the auction monitoring system with scheduled workflows, cron jobs, and event-driven triggers.

---

## Table of Contents

1. [Overview](#overview)
2. [Running Modes](#running-modes)
3. [Workflow Orchestration](#workflow-orchestration)
4. [Windows Scheduling](#windows-scheduling)
5. [Event-Driven Triggers](#event-driven-triggers)
6. [Configuration](#configuration)
7. [Monitoring & Debugging](#monitoring--debugging)

---

## Overview

The Troostwijk Auction Monitor supports multiple execution modes:

- **Workflow Mode** (Recommended): Continuous operation with built-in scheduling
- **Once Mode**: Single execution for external schedulers (Windows Task Scheduler, cron)
- **Legacy Mode**: Original monitoring approach
- **Status Mode**: Quick status check

---

## Running Modes

### 1. Workflow Mode (Default - Recommended)

**Runs all workflows continuously with built-in scheduling.**

```bash
# Windows
java -jar target\troostwijk-scraper-1.0-SNAPSHOT-jar-with-dependencies.jar workflow

# Or simply (workflow is the default)
java -jar target\troostwijk-scraper-1.0-SNAPSHOT-jar-with-dependencies.jar

# Using the batch script
run-workflow.bat
```

**What it does:**
- ✅ Imports scraper data every 30 minutes
- ✅ Processes images every 1 hour
- ✅ Monitors bids every 15 minutes
- ✅ Checks closing times every 5 minutes

**Best for:**
- Production deployment
- Long-running services
- Development/testing

---

### 2. Once Mode (For External Schedulers)

**Runs the complete workflow once and exits.**

```bash
# Windows
java -jar target\troostwijk-scraper-1.0-SNAPSHOT-jar-with-dependencies.jar once

# Using the batch script
run-once.bat
```

**What it does:**
1. Imports scraper data
2. Processes pending images
3. Monitors bids
4. Checks closing times
5. Exits

**Best for:**
- Windows Task Scheduler
- Cron jobs (Linux/Mac)
- Manual execution
- Testing

---

### 3. Legacy Mode

**Original monitoring approach (backward compatibility).**

```bash
java -jar target\troostwijk-scraper-1.0-SNAPSHOT-jar-with-dependencies.jar legacy
```

**Best for:**
- Maintaining existing deployments
- Troubleshooting

---

### 4. Status Mode

**Shows the current status and exits.**

```bash
java -jar target\troostwijk-scraper-1.0-SNAPSHOT-jar-with-dependencies.jar status

# Using the batch script
check-status.bat
```

**Output:**
```
📊 Workflow Status:
   Running: No
   Auctions: 25
   Lots: 150
   Images: 300
   Closing soon (< 30 min): 5
```

---

## Workflow Orchestration

The `WorkflowOrchestrator` coordinates 4 scheduled workflows:

### Workflow 1: Scraper Data Import
**Frequency:** Every 30 minutes
**Purpose:** Import new auctions and lots from the external scraper

**Process:**
1. Import auctions from the scraper database
2. Import lots from the scraper database
3. Import image URLs
4. Send a notification if significant data was imported

**Code Location:** `WorkflowOrchestrator.java:110`

---

### Workflow 2: Image Processing
**Frequency:** Every 1 hour
**Purpose:** Download images and run object detection

**Process:**
1. Get unprocessed images from the database
2. Download each image
3. Run YOLO object detection
4. Save labels to the database
5. Send a notification for interesting detections (3+ objects)

**Code Location:** `WorkflowOrchestrator.java:150`

---

### Workflow 3: Bid Monitoring
**Frequency:** Every 15 minutes
**Purpose:** Check for bid changes and send notifications

**Process:**
1. Get all active lots
2. Check for bid changes (via external scraper updates)
3. Send notifications for bid increases

**Code Location:** `WorkflowOrchestrator.java:210`

**Note:** The external scraper updates bids; this workflow monitors and notifies.

---

### Workflow 4: Closing Alerts
**Frequency:** Every 5 minutes
**Purpose:** Send alerts for lots closing soon

**Process:**
1. Get all active lots
2. Check closing times
3. Send a high-priority notification for lots closing in < 5 min
4. Mark as notified to prevent duplicates

**Code Location:** `WorkflowOrchestrator.java:240`

---

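The four schedules can also be expressed as data. The sketch below is a hypothetical Python helper (the orchestrator itself uses Java's `ScheduledExecutorService`); it only illustrates which workflows would fire at a given minute since startup.

```python
# Workflow intervals in minutes, mirroring the orchestrator's schedule
SCHEDULE = {
    "scraper_import": 30,
    "image_processing": 60,
    "bid_monitoring": 15,
    "closing_alerts": 5,
}

def due_workflows(minute):
    """Return the workflows that fire at a given minute since startup."""
    return sorted(name for name, every in SCHEDULE.items() if minute % every == 0)
```

At minute 30, for example, the import, bid-monitoring, and closing-alert workflows all coincide, which is worth knowing when sizing the thread pool.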
## Windows Scheduling

### Option A: Use Built-in Workflow Mode (Recommended)

**Run as a Windows Service or startup application:**

1. Create a shortcut to `run-workflow.bat`
2. Place it in: `C:\Users\[YourUser]\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup`
3. The monitor will start automatically on login

---

### Option B: Windows Task Scheduler (Once Mode)

**Automated setup:**

```powershell
# Run PowerShell as Administrator
.\setup-windows-task.ps1
```

This creates two tasks:
- `TroostwijkMonitor-Workflow`: Runs every 30 minutes
- `TroostwijkMonitor-StatusCheck`: Runs every 6 hours

**Manual setup:**

1. Open Task Scheduler
2. Create Basic Task
3. Configure:
   - **Name:** `TroostwijkMonitor`
   - **Trigger:** Every 30 minutes
   - **Action:** Start a program
   - **Program:** `java`
   - **Arguments:** `-jar "C:\path\to\troostwijk-scraper.jar" once`
   - **Start in:** `C:\path\to\project`

---

### Option C: Multiple Scheduled Tasks (Fine-grained Control)

Create separate tasks for each workflow:

| Task | Frequency | Command |
|------|-----------|---------|
| Import Data | Every 30 min | `run-once.bat` |
| Process Images | Every 1 hour | `run-once.bat` |
| Check Bids | Every 15 min | `run-once.bat` |
| Closing Alerts | Every 5 min | `run-once.bat` |

---

## Event-Driven Triggers

The orchestrator supports event-driven execution:

### 1. New Auction Discovered

```java
orchestrator.onNewAuctionDiscovered(auctionInfo);
```

**Triggered when:**
- External scraper finds a new auction

**Actions:**
- Insert into database
- Send notification

---

### 2. Bid Change Detected

```java
orchestrator.onBidChange(lot, previousBid, newBid);
```

**Triggered when:**
- Bid increases on a monitored lot

**Actions:**
- Update database
- Send notification: "Nieuw bod op kavel X: €Y (was €Z)" (Dutch: "New bid on lot X: €Y (was €Z)")

---

### 3. Objects Detected

```java
orchestrator.onObjectsDetected(lotId, labels);
```

**Triggered when:**
- YOLO detects 2+ objects in an image

**Actions:**
- Send notification: "Lot X contains: car, truck, machinery"

---

## Configuration

### Environment Variables

```bash
# Database location
set DATABASE_FILE=C:\mnt\okcomputer\output\cache.db

# Notification configuration
set NOTIFICATION_CONFIG=desktop

# Or for email notifications
set NOTIFICATION_CONFIG=smtp:your@gmail.com:app_password:recipient@example.com
```

### Configuration Files

**YOLO Model Paths** (`Main.java:35-37`):
```java
String yoloCfg = "models/yolov4.cfg";
String yoloWeights = "models/yolov4.weights";
String yoloClasses = "models/coco.names";
```

### Customizing Schedules

Edit `WorkflowOrchestrator.java` to change frequencies:

```java
// Change from 30 minutes to 15 minutes
scheduler.scheduleAtFixedRate(() -> {
    // ... scraper import logic
}, 0, 15, TimeUnit.MINUTES); // Changed from 30
```

---

## Monitoring & Debugging

### Check Status

```bash
# Quick status check
java -jar troostwijk-monitor.jar status

# Or
check-status.bat
```

### View Logs

Workflows print timestamped logs:

```
📥 [WORKFLOW 1] Importing scraper data...
   → Imported 5 auctions
   → Imported 25 lots
   → Found 50 unprocessed images
   ✓ Scraper import completed in 1250ms

🖼️ [WORKFLOW 2] Processing pending images...
   → Processing 50 images
   ✓ Processed 50 images, detected objects in 12 (15.3s)
```

### Common Issues

#### 1. No data being imported

**Problem:** External scraper not running

**Solution:**
```bash
# Check if the scraper is running and populating the database
sqlite3 C:\mnt\okcomputer\output\cache.db "SELECT COUNT(*) FROM auctions;"
```

#### 2. Images not downloading

**Problem:** No internet connection or invalid URLs

**Solution:**
- Check network connectivity
- Verify image URLs in the database
- Check firewall settings

#### 3. Notifications not showing

**Problem:** System tray not available

**Solution:**
- Use email notifications instead
- Check notification permissions in Windows

#### 4. Workflows not running

**Problem:** Application crashed or was stopped

**Solution:**
- Check Task Scheduler logs
- Review application logs
- Restart in workflow mode

---

## Integration Examples

### Example 1: Complete Automated Workflow

**Setup:**
1. External scraper runs continuously, populating the database
2. This monitor runs in workflow mode
3. Notifications sent to desktop + email

**Result:**
- New auctions → Notification within 30 min
- New images → Processed within 1 hour
- Bid changes → Notification within 15 min
- Closing alerts → Notification within 5 min

---

### Example 2: On-Demand Processing

**Setup:**
1. External scraper runs once per day (cron/Task Scheduler)
2. This monitor runs in once mode after the scraper completes

**Script:**
```bash
# run-daily.bat
@echo off
REM Run scraper first
python scraper.py

REM Wait for completion
timeout /t 30

REM Run monitor once
java -jar troostwijk-monitor.jar once
```

---

### Example 3: Event-Driven with External Integration

**Setup:**
1. External system calls orchestrator events
2. Workflows run on demand

**Java code:**
```java
WorkflowOrchestrator orchestrator = new WorkflowOrchestrator(...);

// When the external scraper finds a new auction
AuctionInfo newAuction = parseScraperData();
orchestrator.onNewAuctionDiscovered(newAuction);

// When a bid change is detected
orchestrator.onBidChange(lot, 100.0, 150.0);
```

---

## Advanced Topics

### Custom Workflows

Add custom workflows to `WorkflowOrchestrator`:

```java
// Workflow 5: Value Estimation (every 2 hours)
scheduler.scheduleAtFixedRate(() -> {
    try {
        Console.println("💰 [WORKFLOW 5] Estimating values...");

        var lotsWithImages = db.getLotsWithImages();
        for (var lot : lotsWithImages) {
            var images = db.getImagesForLot(lot.lotId());
            double estimatedValue = estimateValue(images);

            // Update database
            db.updateLotEstimatedValue(lot.lotId(), estimatedValue);

            // Notify if high value
            if (estimatedValue > 5000) {
                notifier.sendNotification(
                    String.format("High value lot detected: %d (€%.2f)",
                        lot.lotId(), estimatedValue),
                    "Value Alert", 1
                );
            }
        }
    } catch (Exception e) {
        Console.println("  ❌ Value estimation failed: " + e.getMessage());
    }
}, 10, 120, TimeUnit.MINUTES);
```

### Webhook Integration

Trigger workflows via HTTP webhooks:

```java
// In a separate web server (e.g., using Javalin)
Javalin app = Javalin.create().start(7070);

app.post("/webhook/new-auction", ctx -> {
    AuctionInfo auction = ctx.bodyAsClass(AuctionInfo.class);
    orchestrator.onNewAuctionDiscovered(auction);
    ctx.result("OK");
});

app.post("/webhook/bid-change", ctx -> {
    BidChange change = ctx.bodyAsClass(BidChange.class);
    orchestrator.onBidChange(change.lot, change.oldBid, change.newBid);
    ctx.result("OK");
});
```

---

## Summary

| Mode | Use Case | Scheduling | Best For |
|------|----------|------------|----------|
| **workflow** | Continuous operation | Built-in (Java) | Production, development |
| **once** | Single execution | External (Task Scheduler) | Cron jobs, on-demand |
| **legacy** | Backward compatibility | Built-in (Java) | Existing deployments |
| **status** | Quick check | Manual/External | Health checks, debugging |

**Recommended Setup for Windows:**
1. Install as a Windows Service, OR
2. Add to the Startup folder (workflow mode), OR
3. Use Task Scheduler (once mode, every 30 min)

**All workflows automatically:**
- Import data from the scraper
- Process images
- Detect objects
- Monitor bids
- Send notifications
- Handle errors gracefully

---

## Support

For issues or questions:
- Check `TEST_SUITE_SUMMARY.md` for test coverage
- Review code in `WorkflowOrchestrator.java`
- Run `java -jar troostwijk-monitor.jar status` for diagnostics
128
fix-schema.sql
@@ -1,128 +0,0 @@
|
||||
-- Schema Fix Script for Server Database
-- This script migrates auction_id and lot_id from BIGINT to TEXT to match the scraper format.
-- The scraper uses TEXT IDs like "A7-40063-2", but DatabaseService.java was creating BIGINT columns.

-- Step 1: Backup existing data
CREATE TABLE IF NOT EXISTS auctions_backup AS SELECT * FROM auctions;
CREATE TABLE IF NOT EXISTS lots_backup AS SELECT * FROM lots;
CREATE TABLE IF NOT EXISTS images_backup AS SELECT * FROM images;

-- Step 2: Drop existing tables (CASCADE would drop foreign keys)
DROP TABLE IF EXISTS images;
DROP TABLE IF EXISTS lots;
DROP TABLE IF EXISTS auctions;

-- Step 3: Recreate auctions table with TEXT primary key (matching scraper format)
CREATE TABLE auctions (
    auction_id TEXT PRIMARY KEY,
    title TEXT NOT NULL,
    location TEXT,
    city TEXT,
    country TEXT,
    url TEXT NOT NULL UNIQUE,
    type TEXT,
    lot_count INTEGER DEFAULT 0,
    closing_time TEXT,
    discovered_at INTEGER
);

-- Step 4: Recreate lots table with TEXT primary key (matching scraper format)
CREATE TABLE lots (
    lot_id TEXT PRIMARY KEY,
    sale_id TEXT,
    auction_id TEXT,
    title TEXT,
    description TEXT,
    manufacturer TEXT,
    type TEXT,
    year INTEGER,
    category TEXT,
    current_bid REAL,
    currency TEXT DEFAULT 'EUR',
    url TEXT UNIQUE,
    closing_time TEXT,
    closing_notified INTEGER DEFAULT 0,
    starting_bid REAL,
    minimum_bid REAL,
    status TEXT,
    brand TEXT,
    model TEXT,
    attributes_json TEXT,
    first_bid_time TEXT,
    last_bid_time TEXT,
    bid_velocity REAL,
    bid_increment REAL,
    year_manufactured INTEGER,
    condition_score REAL,
    condition_description TEXT,
    serial_number TEXT,
    damage_description TEXT,
    followers_count INTEGER DEFAULT 0,
    estimated_min_price REAL,
    estimated_max_price REAL,
    lot_condition TEXT,
    appearance TEXT,
    estimated_min REAL,
    estimated_max REAL,
    next_bid_step_cents INTEGER,
    condition TEXT,
    category_path TEXT,
    city_location TEXT,
    country_code TEXT,
    bidding_status TEXT,
    packaging TEXT,
    quantity INTEGER,
    vat REAL,
    buyer_premium_percentage REAL,
    remarks TEXT,
    reserve_price REAL,
    reserve_met INTEGER,
    view_count INTEGER,
    bid_count INTEGER,
    viewing_time TEXT,
    pickup_date TEXT,
    location TEXT,
    scraped_at TEXT,
    FOREIGN KEY (auction_id) REFERENCES auctions(auction_id),
    FOREIGN KEY (sale_id) REFERENCES auctions(auction_id)
);

-- Step 5: Recreate images table
|
||||
CREATE TABLE images (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
lot_id TEXT,
|
||||
url TEXT,
|
||||
local_path TEXT,
|
||||
labels TEXT,
|
||||
processed_at INTEGER,
|
||||
downloaded INTEGER DEFAULT 0,
|
||||
FOREIGN KEY (lot_id) REFERENCES lots(lot_id)
|
||||
);
|
||||
|
||||
-- Step 6: Create bid_history table if it doesn't exist
|
||||
CREATE TABLE IF NOT EXISTS bid_history (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
lot_id TEXT,
|
||||
bid_amount REAL,
|
||||
bid_time TEXT,
|
||||
is_autobid INTEGER DEFAULT 0,
|
||||
bidder_id TEXT,
|
||||
bidder_number INTEGER,
|
||||
FOREIGN KEY (lot_id) REFERENCES lots(lot_id)
|
||||
);
|
||||
|
||||
-- Step 7: Restore data from backup (converting BIGINT to TEXT if needed)
|
||||
INSERT OR IGNORE INTO auctions SELECT * FROM auctions_backup;
|
||||
INSERT OR IGNORE INTO lots SELECT * FROM lots_backup;
|
||||
INSERT OR IGNORE INTO images SELECT * FROM images_backup;
|
||||
|
||||
-- Step 8: Create indexes for performance
|
||||
CREATE INDEX IF NOT EXISTS idx_auctions_country ON auctions(country);
|
||||
CREATE INDEX IF NOT EXISTS idx_lots_sale_id ON lots(sale_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_lots_auction_id ON lots(auction_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_images_lot_id ON images(lot_id);
|
||||
|
||||
-- Step 9: Clean up backup tables (optional - comment out if you want to keep backups)
|
||||
-- DROP TABLE auctions_backup;
|
||||
-- DROP TABLE lots_backup;
|
||||
-- DROP TABLE images_backup;
|
||||
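The migrated schema can be sanity-checked outside the server. A minimal sketch using Python's stdlib `sqlite3` against a throwaway in-memory database (tables abbreviated to a few columns), showing that TEXT primary keys now accept the scraper's composite IDs:

```python
import sqlite3

# In-memory stand-in for the real cache database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auctions (auction_id TEXT PRIMARY KEY, "
             "title TEXT NOT NULL, url TEXT NOT NULL UNIQUE)")
conn.execute("CREATE TABLE lots (lot_id TEXT PRIMARY KEY, auction_id TEXT, "
             "FOREIGN KEY (auction_id) REFERENCES auctions(auction_id))")

# TEXT primary keys accept the scraper's composite IDs
conn.execute("INSERT INTO auctions VALUES ('A7-40063-2', 'Test auction', 'https://example.com/a')")
conn.execute("INSERT INTO lots (lot_id, auction_id) VALUES ('A1-34732-49', 'A7-40063-2')")
print(conn.execute("SELECT COUNT(*) FROM lots").fetchone()[0])
```

With the old BIGINT columns, SQLite's type affinity would have stored these strings without error but broken joins against numeric IDs; the TEXT declaration keeps both sides consistent.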
189
k8s/README.md
@@ -1,189 +0,0 @@
# Kubernetes Deployment for Auction Monitor

## Quick Start

### 1. Build and Push Docker Image

```bash
# Build image
docker build -t your-registry/auction-monitor:latest .

# Push to registry
docker push your-registry/auction-monitor:latest
```

### 2. Update deployment.yaml

Edit `deployment.yaml` and replace:
- `image: auction-monitor:latest` with your image
- `auction-monitor.yourdomain.com` with your domain

### 3. Deploy to Kubernetes

```bash
# Apply all resources
kubectl apply -f k8s/deployment.yaml

# Or apply individually
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/secret.yaml
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml
kubectl apply -f k8s/ingress.yaml
```

### 4. Verify Deployment

```bash
# Check pods
kubectl get pods -n auction-monitor

# Check services
kubectl get svc -n auction-monitor

# Check ingress
kubectl get ingress -n auction-monitor

# View logs
kubectl logs -f deployment/auction-monitor -n auction-monitor
```

### 5. Access Application

```bash
# Port forward for local access
kubectl port-forward svc/auction-monitor 8081:8081 -n auction-monitor

# Access API
curl http://localhost:8081/api/monitor/status

# Access health check
curl http://localhost:8081/health/live
```

## Configuration

### ConfigMap

Edit workflow schedules in the ConfigMap:

```yaml
data:
  AUCTION_WORKFLOW_SCRAPER_IMPORT_CRON: "0 */30 * * * ?"   # Every 30 min
  AUCTION_WORKFLOW_IMAGE_PROCESSING_CRON: "0 0 * * * ?"    # Every hour
  AUCTION_WORKFLOW_BID_MONITORING_CRON: "0 */15 * * * ?"   # Every 15 min
  AUCTION_WORKFLOW_CLOSING_ALERTS_CRON: "0 */5 * * * ?"    # Every 5 min
```
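These are Quartz-style cron expressions as consumed by the Quarkus scheduler: six fields starting with seconds, where `?` means "no specific value". A small sketch (hypothetical helper, not part of the project) that labels the fields of the scraper-import schedule:

```python
# Quartz-style cron as used above: the first field is seconds,
# and "?" stands for "no specific value" (day-of-week here).
FIELDS = ["second", "minute", "hour", "day-of-month", "month", "day-of-week"]

def describe(expr: str) -> dict:
    """Map each field of a 6-field Quartz cron expression to its name."""
    parts = expr.split()
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(parts)}")
    return dict(zip(FIELDS, parts))

spec = describe("0 */30 * * * ?")  # the scraper-import schedule
print(spec["second"], spec["minute"])
```

So `"0 */30 * * * ?"` fires at second 0 of every 30th minute, not every 30 seconds as a 5-field Unix cron reading would suggest.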
### Secrets

Update notification configuration:

```bash
# Create secret
kubectl create secret generic auction-secrets \
  --from-literal=notification-config='smtp:user@gmail.com:password:recipient@example.com' \
  -n auction-monitor

# Or edit existing
kubectl edit secret auction-secrets -n auction-monitor
```
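The `smtp:<account>:<password>:<recipient>` layout is inferred from the example above. A minimal sketch of how such a value could be split (field names are illustrative, not the project's actual API):

```python
# Split a notification-config value of the form seen above:
# "smtp:<account>:<password>:<recipient>", or the bare "desktop" mode.
# Field names here are illustrative, not the project's actual API.
def parse_notification_config(value: str) -> dict:
    if value == "desktop":
        return {"mode": "desktop"}
    mode, account, password, recipient = value.split(":", 3)
    return {"mode": mode, "account": account,
            "password": password, "recipient": recipient}

cfg = parse_notification_config("smtp:user@gmail.com:password:recipient@example.com")
print(cfg["mode"], cfg["recipient"])
```

Note the `split(":", 3)` cap: it keeps a recipient address intact even if later fields were ever to contain a colon.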
## Scaling

### Manual Scaling

```bash
# Scale to 3 replicas
kubectl scale deployment auction-monitor --replicas=3 -n auction-monitor
```

### Auto Scaling

HPA is configured in `deployment.yaml`:

```yaml
spec:
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          averageUtilization: 80
```

View HPA status:

```bash
kubectl get hpa -n auction-monitor
```
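The utilization target feeds the standard HPA rule, desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization), clamped to the configured bounds. A small sketch of that rule (illustrative only, not project code):

```python
import math

# Sketch of the HPA utilization rule, clamped to the
# minReplicas/maxReplicas bounds configured above.
def desired_replicas(current: int, utilization: float, target: float = 80.0,
                     lo: int = 1, hi: int = 3) -> int:
    want = math.ceil(current * utilization / target)
    return max(lo, min(hi, want))

print(desired_replicas(1, 200.0))  # CPU at 200% of request
```

With one replica at 200% CPU utilization against an 80% target, the controller asks for ceil(2.5) = 3 replicas, which is already the configured maximum.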
## Monitoring

### Health Checks

```bash
# Liveness
kubectl exec -it deployment/auction-monitor -n auction-monitor -- \
  wget -qO- http://localhost:8081/health/live

# Readiness
kubectl exec -it deployment/auction-monitor -n auction-monitor -- \
  wget -qO- http://localhost:8081/health/ready
```

### Logs

```bash
# Follow logs
kubectl logs -f deployment/auction-monitor -n auction-monitor

# Logs from all pods
kubectl logs -f -l app=auction-monitor -n auction-monitor

# Previous pod logs
kubectl logs deployment/auction-monitor --previous -n auction-monitor
```

## Troubleshooting

### Pod not starting

```bash
# Describe pod
kubectl describe pod -l app=auction-monitor -n auction-monitor

# Check events
kubectl get events -n auction-monitor --sort-by='.lastTimestamp'
```

### Database issues

```bash
# Check PVC
kubectl get pvc -n auction-monitor

# Check volume mount
kubectl exec -it deployment/auction-monitor -n auction-monitor -- ls -la /data
```

### Network issues

```bash
# Test service
kubectl run -it --rm debug --image=busybox --restart=Never -n auction-monitor -- \
  wget -qO- http://auction-monitor:8081/health/live
```

## Cleanup

```bash
# Delete all resources
kubectl delete -f k8s/deployment.yaml

# Or delete namespace (removes everything)
kubectl delete namespace auction-monitor
```
@@ -1,197 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
  name: auction-monitor
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auction-data-pvc
  namespace: auction-monitor
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: auction-config
  namespace: auction-monitor
data:
  AUCTION_DATABASE_PATH: "/data/cache.db"
  AUCTION_IMAGES_PATH: "/data/images"
  AUCTION_NOTIFICATION_CONFIG: "desktop"
  QUARKUS_HTTP_PORT: "8081"
  QUARKUS_HTTP_HOST: "0.0.0.0"
  # Workflow schedules (cron expressions)
  AUCTION_WORKFLOW_SCRAPER_IMPORT_CRON: "0 */30 * * * ?"
  AUCTION_WORKFLOW_IMAGE_PROCESSING_CRON: "0 0 * * * ?"
  AUCTION_WORKFLOW_BID_MONITORING_CRON: "0 */15 * * * ?"
  AUCTION_WORKFLOW_CLOSING_ALERTS_CRON: "0 */5 * * * ?"
---
apiVersion: v1
kind: Secret
metadata:
  name: auction-secrets
  namespace: auction-monitor
type: Opaque
stringData:
  # Replace with your actual SMTP configuration
  notification-config: "desktop"
  # For email: smtp:your@gmail.com:app_password:recipient@example.com
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auction-monitor
  namespace: auction-monitor
  labels:
    app: auction-monitor
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auction-monitor
  template:
    metadata:
      labels:
        app: auction-monitor
        version: v1
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8081"
        prometheus.io/path: "/q/metrics"
    spec:
      containers:
        - name: auction-monitor
          image: auction-monitor:latest
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8081
              protocol: TCP
          env:
            - name: JAVA_OPTS
              value: "-Xmx256m -XX:+UseParallelGC"
          envFrom:
            - configMapRef:
                name: auction-config
            - secretRef:
                name: auction-secrets
          volumeMounts:
            - name: data
              mountPath: /data
            - name: models
              mountPath: /app/models
              readOnly: true
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health/live
              port: 8081
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 3
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8081
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 3
            failureThreshold: 3
          startupProbe:
            httpGet:
              path: /health/started
              port: 8081
            initialDelaySeconds: 0
            periodSeconds: 10
            timeoutSeconds: 3
            failureThreshold: 30
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: auction-data-pvc
        - name: models
          emptyDir: {} # Or mount from ConfigMap/PVC if you have YOLO models
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: auction-monitor
  namespace: auction-monitor
  labels:
    app: auction-monitor
spec:
  type: ClusterIP
  ports:
    - port: 8081
      targetPort: 8081
      protocol: TCP
      name: http
  selector:
    app: auction-monitor
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: auction-monitor-ingress
  namespace: auction-monitor
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - auction-monitor.yourdomain.com
      secretName: auction-monitor-tls
  rules:
    - host: auction-monitor.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: auction-monitor
                port:
                  number: 8081
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: auction-monitor-hpa
  namespace: auction-monitor
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: auction-monitor
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
@@ -1,80 +0,0 @@
person
bicycle
car
motorbike
aeroplane
bus
train
truck
boat
traffic light
fire hydrant
stop sign
parking meter
bench
bird
cat
dog
horse
sheep
cow
elephant
bear
zebra
giraffe
backpack
umbrella
handbag
tie
suitcase
frisbee
skis
snowboard
sports ball
kite
baseball bat
baseball glove
skateboard
surfboard
tennis racket
bottle
wine glass
cup
fork
knife
spoon
bowl
banana
apple
sandwich
orange
broccoli
carrot
hot dog
pizza
donut
cake
chair
sofa
pottedplant
bed
diningtable
toilet
tvmonitor
laptop
mouse
remote
keyboard
cell phone
microwave
oven
toaster
sink
refrigerator
book
clock
vase
scissors
teddy bear
hair drier
toothbrush
1158
models/yolov4.cfg
File diff suppressed because it is too large
Binary file not shown.
25
pom.xml
@@ -34,6 +34,7 @@
        <maven-compiler-plugin-version>3.14.0</maven-compiler-plugin-version>
        <versions-maven-plugin.version>2.19.0</versions-maven-plugin.version>
        <jandex-maven-plugin-version>3.5.0</jandex-maven-plugin-version>
        <jdbi.version>3.47.0</jdbi.version>
        <maven.compiler.args>
            --enable-native-access=ALL-UNNAMED
            --add-opens java.base/sun.misc=ALL-UNNAMED
@@ -161,11 +162,11 @@
            <artifactId>slf4j-api</artifactId>
            <version>2.0.9</version>
        </dependency>
        <!-- <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-simple</artifactId>
            <version>2.0.9</version>
        </dependency> -->
        <!-- JUnit 5 for testing -->
        <dependency>
            <groupId>org.junit.jupiter</groupId>
@@ -175,6 +176,12 @@
        </dependency>

        <!-- Mockito for mocking in tests -->
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-junit5-mockito</artifactId>
            <version>3.30.2</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.mockito</groupId>
            <artifactId>mockito-core</artifactId>
@@ -190,6 +197,18 @@
            <scope>test</scope>
        </dependency>

        <!-- JDBI3 - Lightweight ORM for SQL -->
        <dependency>
            <groupId>org.jdbi</groupId>
            <artifactId>jdbi3-core</artifactId>
            <version>${jdbi.version}</version>
        </dependency>
        <dependency>
            <groupId>org.jdbi</groupId>
            <artifactId>jdbi3-sqlobject</artifactId>
            <version>${jdbi.version}</version>
        </dependency>

        <!-- AssertJ for fluent assertions (optional but recommended) -->
        <dependency>
            <groupId>org.assertj</groupId>
@@ -421,6 +440,8 @@
                    <!-- Enable ByteBuddy experimental mode for Java 25 support -->
                    <!-- Mockito requires this for Java 24+ -->
                    <argLine>
                        --enable-native-access=ALL-UNNAMED
                        --add-opens java.base/sun.misc=ALL-UNNAMED
                        -Dnet.bytebuddy.experimental=true
                        --add-opens java.base/java.lang=ALL-UNNAMED
                        --add-opens java.base/java.util=ALL-UNNAMED
15
scripts/smb.ps1
Normal file
@@ -0,0 +1,15 @@
# PowerShell: map the remote share, copy the folder, then clean up
$remote = '\\192.168.1.159\shared-auction-data'
$local  = 'C:\mnt\okcomputer\output\models'

# (1) create/verify the PSDrive (prompts for password if needed)
if (-not (Get-PSDrive -Name Z -ErrorAction SilentlyContinue)) {
    $cred = Get-Credential -UserName 'tour' -Message 'SMB password for tour@192.168.1.159'
    New-PSDrive -Name Z -PSProvider FileSystem -Root $remote -Credential $cred -Persist | Out-Null
}

# (2) copy the local folder into the share
Copy-Item -Path $local -Destination 'Z:\' -Recurse -Force

# (3) optional cleanup
Remove-PSDrive -Name Z -Force
@@ -7,13 +7,13 @@ import java.time.LocalDateTime;
 * Data typically populated by the external scraper process
 */
public record AuctionInfo(
        long auctionId,                     // Unique auction ID (from URL)
        String title,                       // Auction title
        String location,                    // Location (e.g., "Amsterdam, NL")
        String city,                        // City name
        String country,                     // Country code (e.g., "NL")
        String url,                         // Full auction URL
        String typePrefix,                  // Auction type (A1 or A7)
        int lotCount,                       // Number of lots/kavels
        LocalDateTime firstLotClosingTime   // Closing time if available
) { }
@@ -11,93 +11,72 @@ import org.eclipse.microprofile.health.Startup;
import java.nio.file.Files;
import java.nio.file.Paths;

/**
 * Health checks for Auction Monitor.
 * Provides liveness and readiness probes for Kubernetes/Docker deployment.
 */
@ApplicationScoped
public class AuctionMonitorHealthCheck {

    @Inject
    DatabaseService db;

    @Liveness
    public static class LivenessCheck implements HealthCheck {

        @Override public HealthCheckResponse call() {
            return HealthCheckResponse.up("Auction Monitor is alive");
        }
    }

    @Readiness
    @ApplicationScoped
    public static class ReadinessCheck implements HealthCheck {

        @Inject DatabaseService db;

        @Override
        public HealthCheckResponse call() {
            try {
                // Check database connection
                var auctions = db.getAllAuctions();

                // Check database path exists
                var dbPath = Paths.get("C:\\mnt\\okcomputer\\output\\cache.db");
                if (!Files.exists(dbPath.getParent())) {
                    return HealthCheckResponse.down("Database directory does not exist");
                }

                return HealthCheckResponse.named("database")
                        .up()
                        .withData("auctions", auctions.size())
                        .build();

            } catch (Exception e) {
                return HealthCheckResponse.named("database")
                        .down()
                        .withData("error", e.getMessage())
                        .build();
            }
        }
    }

    @Startup
    @ApplicationScoped
    public static class StartupCheck implements HealthCheck {

        @Inject DatabaseService db;

        @Override
        public HealthCheckResponse call() {
            try {
                // Verify database schema
                db.ensureSchema();

                return HealthCheckResponse.named("startup")
                        .up()
                        .withData("message", "Database schema initialized")
                        .build();

            } catch (Exception e) {
                return HealthCheckResponse.named("startup")
                        .down()
                        .withData("error", e.getMessage())
                        .build();
            }
        }
    }
}
@@ -5,6 +5,7 @@ import jakarta.annotation.PostConstruct;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.inject.Produces;
import jakarta.inject.Singleton;
import nu.pattern.OpenCV;
import org.eclipse.microprofile.config.inject.ConfigProperty;
import org.jboss.logging.Logger;
import org.opencv.core.Core;
@@ -19,58 +20,42 @@ import java.sql.SQLException;
@Startup
@ApplicationScoped
public class AuctionMonitorProducer {

    private static final Logger LOG = Logger.getLogger(AuctionMonitorProducer.class);

    @PostConstruct void init() {
        // Load OpenCV native library at startup
        try {
            OpenCV.loadLocally();
            LOG.info("✓ OpenCV loaded successfully");
        } catch (Exception e) {
            LOG.warn("⚠️ OpenCV not available - image detection will be disabled: " + e.getMessage());
        }
    }

    @Produces @Singleton public DatabaseService produceDatabaseService(
            @ConfigProperty(name = "auction.database.path") String dbPath) throws SQLException {
        var db = new DatabaseService(dbPath);
        db.ensureSchema();
        return db;
    }

    @Produces @Singleton public NotificationService produceNotificationService(
            @ConfigProperty(name = "auction.notification.config") String config) {
        return new NotificationService(config);
    }

    @Produces @Singleton public ObjectDetectionService produceObjectDetectionService(
            @ConfigProperty(name = "auction.yolo.config") String cfgPath,
            @ConfigProperty(name = "auction.yolo.weights") String weightsPath,
            @ConfigProperty(name = "auction.yolo.classes") String classesPath) throws IOException {
        return new ObjectDetectionService(cfgPath, weightsPath, classesPath);
    }

    @Produces @Singleton public ImageProcessingService produceImageProcessingService(
            DatabaseService db,
            ObjectDetectionService detector) {
        return new ImageProcessingService(db, detector);
    }
}
@@ -30,21 +30,12 @@ public class AuctionMonitorResource {
|
||||
|
||||
private static final Logger LOG = Logger.getLogger(AuctionMonitorResource.class);
|
||||
|
||||
@Inject
|
||||
DatabaseService db;
|
||||
@Inject DatabaseService db;
|
||||
@Inject QuarkusWorkflowScheduler scheduler;
|
||||
@Inject NotificationService notifier;
|
||||
@Inject RateLimitedHttpClient httpClient;
|
||||
@Inject LotEnrichmentService enrichmentService;
|
||||
|
||||
@Inject
|
||||
QuarkusWorkflowScheduler scheduler;
|
||||
|
||||
@Inject
|
||||
NotificationService notifier;
|
||||
|
||||
@Inject
|
||||
RateLimitedHttpClient httpClient;
|
||||
|
||||
@Inject
|
||||
LotEnrichmentService enrichmentService;
|
||||
|
||||
/**
|
||||
* GET /api/monitor/status
|
||||
* Returns current monitoring status
|
||||
@@ -99,33 +90,33 @@ public class AuctionMonitorResource {
|
||||
stats.put("totalImages", db.getImageCount());
|
||||
|
||||
// Lot statistics
|
||||
var activeLots = 0;
|
||||
var lotsWithBids = 0;
|
||||
double totalBids = 0;
|
||||
var hotLots = 0;
|
||||
var sleeperLots = 0;
|
||||
var bargainLots = 0;
|
||||
var lotsClosing1h = 0;
|
||||
var lotsClosing6h = 0;
|
||||
var activeLots = 0;
|
||||
var lotsWithBids = 0;
|
||||
double totalBids = 0;
|
||||
var hotLots = 0;
|
||||
var sleeperLots = 0;
|
||||
var bargainLots = 0;
|
||||
var lotsClosing1h = 0;
|
||||
var lotsClosing6h = 0;
|
||||
double totalBidVelocity = 0;
|
||||
int velocityCount = 0;
|
||||
|
||||
int velocityCount = 0;
|
||||
|
||||
for (var lot : lots) {
|
||||
long minutesLeft = lot.closingTime() != null ? lot.minutesUntilClose() : Long.MAX_VALUE;
|
||||
|
||||
|
||||
if (lot.closingTime() != null && minutesLeft > 0) {
|
||||
activeLots++;
|
||||
|
||||
|
||||
// Time-based counts
|
||||
if (minutesLeft < 60) lotsClosing1h++;
|
||||
if (minutesLeft < 360) lotsClosing6h++;
|
||||
}
|
||||
|
||||
|
||||
if (lot.currentBid() > 0) {
|
||||
lotsWithBids++;
|
||||
totalBids += lot.currentBid();
|
||||
}
|
||||
|
||||
|
||||
// Intelligence metrics (require GraphQL enrichment)
|
||||
if (lot.followersCount() != null && lot.followersCount() > 20) {
|
||||
hotLots++;
|
||||
@@ -136,22 +127,22 @@ public class AuctionMonitorResource {
|
||||
if (lot.isBelowEstimate()) {
|
||||
bargainLots++;
|
||||
}
|
||||
|
||||
|
||||
// Bid velocity
|
||||
if (lot.bidVelocity() != null && lot.bidVelocity() > 0) {
|
||||
totalBidVelocity += lot.bidVelocity();
|
||||
velocityCount++;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
// Calculate bids per hour (average velocity across all lots with velocity data)
|
||||
double bidsPerHour = velocityCount > 0 ? totalBidVelocity / velocityCount : 0;
|
||||
|
||||
|
||||
stats.put("activeLots", activeLots);
|
||||
stats.put("lotsWithBids", lotsWithBids);
|
||||
stats.put("totalBidValue", String.format("€%.2f", totalBids));
|
||||
stats.put("averageBid", lotsWithBids > 0 ? String.format("€%.2f", totalBids / lotsWithBids) : "€0.00");
|
||||
|
||||
|
||||
// Bidding intelligence
|
||||
stats.put("bidsPerHour", String.format("%.1f", bidsPerHour));
|
||||
stats.put("hotLots", hotLots);
|
||||
@@ -159,11 +150,11 @@ public class AuctionMonitorResource {
|
||||
stats.put("bargainLots", bargainLots);
|
||||
stats.put("lotsClosing1h", lotsClosing1h);
|
||||
stats.put("lotsClosing6h", lotsClosing6h);
|
||||
|
||||
|
||||
// Conversion rate
|
||||
double conversionRate = activeLots > 0 ? (lotsWithBids * 100.0 / activeLots) : 0;
|
||||
stats.put("conversionRate", String.format("%.1f%%", conversionRate));
|
||||
|
||||
|
||||
return Response.ok(stats).build();
|
||||
|
||||
} catch (Exception e) {
|
||||
@@ -184,12 +175,12 @@ public class AuctionMonitorResource {
|
||||
try {
|
||||
var lots = db.getAllLots();
|
||||
var closingSoon = lots.stream()
|
||||
.filter(lot -> lot.closingTime() != null)
|
||||
.filter(lot -> lot.minutesUntilClose() > 0 && lot.minutesUntilClose() <= hours * 60)
|
||||
.sorted((a, b) -> Long.compare(a.minutesUntilClose(), b.minutesUntilClose()))
|
||||
.limit(100)
|
||||
.toList();
|
||||
|
||||
.filter(lot -> lot.closingTime() != null)
|
||||
.filter(lot -> lot.minutesUntilClose() > 0 && lot.minutesUntilClose() <= hours * 60)
|
||||
.sorted((a, b) -> Long.compare(a.minutesUntilClose(), b.minutesUntilClose()))
|
||||
.limit(100)
|
||||
.toList();
|
||||
|
||||
return Response.ok(closingSoon).build();
|
||||
} catch (Exception e) {
|
||||
LOG.error("Failed to get closing soon lots", e);
|
||||
@@ -198,7 +189,7 @@ public class AuctionMonitorResource {
|
||||
.build();
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* GET /api/monitor/lots/{lotId}/bid-history
|
||||
* Returns bid history for a specific lot
|
||||
@@ -216,7 +207,7 @@ public class AuctionMonitorResource {
|
||||
.build();
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* POST /api/monitor/trigger/scraper-import
|
||||
* Manually trigger scraper import workflow
|
||||
@@ -288,7 +279,7 @@ public class AuctionMonitorResource {
|
||||
.build();
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* POST /api/monitor/trigger/graphql-enrichment
|
||||
* Manually trigger GraphQL enrichment for all lots or lots closing soon
|
||||
@@ -301,15 +292,15 @@ public class AuctionMonitorResource {
            if (hours > 0) {
                enriched = enrichmentService.enrichClosingSoonLots(hours);
                return Response.ok(Map.of(
                        "message", "GraphQL enrichment triggered for lots closing within " + hours + " hours",
                        "enrichedCount", enriched
                )).build();
            } else {
                enriched = enrichmentService.enrichAllActiveLots();
                return Response.ok(Map.of(
                        "message", "GraphQL enrichment triggered for all lots",
                        "enrichedCount", enriched
                )).build();
            }
        } catch (Exception e) {
            LOG.error("Failed to trigger GraphQL enrichment", e);
@@ -318,7 +309,7 @@ public class AuctionMonitorResource {
                    .build();
        }
    }

    /**
     * GET /api/monitor/auctions
     * Returns list of all auctions
@@ -375,7 +366,7 @@ public class AuctionMonitorResource {
                    })
                    .sorted((a, b) -> Long.compare(a.minutesUntilClose(), b.minutesUntilClose()))
                    .toList();

            return Response.ok(closingSoon).build();
        } catch (Exception e) {
            LOG.error("Failed to get closing lots", e);
@@ -530,7 +521,7 @@ public class AuctionMonitorResource {
    public Response getCategoryDistribution() {
        try {
            var lots = db.getAllLots();

            // Category distribution
            Map<String, Long> distribution = lots.stream()
                    .filter(l -> l.category() != null && !l.category().isEmpty())
@@ -538,34 +529,34 @@ public class AuctionMonitorResource {
                            l -> l.category().length() > 20 ? l.category().substring(0, 20) + "..." : l.category(),
                            Collectors.counting()
                    ));

            // Find top category by count
            var topCategory = distribution.entrySet().stream()
                    .max(Map.Entry.comparingByValue())
                    .map(Map.Entry::getKey)
                    .orElse("N/A");

            // Calculate average bids per category
            Map<String, Double> avgBidsByCategory = lots.stream()
                    .filter(l -> l.category() != null && !l.category().isEmpty() && l.currentBid() > 0)
                    .collect(Collectors.groupingBy(
                            l -> l.category().length() > 20 ? l.category().substring(0, 20) + "..." : l.category(),
                            Collectors.averagingDouble(Lot::currentBid)
                    ));

            double overallAvgBid = lots.stream()
                    .filter(l -> l.currentBid() > 0)
                    .mapToDouble(Lot::currentBid)
                    .average()
                    .orElse(0.0);

            Map<String, Object> response = new HashMap<>();
            response.put("distribution", distribution);
            response.put("topCategory", topCategory);
            response.put("categoryCount", distribution.size());
            response.put("averageBidOverall", String.format("€%.2f", overallAvgBid));
            response.put("avgBidsByCategory", avgBidsByCategory);

            return Response.ok(response).build();
        } catch (Exception e) {
            LOG.error("Failed to get category distribution", e);
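The category roll-up above groups lots by a truncated category label and counts per bucket. A self-contained sketch of that `groupingBy`/`counting` step, with a hypothetical minimal `Lot` record in place of the project type:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CategoryDistributionSketch {
    // Hypothetical stand-in for the project's Lot type.
    record Lot(String category) {}

    static Map<String, Long> distribution(List<Lot> lots) {
        return lots.stream()
                .filter(l -> l.category() != null && !l.category().isEmpty())
                .collect(Collectors.groupingBy(
                        // truncate long category names so chart labels stay readable
                        l -> l.category().length() > 20 ? l.category().substring(0, 20) + "..." : l.category(),
                        Collectors.counting()));
    }

    public static void main(String[] args) {
        var lots = List.of(new Lot("Machinery"), new Lot("Machinery"),
                new Lot("Industrial woodworking equipment"), new Lot(""));
        // empty category filtered out; the long name is truncated to 20 chars + "..."
        System.out.println(distribution(lots));
    }
}
```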
@@ -663,7 +654,7 @@ public class AuctionMonitorResource {
                    .max(Map.Entry.comparingByValue())
                    .map(Map.Entry::getKey)
                    .orElse("N/A");

            if (!"N/A".equals(topCountry)) {
                insights.add(Map.of(
                        "icon", "fa-globe",
@@ -671,7 +662,7 @@ public class AuctionMonitorResource {
                        "description", "Top performing country"
                ));
            }

            // Add sleeper lots insight
            long sleeperCount = lots.stream().filter(Lot::isSleeperLot).count();
            if (sleeperCount > 0) {
@@ -681,7 +672,7 @@ public class AuctionMonitorResource {
                        "description", "High interest, low bids - opportunity?"
                ));
            }

            // Add bargain insight
            long bargainCount = lots.stream().filter(Lot::isBelowEstimate).count();
            if (bargainCount > 5) {
@@ -691,7 +682,7 @@ public class AuctionMonitorResource {
                        "description", "Priced below auction house estimates"
                ));
            }

            // Add watch/followers insight
            long highWatchCount = lots.stream()
                    .filter(l -> l.followersCount() != null && l.followersCount() > 20)
@@ -703,7 +694,7 @@ public class AuctionMonitorResource {
                        "description", "High follower count, strong competition"
                ));
            }

            return Response.ok(insights).build();
        } catch (Exception e) {
            LOG.error("Failed to get insights", e);
@@ -725,12 +716,12 @@ public class AuctionMonitorResource {
            var sleepers = allLots.stream()
                    .filter(Lot::isSleeperLot)
                    .toList();

            Map<String, Object> response = Map.of(
                    "count", sleepers.size(),
                    "lots", sleepers
            );

            return Response.ok(response).build();
        } catch (Exception e) {
            LOG.error("Failed to get sleeper lots", e);
@@ -739,7 +730,7 @@ public class AuctionMonitorResource {
                    .build();
        }
    }

    /**
     * GET /api/monitor/intelligence/bargains
     * Returns lots priced below auction house estimates
@@ -759,12 +750,12 @@ public class AuctionMonitorResource {
                        return ratioA.compareTo(ratioB);
                    })
                    .toList();

            Map<String, Object> response = Map.of(
                    "count", bargains.size(),
                    "lots", bargains
            );

            return Response.ok(response).build();
        } catch (Exception e) {
            LOG.error("Failed to get bargains", e);
@@ -773,7 +764,7 @@ public class AuctionMonitorResource {
                    .build();
        }
    }

    /**
     * GET /api/monitor/intelligence/popular
     * Returns lots by popularity level
@@ -791,13 +782,13 @@ public class AuctionMonitorResource {
                        return followersB.compareTo(followersA);
                    })
                    .toList();

            Map<String, Object> response = Map.of(
                    "count", popular.size(),
                    "level", level,
                    "lots", popular
            );

            return Response.ok(response).build();
        } catch (Exception e) {
            LOG.error("Failed to get popular lots", e);
@@ -806,7 +797,7 @@ public class AuctionMonitorResource {
                    .build();
        }
    }

    /**
     * GET /api/monitor/intelligence/price-analysis
     * Returns price vs estimate analysis
@@ -816,28 +807,28 @@ public class AuctionMonitorResource {
    public Response getPriceAnalysis() {
        try {
            var allLots = db.getAllLots();

            long belowEstimate = allLots.stream().filter(Lot::isBelowEstimate).count();
            long aboveEstimate = allLots.stream().filter(Lot::isAboveEstimate).count();
            long withEstimates = allLots.stream()
                    .filter(lot -> lot.estimatedMin() != null && lot.estimatedMax() != null)
                    .count();

            double avgPriceVsEstimate = allLots.stream()
                    .map(Lot::getPriceVsEstimateRatio)
                    .filter(ratio -> ratio != null)
                    .mapToDouble(Double::doubleValue)
                    .average()
                    .orElse(0.0);

            Map<String, Object> response = Map.of(
                    "totalLotsWithEstimates", withEstimates,
                    "belowEstimate", belowEstimate,
                    "aboveEstimate", aboveEstimate,
                    "averagePriceVsEstimatePercent", Math.round(avgPriceVsEstimate),
                    "bargainOpportunities", belowEstimate
            );

            return Response.ok(response).build();
        } catch (Exception e) {
            LOG.error("Failed to get price analysis", e);
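The averaging step above maps lots to nullable price-vs-estimate ratios, so nulls must be dropped before `mapToDouble`. A tiny sketch of just that null-safe average (method name hypothetical):

```java
import java.util.Arrays;

public class PriceAnalysisSketch {
    static double averageRatio(Double[] ratios) {
        return Arrays.stream(ratios)
                .filter(r -> r != null)          // lots without estimates contribute nothing
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(0.0);                    // 0.0 when no lot has an estimate
    }

    public static void main(String[] args) {
        System.out.println(averageRatio(new Double[]{80.0, null, 120.0})); // 100.0
    }
}
```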
@@ -846,7 +837,7 @@ public class AuctionMonitorResource {
                    .build();
        }
    }

    /**
     * GET /api/monitor/lots/{lotId}/intelligence
     * Returns detailed intelligence for a specific lot
@@ -859,13 +850,13 @@ public class AuctionMonitorResource {
                    .filter(l -> l.lotId() == lotId)
                    .findFirst()
                    .orElse(null);

            if (lot == null) {
                return Response.status(Response.Status.NOT_FOUND)
                        .entity(Map.of("error", "Lot not found"))
                        .build();
            }

            Map<String, Object> intelligence = new HashMap<>();
            intelligence.put("lotId", lot.lotId());
            intelligence.put("followersCount", lot.followersCount());
@@ -882,7 +873,7 @@ public class AuctionMonitorResource {
            intelligence.put("condition", lot.condition());
            intelligence.put("vat", lot.vat());
            intelligence.put("buyerPremium", lot.buyerPremiumPercentage());

            return Response.ok(intelligence).build();
        } catch (Exception e) {
            LOG.error("Failed to get lot intelligence", e);
@@ -891,7 +882,7 @@ public class AuctionMonitorResource {
                    .build();
        }
    }

    /**
     * GET /api/monitor/charts/watch-distribution
     * Returns follower/watch count distribution
@@ -901,14 +892,14 @@ public class AuctionMonitorResource {
    public Response getWatchDistribution() {
        try {
            var lots = db.getAllLots();

            Map<String, Long> distribution = new HashMap<>();
            distribution.put("0 watchers", lots.stream().filter(l -> l.followersCount() == null || l.followersCount() == 0).count());
            distribution.put("1-5 watchers", lots.stream().filter(l -> l.followersCount() != null && l.followersCount() >= 1 && l.followersCount() <= 5).count());
            distribution.put("6-20 watchers", lots.stream().filter(l -> l.followersCount() != null && l.followersCount() >= 6 && l.followersCount() <= 20).count());
            distribution.put("21-50 watchers", lots.stream().filter(l -> l.followersCount() != null && l.followersCount() >= 21 && l.followersCount() <= 50).count());
            distribution.put("50+ watchers", lots.stream().filter(l -> l.followersCount() != null && l.followersCount() > 50).count());

            return Response.ok(distribution).build();
        } catch (Exception e) {
            LOG.error("Failed to get watch distribution", e);
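The five stream filters above partition lots into inclusive watcher ranges, with null counts landing in the "0 watchers" bucket. The same classification can be sketched as a single bucket function (this reformulation is mine, not the project's; bucket names are taken from the code above):

```java
public class WatchBucketSketch {
    static String bucket(Integer followers) {
        if (followers == null || followers == 0) return "0 watchers";
        if (followers <= 5) return "1-5 watchers";
        if (followers <= 20) return "6-20 watchers";
        if (followers <= 50) return "21-50 watchers";
        return "50+ watchers";
    }

    public static void main(String[] args) {
        System.out.println(bucket(null)); // 0 watchers
        System.out.println(bucket(7));    // 6-20 watchers
        System.out.println(bucket(51));   // 50+ watchers
    }
}
```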
@@ -917,14 +908,14 @@ public class AuctionMonitorResource {
                    .build();
        }
    }

    // Helper class for trend data
    public static class TrendHour {

        public int hour;
        public int lots;
        public int bids;

        public TrendHour(int hour, int lots, int bids) {
            this.hour = hour;
            this.lots = lots;
@@ -1,795 +1,218 @@
package auctiora;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import auctiora.db.*;
import lombok.extern.slf4j.Slf4j;
import org.eclipse.microprofile.config.inject.ConfigProperty;
import org.jdbi.v3.core.Jdbi;

import java.io.Console;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.time.Instant;
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;

/**
 * Service for persisting auctions, lots, and images into a SQLite database.
 * Data is typically populated by an external scraper process;
 * this service enriches it with image processing and monitoring.
 * Refactored database service using repository pattern and JDBI3.
 * Delegates operations to specialized repositories for better separation of concerns.
 *
 * @deprecated Legacy methods maintained for backward compatibility.
 * New code should use repositories directly via dependency injection.
 */
@Slf4j
public class DatabaseService {

    private final String url;
    private final Jdbi jdbi;
    private final LotRepository lotRepository;
    private final AuctionRepository auctionRepository;
    private final ImageRepository imageRepository;

    DatabaseService(String dbPath) {
        // Enable WAL mode and busy timeout for concurrent access
        this.url = "jdbc:sqlite:" + dbPath + "?journal_mode=WAL&busy_timeout=10000";
    }

    /**
     * Creates tables if they do not already exist.
     * Schema supports data from external scraper and adds image processing results.
     */
    void ensureSchema() throws SQLException {
        try (var conn = DriverManager.getConnection(url); var stmt = conn.createStatement()) {
            // Enable WAL mode for better concurrent access
            stmt.execute("PRAGMA journal_mode=WAL");
            stmt.execute("PRAGMA busy_timeout=10000");
            stmt.execute("PRAGMA synchronous=NORMAL");
    /**
     * Constructor for programmatic instantiation (tests, CLI tools).
     */
    public DatabaseService(String dbPath) {
        String url = "jdbc:sqlite:" + dbPath + "?journal_mode=WAL&busy_timeout=10000";
        this.jdbi = Jdbi.create(url);

            // Cache table (for HTTP caching)
            stmt.execute("""
                CREATE TABLE IF NOT EXISTS cache (
                    url TEXT PRIMARY KEY,
                    content BLOB,
                    timestamp REAL,
                    status_code INTEGER
                )""");
        // Initialize schema
        DatabaseSchema.ensureSchema(jdbi);

            // Auctions table (populated by external scraper)
            stmt.execute("""
                CREATE TABLE IF NOT EXISTS auctions (
                    auction_id TEXT PRIMARY KEY,
                    url TEXT UNIQUE,
                    title TEXT,
                    location TEXT,
                    lots_count INTEGER,
                    first_lot_closing_time TEXT,
                    scraped_at TEXT,
                    city TEXT,
                    country TEXT,
                    type TEXT,
                    lot_count INTEGER DEFAULT 0,
                    closing_time TEXT,
                    discovered_at INTEGER
                )""");
        // Create repositories
        this.lotRepository = new LotRepository(jdbi);
        this.auctionRepository = new AuctionRepository(jdbi);
        this.imageRepository = new ImageRepository(jdbi);
    }

            // Lots table (populated by external scraper)
            stmt.execute("""
                CREATE TABLE IF NOT EXISTS lots (
                    lot_id TEXT PRIMARY KEY,
                    auction_id TEXT,
                    url TEXT UNIQUE,
                    title TEXT,
                    current_bid TEXT,
                    bid_count INTEGER,
                    closing_time TEXT,
                    viewing_time TEXT,
                    pickup_date TEXT,
                    location TEXT,
                    description TEXT,
                    category TEXT,
                    scraped_at TEXT,
                    sale_id INTEGER,
                    manufacturer TEXT,
                    type TEXT,
                    year INTEGER,
                    currency TEXT DEFAULT 'EUR',
                    closing_notified INTEGER DEFAULT 0,
                    starting_bid TEXT,
                    minimum_bid TEXT,
                    status TEXT,
                    brand TEXT,
                    model TEXT,
                    attributes_json TEXT,
                    first_bid_time TEXT,
                    last_bid_time TEXT,
                    bid_velocity REAL,
                    bid_increment REAL,
                    year_manufactured INTEGER,
                    condition_score REAL,
                    condition_description TEXT,
                    serial_number TEXT,
                    damage_description TEXT,
                    followers_count INTEGER DEFAULT 0,
                    estimated_min_price REAL,
                    estimated_max_price REAL,
                    lot_condition TEXT,
                    appearance TEXT,
                    estimated_min REAL,
                    estimated_max REAL,
                    next_bid_step_cents INTEGER,
                    condition TEXT,
                    category_path TEXT,
                    city_location TEXT,
                    country_code TEXT,
                    bidding_status TEXT,
                    packaging TEXT,
                    quantity INTEGER,
                    vat REAL,
                    buyer_premium_percentage REAL,
                    remarks TEXT,
                    reserve_price REAL,
                    reserve_met INTEGER,
                    view_count INTEGER,
                    FOREIGN KEY (auction_id) REFERENCES auctions(auction_id)
                )""");
    /**
     * Constructor with JDBI instance (for dependency injection).
     */
    public DatabaseService(Jdbi jdbi) {
        this.jdbi = jdbi;
        DatabaseSchema.ensureSchema(jdbi);

            // Images table (populated by external scraper with URLs and local_path)
            stmt.execute("""
                CREATE TABLE IF NOT EXISTS images (
                    id INTEGER PRIMARY KEY AUTOINCREMENT,
                    lot_id TEXT,
                    url TEXT,
                    local_path TEXT,
                    downloaded INTEGER DEFAULT 0,
                    labels TEXT,
                    processed_at INTEGER,
                    FOREIGN KEY (lot_id) REFERENCES lots(lot_id)
                )""");
        this.lotRepository = new LotRepository(jdbi);
        this.auctionRepository = new AuctionRepository(jdbi);
        this.imageRepository = new ImageRepository(jdbi);
    }

            // Bid history table
            stmt.execute("""
                CREATE TABLE IF NOT EXISTS bid_history (
                    id INTEGER PRIMARY KEY AUTOINCREMENT,
                    lot_id TEXT NOT NULL,
                    bid_amount REAL NOT NULL,
                    bid_time TEXT NOT NULL,
                    is_autobid INTEGER DEFAULT 0,
                    bidder_id TEXT,
                    bidder_number INTEGER,
                    created_at TEXT DEFAULT CURRENT_TIMESTAMP,
                    FOREIGN KEY (lot_id) REFERENCES lots(lot_id)
                )""");
    // ==================== LEGACY COMPATIBILITY METHODS ====================
    // These methods delegate to repositories for backward compatibility

            // Indexes for performance
            stmt.execute("CREATE INDEX IF NOT EXISTS idx_timestamp ON cache(timestamp)");
            stmt.execute("CREATE INDEX IF NOT EXISTS idx_auctions_country ON auctions(country)");
            stmt.execute("CREATE INDEX IF NOT EXISTS idx_lots_sale_id ON lots(sale_id)");
            stmt.execute("CREATE INDEX IF NOT EXISTS idx_images_lot_id ON images(lot_id)");
            stmt.execute("CREATE UNIQUE INDEX IF NOT EXISTS idx_unique_lot_url ON images(lot_id, url)");
            stmt.execute("CREATE INDEX IF NOT EXISTS idx_bid_history_lot_time ON bid_history(lot_id, bid_time)");
            stmt.execute("CREATE INDEX IF NOT EXISTS idx_bid_history_bidder ON bid_history(bidder_id)");
        }
    }
    void ensureSchema() {
        DatabaseSchema.ensureSchema(jdbi);
    }

    synchronized void upsertAuction(AuctionInfo auction) {
        auctionRepository.upsert(auction);
    }

    /**
     * Inserts or updates an auction record (typically called by external scraper)
     * Handles both auction_id conflicts and url uniqueness constraints
     */
    synchronized void upsertAuction(AuctionInfo auction) throws SQLException {
        // First try to INSERT with ON CONFLICT on auction_id
        var insertSql = """
            INSERT INTO auctions (auction_id, title, location, city, country, url, type, lot_count, closing_time, discovered_at)
            VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
            ON CONFLICT(auction_id) DO UPDATE SET
                title = excluded.title,
                location = excluded.location,
                city = excluded.city,
                country = excluded.country,
                url = excluded.url,
                type = excluded.type,
                lot_count = excluded.lot_count,
                closing_time = excluded.closing_time
            """;
    synchronized List<AuctionInfo> getAllAuctions() {
        return auctionRepository.getAll();
    }

        try (var conn = DriverManager.getConnection(url)) {
            try (var ps = conn.prepareStatement(insertSql)) {
                ps.setLong(1, auction.auctionId());
                ps.setString(2, auction.title());
                ps.setString(3, auction.location());
                ps.setString(4, auction.city());
                ps.setString(5, auction.country());
                ps.setString(6, auction.url());
                ps.setString(7, auction.typePrefix());
                ps.setInt(8, auction.lotCount());
                ps.setString(9, auction.firstLotClosingTime() != null ? auction.firstLotClosingTime().toString() : null);
                ps.setLong(10, Instant.now().getEpochSecond());
                ps.executeUpdate();
            } catch (SQLException e) {
                // Handle both PRIMARY KEY and URL constraint failures
                String errMsg = e.getMessage();
                if (errMsg.contains("UNIQUE constraint failed: auctions.auction_id") ||
                    errMsg.contains("UNIQUE constraint failed: auctions.url") ||
                    errMsg.contains("PRIMARY KEY constraint failed")) {
    synchronized List<AuctionInfo> getAuctionsByCountry(String countryCode) {
        return auctionRepository.getByCountry(countryCode);
    }

                    // Try updating by URL as fallback (most reliable unique identifier)
                    var updateByUrlSql = """
                        UPDATE auctions SET
                            auction_id = ?,
                            title = ?,
                            location = ?,
                            city = ?,
                            country = ?,
                            type = ?,
                            lot_count = ?,
                            closing_time = ?
                        WHERE url = ?
                        """;
                    try (var ps = conn.prepareStatement(updateByUrlSql)) {
                        ps.setLong(1, auction.auctionId());
                        ps.setString(2, auction.title());
                        ps.setString(3, auction.location());
                        ps.setString(4, auction.city());
                        ps.setString(5, auction.country());
                        ps.setString(6, auction.typePrefix());
                        ps.setInt(7, auction.lotCount());
                        ps.setString(8, auction.firstLotClosingTime() != null ? auction.firstLotClosingTime().toString() : null);
                        ps.setString(9, auction.url());
    synchronized void upsertLot(Lot lot) {
        lotRepository.upsert(lot);
    }

                        int updated = ps.executeUpdate();
                        if (updated == 0) {
                            // Auction doesn't exist by URL either - this is unexpected
                            log.warn("Could not insert or update auction with url={}, auction_id={} - constraint violation but no existing record found",
                                    auction.url(), auction.auctionId());
                        } else {
                            log.debug("Updated existing auction by URL: {}", auction.url());
                        }
                    } catch (SQLException updateEx) {
                        // UPDATE also failed - log and swallow the exception
                        log.warn("Failed to update auction by URL ({}): {}", auction.url(), updateEx.getMessage());
                    }
                } else {
                    throw e;
                }
            }
        }
    }

    /**
     * Retrieves all auctions from the database
     */
    synchronized List<AuctionInfo> getAllAuctions() throws SQLException {
        List<AuctionInfo> auctions = new ArrayList<>();
        var sql = "SELECT auction_id, title, location, city, country, url, type, lot_count, closing_time FROM auctions";
    synchronized void upsertLotWithIntelligence(Lot lot) {
        lotRepository.upsertWithIntelligence(lot);
    }

        try (var conn = DriverManager.getConnection(url); var stmt = conn.createStatement()) {
            var rs = stmt.executeQuery(sql);
            while (rs.next()) {
                var closingStr = rs.getString("closing_time");
                LocalDateTime closing = null;
                if (closingStr != null && !closingStr.isBlank()) {
                    try {
                        closing = LocalDateTime.parse(closingStr);
                    } catch (Exception e) {
                        log.debug("Invalid closing_time format for auction {}: {}", rs.getLong("auction_id"), closingStr);
                    }
                }
    synchronized void updateLotCurrentBid(Lot lot) {
        lotRepository.updateCurrentBid(lot);
    }

                auctions.add(new AuctionInfo(
                        rs.getLong("auction_id"),
                        rs.getString("title"),
                        rs.getString("location"),
                        rs.getString("city"),
                        rs.getString("country"),
                        rs.getString("url"),
                        rs.getString("type"),
                        rs.getInt("lot_count"),
                        closing
                ));
            }
        }
        return auctions;
    }

    /**
     * Retrieves auctions by country code
     */
    synchronized List<AuctionInfo> getAuctionsByCountry(String countryCode) throws SQLException {
        List<AuctionInfo> auctions = new ArrayList<>();
        var sql = "SELECT auction_id, title, location, city, country, url, type, lot_count, closing_time "
                + "FROM auctions WHERE country = ?";
    synchronized void updateLotNotificationFlags(Lot lot) {
        lotRepository.updateNotificationFlags(lot);
    }

        try (var conn = DriverManager.getConnection(url); var ps = conn.prepareStatement(sql)) {
            ps.setString(1, countryCode);
            var rs = ps.executeQuery();
            while (rs.next()) {
                var closingStr = rs.getString("closing_time");
                LocalDateTime closing = null;
                if (closingStr != null && !closingStr.isBlank()) {
                    try {
                        closing = LocalDateTime.parse(closingStr);
                    } catch (Exception e) {
                        log.debug("Invalid closing_time format for auction {}: {}", rs.getLong("auction_id"), closingStr);
                    }
                }
    synchronized List<Lot> getActiveLots() {
        return lotRepository.getActiveLots();
    }

                auctions.add(new AuctionInfo(
                        rs.getLong("auction_id"),
                        rs.getString("title"),
                        rs.getString("location"),
                        rs.getString("city"),
                        rs.getString("country"),
                        rs.getString("url"),
                        rs.getString("type"),
                        rs.getInt("lot_count"),
                        closing
                ));
            }
        }
        return auctions;
    }
/**
|
||||
* Inserts or updates a lot record (typically called by external scraper)
|
||||
*/
|
||||
synchronized void upsertLot(Lot lot) throws SQLException {
|
||||
// First try to update existing lot by lot_id
|
||||
var updateSql = """
|
||||
UPDATE lots SET
|
||||
sale_id = ?,
|
||||
title = ?,
|
||||
description = ?,
|
||||
manufacturer = ?,
|
||||
type = ?,
|
||||
year = ?,
|
||||
category = ?,
|
||||
current_bid = ?,
|
||||
currency = ?,
|
||||
url = ?,
|
||||
closing_time = ?
|
||||
WHERE lot_id = ?
|
||||
""";
|
||||
synchronized List<Lot> getAllLots() {
|
||||
return lotRepository.getAllLots();
|
||||
}
|
||||
|
||||
var insertSql = """
|
||||
INSERT OR IGNORE INTO lots (lot_id, sale_id, title, description, manufacturer, type, year, category, current_bid, currency, url, closing_time, closing_notified)
|
||||
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
|
||||
""";
|
||||
synchronized List<BidHistory> getBidHistory(String lotId) {
|
||||
return lotRepository.getBidHistory(lotId);
|
||||
}
|
||||
|
||||
try (var conn = DriverManager.getConnection(url)) {
|
||||
// Try UPDATE first
|
||||
try (var ps = conn.prepareStatement(updateSql)) {
|
||||
ps.setLong(1, lot.saleId());
|
||||
ps.setString(2, lot.title());
|
||||
ps.setString(3, lot.description());
|
||||
ps.setString(4, lot.manufacturer());
|
||||
ps.setString(5, lot.type());
|
||||
ps.setInt(6, lot.year());
|
||||
ps.setString(7, lot.category());
|
||||
ps.setDouble(8, lot.currentBid());
|
||||
ps.setString(9, lot.currency());
|
||||
ps.setString(10, lot.url());
|
||||
ps.setString(11, lot.closingTime() != null ? lot.closingTime().toString() : null);
|
||||
ps.setLong(12, lot.lotId());
|
||||
synchronized void insertBidHistory(List<BidHistory> bidHistory) {
|
||||
lotRepository.insertBidHistory(bidHistory);
|
||||
}
|
||||
|
||||
int updated = ps.executeUpdate();
|
||||
if (updated > 0) {
|
||||
return; // Successfully updated existing record
|
||||
}
|
||||
}
|
||||
synchronized void insertImage(long lotId, String url, String filePath, List<String> labels) {
|
||||
imageRepository.insert(lotId, url, filePath, labels);
|
||||
}
|
||||
|
||||
// If no rows updated, try INSERT (ignore if conflicts with UNIQUE constraints)
|
||||
try (var ps = conn.prepareStatement(insertSql)) {
|
||||
ps.setLong(1, lot.lotId());
|
||||
ps.setLong(2, lot.saleId());
|
||||
ps.setString(3, lot.title());
|
||||
ps.setString(4, lot.description());
|
||||
ps.setString(5, lot.manufacturer());
|
||||
ps.setString(6, lot.type());
|
||||
ps.setInt(7, lot.year());
|
||||
ps.setString(8, lot.category());
|
||||
ps.setDouble(9, lot.currentBid());
|
||||
ps.setString(10, lot.currency());
|
||||
ps.setString(11, lot.url());
|
||||
ps.setString(12, lot.closingTime() != null ? lot.closingTime().toString() : null);
|
||||
ps.setInt(13, lot.closingNotified() ? 1 : 0);
|
||||
ps.executeUpdate();
|
||||
}
|
||||
}
|
||||
}
|
||||
synchronized void updateImageLabels(int imageId, List<String> labels) {
|
||||
imageRepository.updateLabels(imageId, labels);
|
||||
}
|
||||
|
||||
/**
|
||||
* Updates a lot with full intelligence data from GraphQL enrichment.
|
||||
* This is a comprehensive update that includes all 24 intelligence fields.
|
||||
*/
|
||||
synchronized void upsertLotWithIntelligence(Lot lot) throws SQLException {
|
||||
var sql = """
|
||||
UPDATE lots SET
|
||||
sale_id = ?,
|
||||
title = ?,
|
||||
description = ?,
|
||||
manufacturer = ?,
|
||||
type = ?,
|
||||
year = ?,
|
||||
category = ?,
|
||||
current_bid = ?,
|
||||
currency = ?,
|
||||
url = ?,
|
||||
closing_time = ?,
|
||||
followers_count = ?,
|
||||
estimated_min = ?,
|
||||
estimated_max = ?,
|
||||
next_bid_step_in_cents = ?,
|
||||
condition = ?,
|
||||
category_path = ?,
|
||||
city_location = ?,
|
||||
country_code = ?,
|
||||
bidding_status = ?,
|
||||
appearance = ?,
|
||||
packaging = ?,
|
||||
quantity = ?,
|
||||
vat = ?,
|
||||
buyer_premium_percentage = ?,
|
||||
remarks = ?,
|
||||
starting_bid = ?,
|
||||
reserve_price = ?,
|
||||
reserve_met = ?,
|
||||
bid_increment = ?,
|
||||
view_count = ?,
|
||||
first_bid_time = ?,
|
||||
last_bid_time = ?,
|
||||
bid_velocity = ?
|
||||
WHERE lot_id = ?
|
||||
""";
|
||||
synchronized List<String> getImageLabels(int imageId) {
|
||||
return imageRepository.getLabels(imageId);
|
||||
}
|
||||
|
||||
try (var conn = DriverManager.getConnection(url); var ps = conn.prepareStatement(sql)) {
|
||||
ps.setLong(1, lot.saleId());
|
||||
ps.setString(2, lot.title());
|
||||
ps.setString(3, lot.description());
|
||||
ps.setString(4, lot.manufacturer());
|
||||
ps.setString(5, lot.type());
|
||||
ps.setInt(6, lot.year());
|
||||
ps.setString(7, lot.category());
|
||||
ps.setDouble(8, lot.currentBid());
|
||||
ps.setString(9, lot.currency());
|
||||
ps.setString(10, lot.url());
|
||||
ps.setString(11, lot.closingTime() != null ? lot.closingTime().toString() : null);
|
||||
synchronized List<ImageRecord> getImagesForLot(long lotId) {
|
||||
return imageRepository.getImagesForLot(lotId)
|
||||
.stream()
|
||||
        .map(img -> new ImageRecord(img.id(), img.lotId(), img.url(), img.filePath(), img.labels()))
        .toList();
}

// Intelligence fields
if (lot.followersCount() != null) ps.setInt(12, lot.followersCount()); else ps.setNull(12, java.sql.Types.INTEGER);
if (lot.estimatedMin() != null) ps.setDouble(13, lot.estimatedMin()); else ps.setNull(13, java.sql.Types.REAL);
if (lot.estimatedMax() != null) ps.setDouble(14, lot.estimatedMax()); else ps.setNull(14, java.sql.Types.REAL);
if (lot.nextBidStepInCents() != null) ps.setLong(15, lot.nextBidStepInCents()); else ps.setNull(15, java.sql.Types.BIGINT);
ps.setString(16, lot.condition());
ps.setString(17, lot.categoryPath());
ps.setString(18, lot.cityLocation());
ps.setString(19, lot.countryCode());
ps.setString(20, lot.biddingStatus());
ps.setString(21, lot.appearance());
ps.setString(22, lot.packaging());
if (lot.quantity() != null) ps.setLong(23, lot.quantity()); else ps.setNull(23, java.sql.Types.BIGINT);
if (lot.vat() != null) ps.setDouble(24, lot.vat()); else ps.setNull(24, java.sql.Types.REAL);
if (lot.buyerPremiumPercentage() != null) ps.setDouble(25, lot.buyerPremiumPercentage()); else ps.setNull(25, java.sql.Types.REAL);
ps.setString(26, lot.remarks());
if (lot.startingBid() != null) ps.setDouble(27, lot.startingBid()); else ps.setNull(27, java.sql.Types.REAL);
if (lot.reservePrice() != null) ps.setDouble(28, lot.reservePrice()); else ps.setNull(28, java.sql.Types.REAL);
if (lot.reserveMet() != null) ps.setInt(29, lot.reserveMet() ? 1 : 0); else ps.setNull(29, java.sql.Types.INTEGER);
if (lot.bidIncrement() != null) ps.setDouble(30, lot.bidIncrement()); else ps.setNull(30, java.sql.Types.REAL);
if (lot.viewCount() != null) ps.setInt(31, lot.viewCount()); else ps.setNull(31, java.sql.Types.INTEGER);
ps.setString(32, lot.firstBidTime() != null ? lot.firstBidTime().toString() : null);
ps.setString(33, lot.lastBidTime() != null ? lot.lastBidTime().toString() : null);
if (lot.bidVelocity() != null) ps.setDouble(34, lot.bidVelocity()); else ps.setNull(34, java.sql.Types.REAL);
synchronized List<ImageDetectionRecord> getImagesNeedingDetection() {
    return imageRepository.getImagesNeedingDetection()
        .stream()
        .map(img -> new ImageDetectionRecord(img.id(), img.lotId(), img.filePath()))
        .toList();
}

ps.setLong(35, lot.lotId());
synchronized int getImageCount() {
    return imageRepository.getImageCount();
}

int updated = ps.executeUpdate();
if (updated == 0) {
    log.warn("Failed to update lot {} - lot not found in database", lot.lotId());
}
}
}
synchronized List<AuctionInfo> importAuctionsFromScraper() {
    return jdbi.withHandle(handle -> {
        var sql = """
            SELECT
                l.auction_id,
                MIN(l.title) as title,
                MIN(l.location) as location,
                MIN(l.url) as url,
                COUNT(*) as lots_count,
                MIN(l.closing_time) as first_lot_closing_time,
                MIN(l.scraped_at) as scraped_at
            FROM lots l
            WHERE l.auction_id IS NOT NULL
            GROUP BY l.auction_id
            """;

/**
 * Inserts a complete image record (for testing/legacy compatibility).
 * In production, scraper inserts with local_path, monitor updates labels via updateImageLabels.
 */
synchronized void insertImage(long lotId, String url, String filePath, List<String> labels) throws SQLException {
    var sql = "INSERT INTO images (lot_id, url, local_path, labels, processed_at, downloaded) VALUES (?, ?, ?, ?, ?, 1)";
    try (var conn = DriverManager.getConnection(this.url); var ps = conn.prepareStatement(sql)) {
        ps.setLong(1, lotId);
        ps.setString(2, url);
        ps.setString(3, filePath);
        ps.setString(4, String.join(",", labels));
        ps.setLong(5, Instant.now().getEpochSecond());
        ps.executeUpdate();
    }
}
        return handle.createQuery(sql)
            .map((rs, ctx) -> {
                try {
                    var auction = ScraperDataAdapter.fromScraperAuction(rs);
                    if (auction.auctionId() != 0L) {
                        auctionRepository.upsert(auction);
                        return auction;
                    }
                } catch (Exception e) {
                    log.warn("Failed to import auction: {}", e.getMessage());
                }
                return null;
            })
            .list()
            .stream()
            .filter(a -> a != null)
            .toList();
    });
}

/**
 * Updates the labels field for an image after object detection
 */
synchronized void updateImageLabels(int imageId, List<String> labels) throws SQLException {
    var sql = "UPDATE images SET labels = ?, processed_at = ? WHERE id = ?";
    try (var conn = DriverManager.getConnection(this.url); var ps = conn.prepareStatement(sql)) {
        ps.setString(1, String.join(",", labels));
        ps.setLong(2, Instant.now().getEpochSecond());
        ps.setInt(3, imageId);
        ps.executeUpdate();
    }
}
synchronized List<Lot> importLotsFromScraper() {
    return jdbi.withHandle(handle -> {
        var sql = "SELECT * FROM lots";

/**
 * Gets the labels for a specific image
 */
synchronized List<String> getImageLabels(int imageId) throws SQLException {
    var sql = "SELECT labels FROM images WHERE id = ?";
    try (var conn = DriverManager.getConnection(this.url); var ps = conn.prepareStatement(sql)) {
        ps.setInt(1, imageId);
        var rs = ps.executeQuery();
        if (rs.next()) {
            var labelsStr = rs.getString("labels");
            if (labelsStr != null && !labelsStr.isEmpty()) {
                return List.of(labelsStr.split(","));
            }
        }
    }
    return List.of();
}

/**
 * Retrieves images for a specific lot
 */
synchronized List<ImageRecord> getImagesForLot(long lotId) throws SQLException {
    List<ImageRecord> images = new ArrayList<>();
    var sql = "SELECT id, lot_id, url, local_path, labels FROM images WHERE lot_id = ?";

    try (var conn = DriverManager.getConnection(url); var ps = conn.prepareStatement(sql)) {
        ps.setLong(1, lotId);
        var rs = ps.executeQuery();
        while (rs.next()) {
            images.add(new ImageRecord(
                rs.getInt("id"),
                rs.getLong("lot_id"),
                rs.getString("url"),
                rs.getString("local_path"),
                rs.getString("labels")
            ));
        }
    }
    return images;
}

/**
 * Retrieves all lots that are active and need monitoring
 */
synchronized List<Lot> getActiveLots() throws SQLException {
    List<Lot> list = new ArrayList<>();
    var sql = "SELECT lot_id, sale_id as auction_id, title, description, manufacturer, type, year, category, " +
              "current_bid, currency, url, closing_time, closing_notified FROM lots";
        return handle.createQuery(sql)
            .map((rs, ctx) -> {
                try {
                    var lot = ScraperDataAdapter.fromScraperLot(rs);
                    if (lot.lotId() != 0L && lot.saleId() != 0L) {
                        lotRepository.upsert(lot);
                        return lot;
                    }
                } catch (Exception e) {
                    log.warn("Failed to import lot: {}", e.getMessage());
                }
                return null;
            })
            .list()
            .stream()
            .filter(l -> l != null)
            .toList();
    });
}

    try (var conn = DriverManager.getConnection(url); var stmt = conn.createStatement()) {
        var rs = stmt.executeQuery(sql);
        while (rs.next()) {
            try {
                // Use ScraperDataAdapter to handle TEXT parsing from legacy database
                var lot = ScraperDataAdapter.fromScraperLot(rs);
                list.add(lot);
            } catch (Exception e) {
                log.warn("Failed to parse lot {}: {}", rs.getString("lot_id"), e.getMessage());
            }
        }
    }
    return list;
}

/**
 * Retrieves all lots from the database
 */
synchronized List<Lot> getAllLots() throws SQLException {
    return getActiveLots();
}

/**
 * Gets the total number of images in the database
 */
synchronized int getImageCount() throws SQLException {
    var sql = "SELECT COUNT(*) as count FROM images";
    try (var conn = DriverManager.getConnection(url); var stmt = conn.createStatement()) {
        var rs = stmt.executeQuery(sql);
        if (rs.next()) {
            return rs.getInt("count");
        }
    }
    return 0;
}

/**
 * Updates the current bid of a lot (used by monitoring service)
 */
synchronized void updateLotCurrentBid(Lot lot) throws SQLException {
    try (var conn = DriverManager.getConnection(url);
         var ps = conn.prepareStatement("UPDATE lots SET current_bid = ? WHERE lot_id = ?")) {
        ps.setDouble(1, lot.currentBid());
        ps.setLong(2, lot.lotId());
        ps.executeUpdate();
    }
}

/**
 * Updates the closingNotified flag of a lot
 */
synchronized void updateLotNotificationFlags(Lot lot) throws SQLException {
    try (var conn = DriverManager.getConnection(url);
         var ps = conn.prepareStatement("UPDATE lots SET closing_notified = ? WHERE lot_id = ?")) {
        ps.setInt(1, lot.closingNotified() ? 1 : 0);
        ps.setLong(2, lot.lotId());
        ps.executeUpdate();
    }
}
// ==================== DIRECT REPOSITORY ACCESS ====================
// Expose repositories for modern usage patterns

/**
 * Retrieves bid history for a specific lot
 */
synchronized List<BidHistory> getBidHistory(String lotId) throws SQLException {
    List<BidHistory> history = new ArrayList<>();
    var sql = "SELECT id, lot_id, bid_amount, bid_time, is_autobid, bidder_id, bidder_number " +
              "FROM bid_history WHERE lot_id = ? ORDER BY bid_time DESC LIMIT 100";
public LotRepository lots() {
    return lotRepository;
}

    try (var conn = DriverManager.getConnection(url);
         var ps = conn.prepareStatement(sql)) {
        ps.setString(1, lotId);
        var rs = ps.executeQuery();
public AuctionRepository auctions() {
    return auctionRepository;
}

        while (rs.next()) {
            LocalDateTime bidTime = null;
            var bidTimeStr = rs.getString("bid_time");
            if (bidTimeStr != null && !bidTimeStr.isBlank()) {
                try {
                    bidTime = LocalDateTime.parse(bidTimeStr);
                } catch (Exception e) {
                    log.debug("Invalid bid_time format: {}", bidTimeStr);
                }
            }
public ImageRepository images() {
    return imageRepository;
}

            history.add(new BidHistory(
                rs.getInt("id"),
                rs.getString("lot_id"),
                rs.getDouble("bid_amount"),
                bidTime,
                rs.getInt("is_autobid") != 0,
                rs.getString("bidder_id"),
                rs.getInt("bidder_number")
            ));
        }
    }
    return history;
}

/**
 * Imports auctions from scraper's schema format.
 * Since the scraper doesn't populate a separate auctions table,
 * we derive auction metadata by aggregating lots data.
 *
 * @return List of imported auctions
 */
synchronized List<AuctionInfo> importAuctionsFromScraper() throws SQLException {
    List<AuctionInfo> imported = new ArrayList<>();
public Jdbi getJdbi() {
    return jdbi;
}

    // Derive auctions from lots table (scraper doesn't populate auctions table)
    var sql = """
        SELECT
            l.auction_id,
            MIN(l.title) as title,
            MIN(l.location) as location,
            MIN(l.url) as url,
            COUNT(*) as lots_count,
            MIN(l.closing_time) as first_lot_closing_time,
            MIN(l.scraped_at) as scraped_at
        FROM lots l
        WHERE l.auction_id IS NOT NULL
        GROUP BY l.auction_id
        """;
// ==================== LEGACY RECORDS ====================
// Keep records for backward compatibility with existing code

    try (var conn = DriverManager.getConnection(url); var stmt = conn.createStatement()) {
        var rs = stmt.executeQuery(sql);
        while (rs.next()) {
            try {
                var auction = ScraperDataAdapter.fromScraperAuction(rs);
                // Skip auctions with invalid IDs (0 indicates parsing failed)
                if (auction.auctionId() == 0L) {
                    log.debug("Skipping auction with invalid ID: auction_id={}", auction.auctionId());
                    continue;
                }
                upsertAuction(auction);
                imported.add(auction);
            } catch (SQLException e) {
                // SQLException should be handled by upsertAuction, but if it propagates here, log it
                log.warn("Failed to import auction (SQL error): {}", e.getMessage());
            } catch (Exception e) {
                // Other exceptions (parsing errors, etc)
                log.warn("Failed to import auction (parsing error): {}", e.getMessage());
            }
        }
    } catch (SQLException e) {
        // Table might not exist in scraper format - that's ok
        log.info("ℹ️ Scraper lots table not found or incompatible schema: {}", e.getMessage());
    }
public record ImageRecord(int id, long lotId, String url, String filePath, String labels) {}

    return imported;
}

/**
 * Imports lots from scraper's schema format.
 * Reads from scraper's tables and converts to monitor format using adapter.
 *
 * @return List of imported lots
 */
synchronized List<Lot> importLotsFromScraper() throws SQLException {
    List<Lot> imported = new ArrayList<>();
    var sql = "SELECT lot_id, auction_id, title, description, category, " +
              "current_bid, closing_time, url " +
              "FROM lots";

    try (var conn = DriverManager.getConnection(url); var stmt = conn.createStatement()) {
        var rs = stmt.executeQuery(sql);
        while (rs.next()) {
            try {
                var lot = ScraperDataAdapter.fromScraperLot(rs);
                // Skip lots with invalid IDs (0 indicates parsing failed)
                if (lot.lotId() == 0L || lot.saleId() == 0L) {
                    log.debug("Skipping lot with invalid ID: lot_id={}, sale_id={}", lot.lotId(), lot.saleId());
                    continue;
                }
                upsertLot(lot);
                imported.add(lot);
            } catch (Exception e) {
                log.warn("Failed to import lot: {}", e.getMessage());
            }
        }
    } catch (SQLException e) {
        // Table might not exist in scraper format - that's ok
        log.info("ℹ️ Scraper lots table not found or incompatible schema");
    }

    return imported;
}

/**
 * Gets images that have been downloaded by the scraper but need object detection.
 * Only returns images that have local_path set but no labels yet.
 *
 * @return List of images needing object detection
 */
synchronized List<ImageDetectionRecord> getImagesNeedingDetection() throws SQLException {
    List<ImageDetectionRecord> images = new ArrayList<>();
    var sql = """
        SELECT i.id, i.lot_id, i.local_path
        FROM images i
        WHERE i.local_path IS NOT NULL
          AND i.local_path != ''
          AND (i.labels IS NULL OR i.labels = '')
        """;

    try (var conn = DriverManager.getConnection(url); var stmt = conn.createStatement()) {
        var rs = stmt.executeQuery(sql);
        while (rs.next()) {
            // Extract numeric lot ID from TEXT field (e.g., "A1-34732-49" -> 3473249)
            String lotIdStr = rs.getString("lot_id");
            long lotId = ScraperDataAdapter.extractNumericId(lotIdStr);

            images.add(new ImageDetectionRecord(
                rs.getInt("id"),
                lotId,
                rs.getString("local_path")
            ));
        }
    } catch (SQLException e) {
        // Table might not exist in scraper format - that's ok
        log.info("ℹ️ Images table not found or incompatible schema: {}", e.getMessage());
    }

    return images;
}

/**
 * Simple record for image data from database
 */
record ImageRecord(int id, long lotId, String url, String filePath, String labels) { }

/**
 * Record for images that need object detection processing
 */
record ImageDetectionRecord(int id, long lotId, String filePath) { }
public record ImageDetectionRecord(int id, long lotId, String filePath) {}
}

@@ -1,106 +1,55 @@
package auctiora;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

import lombok.extern.slf4j.Slf4j;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.SQLException;
import java.util.List;

/**
 * Service responsible for processing images from the IMAGES table.
 * Performs object detection on already-downloaded images and updates the database.
 *
 * NOTE: Image downloading is handled by the external scraper process.
 * This service only performs object detection on images that already have local_path set.
 */
@Slf4j
class ImageProcessingService {

    private final DatabaseService db;
    private final ObjectDetectionService detector;

    ImageProcessingService(DatabaseService db, ObjectDetectionService detector) {
        this.db = db;
        this.detector = detector;
    }

    /**
     * Processes a single image: runs object detection and updates labels in database.
     *
     * @param imageId   database ID of the image record
     * @param localPath local file path to the downloaded image
     * @param lotId     lot identifier (for logging)
     * @return true if processing succeeded
     */
    boolean processImage(int imageId, String localPath, long lotId) {
public record ImageProcessingService(DatabaseService db, ObjectDetectionService detector) {

    boolean processImage(int id, String path, long lot) {
        try {
            // Normalize path separators (convert Windows backslashes to forward slashes)
            localPath = localPath.replace('\\', '/');

            // Check if file exists before processing
            var file = new java.io.File(localPath);
            if (!file.exists() || !file.canRead()) {
                log.warn(" Image file not accessible: {}", localPath);
            path = path.replace('\\', '/');
            var f = new java.io.File(path);
            if (!f.exists() || !f.canRead()) {
                log.warn("Image not accessible: {}", path);
                return false;
            }

            // Check file size (skip very large files that might cause issues)
            long fileSizeBytes = file.length();
            if (fileSizeBytes > 50 * 1024 * 1024) { // 50 MB limit
                log.warn(" Image file too large ({}MB): {}", fileSizeBytes / (1024 * 1024), localPath);
            if (f.length() > 50L * 1024 * 1024) {
                log.warn("Image too large: {}", path);
                return false;
            }

            // Run object detection on the local file
            var labels = detector.detectObjects(localPath);

            // Update the database with detected labels
            db.updateImageLabels(imageId, labels);

            if (!labels.isEmpty()) {
                log.info(" Lot {}: Detected {}", lotId, String.join(", ", labels));
            }

            var labels = detector.detectObjects(path);
            db.updateImageLabels(id, labels);

            if (!labels.isEmpty())
                log.info("Lot {}: {}", lot, String.join(", ", labels));

            return true;
        } catch (Exception e) {
            log.error(" Failed to process image {}: {}", imageId, e.getMessage());
            log.error("Process fail {}: {}", id, e.getMessage());
            return false;
        }
    }

    /**
     * Batch processes all pending images in the database.
     * Only processes images that have been downloaded by the scraper but haven't had object detection run yet.
     */
    void processPendingImages() {
        log.info("Processing pending images...");

        try {
            var pendingImages = db.getImagesNeedingDetection();
            log.info("Found {} images needing object detection", pendingImages.size());

            var processed = 0;
            var detected = 0;

            for (var image : pendingImages) {
                if (processImage(image.id(), image.filePath(), image.lotId())) {
            var images = db.getImagesNeedingDetection();
            log.info("Pending {}", images.size());

            int processed = 0, detected = 0;

            for (var i : images) {
                if (processImage(i.id(), i.filePath(), i.lotId())) {
                    processed++;
                    // Re-fetch to check if labels were found
                    var labels = db.getImageLabels(image.id());
                    if (labels != null && !labels.isEmpty()) {
                        detected++;
                    }
                    var lbl = db.getImageLabels(i.id());
                    if (lbl != null && !lbl.isEmpty()) detected++;
                }
            }

            log.info("Processed {} images, detected objects in {}", processed, detected);

        } catch (SQLException e) {
            log.error("Error processing pending images: {}", e.getMessage());

            log.info("Processed {}, detected {}", processed, detected);

        } catch (Exception e) {
            log.error("Batch fail: {}", e.getMessage());
        }
    }
}

@@ -8,7 +8,7 @@ import java.time.LocalDateTime;
/// Data typically populated by the external scraper process.
/// This project enriches the data with image analysis and monitoring.
@With
record Lot(
public record Lot(
    long saleId,
    long lotId,
    String displayId, // Full lot ID string (e.g., "A1-34732-49") for GraphQL queries

@@ -16,73 +16,66 @@ import lombok.extern.slf4j.Slf4j;
@Slf4j
@ApplicationScoped
public class LotEnrichmentScheduler {

    @Inject
    LotEnrichmentService enrichmentService;

    /**
     * Enriches lots closing within 1 hour - HIGH PRIORITY
     * Runs every 5 minutes
     */
    @Scheduled(cron = "0 */5 * * * ?")
    public void enrichCriticalLots() {
        try {
            log.debug("Enriching critical lots (closing < 1 hour)");
            int enriched = enrichmentService.enrichClosingSoonLots(1);
            if (enriched > 0) {
                log.info("Enriched {} critical lots", enriched);
            }
        } catch (Exception e) {
            log.error("Failed to enrich critical lots", e);
        }
    }

    /**
     * Enriches lots closing within 6 hours - MEDIUM PRIORITY
     * Runs every 30 minutes
     */
    @Scheduled(cron = "0 */30 * * * ?")
    public void enrichUrgentLots() {
        try {
            log.debug("Enriching urgent lots (closing < 6 hours)");
            int enriched = enrichmentService.enrichClosingSoonLots(6);
            if (enriched > 0) {
                log.info("Enriched {} urgent lots", enriched);
            }
        } catch (Exception e) {
            log.error("Failed to enrich urgent lots", e);
        }
    }

    /**
     * Enriches lots closing within 24 hours - NORMAL PRIORITY
     * Runs every 2 hours
     */
    @Scheduled(cron = "0 0 */2 * * ?")
    public void enrichDailyLots() {
        try {
            log.debug("Enriching daily lots (closing < 24 hours)");
            int enriched = enrichmentService.enrichClosingSoonLots(24);
            if (enriched > 0) {
                log.info("Enriched {} daily lots", enriched);
            }
        } catch (Exception e) {
            log.error("Failed to enrich daily lots", e);
        }
    }

    /**
     * Enriches all active lots - LOW PRIORITY
     * Runs every 6 hours to keep all data fresh
     */
    @Scheduled(cron = "0 0 */6 * * ?")
    public void enrichAllLots() {
        try {
            log.info("Starting full enrichment of all lots");
            int enriched = enrichmentService.enrichAllActiveLots();
            log.info("Full enrichment complete: {} lots updated", enriched);
        } catch (Exception e) {
            log.error("Failed to enrich all lots", e);
        }
    }

    @Inject LotEnrichmentService enrichmentService;

    /**
     * Enriches lots closing within 1 hour - HIGH PRIORITY
     * Runs every 5 minutes
     */
    @Scheduled(cron = "0 */5 * * * ?")
    public void enrichCriticalLots() {
        try {
            log.debug("Enriching critical lots (closing < 1 hour)");
            int enriched = enrichmentService.enrichClosingSoonLots(1);
            if (enriched > 0) log.info("Enriched {} critical lots", enriched);
        } catch (Exception e) {
            log.error("Failed to enrich critical lots", e);
        }
    }

    /**
     * Enriches lots closing within 6 hours - MEDIUM PRIORITY
     * Runs every 30 minutes
     */
    @Scheduled(cron = "0 */30 * * * ?")
    public void enrichUrgentLots() {
        try {
            log.debug("Enriching urgent lots (closing < 6 hours)");
            int enriched = enrichmentService.enrichClosingSoonLots(6);
            if (enriched > 0) log.info("Enriched {} urgent lots", enriched);
        } catch (Exception e) {
            log.error("Failed to enrich urgent lots", e);
        }
    }

    /**
     * Enriches lots closing within 24 hours - NORMAL PRIORITY
     * Runs every 2 hours
     */
    @Scheduled(cron = "0 0 */2 * * ?")
    public void enrichDailyLots() {
        try {
            log.debug("Enriching daily lots (closing < 24 hours)");
            int enriched = enrichmentService.enrichClosingSoonLots(24);
            if (enriched > 0) log.info("Enriched {} daily lots", enriched);
        } catch (Exception e) {
            log.error("Failed to enrich daily lots", e);
        }
    }

    /**
     * Enriches all active lots - LOW PRIORITY
     * Runs every 6 hours to keep all data fresh
     */
    @Scheduled(cron = "0 0 */6 * * ?")
    public void enrichAllLots() {
        try {
            log.info("Starting full enrichment of all lots");
            int enriched = enrichmentService.enrichAllActiveLots();
            log.info("Full enrichment complete: {} lots updated", enriched);
        } catch (Exception e) {
            log.error("Failed to enrich all lots", e);
        }
    }
}

@@ -16,190 +16,186 @@ import java.util.stream.Collectors;
@Slf4j
@ApplicationScoped
public class LotEnrichmentService {

    @Inject
    TroostwijkGraphQLClient graphQLClient;

    @Inject
    DatabaseService db;

    /**
     * Enriches a single lot with GraphQL intelligence data
     */
    public boolean enrichLot(Lot lot) {
        if (lot.displayId() == null || lot.displayId().isBlank()) {
            log.debug("Cannot enrich lot {} - missing displayId", lot.lotId());

    @Inject TroostwijkGraphQLClient graphQLClient;
    @Inject DatabaseService db;
    /**
     * Enriches a single lot with GraphQL intelligence data
     */
    public boolean enrichLot(Lot lot) {
        if (lot.displayId() == null || lot.displayId().isBlank()) {
            log.debug("Cannot enrich lot {} - missing displayId", lot.lotId());
            return false;
        }

        try {
            var intelligence = graphQLClient.fetchLotIntelligence(lot.displayId(), lot.lotId());
            if (intelligence == null) {
                log.debug("No intelligence data for lot {}", lot.displayId());
                return false;
            }

        try {
        }

            // Merge intelligence with existing lot data
            var enrichedLot = mergeLotWithIntelligence(lot, intelligence);
            db.upsertLotWithIntelligence(enrichedLot);

            log.debug("Enriched lot {} with GraphQL data", lot.lotId());
            return true;

        } catch (Exception e) {
            log.warn("Failed to enrich lot {}: {}", lot.lotId(), e.getMessage());
            return false;
        }
    }

    /**
     * Enriches multiple lots sequentially
     * @param lots List of lots to enrich
     * @return Number of successfully enriched lots
     */
    public int enrichLotsBatch(List<Lot> lots) {
        if (lots.isEmpty()) {
            return 0;
        }

        log.info("Enriching {} lots via GraphQL", lots.size());
        int enrichedCount = 0;

        for (var lot : lots) {
            if (lot.displayId() == null || lot.displayId().isBlank()) {
                log.debug("Skipping lot {} - missing displayId", lot.lotId());
                continue;
            }

            try {
                var intelligence = graphQLClient.fetchLotIntelligence(lot.displayId(), lot.lotId());
                if (intelligence == null) {
                    log.debug("No intelligence data for lot {}", lot.displayId());
                    return false;
                if (intelligence != null) {
                    var enrichedLot = mergeLotWithIntelligence(lot, intelligence);
                    db.upsertLotWithIntelligence(enrichedLot);
                    enrichedCount++;
                } else {
                    log.debug("No intelligence data for lot {}", lot.displayId());
                }

            // Merge intelligence with existing lot data
            var enrichedLot = mergeLotWithIntelligence(lot, intelligence);
            db.upsertLotWithIntelligence(enrichedLot);

            log.debug("Enriched lot {} with GraphQL data", lot.lotId());
            return true;

        } catch (Exception e) {
            log.warn("Failed to enrich lot {}: {}", lot.lotId(), e.getMessage());
            return false;
        }
    }

    /**
     * Enriches multiple lots sequentially
     * @param lots List of lots to enrich
     * @return Number of successfully enriched lots
     */
    public int enrichLotsBatch(List<Lot> lots) {
        if (lots.isEmpty()) {
            } catch (Exception e) {
                log.warn("Failed to enrich lot {}: {}", lot.displayId(), e.getMessage());
            }

            // Small delay to respect rate limits (handled by RateLimitedHttpClient)
        }

        log.info("Successfully enriched {}/{} lots", enrichedCount, lots.size());
        return enrichedCount;
    }

    /**
     * Enriches lots closing soon (within specified hours) with higher priority
     */
    public int enrichClosingSoonLots(int hoursUntilClose) {
        try {
            var allLots = db.getAllLots();
            var closingSoon = allLots.stream()
                .filter(lot -> lot.closingTime() != null)
                .filter(lot -> {
                    long minutes = lot.minutesUntilClose();
                    return minutes > 0 && minutes <= hoursUntilClose * 60;
                })
                .toList();

            if (closingSoon.isEmpty()) {
                log.debug("No lots closing within {} hours", hoursUntilClose);
                return 0;
            }

        log.info("Enriching {} lots via GraphQL", lots.size());
        int enrichedCount = 0;

        for (var lot : lots) {
            if (lot.displayId() == null || lot.displayId().isBlank()) {
                log.debug("Skipping lot {} - missing displayId", lot.lotId());
                continue;
            }

            log.info("Enriching {} lots closing within {} hours", closingSoon.size(), hoursUntilClose);
            return enrichLotsBatch(closingSoon);

        } catch (Exception e) {
            log.error("Failed to enrich closing soon lots: {}", e.getMessage());
            return 0;
        }
    }

    /**
     * Enriches all active lots (can be slow for large datasets)
     */
    public int enrichAllActiveLots() {
        try {
            var allLots = db.getAllLots();
            log.info("Enriching all {} active lots", allLots.size());

            // Process in batches to avoid overwhelming the API
            int batchSize = 50;
            int totalEnriched = 0;

            for (int i = 0; i < allLots.size(); i += batchSize) {
                int end = Math.min(i + batchSize, allLots.size());
                List<Lot> batch = allLots.subList(i, end);

                int enriched = enrichLotsBatch(batch);
                totalEnriched += enriched;

                // Small delay between batches to respect rate limits
                if (end < allLots.size()) {
                    Thread.sleep(1000);
                }

            try {
                var intelligence = graphQLClient.fetchLotIntelligence(lot.displayId(), lot.lotId());
                if (intelligence != null) {
                    var enrichedLot = mergeLotWithIntelligence(lot, intelligence);
                    db.upsertLotWithIntelligence(enrichedLot);
                    enrichedCount++;
                } else {
                    log.debug("No intelligence data for lot {}", lot.displayId());
                }
            } catch (Exception e) {
                log.warn("Failed to enrich lot {}: {}", lot.displayId(), e.getMessage());
            }

            // Small delay to respect rate limits (handled by RateLimitedHttpClient)
        }

        log.info("Successfully enriched {}/{} lots", enrichedCount, lots.size());
        return enrichedCount;
    }

    /**
     * Enriches lots closing soon (within specified hours) with higher priority
     */
    public int enrichClosingSoonLots(int hoursUntilClose) {
        try {
            var allLots = db.getAllLots();
            var closingSoon = allLots.stream()
                .filter(lot -> lot.closingTime() != null)
                .filter(lot -> {
                    long minutes = lot.minutesUntilClose();
                    return minutes > 0 && minutes <= hoursUntilClose * 60;
                })
                .toList();

            if (closingSoon.isEmpty()) {
                log.debug("No lots closing within {} hours", hoursUntilClose);
                return 0;
            }

            log.info("Enriching {} lots closing within {} hours", closingSoon.size(), hoursUntilClose);
            return enrichLotsBatch(closingSoon);

        } catch (Exception e) {
            log.error("Failed to enrich closing soon lots: {}", e.getMessage());
            return 0;
        }
    }

    /**
     * Enriches all active lots (can be slow for large datasets)
     */
    public int enrichAllActiveLots() {
        try {
            var allLots = db.getAllLots();
            log.info("Enriching all {} active lots", allLots.size());

            // Process in batches to avoid overwhelming the API
            int batchSize = 50;
            int totalEnriched = 0;

            for (int i = 0; i < allLots.size(); i += batchSize) {
                int end = Math.min(i + batchSize, allLots.size());
                List<Lot> batch = allLots.subList(i, end);

                int enriched = enrichLotsBatch(batch);
                totalEnriched += enriched;

                // Small delay between batches to respect rate limits
                if (end < allLots.size()) {
                    Thread.sleep(1000);
                }
            }

            log.info("Finished enriching all lots. Total enriched: {}/{}", totalEnriched, allLots.size());
            return totalEnriched;

        } catch (Exception e) {
            log.error("Failed to enrich all lots: {}", e.getMessage());
            return 0;
        }
    }

    /**
     * Merges existing lot data with GraphQL intelligence
     */
    private Lot mergeLotWithIntelligence(Lot lot, LotIntelligence intel) {
        return new Lot(
            lot.saleId(),
            lot.lotId(),
            lot.displayId(), // Preserve displayId
            lot.title(),
            lot.description(),
            lot.manufacturer(),
            lot.type(),
            lot.year(),
            lot.category(),
            lot.currentBid(),
            lot.currency(),
            lot.url(),
            lot.closingTime(),
            lot.closingNotified(),
            // HIGH PRIORITY FIELDS from GraphQL
            intel.followersCount(),
            intel.estimatedMin(),
            intel.estimatedMax(),
            intel.nextBidStepInCents(),
            intel.condition(),
            intel.categoryPath(),
            intel.cityLocation(),
            intel.countryCode(),
            // MEDIUM PRIORITY FIELDS
            intel.biddingStatus(),
            intel.appearance(),
            intel.packaging(),
            intel.quantity(),
            intel.vat(),
            intel.buyerPremiumPercentage(),
            intel.remarks(),
            // BID INTELLIGENCE FIELDS
            intel.startingBid(),
            intel.reservePrice(),
            intel.reserveMet(),
            intel.bidIncrement(),
intel.viewCount(),
|
||||
intel.firstBidTime(),
|
||||
intel.lastBidTime(),
|
||||
intel.bidVelocity(),
|
||||
null, // condition_score (computed separately)
|
||||
null // provenance_docs (computed separately)
|
||||
);
|
||||
}
|
||||
}
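The "closing soon" filter above keeps only lots whose remaining time is positive and at most `hoursUntilClose * 60` minutes. A minimal standalone sketch of that window check, using an illustrative `SimpleLot` record rather than the project's actual `Lot` type:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;

public class ClosingSoonDemo {
    // Illustrative stand-in for the project's Lot record
    record SimpleLot(String id, Instant closingTime) {
        long minutesUntilClose(Instant now) {
            return Duration.between(now, closingTime).toMinutes();
        }
    }

    // Keep lots closing within the window: 0 < remaining <= hours * 60 minutes
    static List<SimpleLot> closingSoon(List<SimpleLot> lots, int hoursUntilClose, Instant now) {
        return lots.stream()
                .filter(l -> l.closingTime() != null)
                .filter(l -> {
                    long m = l.minutesUntilClose(now);
                    return m > 0 && m <= hoursUntilClose * 60L;
                })
                .toList();
    }

    public static void main(String[] args) {
        var now = Instant.parse("2024-01-01T00:00:00Z");
        var lots = List.of(
                new SimpleLot("A", now.plus(Duration.ofMinutes(30))),   // inside 2h window
                new SimpleLot("B", now.plus(Duration.ofHours(5))),      // too far out
                new SimpleLot("C", now.minus(Duration.ofMinutes(10)))); // already closed
        System.out.println(closingSoon(lots, 2, now).size()); // prints 1
    }
}
```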
@@ -1,231 +0,0 @@
package auctiora;

import lombok.extern.slf4j.Slf4j;
import org.opencv.core.Core;

/**
 * Main entry point for Troostwijk Auction Monitor.
 *
 * ARCHITECTURE:
 * This project focuses on:
 * 1. Image processing and object detection
 * 2. Bid monitoring and notifications
 * 3. Data enrichment
 *
 * Auction/Lot scraping is handled by the external ARCHITECTURE-TROOSTWIJK-SCRAPER process.
 * That process populates the auctions and lots tables in the shared database.
 * This process reads from those tables and enriches them with:
 * - Downloaded images
 * - Object detection labels
 * - Bid monitoring
 * - Notifications
 */
@Slf4j
public class Main {

    @SuppressWarnings("restricted")
    private static Object loadOpenCV() {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        return null;
    }

    public static void main(String[] args) throws Exception {
        log.info("=== Troostwijk Auction Monitor ===\n");

        // Parse command line arguments
        var mode = args.length > 0 ? args[0] : "workflow";

        // Configuration - Windows paths
        var databaseFile = System.getenv().getOrDefault("DATABASE_FILE", "C:\\mnt\\okcomputer\\output\\cache.db");
        var notificationConfig = System.getenv().getOrDefault("NOTIFICATION_CONFIG", "desktop");

        // YOLO model paths (optional - monitor works without object detection)
        var yoloCfg = "models/yolov4.cfg";
        var yoloWeights = "models/yolov4.weights";
        var yoloClasses = "models/coco.names";

        // Load native OpenCV library (only if models exist)
        try {
            loadOpenCV();
            log.info("✓ OpenCV loaded");
        } catch (UnsatisfiedLinkError e) {
            log.info("⚠️ OpenCV not available - image detection disabled");
        }

        switch (mode.toLowerCase()) {
            case "workflow":
                runWorkflowMode(databaseFile, notificationConfig, yoloCfg, yoloWeights, yoloClasses);
                break;

            case "once":
                runOnceMode(databaseFile, notificationConfig, yoloCfg, yoloWeights, yoloClasses);
                break;

            case "legacy":
                runLegacyMode(databaseFile, notificationConfig, yoloCfg, yoloWeights, yoloClasses);
                break;

            case "status":
                showStatus(databaseFile, notificationConfig, yoloCfg, yoloWeights, yoloClasses);
                break;

            default:
                showUsage();
                break;
        }
    }

    /**
     * WORKFLOW MODE: Run orchestrated scheduled workflows (default)
     * This is the recommended mode for production use.
     */
    private static void runWorkflowMode(String dbPath, String notifConfig,
                                        String yoloCfg, String yoloWeights, String yoloClasses)
            throws Exception {

        log.info("🚀 Starting in WORKFLOW MODE (Orchestrated Scheduling)\n");

        var orchestrator = new WorkflowOrchestrator(
            dbPath, notifConfig, yoloCfg, yoloWeights, yoloClasses
        );

        // Show initial status
        orchestrator.printStatus();

        // Start all scheduled workflows
        orchestrator.startScheduledWorkflows();

        log.info("✓ All workflows are running");
        log.info("  - Scraper import: every 30 min");
        log.info("  - Image processing: every 1 hour");
        log.info("  - Bid monitoring: every 15 min");
        log.info("  - Closing alerts: every 5 min");
        log.info("\nPress Ctrl+C to stop.\n");

        // Add shutdown hook
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            log.info("\n🛑 Shutdown signal received...");
            orchestrator.shutdown();
        }));

        // Keep application alive
        try {
            Thread.sleep(Long.MAX_VALUE);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            orchestrator.shutdown();
        }
    }

    /**
     * ONCE MODE: Run complete workflow once and exit
     * Useful for cron jobs or scheduled tasks.
     */
    private static void runOnceMode(String dbPath, String notifConfig,
                                    String yoloCfg, String yoloWeights, String yoloClasses)
            throws Exception {

        log.info("🔄 Starting in ONCE MODE (Single Execution)\n");

        var orchestrator = new WorkflowOrchestrator(
            dbPath, notifConfig, yoloCfg, yoloWeights, yoloClasses
        );

        orchestrator.runCompleteWorkflowOnce();

        log.info("✓ Workflow execution completed. Exiting.\n");
    }

    /**
     * LEGACY MODE: Original monitoring approach
     * Kept for backward compatibility.
     */
    private static void runLegacyMode(String dbPath, String notifConfig,
                                      String yoloCfg, String yoloWeights, String yoloClasses)
            throws Exception {

        log.info("⚙️ Starting in LEGACY MODE\n");

        var monitor = new TroostwijkMonitor(dbPath, notifConfig,
                yoloCfg, yoloWeights, yoloClasses);

        log.info("\n📊 Current Database State:");
        monitor.printDatabaseStats();

        log.info("\n[1/2] Processing images...");
        monitor.processPendingImages();

        log.info("\n[2/2] Starting bid monitoring...");
        monitor.scheduleMonitoring();

        log.info("\n✓ Monitor is running. Press Ctrl+C to stop.\n");
        log.info("NOTE: This process expects auction/lot data from the external scraper.");
        log.info("      Make sure ARCHITECTURE-TROOSTWIJK-SCRAPER is running and populating the database.\n");

        try {
            Thread.sleep(Long.MAX_VALUE);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            log.info("Monitor interrupted, exiting.");
        }
    }

    /**
     * STATUS MODE: Show current status and exit
     */
    private static void showStatus(String dbPath, String notifConfig,
                                   String yoloCfg, String yoloWeights, String yoloClasses)
            throws Exception {

        log.info("📊 Checking Status...\n");

        var orchestrator = new WorkflowOrchestrator(
            dbPath, notifConfig, yoloCfg, yoloWeights, yoloClasses
        );

        orchestrator.printStatus();
    }

    /**
     * Show usage information
     */
    private static void showUsage() {
        log.info("Usage: java -jar troostwijk-monitor.jar [mode]\n");
        log.info("Modes:");
        log.info("  workflow - Run orchestrated scheduled workflows (default)");
        log.info("  once     - Run complete workflow once and exit (for cron)");
        log.info("  legacy   - Run original monitoring approach");
        log.info("  status   - Show current status and exit");
        log.info("\nEnvironment Variables:");
        log.info("  DATABASE_FILE       - Path to SQLite database");
        log.info("                        (default: C:\\mnt\\okcomputer\\output\\cache.db)");
        log.info("  NOTIFICATION_CONFIG - 'desktop' or 'smtp:user:pass:email'");
        log.info("                        (default: desktop)");
        log.info("\nExamples:");
        log.info("  java -jar troostwijk-monitor.jar workflow");
        log.info("  java -jar troostwijk-monitor.jar once");
        log.info("  java -jar troostwijk-monitor.jar status");
        IO.println();
    }

    /**
     * Alternative entry point for container environments.
     * Simply keeps the container alive for manual commands.
     */
    public static void main2(String[] args) {
        if (args.length > 0) {
            log.info("Command mode - exiting to allow shell commands");
            return;
        }

        log.info("Troostwijk Monitor container is running and healthy.");
        log.info("Use 'docker exec' or 'dokku run' to execute commands.");

        try {
            Thread.sleep(Long.MAX_VALUE);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            log.info("Container interrupted, exiting.");
        }
    }
}
@@ -1,48 +1,49 @@
package auctiora;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;
import lombok.extern.slf4j.Slf4j;

import javax.mail.*;
import javax.mail.internet.*;
import lombok.extern.slf4j.Slf4j;
import java.awt.*;
import java.util.Date;
import java.util.Properties;

@Slf4j
public class NotificationService {

    private final Config config;

    public NotificationService(String cfg) {
        this.config = Config.parse(cfg);
public record NotificationService(Config cfg) {

    // Extra convenience constructor: raw string → Config
    public NotificationService(String raw) {
        this(Config.parse(raw));
    }

    public void sendNotification(String message, String title, int priority) {
        if (config.useDesktop()) sendDesktop(title, message, priority);
        if (config.useEmail()) sendEmail(title, message, priority);
    public void sendNotification(String msg, String title, int prio) {
        if (cfg.useDesktop()) sendDesktop(title, msg, prio);
        if (cfg.useEmail()) sendEmail(title, msg, prio);
    }

    private void sendDesktop(String title, String msg, int prio) {
        try {
            if (!SystemTray.isSupported()) {
                log.info("Desktop notifications not supported — " + title + " / " + msg);
                log.info("Desktop not supported: {}", title);
                return;
            }
            var tray = SystemTray.getSystemTray();
            var image = Toolkit.getDefaultToolkit().createImage(new byte[0]);
            var trayIcon = new TrayIcon(image, "NotificationService");
            trayIcon.setImageAutoSize(true);

            var tray = SystemTray.getSystemTray();
            var icon = new TrayIcon(
                Toolkit.getDefaultToolkit().createImage(new byte[0]),
                "notify"
            );
            icon.setImageAutoSize(true);
            tray.add(icon);

            var type = prio > 0 ? TrayIcon.MessageType.WARNING : TrayIcon.MessageType.INFO;
            tray.add(trayIcon);
            trayIcon.displayMessage(title, msg, type);
            icon.displayMessage(title, msg, type);

            Thread.sleep(2000);
            tray.remove(trayIcon);
            log.info("Desktop notification sent: " + title);
            tray.remove(icon);
            log.info("Desktop notification: {}", title);
        } catch (Exception e) {
            System.err.println("Desktop notification failed: " + e);
            log.warn("Desktop failed: {}", e.getMessage());
        }
    }

@@ -57,29 +58,32 @@ public class NotificationService {

        var session = Session.getInstance(props, new Authenticator() {

            @Override
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication(config.smtpUsername(), config.smtpPassword());
                return new PasswordAuthentication(cfg.smtpUsername(), cfg.smtpPassword());
            }
        });

        var m = new MimeMessage(session);
        m.setFrom(new InternetAddress(config.smtpUsername()));
        m.setRecipients(Message.RecipientType.TO, InternetAddress.parse(config.toEmail()));
        m.setFrom(new InternetAddress(cfg.smtpUsername()));
        m.setRecipients(Message.RecipientType.TO, InternetAddress.parse(cfg.toEmail()));
        m.setSubject("[Troostwijk] " + title);
        m.setText(msg);
        m.setSentDate(new Date());

        if (prio > 0) {
            m.setHeader("X-Priority", "1");
            m.setHeader("Importance", "High");
        }

        Transport.send(m);
        log.info("Email notification sent: " + title);
        log.info("Email notification: {}", title);
    } catch (Exception e) {
        log.info("Email notification failed: " + e);
        log.warn("Email failed: {}", e.getMessage());
    }
}

    private record Config(
    public record Config(
        boolean useDesktop,
        boolean useEmail,
        String smtpUsername,
@@ -87,16 +91,20 @@ public class NotificationService {
        String toEmail
    ) {

        static Config parse(String cfg) {
            if ("desktop".equalsIgnoreCase(cfg)) {
        public static Config parse(String raw) {
            if ("desktop".equalsIgnoreCase(raw)) {
                return new Config(true, false, null, null, null);
            } else if (cfg.startsWith("smtp:")) {
                var parts = cfg.split(":", -1); // Use -1 to include trailing empty strings
                if (parts.length < 4)
                    throw new IllegalArgumentException("Email config must be 'smtp:username:password:toEmail'");
                return new Config(true, true, parts[1], parts[2], parts[3]);
            }
            throw new IllegalArgumentException("Config must be 'desktop' or 'smtp:username:password:toEmail'");
            }

            if (raw != null && raw.startsWith("smtp:")) {
                var p = raw.split(":", -1);
                if (p.length < 4) {
                    throw new IllegalArgumentException("Format: smtp:username:password:toEmail");
                }
                return new Config(true, true, p[1], p[2], p[3]);
            }

            throw new IllegalArgumentException("Use 'desktop' or 'smtp:username:password:toEmail'");
        }
    }
}
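The notification config string parsed above is either `desktop` or `smtp:username:password:toEmail`. A minimal standalone sketch of that parsing logic (class and record names here are illustrative, not the project's actual types):

```java
public class ConfigParseDemo {
    record Config(boolean useDesktop, boolean useEmail,
                  String smtpUsername, String smtpPassword, String toEmail) {

        static Config parse(String raw) {
            if ("desktop".equalsIgnoreCase(raw)) {
                return new Config(true, false, null, null, null);
            }
            if (raw != null && raw.startsWith("smtp:")) {
                // limit -1 keeps trailing empty fields, e.g. "smtp:u:p:"
                var p = raw.split(":", -1);
                if (p.length < 4) {
                    throw new IllegalArgumentException("Format: smtp:username:password:toEmail");
                }
                return new Config(true, true, p[1], p[2], p[3]);
            }
            throw new IllegalArgumentException("Use 'desktop' or 'smtp:username:password:toEmail'");
        }
    }

    public static void main(String[] args) {
        var c = Config.parse("smtp:bot:s3cret:alerts@example.com");
        System.out.println(c.useEmail() + " " + c.toEmail()); // prints "true alerts@example.com"
    }
}
```

Note that a password containing `:` would be split incorrectly by this scheme; the format assumes colon-free credentials.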
@@ -1,10 +1,13 @@
package auctiora;

import jakarta.annotation.PostConstruct;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

import lombok.extern.slf4j.Slf4j;
import nu.pattern.OpenCV;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
@@ -23,7 +26,7 @@ import static org.opencv.dnn.Dnn.DNN_TARGET_CPU;
/**
 * Service for performing object detection on images using OpenCV's DNN
 * module. The DNN module can load pre‑trained models from several
 * frameworks (Darknet, TensorFlow, ONNX, etc.)【784097309529506†L209-L233】. Here
 * frameworks (Darknet, TensorFlow, ONNX, etc.). Here
 * we load a YOLO model (Darknet) by specifying the configuration and
 * weights files. For each image we run a forward pass and return a
 * list of detected class labels.
@@ -34,13 +37,44 @@ import static org.opencv.dnn.Dnn.DNN_TARGET_CPU;
@Slf4j
public class ObjectDetectionService {

    private final Net net;
    private final List<String> classNames;
    private final boolean enabled;
    private int warnCount = 0;
    private static final int MAX_WARNINGS = 5;
    private Net net;
    private List<String> classNames;
    private boolean enabled;
    private int warnCount = 0;
    private static final int MAX_WARNINGS = 5;
    private static boolean openCvLoaded = false;

    private final String cfgPath;
    private final String weightsPath;
    private final String classNamesPath;

    ObjectDetectionService(String cfgPath, String weightsPath, String classNamesPath) throws IOException {
        this.cfgPath = cfgPath;
        this.weightsPath = weightsPath;
        this.classNamesPath = classNamesPath;
    }

    @PostConstruct
    void init() {
        // Load OpenCV native libraries first
        if (!openCvLoaded) {
            try {
                OpenCV.loadLocally();
                openCvLoaded = true;
                log.info("✓ OpenCV {} loaded successfully", Core.VERSION);
            } catch (Exception e) {
                log.warn("⚠️ Object detection disabled: OpenCV native libraries not loaded");
                enabled = false;
                net = null;
                classNames = new ArrayList<>();
                return;
            }
        }

        initializeModel();
    }

    private void initializeModel() {
        // Check if model files exist
        var cfgFile = Paths.get(cfgPath);
        var weightsFile = Paths.get(weightsPath);
@@ -53,44 +87,48 @@ public class ObjectDetectionService {
        log.info("  - {}", weightsPath);
        log.info("  - {}", classNamesPath);
        log.info("  Scraper will continue without image analysis.");
        this.enabled = false;
        this.net = null;
        this.classNames = new ArrayList<>();
        enabled = false;
        net = null;
        classNames = new ArrayList<>();
        return;
    }

    try {
        // Load network
        this.net = Dnn.readNetFromDarknet(cfgPath, weightsPath);

        net = Dnn.readNetFromDarknet(cfgPath, weightsPath);

        // Try to use GPU/CUDA if available, fallback to CPU
        try {
            this.net.setPreferableBackend(Dnn.DNN_BACKEND_CUDA);
            this.net.setPreferableTarget(Dnn.DNN_TARGET_CUDA);
            net.setPreferableBackend(Dnn.DNN_BACKEND_CUDA);
            net.setPreferableTarget(Dnn.DNN_TARGET_CUDA);
            log.info("✓ Object detection enabled with YOLO (CUDA/GPU acceleration)");
        } catch (Exception e) {
            // CUDA not available, try Vulkan for AMD GPUs
            try {
                this.net.setPreferableBackend(Dnn.DNN_BACKEND_VKCOM);
                this.net.setPreferableTarget(Dnn.DNN_TARGET_VULKAN);
                net.setPreferableBackend(Dnn.DNN_BACKEND_VKCOM);
                net.setPreferableTarget(Dnn.DNN_TARGET_VULKAN);
                log.info("✓ Object detection enabled with YOLO (Vulkan/GPU acceleration)");
            } catch (Exception e2) {
                // GPU not available, fallback to CPU
                this.net.setPreferableBackend(DNN_BACKEND_OPENCV);
                this.net.setPreferableTarget(DNN_TARGET_CPU);
                net.setPreferableBackend(DNN_BACKEND_OPENCV);
                net.setPreferableTarget(DNN_TARGET_CPU);
                log.info("✓ Object detection enabled with YOLO (CPU only)");
            }
        }

        // Load class names (one per line)
        this.classNames = Files.readAllLines(classNamesFile);
        this.enabled = true;
        classNames = Files.readAllLines(classNamesFile);
        enabled = true;
    } catch (UnsatisfiedLinkError e) {
        System.err.println("⚠️ Object detection disabled: OpenCV native libraries not loaded");
        throw new IOException("Failed to initialize object detection: OpenCV native libraries not loaded", e);
        log.error("⚠️ Object detection disabled: OpenCV native libraries not loaded", e);
        enabled = false;
        net = null;
        classNames = new ArrayList<>();
    } catch (Exception e) {
        System.err.println("⚠️ Object detection disabled: " + e.getMessage());
        throw new IOException("Failed to initialize object detection", e);
        log.error("⚠️ Object detection disabled: " + e.getMessage(), e);
        enabled = false;
        net = null;
        classNames = new ArrayList<>();
    }
}
/**
@@ -121,15 +159,15 @@ public class ObjectDetectionService {
        var confThreshold = 0.5f;
        for (var out : outs) {
            // YOLO output shape: [num_detections, 85] where 85 = 4 (bbox) + 1 (objectness) + 80 (classes)
            int numDetections = out.rows();
            int numElements = out.cols();
            int numDetections = out.rows();
            int numElements = out.cols();
            int expectedLength = 5 + classNames.size();

            if (numElements < expectedLength) {
                // Rate-limit warnings to prevent thread blocking from excessive logging
                if (warnCount < MAX_WARNINGS) {
                    log.warn("Output matrix has wrong dimensions: expected {} columns, got {}. Output shape: [{}, {}]",
                            expectedLength, numElements, numDetections, numElements);
                            expectedLength, numElements, numDetections, numElements);
                    warnCount++;
                    if (warnCount == MAX_WARNINGS) {
                        log.warn("Suppressing further dimension warnings (reached {} warnings)", MAX_WARNINGS);
@@ -137,27 +175,27 @@ public class ObjectDetectionService {
                }
                continue;
            }

            for (var i = 0; i < numDetections; i++) {
                // Get entire row (all 85 elements)
                var data = new double[numElements];
                for (int j = 0; j < numElements; j++) {
                    data[j] = out.get(i, j)[0];
                }

                // Extract objectness score (index 4) and class scores (index 5+)
                double objectness = data[4];
                if (objectness < confThreshold) {
                    continue; // Skip low-confidence detections
                }

                // Extract class scores
                var scores = new double[classNames.size()];
                System.arraycopy(data, 5, scores, 0, Math.min(scores.length, data.length - 5));

                var classId = argMax(scores);
                var confidence = scores[classId] * objectness; // Combine objectness with class confidence

                if (confidence > confThreshold) {
                    var label = classNames.get(classId);
                    if (!labels.contains(label)) {
@@ -166,7 +204,7 @@ public class ObjectDetectionService {
                    }
                }
            }

            // Release resources
            image.release();
            blob.release();
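The detection loop above treats each YOLO output row as `[cx, cy, w, h, objectness, class scores...]` and combines objectness with the best class score to get the final confidence. A self-contained sketch of that scoring step, detached from OpenCV (names like `classify` are illustrative):

```java
public class YoloRowDemo {
    // Index of the largest value in the array
    static int argMax(double[] v) {
        int best = 0;
        for (int i = 1; i < v.length; i++) {
            if (v[i] > v[best]) best = i;
        }
        return best;
    }

    // Returns the winning class index, or -1 if the detection is below threshold
    static int classify(double[] row, int numClasses, double confThreshold) {
        double objectness = row[4];              // index 4 = objectness score
        if (objectness < confThreshold) return -1;

        double[] scores = new double[numClasses];
        System.arraycopy(row, 5, scores, 0, numClasses); // class scores start at index 5

        int classId = argMax(scores);
        // Final confidence = objectness * best class score
        return scores[classId] * objectness > confThreshold ? classId : -1;
    }

    public static void main(String[] args) {
        // 4 bbox values, objectness 0.9, three class scores
        double[] row = {0.5, 0.5, 0.2, 0.2, 0.9, 0.1, 0.8, 0.05};
        System.out.println(classify(row, 3, 0.5)); // prints 1 (0.8 * 0.9 = 0.72 > 0.5)
    }
}
```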
@@ -5,6 +5,7 @@ import io.quarkus.scheduler.Scheduled;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Observes;
import jakarta.inject.Inject;
import lombok.extern.slf4j.Slf4j;
import org.eclipse.microprofile.config.inject.ConfigProperty;
import org.jboss.logging.Logger;

@@ -22,24 +23,14 @@ public class QuarkusWorkflowScheduler {

    private static final Logger LOG = Logger.getLogger(QuarkusWorkflowScheduler.class);

    @Inject
    DatabaseService db;
    @Inject DatabaseService db;
    @Inject NotificationService notifier;
    @Inject ObjectDetectionService detector;
    @Inject ImageProcessingService imageProcessor;
    @Inject LotEnrichmentService enrichmentService;

    @Inject
    NotificationService notifier;
    @ConfigProperty(name = "auction.database.path") String databasePath;

    @Inject
    ObjectDetectionService detector;

    @Inject
    ImageProcessingService imageProcessor;

    @Inject
    LotEnrichmentService enrichmentService;

    @ConfigProperty(name = "auction.database.path")
    String databasePath;

    /**
     * Triggered on application startup to enrich existing lots with bid intelligence
     */
@@ -108,41 +99,41 @@ public class QuarkusWorkflowScheduler {
        try {
            LOG.info("🖼️ [WORKFLOW 2] Processing pending images...");
            var start = System.currentTimeMillis();

            // Get images that have been downloaded but need object detection
            var pendingImages = db.getImagesNeedingDetection();

            if (pendingImages.isEmpty()) {
                LOG.info("  → No pending images to process");
                return;
            }

            // Limit batch size to prevent thread blocking (max 100 images per run)
            final int MAX_BATCH_SIZE = 100;
            int totalPending = pendingImages.size();
            int totalPending = pendingImages.size();
            if (totalPending > MAX_BATCH_SIZE) {
                LOG.infof("  → Found %d pending images, processing first %d (batch limit)",
                        totalPending, MAX_BATCH_SIZE);
                        totalPending, MAX_BATCH_SIZE);
                pendingImages = pendingImages.subList(0, MAX_BATCH_SIZE);
            } else {
                LOG.infof("  → Processing %d images", totalPending);
            }

            var processed = 0;
            var detected = 0;
            var failed = 0;

            for (var image : pendingImages) {
                try {
                    // Run object detection on already-downloaded image
                    if (imageProcessor.processImage(image.id(), image.filePath(), image.lotId())) {
                        processed++;

                        // Check if objects were detected
                        var labels = db.getImageLabels(image.id());
                        if (labels != null && !labels.isEmpty()) {
                            detected++;

                            // Send notification for interesting detections
                            if (labels.size() >= 3) {
                                notifier.sendNotification(
@@ -151,16 +142,16 @@ public class QuarkusWorkflowScheduler {
                                        String.join(", ", labels)),
                                        "Objects Detected",
                                        0
                                );
                                );
                            }
                        }
                    } else {
                        failed++;
                    }

                    // Rate limiting (lighter since no network I/O)
                    Thread.sleep(100);

                } catch (Exception e) {
                    failed++;
                    LOG.warnf("  ⚠️ Failed to process image: %s", e.getMessage());
@@ -170,7 +161,7 @@ public class QuarkusWorkflowScheduler {
            var duration = System.currentTimeMillis() - start;
            LOG.infof("  ✓ Processed %d/%d images, detected objects in %d, failed %d (%.1fs)",
                    processed, totalPending, detected, failed, duration / 1000.0);

            if (totalPending > MAX_BATCH_SIZE) {
                LOG.infof("  → %d images remaining for next run", totalPending - MAX_BATCH_SIZE);
            }
@@ -238,7 +229,7 @@ public class QuarkusWorkflowScheduler {
                lot.manufacturer(), lot.type(), lot.year(), lot.category(),
                lot.currentBid(), lot.currency(), lot.url(),
                lot.closingTime(), true
        );
        );
        db.updateLotNotificationFlags(updated);

        alertsSent++;
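The image workflow above caps each scheduled run at `MAX_BATCH_SIZE` images via `subList`, leaving the remainder for the next run. A minimal sketch of that cap, with illustrative names:

```java
import java.util.List;
import java.util.stream.IntStream;

public class BatchCapDemo {
    // Return at most maxBatchSize items; the caller processes the rest next run
    static <T> List<T> capBatch(List<T> pending, int maxBatchSize) {
        return pending.size() > maxBatchSize ? pending.subList(0, maxBatchSize) : pending;
    }

    public static void main(String[] args) {
        var items = IntStream.range(0, 250).boxed().toList();
        var batch = capBatch(items, 100);
        System.out.println(batch.size() + " processed, "
                + (items.size() - batch.size()) + " remaining"); // prints "100 processed, 150 remaining"
    }
}
```

Note that `subList` returns a view backed by the original list, which is fine here since the batch is only read, not structurally modified.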
@@ -142,29 +142,15 @@ public class RateLimitedHttpClient {
     * Determines max requests per second for a given host.
     */
    private int getMaxRequestsPerSecond(String host) {
        if (host.contains("troostwijk")) {
            return troostwijkMaxRequestsPerSecond;
        }
        return defaultMaxRequestsPerSecond;
        return host.contains("troostwijk") ? troostwijkMaxRequestsPerSecond : defaultMaxRequestsPerSecond;
    }

    /**
     * Extracts host from URI (e.g., "api.troostwijkauctions.com").
     */
    private String extractHost(URI uri) {
        return uri.getHost() != null ? uri.getHost() : uri.toString();
    }

    /**
     * Gets statistics for all hosts.
     */
    public Map<String, RequestStats> getAllStats() {
        return Map.copyOf(requestStats);
    }

    /**
     * Gets statistics for a specific host.
     */
    public RequestStats getStats(String host) {
        return requestStats.get(host);
    }
@@ -218,10 +204,7 @@ public class RateLimitedHttpClient {
        }
    }

    /**
     * Statistics tracker for HTTP requests per host.
     */
    public static class RequestStats {
    public static final class RequestStats {

        private final String host;
        private final AtomicLong totalRequests = new AtomicLong(0);
@@ -234,25 +217,16 @@ public class RateLimitedHttpClient {
            this.host = host;
        }

        void incrementTotal() {
            totalRequests.incrementAndGet();
        }
        void incrementTotal() { totalRequests.incrementAndGet(); }

        void recordSuccess(long durationMs) {
            successfulRequests.incrementAndGet();
            totalDurationMs.addAndGet(durationMs);
        }

        void incrementFailed() {
            failedRequests.incrementAndGet();
        }

        void incrementRateLimited() {
            rateLimitedRequests.incrementAndGet();
        }

        // Getters
        public String getHost() { return host; }
        void incrementFailed() { failedRequests.incrementAndGet(); }
        void incrementRateLimited() { rateLimitedRequests.incrementAndGet(); }
        public String getHost() { return host; }
        public long getTotalRequests() { return totalRequests.get(); }
        public long getSuccessfulRequests() { return successfulRequests.get(); }
        public long getFailedRequests() { return failedRequests.get(); }
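The refactor above collapses the per-host rate selection into a single ternary: hosts containing "troostwijk" get a dedicated request budget, everything else the default. A standalone sketch of that selection (the limit values here are illustrative, not the project's configuration):

```java
public class HostRateDemo {
    // Per-host request budget: auction API hosts get a stricter dedicated limit
    static int maxRequestsPerSecond(String host, int troostwijkLimit, int defaultLimit) {
        return host.contains("troostwijk") ? troostwijkLimit : defaultLimit;
    }

    public static void main(String[] args) {
        System.out.println(maxRequestsPerSecond("api.troostwijkauctions.com", 2, 10)); // prints 2
        System.out.println(maxRequestsPerSecond("example.com", 2, 10));                // prints 10
    }
}
```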
@@ -63,13 +63,15 @@ public class ScraperDataAdapter {
        lotIdStr, // Store full displayId for GraphQL queries
        rs.getString("title"),
        getStringOrDefault(rs, "description", ""),
        "", "", 0,
        getStringOrDefault(rs, "manufacturer", ""),
        getStringOrDefault(rs, "type", ""),
        getIntOrDefault(rs, "year", 0),
        getStringOrDefault(rs, "category", ""),
        bid,
        currency,
        rs.getString("url"),
        closing,
        false,
        getBooleanOrDefault(rs, "closing_notified", false),
        // New intelligence fields - set to null for now
        null, null, null, null, null, null, null, null,
        null, null, null, null, null, null, null,
@@ -166,4 +168,9 @@ public class ScraperDataAdapter {
        var v = rs.getInt(col);
        return rs.wasNull() ? def : v;
    }

    private static boolean getBooleanOrDefault(ResultSet rs, String col, boolean def) throws SQLException {
        var v = rs.getInt(col);
        return rs.wasNull() ? def : v != 0;
    }
}
@@ -5,7 +5,9 @@ import jakarta.ws.rs.Path;
 import jakarta.ws.rs.Produces;
 import jakarta.ws.rs.core.MediaType;
 import lombok.extern.slf4j.Slf4j;
+import nu.pattern.OpenCV;
 import org.eclipse.microprofile.config.inject.ConfigProperty;
+import org.opencv.core.Core;
 import java.time.Instant;
 import java.time.ZoneId;
 import java.time.format.DateTimeFormatter;
@@ -19,18 +21,11 @@ public class StatusResource {
     DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss z")
         .withZone(ZoneId.systemDefault());

-    @ConfigProperty(name = "application.version", defaultValue = "1.0-SNAPSHOT")
-    String appVersion;
-    @ConfigProperty(name = "application.groupId")
-    String groupId;
-
-    @ConfigProperty(name = "application.artifactId")
-    String artifactId;
-
-    @ConfigProperty(name = "application.version")
-    String version;
+    @ConfigProperty(name = "application.version", defaultValue = "1.0-SNAPSHOT") String appVersion;
+    @ConfigProperty(name = "application.groupId") String groupId;
+    @ConfigProperty(name = "application.artifactId") String artifactId;
+    @ConfigProperty(name = "application.version") String version;

     // Java 16+ Record for structured response
     public record StatusResponse(
         String groupId,
         String artifactId,
@@ -47,8 +42,6 @@ public class StatusResource {
     @Path("/status")
     @Produces(MediaType.APPLICATION_JSON)
     public StatusResponse getStatus() {
-        log.info("Status endpoint called");
-
         return new StatusResponse(groupId, artifactId, version,
             "running",
             FORMATTER.format(Instant.now()),
@@ -63,8 +56,6 @@ public class StatusResource {
     @Path("/hello")
     @Produces(MediaType.APPLICATION_JSON)
     public Map<String, String> sayHello() {
-        log.info("hello endpoint called");
-
         return Map.of(
             "message", "Hello from Scrape-UI!",
             "timestamp", FORMATTER.format(Instant.now()),
@@ -74,11 +65,10 @@ public class StatusResource {

     private String getOpenCvVersion() {
         try {
-            // Load OpenCV if not already loaded (safe to call multiple times)
-            nu.pattern.OpenCV.loadLocally();
-            return org.opencv.core.Core.VERSION;
+            // OpenCV is already loaded by AuctionMonitorProducer
+            return Core.VERSION;
         } catch (Exception e) {
-            return "4.9.0 (default)";
+            return "Not loaded";
         }
     }
 }
@@ -44,15 +44,15 @@ public class TroostwijkGraphQLClient {
         }

         try {
-            String query = buildLotQuery();
-            String variables = buildVariables(displayId);
+            var query = buildLotQuery();
+            var variables = buildVariables(displayId);

             // Proper GraphQL request format with query and variables
-            String requestBody = String.format(
+            var requestBody = String.format(
                 "{\"query\":\"%s\",\"variables\":%s}",
                 escapeJson(query),
                 variables
             );

             var request = java.net.http.HttpRequest.newBuilder()
                 .uri(java.net.URI.create(GRAPHQL_ENDPOINT))
@@ -87,15 +87,15 @@ public class TroostwijkGraphQLClient {
         List<LotIntelligence> results = new ArrayList<>();

         // Split into batches of 50 to avoid query size limits
-        int batchSize = 50;
-        for (int i = 0; i < lotIds.size(); i += batchSize) {
-            int end = Math.min(i + batchSize, lotIds.size());
-            List<Long> batch = lotIds.subList(i, end);
+        var batchSize = 50;
+        for (var i = 0; i < lotIds.size(); i += batchSize) {
+            var end = Math.min(i + batchSize, lotIds.size());
+            var batch = lotIds.subList(i, end);

             try {
-                String query = buildBatchLotQuery(batch);
-                String requestBody = String.format("{\"query\":\"%s\"}",
-                    escapeJson(query));
+                var query = buildBatchLotQuery(batch);
+                var requestBody = String.format("{\"query\":\"%s\"}",
+                    escapeJson(query));

                 var request = java.net.http.HttpRequest.newBuilder()
                     .uri(java.net.URI.create(GRAPHQL_ENDPOINT))
@@ -162,9 +162,9 @@ public class TroostwijkGraphQLClient {
     }

     private String buildBatchLotQuery(List<Long> lotIds) {
-        StringBuilder query = new StringBuilder("query {");
+        var query = new StringBuilder("query {");

-        for (int i = 0; i < lotIds.size(); i++) {
+        for (var i = 0; i < lotIds.size(); i++) {
             query.append(String.format("""
                 lot%d: lot(id: %d) {
                     id
@@ -196,9 +196,9 @@ public class TroostwijkGraphQLClient {
             log.debug("GraphQL API returned HTML instead of JSON - likely auth required or wrong endpoint");
             return null;
         }

-        JsonNode root = objectMapper.readTree(json);
-        JsonNode lotNode = root.path("data").path("lotDetails");
-
+        var root = objectMapper.readTree(json);
+        var lotNode = root.path("data").path("lotDetails");

         if (lotNode.isMissingNode()) {
             log.debug("No lotDetails in GraphQL response");
@@ -206,19 +206,19 @@ public class TroostwijkGraphQLClient {
         }

         // Extract location from nested object
-        JsonNode locationNode = lotNode.path("location");
-        String city = locationNode.isMissingNode() ? null : getStringOrNull(locationNode, "city");
-        String countryCode = locationNode.isMissingNode() ? null : getStringOrNull(locationNode, "country");
+        var locationNode = lotNode.path("location");
+        var city = locationNode.isMissingNode() ? null : getStringOrNull(locationNode, "city");
+        var countryCode = locationNode.isMissingNode() ? null : getStringOrNull(locationNode, "country");

         // Extract bids count from nested biddingStatistics
-        JsonNode statsNode = lotNode.path("biddingStatistics");
-        Integer bidsCount = statsNode.isMissingNode() ? null : getIntOrNull(statsNode, "numberOfBids");
+        var statsNode = lotNode.path("biddingStatistics");
+        var bidsCount = statsNode.isMissingNode() ? null : getIntOrNull(statsNode, "numberOfBids");

         // Convert cents to euros for estimates
-        Long estimatedMinCents = getLongOrNull(lotNode, "estimatedValueInCentsMin");
-        Long estimatedMaxCents = getLongOrNull(lotNode, "estimatedValueInCentsMax");
-        Double estimatedMin = estimatedMinCents != null ? estimatedMinCents.doubleValue() : null;
-        Double estimatedMax = estimatedMaxCents != null ? estimatedMaxCents.doubleValue() : null;
+        var estimatedMinCents = getLongOrNull(lotNode, "estimatedValueInCentsMin");
+        var estimatedMaxCents = getLongOrNull(lotNode, "estimatedValueInCentsMax");
+        var estimatedMin = estimatedMinCents != null ? estimatedMinCents.doubleValue() : null;
+        var estimatedMax = estimatedMaxCents != null ? estimatedMaxCents.doubleValue() : null;

         return new LotIntelligence(
             lotId,
@@ -257,11 +257,11 @@ public class TroostwijkGraphQLClient {
         List<LotIntelligence> results = new ArrayList<>();

         try {
-            JsonNode root = objectMapper.readTree(json);
-            JsonNode data = root.path("data");
+            var root = objectMapper.readTree(json);
+            var data = root.path("data");

-            for (int i = 0; i < lotIds.size(); i++) {
-                JsonNode lotNode = data.path("lot" + i);
+            for (var i = 0; i < lotIds.size(); i++) {
+                var lotNode = data.path("lot" + i);
                 if (!lotNode.isMissingNode()) {
                     var intelligence = parseLotIntelligenceFromNode(lotNode, lotIds.get(i));
                     if (intelligence != null) {
@@ -313,17 +313,17 @@ public class TroostwijkGraphQLClient {

     private Double calculateBidVelocity(JsonNode lotNode) {
         try {
-            Integer bidsCount = getIntOrNull(lotNode, "bidsCount");
-            String firstBidStr = getStringOrNull(lotNode, "firstBidTime");
+            var bidsCount = getIntOrNull(lotNode, "bidsCount");
+            var firstBidStr = getStringOrNull(lotNode, "firstBidTime");

             if (bidsCount == null || firstBidStr == null || bidsCount == 0) {
                 return null;
             }

-            LocalDateTime firstBid = parseDateTime(firstBidStr);
-
+            var firstBid = parseDateTime(firstBidStr);
             if (firstBid == null) return null;

-            long hoursElapsed = java.time.Duration.between(firstBid, LocalDateTime.now()).toHours();
-
+            var hoursElapsed = java.time.Duration.between(firstBid, LocalDateTime.now()).toHours();
             if (hoursElapsed == 0) return (double) bidsCount;

             return (double) bidsCount / hoursElapsed;
@@ -352,27 +352,27 @@ public class TroostwijkGraphQLClient {
     }

     private Integer getIntOrNull(JsonNode node, String field) {
-        JsonNode fieldNode = node.path(field);
+        var fieldNode = node.path(field);
         return fieldNode.isNumber() ? fieldNode.asInt() : null;
     }

     private Long getLongOrNull(JsonNode node, String field) {
-        JsonNode fieldNode = node.path(field);
+        var fieldNode = node.path(field);
         return fieldNode.isNumber() ? fieldNode.asLong() : null;
     }

     private Double getDoubleOrNull(JsonNode node, String field) {
-        JsonNode fieldNode = node.path(field);
+        var fieldNode = node.path(field);
         return fieldNode.isNumber() ? fieldNode.asDouble() : null;
     }

     private String getStringOrNull(JsonNode node, String field) {
-        JsonNode fieldNode = node.path(field);
+        var fieldNode = node.path(field);
         return fieldNode.isTextual() ? fieldNode.asText() : null;
     }

     private Boolean getBooleanOrNull(JsonNode node, String field) {
-        JsonNode fieldNode = node.path(field);
+        var fieldNode = node.path(field);
         return fieldNode.isBoolean() ? fieldNode.asBoolean() : null;
     }
 }
@@ -60,25 +60,21 @@ public class TroostwijkMonitor {
             for (var lot : activeLots) {
                 checkAndUpdateLot(lot);
             }
-        } catch (SQLException e) {
+        } catch (Exception e) {
             log.error("Error during scheduled monitoring", e);
         }
     }

     private void checkAndUpdateLot(Lot lot) {
         refreshLotBid(lot);

         var minutesLeft = lot.minutesUntilClose();
         if (minutesLeft < 30) {
             if (minutesLeft <= 5 && !lot.closingNotified()) {
                 notifier.sendNotification(
                     "Kavel " + lot.lotId() + " sluit binnen " + minutesLeft + " min.",
                     "Lot nearing closure", 1);
-                try {
-                    db.updateLotNotificationFlags(lot.withClosingNotified(true));
-                } catch (SQLException e) {
-                    throw new RuntimeException(e);
-                }
+                db.updateLotNotificationFlags(lot.withClosingNotified(true));
             }
             scheduler.schedule(() -> refreshLotBid(lot), 5, TimeUnit.MINUTES);
         }
@@ -109,12 +105,12 @@ public class TroostwijkMonitor {
                     notifier.sendNotification(msg, "Kavel bieding update", 0);
                 }
             }
-        } catch (IOException | InterruptedException | SQLException e) {
+        } catch (IOException | InterruptedException e) {
             log.warn("Failed to refresh bid for lot {}", lot.lotId(), e);
             if (e instanceof InterruptedException) Thread.currentThread().interrupt();
         }
     }

     public void printDatabaseStats() {
         try {
             var allLots = db.getAllLots();
@@ -123,9 +119,9 @@ public class TroostwijkMonitor {
                 allLots.size(), imageCount);
             if (!allLots.isEmpty()) {
                 var sum = allLots.stream().mapToDouble(Lot::currentBid).sum();
-                log.info("Total current bids: €{:.2f}", sum);
+                log.info("Total current bids: €{}", String.format("%.2f", sum));
             }
-        } catch (SQLException e) {
+        } catch (Exception e) {
             log.warn("Could not retrieve database stats", e);
         }
     }
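The formatting fix in the last hunk is worth spelling out: SLF4J's `{}` placeholder performs plain `toString()` substitution, so the removed Python-style `{:.2f}` spec was emitted literally instead of rounding. A minimal sketch of the corrected pattern; the explicit `Locale.ROOT` is an addition of this sketch (the diff uses the default locale, where `%.2f` may render a comma decimal separator):

```java
import java.util.Locale;

public class BidFormatSketch {
    public static void main(String[] args) {
        double sum = 1234.5;
        // SLF4J "{}" would print 1234.5 as-is; pre-format for fixed precision.
        // Locale.ROOT pins the decimal point so comma-decimal locales don't yield "1234,50".
        String fixed = String.format(Locale.ROOT, "%.2f", sum);
        System.out.println("Total current bids: €" + fixed);
    }
}
```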
@@ -50,25 +50,25 @@ public class ValuationAnalyticsResource {
     public Response calculateValuation(ValuationRequest request) {
         try {
             LOG.infof("Valuation request for lot: %s", request.lotId);
-            long startTime = System.currentTimeMillis();
+            var startTime = System.currentTimeMillis();

             // Step 1: Fetch comparable sales from database
-            List<ComparableLot> comparables = fetchComparables(request);
+            var comparables = fetchComparables(request);

             // Step 2: Calculate Fair Market Value (FMV)
-            FairMarketValue fmv = calculateFairMarketValue(request, comparables);
+            var fmv = calculateFairMarketValue(request, comparables);

             // Step 3: Calculate undervaluation score
-            double undervaluationScore = calculateUndervaluationScore(request, fmv.value);
+            var undervaluationScore = calculateUndervaluationScore(request, fmv.value);

             // Step 4: Predict final price
-            PricePrediction prediction = calculateFinalPrice(request, fmv.value);
+            var prediction = calculateFinalPrice(request, fmv.value);

             // Step 5: Generate bidding strategy
-            BiddingStrategy strategy = generateBiddingStrategy(request, fmv, prediction);
+            var strategy = generateBiddingStrategy(request, fmv, prediction);

             // Step 6: Compile response
-            ValuationResponse response = new ValuationResponse();
+            var response = new ValuationResponse();
             response.lotId = request.lotId;
             response.timestamp = LocalDateTime.now().toString();
             response.fairMarketValue = fmv;
@@ -76,8 +76,8 @@ public class ValuationAnalyticsResource {
             response.pricePrediction = prediction;
             response.biddingStrategy = strategy;
             response.parameters = request;

-            long duration = System.currentTimeMillis() - startTime;
-
+            var duration = System.currentTimeMillis() - startTime;
             LOG.infof("Valuation completed in %d ms", duration);

             return Response.ok(response).build();
@@ -115,24 +115,24 @@ public class ValuationAnalyticsResource {
      * Where weights are exponential/logistic functions of similarity
      */
     private FairMarketValue calculateFairMarketValue(ValuationRequest req, List<ComparableLot> comparables) {
-        double weightedSum = 0.0;
-        double weightSum = 0.0;
+        var weightedSum = 0.0;
+        var weightSum = 0.0;
         List<WeightedComparable> weightedComps = new ArrayList<>();

-        for (ComparableLot comp : comparables) {
+        for (var comp : comparables) {
             // Condition weight: ω_c = exp(-λ_c · |C_target - C_i|)
-            double omegaC = Math.exp(-0.693 * Math.abs(req.conditionScore - comp.conditionScore));
+            var omegaC = Math.exp(-0.693 * Math.abs(req.conditionScore - comp.conditionScore));

             // Time weight: ω_t = exp(-λ_t · |T_target - T_i|)
-            double omegaT = Math.exp(-0.048 * Math.abs(req.manufacturingYear - comp.manufacturingYear));
+            var omegaT = Math.exp(-0.048 * Math.abs(req.manufacturingYear - comp.manufacturingYear));

             // Provenance weight: ω_p = 1 + δ_p · (P_target - P_i)
-            double omegaP = 1 + 0.15 * ((req.provenanceDocs > 0 ? 1 : 0) - comp.hasProvenance);
+            var omegaP = 1 + 0.15 * ((req.provenanceDocs > 0 ? 1 : 0) - comp.hasProvenance);

             // Historical weight: ω_h = 1 / (1 + e^(-kh · (D_i - D_median)))
-            double omegaH = 1.0 / (1 + Math.exp(-0.01 * (comp.daysAgo - 40)));
-
-            double totalWeight = omegaC * omegaT * omegaP * omegaH;
+            var omegaH = 1.0 / (1 + Math.exp(-0.01 * (comp.daysAgo - 40)));
+
+            var totalWeight = omegaC * omegaT * omegaP * omegaH;

             weightedSum += comp.finalPrice * totalWeight;
             weightSum += totalWeight;
@@ -140,20 +140,20 @@ public class ValuationAnalyticsResource {
             // Store for transparency
             weightedComps.add(new WeightedComparable(comp, totalWeight, omegaC, omegaT, omegaP, omegaH));
         }

-        double baseFMV = weightSum > 0 ? weightedSum / weightSum : (req.estimatedMin + req.estimatedMax) / 2;
-
+        var baseFMV = weightSum > 0 ? weightedSum / weightSum : (req.estimatedMin + req.estimatedMax) / 2;

         // Apply condition multiplier: M_cond = exp(α_c · √C_target - β_c)
-        double conditionMultiplier = Math.exp(0.15 * Math.sqrt(req.conditionScore) - 0.40);
+        var conditionMultiplier = Math.exp(0.15 * Math.sqrt(req.conditionScore) - 0.40);
         baseFMV *= conditionMultiplier;

         // Apply provenance premium: Δ_prov = V_base · (η_0 + η_1 · ln(1 + N_docs))
         if (req.provenanceDocs > 0) {
-            double provenancePremium = 0.08 + 0.035 * Math.log(1 + req.provenanceDocs);
+            var provenancePremium = 0.08 + 0.035 * Math.log(1 + req.provenanceDocs);
             baseFMV *= (1 + provenancePremium);
         }

-        FairMarketValue fmv = new FairMarketValue();
-
+        var fmv = new FairMarketValue();
         fmv.value = Math.round(baseFMV * 100.0) / 100.0;
         fmv.conditionMultiplier = Math.round(conditionMultiplier * 1000.0) / 1000.0;
         fmv.provenancePremium = req.provenanceDocs > 0 ? 0.08 + 0.035 * Math.log(1 + req.provenanceDocs) : 0.0;
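The weighted-average FMV above can be checked with a throwaway harness. A self-contained sketch, not the project's code: the `Comp` record and constants (0.693, 0.048, 0.15, 0.01, median 40 days) mirror the hunk, but the inputs are illustrative. Note that a comparable identical to the target and exactly 40 days old gets weight 0.5, because ω_h is a logistic centered on the 40-day median:

```java
import java.util.List;

public class FmvSketch {
    // Simplified comparable: only the fields the weighting formula touches.
    record Comp(double conditionScore, int manufacturingYear, int hasProvenance,
                int daysAgo, double finalPrice) {}

    static double weight(double targetCondition, int targetYear, int targetProv, Comp c) {
        double omegaC = Math.exp(-0.693 * Math.abs(targetCondition - c.conditionScore())); // halves per condition point
        double omegaT = Math.exp(-0.048 * Math.abs(targetYear - c.manufacturingYear()));   // ~5% decay per year
        double omegaP = 1 + 0.15 * (targetProv - c.hasProvenance());
        double omegaH = 1.0 / (1 + Math.exp(-0.01 * (c.daysAgo() - 40)));                  // logistic around 40-day median
        return omegaC * omegaT * omegaP * omegaH;
    }

    static double fmv(double targetCondition, int targetYear, int targetProv, List<Comp> comps) {
        double weightedSum = 0, weightSum = 0;
        for (Comp c : comps) {
            double w = weight(targetCondition, targetYear, targetProv, c);
            weightedSum += c.finalPrice() * w;
            weightSum += w;
        }
        return weightSum > 0 ? weightedSum / weightSum : 0;
    }

    public static void main(String[] args) {
        var comps = List.of(
                new Comp(8.0, 2015, 1, 40, 1000.0),   // identical to target
                new Comp(6.0, 2010, 0, 40, 600.0));   // worse condition, older
        System.out.println(fmv(8.0, 2015, 1, comps)); // pulled toward the closer comparable
    }
}
```

Dissimilar comparables still contribute, just with exponentially smaller weight, so the estimate lands between the two sale prices but much nearer the identical lot's 1000.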
@@ -170,12 +170,12 @@ public class ValuationAnalyticsResource {
      */
     private double calculateUndervaluationScore(ValuationRequest req, double fmv) {
         if (fmv <= 0) return 0.0;

-        double priceGap = (fmv - req.currentBid) / fmv;
-        double velocityFactor = 1 + req.bidVelocity / 10.0;
-        double watchRatio = Math.log(1 + req.watchCount / Math.max(req.bidCount, 1));
-
-        double uScore = priceGap * req.marketVolatility * velocityFactor * watchRatio;
-
+        var priceGap = (fmv - req.currentBid) / fmv;
+        var velocityFactor = 1 + req.bidVelocity / 10.0;
+        var watchRatio = Math.log(1 + req.watchCount / Math.max(req.bidCount, 1));
+
+        var uScore = priceGap * req.marketVolatility * velocityFactor * watchRatio;

         return Math.max(0.0, Math.round(uScore * 1000.0) / 1000.0);
     }
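The undervaluation score above is easy to exercise with scalar inputs. A sketch with illustrative numbers; one deliberate difference is flagged in the comment: this sketch casts `watchCount` to `double`, whereas `req.watchCount / Math.max(req.bidCount, 1)` in the hunk truncates via integer division if both fields are `int` (which may or may not be intended there):

```java
public class UScoreSketch {
    // Mirrors calculateUndervaluationScore from the hunk above, with scalar inputs.
    static double uScore(double fmv, double currentBid, double marketVolatility,
                         double bidVelocity, int watchCount, int bidCount) {
        if (fmv <= 0) return 0.0;
        double priceGap = (fmv - currentBid) / fmv;      // relative discount vs. FMV
        double velocityFactor = 1 + bidVelocity / 10.0;  // bids-per-hour boost
        // Cast avoids integer division; the diff divides int by int.
        double watchRatio = Math.log(1 + (double) watchCount / Math.max(bidCount, 1));
        double u = priceGap * marketVolatility * velocityFactor * watchRatio;
        return Math.max(0.0, Math.round(u * 1000.0) / 1000.0);
    }

    public static void main(String[] args) {
        // 40% below FMV, volatility 0.5, 2 bids/hour, 30 watchers vs. 3 bids
        System.out.println(uScore(1000, 600, 0.5, 2, 30, 3));
    }
}
```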
@@ -186,22 +186,22 @@ public class ValuationAnalyticsResource {
      */
     private PricePrediction calculateFinalPrice(ValuationRequest req, double fmv) {
         // Bid momentum error: ε_bid = tanh(φ_1 · Λ_b - φ_2 · P_current/FMV)
-        double epsilonBid = Math.tanh(0.15 * req.bidVelocity - 0.10 * (req.currentBid / fmv));
+        var epsilonBid = Math.tanh(0.15 * req.bidVelocity - 0.10 * (req.currentBid / fmv));

         // Time pressure error: ε_time = ψ · exp(-t_close/30)
-        double epsilonTime = 0.20 * Math.exp(-req.minutesUntilClose / 30.0);
+        var epsilonTime = 0.20 * Math.exp(-req.minutesUntilClose / 30.0);

         // Competition error: ε_comp = ρ · ln(1 + W_watch/50)
-        double epsilonComp = 0.08 * Math.log(1 + req.watchCount / 50.0);
-
-        double predictedPrice = fmv * (1 + epsilonBid + epsilonTime + epsilonComp);
+        var epsilonComp = 0.08 * Math.log(1 + req.watchCount / 50.0);
+
+        var predictedPrice = fmv * (1 + epsilonBid + epsilonTime + epsilonComp);

         // 95% confidence interval: ± 1.96 · σ_residual
-        double residualStdDev = fmv * 0.08; // Mock residual standard deviation
-        double ciLower = predictedPrice - 1.96 * residualStdDev;
-        double ciUpper = predictedPrice + 1.96 * residualStdDev;
-
-        PricePrediction pred = new PricePrediction();
+        var residualStdDev = fmv * 0.08; // Mock residual standard deviation
+        var ciLower = predictedPrice - 1.96 * residualStdDev;
+        var ciUpper = predictedPrice + 1.96 * residualStdDev;
+
+        var pred = new PricePrediction();
         pred.predictedPrice = Math.round(predictedPrice * 100.0) / 100.0;
         pred.confidenceIntervalLower = Math.round(ciLower * 100.0) / 100.0;
         pred.confidenceIntervalUpper = Math.round(ciUpper * 100.0) / 100.0;
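The closing-price adjustment above composes three bounded error terms on top of the FMV. A standalone sketch of that composition (constants taken from the diff, inputs illustrative): with no bid momentum and a distant closing time all three terms vanish and the prediction collapses to the FMV, while time pressure and watcher count push it upward by at most ~20% and ~8·ln(1 + W/50)% respectively.

```java
public class PredictionSketch {
    // predicted = FMV * (1 + ε_bid + ε_time + ε_comp), constants per the diff.
    static double predict(double fmv, double currentBid, double bidVelocity,
                          double minutesUntilClose, double watchCount) {
        double epsilonBid = Math.tanh(0.15 * bidVelocity - 0.10 * (currentBid / fmv)); // bounded in (-1, 1)
        double epsilonTime = 0.20 * Math.exp(-minutesUntilClose / 30.0);               // decays with time left
        double epsilonComp = 0.08 * Math.log(1 + watchCount / 50.0);                   // grows slowly with watchers
        return fmv * (1 + epsilonBid + epsilonTime + epsilonComp);
    }

    public static void main(String[] args) {
        // Far from closing, no momentum, no watchers: prediction ≈ FMV.
        System.out.println(predict(1000, 0, 0, 10_000, 0));
        // At the hammer with 50 watchers: time and competition terms lift the estimate.
        System.out.println(predict(1000, 0, 0, 0, 50));
    }
}
```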
@@ -218,7 +218,7 @@ public class ValuationAnalyticsResource {
      * Generates optimal bidding strategy based on market conditions
      */
     private BiddingStrategy generateBiddingStrategy(ValuationRequest req, FairMarketValue fmv, PricePrediction pred) {
-        BiddingStrategy strategy = new BiddingStrategy();
+        var strategy = new BiddingStrategy();

         // Determine competition level
         if (req.bidVelocity > 5.0) {
@@ -236,7 +236,7 @@ public class ValuationAnalyticsResource {
         strategy.recommendedTiming = "FINAL_10_MINUTES";

         // Adjust max bid based on undervaluation
-        double undervaluationScore = calculateUndervaluationScore(req, fmv.value);
+        var undervaluationScore = calculateUndervaluationScore(req, fmv.value);
         if (undervaluationScore > 0.25) {
             // Aggressive strategy for undervalued lots
             strategy.maxBid = fmv.value * (1 + 0.05); // Conservative overbid
@@ -270,7 +270,7 @@ public class ValuationAnalyticsResource {
      * Calculates confidence score based on number and quality of comparables
      */
     private double calculateFMVConfidence(int comparableCount, double totalWeight) {
-        double confidence = 0.5; // Base confidence
+        var confidence = 0.5; // Base confidence

         // Boost for more comparables
         confidence += Math.min(comparableCount * 0.05, 0.3);
src/main/java/auctiora/db/AuctionRepository.java (new file, 164 lines)
@@ -0,0 +1,164 @@
package auctiora.db;

import auctiora.AuctionInfo;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.jdbi.v3.core.Jdbi;

import java.time.LocalDateTime;
import java.util.List;

/**
 * Repository for auction-related database operations using JDBI3.
 * Handles CRUD operations and queries for auctions.
 */
@Slf4j
@RequiredArgsConstructor
public class AuctionRepository {

    private final Jdbi jdbi;

    /**
     * Inserts or updates an auction record.
     * Handles both auction_id conflicts and url uniqueness constraints.
     */
    public void upsert(AuctionInfo auction) {
        jdbi.useTransaction(handle -> {
            try {
                // Try INSERT with ON CONFLICT on auction_id
                handle.createUpdate("""
                        INSERT INTO auctions (
                            auction_id, title, location, city, country, url, type, lot_count, closing_time, discovered_at
                        ) VALUES (
                            :auctionId, :title, :location, :city, :country, :url, :type, :lotCount, :closingTime, :discoveredAt
                        )
                        ON CONFLICT(auction_id) DO UPDATE SET
                            title = excluded.title,
                            location = excluded.location,
                            city = excluded.city,
                            country = excluded.country,
                            url = excluded.url,
                            type = excluded.type,
                            lot_count = excluded.lot_count,
                            closing_time = excluded.closing_time
                        """)
                        .bind("auctionId", auction.auctionId())
                        .bind("title", auction.title())
                        .bind("location", auction.location())
                        .bind("city", auction.city())
                        .bind("country", auction.country())
                        .bind("url", auction.url())
                        .bind("type", auction.typePrefix())
                        .bind("lotCount", auction.lotCount())
                        .bind("closingTime", auction.firstLotClosingTime() != null ? auction.firstLotClosingTime().toString() : null)
                        .bind("discoveredAt", java.time.Instant.now().getEpochSecond())
                        .execute();

            } catch (Exception e) {
                // If UNIQUE constraint on url fails, try updating by url
                String errMsg = e.getMessage();
                if (errMsg != null && (errMsg.contains("UNIQUE constraint failed") ||
                        errMsg.contains("PRIMARY KEY constraint failed"))) {
                    log.debug("Auction conflict detected, attempting update by URL: {}", auction.url());

                    int updated = handle.createUpdate("""
                            UPDATE auctions SET
                                auction_id = :auctionId,
                                title = :title,
                                location = :location,
                                city = :city,
                                country = :country,
                                type = :type,
                                lot_count = :lotCount,
                                closing_time = :closingTime
                            WHERE url = :url
                            """)
                            .bind("auctionId", auction.auctionId())
                            .bind("title", auction.title())
                            .bind("location", auction.location())
                            .bind("city", auction.city())
                            .bind("country", auction.country())
                            .bind("type", auction.typePrefix())
                            .bind("lotCount", auction.lotCount())
                            .bind("closingTime", auction.firstLotClosingTime() != null ? auction.firstLotClosingTime().toString() : null)
                            .bind("url", auction.url())
                            .execute();

                    if (updated == 0) {
                        log.warn("Failed to update auction by URL: {}", auction.url());
                    }
                } else {
                    log.error("Unexpected error upserting auction: {}", e.getMessage(), e);
                    throw e;
                }
            }
        });
    }

    /**
     * Retrieves all auctions from the database.
     */
    public List<AuctionInfo> getAll() {
        return jdbi.withHandle(handle ->
            handle.createQuery("SELECT * FROM auctions")
                .map((rs, ctx) -> {
                    String closingStr = rs.getString("closing_time");
                    LocalDateTime closingTime = null;
                    if (closingStr != null && !closingStr.isBlank()) {
                        try {
                            closingTime = LocalDateTime.parse(closingStr);
                        } catch (Exception e) {
                            log.warn("Invalid closing_time format: {}", closingStr);
                        }
                    }

                    return new AuctionInfo(
                        rs.getLong("auction_id"),
                        rs.getString("title"),
                        rs.getString("location"),
                        rs.getString("city"),
                        rs.getString("country"),
                        rs.getString("url"),
                        rs.getString("type"),
                        rs.getInt("lot_count"),
                        closingTime
                    );
                })
                .list()
        );
    }

    /**
     * Retrieves auctions filtered by country code.
     */
    public List<AuctionInfo> getByCountry(String countryCode) {
        return jdbi.withHandle(handle ->
            handle.createQuery("SELECT * FROM auctions WHERE country = :country")
                .bind("country", countryCode)
                .map((rs, ctx) -> {
                    String closingStr = rs.getString("closing_time");
                    LocalDateTime closingTime = null;
                    if (closingStr != null && !closingStr.isBlank()) {
                        try {
                            closingTime = LocalDateTime.parse(closingStr);
                        } catch (Exception e) {
                            log.warn("Invalid closing_time format: {}", closingStr);
                        }
                    }

                    return new AuctionInfo(
                        rs.getLong("auction_id"),
                        rs.getString("title"),
                        rs.getString("location"),
                        rs.getString("city"),
                        rs.getString("country"),
                        rs.getString("url"),
                        rs.getString("type"),
                        rs.getInt("lot_count"),
                        closingTime
                    );
                })
                .list()
        );
    }
}
src/main/java/auctiora/db/DatabaseSchema.java (new file, 154 lines)
@@ -0,0 +1,154 @@
package auctiora.db;

import lombok.experimental.UtilityClass;
import org.jdbi.v3.core.Jdbi;

/**
 * Database schema DDL definitions for all tables and indexes.
 * Uses text blocks (Java 15+) for clean SQL formatting.
 */
@UtilityClass
public class DatabaseSchema {

    /**
     * Initializes all database tables and indexes if they don't exist.
     */
    public void ensureSchema(Jdbi jdbi) {
        jdbi.useHandle(handle -> {
            // Enable WAL mode for better concurrent access
            handle.execute("PRAGMA journal_mode=WAL");
            handle.execute("PRAGMA busy_timeout=10000");
            handle.execute("PRAGMA synchronous=NORMAL");

            createTables(handle);
            createIndexes(handle);
        });
    }

    private void createTables(org.jdbi.v3.core.Handle handle) {
        // Cache table (for HTTP caching)
        handle.execute("""
            CREATE TABLE IF NOT EXISTS cache (
                url TEXT PRIMARY KEY,
                content BLOB,
                timestamp REAL,
                status_code INTEGER
            )""");

        // Auctions table (populated by external scraper)
        handle.execute("""
            CREATE TABLE IF NOT EXISTS auctions (
                auction_id TEXT PRIMARY KEY,
                url TEXT UNIQUE,
                title TEXT,
                location TEXT,
                lots_count INTEGER,
                first_lot_closing_time TEXT,
                scraped_at TEXT,
                city TEXT,
                country TEXT,
                type TEXT,
                lot_count INTEGER DEFAULT 0,
                closing_time TEXT,
                discovered_at INTEGER
            )""");

        // Lots table (populated by external scraper)
        handle.execute("""
            CREATE TABLE IF NOT EXISTS lots (
                lot_id TEXT PRIMARY KEY,
                auction_id TEXT,
                url TEXT UNIQUE,
                title TEXT,
                current_bid TEXT,
                bid_count INTEGER,
                closing_time TEXT,
                viewing_time TEXT,
                pickup_date TEXT,
                location TEXT,
                description TEXT,
                category TEXT,
                scraped_at TEXT,
                sale_id INTEGER,
                manufacturer TEXT,
                type TEXT,
                year INTEGER,
                currency TEXT DEFAULT 'EUR',
                closing_notified INTEGER DEFAULT 0,
                starting_bid TEXT,
                minimum_bid TEXT,
                status TEXT,
                brand TEXT,
                model TEXT,
                attributes_json TEXT,
                first_bid_time TEXT,
                last_bid_time TEXT,
                bid_velocity REAL,
                bid_increment REAL,
                year_manufactured INTEGER,
                condition_score REAL,
                condition_description TEXT,
                serial_number TEXT,
                damage_description TEXT,
                followers_count INTEGER DEFAULT 0,
                estimated_min_price REAL,
                estimated_max_price REAL,
                lot_condition TEXT,
                appearance TEXT,
                estimated_min REAL,
                estimated_max REAL,
                next_bid_step_cents INTEGER,
                condition TEXT,
                category_path TEXT,
                city_location TEXT,
                country_code TEXT,
                bidding_status TEXT,
                packaging TEXT,
                quantity INTEGER,
                vat REAL,
                buyer_premium_percentage REAL,
                remarks TEXT,
                reserve_price REAL,
                reserve_met INTEGER,
                view_count INTEGER,
                FOREIGN KEY (auction_id) REFERENCES auctions(auction_id)
            )""");

        // Images table (populated by external scraper with URLs and local_path)
        handle.execute("""
            CREATE TABLE IF NOT EXISTS images (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                lot_id TEXT,
                url TEXT,
                local_path TEXT,
                downloaded INTEGER DEFAULT 0,
                labels TEXT,
                processed_at INTEGER,
                FOREIGN KEY (lot_id) REFERENCES lots(lot_id)
            )""");

        // Bid history table
        handle.execute("""
            CREATE TABLE IF NOT EXISTS bid_history (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                lot_id TEXT NOT NULL,
                bid_amount REAL NOT NULL,
                bid_time TEXT NOT NULL,
                is_autobid INTEGER DEFAULT 0,
                bidder_id TEXT,
                bidder_number INTEGER,
                created_at TEXT DEFAULT CURRENT_TIMESTAMP,
                FOREIGN KEY (lot_id) REFERENCES lots(lot_id)
            )""");
    }

    private void createIndexes(org.jdbi.v3.core.Handle handle) {
        handle.execute("CREATE INDEX IF NOT EXISTS idx_timestamp ON cache(timestamp)");
        handle.execute("CREATE INDEX IF NOT EXISTS idx_auctions_country ON auctions(country)");
        handle.execute("CREATE INDEX IF NOT EXISTS idx_lots_sale_id ON lots(sale_id)");
        handle.execute("CREATE INDEX IF NOT EXISTS idx_images_lot_id ON images(lot_id)");
        handle.execute("CREATE UNIQUE INDEX IF NOT EXISTS idx_unique_lot_url ON images(lot_id, url)");
        handle.execute("CREATE INDEX IF NOT EXISTS idx_bid_history_lot_time ON bid_history(lot_id, bid_time)");
        handle.execute("CREATE INDEX IF NOT EXISTS idx_bid_history_bidder ON bid_history(bidder_id)");
    }
}
137 src/main/java/auctiora/db/ImageRepository.java Normal file
@@ -0,0 +1,137 @@
package auctiora.db;

import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.jdbi.v3.core.Jdbi;

import java.time.Instant;
import java.util.List;

/**
 * Repository for image-related database operations using JDBI3.
 * Handles image storage, object detection labels, and processing status.
 */
@Slf4j
@RequiredArgsConstructor
public class ImageRepository {

private final Jdbi jdbi;

/**
 * Image record containing all image metadata.
 */
public record ImageRecord(int id, long lotId, String url, String filePath, String labels) {}

/**
 * Minimal record for images needing object detection processing.
 */
public record ImageDetectionRecord(int id, long lotId, String filePath) {}

/**
 * Inserts a complete image record (for testing/legacy compatibility).
 * In production, scraper inserts with local_path, monitor updates labels via updateLabels.
 */
public void insert(long lotId, String url, String filePath, List<String> labels) {
jdbi.useHandle(handle ->
handle.createUpdate("""
INSERT INTO images (lot_id, url, local_path, labels, processed_at, downloaded)
VALUES (:lotId, :url, :localPath, :labels, :processedAt, 1)
""")
.bind("lotId", lotId)
.bind("url", url)
.bind("localPath", filePath)
.bind("labels", String.join(",", labels))
.bind("processedAt", Instant.now().getEpochSecond())
.execute()
);
}

/**
 * Updates the labels field for an image after object detection.
 */
public void updateLabels(int imageId, List<String> labels) {
jdbi.useHandle(handle ->
handle.createUpdate("UPDATE images SET labels = :labels, processed_at = :processedAt WHERE id = :id")
.bind("labels", String.join(",", labels))
.bind("processedAt", Instant.now().getEpochSecond())
.bind("id", imageId)
.execute()
);
}

/**
 * Gets the labels for a specific image.
 */
public List<String> getLabels(int imageId) {
return jdbi.withHandle(handle ->
handle.createQuery("SELECT labels FROM images WHERE id = :id")
.bind("id", imageId)
.mapTo(String.class)
.findOne()
.map(labelsStr -> {
if (labelsStr != null && !labelsStr.isEmpty()) {
return List.of(labelsStr.split(","));
}
return List.<String>of();
})
.orElse(List.of())
);
}

/**
 * Retrieves images for a specific lot.
 */
public List<ImageRecord> getImagesForLot(long lotId) {
return jdbi.withHandle(handle ->
handle.createQuery("SELECT id, lot_id, url, local_path, labels FROM images WHERE lot_id = :lotId")
.bind("lotId", lotId)
.map((rs, ctx) -> new ImageRecord(
rs.getInt("id"),
rs.getLong("lot_id"),
rs.getString("url"),
rs.getString("local_path"),
rs.getString("labels")
))
.list()
);
}

/**
 * Gets images that have been downloaded by the scraper but need object detection.
 * Only returns images that have local_path set but no labels yet.
 */
public List<ImageDetectionRecord> getImagesNeedingDetection() {
return jdbi.withHandle(handle ->
handle.createQuery("""
SELECT i.id, i.lot_id, i.local_path
FROM images i
WHERE i.local_path IS NOT NULL
AND i.local_path != ''
AND (i.labels IS NULL OR i.labels = '')
""")
.map((rs, ctx) -> {
// Extract numeric lot ID from TEXT field (e.g., "A1-34732-49" -> 3473249)
String lotIdStr = rs.getString("lot_id");
long lotId = auctiora.ScraperDataAdapter.extractNumericId(lotIdStr);

return new ImageDetectionRecord(
rs.getInt("id"),
lotId,
rs.getString("local_path")
);
})
.list()
);
}

/**
 * Gets the total number of images in the database.
 */
public int getImageCount() {
return jdbi.withHandle(handle ->
handle.createQuery("SELECT COUNT(*) FROM images")
.mapTo(Integer.class)
.one()
);
}
}
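The comment in `getImagesNeedingDetection` implies a mapping of "A1-34732-49" to 3473249, i.e. the auction prefix is dropped and the remaining digits are concatenated. The real `auctiora.ScraperDataAdapter.extractNumericId` is not part of this diff, so the following is only a hypothetical sketch of such a conversion:

```java
// Hypothetical sketch of extractNumericId; the real ScraperDataAdapter
// implementation is not shown in this diff.
public final class NumericIdSketch {
    /** Drops the auction prefix (e.g. "A1-") and keeps the remaining digits. */
    static long extractNumericId(String textId) {
        int dash = textId.indexOf('-');
        String tail = dash >= 0 ? textId.substring(dash + 1) : textId;
        String digits = tail.replaceAll("\\D", ""); // strip the remaining '-'
        return digits.isEmpty() ? -1L : Long.parseLong(digits);
    }

    public static void main(String[] args) {
        System.out.println(extractNumericId("A1-34732-49")); // 3473249, per the comment above
    }
}
```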
275 src/main/java/auctiora/db/LotRepository.java Normal file
@@ -0,0 +1,275 @@
package auctiora.db;

import auctiora.Lot;
import auctiora.BidHistory;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.jdbi.v3.core.Jdbi;

import java.time.LocalDateTime;
import java.util.List;

import static java.sql.Types.*;

/**
 * Repository for lot-related database operations using JDBI3.
 * Handles CRUD operations and queries for auction lots.
 */
@Slf4j
@RequiredArgsConstructor
public class LotRepository {

private final Jdbi jdbi;

/**
 * Inserts or updates a lot (upsert operation).
 * First tries UPDATE, then falls back to INSERT if the lot doesn't exist.
 */
public void upsert(Lot lot) {
jdbi.useTransaction(handle -> {
// Try UPDATE first
int updated = handle.createUpdate("""
UPDATE lots SET
sale_id = :saleId,
auction_id = :auctionId,
title = :title,
description = :description,
manufacturer = :manufacturer,
type = :type,
year = :year,
category = :category,
current_bid = :currentBid,
currency = :currency,
url = :url,
closing_time = :closingTime
WHERE lot_id = :lotId
""")
.bind("saleId", String.valueOf(lot.saleId()))
.bind("auctionId", String.valueOf(lot.saleId())) // auction_id = sale_id
.bind("title", lot.title())
.bind("description", lot.description())
.bind("manufacturer", lot.manufacturer())
.bind("type", lot.type())
.bind("year", lot.year())
.bind("category", lot.category())
.bind("currentBid", lot.currentBid())
.bind("currency", lot.currency())
.bind("url", lot.url())
.bind("closingTime", lot.closingTime() != null ? lot.closingTime().toString() : null)
.bind("lotId", String.valueOf(lot.lotId()))
.execute();

if (updated == 0) {
// No rows updated, perform INSERT
handle.createUpdate("""
INSERT OR IGNORE INTO lots (
lot_id, sale_id, auction_id, title, description, manufacturer, type, year,
category, current_bid, currency, url, closing_time, closing_notified
) VALUES (
:lotId, :saleId, :auctionId, :title, :description, :manufacturer, :type, :year,
:category, :currentBid, :currency, :url, :closingTime, :closingNotified
)
""")
.bind("lotId", String.valueOf(lot.lotId()))
.bind("saleId", String.valueOf(lot.saleId()))
.bind("auctionId", String.valueOf(lot.saleId())) // auction_id = sale_id
.bind("title", lot.title())
.bind("description", lot.description())
.bind("manufacturer", lot.manufacturer())
.bind("type", lot.type())
.bind("year", lot.year())
.bind("category", lot.category())
.bind("currentBid", lot.currentBid())
.bind("currency", lot.currency())
.bind("url", lot.url())
.bind("closingTime", lot.closingTime() != null ? lot.closingTime().toString() : null)
.bind("closingNotified", lot.closingNotified() ? 1 : 0)
.execute();
}
});
}

/**
 * Updates a lot with full intelligence data from GraphQL enrichment.
 * Includes all 24+ intelligence fields from the bidding platform.
 */
public void upsertWithIntelligence(Lot lot) {
jdbi.useHandle(handle -> {
var update = handle.createUpdate("""
UPDATE lots SET
sale_id = :saleId,
title = :title,
description = :description,
manufacturer = :manufacturer,
type = :type,
year = :year,
category = :category,
current_bid = :currentBid,
currency = :currency,
url = :url,
closing_time = :closingTime,
followers_count = :followersCount,
estimated_min = :estimatedMin,
estimated_max = :estimatedMax,
next_bid_step_cents = :nextBidStepInCents,
condition = :condition,
category_path = :categoryPath,
city_location = :cityLocation,
country_code = :countryCode,
bidding_status = :biddingStatus,
appearance = :appearance,
packaging = :packaging,
quantity = :quantity,
vat = :vat,
buyer_premium_percentage = :buyerPremiumPercentage,
remarks = :remarks,
starting_bid = :startingBid,
reserve_price = :reservePrice,
reserve_met = :reserveMet,
bid_increment = :bidIncrement,
view_count = :viewCount,
first_bid_time = :firstBidTime,
last_bid_time = :lastBidTime,
bid_velocity = :bidVelocity
WHERE lot_id = :lotId
""")
.bind("saleId", lot.saleId())
.bind("title", lot.title())
.bind("description", lot.description())
.bind("manufacturer", lot.manufacturer())
.bind("type", lot.type())
.bind("year", lot.year())
.bind("category", lot.category())
.bind("currentBid", lot.currentBid())
.bind("currency", lot.currency())
.bind("url", lot.url())
.bind("closingTime", lot.closingTime() != null ? lot.closingTime().toString() : null)
.bind("followersCount", lot.followersCount())
.bind("estimatedMin", lot.estimatedMin())
.bind("estimatedMax", lot.estimatedMax())
.bind("nextBidStepInCents", lot.nextBidStepInCents())
.bind("condition", lot.condition())
.bind("categoryPath", lot.categoryPath())
.bind("cityLocation", lot.cityLocation())
.bind("countryCode", lot.countryCode())
.bind("biddingStatus", lot.biddingStatus())
.bind("appearance", lot.appearance())
.bind("packaging", lot.packaging())
.bind("quantity", lot.quantity())
.bind("vat", lot.vat())
.bind("buyerPremiumPercentage", lot.buyerPremiumPercentage())
.bind("remarks", lot.remarks())
.bind("startingBid", lot.startingBid())
.bind("reservePrice", lot.reservePrice())
.bind("reserveMet", lot.reserveMet() != null && lot.reserveMet() ? 1 : null)
.bind("bidIncrement", lot.bidIncrement())
.bind("viewCount", lot.viewCount())
.bind("firstBidTime", lot.firstBidTime() != null ? lot.firstBidTime().toString() : null)
.bind("lastBidTime", lot.lastBidTime() != null ? lot.lastBidTime().toString() : null)
.bind("bidVelocity", lot.bidVelocity())
.bind("lotId", lot.lotId());

int updated = update.execute();
if (updated == 0) {
log.warn("Failed to update lot {} - lot not found in database", lot.lotId());
}
});
}

/**
 * Updates only the current bid for a lot (lightweight update).
 */
public void updateCurrentBid(Lot lot) {
jdbi.useHandle(handle ->
handle.createUpdate("UPDATE lots SET current_bid = :bid WHERE lot_id = :lotId")
.bind("bid", lot.currentBid())
.bind("lotId", String.valueOf(lot.lotId()))
.execute()
);
}

/**
 * Updates notification flags for a lot.
 */
public void updateNotificationFlags(Lot lot) {
jdbi.useHandle(handle ->
handle.createUpdate("UPDATE lots SET closing_notified = :notified WHERE lot_id = :lotId")
.bind("notified", lot.closingNotified() ? 1 : 0)
.bind("lotId", String.valueOf(lot.lotId()))
.execute()
);
}

/**
 * Retrieves all active lots.
 * Note: Despite the name, this returns ALL lots (legacy behavior for backward compatibility).
 */
public List<Lot> getActiveLots() {
return jdbi.withHandle(handle ->
handle.createQuery("SELECT * FROM lots")
.map((rs, ctx) -> auctiora.ScraperDataAdapter.fromScraperLot(rs))
.list()
);
}

/**
 * Retrieves all lots from the database.
 */
public List<Lot> getAllLots() {
return jdbi.withHandle(handle ->
handle.createQuery("SELECT * FROM lots")
.map((rs, ctx) -> auctiora.ScraperDataAdapter.fromScraperLot(rs))
.list()
);
}

/**
 * Retrieves bid history for a specific lot.
 */
public List<BidHistory> getBidHistory(String lotId) {
return jdbi.withHandle(handle ->
handle.createQuery("""
SELECT id, lot_id, bid_amount, bid_time, is_autobid, bidder_id, bidder_number
FROM bid_history
WHERE lot_id = :lotId
ORDER BY bid_time DESC
""")
.bind("lotId", lotId)
.map((rs, ctx) -> new BidHistory(
rs.getInt("id"),
rs.getString("lot_id"),
rs.getDouble("bid_amount"),
LocalDateTime.parse(rs.getString("bid_time")),
rs.getInt("is_autobid") != 0,
rs.getString("bidder_id"),
(Integer) rs.getObject("bidder_number")
))
.list()
);
}

/**
 * Inserts bid history records in batch.
 */
public void insertBidHistory(List<BidHistory> bidHistory) {
jdbi.useHandle(handle -> {
var batch = handle.prepareBatch("""
INSERT OR IGNORE INTO bid_history (
lot_id, bid_amount, bid_time, is_autobid, bidder_id, bidder_number
) VALUES (:lotId, :bidAmount, :bidTime, :isAutobid, :bidderId, :bidderNumber)
""");

bidHistory.forEach(bid ->
batch.bind("lotId", bid.lotId())
.bind("bidAmount", bid.bidAmount())
.bind("bidTime", bid.bidTime().toString())
.bind("isAutobid", bid.isAutobid() ? 1 : 0)
.bind("bidderId", bid.bidderId())
.bind("bidderNumber", bid.bidderNumber())
.add()
);

batch.execute();
});
}
}
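`getBidHistory` sorts with `ORDER BY bid_time DESC` even though `bid_time` is a TEXT column; this works because the zero-padded ISO-8601 strings written by `bid.bidTime().toString()` in `insertBidHistory` compare lexicographically in the same order as the timestamps themselves. A small stdlib sketch of that property:

```java
import java.time.LocalDateTime;

// Demonstrates why string ordering of ISO-8601 timestamps matches
// chronological ordering (assuming all rows use the same
// LocalDateTime.toString() format, as insertBidHistory does).
public final class IsoOrderingSketch {
    static boolean sameOrder(LocalDateTime a, LocalDateTime b) {
        // String comparison agrees in sign with the temporal comparison.
        return Integer.signum(a.toString().compareTo(b.toString()))
            == Integer.signum(a.compareTo(b));
    }

    public static void main(String[] args) {
        var earlier = LocalDateTime.of(2024, 5, 1, 9, 59, 59);
        var later = LocalDateTime.of(2024, 5, 1, 10, 0, 0);
        System.out.println(sameOrder(earlier, later)); // true
    }
}
```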
@@ -44,12 +44,15 @@ quarkus.rest.path=/
quarkus.http.root-path=/

# Auction Monitor Configuration
auction.database.path=C:\\mnt\\okcomputer\\output\\cache.db
auction.images.path=C:\\mnt\\okcomputer\\output\\images
auction.notification.config=desktop
auction.yolo.config=models/yolov4.cfg
auction.yolo.weights=models/yolov4.weights
auction.yolo.classes=models/coco.names
auction.database.path=/mnt/okcomputer/output/cache.db
auction.images.path=/mnt/okcomputer/output/images
# auction.notification.config=desktop
# Format: smtp:username:password:recipient_email
auction.notification.config=smtp:michael.bakker1986@gmail.com:agrepolhlnvhipkv:michael.bakker1986@gmail.com

auction.yolo.config=/mnt/okcomputer/output/models/yolov4.cfg
auction.yolo.weights=/mnt/okcomputer/output/models/yolov4.weights
auction.yolo.classes=/mnt/okcomputer/output/models/coco.names

# Scheduler Configuration
quarkus.scheduler.enabled=true
@@ -69,3 +72,4 @@ auction.http.timeout-seconds=30

# Health Check Configuration
quarkus.smallrye-health.root-path=/health
@@ -1,115 +0,0 @@
package auctiora;

import lombok.extern.slf4j.Slf4j;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

import static org.junit.jupiter.api.Assertions.*;

/**
 * Tests auction parsing logic using saved HTML from test.html.
 * Tests the markup data extraction for each auction found.
 */
@Slf4j
public class AuctionParsingTest {

private static String testHtml;

@BeforeAll
public static void loadTestHtml() throws IOException {
// Load the test HTML file
testHtml = Files.readString(Paths.get("src/test/resources/test_auctions.html"));
log.info("Loaded test HTML ({} characters)", testHtml.length());
}

@Test
public void testLocationPatternMatching() {
log.info("\n=== Location Pattern Tests ===");

// Test different location formats
var testCases = new String[]{
"<p>Amsterdam, NL</p>",
"<p class=\"flex truncate\"><span class=\"w-full truncate\">Sofia,<!-- --> </span>BG</p>",
"<p>Berlin, DE</p>",
"<span>Brussels,</span>BE"
};

for (var testHtml : testCases) {
var doc = Jsoup.parse(testHtml);
var elem = doc.select("p, span").first();

if (elem != null) {
var text = elem.text();
log.info("\nTest: {}", testHtml);
log.info("Text: {}", text);

// Test regex pattern
if (text.matches(".*[A-Z]{2}$")) {
var countryCode = text.substring(text.length() - 2);
var cityPart = text.substring(0, text.length() - 2).trim().replaceAll("[,\\s]+$", "");
log.info("→ Extracted: {}, {}", cityPart, countryCode);
} else {
log.info("→ No match");
}
}
}
}

@Test
public void testFullTextPatternMatching() {
log.info("\n=== Full Text Pattern Tests ===");

// Test the complete auction text format
var testCases = new String[]{
"woensdag om 18:00 1 Vrachtwagens voor bedrijfsvoertuigen Loßburg, DE",
"maandag om 14:30 5 Industriële machines Amsterdam, NL",
"vrijdag om 10:00 12 Landbouwmachines Antwerpen, BE"
};

for (var testText : testCases) {
log.info("\nParsing: \"{}\"", testText);

// Simulated extraction
var remaining = testText;

// Extract time
var timePattern = java.util.regex.Pattern.compile("(\\w+)\\s+om\\s+(\\d{1,2}:\\d{2})");
var timeMatcher = timePattern.matcher(remaining);
if (timeMatcher.find()) {
log.info(" Time: {} om {}", timeMatcher.group(1), timeMatcher.group(2));
remaining = remaining.substring(timeMatcher.end()).trim();
}

// Extract location
var locPattern = java.util.regex.Pattern.compile(
"([A-ZÀ-ÿa-z][A-ZÀ-ÿa-z\\s\\-'öäüßàèéêëïôùûç]+?),\\s*([A-Z]{2})\\s*$"
);
var locMatcher = locPattern.matcher(remaining);
if (locMatcher.find()) {
log.info(" Location: {}, {}", locMatcher.group(1), locMatcher.group(2));
remaining = remaining.substring(0, locMatcher.start()).trim();
}

// Extract lot count
var lotPattern = java.util.regex.Pattern.compile("^(\\d+)\\s+");
var lotMatcher = lotPattern.matcher(remaining);
if (lotMatcher.find()) {
log.info(" Lot count: {}", lotMatcher.group(1));
remaining = remaining.substring(lotMatcher.end()).trim();
}

// What remains is the title
log.info(" Title: {}", remaining);
}
}
}
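The deleted test above walks the extraction order: strip the "weekday om HH:mm" time prefix, peel the trailing "City, CC" location, then the leading lot count; whatever remains is the title. A self-contained sketch of the same pipeline using plain `java.util.regex` (no JSoup or JUnit). One simplification to note: the city is matched as a single token here, so multi-word city names are not handled:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Standalone sketch of the extraction order used in the deleted test:
// time prefix -> trailing location -> leading lot count -> remainder is title.
public final class AuctionTextSketch {
    static String parse(String text) {
        String remaining = text;

        // Strip the "<weekday> om <HH:mm>" time prefix.
        Matcher time = Pattern.compile("(\\w+)\\s+om\\s+(\\d{1,2}:\\d{2})").matcher(remaining);
        if (time.find()) remaining = remaining.substring(time.end()).trim();

        // Peel the trailing "City, CC" location (single-token city only).
        String city = "", country = "";
        Matcher loc = Pattern.compile("(\\S+),\\s*([A-Z]{2})\\s*$").matcher(remaining);
        if (loc.find()) {
            city = loc.group(1);
            country = loc.group(2);
            remaining = remaining.substring(0, loc.start()).trim();
        }

        // Peel the leading lot count.
        String lotCount = "";
        Matcher lots = Pattern.compile("^(\\d+)\\s+").matcher(remaining);
        if (lots.find()) {
            lotCount = lots.group(1);
            remaining = remaining.substring(lots.end()).trim();
        }

        return lotCount + " | " + remaining + " | " + city + ", " + country;
    }

    public static void main(String[] args) {
        System.out.println(parse("maandag om 14:30 5 Industriële machines Amsterdam, NL"));
        // 5 | Industriële machines | Amsterdam, NL
    }
}
```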
138 src/test/java/auctiora/ClosingTimeCalculationTest.java Normal file
@@ -0,0 +1,138 @@
package auctiora;

import org.junit.jupiter.api.*;

import java.time.LocalDateTime;

import static org.junit.jupiter.api.Assertions.*;

/**
 * Tests for the closing time calculations that power the UI.
 * Tests the minutesUntilClose() logic used in dashboard and alerts.
 */
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
@DisplayName("Closing Time Calculation Tests")
class ClosingTimeCalculationTest {

@Test
@Order(1)
@DisplayName("Should calculate minutes until close for lot closing in 15 minutes")
void testMinutesUntilClose15Minutes() {
var lot = createLot(LocalDateTime.now().plusMinutes(15));
long minutes = lot.minutesUntilClose();

assertTrue(minutes >= 14 && minutes <= 16,
"Should be approximately 15 minutes, was: " + minutes);
}

@Test
@Order(2)
@DisplayName("Should calculate minutes until close for lot closing in 2 hours")
void testMinutesUntilClose2Hours() {
var lot = createLot(LocalDateTime.now().plusHours(2));
long minutes = lot.minutesUntilClose();

assertTrue(minutes >= 119 && minutes <= 121,
"Should be approximately 120 minutes, was: " + minutes);
}

@Test
@Order(3)
@DisplayName("Should return negative value for already closed lot")
void testMinutesUntilCloseNegative() {
var lot = createLot(LocalDateTime.now().minusHours(1));
long minutes = lot.minutesUntilClose();

assertTrue(minutes < 0,
"Should be negative for closed lots, was: " + minutes);
}

@Test
@Order(4)
@DisplayName("Should return MAX_VALUE when lot has no closing time")
void testMinutesUntilCloseNoTime() {
var lot = Lot.basic(100, 1001, "No closing time", "", "", "", 0, "General",
100.0, "EUR", "http://test.com/1001", null, false);
long minutes = lot.minutesUntilClose();

assertEquals(Long.MAX_VALUE, minutes,
"Should return MAX_VALUE when no closing time set");
}

@Test
@Order(5)
@DisplayName("Should identify lots closing within 5 minutes (critical threshold)")
void testCriticalClosingThreshold() {
var closing4Min = createLot(LocalDateTime.now().plusMinutes(4));
var closing5Min = createLot(LocalDateTime.now().plusMinutes(5));
var closing6Min = createLot(LocalDateTime.now().plusMinutes(6));

assertTrue(closing4Min.minutesUntilClose() < 5,
"Lot closing in 4 min should be < 5 minutes");
assertTrue(closing5Min.minutesUntilClose() >= 5,
"Lot closing in 5 min should be >= 5 minutes");
assertTrue(closing6Min.minutesUntilClose() > 5,
"Lot closing in 6 min should be > 5 minutes");
}

@Test
@Order(6)
@DisplayName("Should identify lots closing within 30 minutes (dashboard threshold)")
void testDashboardClosingThreshold() {
var closing20Min = createLot(LocalDateTime.now().plusMinutes(20));
var closing31Min = createLot(LocalDateTime.now().plusMinutes(31)); // Use 31 to avoid boundary timing issue
var closing40Min = createLot(LocalDateTime.now().plusMinutes(40));

assertTrue(closing20Min.minutesUntilClose() < 30,
"Lot closing in 20 min should be < 30 minutes");
assertTrue(closing31Min.minutesUntilClose() >= 30,
"Lot closing in 31 min should be >= 30 minutes");
assertTrue(closing40Min.minutesUntilClose() > 30,
"Lot closing in 40 min should be > 30 minutes");
}

@Test
@Order(7)
@DisplayName("Should calculate correctly for lots closing soon (boundary cases)")
void testBoundaryCases() {
// Just closed (< 1 minute ago)
var justClosed = createLot(LocalDateTime.now().minusSeconds(30));
assertTrue(justClosed.minutesUntilClose() <= 0, "Just closed should be <= 0");

// Closing very soon (< 1 minute)
var closingVerySoon = createLot(LocalDateTime.now().plusSeconds(30));
assertTrue(closingVerySoon.minutesUntilClose() < 1, "Closing in 30 sec should be < 1 minute");

// Closing in exactly 1 hour
var closing1Hour = createLot(LocalDateTime.now().plusHours(1));
long minutes1Hour = closing1Hour.minutesUntilClose();
assertTrue(minutes1Hour >= 59 && minutes1Hour <= 61,
"Closing in 1 hour should be ~60 minutes, was: " + minutes1Hour);
}

@Test
@Order(8)
@DisplayName("Multiple lots should sort correctly by urgency")
void testSortingByUrgency() {
var lot5Min = createLot(LocalDateTime.now().plusMinutes(5));
var lot30Min = createLot(LocalDateTime.now().plusMinutes(30));
var lot1Hour = createLot(LocalDateTime.now().plusHours(1));
var lot3Hours = createLot(LocalDateTime.now().plusHours(3));

var lots = java.util.List.of(lot3Hours, lot30Min, lot5Min, lot1Hour);
var sorted = lots.stream()
.sorted((a, b) -> Long.compare(a.minutesUntilClose(), b.minutesUntilClose()))
.toList();

assertEquals(lot5Min, sorted.get(0), "Most urgent should be first");
assertEquals(lot30Min, sorted.get(1), "Second most urgent");
assertEquals(lot1Hour, sorted.get(2), "Third most urgent");
assertEquals(lot3Hours, sorted.get(3), "Least urgent should be last");
}

// Helper method
private Lot createLot(LocalDateTime closingTime) {
return Lot.basic(100, 1001, "Test Item", "", "", "", 0, "General",
100.0, "EUR", "http://test.com/1001", closingTime, false);
}
}
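The tests above pin down the `minutesUntilClose()` contract: `Long.MAX_VALUE` when no closing time is set, negative once closed. The `Lot` record itself is not in this diff, so the following is only a sketch of logic consistent with those two cases; note that `Duration.toMinutes()` truncates toward zero, which is exactly the boundary timing issue the Order-6 test comments on:

```java
import java.time.Duration;
import java.time.LocalDateTime;

// Sketch of a minutesUntilClose() consistent with the MAX_VALUE and
// negative-value tests above. (Assumption: the real Lot record is not
// shown in this diff.)
public final class ClosingTimeSketch {
    static long minutesUntilClose(LocalDateTime closingTime) {
        if (closingTime == null) return Long.MAX_VALUE; // no closing time set
        // Truncates toward zero, so "in 15 minutes" can report 14.
        return Duration.between(LocalDateTime.now(), closingTime).toMinutes();
    }

    public static void main(String[] args) {
        System.out.println(minutesUntilClose(null) == Long.MAX_VALUE);                // true
        System.out.println(minutesUntilClose(LocalDateTime.now().minusHours(1)) < 0); // true
    }
}
```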
@@ -22,6 +22,13 @@ class DatabaseServiceTest {

@BeforeAll
void setUp() throws SQLException {
// Load SQLite JDBC driver
try {
Class.forName("org.sqlite.JDBC");
} catch (ClassNotFoundException e) {
throw new SQLException("SQLite JDBC driver not found", e);
}

testDbPath = "test_database_" + System.currentTimeMillis() + ".db";
db = new DatabaseService(testDbPath);
db.ensureSchema();
@@ -350,7 +357,7 @@ class DatabaseServiceTest {
100.0, "EUR", "https://example.com/" + i, null, false
));
}
} catch (SQLException e) {
} catch (Exception e) {
fail("Thread 1 failed: " + e.getMessage());
}
});
@@ -363,7 +370,7 @@ class DatabaseServiceTest {
200.0, "EUR", "https://example.com/" + i, null, false
));
}
} catch (SQLException e) {
} catch (Exception e) {
fail("Thread 2 failed: " + e.getMessage());
}
});
@@ -20,37 +20,51 @@ class ImageProcessingServiceTest {
private DatabaseService mockDb;
private ObjectDetectionService mockDetector;
private ImageProcessingService service;
private java.io.File testImage;

@BeforeEach
void setUp() {
void setUp() throws Exception {
mockDb = mock(DatabaseService.class);
mockDetector = mock(ObjectDetectionService.class);
service = new ImageProcessingService(mockDb, mockDetector);

// Create a temporary test image file
testImage = java.io.File.createTempFile("test_image_", ".jpg");
testImage.deleteOnExit();
// Write a minimal JPEG header to make it a valid file
try (var out = new java.io.FileOutputStream(testImage)) {
out.write(new byte[]{(byte)0xFF, (byte)0xD8, (byte)0xFF, (byte)0xE0});
}
}

@Test
@DisplayName("Should process single image and update labels")
void testProcessImage() throws SQLException {
// Mock object detection
when(mockDetector.detectObjects("/path/to/image.jpg"))
// Normalize path (convert backslashes to forward slashes)
String normalizedPath = testImage.getAbsolutePath().replace('\\', '/');

// Mock object detection with normalized path
when(mockDetector.detectObjects(normalizedPath))
.thenReturn(List.of("car", "vehicle"));

// Process image
boolean result = service.processImage(1, "/path/to/image.jpg", 12345);
boolean result = service.processImage(1, testImage.getAbsolutePath(), 12345);

// Verify success
assertTrue(result);
verify(mockDetector).detectObjects("/path/to/image.jpg");
verify(mockDetector).detectObjects(normalizedPath);
verify(mockDb).updateImageLabels(1, List.of("car", "vehicle"));
}

@Test
@DisplayName("Should handle empty detection results")
void testProcessImageWithNoDetections() throws SQLException {
when(mockDetector.detectObjects(anyString()))
String normalizedPath = testImage.getAbsolutePath().replace('\\', '/');

when(mockDetector.detectObjects(normalizedPath))
.thenReturn(List.of());

boolean result = service.processImage(2, "/path/to/image2.jpg", 12346);
boolean result = service.processImage(2, testImage.getAbsolutePath(), 12346);

assertTrue(result);
verify(mockDb).updateImageLabels(2, List.of());
@@ -58,15 +72,17 @@ class ImageProcessingServiceTest {

@Test
@DisplayName("Should handle database error gracefully")
void testProcessImageDatabaseError() throws SQLException {
when(mockDetector.detectObjects(anyString()))
void testProcessImageDatabaseError() {
String normalizedPath = testImage.getAbsolutePath().replace('\\', '/');

when(mockDetector.detectObjects(normalizedPath))
.thenReturn(List.of("object"));

doThrow(new SQLException("Database error"))
doThrow(new RuntimeException("Database error"))
.when(mockDb).updateImageLabels(anyInt(), anyList());

// Should return false on error
boolean result = service.processImage(3, "/path/to/image3.jpg", 12347);
boolean result = service.processImage(3, testImage.getAbsolutePath(), 12347);
assertFalse(result);
}

@@ -84,13 +100,15 @@ class ImageProcessingServiceTest {
@Test
@DisplayName("Should process pending images batch")
void testProcessPendingImages() throws SQLException {
// Mock pending images from database
String normalizedPath = testImage.getAbsolutePath().replace('\\', '/');

// Mock pending images from database - use real test image path
when(mockDb.getImagesNeedingDetection()).thenReturn(List.of(
new DatabaseService.ImageDetectionRecord(1, 100L, "/images/100/001.jpg"),
new DatabaseService.ImageDetectionRecord(2, 101L, "/images/101/001.jpg")
new DatabaseService.ImageDetectionRecord(1, 100L, testImage.getAbsolutePath()),
new DatabaseService.ImageDetectionRecord(2, 101L, testImage.getAbsolutePath())
));

when(mockDetector.detectObjects(anyString()))
when(mockDetector.detectObjects(normalizedPath))
.thenReturn(List.of("item1"))
.thenReturn(List.of("item2"));

@@ -103,7 +121,7 @@ class ImageProcessingServiceTest {

// Verify all images were processed
verify(mockDb).getImagesNeedingDetection();
verify(mockDetector, times(2)).detectObjects(anyString());
verify(mockDetector, times(2)).detectObjects(normalizedPath);
verify(mockDb, times(2)).updateImageLabels(anyInt(), anyList());
}

@@ -121,15 +139,16 @@ class ImageProcessingServiceTest {
@Test
@DisplayName("Should continue processing after single image failure")
void testProcessPendingImagesWithFailure() throws SQLException {
String normalizedPath = testImage.getAbsolutePath().replace('\\', '/');

when(mockDb.getImagesNeedingDetection()).thenReturn(List.of(
new DatabaseService.ImageDetectionRecord(1, 100L, "/images/100/001.jpg"),
new DatabaseService.ImageDetectionRecord(2, 101L, "/images/101/001.jpg")
new DatabaseService.ImageDetectionRecord(1, 100L, testImage.getAbsolutePath()),
new DatabaseService.ImageDetectionRecord(2, 101L, testImage.getAbsolutePath())
));

// First image fails, second succeeds
when(mockDetector.detectObjects("/images/100/001.jpg"))
.thenThrow(new RuntimeException("Detection error"));
when(mockDetector.detectObjects("/images/101/001.jpg"))
when(mockDetector.detectObjects(normalizedPath))
.thenThrow(new RuntimeException("Detection error"))
.thenReturn(List.of("item"));

when(mockDb.getImageLabels(2))
@@ -138,14 +157,14 @@ class ImageProcessingServiceTest {
service.processPendingImages();

// Verify second image was still processed
verify(mockDetector, times(2)).detectObjects(anyString());
verify(mockDetector, times(2)).detectObjects(normalizedPath);
}

@Test
@DisplayName("Should handle database query error in batch processing")
void testProcessPendingImagesDatabaseError() throws SQLException {
void testProcessPendingImagesDatabaseError() {
|
||||
when(mockDb.getImagesNeedingDetection())
|
||||
.thenThrow(new SQLException("Database connection failed"));
|
||||
.thenThrow(new RuntimeException("Database connection failed"));
|
||||
|
||||
// Should not throw exception
|
||||
assertDoesNotThrow(() -> service.processPendingImages());
|
||||
@@ -154,10 +173,12 @@ class ImageProcessingServiceTest {
|
||||
@Test
|
||||
@DisplayName("Should process images with multiple detected objects")
|
||||
void testProcessImageMultipleDetections() throws SQLException {
|
||||
when(mockDetector.detectObjects(anyString()))
|
||||
String normalizedPath = testImage.getAbsolutePath().replace('\\', '/');
|
||||
|
||||
when(mockDetector.detectObjects(normalizedPath))
|
||||
.thenReturn(List.of("car", "truck", "vehicle", "road"));
|
||||
|
||||
boolean result = service.processImage(5, "/path/to/image5.jpg", 12349);
|
||||
boolean result = service.processImage(5, testImage.getAbsolutePath(), 12349);
|
||||
|
||||
assertTrue(result);
|
||||
verify(mockDb).updateImageLabels(5, List.of("car", "truck", "vehicle", "road"));
|
||||
|
||||
@@ -370,11 +370,11 @@ class IntegrationTest {
|
||||
"https://example.com/60" + i, "A1", 5, null
|
||||
));
|
||||
}
|
||||
} catch (SQLException e) {
|
||||
} catch (Exception e) {
|
||||
fail("Auction thread failed: " + e.getMessage());
|
||||
}
|
||||
});
|
||||
|
||||
|
||||
var lotThread = new Thread(() -> {
|
||||
try {
|
||||
for (var i = 0; i < 10; i++) {
|
||||
@@ -383,7 +383,7 @@ class IntegrationTest {
|
||||
100.0 * i, "EUR", "https://example.com/70" + i, null, false
|
||||
));
|
||||
}
|
||||
} catch (SQLException e) {
|
||||
} catch (Exception e) {
|
||||
fail("Lot thread failed: " + e.getMessage());
|
||||
}
|
||||
});
|
||||
|
||||
@@ -21,7 +21,6 @@ class ObjectDetectionServiceTest {
|
||||
@Test
|
||||
@DisplayName("Should initialize with missing YOLO models (disabled mode)")
|
||||
void testInitializeWithoutModels() throws IOException {
|
||||
// When models don't exist, service should initialize in disabled mode
|
||||
ObjectDetectionService service = new ObjectDetectionService(
|
||||
"non_existent.cfg",
|
||||
"non_existent.weights",
|
||||
@@ -82,9 +81,8 @@ class ObjectDetectionServiceTest {
|
||||
}
|
||||
|
||||
@Test
|
||||
@DisplayName("Should throw IOException when model files exist but OpenCV fails to load")
|
||||
@DisplayName("Should gracefully handle when model files exist but OpenCV fails to load")
|
||||
void testInitializeWithValidModels() throws IOException {
|
||||
// Create dummy model files for testing initialization
|
||||
var cfgPath = Paths.get(TEST_CFG);
|
||||
var weightsPath = Paths.get(TEST_WEIGHTS);
|
||||
var classesPath = Paths.get(TEST_CLASSES);
|
||||
@@ -95,10 +93,11 @@ class ObjectDetectionServiceTest {
|
||||
Files.writeString(classesPath, "person\ncar\ntruck\n");
|
||||
|
||||
// When files exist but OpenCV native library isn't loaded,
|
||||
// constructor should throw IOException wrapping the UnsatisfiedLinkError
|
||||
assertThrows(IOException.class, () -> {
|
||||
new ObjectDetectionService(TEST_CFG, TEST_WEIGHTS, TEST_CLASSES);
|
||||
});
|
||||
// service should construct successfully but be disabled (handled in @PostConstruct)
|
||||
var service = new ObjectDetectionService(TEST_CFG, TEST_WEIGHTS, TEST_CLASSES);
|
||||
// Service is created, but init() handles failures gracefully
|
||||
// detectObjects should return empty list when disabled
|
||||
assertNotNull(service);
|
||||
} finally {
|
||||
Files.deleteIfExists(cfgPath);
|
||||
Files.deleteIfExists(weightsPath);
|
||||
|
||||
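The ObjectDetectionServiceTest change above replaces `assertThrows(IOException.class, …)` with a graceful "disabled mode" expectation. A minimal sketch of that pattern, assuming a hypothetical `GracefulDetector` class (the real `ObjectDetectionService` constructor and `@PostConstruct` init are not shown in this diff): construction never propagates the native-library failure; instead a flag is cleared and detection becomes a no-op.

```java
import java.util.List;

// Hypothetical sketch: degrade to a disabled no-op instead of throwing
// when the native/model loading step fails.
public class GracefulDetector {
    private boolean enabled = false;

    public GracefulDetector(String cfg, String weights, String classes) {
        try {
            // Native/model loading would happen here; simulate the failure
            // mode the test cares about (OpenCV natives not on the path).
            throw new UnsatisfiedLinkError("opencv_java not on java.library.path");
        } catch (UnsatisfiedLinkError | RuntimeException e) {
            enabled = false; // degrade instead of propagating
        }
    }

    public List<String> detectObjects(String imagePath) {
        if (!enabled) {
            return List.of(); // disabled mode: no detections
        }
        return List.of("person"); // placeholder for real inference
    }

    public static void main(String[] args) {
        var d = new GracefulDetector("non_existent.cfg", "non_existent.weights", "non_existent.txt");
        System.out.println(d.detectObjects("/tmp/img.jpg")); // empty list when disabled
    }
}
```

This matches the updated assertions: the object is constructed (`assertNotNull(service)`) and callers get an empty result rather than an exception.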
@@ -16,366 +16,365 @@ import static org.junit.jupiter.api.Assertions.*;
  */
 @TestInstance(TestInstance.Lifecycle.PER_CLASS)
 class TroostwijkMonitorTest {

     private String testDbPath;
     private TroostwijkMonitor monitor;

     @BeforeAll
     void setUp() throws SQLException, IOException {
         testDbPath = "test_monitor_" + System.currentTimeMillis() + ".db";

-        // Initialize with non-existent YOLO models (disabled mode)
         monitor = new TroostwijkMonitor(
             testDbPath,
             "desktop",
             "non_existent.cfg",
             "non_existent.weights",
             "non_existent.txt"
         );
     }

     @AfterAll
     void tearDown() throws Exception {
         Files.deleteIfExists(Paths.get(testDbPath));
     }

     @Test
     @DisplayName("Should initialize monitor successfully")
     void testMonitorInitialization() {
         assertNotNull(monitor);
         assertNotNull(monitor.getDb());
     }

     @Test
     @DisplayName("Should print database stats without error")
     void testPrintDatabaseStats() {
         assertDoesNotThrow(() -> monitor.printDatabaseStats());
     }

     @Test
     @DisplayName("Should process pending images without error")
     void testProcessPendingImages() {
         assertDoesNotThrow(() -> monitor.processPendingImages());
     }

     @Test
     @DisplayName("Should handle empty database gracefully")
     void testEmptyDatabaseHandling() throws SQLException {
         var auctions = monitor.getDb().getAllAuctions();
         var lots = monitor.getDb().getAllLots();

         assertNotNull(auctions);
         assertNotNull(lots);
         assertTrue(auctions.isEmpty() || auctions.size() >= 0);
     }

     @Test
     @DisplayName("Should track lots in database")
     void testLotTracking() throws SQLException {
         // Insert test lot
         var lot = Lot.basic(
             11111, 22222,
             "Test Forklift",
             "Electric forklift in good condition",
             "Toyota",
             "Electric",
             2020,
             "Machinery",
             1500.00,
             "EUR",
             "https://example.com/lot/22222",
             LocalDateTime.now().plusDays(1),
             false
         );

         monitor.getDb().upsertLot(lot);

         var lots = monitor.getDb().getAllLots();
         assertTrue(lots.stream().anyMatch(l -> l.lotId() == 22222));
     }

     @Test
     @DisplayName("Should monitor lots closing soon")
     void testClosingSoonMonitoring() throws SQLException {
         // Insert lot closing in 4 minutes
         var closingSoon = Lot.basic(
             33333, 44444,
             "Closing Soon Item",
             "Description",
             "",
             "",
             0,
             "Category",
             100.00,
             "EUR",
             "https://example.com/lot/44444",
             LocalDateTime.now().plusMinutes(4),
             false
         );

         monitor.getDb().upsertLot(closingSoon);

         var lots = monitor.getDb().getActiveLots();
         var found = lots.stream()
             .filter(l -> l.lotId() == 44444)
             .findFirst()
             .orElse(null);

         assertNotNull(found);
         assertTrue(found.minutesUntilClose() < 30);
     }

     @Test
     @DisplayName("Should identify lots with time remaining")
     void testTimeRemainingCalculation() throws SQLException {
         var futureLot = Lot.basic(
             55555, 66666,
             "Future Lot",
             "Description",
             "",
             "",
             0,
             "Category",
             200.00,
             "EUR",
             "https://example.com/lot/66666",
             LocalDateTime.now().plusHours(2),
             false
         );

         monitor.getDb().upsertLot(futureLot);

         var lots = monitor.getDb().getActiveLots();
         var found = lots.stream()
             .filter(l -> l.lotId() == 66666)
             .findFirst()
             .orElse(null);

         assertNotNull(found);
         assertTrue(found.minutesUntilClose() > 60);
     }

     @Test
     @DisplayName("Should handle lots without closing time")
     void testLotsWithoutClosingTime() throws SQLException {
         var noClosing = Lot.basic(
             77777, 88888,
             "No Closing Time",
             "Description",
             "",
             "",
             0,
             "Category",
             150.00,
             "EUR",
             "https://example.com/lot/88888",
             null,
             false
         );

         monitor.getDb().upsertLot(noClosing);

         var lots = monitor.getDb().getActiveLots();
         var found = lots.stream()
             .filter(l -> l.lotId() == 88888)
             .findFirst()
             .orElse(null);

         assertNotNull(found);
         assertNull(found.closingTime());
     }

     @Test
     @DisplayName("Should track notification status")
     void testNotificationStatusTracking() throws SQLException {
         var lot = Lot.basic(
             99999, 11110,
             "Test Notification",
             "Description",
             "",
             "",
             0,
             "Category",
             100.00,
             "EUR",
             "https://example.com/lot/11110",
             LocalDateTime.now().plusMinutes(3),
             false
         );

         monitor.getDb().upsertLot(lot);

         // Update notification flag
         var notified = Lot.basic(
             99999, 11110,
             "Test Notification",
             "Description",
             "",
             "",
             0,
             "Category",
             100.00,
             "EUR",
             "https://example.com/lot/11110",
             LocalDateTime.now().plusMinutes(3),
             true
         );

         monitor.getDb().updateLotNotificationFlags(notified);

         var lots = monitor.getDb().getActiveLots();
         var found = lots.stream()
             .filter(l -> l.lotId() == 11110)
             .findFirst()
             .orElse(null);

         assertNotNull(found);
         assertTrue(found.closingNotified());
     }

     @Test
     @DisplayName("Should update bid amounts")
     void testBidAmountUpdates() throws SQLException {
         var lot = Lot.basic(
             12121, 13131,
             "Bid Update Test",
             "Description",
             "",
             "",
             0,
             "Category",
             100.00,
             "EUR",
             "https://example.com/lot/13131",
             LocalDateTime.now().plusDays(1),
             false
         );

         monitor.getDb().upsertLot(lot);

         // Simulate bid increase
         var higherBid = Lot.basic(
             12121, 13131,
             "Bid Update Test",
             "Description",
             "",
             "",
             0,
             "Category",
             250.00,
             "EUR",
             "https://example.com/lot/13131",
             LocalDateTime.now().plusDays(1),
             false
         );

         monitor.getDb().updateLotCurrentBid(higherBid);

         var lots = monitor.getDb().getActiveLots();
         var found = lots.stream()
             .filter(l -> l.lotId() == 13131)
             .findFirst()
             .orElse(null);

         assertNotNull(found);
         assertEquals(250.00, found.currentBid(), 0.01);
     }

     @Test
     @DisplayName("Should handle multiple concurrent lot updates")
     void testConcurrentLotUpdates() throws InterruptedException, SQLException {
         Thread t1 = new Thread(() -> {
             try {
                 for (int i = 0; i < 5; i++) {
                     monitor.getDb().upsertLot(Lot.basic(
                         20000 + i, 30000 + i, "Concurrent " + i, "Desc", "", "", 0, "Cat",
                         100.0, "EUR", "https://example.com/" + i, null, false
                     ));
                 }
-            } catch (SQLException e) {
+            } catch (Exception e) {
                 fail("Thread 1 failed: " + e.getMessage());
             }
         });

         Thread t2 = new Thread(() -> {
             try {
                 for (int i = 5; i < 10; i++) {
                     monitor.getDb().upsertLot(Lot.basic(
                         20000 + i, 30000 + i, "Concurrent " + i, "Desc", "", "", 0, "Cat",
                         200.0, "EUR", "https://example.com/" + i, null, false
                     ));
                 }
-            } catch (SQLException e) {
+            } catch (Exception e) {
                 fail("Thread 2 failed: " + e.getMessage());
             }
         });

         t1.start();
         t2.start();
         t1.join();
         t2.join();

         var lots = monitor.getDb().getActiveLots();
         long count = lots.stream()
             .filter(l -> l.lotId() >= 30000 && l.lotId() < 30010)
             .count();

         assertTrue(count >= 10);
     }

     @Test
     @DisplayName("Should schedule monitoring without error")
     void testScheduleMonitoring() {
         // This just tests that scheduling doesn't throw
         // Actual monitoring would run in background
         assertDoesNotThrow(() -> {
             // Don't actually start monitoring in test
             // Just verify monitor is ready
             assertNotNull(monitor);
         });
     }

     @Test
     @DisplayName("Should handle database with auctions and lots")
     void testDatabaseWithData() throws SQLException {
         // Insert auction
         var auction = new AuctionInfo(
             40000,
             "Test Auction",
             "Amsterdam, NL",
             "Amsterdam",
             "NL",
             "https://example.com/auction/40000",
             "A7",
             10,
             LocalDateTime.now().plusDays(2)
         );

         monitor.getDb().upsertAuction(auction);

         // Insert related lot
         var lot = Lot.basic(
             40000, 50000,
             "Test Lot",
             "Description",
             "",
             "",
             0,
             "Category",
             500.00,
             "EUR",
             "https://example.com/lot/50000",
             LocalDateTime.now().plusDays(2),
             false
         );

         monitor.getDb().upsertLot(lot);

         // Verify
         var auctions = monitor.getDb().getAllAuctions();
         var lots = monitor.getDb().getAllLots();

         assertTrue(auctions.stream().anyMatch(a -> a.auctionId() == 40000));
         assertTrue(lots.stream().anyMatch(l -> l.lotId() == 50000));
     }
 }
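The recurring change across these test diffs is the `replace('\\', '/')` path normalization: Mockito stubs are keyed on the exact string passed to `detectObjects`, so the Windows-style backslashes that `File.getAbsolutePath()` can produce must be normalized to forward slashes before stubbing, or the stub and the real call never match. A minimal standalone sketch of that idiom (class and method names here are illustrative, not from the diff):

```java
// Sketch of the path-normalization idiom the updated tests standardize on:
// normalize once, then use the same string for both the Mockito stub and
// the production call so they compare equal on every OS.
public class PathNormalization {
    static String normalize(String path) {
        return path.replace('\\', '/');
    }

    public static void main(String[] args) {
        // Windows-style absolute path becomes slash-separated
        System.out.println(normalize("C:\\images\\100\\001.jpg"));
        // POSIX paths pass through unchanged
        System.out.println(normalize("/images/100/001.jpg"));
    }
}
```

Stubbing `when(mockDetector.detectObjects(normalizedPath))` instead of `anyString()` also tightens the verification: the test now fails if the service passes an un-normalized path to the detector.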
File diff suppressed because one or more lines are too long
@@ -1,26 +0,0 @@
-Configure your devices to use the Pi-hole as their DNS server using:
-
-    IPv4: 192.168.1.159
-    IPv6: fdc5:59a6:9ac1:f11f:2c86:25d3:6282:37ef
-
-If you have not done so already, the above IP should be set to static.
-
-View the web interface at http://pi.hole:80/admin or http://192.168.1.159:80/admin
-
-Your Admin Webpage login password is gYj7Enh-
-
-To allow your user to use all CLI functions without authentication,
-refer to https://docs.pi-hole.net/main/post-install/
-
-127.0.0.1
-192.168.1.159
-::1
-fdc5:59a6:9ac1:f11f:2c86:25d3:6282:37ef
-fdc5:59a6:9ac1:f11f:bd8c:6e87:65f0:243c
-fe80::a05b:bbc6:d47f:3002%enp9s0
-2IXD-XJN9-C337-1K4Y-BBEO-HV1F-3BVI
File diff suppressed because one or more lines are too long