This commit ensures that compression jobs survive Docker container restarts
and are automatically recovered and restarted.
Changes:
- Modified the CancelledError handler to preserve job status during shutdown (sketched after this list)
- Jobs now keep their 'processing' or 'validating' status instead of being marked 'cancelled' when the app shuts down
- Added job persistence layer using SQLite database
- Implemented automatic job recovery on application startup
- Added process cleanup utilities for orphaned ffmpeg processes
- User-initiated cancellations still properly mark jobs as cancelled
- Jobs are visible in frontend after recovery
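A minimal sketch of the shutdown-aware handler; `compress` and the `cancel_requested` flag are illustrative names, not the actual implementation:

```python
import asyncio

async def run_job(job, compress):
    """Wrap the compression coroutine so shutdown and user cancel differ."""
    try:
        await compress(job)
    except asyncio.CancelledError:
        if job.cancel_requested:
            # User-initiated: record the cancellation as before.
            job.status = "cancelled"
        # Otherwise leave 'processing'/'validating' intact so the startup
        # recovery pass can detect and re-enqueue the job.
        raise
```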
The recovery system, sketched below:
1. Detects interrupted jobs (processing/validating status)
2. Cleans up orphaned ffmpeg processes and temp files
3. Restarts interrupted jobs from the beginning
4. Maintains queue order and respects concurrency limits
5. Works with multiple jobs in the queue
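A compact sketch of that recovery pass, assuming a SQLite `jobs` table and a `queue_job` helper; the schema and names are illustrative:

```python
import sqlite3
import subprocess

DB_PATH = "jobs.db"  # assumed location of the persistence layer

def recover_interrupted_jobs(queue_job):
    """Re-enqueue jobs left in flight by a container restart."""
    conn = sqlite3.connect(DB_PATH)
    try:
        rows = conn.execute(
            "SELECT id, input_path FROM jobs "
            "WHERE status IN ('processing', 'validating') "
            "ORDER BY created_at"  # preserves the original queue order
        ).fetchall()
        # At startup no jobs are running yet, so any surviving ffmpeg
        # process from this app is an orphan of the previous container.
        subprocess.run(["pkill", "-f", "ffmpeg"], check=False)
        for job_id, input_path in rows:
            conn.execute(
                "UPDATE jobs SET status = 'pending', progress = 0 WHERE id = ?",
                (job_id,),
            )
            queue_job(job_id, input_path)  # queue enforces the concurrency limit
        conn.commit()
    finally:
        conn.close()
```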
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Separate dotfiles (starting with .) from regular locations
- Display dotfiles in gray with disabled cursor
- Always keep dotfiles at the bottom of the list regardless of sort order (see the sketch after this list)
- Prevent dotfile selection while maintaining visibility
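The ordering rule, shown as a Python sketch (the actual change lives in the React frontend; the `name` field is an assumption):

```python
def order_locations(locations, sort_key=lambda loc: loc["name"], reverse=False):
    """Sort regular entries, then append dotfiles so they always come last."""
    regular = [loc for loc in locations if not loc["name"].startswith(".")]
    dotfiles = [loc for loc in locations if loc["name"].startswith(".")]
    regular.sort(key=sort_key, reverse=reverse)
    dotfiles.sort(key=lambda loc: loc["name"])  # stable order in the gray section
    return regular + dotfiles  # the UI disables selection for the dotfile tail
```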
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
## Overview
Refactored the compression system to replace original files in place instead of creating duplicates with a `_compressed` suffix. This eliminates the disk space wasted by keeping both copies when compressing videos.
## Key Changes
### Backend (backend/compression.py)
- **Added enhanced validation** (`validate_video_enhanced`, sketched at the end of this section):
- FFmpeg decode validation (catches corrupted files)
- File size sanity check (minimum 1KB)
- Duration comparison between original and compressed (±1 second tolerance)
- **Implemented safe file replacement** (`safe_replace_file`):
- Enhanced validation before any file operations
- Atomic file operations using `os.replace()` to prevent race conditions
- Backup strategy: original → .backup during replacement
- Automatic rollback if validation or replacement fails
- Comprehensive logging at each step
- **Updated compress_video function**:
- Removed `_compressed_75` suffix from output filenames
- Changed to use original file path for in-place replacement
- Calls the new `safe_replace_file` method instead of a bare `os.rename`
- Improved logging messages to reflect replacement operation
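A sketch of the three checks, assuming `ffmpeg`/`ffprobe` on PATH; the exact signature is illustrative:

```python
import os
import subprocess

def probe_duration(path: str) -> float:
    """Read the container duration via ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())

def validate_video_enhanced(original: str, compressed: str) -> bool:
    # 1. Size sanity check: anything under 1 KB cannot be a real video.
    if os.path.getsize(compressed) < 1024:
        return False
    # 2. Decode validation: a full decode to the null muxer surfaces corruption.
    decode = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", compressed, "-f", "null", "-"],
        capture_output=True,
    )
    if decode.returncode != 0:
        return False
    # 3. Durations of original and compressed must agree within ±1 second.
    return abs(probe_duration(original) - probe_duration(compressed)) <= 1.0
```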
### Frontend (frontend/src/App.jsx)
- Updated compression success messages to show reduction percentage
- Messages now indicate file is being replaced, not duplicated
- Displays compression reduction percentage (e.g., "Started compressing file.mp4 (75% reduction)")
## Safety Features
1. **Backup-and-restore**: Original file backed up as .backup until verification passes
2. **Enhanced validation**: Three-level validation before committing to replacement
3. **Atomic operations**: Uses os.replace() for atomic file replacements
4. **Automatic rollback**: If any step fails, original is restored from backup
5. **Comprehensive logging**: All file operations logged for debugging
## File Operations Flow
1. Compress to temp file
2. Enhanced validation (decode + size + duration)
3. Create backup: original.mp4 → original.mp4.backup
4. Move compressed: temp → original.mp4
5. Verify final file is accessible
6. Delete backup only after final verification
7. Rollback if any step fails (sketched below)
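The same flow condensed into a sketch (step numbers in the comments match the list; `validate_video_enhanced` is the check sketched earlier):

```python
import os

def safe_replace_file(original: str, compressed: str) -> None:
    """Replace `original` with `compressed`, keeping a backup until verified."""
    backup = original + ".backup"
    if not validate_video_enhanced(original, compressed):  # step 2
        raise ValueError("compressed output failed validation")
    os.replace(original, backup)            # step 3: original -> .backup (atomic)
    try:
        os.replace(compressed, original)    # step 4: temp -> original (atomic)
        with open(original, "rb"):          # step 5: final file is accessible
            pass
    except OSError:
        os.replace(backup, original)        # step 7: rollback from backup
        raise
    os.remove(backup)                       # step 6: drop backup only after verify
```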
## Testing
- Docker rebuild: `docker compose down && docker compose build && docker compose up -d`
- Manual testing recommended for compression jobs with various video files
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Increase watchdog timeout to 15 minutes (configurable via COMPRESSION_WATCHDOG_TIMEOUT env var)
- Fix stderr buffer overflow by setting 8MB limit and truncating to last 1000 lines
- Keep failed jobs visible in UI until manually removed (auto-prune only completed/cancelled)
- Add job removal endpoint: DELETE /api/compress/jobs/{job_id}?action=remove
Resolves an issue where jobs stuck at 70% were killed prematurely due to stderr buffer overflow and an aggressive 5-minute timeout; the new limits are sketched below.
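A sketch of the configurable timeout and the bounded stderr tail (helper names are illustrative):

```python
import asyncio
import os
from collections import deque

# Default 15 minutes; override with COMPRESSION_WATCHDOG_TIMEOUT (seconds).
WATCHDOG_TIMEOUT = int(os.environ.get("COMPRESSION_WATCHDOG_TIMEOUT", "900"))

async def spawn_ffmpeg(cmd):
    # Raise asyncio's 64 KB StreamReader default to 8 MB so long ffmpeg
    # lines cannot overflow the read buffer.
    return await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
        limit=8 * 1024 * 1024,
    )

async def tail_stderr(stream, keep=1000):
    """Consume stderr fully, retaining only the last `keep` lines."""
    tail = deque(maxlen=keep)
    while line := await stream.readline():
        tail.append(line.decode(errors="replace"))
    return "".join(tail)
```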
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add security headers (X-Frame-Options, X-Content-Type-Options, X-XSS-Protection)
- Implement proper cache control for static assets and prevent index.html caching
- Fix SSE handling to properly fetch new compression jobs
- Immediately refresh job list after queuing compression for better UX feedback
Security fixes:
- Add filename path traversal validation (missing "/" check)
- Prevents attacks like filename="../../../etc/passwd" (see the sketch below)
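A minimal version of the check, written as an illustrative FastAPI-style helper:

```python
from fastapi import HTTPException

def validate_filename(filename: str) -> str:
    """Reject path-traversal attempts such as '../../../etc/passwd'."""
    if "/" in filename or "\\" in filename or ".." in filename:
        raise HTTPException(status_code=400, detail="invalid filename")
    return filename
```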
Race condition and concurrency fixes:
- Add async locking to get_jobs_snapshot() to prevent dictionary iteration errors (sketched after this list)
- Fix watchdog loop to detect process completion immediately (move sleep to end)
- Fix EventSource ref updates during SSE reconnection to prevent memory leaks
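A sketch of the locked snapshot, assuming a module-level job store (names illustrative):

```python
import asyncio

_jobs: dict = {}            # job_id -> job state, mutated by worker tasks
_jobs_lock = asyncio.Lock()

async def get_jobs_snapshot() -> list:
    # Copy under the lock so a job added or removed mid-iteration cannot
    # raise "dictionary changed size during iteration".
    async with _jobs_lock:
        return [dict(job) for job in _jobs.values()]
```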
Precision and calculation fixes:
- Keep duration as float instead of int for accurate bitrate calculations (~1% improvement)
- Prevents cumulative rounding errors in the bitrate math (illustrated below)
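An illustration of the truncation error (the numbers are hypothetical):

```python
duration = 59.6                      # seconds, as reported by ffprobe
target_size = 100 * 1024 * 1024 * 8  # 100 MiB target, in bits

bitrate_float = target_size / duration       # ~14.08 Mb/s (correct)
bitrate_int = target_size / int(duration)    # ~14.22 Mb/s, ~1% too high
```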
Type safety improvements:
- Import and use `Any` from the `typing` module instead of the lowercase builtin `any`
- Fixes Python type hints for proper static analysis
Media handling improvements:
- Determine MIME types dynamically using mimetypes module
- Properly supports MOV (video/quicktime), AVI, and PNG instead of hardcoded types (sketch below)
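The lookup itself is a one-liner via the standard library (the fallback type is an assumption):

```python
import mimetypes

def guess_media_type(path: str) -> str:
    # mimetypes maps .mov -> video/quicktime, .avi -> video/x-msvideo,
    # .png -> image/png, and so on, based on the file extension.
    media_type, _ = mimetypes.guess_type(path)
    return media_type or "application/octet-stream"
```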
UX fixes:
- Fix formatETA() to handle 0 seconds correctly (was showing "--" instead of "0m 0s")
- Use stable key for React video element (prevents unnecessary remounts)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Handle __root__ special case in video streaming and image endpoints
- Support locations with files not organized by date folders
- Add visual indicator for root-level file collections in UI
- Implement periodic filesystem write-permission checks at 60-minute intervals (sketched after this list)
- Add real-time health status monitoring with SSE endpoints
- Display system health banner when storage issues detected
- Limit compression to 1 concurrent job with queue support
- Add max queue limit of 10 pending jobs
- Show queue positions for pending compression jobs
- Update button text dynamically (Start/Queue Compression)
- Enable write access to footage mount in Docker
- Add comprehensive logging for health checks and compression
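A sketch of the write-permission probe, assuming a shared `health` dict that the SSE endpoint reads (names illustrative):

```python
import asyncio
import tempfile
import time

HEALTH_CHECK_INTERVAL = 60 * 60  # 60 minutes, in seconds

health = {"writable": True, "checked_at": None}

async def health_check_loop(footage_dir: str):
    """Probe write access by creating and removing a temp file."""
    while True:
        try:
            with tempfile.NamedTemporaryFile(dir=footage_dir):
                pass  # file is created and deleted on context exit
            health["writable"] = True
        except OSError:
            health["writable"] = False
        health["checked_at"] = time.time()
        await asyncio.sleep(HEALTH_CHECK_INTERVAL)
```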
Co-Authored-By: Alihan <alihan@example.com>
Resolved a critical issue where compression jobs would get stuck at arbitrary
progress percentages (e.g., 35.5%) due to a pipe buffer deadlock.
**Root Cause:**
- Python code only read ffmpeg's stdout for progress updates
- ffmpeg's 64 KB stderr pipe buffer would fill with log output
- ffmpeg blocked writing to stderr while Python blocked reading stdout
- Result: a deadlock in which the job appeared stuck while ffmpeg kept using CPU
**Fixes:**
- Read stdout and stderr concurrently using asyncio.gather() (sketched below)
- Prevents pipe buffer deadlock by consuming both streams
- Added a watchdog timer to detect genuinely stuck jobs (5-minute timeout)
- Improved error logging with stderr capture
- Better error messages showing exact failure reason
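The concurrent-drain pattern, condensed into a sketch (progress parsing elided; helper names are illustrative):

```python
import asyncio

async def _drain(stream, keep=50):
    """Consume a pipe fully, keeping only the last `keep` lines."""
    tail = []
    while line := await stream.readline():
        tail.append(line.decode(errors="replace"))
        del tail[:-keep]
    return "".join(tail)

async def run_ffmpeg(cmd):
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    # Drain both pipes at once: reading only stdout lets the 64 KB stderr
    # buffer fill, ffmpeg blocks on write, Python blocks on read -> deadlock.
    _, stderr_tail = await asyncio.gather(
        _drain(proc.stdout, keep=10_000),  # progress parsing would go here
        _drain(proc.stderr),               # last 50 lines kept for error reports
    )
    return await proc.wait(), stderr_tail
```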
**Additional Improvements:**
- Watchdog sets job.error with informative message before killing
- Captures last 50 lines of stderr on failure for debugging
- Enhanced cancellation handling with multiple checkpoints
Tested with a previously stuck video file: progress now updates
continuously throughout the encoding process.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- React frontend with video/image browser
- Python FastAPI backend with video compression
- Docker containerized setup
- Video compression with FFmpeg (two-pass encoding)
- Real-time job monitoring with SSE
- Global active jobs monitor
- Clickable header to reset navigation
- Toast notifications for user feedback