Security fixes:
- Add filename path traversal validation (the "/" check was previously missing)
- Prevents attacks like filename="../../../etc/passwd"
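A minimal sketch of the kind of check this implies, assuming a FastAPI handler receives the filename as a plain string (the helper name and error response are illustrative, not the project's actual code):

```python
from fastapi import HTTPException

def validate_filename(filename: str) -> str:
    # Reject path separators and parent-directory components so a request
    # cannot escape the media directory (e.g. "../../../etc/passwd").
    if "/" in filename or "\\" in filename or ".." in filename:
        raise HTTPException(status_code=400, detail="Invalid filename")
    return filename
```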
Race condition and concurrency fixes:
- Add async locking to get_jobs_snapshot() to prevent errors from the jobs dict being mutated mid-iteration (see the locking sketch after this list)
- Fix watchdog loop to detect process completion immediately (move sleep to end)
- Fix EventSource ref updates during SSE reconnection to prevent memory leaks
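A minimal sketch of the locking pattern, assuming the jobs live in an in-memory dict guarded by an asyncio.Lock (the class and field names are assumptions, not the project's actual structure):

```python
import asyncio

class JobRegistry:
    """Assumed shape of the in-memory job store."""

    def __init__(self) -> None:
        self._jobs: dict[str, dict] = {}
        self._lock = asyncio.Lock()

    async def get_jobs_snapshot(self) -> list[dict]:
        # Copy under the lock so concurrent add/remove cannot mutate the
        # dictionary while it is being iterated.
        async with self._lock:
            return [dict(job) for job in self._jobs.values()]
```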
Precision and calculation fixes:
- Keep duration as a float instead of an int for accurate bitrate calculations (~1% accuracy improvement)
- Prevents cumulative rounding errors in compression
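An illustrative calculation of why truncating the duration matters; the clip length and target size below are made-up numbers, not measurements from the project:

```python
# Two-pass encodes budget bitrate as target size / duration; truncating the
# duration to whole seconds inflates the computed bitrate.
target_bits = 50 * 1024 * 1024 * 8   # 50 MB target
duration = 95.6                      # seconds (float, as probed)

print(target_bits / duration / 1000)       # ~4387 kbps with the float duration
print(target_bits / int(duration) / 1000)  # ~4414 kbps with int(duration), ~0.6% too high
```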
Type safety improvements:
- Import and use Any from the typing module instead of the lowercase builtin "any"
- Fixes Python type hints for proper static analysis
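For illustration (the function itself is hypothetical):

```python
from typing import Any

# Lowercase "any" is the builtin function, not a type; static checkers such
# as mypy reject it as an invalid annotation.
def update_job(job_id: str, fields: dict[str, Any]) -> None:
    ...
```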
Media handling improvements:
- Determine MIME types dynamically using the mimetypes module
- Properly supports MOV (video/quicktime), AVI, and PNG instead of relying on hardcoded types
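A minimal sketch of the lookup, with a fallback for unknown extensions (the helper name is illustrative):

```python
import mimetypes

def guess_media_type(path: str) -> str:
    # mimetypes maps .mov -> video/quicktime, .avi -> video/x-msvideo,
    # .png -> image/png; unknown extensions fall back to a generic type.
    media_type, _ = mimetypes.guess_type(path)
    return media_type or "application/octet-stream"
```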
UX fixes:
- Fix formatETA() to handle 0 seconds correctly (was showing "--" instead of "0m 0s")
- Use stable key for React video element (prevents unnecessary remounts)
Root-level file support:
- Handle the __root__ special case in the video streaming and image endpoints (path resolution sketched below)
- Support locations whose files are not organized into date folders
- Add a visual indicator for root-level file collections in the UI
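A sketch of the path resolution, assuming "__root__" is the sentinel folder name used by the API (the function and argument names are assumptions):

```python
from pathlib import Path

def resolve_media_dir(location_root: Path, date_folder: str) -> Path:
    # "__root__" marks files stored directly under a location,
    # outside any date folder.
    if date_folder == "__root__":
        return location_root
    return location_root / date_folder
```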
Storage health monitoring:
- Implement periodic filesystem write-permission checks (60-minute interval; see the sketch after this list)
- Add real-time health status monitoring with SSE endpoints
- Display a system health banner when storage issues are detected
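A minimal sketch of the periodic write check, assuming the result is stored in shared state that a health endpoint reads (names and state shape are assumptions):

```python
import asyncio
import tempfile

HEALTH_CHECK_INTERVAL_S = 60 * 60  # 60-minute interval

async def watch_storage_writable(mount_path: str, state: dict) -> None:
    # Periodically verify the footage mount accepts writes by creating
    # (and auto-deleting) a small temp file, and record the result.
    while True:
        try:
            with tempfile.NamedTemporaryFile(dir=mount_path):
                pass
            state["storage_writable"] = True
        except OSError:
            state["storage_writable"] = False
        await asyncio.sleep(HEALTH_CHECK_INTERVAL_S)
```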
Compression queue:
- Limit compression to 1 concurrent job, with queue support (see the queue sketch after this list)
- Cap the queue at 10 pending jobs
- Show queue positions for pending compression jobs
- Update the button text dynamically (Start/Queue Compression)
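A sketch of both limits using a bounded asyncio.Queue and a single worker task (the function names and the run_job callback are assumptions, not the project's actual API):

```python
import asyncio

MAX_QUEUED_JOBS = 10

# A bounded queue enforces the pending cap; a single worker task enforces
# the one-job-at-a-time limit.
job_queue: asyncio.Queue[str] = asyncio.Queue(maxsize=MAX_QUEUED_JOBS)

def submit_job(job_id: str) -> int:
    try:
        job_queue.put_nowait(job_id)
    except asyncio.QueueFull:
        raise RuntimeError("Compression queue is full (10 pending jobs)")
    return job_queue.qsize()  # rough queue position for the UI

async def compression_worker(run_job) -> None:
    while True:
        job_id = await job_queue.get()
        try:
            await run_job(job_id)  # run_job: placeholder for the ffmpeg runner
        finally:
            job_queue.task_done()
```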
Infrastructure and logging:
- Enable write access to the footage mount in Docker
- Add comprehensive logging for health checks and compression
Resolved a critical issue where compression jobs would get stuck at random
progress percentages (e.g., 35.5%) due to a pipe buffer deadlock.
**Root Cause:**
- Python code only read ffmpeg's stdout for progress updates
- ffmpeg's stderr pipe buffer (64KB) would fill with output
- ffmpeg blocked writing to stderr, Python blocked reading stdout
- Result: deadlock with job appearing stuck but ffmpeg still using CPU
**Fixes:**
- Read stdout and stderr concurrently using asyncio.gather() (sketched after this list)
- Prevents pipe buffer deadlock by consuming both streams
- Added watchdog timer to detect genuinely stuck jobs (5 min timeout)
- Improved error logging with stderr capture
- Better error messages showing exact failure reason
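A minimal sketch of the concurrent-drain pattern, assuming an asyncio subprocess and a caller-supplied progress callback (the function name, callback, and return shape are assumptions, not the project's actual API):

```python
import asyncio

async def run_ffmpeg(cmd: list[str], parse_progress) -> tuple[int, str]:
    # Drain stdout (progress) and stderr (log output) at the same time so
    # neither 64KB pipe buffer can fill and block ffmpeg.
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )

    async def drain_stdout() -> None:
        async for line in proc.stdout:
            parse_progress(line.decode(errors="replace"))  # caller-supplied callback

    async def drain_stderr() -> list[str]:
        lines = []
        async for line in proc.stderr:
            lines.append(line.decode(errors="replace"))
        return lines

    _, stderr_lines = await asyncio.gather(drain_stdout(), drain_stderr())
    returncode = await proc.wait()
    # Keep the stderr tail for error reporting when the encode fails.
    return returncode, "".join(stderr_lines[-50:])
```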
**Additional Improvements:**
- Watchdog sets job.error with an informative message before killing the process (see the watchdog sketch below)
- Captures last 50 lines of stderr on failure for debugging
- Enhanced cancellation handling with multiple checkpoints
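A sketch of the watchdog behaviour under stated assumptions: the job is a mutable dict with a "progress" field, and the 5-minute timeout matches the note above (all names are illustrative):

```python
import asyncio
import time

STUCK_TIMEOUT_S = 5 * 60  # kill jobs whose progress has not moved in 5 minutes

async def watchdog(proc: asyncio.subprocess.Process, job: dict) -> None:
    last_progress = job.get("progress", 0.0)
    last_change = time.monotonic()
    while proc.returncode is None:
        if job.get("progress", 0.0) != last_progress:
            last_progress = job["progress"]
            last_change = time.monotonic()
        elif time.monotonic() - last_change > STUCK_TIMEOUT_S:
            # Record why the job died before killing ffmpeg, so the UI shows
            # an informative error instead of a silent failure.
            job["error"] = f"No progress for {STUCK_TIMEOUT_S}s; killing ffmpeg"
            proc.kill()
            return
        # Sleep at the end of the loop so completion is detected promptly on
        # the next returncode check.
        await asyncio.sleep(1)
```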
Tested with a previously stuck video file; progress now updates
continuously throughout the encoding process.
Features:
- React frontend with video/image browser
- Python FastAPI backend with video compression
- Docker containerized setup
- Video compression with FFmpeg (two-pass encoding)
- Real-time job monitoring with SSE (see the endpoint sketch after this list)
- Global active jobs monitor
- Clickable header to reset navigation
- Toast notifications for user feedback
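A minimal sketch of an SSE endpoint in FastAPI; the route path, payload shape, 1-second interval, and the snapshot placeholder are all assumptions:

```python
import asyncio
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

async def get_jobs_snapshot() -> list[dict]:
    # Placeholder; the real app would return the active-jobs snapshot.
    return []

@app.get("/api/jobs/stream")
async def stream_jobs() -> StreamingResponse:
    async def event_stream():
        while True:
            snapshot = await get_jobs_snapshot()
            yield f"data: {json.dumps(snapshot)}\n\n"  # one SSE frame per poll
            await asyncio.sleep(1)

    return StreamingResponse(event_stream(), media_type="text/event-stream")
```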