Commit Graph

10 Commits

Author SHA1 Message Date
Alihan
91141eadcf Implement in-place compression file replacement with enhanced validation
## Overview
Refactored compression system to replace original files in-place instead of creating duplicate files with _compressed suffix. This eliminates disk space waste when compressing videos.

## Key Changes

### Backend (backend/compression.py)
- **Added enhanced validation** (`validate_video_enhanced`):
  - FFmpeg decode validation (catches corrupted files)
  - File size sanity check (minimum 1KB)
  - Duration comparison between original and compressed (±1 second tolerance)

- **Implemented safe file replacement** (`safe_replace_file`):
  - Enhanced validation before any file operations
  - Atomic file operations using `os.replace()` to prevent race conditions
  - Backup strategy: original → .backup during replacement
  - Automatic rollback if validation or replacement fails
  - Comprehensive logging at each step

- **Updated compress_video function**:
  - Removed `_compressed_75` suffix from output filenames
  - Changed to use original file path for in-place replacement
  - Calls new safe_replace_file method instead of simple os.rename
  - Improved logging messages to reflect replacement operation

### Frontend (frontend/src/App.jsx)
- Updated compression success messages to show reduction percentage
- Messages now indicate file is being replaced, not duplicated
- Displays compression reduction percentage (e.g., "Started compressing file.mp4 (75% reduction)")

## Safety Features
1. **Backup-and-restore**: Original file backed up as .backup until verification passes
2. **Enhanced validation**: Three-level validation before committing to replacement
3. **Atomic operations**: Uses os.replace() for atomic file replacements
4. **Automatic rollback**: If any step fails, original is restored from backup
5. **Comprehensive logging**: All file operations logged for debugging

## File Operations Flow
1. Compress to temp file
2. Enhanced validation (decode + size + duration)
3. Create backup: original.mp4 → original.mp4.backup
4. Move compressed: temp → original.mp4
5. Verify final file is accessible
6. Delete backup only after final verification
7. Rollback if any step fails

## Testing
- Docker rebuild: `docker compose down && docker compose build && docker compose up -d`
- Manual testing recommended for compression jobs with various video files

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-19 01:25:40 +03:00
Alihan
d8242d47b9 Fix compression job watchdog timeout and error handling
- Increase watchdog timeout to 15 minutes (configurable via COMPRESSION_WATCHDOG_TIMEOUT env var)
- Fix stderr buffer overflow by setting 8MB limit and truncating to last 1000 lines
- Keep failed jobs visible in UI until manually removed (auto-prune only completed/cancelled)
- Add job removal endpoint: DELETE /api/compress/jobs/{job_id}?action=remove

Resolves issue where jobs stuck at 70% were killed prematurely due to stderr buffer overflow
and aggressive 5-minute timeout.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-13 01:59:28 +03:00
Alihan
dd5fe1617a Fix 9 critical bugs: security, race conditions, precision, and UX improvements
Security fixes:
- Add filename path traversal validation (missing "/" check)
- Prevents attacks like filename="../../../etc/passwd"

Race condition and concurrency fixes:
- Add async locking to get_jobs_snapshot() to prevent dictionary iteration errors
- Fix watchdog loop to detect process completion immediately (move sleep to end)
- Fix EventSource ref updates during SSE reconnection to prevent memory leaks

Precision and calculation fixes:
- Keep duration as float instead of int for accurate bitrate calculations (~1% improvement)
- Prevents cumulative rounding errors in compression

Type safety improvements:
- Import and use Any from typing module instead of lowercase "any"
- Fixes Python type hints for proper static analysis

Media handling improvements:
- Determine MIME types dynamically using mimetypes module
- Supports MOV (video/quicktime), AVI, PNG properly instead of hardcoded types

UX fixes:
- Fix formatETA() to handle 0 seconds correctly (was showing "--" instead of "0m 0s")
- Use stable key for React video element (prevents unnecessary remounts)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-13 00:07:54 +03:00
Alihan
2acb4d9f4e Fix compression for files in root directory
- Handle __root__ special case in compression start endpoint
- Allow compression of videos not organized in date folders
2025-10-12 23:47:15 +03:00
Alihan
a7b7ad41e9 Fix video playback for files in root directory
- Handle __root__ special case in video streaming and image endpoints
- Support locations with files not organized by date folders
- Add visual indicator for root-level file collections in UI
2025-10-12 23:42:21 +03:00
Alihan
dec49a43f9 Add filesystem health monitoring and compression queue system
- Implement periodic filesystem write permission checks (60-minute intervals)
- Add real-time health status monitoring with SSE endpoints
- Display system health banner when storage issues detected
- Limit compression to 1 concurrent job with queue support
- Add max queue limit of 10 pending jobs
- Show queue positions for pending compression jobs
- Update button text dynamically (Start/Queue Compression)
- Enable write access to footage mount in Docker
- Add comprehensive logging for health checks and compression

Co-Authored-By: Alihan <alihan@example.com>
2025-10-12 22:54:21 +03:00
Alihan
b01fea34aa Refactor codebase: Fix vulnerabilities, improve performance, and eliminate technical debt
## Critical Security Fixes
- Fix path traversal vulnerability with proper sanitization and symlink resolution
- Add CORS configuration via ALLOWED_ORIGINS environment variable
- Validate all user-supplied path components before file operations

## Performance Improvements
- Replace synchronous file.stat() with async aiofiles.os.stat()
- Add TTL-based directory listing cache (60s) for locations/dates/files
- Optimize regex compilation (moved to class level, ~1000x fewer compilations)
- Consolidate duplicate SSE connections into shared useCompressionJobs hook

## Bug Fixes
- Fix race condition in SSE by adding async lock and snapshot method
- Fix memory leak with periodic job pruning (every 5 minutes, max 100 jobs)
- Fix ETA calculation double-counting in pass 1
- Fix video validation to check actual errors, not just stderr presence

## Code Quality
- Replace all print() with proper logging framework (INFO/WARNING/ERROR levels)
- Extract magic numbers to named constants (MAX_STORED_JOBS, WATCHDOG_TIMEOUT, etc)
- Remove dead code (unused CompressionPanel.jsx component)
- Create shared utility modules (formatters.js, useCompressionJobs.js)
- Eliminate duplicate functions (formatFileSize, formatETA across 3 files)

## Impact
- Security: Eliminated path traversal vulnerability
- Stability: Fixed race condition, memory leak, cancellation bugs
- Performance: 2-3x faster directory listings, non-blocking I/O
- Maintainability: Proper logging, DRY principles, configuration constants

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-12 20:06:31 +03:00
Alihan
7cd79216fe Improve compression queue: Add resource limits and security
- Add concurrency limiting with semaphore (max 2 concurrent jobs)
- Add job pruning to prevent unbounded memory growth (max 100 jobs)
- Add file path validation to ensure files within allowed directory
- Fix ffmpeg2pass log cleanup to use source file directory
- Add SSE reconnect handler to re-sync jobs on connection restore

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-12 19:48:46 +03:00
Alihan
752fa4eefd Fix compression job deadlock and add watchdog timer
Resolved critical issue where compression jobs would get stuck at random
progress percentages (e.g., 35.5%) due to pipe buffer deadlock.

**Root Cause:**
- Python code only read ffmpeg's stdout for progress updates
- ffmpeg's stderr pipe buffer (64KB) would fill with output
- ffmpeg blocked writing to stderr, Python blocked reading stdout
- Result: deadlock with job appearing stuck but ffmpeg still using CPU

**Fixes:**
- Read stdout and stderr concurrently using asyncio.gather()
- Prevents pipe buffer deadlock by consuming both streams
- Added watchdog timer to detect genuinely stuck jobs (5 min timeout)
- Improved error logging with stderr capture
- Better error messages showing exact failure reason

**Additional Improvements:**
- Watchdog sets job.error with informative message before killing
- Captures last 50 lines of stderr on failure for debugging
- Enhanced cancellation handling with multiple checkpoints

Tested with previously stuck video file - progress now updates
continuously throughout encoding process.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-12 03:05:14 +03:00
Alihan
0d71830cfb Initial commit: Drone Footage Manager with Video Compression
- React frontend with video/image browser
- Python FastAPI backend with video compression
- Docker containerized setup
- Video compression with FFmpeg (two-pass encoding)
- Real-time job monitoring with SSE
- Global active jobs monitor
- Clickable header to reset navigation
- Toast notifications for user feedback
2025-10-12 02:22:12 +03:00