Fast-Whisper-MCP-Server/run_mcp_server.sh
Alihan 40555592e6 Fix deadlock in job queue and refactor Phase 2 tests
- Fix: Change threading.Lock to threading.RLock in JobQueue to prevent deadlock
  - Issue: list_jobs() acquired lock then called get_job_status() which tried to acquire same lock
  - Solution: Use re-entrant lock (RLock) to allow nested lock acquisition (src/core/job_queue.py:144)
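The fix boils down to making the queue's lock re-entrant. A minimal sketch of the pattern, assuming method names from the commit message rather than the actual src/core/job_queue.py code:

    import threading

    class JobQueue:
        def __init__(self):
            # RLock lets the thread that already holds the lock acquire it again,
            # so list_jobs() can call get_job_status() while locked.
            self._lock = threading.RLock()
            self._jobs = {}

        def get_job_status(self, job_id):
            with self._lock:
                return self._jobs[job_id]["status"]

        def list_jobs(self):
            with self._lock:
                # With a plain threading.Lock this nested acquisition blocks forever,
                # because Lock has no notion of an owning thread.
                return {job_id: self.get_job_status(job_id) for job_id in self._jobs}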

- Refactor: Update test_phase2.py to use real test.mp3 file
  - Changed _create_test_audio_file() to return /home/uad/agents/tools/mcp-transcriptor/data/test.mp3
  - Removed specific text assertion, now just verifies transcription is not empty
  - Tests use tiny model for speed while processing real 6.95s audio file
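A hedged sketch of the relaxed assertion, assuming a pytest-style test and a transcribe() entry point (both illustrative; the real test_phase2.py may differ):

    def _create_test_audio_file() -> str:
        # Return the real fixture instead of synthesizing audio on the fly.
        return "/home/uad/agents/tools/mcp-transcriptor/data/test.mp3"

    def test_transcription_returns_text():
        audio_path = _create_test_audio_file()
        # Hypothetical call; the actual transcription API in this project may differ.
        text = transcribe(audio_path, model="tiny")
        # The exact wording of the 6.95s clip is no longer asserted, only that
        # a non-empty transcription was produced.
        assert text.strip() != ""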

- Update: Improve audio validation error handling in transcriber.py
  - Changed validate_audio_file() to use exception-based validation
  - Better error messages for API responses
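A minimal sketch of the exception-based pattern, with illustrative exception types and messages rather than the actual transcriber.py code:

    import os

    def validate_audio_file(file_path: str) -> None:
        # Raise instead of returning an error code so callers can surface the
        # message directly in an API response.
        if not os.path.isfile(file_path):
            raise FileNotFoundError(f"Audio file not found: {file_path}")
        if os.path.getsize(file_path) == 0:
            raise ValueError(f"Audio file is empty: {file_path}")
        # Further checks (extension, duration, decodability) would follow the same pattern.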

- Add: Job queue configuration to startup scripts
  - Added JOB_QUEUE_MAX_SIZE, JOB_METADATA_DIR, JOB_RETENTION_DAYS env vars
  - Added GPU health monitoring configuration
  - Create job metadata directory on startup
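For illustration, a sketch of how these env vars might be consumed on the Python side; defaults and module layout are assumptions that mirror the startup script below:

    import os
    from pathlib import Path

    # Defaults mirror run_mcp_server.sh; the project's real defaults may differ.
    JOB_QUEUE_MAX_SIZE = int(os.getenv("JOB_QUEUE_MAX_SIZE", "100"))
    JOB_METADATA_DIR = Path(os.getenv("JOB_METADATA_DIR", "outputs/jobs"))
    JOB_RETENTION_DAYS = int(os.getenv("JOB_RETENTION_DAYS", "7"))

    GPU_HEALTH_CHECK_ENABLED = os.getenv("GPU_HEALTH_CHECK_ENABLED", "false").lower() == "true"
    GPU_HEALTH_CHECK_INTERVAL_MINUTES = int(os.getenv("GPU_HEALTH_CHECK_INTERVAL_MINUTES", "10"))
    GPU_HEALTH_TEST_MODEL = os.getenv("GPU_HEALTH_TEST_MODEL", "tiny")

    # Create the job metadata directory on startup, matching the mkdir -p in the script.
    JOB_METADATA_DIR.mkdir(parents=True, exist_ok=True)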
2025-10-10 00:11:36 +03:00

62 lines · 2.2 KiB · Bash · Executable File

#!/bin/bash
set -e
datetime_prefix() {
    date "+[%Y-%m-%d %H:%M:%S]"
}
# Get current user ID to avoid permission issues
USER_ID=$(id -u)
GROUP_ID=$(id -g)
# Set Python path to include src directory
export PYTHONPATH="/home/uad/agents/tools/mcp-transcriptor/src:$PYTHONPATH"
# Set CUDA library path
export LD_LIBRARY_PATH=/usr/local/cuda-12.4/targets/x86_64-linux/lib:$LD_LIBRARY_PATH
# Set environment variables
export CUDA_VISIBLE_DEVICES=1
export WHISPER_MODEL_DIR="/home/uad/agents/tools/mcp-transcriptor/data/models"
export TRANSCRIPTION_OUTPUT_DIR="/media/raid/agents/tools/mcp-transcriptor/outputs"
export TRANSCRIPTION_BATCH_OUTPUT_DIR="/media/raid/agents/tools/mcp-transcriptor/outputs/batch"
export TRANSCRIPTION_MODEL="large-v3"
export TRANSCRIPTION_DEVICE="cuda"
export TRANSCRIPTION_COMPUTE_TYPE="float16" # faster-whisper compute type (e.g. float16, int8); the device itself is set via TRANSCRIPTION_DEVICE
export TRANSCRIPTION_OUTPUT_FORMAT="txt"
export TRANSCRIPTION_BEAM_SIZE="2"
export TRANSCRIPTION_TEMPERATURE="0.0"
export TRANSCRIPTION_USE_TIMESTAMP="false"
export TRANSCRIPTION_FILENAME_PREFIX="test_"
# GPU Auto-Reset Configuration
export GPU_RESET_COOLDOWN_MINUTES=5 # Minimum time between GPU reset attempts
# Job Queue Configuration
export JOB_QUEUE_MAX_SIZE=100
export JOB_METADATA_DIR="/media/raid/agents/tools/mcp-transcriptor/outputs/jobs"
export JOB_RETENTION_DAYS=7
# GPU Health Monitoring
export GPU_HEALTH_CHECK_ENABLED=true
export GPU_HEALTH_CHECK_INTERVAL_MINUTES=10
export GPU_HEALTH_TEST_MODEL="tiny"
# Log start of the script
echo "$(datetime_prefix) Starting whisper server script..."
echo "test: $WHISPER_MODEL_DIR"
# Optional: Verify required directories exist
if [ ! -d "$WHISPER_MODEL_DIR" ]; then
    echo "$(datetime_prefix) Error: Whisper model directory does not exist: $WHISPER_MODEL_DIR"
    exit 1
fi
# Ensure job metadata directory exists
mkdir -p "$JOB_METADATA_DIR"
# Run the Python script with the defined environment variables
#/home/uad/agents/tools/mcp-transcriptor/venv/bin/python /home/uad/agents/tools/mcp-transcriptor/whisper_server.py 2>&1 | tee /home/uad/agents/tools/mcp-transcriptor/mcp.logs
/home/uad/agents/tools/mcp-transcriptor/venv/bin/python -u /home/uad/agents/tools/mcp-transcriptor/src/servers/whisper_server.py 2>&1 | tee /home/uad/agents/tools/mcp-transcriptor/mcp.logs