Fast-Whisper-MCP-Server/requirements.txt

# uv pip install -r ./requirements.txt --index-url https://download.pytorch.org/whl/cu124
faster-whisper
torch  # optionally pin, e.g. ==2.6.0+cu124
torchaudio  # optionally pin, e.g. ==2.6.0+cu124
# uv pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124
# pip install "faster-whisper>=0.9.0"
# pip install "mcp[cli]>=1.2.0"
mcp[cli]
# REST API dependencies
fastapi>=0.115.0
uvicorn[standard]>=0.32.0
python-multipart>=0.0.9
aiofiles>=23.0.0 # Async file I/O
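# Example launch command for the FastAPI app once these are installed
# (the module path is project-specific; "api_server:app" below is only a placeholder):
#   uvicorn api_server:app --host 0.0.0.0 --port 8000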
# Test audio generation dependencies
gTTS>=2.3.0
pyttsx3>=2.90
scipy>=1.10.0
numpy>=1.24.0
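# Quick sanity check for the test-audio stack (illustrative only; gTTS needs network access):
#   python -c "from gtts import gTTS; gTTS('hello world').save('test.mp3')"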
# PyTorch Installation Guide:
# Please install the appropriate version of PyTorch based on your CUDA version:
#
# • CUDA 12.6:
# pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu126
#
# • CUDA 12.4:
# pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124
#
# • CUDA 12.1:
# pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu121
#
# • CPU version:
# pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cpu
#
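# After installing, you can verify the build and GPU visibility with, e.g.:
#   python -c "import torch; print(torch.__version__, torch.cuda.is_available())"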