Fast-Whisper-MCP-Server/docker-run-mcp.sh
Alihan (fb1e5dceba): Upgrade to PyTorch 2.6.0 and enhance GPU reset script with Ollama management
- Upgrade PyTorch and torchaudio to 2.6.0 with CUDA 12.4 support
- Update GPU reset script to gracefully stop/start Ollama via supervisorctl (see the sketch after this message)
- Add Docker Compose configuration for both API and MCP server modes
- Implement comprehensive Docker entrypoint for multi-mode deployment (sketched after the script below)
- Add GPU health check cleanup to prevent memory leaks
- Fix transcription memory management with proper resource cleanup
- Add filename security validation to prevent path traversal attacks (see the sketch after this message)
- Include .dockerignore for optimized Docker builds
- Remove deprecated supervisor configuration

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-27 23:01:22 +03:00
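
The GPU reset script itself is not shown on this page. A minimal sketch of the stop/reset/start pattern the commit describes, assuming Ollama runs under supervisord as a program named "ollama" and using a placeholder reset command (both are assumptions, not the repository's actual code):

#!/bin/bash
set -e
# Stop Ollama so it releases its VRAM before the GPU is reset.
supervisorctl stop ollama || true
# Placeholder for the actual reset step; the real command is device specific.
nvidia-smi --gpu-reset -i 0 || true
# Bring Ollama back up once the GPU is healthy again.
supervisorctl start ollama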
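The filename validation lives in the server code, not in this script. Expressed as shell for consistency with the file below, the idea is roughly the following; the /outputs base directory matches the volume mounted later in the run script, and everything else is illustrative:

# Reject filenames that could escape the output directory.
validate_filename() {
  local name="$1" base="/outputs" resolved
  case "$name" in
    ""|*/*|*..*) echo "rejected filename: $name" >&2; return 1 ;;
  esac
  resolved="$(realpath -m -- "$base/$name")"
  [[ "$resolved" == "$base"/* ]] || { echo "path escapes $base" >&2; return 1; }
}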


#!/bin/bash
set -e
datetime_prefix() {
  date "+[%Y-%m-%d %H:%M:%S]"
}
echo "$(datetime_prefix) Starting Whisper Transcriptor in MCP mode..."
# Check if image exists
if ! docker image inspect transcriptor-apimcp:latest &> /dev/null; then
  echo "$(datetime_prefix) Image not found. Building first..."
  ./docker-build.sh
fi
# Stop and remove existing container if running
if docker ps -a --format '{{.Names}}' | grep -q '^transcriptor-mcp$'; then
  echo "$(datetime_prefix) Stopping existing container..."
  docker stop transcriptor-mcp || true
  docker rm transcriptor-mcp || true
fi
# Run the container in MCP mode (interactive stdio)
echo "$(datetime_prefix) Starting MCP server in stdio mode..."
echo "$(datetime_prefix) Press Ctrl+C to stop"
echo ""
docker run -it --rm \
  --name transcriptor-mcp \
  --gpus all \
  -e SERVER_MODE=mcp \
  -e CUDA_VISIBLE_DEVICES=0 \
  -e TRANSCRIPTION_MODEL=large-v3 \
  -e TRANSCRIPTION_DEVICE=auto \
  -e TRANSCRIPTION_COMPUTE_TYPE=auto \
  -e JOB_QUEUE_MAX_SIZE=100 \
  -v "$(pwd)/models:/models" \
  -v "$(pwd)/outputs:/outputs" \
  -v "$(pwd)/logs:/logs" \
  transcriptor-apimcp:latest
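
SERVER_MODE=mcp above selects the MCP branch of the multi-mode entrypoint mentioned in the commit message. The entrypoint is not part of this file; a minimal sketch of the dispatch it presumably performs, with assumed script names that are not the repository's actual ones:

#!/bin/bash
# Hypothetical entrypoint: pick a server based on SERVER_MODE.
case "${SERVER_MODE:-api}" in
  api) exec python api_server.py ;;  # HTTP API mode (assumed script name)
  mcp) exec python mcp_server.py ;;  # MCP stdio mode (assumed script name)
  *)   echo "Unknown SERVER_MODE: ${SERVER_MODE}" >&2; exit 1 ;;
esac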
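To sanity-check the stdio transport without a full MCP client, you can pipe a single JSON-RPC message into the container. Note the use of -i without -t: a pseudo-TTY rejects piped stdin. The payload below is an abbreviated initialize request; the exact fields and message framing depend on the MCP protocol version the server implements.

# Hypothetical smoke test: send one JSON-RPC message over stdio.
printf '%s\n' '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}' |
  docker run -i --rm --gpus all -e SERVER_MODE=mcp \
    transcriptor-apimcp:latest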