Fast-Whisper-MCP-Server/docker-build.sh
Alihan fb1e5dceba Upgrade to PyTorch 2.6.0 and enhance GPU reset script with Ollama management
- Upgrade PyTorch and torchaudio to 2.6.0 with CUDA 12.4 support
- Update GPU reset script to gracefully stop/start Ollama via supervisorctl
- Add Docker Compose configuration for both API and MCP server modes
- Implement comprehensive Docker entrypoint for multi-mode deployment
- Add GPU health check cleanup to prevent memory leaks
- Fix transcription memory management with proper resource cleanup
- Add filename security validation to prevent path traversal attacks
- Include .dockerignore for optimized Docker builds
- Remove deprecated supervisor configuration

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-27 23:01:22 +03:00
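The "comprehensive Docker entrypoint for multi-mode deployment" mentioned in the commit might dispatch on an environment variable, roughly as sketched below. The variable name `SERVER_MODE` and the launch commands are assumptions for illustration; they do not appear on this page.

```shell
#!/bin/bash
# Hypothetical sketch of a multi-mode entrypoint: pick the command to run
# based on SERVER_MODE (variable name assumed, not from the source).
set -e

select_command() {
    # Echo the launch command for the requested mode; default to "api".
    # Unknown modes are rejected so a typo fails fast instead of silently.
    case "${1:-api}" in
        api) echo "python api_server.py" ;;   # assumed entry script
        mcp) echo "python mcp_server.py" ;;   # assumed entry script
        *)   echo "unknown mode: $1" >&2; return 1 ;;
    esac
}

# In a real entrypoint, the final step would exec the selected command so it
# becomes PID 1 and receives container signals, e.g.:
#   exec $(select_command "$SERVER_MODE")
```

Keeping the dispatch in a function makes the mode table easy to test in isolation, separate from the `exec` hand-off.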


#!/bin/bash
set -e

# Print a timestamp prefix for log lines, e.g. [2025-10-27 23:01:22]
datetime_prefix() {
    date "+[%Y-%m-%d %H:%M:%S]"
}

echo "$(datetime_prefix) Building Whisper Transcriptor Docker image..."

# Build the Docker image
docker build -t transcriptor-apimcp:latest .

echo "$(datetime_prefix) Build complete!"
echo "$(datetime_prefix) Image: transcriptor-apimcp:latest"
echo ""
echo "Usage:"
echo "  API mode: ./docker-run-api.sh"
echo "  MCP mode: ./docker-run-mcp.sh"
echo "  Or use:   docker-compose up transcriptor-api"
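The script's usage text refers to `docker-compose up transcriptor-api`, so the Compose file presumably defines a service of that name built from this image. A minimal sketch of what such a service entry could look like follows; the environment variable, port, and GPU reservation are assumptions, not taken from this page.

```yaml
# Hypothetical docker-compose.yml fragment. Only the service name
# transcriptor-api and the image tag transcriptor-apimcp:latest appear
# in the source; everything else here is illustrative.
services:
  transcriptor-api:
    image: transcriptor-apimcp:latest
    environment:
      - SERVER_MODE=api        # assumed mode-selection variable
    ports:
      - "8000:8000"            # assumed API port
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia   # GPU access, consistent with the CUDA 12.4 build
              count: all
              capabilities: [gpu]
```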