Add Claude Code SDK tutorials and examples
This PR adds comprehensive tutorials and examples for the Claude Code SDK, including:
- Research agent implementation with web search capabilities
- Chief of Staff agent with multi-agent coordination
- Observability agent with Docker configuration
- Supporting utilities and documentation
The examples demonstrate key SDK features:
- Multi-turn conversations with ClaudeSDKClient (see the sketch after this list)
- Custom output styles and slash commands
- Hooks for automated actions and governance
- Script execution via Bash tool
- Multi-agent orchestration patterns
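As an illustration of the multi-turn pattern above, here is a minimal sketch assuming the claude-code-sdk Python package's ClaudeSDKClient and ClaudeCodeOptions interface (option and message-type names may differ between SDK versions, and the Claude Code CLI must be installed):

```python
import asyncio

from claude_code_sdk import (
    AssistantMessage,
    ClaudeCodeOptions,
    ClaudeSDKClient,
    TextBlock,
)


async def print_response(client: ClaudeSDKClient) -> None:
    # Stream messages for the current turn and print the assistant's text blocks.
    async for message in client.receive_response():
        if isinstance(message, AssistantMessage):
            for block in message.content:
                if isinstance(block, TextBlock):
                    print(block.text)


async def main() -> None:
    options = ClaudeCodeOptions(
        system_prompt="You are a concise research assistant.",
        allowed_tools=["WebSearch", "Bash"],  # assumed tool names for this sketch
        max_turns=10,
    )
    async with ClaudeSDKClient(options=options) as client:
        # First turn
        await client.query("Find three recent articles on prompt caching.")
        await print_response(client)
        # Second turn reuses the same session, so prior context is retained
        await client.query("Summarize the most relevant one in two sentences.")
        await print_response(client)


asyncio.run(main())
```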
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: rodrigo olivares <rodrigoolivares@anthropic.com>
Co-authored-by: Alex Notov <zh@anthropic.com>
Previously, CI workflows only monitored notebooks in the skills/ directory.
This caused PR #193 to merge without pedagogical review since its notebook
was in tool_evaluation/.
Changes:
- Update all notebook-related CI workflows to monitor **/*.ipynb
- Add SAST security monitoring workflow for code security analysis
- Update validate_notebooks.py to check all repository notebooks (discovery sketched below)
- Fix notebook discovery in links.yml workflow
This ensures comprehensive CI coverage for all 9+ directories containing
notebooks (skills/, tool_evaluation/, misc/, tool_use/, third_party/, etc.)
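For illustration, a hypothetical discovery helper for validate_notebooks.py that walks the whole repository instead of just skills/ (function name and exclusion list are assumptions, not the actual script):

```python
from pathlib import Path


def discover_notebooks(repo_root: str = ".") -> list[Path]:
    """Return every notebook in the repo, skipping checkpoints and environment dirs."""
    skip_parts = {".ipynb_checkpoints", ".venv", "node_modules"}
    return sorted(
        nb
        for nb in Path(repo_root).rglob("*.ipynb")
        if not skip_parts & set(nb.parts)
    )


if __name__ == "__main__":
    for nb in discover_notebooks():
        print(nb)
```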
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Update all slash commands to include the PR number from the GitHub Actions
context variable when posting comments. This ensures Claude knows which
PR to comment on when running in CI.
Add an explicit instruction to use the 'gh pr comment' command to post
reviews to the PR. Without this explicit direction, Claude was performing
reviews but not posting them.
- Create .github/slash-commands/ with link-review, model-check, notebook-review
- Update GitHub Actions to use slash commands instead of inline prompts
- Add symlinks in .claude/commands/ for local development
- Document slash commands in CONTRIBUTING.md
- Use claude-code-action@v1 instead of beta
This allows developers to run the same CI validations locally using
Claude Code slash commands before pushing changes.
Major simplification of CI/CD:
- Remove complex Python model validation scripts (400+ lines)
- Let Claude handle model validation intelligently via GitHub Actions
- Claude fetches latest models from docs.anthropic.com/en/docs/about-claude/models/overview.md
- Add a comprehensive notebook validation script for local testing (sketched below):
  - Interactive dashboard with progress tracking
  - Auto-fix for deprecated models
  - GitHub issue export format
  - Idempotent with state persistence
- Simplify CI to use single Python version (3.11)
- Update workflows to use Claude for all intelligent validation
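A sketch of what the deprecated-model portion of such a local validation script could look like (the model mapping, file handling, and function name here are illustrative, not the actual script):

```python
# Illustrative only: scan code cells for deprecated model IDs and optionally
# rewrite them in place. The real script would derive its mapping from the docs page.
import json
from pathlib import Path

DEPRECATED_MODELS = {
    "claude-3-sonnet-20240229": "claude-sonnet-4-5",  # example mapping, assumed
}


def fix_deprecated_models(notebook: Path, apply_fix: bool = False) -> list[str]:
    nb = json.loads(notebook.read_text())
    findings = []
    for cell in nb.get("cells", []):
        if cell.get("cell_type") != "code":
            continue
        src = cell.get("source", [])
        text = "".join(src) if isinstance(src, list) else src
        for old, new in DEPRECATED_MODELS.items():
            if old in text:
                findings.append(f"{notebook}: {old} -> {new}")
                if apply_fix:
                    text = text.replace(old, new)
                    cell["source"] = text
    if apply_fix and findings:
        notebook.write_text(json.dumps(nb, indent=1) + "\n")
    return findings
```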
Benefits:
- No more hardcoded model lists to maintain
- Claude understands context (e.g., educational examples)
- 50% faster CI (removed matrix strategy)
- Single source of truth for models (docs site)
Rationale for removing the hardcoded model check:
- Brittle check that would break with new model versions
- Claude already provides intelligent model recommendations
- Not all notebooks should use Haiku (some need more capability)
Use Python 3.11 only in CI, since notebooks contain no version-specific code:
- Reduces CI runtime by 50%
- Reduces API costs by 50% for notebook execution
- Simplifies PR checks (one check instead of two identical ones)
- Add skip_code_blocks=true (important for notebooks with example code)
- Add require_https=false for development flexibility
- Simplify accept codes to just 403 and 429 (matching the docs repo)
- Add www.claude.ai to exclusions
- Exclude .github/ and scripts/ paths from checking
- Better comments explaining each setting
- Remove brittle hardcoded API key checks from validate_notebooks.py
- Enhance Claude review to check for any secrets (not just Anthropic)
- Claude understands context (e.g., educational 'bad examples' are OK)
- Check for clear introduction explaining the notebook's purpose
- Validate configuration instructions are present
- Ensure connecting explanations between cells for better flow
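These criteria are applied by the Claude review, but the first could also be enforced mechanically. A hypothetical check along those lines (not the actual validate_notebooks.py code):

```python
# Hypothetical structure check: require an introductory markdown cell
# before the first code cell.
import json
from pathlib import Path


def has_introduction(notebook: Path, min_words: int = 20) -> bool:
    """True if the notebook opens with a markdown cell long enough to count as an intro."""
    cells = json.loads(notebook.read_text()).get("cells", [])
    for cell in cells:
        if cell["cell_type"] == "code":
            return False  # reached code before any substantial markdown
        if cell["cell_type"] == "markdown":
            src = cell["source"]
            text = "".join(src) if isinstance(src, list) else src
            if len(text.split()) >= min_words:
                return True
    return False
```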
- Add claude-notebook-review.yml for intelligent code review
- Add claude-link-review.yml for link quality checks
- Update notebook-quality.yml to properly capture test outputs
- Use anthropics/claude-code-action@beta, as the docs repo does
Notebook outputs are educational content in cookbook repositories.
They show users what to expect when running the code.
- Remove nbstripout from all dependencies and configurations
- Remove nbstripout check from CI workflow
- Update documentation to explain outputs are intentional
- Make validation scripts non-blocking for POC
- Fix lychee configuration conflict
The CI now validates notebooks without removing demonstration outputs.
- Make ruff checks non-blocking (|| true) for notebooks
- Make model validation report issues but not fail
- This allows the POC to demonstrate issue detection without blocking
The CI now shows what issues exist without preventing the PR from
being merged, which is appropriate for a proof of concept.
Remove nbqa in favor of ruff's native Jupyter support (v0.6.0+).
Replace papermill with nbconvert due to uv dependency resolution issues.
Also remove S105/S106 ignores to enforce better security practices.
- Update pyproject.toml to use ruff v0.12.12 with native notebook support
- Replace papermill with nbconvert for notebook execution (sketched below)
- Remove nbqa from all dependencies and pre-commit hooks
- Update GitHub Actions workflows to use ruff directly
- Remove hardcoded password ignores for better security
- Update documentation to reflect simplified setup
- Add dummy package structure for hatchling build system
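For reference, notebook execution with nbconvert's ExecutePreprocessor looks roughly like this (the notebook path is illustrative):

```python
from pathlib import Path

import nbformat
from nbconvert.preprocessors import ExecutePreprocessor


def execute_notebook(path: Path, timeout: int = 600) -> None:
    nb = nbformat.read(path, as_version=4)
    ep = ExecutePreprocessor(timeout=timeout, kernel_name="python3")
    # Run with the notebook's own directory as the working directory so
    # relative paths inside the notebook resolve correctly.
    ep.preprocess(nb, {"metadata": {"path": str(path.parent)}})


execute_notebook(Path("skills/classification/guide.ipynb"))
```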
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Configure lychee for notebook link validation
- Set up GitHub workflow for PR and scheduled checks
- Exclude API endpoints and localhost from checks
- Add PR comment integration for broken links
- Add nbstripout to clean notebook outputs
- Configure nbqa with ruff for notebook linting
- Add ruff for Python file formatting
- Add custom hooks for model and notebook validation
- Add notebook-quality.yml with papermill execution testing
- Add claude-model-check.yml for model validation
- Add security-scan.yml for secret detection
- Implement tiered testing (full execution for maintainers, mocked runs for external contributors)
- Add allowed_models.py that fetches the latest models from the Anthropic docs (sketched below)
- Implement 24-hour cache to avoid excessive requests
- Fall back to hardcoded list if fetch fails
- Add check_models.py to validate model usage in notebooks
- Add validate_notebooks.py for security and structure checks
- Update .gitignore for cache files
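A hypothetical reconstruction of the fetch-cache-fallback flow in allowed_models.py (cache path, regex, and fallback list are assumptions):

```python
import json
import re
import time
from pathlib import Path

import requests

DOCS_URL = "https://docs.anthropic.com/en/docs/about-claude/models/overview.md"
CACHE_FILE = Path(".cache/allowed_models.json")
CACHE_TTL = 24 * 60 * 60  # 24 hours
FALLBACK_MODELS = ["claude-sonnet-4-5", "claude-haiku-4-5", "claude-opus-4-1"]  # pinned fallback


def get_allowed_models() -> list[str]:
    # Serve from cache if it is fresher than 24 hours.
    if CACHE_FILE.exists() and time.time() - CACHE_FILE.stat().st_mtime < CACHE_TTL:
        return json.loads(CACHE_FILE.read_text())
    try:
        page = requests.get(DOCS_URL, timeout=10)
        page.raise_for_status()
        # Crude extraction of model-ID-looking strings; the real script may parse more carefully.
        models = sorted(set(re.findall(r"claude-[a-z0-9.-]+", page.text)))
    except requests.RequestException:
        return FALLBACK_MODELS
    CACHE_FILE.parent.mkdir(parents=True, exist_ok=True)
    CACHE_FILE.write_text(json.dumps(models))
    return models
```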
Update timestamp generation to snap to bucket boundaries to ensure
consistent data retrieval, since requests for partial time buckets may not return data.
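A sketch of the snapping logic, assuming hourly and daily bucket widths:

```python
from datetime import datetime, timedelta, timezone


def snap_to_bucket(ts: datetime, bucket: str = "1d") -> datetime:
    """Round a timestamp down to the start of its bucket (UTC)."""
    ts = ts.astimezone(timezone.utc)
    if bucket == "1h":
        return ts.replace(minute=0, second=0, microsecond=0)
    if bucket == "1d":
        return ts.replace(hour=0, minute=0, second=0, microsecond=0)
    raise ValueError(f"unsupported bucket width: {bucket}")


# e.g. query the last 7 full days ending at the most recent day boundary
ending_at = snap_to_bucket(datetime.now(timezone.utc))
starting_at = ending_at - timedelta(days=7)
```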
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Comprehensive guide to programmatically accessing Claude API usage and cost data for custom dashboards, cost monitoring, and usage analysis.
Features:
- Basic usage and cost tracking
- Time granularity and filtering options
- Grouping and breakdowns
- Priority Tier analysis
- Pagination for large datasets
- Error handling best practices
- Practical alerting examples
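A minimal sketch of the basic usage query, assuming the Admin API endpoint and parameter names shown below (verify them against the guide; an organization admin key is required, not a regular API key):

```python
import os
from datetime import datetime, timedelta, timezone

import requests

ADMIN_KEY = os.environ["ANTHROPIC_ADMIN_KEY"]  # organization admin key
BASE_URL = "https://api.anthropic.com/v1/organizations/usage_report/messages"  # assumed path
HEADERS = {"x-api-key": ADMIN_KEY, "anthropic-version": "2023-06-01"}

# Last 7 full days, aligned to a day boundary.
start = datetime.now(timezone.utc).replace(hour=0, minute=0, second=0, microsecond=0) - timedelta(days=7)
params = {
    "starting_at": start.strftime("%Y-%m-%dT%H:%M:%SZ"),
    "bucket_width": "1d",   # daily buckets
    "group_by[]": "model",  # break usage down per model
    "limit": 7,
}

resp = requests.get(BASE_URL, headers=HEADERS, params=params, timeout=30)
resp.raise_for_status()
for bucket in resp.json().get("data", []):
    print(bucket)
```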