Update README.md
README.md (10 changed lines)
@@ -4,15 +4,15 @@
 [](https://github.com/typpo/promptfoo/actions/workflows/main.yml)
 
-`promptfoo` is a tool for testing and evaluating LLM prompt quality.
+`promptfoo` is a tool for testing and evaluating LLM output quality.
 
 With promptfoo, you can:
 
-- **Systematically test prompts** against predefined test cases
+- **Systematically test prompts & models** against predefined test cases
 - **Evaluate quality and catch regressions** by comparing LLM outputs side-by-side
-- **Speed up evaluations** with caching and concurrent tests
-- **Score outputs automatically** by defining "expectations"
-- Use as a CLI, or integrate into your workflow as a library
+- **Speed up evaluations** with caching and concurrency
+- **Score outputs automatically** by defining test cases
+- Use as a CLI, library, or in CI/CD
 - Use OpenAI, Anthropic, open-source models like Llama and Vicuna, or integrate custom API providers for any LLM API
 
 The goal: **test-driven prompt engineering**, rather than trial-and-error.
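The updated bullets center on defining test cases and scoring outputs, so for context, here is a minimal sketch of what a promptfoo test-case configuration can look like. It follows promptfoo's documented YAML format; the file name `promptfooconfig.yaml`, the provider ID, the example prompt, and the `contains` assertion are illustrative assumptions and are not part of this commit.

```yaml
# promptfooconfig.yaml -- illustrative sketch, not part of this commit
prompts:
  - "Write a one-sentence summary of {{topic}}"

providers:
  - openai:gpt-3.5-turbo   # assumed provider ID; any supported LLM provider could go here

tests:
  - vars:
      topic: test-driven prompt engineering
    assert:
      - type: contains     # score the output automatically: it must mention "prompt"
        value: prompt
```

Running `npx promptfoo eval` against a config like this evaluates each prompt/provider/test-case combination and scores the outputs side-by-side, which is the workflow the revised README describes.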