README.md

@@ -4,28 +4,34 @@ Prompt Lab is a flexible playground for comparing and optimizing LLM prompts. It
[]

-Currently there's a public playground available at [https://promptlab.corbt.com/](https://promptlab.corbt.com/), but the recommended approach is to but the recommended approach is to [run locally](#running-locally).
+Currently there's a public playground available at [https://promptlab.corbt.com/](https://promptlab.corbt.com/), but the recommended approach is to [run locally](#running-locally).

## High-Level Features

**Configure Multiple Prompts**

Set up multiple prompt configurations and compare their output side-by-side. Each can be configured independently.

**Visualize Responses**

Inspect prompt completions side-by-side.

**Test Many Inputs**

Prompt Lab lets you *template* a prompt. Use the templating feature to run the prompts you're testing against many potential inputs for broader coverage of your problem space than you'd get with manual testing.

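The templating idea can be sketched roughly as follows. This is a minimal illustration, not Prompt Lab's actual implementation; the `{{variable}}` placeholder syntax and the scenario structure are assumptions made for the example.

```python
import re

# Minimal sketch of prompt templating: one template, many scenario inputs.
# The {{variable}} syntax and scenario shape are illustrative assumptions,
# not Prompt Lab's actual internals.
def fill_template(template: str, scenario: dict) -> str:
    """Replace each {{key}} placeholder with the scenario's value for that key."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(scenario[m.group(1)]), template)

template = "Summarize the following text in {{style}} style:\n\n{{text}}"
scenarios = [
    {"style": "formal", "text": "The cat sat on the mat."},
    {"style": "casual", "text": "Quarterly revenue grew 12%."},
]

# One rendered prompt per scenario, ready to send to each configured model.
prompts = [fill_template(template, s) for s in scenarios]
```

Running every prompt configuration against every scenario is what gives the side-by-side grid broader coverage than hand-written test inputs.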
**🪄 Auto-generate Test Scenarios**

Prompt Lab includes a tool to generate new test scenarios based on your existing prompts and scenarios. Just click "Autogenerate Scenario" to try it out!

**Prompt Validation and Typeahead**

We use OpenAI's OpenAPI spec to automatically provide typeahead and validate prompts.

[]

**Function Call Support**

Natively supports [OpenAI function calls](https://openai.com/blog/function-calling-and-other-api-updates) on supported models.

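For reference, a function definition in the shape OpenAI's function-calling API documents looks like the sketch below. The `get_weather` function and its fields are made-up examples; only the overall payload shape (name, description, JSON Schema `parameters`) follows OpenAI's documented format.

```python
import json

# Hypothetical function definition in the documented OpenAI function-calling
# shape: a name, a description, and a JSON Schema describing the parameters.
get_weather = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Berlin"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

# Definitions like this are passed as the `functions` parameter of a chat
# completion request, alongside the messages; no request is made here.
payload = {"model": "gpt-3.5-turbo-0613", "functions": [get_weather]}
print(json.dumps(payload, indent=2))
```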
[]

@@ -41,5 +47,5 @@ Prompt Lab currently supports GPT-3.5 and GPT-4. Wider model support is planned.
4. Clone this repository: `git clone https://github.com/prompt-lab/prompt-lab`
5. Install the dependencies: `cd prompt-lab && pnpm install`
6. Create a `.env` file (`cp .env.example .env`) and enter your `OPENAI_API_KEY`.
-7. Start the app: `pnpm start`
+7. Start the app: `pnpm dev`
8. Navigate to [http://localhost:3000](http://localhost:3000)