🧪 Prompt Lab
Prompt Lab is a flexible playground for comparing and optimizing LLM prompts. It lets you quickly generate, test and compare candidate prompts with realistic sample data.
Currently there's a public playground available at https://promptlab.corbt.com/, but the recommended approach is to run it locally.
High-Level Features
- Configure Multiple Prompts - Set up multiple prompt configurations and compare their output side-by-side. Each configuration can be adjusted independently.
- Visualize Responses - Inspect prompt completions side-by-side.
- Test Many Inputs - Prompt Lab lets you template a prompt. Use the templating feature to run the prompts you're testing against many potential inputs for broader coverage of your problem space than you'd get with manual testing.
- 🪄 Auto-generate Test Scenarios - Prompt Lab includes a tool to generate new test scenarios based on your existing prompts and scenarios. Just click "Autogenerate Scenario" to try it out!
- Prompt Validation and Typeahead - We use OpenAI's OpenAPI spec to automatically provide typeahead and validate prompts.
- Function Call Support - Natively supports OpenAI function calls on supported models.
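To illustrate the kind of request this maps onto, here is a minimal OpenAI chat completion call that includes a function definition. The `get_current_weather` function is a made-up example for illustration, not part of Prompt Lab; the request shape follows OpenAI's chat completions API:

```shell
# Example OpenAI chat completion request with a function definition.
# Requires OPENAI_API_KEY in the environment; get_current_weather is hypothetical.
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo-0613",
    "messages": [
      {"role": "user", "content": "What is the weather in Boston?"}
    ],
    "functions": [
      {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {"type": "string", "description": "City name"}
          },
          "required": ["city"]
        }
      }
    ]
  }'
```

When the model decides to call the function, the response's message contains a `function_call` with the function name and JSON-encoded arguments instead of plain content.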
Supported Models
Prompt Lab currently supports GPT-3.5 and GPT-4. Wider model support is planned.
Running Locally
- Install PostgreSQL.
- Install Node.js 20 (earlier versions will very likely work but aren't tested).
- Install pnpm: `npm i -g pnpm`
- Clone this repository: `git clone https://github.com/prompt-lab/prompt-lab`
- Install the dependencies: `cd prompt-lab && pnpm install`
- Create a `.env` file (`cp .env.example .env`) and enter your `OPENAI_API_KEY`.
- Start the app: `pnpm dev`
- Navigate to http://localhost:3000
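The steps above can be run as one shell session. This is a sketch that assumes PostgreSQL and Node.js 20 are already installed and the database is reachable with the connection settings from `.env.example`:

```shell
# One-time local setup for Prompt Lab (PostgreSQL and Node.js 20 assumed present).
npm i -g pnpm                                        # install the pnpm package manager
git clone https://github.com/prompt-lab/prompt-lab   # fetch the source
cd prompt-lab
pnpm install                                         # install dependencies
cp .env.example .env                                 # then edit .env and set OPENAI_API_KEY
pnpm dev                                             # start the dev server on http://localhost:3000
```

Once `pnpm dev` is running, open http://localhost:3000 in your browser.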