OpenPipe
OpenPipe is a flexible playground for comparing and optimizing LLM prompts. It lets you quickly generate, test and compare candidate prompts with realistic sample data.
Sample Experiments
These are simple experiments users have created that show how OpenPipe works.
You can use our hosted version of OpenPipe at https://openpipe.ai. You can also clone this repository and run it locally.
High-Level Features
Configure Multiple Prompts
Set up multiple prompt configurations and compare their output side-by-side. Each configuration can be adjusted independently.
Visualize Responses
Inspect prompt completions side-by-side.
Test Many Inputs
OpenPipe lets you template a prompt. Use the templating feature to run the prompts you're testing against many potential inputs for broader coverage of your problem space than you'd get with manual testing.
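The exact template syntax lives in the app, but conceptually each prompt is a function of the variables a scenario defines. The TypeScript sketch below only illustrates that idea; the `Scenario` type, its `language` field, and the `buildPrompt` helper are illustrative stand-ins, not OpenPipe's actual API.

```typescript
// Illustrative only: each templated prompt is conceptually a function of the
// scenario variables you define in your experiment.
type Scenario = { language: string };

function buildPrompt(scenario: Scenario) {
  return {
    model: "gpt-3.5-turbo-0613",
    messages: [
      {
        role: "user" as const,
        // The scenario variable is substituted into the prompt for each test case.
        content: `Write a haiku about programming in ${scenario.language}.`,
      },
    ],
  };
}

// Running the same template against many scenarios gives broad coverage.
const scenarios: Scenario[] = [{ language: "TypeScript" }, { language: "Rust" }];
const candidatePrompts = scenarios.map(buildPrompt);
console.log(candidatePrompts.length); // 2
```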
🪄 Auto-generate Test Scenarios
OpenPipe includes a tool to generate new test scenarios based on your existing prompts and scenarios. Just click "Autogenerate Scenario" to try it out!
Prompt Validation and Typeahead
We use OpenAI's OpenAPI spec to automatically provide typeahead and validate prompts.
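OpenAI publishes an OpenAPI spec for its endpoints, so validating a prompt config amounts to checking it against the JSON Schema for the chat-completion request body. The snippet below is a minimal sketch of that idea rather than OpenPipe's implementation: the schema fragment is hand-written for brevity and Ajv is assumed as the validator.

```typescript
import Ajv from "ajv";

// Hand-written stand-in for the request schema derived from OpenAI's OpenAPI
// spec; the real spec covers many more fields and models.
const chatCompletionSchema = {
  type: "object",
  required: ["model", "messages"],
  properties: {
    model: { type: "string" },
    temperature: { type: "number", minimum: 0, maximum: 2 },
    messages: {
      type: "array",
      items: {
        type: "object",
        required: ["role", "content"],
        properties: {
          role: { enum: ["system", "user", "assistant", "function"] },
          content: { type: "string" },
        },
      },
    },
  },
};

const ajv = new Ajv();
const validate = ajv.compile(chatCompletionSchema);

// An out-of-range temperature fails validation, which is the kind of feedback
// a playground can surface while you edit a prompt.
const isValid = validate({ model: "gpt-4", temperature: 5, messages: [] });
console.log(isValid, validate.errors);
```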
Function Call Support
Natively supports OpenAI function calls on supported models.
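A function-call prompt follows the shape of OpenAI's chat completions API: you describe each function's parameters with JSON Schema, and the model can respond with a structured call. The sketch below hits the API directly with `fetch` to show that request shape; the `get_weather` function is a made-up example, not something OpenPipe ships.

```typescript
// Request body for a chat completion that exposes a function to the model.
// `get_weather` is a made-up example function, used only to show the shape.
const requestBody = {
  model: "gpt-4-0613",
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  functions: [
    {
      name: "get_weather",
      description: "Look up the current weather for a city",
      parameters: {
        type: "object",
        properties: {
          city: { type: "string", description: "City name, e.g. Paris" },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["city"],
      },
    },
  ],
};

// A function-call reply arrives as message.function_call, with the call's
// `arguments` encoded as a JSON string.
async function run(apiKey: string) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(requestBody),
  });
  const completion: any = await res.json();
  console.log(completion.choices?.[0]?.message);
}
```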
Supported Models
OpenPipe currently supports GPT-3.5 and GPT-4. Wider model support is planned.
Running Locally
- Install PostgreSQL.
- Install Node.js 20 (earlier versions will very likely work but aren't tested).
- Install pnpm: `npm i -g pnpm`
- Clone this repository: `git clone https://github.com/openpipe/openpipe`
- Install the dependencies: `cd openpipe && pnpm install`
- Create a `.env` file (`cp .env.example .env`) and enter your `OPENAI_API_KEY` (see the sample `.env` after this list).
- Update `DATABASE_URL` if necessary to point to your Postgres instance and run `pnpm prisma db push` to create the database.
- Create a GitHub OAuth App and update the `GITHUB_CLIENT_ID` and `GITHUB_CLIENT_SECRET` values. (Note: a PR to make auth optional when running locally would be a great contribution!)
- Start the app: `pnpm dev`
- Navigate to http://localhost:3000
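For reference, the environment variables mentioned above end up in `.env` along these lines; the values are placeholders and `.env.example` in the repo is the authoritative list of keys.

```
# Placeholder values: copy .env.example and fill in your own.
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/openpipe"
OPENAI_API_KEY="sk-..."
GITHUB_CLIENT_ID="<your GitHub OAuth client id>"
GITHUB_CLIENT_SECRET="<your GitHub OAuth client secret>"
```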