<p align="center">
  <a href="https://openpipe.ai">
    <img height="70" src="https://github.com/openpipe/openpipe/assets/41524992/70af25fb-1f90-42d9-8a20-3606e3b5aaba" alt="logo">
  </a>
</p>

<h1 align="center">OpenPipe</h1>

OpenPipe is a flexible playground for comparing and optimizing LLM prompts. It lets you quickly generate, test, and compare candidate prompts, and can automatically [translate](#-translate-between-model-apis) those prompts between models.

<p align="center">
  <i>Turn expensive prompts into cheap fine-tuned models.</i>
</p>

<img src="https://github.com/openpipe/openpipe/assets/41524992/66bb1843-cb72-4130-a369-eec2df3b8201" alt="demo">

<p align="center">
  <a href="/LICENSE"><img alt="License Apache-2.0" src="https://img.shields.io/github/license/openpipe/openpipe?style=flat-square"></a>
  <a href='http://makeapullrequest.com'><img alt='PRs Welcome' src='https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square'/></a>
  <a href="https://github.com/openpipe/openpipe/graphs/commit-activity"><img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/openpipe/openpipe?style=flat-square"/></a>
  <a href="https://github.com/openpipe/openpipe/issues"><img alt="GitHub closed issues" src="https://img.shields.io/github/issues-closed/openpipe/openpipe?style=flat-square"/></a>
</p>

<p align="center">
  <a href="https://app.openpipe.ai/">Hosted App</a> - <a href="#running-locally">Running Locally</a> - <a href="#sample-experiments">Experiments</a>
</p>

<br>

Use powerful but expensive LLMs to fine-tune smaller and cheaper models suited to your exact needs. Evaluate model and prompt combinations in the playground. Query your past requests and export optimized training data. Try it out at https://app.openpipe.ai or <a href="#running-locally">run it locally</a>.

<br>

## 🪛 Features

* <b>Experiment</b>
  * Bulk-test wide-reaching scenarios using code templating.
  * Seamlessly translate prompts across different model APIs.
  * Tap into autogenerated scenarios for fresh test perspectives.
* <b>Fine-Tune (Beta)</b>
  * Easy integration with OpenPipe's SDK in both Python and JS (see the sketch after this list).
  * Swiftly query logs using intuitive built-in filters.
  * Export data in multiple training formats, including Alpaca and ChatGPT, with deduplication (format examples below).
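
The SDK bullet above is easiest to picture with a short example. The sketch below assumes the Python SDK exposes an OpenAI-compatible drop-in client plus an `openpipe` tagging argument; both details are assumptions made for illustration, so check the SDK's own docs for the actual import path and parameters.

```python
# Minimal sketch, NOT the documented SDK API: the `from openpipe import openai`
# drop-in and the `openpipe={"tags": ...}` argument are assumptions made for
# illustration. The idea is that completions run as usual while each request is
# also captured so it can be filtered and exported as training data later.
import os

from openpipe import openai  # assumed OpenAI-compatible wrapper

openai.api_key = os.environ["OPENAI_API_KEY"]

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You classify the sentiment of tweets."},
        {"role": "user", "content": "Tweet: Just got my order and it's perfect!"},
    ],
    openpipe={"tags": {"prompt_id": "sentiment-v1"}},  # assumed tagging field
)

print(completion.choices[0].message.content)
```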
<img src="https://github.com/openpipe/openpipe/assets/41524992/eaa8b92d-4536-4f63-bbef-4b0b1a60f6b5" alt="fine-tune demo">
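
For a concrete sense of the export formats named above, here is roughly what a single training record looks like in the public Alpaca and ChatGPT-style chat schemas. These mirror the published formats rather than OpenPipe's exact export, whose field names may differ.

```python
import json

# One training example rendered in the two public formats mentioned above.
# These follow the published Alpaca and OpenAI chat fine-tuning schemas;
# OpenPipe's actual export may name or nest fields slightly differently.
alpaca_record = {
    "instruction": "Classify the sentiment of the tweet as POSITIVE, NEGATIVE, or NEUTRAL.",
    "input": "Just got my order and it's perfect, thank you!",
    "output": "POSITIVE",
}

chat_record = {
    "messages": [
        {"role": "system", "content": "You classify the sentiment of tweets."},
        {"role": "user", "content": "Just got my order and it's perfect, thank you!"},
        {"role": "assistant", "content": "POSITIVE"},
    ]
}

# Exports of this kind are typically JSONL: one JSON object per line.
print(json.dumps(alpaca_record))
print(json.dumps(chat_record))
```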

<!-- <img height="400px" src="https://github.com/openpipe/openpipe/assets/41524992/66bb1843-cb72-4130-a369-eec2df3b8201" alt="playground demo"> -->

You can use our hosted version of OpenPipe at https://openpipe.ai. You can also clone this repository and [run it locally](#running-locally).

## Sample Experiments

These are sample experiments users have created that show how OpenPipe works. Feel free to fork them and start experimenting yourself.

- [Twitter Sentiment Analysis](https://app.openpipe.ai/experiments/62c20a73-2012-4a64-973c-4b665ad46a57)
- [Reddit User Needs](https://app.openpipe.ai/experiments/22222222-2222-2222-2222-222222222222)

## Supported Models

#### OpenAI

- [GPT 3.5 Turbo](https://platform.openai.com/docs/guides/gpt/chat-completions-api)
- [GPT 3.5 Turbo 16k](https://platform.openai.com/docs/guides/gpt/chat-completions-api)
- [GPT 4](https://openai.com/gpt-4)

#### Llama2

- [7b chat](https://replicate.com/a16z-infra/llama7b-v2-chat)
- [13b chat](https://replicate.com/a16z-infra/llama13b-v2-chat)
- [70b chat](https://replicate.com/replicate/llama70b-v2-chat)

#### Llama2 Fine-Tunes

- [Open-Orca/OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B)
- [Open-Orca/OpenOrca-Platypus2-13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B)
- [NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)
- [jondurbin/airoboros-l2-13b-gpt4-2.0](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-2.0)
- [lmsys/vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5)
- [Gryphe/MythoMax-L2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b)
- [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b)

#### Anthropic

- [Claude 1 Instant](https://www.anthropic.com/index/introducing-claude)
- [Claude 2](https://www.anthropic.com/index/claude-2)

## Playground Features

### 🔍 Visualize Responses

Inspect prompt completions side-by-side.

### 🧪 Bulk-Test

OpenPipe lets you _template_ a prompt. Use the templating feature to run the prompts you're testing against many potential inputs for broad coverage of your problem space.
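
As a rough illustration, a templated prompt plus a list of scenarios might look like the sketch below. The `{{variable}}` placeholder syntax is an assumption made for the example, not necessarily OpenPipe's exact templating syntax.

```python
# Illustrative only: a prompt template with scenario variables and the scenario
# values to bulk-test it against. The {{variable}} placeholder syntax is an
# assumption for this sketch, not a statement about OpenPipe's own syntax.
prompt_template = [
    {"role": "system", "content": "You classify the sentiment of tweets."},
    {"role": "user", "content": "Tweet: {{tweet}}\nAnswer POSITIVE, NEGATIVE, or NEUTRAL."},
]

scenarios = [
    {"tweet": "Just got my order and it's perfect, thank you!"},
    {"tweet": "Two hours on hold and still no answer."},
    {"tweet": "The package arrived today."},
]

def render(template, scenario):
    """Substitute each {{variable}} in the template with the scenario's value."""
    rendered = []
    for message in template:
        content = message["content"]
        for key, value in scenario.items():
            content = content.replace("{{" + key + "}}", value)
        rendered.append({**message, "content": content})
    return rendered

# Each scenario yields one concrete prompt to run against every model under test.
for scenario in scenarios:
    print(render(prompt_template, scenario))
```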

### 📟 Translate between Model APIs

Write your prompt in one format and automatically convert it to work with any other model.
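
To make the idea concrete, here is a hedged sketch of the kind of conversion involved: the same chat-style prompt rewritten into the plain-text `Human:`/`Assistant:` format Anthropic's 2023-era completion API expected. It is a simplified illustration, not OpenPipe's actual translation logic.

```python
# Simplified illustration of translating a prompt between model APIs; OpenPipe
# performs this kind of conversion automatically.
openai_messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this tweet in one sentence."},
]

def to_anthropic_prompt(messages):
    """Flatten chat messages into Anthropic's 2023-era Human:/Assistant: text format."""
    parts = []
    for message in messages:
        if message["role"] == "system":
            parts.append(message["content"])
        elif message["role"] == "user":
            parts.append("Human: " + message["content"])
        else:
            parts.append("Assistant: " + message["content"])
    return "\n\n" + "\n\n".join(parts) + "\n\nAssistant:"

print(to_anthropic_prompt(openai_messages))
```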

<!-- <img width="480" alt="Screenshot 2023-08-01 at 11 55 38 PM" src="https://github.com/OpenPipe/OpenPipe/assets/41524992/1e19ccf2-96b6-4e93-a3a5-1449710d1b5b" alt="translate between models"> -->

### 🛠️ Refine Your Prompts Automatically

Use a growing database of best-practice refinements to improve your prompts automatically.

<!-- <img width="480" alt="Screenshot 2023-08-01 at 11 55 38 PM" src="https://github.com/OpenPipe/OpenPipe/assets/41524992/87a27fe7-daef-445c-a5e2-1c82b23f9f99" alt="add function call"> -->

### 🪄 Auto-generate Test Scenarios

OpenPipe includes a tool to generate new test scenarios based on your existing prompts and scenarios. Just click "Autogenerate Scenario" to try it out!

<!-- <img width="600" src="https://github.com/openpipe/openpipe/assets/41524992/219a844e-3f4e-4f6b-8066-41348b42977b" alt="auto-generate"> -->

## Running Locally