Compare commits


10 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Kyle Corbitt | ead981b900 | Disable custom models for the moment. We're running into GPU constraints and need to turn off custom models until we find a better provider or can hot-swap them. | 2023-08-25 16:01:51 -07:00 |
| Kyle Corbitt | e0d0cc0df1 | Merge pull request #193 from OpenPipe/examples (classify-recipes example) | 2023-08-25 12:46:01 -07:00 |
| arcticfly | 7df1c59bd3 | Update README.md | 2023-08-24 22:23:40 -07:00 |
| arcticfly | c83863f468 | Update README.md | 2023-08-24 20:59:46 -07:00 |
| arcticfly | 33ca98b267 | Enlarge fine-tune gif | 2023-08-24 14:14:38 -07:00 |
| arcticfly | 39c943f2ec | Change layout of README.md | 2023-08-24 14:13:01 -07:00 |
| arcticfly | 2aa4ac1594 | Update opening gif in README.md | 2023-08-24 12:46:25 -07:00 |
| arcticfly | 42ade01f22 | Update README.md | 2023-08-24 11:14:25 -07:00 |
| David Corbitt | 59b79049c1 | Move license to top level | 2023-08-24 10:41:23 -07:00 |
| arcticfly | 0d7433cb7e | Update README.md (include more models) | 2023-08-24 00:13:24 -07:00 |

4 changed files with 79 additions and 39 deletions

README.md

@@ -1,14 +1,52 @@
<p align="center">
<a href="https://openpipe.ai">
<img height="70" src="https://github.com/openpipe/openpipe/assets/41524992/70af25fb-1f90-42d9-8a20-3606e3b5aaba" alt="logo">
</a>
</p>
<h1 align="center">
OpenPipe
</h1>
<p align="center">
<i>Turn expensive prompts into cheap fine-tuned models.</i>
</p>
<img src="https://github.com/openpipe/openpipe/assets/41524992/66bb1843-cb72-4130-a369-eec2df3b8201" alt="demo">
<p align="center">
<a href="/LICENSE"><img alt="License Apache-2.0" src="https://img.shields.io/github/license/openpipe/openpipe?style=flat-square"></a>
<a href='http://makeapullrequest.com'><img alt='PRs Welcome' src='https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square'/></a>
<a href="https://github.com/openpipe/openpipe/graphs/commit-activity"><img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/openpipe/openpipe?style=flat-square"/></a>
<a href="https://github.com/openpipe/openpipe/issues"><img alt="GitHub closed issues" src="https://img.shields.io/github/issues-closed/openpipe/openpipe?style=flat-square"/></a>
</p>
<p align="center">
<a href="https://app.openpipe.ai/">Hosted App</a> - <a href="#running-locally">Running Locally</a> - <a href="#sample-experiments">Experiments</a>
</p>
<br>
Use powerful but expensive LLMs to fine-tune smaller and cheaper models suited to your exact needs. Evaluate model and prompt combinations in the playground. Query your past requests and export optimized training data. Try it out at https://app.openpipe.ai or <a href="#running-locally">run it locally</a>.
<br>
## 🪛 Features
* <b>Experiment</b>
  * Bulk-test a wide range of scenarios using code templating.
  * Seamlessly translate prompts across different model APIs.
  * Tap into autogenerated scenarios for fresh test perspectives.
* <b>Fine-Tune (Beta)</b>
  * Easy integration with OpenPipe's SDK in both Python and JS; a logging sketch follows this list.
  * Swiftly query logs using intuitive built-in filters.
  * Export data in multiple training formats, including Alpaca and ChatGPT, with deduplication (see the export sketch below).
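As a rough sketch of the request-logging pattern the SDK automates, the snippet below wraps an OpenAI chat call (assuming the v4 `openai` client) and records the request/response pair for later export. The wrapper and `reportToOpenPipe` are illustrative stand-ins, not the SDK's real surface.

```ts
import OpenAI from "openai";

const openai = new OpenAI();

// Illustrative stand-in for the SDK's reporting step.
async function reportToOpenPipe(entry: { request: unknown; response: unknown }) {
  console.log(JSON.stringify(entry));
}

// Wrap a chat completion so every call is captured as potential
// fine-tuning data.
async function loggedCompletion(
  params: OpenAI.Chat.ChatCompletionCreateParamsNonStreaming,
) {
  const response = await openai.chat.completions.create(params);
  await reportToOpenPipe({ request: params, response });
  return response;
}
```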
<img src="https://github.com/openpipe/openpipe/assets/41524992/eaa8b92d-4536-4f63-bbef-4b0b1a60f6b5" alt="fine-tune demo">
<!-- <img height="400px" src="https://github.com/openpipe/openpipe/assets/41524992/66bb1843-cb72-4130-a369-eec2df3b8201" alt="playground demo"> -->
## Sample Experiments
These are sample experiments users have created that show how OpenPipe works. Feel free to fork them and start experimenting yourself.

- [Twitter Sentiment Analysis](https://app.openpipe.ai/experiments/62c20a73-2012-4a64-973c-4b665ad46a57)
- [Reddit User Needs](https://app.openpipe.ai/experiments/22222222-2222-2222-2222-222222222222)
@@ -17,37 +55,25 @@ These are simple experiments users have created that show how OpenPipe works.
## Features in Detail
### 🔍 Visualize Responses
Inspect prompt completions side-by-side.
### 🧪 Bulk-Test
OpenPipe lets you _template_ a prompt. Use the templating feature to run the prompts you're testing against many potential inputs for broad coverage of your problem space.
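For illustration, here is the idea behind templated prompts: define the prompt once, then render it against each scenario. The `{{variable}}` syntax is assumed for this sketch.

```ts
// One prompt template, many scenarios.
const template = "Classify the sentiment of this tweet: {{tweet}}";

const scenarios = [
  { tweet: "I love this new phone!" },
  { tweet: "Worst customer service ever." },
];

// Substitute scenario values into the template.
function render(tpl: string, vars: Record<string, string>): string {
  return tpl.replace(/\{\{(\w+)\}\}/g, (_, name: string) => vars[name] ?? "");
}

const prompts = scenarios.map((s) => render(template, s));
console.log(prompts);
```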
### 📟 Translate between Model APIs
Write your prompt in one format and automatically convert it to work with any other model.
<!-- <img width="480" alt="Screenshot 2023-08-01 at 11 55 38 PM" src="https://github.com/OpenPipe/OpenPipe/assets/41524992/1e19ccf2-96b6-4e93-a3a5-1449710d1b5b" alt="translate between models"> -->
### 🛠️ Refine Your Prompts Automatically
Use a growing database of best-practice refinements to improve your prompts automatically.
<!-- <img width="480" alt="Screenshot 2023-08-01 at 11 55 38 PM" src="https://github.com/OpenPipe/OpenPipe/assets/41524992/87a27fe7-daef-445c-a5e2-1c82b23f9f99" alt="add function call"> -->
### 🪄 Auto-generate Test Scenarios
OpenPipe includes a tool to generate new test scenarios based on your existing prompts and scenarios. Just click "Autogenerate Scenario" to try it out!
<!-- <img width="600" src="https://github.com/openpipe/openpipe/assets/41524992/219a844e-3f4e-4f6b-8066-41348b42977b" alt="auto-generate"> -->
## Supported Models

#### OpenAI
- [GPT 3.5 Turbo](https://platform.openai.com/docs/guides/gpt/chat-completions-api)
- [GPT 3.5 Turbo 16k](https://platform.openai.com/docs/guides/gpt/chat-completions-api)
- [GPT 4](https://openai.com/gpt-4)
#### Llama2
- [7b chat](https://replicate.com/a16z-infra/llama7b-v2-chat)
- [13b chat](https://replicate.com/a16z-infra/llama13b-v2-chat)
- [70b chat](https://replicate.com/replicate/llama70b-v2-chat)
#### Llama2 Fine-Tunes
- [Open-Orca/OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B)
- [Open-Orca/OpenOrca-Platypus2-13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B)
- [NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)
- [jondurbin/airoboros-l2-13b-gpt4-2.0](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-2.0)
- [lmsys/vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5)
- [Gryphe/MythoMax-L2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b)
- [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b)
#### Anthropic
- [Claude 1 Instant](https://www.anthropic.com/index/introducing-claude)
- [Claude 2](https://www.anthropic.com/index/claude-2)
## Running Locally

OutputCell component

@@ -147,9 +147,10 @@ export default function OutputCell({
<ResponseLog
time={response.receivedAt}
title="Response received from API"
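// Build the log message only from the pieces that are present, so an
// absent status code or error doesn't leave a stray label behind.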
message={[
response.statusCode ? `Status: ${response.statusCode}\n` : "",
response.errorMessage ?? "",
].join("")}
/>
)}
</Fragment>

Model provider (getCompletion)

@@ -17,10 +17,23 @@ const modelEndpoints: Record<OpenpipeChatInput["model"], string> = {
"NousResearch/Nous-Hermes-llama-2-7b": "https://ua1bpc6kv3dgge-8000.proxy.runpod.net/v1",
};
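// Feature flag for the self-hosted Llama 2 fine-tunes; left off while
// GPU capacity is constrained (see commit ead981b900).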
const CUSTOM_MODELS_ENABLED = false;
export async function getCompletion(
input: OpenpipeChatInput,
onStream: ((partialOutput: OpenpipeChatOutput) => void) | null,
): Promise<CompletionResponse<OpenpipeChatOutput>> {
// Temporarily disable these models because of GPU constraints
if (!CUSTOM_MODELS_ENABLED) {
return {
type: "error",
message:
"We've disabled this model temporarily because of GPU capacity constraints. Check back later.",
autoRetry: false,
};
}
const { model, messages, ...rest } = input;
const templatedPrompt = frontendModelProvider.models[model].templatePrompt?.(messages);