Add docs (#207)
* Add docs folder with introduction, overview, and getting started
* Add feature pages
* Remove some of Who We Are
@@ -3,6 +3,7 @@

This client allows you to automatically report your OpenAI calls to [OpenPipe](https://openpipe.ai/). OpenPipe

## Installation

`pip install openpipe`

## Usage
@@ -15,7 +16,7 @@ This client allows you to automatically report your OpenAI calls to [OpenPipe](http
from openpipe import openai, configure_openpipe
import os

-# Set the OpenPipe API key you got in step (3) above.
+# Set the OpenPipe API key you got in step (2) above.
# If you have the `OPENPIPE_API_KEY` environment variable set we'll read from it by default.
configure_openpipe(api_key=os.getenv("OPENPIPE_API_KEY"))
@@ -37,4 +38,4 @@ completion = openai.ChatCompletion.create(
    messages=[{"role": "system", "content": "count to 10"}],
    openpipe={"tags": {"prompt_id": "counting"}},
)
```

docs/faq/how-reporting-works.mdx (new file)
---
title: "How reporting works"
description: "Our SDK wraps calls and forwards requests"
---

### Does reporting calls add latency to streamed requests?

Streamed requests won't have any added latency. The SDK forwards each streamed token as it's received from the server while simultaneously collecting the full response, which it reports to your OpenPipe instance once the entire response has been received.
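
For illustration, here's a minimal sketch of streamed usage, assuming the wrapped client preserves the standard OpenAI streaming interface: each chunk prints as soon as it arrives, and the assembled response is reported only after the stream ends.

```python
from openpipe import openai, configure_openpipe
import os

configure_openpipe(api_key=os.getenv("OPENPIPE_API_KEY"))
openai.api_key = os.getenv("OPENAI_API_KEY")

# stream=True yields chunks as the server produces them; the SDK
# passes each chunk through immediately and reports the assembled
# response to OpenPipe only once the stream is finished.
for chunk in openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "count to 10"}],
    stream=True,
):
    delta = chunk["choices"][0]["delta"]
    print(delta.get("content", ""), end="", flush=True)
```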

#### Your OpenAI key never leaves your machine.

Calls to OpenAI are carried out by our SDK **on your machine**, meaning that your API key stays secure, and you'll continue getting uninterrupted inference even if your OpenPipe instance goes down.

## <br />

### Want to dig deeper? Take a peek at our open-source code.

We benefit from a growing community of developers and customers who are dedicated to improving the OpenPipe experience. Our [open source repo](https://github.com/openpipe/openpipe) is an opportunity for developers to confirm the quality of our offering and to make improvements when they can.

docs/favicon.webp (new binary file, 490 B)

docs/features/experiments.mdx (new file)
---
title: "Experiments"
description: "
Template multiple scenarios into combinations of prompts and models to compare their output. Use flexible regex and GPT-4 evaluations to assess completion quality.
Quickly iterate and spot model shortcomings before deployment."
---

<Frame></Frame>

docs/features/exporting-data.mdx (new file)
---
title: "Export Data - Beta"
sidebarTitle: "Export Data"
description: "
Export your past requests as a JSONL file in an Alpaca or OpenAI fine-tuning format, or in their raw form."
---

<Frame></Frame>
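
For reference, here's a hedged sketch of what one exported record might look like in the OpenAI chat fine-tuning JSONL format; the exact fields OpenPipe emits, and the raw-export shape, may differ.

```python
import json

# One JSONL record in the OpenAI chat fine-tuning format. The Alpaca
# format instead uses instruction/input/output fields; the field names
# shown here are the standard OpenAI ones, not confirmed OpenPipe output.
record = {
    "messages": [
        {"role": "system", "content": "count to 10"},
        {"role": "assistant", "content": "1, 2, 3, 4, 5, 6, 7, 8, 9, 10"},
    ]
}
with open("export.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```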

docs/features/fine-tuning.mdx (new file)
---
title: "Fine Tuning - Beta"
sidebarTitle: "Fine Tuning"
description: "
Fine-tune a model on specific logs. Filter by prompt ID and exclude requests with undesirable outputs."
---

<Frame></Frame>

docs/features/log-filters.mdx (new file)
---
title: "Log Filters"
description: "
Search and filter your past LLM requests to inspect your responses and build a training dataset."
---

<Frame></Frame>

docs/getting-started/openpipe-sdk.mdx (new file)
---
title: "Installing the SDK"
---

Use the OpenPipe SDK as a drop-in replacement for the generic OpenAI package. We currently support logging OpenAI calls; support for more LLM providers will be added soon.

<Tabs>
<Tab title="Python">

Find the SDK at https://pypi.org/project/openpipe/

## Simple Integration

Add `OPENPIPE_API_KEY` to your environment variables.

```bash
export OPENPIPE_API_KEY=opk-<your-api-key>
# Or you can set it in your code, as shown in the example below
```

Replace this line

```python
import openai
```

with this one

```python
from openpipe import openai
```

## Adding Searchable Tags

OpenPipe has a concept of "tagging." This is very useful for grouping a certain set of completions together. When you're using a dataset for fine-tuning, you can select all the prompts that match a certain set of tags. Here's how you can use the tagging feature:

```python
from openpipe import openai, configure_openpipe
import os

# If you have the `OPENPIPE_API_KEY` environment variable set,
# we'll read from it by default.
configure_openpipe(api_key=os.getenv("OPENPIPE_API_KEY"))

# Configure OpenAI the same way you would normally
openai.api_key = os.getenv("OPENAI_API_KEY")

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "system", "content": "count to 10"}],
    openpipe={"tags": {"prompt_id": "counting", "any_key": "any_value"}},
)
```

</Tab>
<Tab title="NodeJS">

Find the SDK at https://www.npmjs.com/package/openpipe

## Simple Integration

Add `OPENPIPE_API_KEY` to your environment variables.

```bash
export OPENPIPE_API_KEY=opk-<your-api-key>
# Or you can set it in your code, as shown in the example below
```

Replace this line

```typescript
import OpenAI from "openai";
```

with this one

```typescript
import OpenAI from "openpipe/openai";
```

## Adding Searchable Tags

OpenPipe has a concept of "tagging." This is very useful for grouping a certain set of completions together. When you're using a dataset for fine-tuning, you can select all the prompts that match a certain set of tags. Here's how you can use the tagging feature:

```typescript
// Fully compatible with original OpenAI initialization
const openai = new OpenAI({
  apiKey: "my api key", // defaults to process.env["OPENAI_API_KEY"]
  // openpipe key is optional
  openpipe: {
    apiKey: "my api key", // defaults to process.env["OPENPIPE_API_KEY"]
    baseUrl: "my url", // defaults to process.env["OPENPIPE_BASE_URL"] or https://app.openpipe.ai/api/v1 if not set
  },
});

const completion = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Count to 10" }],
  model: "gpt-3.5-turbo",
  // optional
  openpipe: {
    // Add custom searchable tags
    tags: {
      prompt_id: "counting",
      any_key: "any_value",
    },
  },
});
```

</Tab>
</Tabs>

docs/getting-started/quick-start.mdx (new file)
---
title: "Quick Start"
description: "Get started with OpenPipe in a few quick steps."
---

## Step 1: Create your OpenPipe Account

If you don't already have one, create an account with OpenPipe at https://app.openpipe.ai/. You can sign up with GitHub, so you don't need to remember an extra password.

## Step 2: Find your Project API key

In order to capture your calls and fine-tune a model on them, we need an API key to authenticate you and determine which project to store your logs under.

<Note>
  When you created your account, a project was automatically configured for you as well. Find its
  API key at https://app.openpipe.ai/project/settings.
</Note>
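
Once you've copied the key, here's a minimal sketch of handing it to the SDK, using the `configure_openpipe` helper covered on the next page; setting the `OPENPIPE_API_KEY` environment variable works as well.

```python
import os

from openpipe import configure_openpipe

# Pass the project key from step 2; if OPENPIPE_API_KEY is set in the
# environment, the SDK reads it by default and this call is optional.
configure_openpipe(api_key=os.getenv("OPENPIPE_API_KEY"))
```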

## Step 3: Integrate the OpenPipe SDK

You're done with the hard part! Learn how to integrate the OpenPipe SDK on the next page.

<CardGroup cols={2}>
  <Card
    title="OpenPipe SDK"
    icon={
      <svg role="img" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg">
        <title>OpenPipe</title>
        <path d="M22.2819 9.8211a5.9847 5.9847 0 0 0-.5157-4.9108 6.0462 6.0462 0 0 0-6.5098-2.9A6.0651 6.0651 0 0 0 4.9807 4.1818a5.9847 5.9847 0 0 0-3.9977 2.9 6.0462 6.0462 0 0 0 .7427 7.0966 5.98 5.98 0 0 0 .511 4.9107 6.051 6.051 0 0 0 6.5146 2.9001A5.9847 5.9847 0 0 0 13.2599 24a6.0557 6.0557 0 0 0 5.7718-4.2058 5.9894 5.9894 0 0 0 3.9977-2.9001 6.0557 6.0557 0 0 0-.7475-7.0729zm-9.022 12.6081a4.4755 4.4755 0 0 1-2.8764-1.0408l.1419-.0804 4.7783-2.7582a.7948.7948 0 0 0 .3927-.6813v-6.7369l2.02 1.1686a.071.071 0 0 1 .038.052v5.5826a4.504 4.504 0 0 1-4.4945 4.4944zm-9.6607-4.1254a4.4708 4.4708 0 0 1-.5346-3.0137l.142.0852 4.783 2.7582a.7712.7712 0 0 0 .7806 0l5.8428-3.3685v2.3324a.0804.0804 0 0 1-.0332.0615L9.74 19.9502a4.4992 4.4992 0 0 1-6.1408-1.6464zM2.3408 7.8956a4.485 4.485 0 0 1 2.3655-1.9728V11.6a.7664.7664 0 0 0 .3879.6765l5.8144 3.3543-2.0201 1.1685a.0757.0757 0 0 1-.071 0l-4.8303-2.7865A4.504 4.504 0 0 1 2.3408 7.872zm16.5963 3.8558L13.1038 8.364 15.1192 7.2a.0757.0757 0 0 1 .071 0l4.8303 2.7913a4.4944 4.4944 0 0 1-.6765 8.1042v-5.6772a.79.79 0 0 0-.407-.667zm2.0107-3.0231l-.142-.0852-4.7735-2.7818a.7759.7759 0 0 0-.7854 0L9.409 9.2297V6.8974a.0662.0662 0 0 1 .0284-.0615l4.8303-2.7866a4.4992 4.4992 0 0 1 6.6802 4.66zM8.3065 12.863l-2.02-1.1638a.0804.0804 0 0 1-.038-.0567V6.0742a4.4992 4.4992 0 0 1 7.3757-3.4537l-.142.0805L8.704 5.459a.7948.7948 0 0 0-.3927.6813zm1.0976-2.3654l2.602-1.4998 2.6069 1.4998v2.9994l-2.5974 1.4997-2.6067-1.4997Z" />
      </svg>
    }
    iconType="duotone"
    href="/getting-started/openpipe-sdk"
  ></Card>
</CardGroup>

docs/images/features/experiments.png (new binary file, 416 KiB)
docs/images/features/exporting-data.png (new binary file, 414 KiB)
docs/images/features/fine-tuning.png (new binary file, 404 KiB)
docs/images/features/log-filters.png (new binary file, 321 KiB)
docs/images/intro/request-logs.png (new binary file, 390 KiB)

docs/introduction.mdx (new file)
---
title: "OpenPipe Documentation"
sidebarTitle: "Introduction"
description: "
Product-focused teams use OpenPipe's seamless fine-tuning and monitoring services to decrease the cost and latency of their LLM operations.
You can use OpenPipe to collect and analyze LLM logs, create fine-tuned models, and compare output from multiple models given the same input."
---

<Frame></Frame>

<CardGroup cols={2}>
  <Card title="Get Started" icon="code">
    Quickly integrate the OpenPipe SDK into your application and start collecting data.
  </Card>
  <Card title="Features" icon="lightbulb">
    View the platform features OpenPipe provides and learn how to use them.
  </Card>
</CardGroup>

docs/logo/dark.svg (new file, 8.3 KiB)
docs/logo/light.svg (new file, 8.3 KiB)

docs/mint.json (new file)
{
  "name": "OpenPipe",
  "logo": {
    "light": "/logo/light.svg",
    "dark": "/logo/dark.svg"
  },
  "favicon": "/favicon.webp",
  "colors": {
    "primary": "#FF5733",
    "light": "#FF5733",
    "dark": "#FF5733"
  },
  "modeToggle": {
    "default": "light"
  },
  "topbarCtaButton": {
    "name": "Sign In",
    "url": "https://app.openpipe.ai"
  },
  "anchors": [
    {
      "name": "GitHub",
      "icon": "github",
      "url": "https://github.com/openpipe/openpipe"
    }
  ],
  "feedback": {
    "suggestEdit": true,
    "raiseIssue": true
  },
  "navigation": [
    {
      "group": "Welcome",
      "pages": ["introduction", "overview"]
    },
    {
      "group": "Getting Started",
      "pages": ["getting-started/quick-start", "getting-started/openpipe-sdk"]
    },
    {
      "group": "Features",
      "pages": [
        "features/log-filters",
        "features/exporting-data",
        "features/fine-tuning",
        "features/experiments"
      ]
    },
    {
      "group": "FAQ",
      "pages": ["faq/how-reporting-works"]
    }
  ],
  "topbarLinks": [
    {
      "name": "GitHub",
      "url": "https://github.com/OpenPipe/OpenPipe"
    }
  ],
  "footerSocials": {
    "twitter": "https://twitter.com/OpenPipeAI",
    "linkedin": "https://www.linkedin.com/company/openpipe/about/",
    "github": "https://github.com/OpenPipe/OpenPipe"
  }
}

docs/overview.mdx (new file)
---
title: "Overview"
description: "OpenPipe is a streamlined platform designed to help product-focused teams train specialized LLM models as replacements for slow and expensive prompts."
---

## Who We Are

We're a team of full-stack engineers and machine learning researchers working to streamline the process of integrating fine-tuned models into any application. Our goal is to make the fine-tuning process accessible to everyone.

## What We Offer

Here are a few of the features we provide:
- **Data Capture**: OpenPipe automatically captures every request and response sent through our drop-in replacement SDK and stores it for your future use.

- **Monitoring**: OpenPipe provides intuitive tools to view the frequency and cost of your LLM requests, including a dedicated view for requests that returned error status codes.

- **Searchable Logs**: We enable you to search your past requests, and provide a simple protocol for tagging them by prompt ID for easy filtering.

- **Fine-Tuning**: With all your LLM requests and responses in one place, it's easy to select the data you want to fine-tune on and kick off a job.

- **Model Hosting**: After we've trained your model, OpenPipe will automatically begin hosting it. Accessing your model will require an API key from your project.

- **Unified SDK**: Switching requests from your previous LLM provider to your new model is as simple as changing the model name (see the sketch after this list). All our models implement the OpenAI inference format, so you won't have to change how you parse the response.

- **Data Export**: OpenPipe allows you to download your request logs or the fine-tuned models you've trained at any time for easy self-hosting.

- **Experimentation**: The fine-tunes you've created on OpenPipe are immediately available for you to run inference on in our experimentation playground.

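As a minimal sketch of the unified-SDK point above: swapping in a fine-tuned model is just a model-name change, since the models implement the OpenAI inference format. The model ID below is hypothetical, shown only to illustrate the swap.

```python
from openpipe import openai

# Before: a general-purpose model.
# completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", ...)

# After: your fine-tuned model. Only the model name changes, and the
# response is parsed exactly as before. "your-fine-tuned-model" is a
# hypothetical ID standing in for whatever your project assigns.
completion = openai.ChatCompletion.create(
    model="your-fine-tuned-model",
    messages=[{"role": "user", "content": "count to 10"}],
)
print(completion.choices[0].message.content)
```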

Join us in our mission to make the benefits of hyper-efficient models accessible to everyone. Read through our documentation, and don't hesitate to throw a question our way.

Welcome to the OpenPipe community!