mirror of
https://github.com/AgentOps-AI/tokencost.git
synced 2024-06-22 04:30:40 +03:00
add expected print outputs to sample code
This commit is contained in:
78
README.md
78
README.md
@@ -4,6 +4,23 @@
|
||||
|
||||
TokenCost is a specialized tool designed for calculating the token count and associated U.S. dollar cost of strings and messages used in Large Language Models (LLMs). This utility is particularly useful for developers and researchers working with language models, enabling them to estimate the computational resources required for processing various inputs and their returned outputs.
|
||||
|
||||
```python
|
||||
from tokencost import calculate_cost
|
||||
|
||||
prompt = "Sample input"
|
||||
response = "Sample response text"
|
||||
model = "gpt-3.5-turbo"
|
||||
|
||||
|
||||
cost = calculate_cost(string_prompt, response, model)
|
||||
print(cost) # in units of TPU, which is 1/10,000,000th of a USD.
|
||||
|
||||
# Prints the below:
|
||||
# prompt_cost=15
|
||||
# completion_cost=20
|
||||
# 120
|
||||
```
|
||||
|
||||
## Features
|
||||
|
||||
- **Token Counting**: Accurately counts the number of tokens in a given string or message.
|
||||
@@ -15,8 +32,8 @@ TokenCost is a specialized tool designed for calculating the token count and ass
|
||||
|
||||
Tokencost can be installed either via PyPI or GitHub.
|
||||
|
||||
|
||||
#### Recommended: with [PyPI](https://pypi.org/project/tokencost/) (Python package):
|
||||
|
||||
```bash
|
||||
pip install tokencost
|
||||
```
|
||||
@@ -36,30 +53,40 @@ To use TokenCost, follow these steps:
|
||||
1. Import the module:
|
||||
|
||||
- Recommended: If you want to call the functions as `function_name` directly:
|
||||
|
||||
```python
|
||||
from tokencost import count_message_tokens, count_string_tokens, calculate_cost
|
||||
```
|
||||
|
||||
|
||||
- OR if you want to call the functions as `tokencost.function_name`:
|
||||
|
||||
```python
|
||||
import tokencost
|
||||
```
|
||||
|
||||
2. Calculate tokens and cost (using `from tokencost import *`):
|
||||
2. Calculate tokens and cost (using `from tokencost import count_message_tokens, count_string_tokens, calculate_cost`):
|
||||
|
||||
```python
|
||||
|
||||
|
||||
# First example using string input.
|
||||
string_prompt = "Your sample text here"
|
||||
response = "Sample response text"
|
||||
model= "gpt-3.5-turbo"
|
||||
|
||||
string_cost = calculate_cost(string_prompt, response, model)
|
||||
cost = calculate_cost(string_prompt, response, model)
|
||||
|
||||
prompt_string_token_count = count_string_tokens(string_prompt, model)
|
||||
|
||||
print(f"Prompt Token Count: {prompt_string_token_count}, Completion Token Count:{completion_string_token_count}, Cost: ${string_cost/USD_PER_TPU} ({string_cost/CENTS_PER_TPU} cents)")
|
||||
print(f"{prompt_string_token_count=}, {completion_string_token_count=}")
|
||||
print(f"Cost: ${string_cost/USD_PER_TPU} ({cost/CENTS_PER_TPU} cents)")
|
||||
|
||||
# Prints the below:
|
||||
# prompt_cost=15
|
||||
# completion_cost=20
|
||||
# prompt_string_token_count=4, completion_string_token_count=3
|
||||
# Cost: $1.2e-05 (0.0012 cents)
|
||||
|
||||
# Second example using list of message objects instead of string input.
|
||||
messages =[
|
||||
{
|
||||
"role": "user",
|
||||
@@ -77,40 +104,33 @@ messages =[
|
||||
response = "Sample response text"
|
||||
model= "gpt-3.5-turbo"
|
||||
|
||||
message_cost = calculate_cost(messages, response, model)
|
||||
cost = calculate_cost(messages, response, model)
|
||||
|
||||
prompt_message_token_count = count_message_tokens(messages, model)
|
||||
completion_string_token_count = count_string_tokens(response, model)
|
||||
|
||||
print(f"Prompt Token Count: {prompt_message_token_count}, Completion Token Count: {completion_string_token_count}, Cost: ${message_cost/USD_PER_TPU} ({message_cost/CENTS_PER_TPU} cents)")
|
||||
print(f"{prompt_message_token_count=}, {completion_string_token_count=}")
|
||||
print(f"Cost: ${message_cost/USD_PER_TPU} ({cost/CENTS_PER_TPU} cents)")
|
||||
|
||||
# Prints the below:
|
||||
# Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0613.
|
||||
# prompt_cost=15
|
||||
# completion_cost=20
|
||||
# Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0613.
|
||||
# prompt_message_token_count=34, completion_string_token_count=3
|
||||
# Cost: $5.7e-05 (0.0057 cents)
|
||||
```
|
||||
|
||||
This is what it should look like when you use iPython:
|
||||
```bash
|
||||
In [1]: from tokencost import *
|
||||
|
||||
In [2]: prompt = "Your sample text here"
|
||||
...: response = "Sample response text"
|
||||
...: model= "gpt-3.5-turbo"
|
||||
...: prompt_token_count = count_string_tokens(prompt, model)
|
||||
...: completion_token_count =count_string_tokens(response, model)
|
||||
...: cost = calculate_cost(prompt, response, model)
|
||||
...:
|
||||
...:
|
||||
...: print(f"Prompt Token Count: {prompt_token_count}, Completion Token Count: {c
|
||||
...: ompletion_token_count}, Cost: ${cost/USD_PER_TPU} ({cost/CENTS_PER_TPU} cent
|
||||
...: s)")
|
||||
Prompt Token Count: 4, Completion Token Count: 3, Cost: $1.2e-05 (0.0012 cents)
|
||||
```
|
||||
|
||||
|
||||
## Running tests
|
||||
0. Install ```pytest``` if you don't have it already
|
||||
|
||||
0. Install `pytest` if you don't have it already
|
||||
|
||||
```python
|
||||
pip install pytest
|
||||
```
|
||||
|
||||
1. Run the `tests/` folder while in the parent directory
|
||||
1. Run the `tests/` folder while in the parent directory
|
||||
|
||||
```python
|
||||
pytest tests
|
||||
```
|
||||
|
||||
Reference in New Issue
Block a user