
Python Tokenize

This repo contains a Jupyter notebook to calculate the number of tokens in text, files, and folders using tokenizers from Hugging Face and OpenAI.

Installation

Install the dependencies with uv:

uv sync

Usage

Select the model to use for tokenization in the Jupyter notebook. You can choose either a model from the Hugging Face model hub or an OpenAI model. Set the model's name in the model_name variable; the sketch after this list illustrates how the two backends differ.

  • For Hugging Face models, use the user/model name from the Hugging Face model hub, e.g. mixedbread-ai/mxbai-embed-large-v1.
  • For OpenAI models, use the model name from the OpenAI API, e.g. gpt-4o. See the OpenAI documentation for the list of available models.
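
How the model name is resolved depends on the backend. A minimal sketch of the distinction, assuming the notebook uses the transformers and tiktoken libraries (illustrative, not necessarily the notebook's exact code):

from transformers import AutoTokenizer
import tiktoken

model_name = 'gpt-4o'

if '/' in model_name:
    # Hugging Face hub IDs contain a slash, e.g. mixedbread-ai/mxbai-embed-large-v1
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    encode = tokenizer.encode
else:
    # OpenAI model names don't, e.g. gpt-4o
    encoding = tiktoken.encoding_for_model(model_name)
    encode = encoding.encode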

Calculate tokens in a text

Set the text variable to your text. Run all cells.

text = "Your text here"

Calculate tokens in a file

Set the file_path variable to the path of your file. Run all cells.

file_path = 'path/to/your/file.txt'
num_tokens = get_num_tokens_from_file(file_path)
print(f'{file_path}: {num_tokens}')
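
If you're curious what the helper does, a plausible sketch (the notebook's actual get_num_tokens_from_file may differ) is to read the file and count the tokens in its contents:

import tiktoken

encoding = tiktoken.encoding_for_model('gpt-4o')

def get_num_tokens_from_file(file_path: str) -> int:
    # Read the whole file and count the tokens in its contents
    with open(file_path, 'r', encoding='utf-8') as f:
        return len(encoding.encode(f.read()))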

Calculate tokens in files in a folder

Set the folder_path variable to the path of your folder. Optionally, specify a filter for which files to include. Run all cells.

import pandas as pd

folder_path = 'path/to/your/folder'
file_filter = ['.md']  # Include only files with the .md extension
tokens_info = get_num_tokens_from_folder(folder_path, file_filter)

tokens_info_df = pd.DataFrame(tokens_info, columns=['file', 'tokens'])
tokens_info_df = tokens_info_df.sort_values(by='file')

print(f'Tokens in files in folder {folder_path} using model {model_name}:\n')
print(tokens_info_df.to_string(index=False))
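
The folder helper plausibly walks the directory and applies the same per-file counting; a sketch under that assumption (the notebook's actual get_num_tokens_from_folder may differ):

from pathlib import Path
import tiktoken

encoding = tiktoken.encoding_for_model('gpt-4o')

def get_num_tokens_from_folder(folder_path, file_filter=None):
    # Walk the folder recursively; keep files matching the extension filter, if any
    results = []
    for path in sorted(Path(folder_path).rglob('*')):
        if not path.is_file():
            continue
        if file_filter and path.suffix not in file_filter:
            continue
        text = path.read_text(encoding='utf-8')
        results.append((str(path), len(encoding.encode(text))))
    return results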