# Python Tokenize
This repo contains a Jupyter notebook that counts the tokens in text, files, and folders using tokenizers from Hugging Face and OpenAI.
## Installation

```sh
uv sync
```
## Usage
Select the model to use for tokenization in the Jupyter notebook. You can choose either a model from the Hugging Face model hub or an OpenAI model. Set the model's name in the `model_name` variable (a code sketch of how this selection might work follows the list below).
- For Hugging Face models, use the `user/model` name from the Hugging Face model hub, e.g. `mixedbread-ai/mxbai-embed-large-v1`.
- For OpenAI models, use the model name from the OpenAI API, e.g. `gpt-4o` (see the OpenAI docs for the list of available models).
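The notebook's internals aren't reproduced here, but a minimal sketch of how a single `model_name` variable could drive both backends might look like this; the `count_tokens` helper and its `KeyError`-based fallback are illustrative assumptions, not the notebook's actual code:

```python
import tiktoken
from transformers import AutoTokenizer

model_name = "mixedbread-ai/mxbai-embed-large-v1"  # or an OpenAI name like "gpt-4o"

def count_tokens(text: str, model_name: str) -> int:
    """Count tokens in text using the tokenizer matching model_name."""
    try:
        # tiktoken recognizes OpenAI model names and maps them to an encoding.
        encoding = tiktoken.encoding_for_model(model_name)
        return len(encoding.encode(text))
    except KeyError:
        # Anything tiktoken doesn't know is treated as a Hugging Face hub model.
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        return len(tokenizer.encode(text))
```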
### Calculate tokens in a text

- Set the `text` variable to your text, as in the sketch below.
- Run all cells.
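For illustration, this is roughly the computation those cells perform, reusing the hypothetical `count_tokens` helper sketched above:

```python
# Illustrative only: count tokens in a literal string.
text = "The quick brown fox jumps over the lazy dog."
print(f"Tokens: {count_tokens(text, model_name)}")
```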
### Calculate tokens in a file

- Set the `file_path` variable to the path of your file, as in the sketch below.
- Run all cells.
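A comparable sketch for a file, again assuming the hypothetical `count_tokens` helper; the example path is illustrative:

```python
from pathlib import Path

file_path = "README.md"  # example path, not one the notebook requires

# Read the file as UTF-8 text, then count its tokens.
file_text = Path(file_path).read_text(encoding="utf-8")
print(f"Tokens: {count_tokens(file_text, model_name)}")
```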
### Calculate tokens in files in a folder

- Set the `folder_path` variable to the path of your folder.
- Optionally, specify a filter for which files to include, as in the sketch below.
- Run all cells.
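A sketch of the folder case under the same assumptions; `folder_path`, the glob-style `file_filter`, and the per-file printout are illustrative choices, not the notebook's actual variables:

```python
from pathlib import Path

folder_path = "docs"     # example folder
file_filter = "**/*.md"  # e.g. include only Markdown files, recursively

# Walk the folder with the filter, counting tokens per file and in total.
total = 0
for path in Path(folder_path).glob(file_filter):
    if path.is_file():
        tokens = count_tokens(path.read_text(encoding="utf-8"), model_name)
        print(f"{path}: {tokens}")
        total += tokens
print(f"Total tokens: {total}")
```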