# TurkishBERTweet in the shadow of Large Language Models

## Table of contents
- Main Results
- Models
- Example usage
- Citation
- Acknowledgments

## Main Results

## Models
| Model | #params | Arch. | Max length | Pre-training data |
|---|---|---|---|---|
| VRLLab/TurkishBERTweet | 163M | base | 128 | 894M Turkish Tweets (uncased) |
## Example usage
```python
import torch
from transformers import AutoModel, AutoTokenizer

from Preprocessor import preprocess

tokenizer = AutoTokenizer.from_pretrained("VRLLab/TurkishBERTweet")
turkishBERTweet = AutoModel.from_pretrained("VRLLab/TurkishBERTweet")

# Sample tweet, in Turkish: "We named our lab 'viral' because our goal is to
# cross interdisciplinary boundaries and build new connections between them!"
text = """Lab'ımıza "viral" adını verdik çünkü amacımız disiplinler arası sınırları aşmak ve aralarında yeni bağlantılar kurmak! 💥🔬 #ViralLab #DisiplinlerArası #YenilikçiBağlantılar"""

# Normalize the raw tweet before tokenization.
preprocessed_text = preprocess(text)
input_ids = torch.tensor([tokenizer.encode(preprocessed_text)])

with torch.no_grad():
    features = turkishBERTweet(input_ids)  # first element is the token-level hidden states
```
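The first element of the output holds one hidden vector per token; to get a single fixed-size vector per tweet, mean pooling over non-padding tokens is a common choice. Below is a minimal sketch under that assumption: the pooling code is illustrative rather than part of this repository, while `truncation` and `max_length` are standard Hugging Face tokenizer arguments matching the 128-token limit in the table above.

```python
# Illustrative sketch: one embedding vector per tweet via mean pooling.
encoding = tokenizer(
    preprocessed_text,
    truncation=True,
    max_length=128,  # the model's maximum sequence length
    return_tensors="pt",
)

with torch.no_grad():
    outputs = turkishBERTweet(**encoding)

last_hidden = outputs[0]                                     # (batch, seq_len, hidden)
mask = encoding["attention_mask"].unsqueeze(-1)              # (batch, seq_len, 1)
tweet_embedding = (last_hidden * mask).sum(1) / mask.sum(1)  # (batch, hidden)
print(tweet_embedding.shape)  # e.g. torch.Size([1, 768]) for a base-size model
```

Mean pooling averages only over real tokens thanks to the attention mask; taking the first token's vector alone also works, but pooling is usually more robust for short, noisy tweet text.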
## Citation
```bibtex
@article{najafi2022TurkishBERTweet,
  title={TurkishBERTweet in the shadow of Large Language Models},
  author={Najafi, Ali and Varol, Onur},
  journal={arXiv preprint},
  year={2023}
}
```
## Acknowledgments
We thank Fatih Amasyali for providing access to the Tweet Sentiment datasets from the Kemik group. This material is based upon work supported by the Google Cloud Research Credits program under award GCP19980904. We also thank TUBITAK (121C220 and 222N311) for funding this project.
