ddobokki/electra-small-sts-cross-encoder

Tags: Text Ranking, sentence-transformers, PyTorch, Safetensors, Korean, electra, cross_encoder
How to use ddobokki/electra-small-sts-cross-encoder with sentence-transformers:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("ddobokki/electra-small-sts-cross-encoder")

query = "Which planet is known as the Red Planet?"
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]

scores = model.predict([(query, passage) for passage in passages])
print(scores)
```
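`model.predict` returns one raw relevance score per (query, passage) pair; sorting the passages by those scores produces the ranking. A minimal sketch of that step, using made-up placeholder scores instead of a real `model.predict` call (so it runs without downloading the model):

```python
# Rank passages by cross-encoder relevance scores. The scores below are
# hypothetical placeholders standing in for the output of model.predict(...).
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]
scores = [0.05, 0.93, 0.22, 0.41]  # placeholder scores, one per passage

# Pair each passage with its score and sort by descending score.
ranked = sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
for passage, score in ranked:
    print(f"{score:.2f}  {passage}")
```

With real model output, the same sort yields the passages ordered from most to least relevant to the query.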
Example

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('ddobokki/electra-small-sts-cross-encoder')
model.predict(["그녀는 행복해서 웃었다.", "그녀는 웃겨서 눈물이 났다."])
# -> 0.8206561
```

The two inputs mean "She laughed because she was happy." and "It was so funny that she teared up."; the model outputs a semantic similarity score for the pair.
Dataset

- KorSTS
  - Train
  - Test
- KLUE STS
  - Train
  - Test
Performance
| Dataset | Pearson corr. | Spearman corr. |
|---|---|---|
| KorSTS(test) + KLUE STS(test) | 0.8528 | 0.8504 |
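Pearson correlation measures the linear agreement between the model's predicted scores and the gold similarity labels, while Spearman correlation measures agreement between their rank orders. A minimal pure-Python sketch of both metrics; the `gold` and `pred` values below are made-up illustrations, not KorSTS/KLUE data:

```python
from statistics import mean

def pearson(x, y):
    # Pearson: covariance of x and y divided by the product of their norms.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def spearman(x, y):
    # Spearman: Pearson correlation computed on the ranks
    # (this sketch ignores tie handling for simplicity).
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return pearson(ranks(x), ranks(y))

gold = [0.0, 1.5, 2.0, 4.8]  # hypothetical human similarity labels
pred = [0.1, 0.9, 1.7, 4.2]  # hypothetical model scores
print(pearson(gold, pred), spearman(gold, pred))
```

In practice, evaluation scripts typically use `scipy.stats.pearsonr` and `scipy.stats.spearmanr`, which also handle ties.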
TODO

- Train with the KLUE 1.1 train and dev data