A fine-tuned DeBERTa v3 small model for detecting mental health conditions from text.

How to use elishaw/deberta_mental with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="elishaw/deberta_mental")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("elishaw/deberta_mental")
model = AutoModelForSequenceClassification.from_pretrained("elishaw/deberta_mental")
```
This model is based on microsoft/deberta-v3-small and has been fine-tuned to classify text into 8 mental health categories.
This model was trained on the AIMH/SWMH dataset.
The model can classify text into the following categories:
| ID | Label | Description |
|---|---|---|
| 0 | Normal | No mental health concerns detected |
| 1 | Offmychest | General venting/sharing |
| 2 | Depression | Depression-related content |
| 3 | Anxiety | Anxiety-related content |
| 4 | Stress | Stress-related content |
| 5 | Bipolar | Bipolar disorder-related content |
| 6 | Personality disorder | Personality disorder-related content |
| 7 | Suicidal | Suicidal ideation (⚠️ requires immediate attention) |
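At inference time this mapping is available as `model.config.id2label`; a minimal sketch of resolving a predicted class id, using a plain dict that mirrors the table above:

```python
# Label mapping mirroring the table above; at runtime the same
# mapping is exposed as model.config.id2label.
ID2LABEL = {
    0: "Normal",
    1: "Offmychest",
    2: "Depression",
    3: "Anxiety",
    4: "Stress",
    5: "Bipolar",
    6: "Personality disorder",
    7: "Suicidal",
}

def resolve_label(class_id: int) -> str:
    """Map a predicted class id to its human-readable label."""
    return ID2LABEL[class_id]

print(resolve_label(2))  # Depression
```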
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
model_name = "elishaw/deberta_mental"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Example text
text = "I've been feeling down lately and can't seem to enjoy anything anymore."

# Tokenize and predict
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)

# Get the predicted label and its probability
predicted_class = torch.argmax(predictions, dim=-1).item()
confidence = predictions[0][predicted_class].item()

print(f"Predicted: {model.config.id2label[predicted_class]}")
print(f"Confidence: {confidence:.2%}")
```
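Beyond the single top label, the full probability vector can be ranked to surface runner-up categories. A minimal sketch using made-up logits (the logit values below are illustrative; real values come from `outputs.logits` in the snippet above):

```python
import torch

# Illustrative logits for the 8 classes; in practice these come
# from a forward pass of the model.
labels = ["Normal", "Offmychest", "Depression", "Anxiety",
          "Stress", "Bipolar", "Personality disorder", "Suicidal"]
logits = torch.tensor([[0.2, 0.1, 2.5, 1.8, 0.3, -0.5, -1.0, 0.0]])

# Softmax over the class dimension, then take the 3 highest probabilities
probs = torch.nn.functional.softmax(logits, dim=-1)
top = torch.topk(probs, k=3, dim=-1)

for prob, idx in zip(top.values[0], top.indices[0]):
    print(f"{labels[idx]}: {prob.item():.2%}")
```

Ranking the full distribution is useful when the top two classes (e.g. Depression vs. Anxiety) have similar probabilities and a single argmax would hide that ambiguity.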
Please refer to the original microsoft/deberta-v3-small license and any additional licensing terms from the fine-tuning dataset.