Dendritic cell segmentation with a pretrained YOLO model

Example model training for dendritic cell identification with Ultralytics YOLO, informed by *A Systematic Review of YOLO-Based Object Detection in Medical Imaging: Advances, Challenges, and Future Directions*. A set of models is trained with different augmentations (to balance the dataset) and different model sizes.

For detecting dendritic cells only, without segmentation, a pretrained object-detection model may be a better choice. A larger promptable model is also an option.

Task Description: Detecting and Segmenting Dendritic Cells in the Nerve Plexus Layer

In this task, you are going to train a segmentation model to detect/segment dendritic cells (type 1 and type 2) in IVCM images of the corneal nerve plexus layer. The main objective is to obtain accurate segmentation results that can be used to calculate the density of dendritic cells in the analyzed region. Dendritic cells (DCs) are antigen-presenting immune cells that play a central role in initiating and regulating immune responses. In the cornea, variations in the density and morphology of DCs are associated with inflammation, infection, and systemic diseases such as diabetes and autoimmune disorders. Therefore, quantifying DC density serves as an important biomarker for assessing ocular surface inflammation and related systemic conditions.
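For reference, an Ultralytics-style dataset config for this two-class segmentation task might look like the following sketch. The paths and class indices are assumptions for illustration; the class names mirror the labels used in the metrics tables below.

```yaml
# Hypothetical data.yaml for Ultralytics YOLO segmentation training.
# Directory layout and class index assignment are illustrative assumptions.
path: datasets/dendritic-cells
train: images/train
val: images/val
names:
  0: white        # type 1 dendritic cells
  1: gray_808080  # type 2 dendritic cells
```

Training could then be launched with the Ultralytics CLI, e.g. `yolo segment train data=data.yaml model=<seg-weights>.pt epochs=10` (the exact YOLO26 segmentation weight file name is an assumption here).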

Metrics overview for yolo26

As expected from the dataset's distribution of type 1 and type 2 instances, the trained models perform better on type 1 (white labels) than on type 2 (gray labels) across all runs. Data augmentation helps to mitigate this imbalance (compare the simple nano model with the augmented nano model in the table below). Flipping only the images that contain type 2 instances performs about as well as rotating and flipping them (compare the augmented nano runs with rotation + flip type 2 versus flip type 2 only). Because it yields a smaller augmented dataset, the flip-type-2-only augmentation is used for the training runs with larger model sizes. Increasing the model size also improves performance, although the extra-large model performs similarly to the large one, likely because of the small dataset. Training for 30 epochs instead of 10 helps the large model, but is still insufficient for the extra-large model.
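The class-balancing augmentation can be illustrated with a minimal pure-Python sketch, assuming images are given as nested pixel lists and each sample records which classes it contains. A real pipeline would operate on image files and would also flip the polygon label coordinates accordingly.

```python
def hflip(image):
    """Horizontally flip an image given as a list of pixel rows."""
    return [list(reversed(row)) for row in image]


def augment_minority(dataset, minority_class):
    """Duplicate-and-flip every sample that contains the minority class.

    dataset: list of (image, classes) pairs, where classes is the set of
    class names present in that sample. Flipped copies are appended, so
    minority-class instances grow while majority-only images stay untouched.
    Note: a real pipeline must also mirror the segmentation labels.
    """
    augmented = list(dataset)
    for image, classes in dataset:
        if minority_class in classes:
            augmented.append((hflip(image), classes))
    return augmented


# Toy example: one image with only type 1, one with both types.
data = [
    ([[1, 0], [0, 0]], {"white"}),
    ([[0, 2], [0, 0]], {"white", "gray_808080"}),
]
balanced = augment_minority(data, "gray_808080")
# Only the second image is duplicated (flipped), so the set grows by one.
```

Rotations would be added the same way; the flip-only variant simply produces fewer duplicates, which keeps training runs on the larger models shorter.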

Legend of yolo performance metrics:

  • Precision: the fraction of predicted instances that match a ground-truth instance.
  • Recall: the fraction of ground-truth instances that are detected.
  • mAP: Mean Average Precision
  • mAP50: Mean average precision calculated at an intersection over union (IoU) threshold of 0.50. It's a measure of the model's accuracy considering only the "easy" detections.
  • mAP50-95: The average of the mean average precision calculated at varying IoU thresholds, ranging from 0.50 to 0.95. It gives a comprehensive view of the model's performance across different levels of detection difficulty.
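The IoU threshold behind these metrics can be made concrete with a small sketch using axis-aligned boxes given as `(x1, y1, x2, y2)`. This illustrates the threshold test only, not the full mAP computation.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


pred = (0, 0, 10, 10)
gt = (2, 0, 12, 10)  # ground truth shifted by 2 pixels
# inter = 8 * 10 = 80, union = 100 + 100 - 80 = 120, IoU = 80/120 ~ 0.667:
# a true positive at the mAP50 threshold (IoU >= 0.50), but not at the
# stricter thresholds that enter the mAP50-95 average.
```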
| Training setup | Class | Images | Instances | Box P | Box R | Box mAP50 | Box mAP50-95 | Mask P | Mask R | Mask mAP50 | Mask mAP50-95 |
|---|---|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| Simple 10 epoch nano training | all | 200 | 2168 | 0.445 | 0.451 | 0.393 | 0.182 | 0.349 | 0.350 | 0.253 | 0.0755 |
| Simple 10 epoch nano training | white | 197 | 1784 | 0.428 | 0.805 | 0.636 | 0.304 | 0.289 | 0.586 | 0.343 | 0.0956 |
| Simple 10 epoch nano training | gray | 139 | 384 | 0.462 | 0.0964 | 0.151 | 0.0592 | 0.408 | 0.115 | 0.163 | 0.0554 |
| Augmented nano (rotation + flip type 2) | all | 200 | 2168 | 0.50 | 0.524 | 0.483 | 0.245 | 0.354 | 0.399 | 0.272 | 0.0776 |
| Augmented nano (rotation + flip type 2) | white | 197 | 1784 | 0.62 | 0.720 | 0.699 | 0.364 | 0.372 | 0.460 | 0.303 | 0.0812 |
| Augmented nano (rotation + flip type 2) | gray | 139 | 384 | 0.379 | 0.328 | 0.266 | 0.125 | 0.336 | 0.339 | 0.240 | 0.0740 |
| Augmented nano (flip type 2 only) | all | 200 | 2168 | 0.497 | 0.521 | 0.494 | 0.236 | 0.406 | 0.392 | 0.312 | 0.0921 |
| Augmented nano (flip type 2 only) | white | 197 | 1784 | 0.599 | 0.758 | 0.712 | 0.349 | 0.382 | 0.479 | 0.324 | 0.0879 |
| Augmented nano (flip type 2 only) | gray | 139 | 384 | 0.395 | 0.283 | 0.276 | 0.124 | 0.430 | 0.305 | 0.300 | 0.0963 |
| Augmented small (flip type 2 only) | all | 200 | 2168 | 0.559 | 0.571 | 0.561 | 0.303 | 0.444 | 0.430 | 0.360 | 0.107 |
| Augmented small (flip type 2 only) | white | 197 | 1784 | 0.687 | 0.764 | 0.770 | 0.447 | 0.437 | 0.479 | 0.352 | 0.0952 |
| Augmented small (flip type 2 only) | gray | 139 | 384 | 0.430 | 0.378 | 0.352 | 0.160 | 0.451 | 0.381 | 0.367 | 0.118 |
| Augmented middle (flip type 2 only) | all | 200 | 2168 | 0.602 | 0.573 | 0.596 | 0.334 | 0.469 | 0.446 | 0.375 | 0.11 |
| Augmented middle (flip type 2 only) | white | 197 | 1784 | 0.737 | 0.743 | 0.791 | 0.458 | 0.457 | 0.469 | 0.343 | 0.0873 |
| Augmented middle (flip type 2 only) | gray | 139 | 384 | 0.468 | 0.404 | 0.401 | 0.209 | 0.481 | 0.422 | 0.407 | 0.132 |
| Augmented large (flip type 2 only) | all | 200 | 2168 | 0.608 | 0.541 | 0.572 | 0.336 | 0.478 | 0.430 | 0.377 | 0.117 |
| Augmented large (flip type 2 only) | white | 197 | 1784 | 0.709 | 0.739 | 0.779 | 0.492 | 0.484 | 0.516 | 0.414 | 0.126 |
| Augmented large (flip type 2 only) | gray | 139 | 384 | 0.507 | 0.343 | 0.365 | 0.180 | 0.472 | 0.344 | 0.341 | 0.107 |
| Augmented large (flip type 2 only), batch 8, 30 epochs | all | 200 | 2168 | 0.608 | 0.637 | 0.643 | 0.389 | 0.476 | 0.496 | 0.417 | 0.122 |
| Augmented large (flip type 2 only), batch 8, 30 epochs | white | 197 | 1784 | 0.744 | 0.811 | 0.833 | 0.535 | 0.476 | 0.522 | 0.395 | 0.107 |
| Augmented large (flip type 2 only), batch 8, 30 epochs | gray | 139 | 384 | 0.472 | 0.464 | 0.453 | 0.243 | 0.476 | 0.471 | 0.439 | 0.137 |
| Augmented extra large (flip type 2 only), 30 epochs | all | 200 | 2168 | 0.621 | 0.609 | 0.628 | 0.377 | 0.491 | 0.480 | 0.408 | 0.119 |
| Augmented extra large (flip type 2 only), 30 epochs | white | 197 | 1784 | 0.783 | 0.721 | 0.816 | 0.529 | 0.523 | 0.475 | 0.402 | 0.106 |
| Augmented extra large (flip type 2 only), 30 epochs | gray | 139 | 384 | 0.459 | 0.497 | 0.440 | 0.225 | 0.459 | 0.484 | 0.415 | 0.132 |

Augmented large training with flip of type 2 only, batch size 8, 30 epochs

| Class | Images | Instances | Box P | Box R | Box mAP50 | Box mAP50-95 | Mask P | Mask R | Mask mAP50 | Mask mAP50-95 |
|---|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| all | 200 | 2168 | 0.608 | 0.637 | 0.643 | 0.389 | 0.476 | 0.496 | 0.417 | 0.122 |
| white | 197 | 1784 | 0.744 | 0.811 | 0.833 | 0.535 | 0.476 | 0.522 | 0.395 | 0.107 |
| gray_808080 | 139 | 384 | 0.472 | 0.464 | 0.453 | 0.243 | 0.476 | 0.471 | 0.439 | 0.137 |
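Once segmentation results are available, the dendritic cell density mentioned above can be derived as cells per mm². A minimal sketch follows; the default 400 × 400 µm IVCM frame (0.16 mm²) is an assumption for illustration and should be replaced by the actual imaged area.

```python
def dc_density(num_cells, frame_area_mm2=0.16):
    """Dendritic cell density in cells/mm^2.

    frame_area_mm2 defaults to 0.16 mm^2 (a 400 x 400 um IVCM frame,
    an illustrative assumption; use the real field of view of the device).
    """
    if frame_area_mm2 <= 0:
        raise ValueError("frame area must be positive")
    return num_cells / frame_area_mm2


# e.g. 24 detected cells in one frame -> 150 cells/mm^2
density = dc_density(24)
```

In practice `num_cells` would come from counting the predicted instances per image (optionally per class), and densities would be averaged over several frames per eye.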

Prediction example with a larger batch:

simple prediction

The training metrics:

training_metrics

Citation

@model{yolo26_large_segmentation_dendritic-cells,
  title={{Dendritic cell segmentation with a pretrained YOLO model}},
  author={Kalle Hilsenbek},
  url={https://huggingface.co/Bachstelze/yolo26_large_segmentation_dendritic-cells},
  year={2026}
}