
A Survey of Loss Functions for Semantic Segmentation

EasyChair Preprint 4058

6 pages · Date: August 19, 2020

Abstract

Image segmentation has been an active field of research, as it has the potential to address gaps in healthcare and benefit the public at large. Over the past five years, various papers have proposed different objective loss functions for particular cases, such as biased data and sparse segmentation. In this paper, we summarize most of the well-known loss functions widely used in image segmentation and list the cases where their use can help a model converge faster and better. Furthermore, we introduce a new log-cosh dice loss function and compare its performance on the NBFS skull-stripping dataset against widely used loss functions. We show that certain loss functions perform well across all datasets and are a good default choice for datasets with unknown distributions. The code is available at https://github.com/shruti-jadon/Semantic-Segmentation-Loss-Functions.
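The log-cosh dice loss mentioned in the abstract applies the smooth, bounded log(cosh(·)) function to the soft Dice loss. A minimal NumPy sketch of the idea follows; the function names and the smoothing constant `eps` are illustrative assumptions, not the paper's released code:

```python
import numpy as np

def dice_loss(y_true, y_pred, eps=1e-7):
    # Soft Dice loss over flattened binary masks:
    # 1 - (2*|A ∩ B| + eps) / (|A| + |B| + eps)
    intersection = np.sum(y_true * y_pred)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

def log_cosh_dice_loss(y_true, y_pred):
    # Wrap the Dice loss in log(cosh(x)); near x = 0 this behaves
    # like x^2 / 2, while for large x its gradient stays bounded,
    # giving a smoother optimization surface.
    return np.log(np.cosh(dice_loss(y_true, y_pred)))

# Example: identical masks give a loss of (approximately) zero.
mask = np.array([1.0, 1.0, 0.0, 0.0])
print(log_cosh_dice_loss(mask, mask))
```

Because cosh(0) = 1, a perfect prediction yields log(cosh(0)) = 0, so the transformed loss keeps the same minimum as the plain Dice loss.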

Keyphrases: Binary Cross-Entropy, Balanced Cross-Entropy, Dice Coefficient, Dice Loss, Focal Loss, Tversky Loss, Focal Tversky Loss, Exponential Logarithmic Loss, Hausdorff Distance, Log-Cosh Dice Loss, Correlation-Maximized Structural Similarity Loss, Distance-Map-Derived Loss Penalty Term, Loss Functions, Semantic Segmentation, Image Segmentation, Medical Imaging, NBFS Skull Stripping, Deep Learning, Computer Vision, Optimization, Healthcare

BibTeX entry
BibTeX does not have a dedicated entry type for preprints; the following @booklet entry is a workaround for producing the correct reference:
@booklet{EasyChair:4058,
  author       = {Shruti Jadon},
  title        = {A Survey of Loss Functions for Semantic Segmentation},
  howpublished = {EasyChair Preprint 4058},
  year         = {2020}}