Loss functions

I'm lost too.

source

mse_variance_loss


def mse_variance_loss(
    preds, target, representations, alpha:float=0.2
):

preds: [bs x num_patch x n_vars x patch_len]
target: [bs x num_patch x n_vars x patch_len]
representations: [bs x n_vars x d_model x num_patch]
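The docstring above gives only the tensor shapes, not the formula. A plausible reading, sketched here on plain Python lists rather than batched tensors, is an MSE term on the predictions plus an `alpha`-weighted variance regularizer on the representations. The VICReg-style hinge `max(0, 1 - std)` used below is an assumption, as is the helper name.

```python
import math

def mse_variance_sketch(preds, targets, reps, alpha=0.2):
    """Hypothetical sketch: MSE on the forecasts plus an alpha-weighted
    variance penalty on the representations.

    reps is a list of representation dimensions, each a list of values.
    The hinge max(0, 1 - std) per dimension is an assumed (VICReg-style)
    form of the variance term, not taken from the source.
    """
    mse = sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)
    var_pen = 0.0
    for dim in reps:
        mean = sum(dim) / len(dim)
        std = math.sqrt(sum((v - mean) ** 2 for v in dim) / len(dim))
        var_pen += max(0.0, 1.0 - std)  # penalize collapsed dimensions
    return mse + alpha * var_pen / len(reps)
```

A dimension with large spread contributes nothing; a collapsed (constant) dimension contributes the full `alpha`-weighted hinge.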


source

smoothl1_loss


def smoothl1_loss(
    preds, target
):

source

huber_loss


def huber_loss(
    preds, target, delta:float=1.0
):

preds: [bs x num_patch x n_vars x patch_len]
target: [bs x num_patch x n_vars x patch_len]
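Shape bookkeeping aside, the per-element Huber computation can be sketched on flat lists (the real function presumably applies the same formula elementwise to the 4-D tensors above). With `delta=1` it coincides with the smooth L1 loss.

```python
def huber_sketch(preds, targets, delta=1.0):
    """Elementwise Huber loss, averaged over flattened values.

    Quadratic for residuals |r| <= delta, linear beyond, so outliers are
    penalized less harshly than under plain MSE.
    """
    total = 0.0
    for p, t in zip(preds, targets):
        r = abs(p - t)
        total += 0.5 * r * r if r <= delta else delta * (r - 0.5 * delta)
    return total / len(targets)

# The outlier (residual 3.0) contributes linearly, not quadratically.
print(huber_sketch([0.0, 0.0], [0.5, 3.0]))  # (0.125 + 2.5) / 2 = 1.3125
```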


source

cosine_similarity_loss


def cosine_similarity_loss(
    preds, target
):

preds: [bs x num_patch x n_vars x patch_len]
target: [bs x num_patch x n_vars x patch_len]
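The core computation, sketched on flat vectors: one minus the cosine similarity, so the loss is 0 for parallel vectors and 2 for opposite ones. The actual function likely computes similarity along a tensor dimension (as `torch.nn.functional.cosine_similarity` does); the flat-vector form and names here are illustrative.

```python
import math

def cosine_similarity_loss_sketch(preds, targets, eps=1e-8):
    """1 - cosine similarity between two flat vectors.

    Only direction matters: scaling either vector leaves the loss
    unchanged, unlike MSE/MAE.
    """
    dot = sum(p * t for p, t in zip(preds, targets))
    norm_p = math.sqrt(sum(p * p for p in preds))
    norm_t = math.sqrt(sum(t * t for t in targets))
    return 1.0 - dot / max(norm_p * norm_t, eps)
```

Note that `[1, 2]` and `[2, 4]` give zero loss despite differing in magnitude.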


source

mape


def mape(
    preds, target
):
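No docstring is given above, so here is the standard MAPE formula as a sketch. Some implementations multiply by 100 or handle near-zero targets differently; the `eps` guard here is an assumption.

```python
def mape_sketch(preds, targets, eps=1e-8):
    """Mean absolute percentage error: mean(|pred - target| / |target|).

    Returned as a fraction (0.1 == 10%); scaling by 100 is a common
    variant. eps guards against division by zero.
    """
    return sum(abs(p - t) / max(abs(t), eps)
               for p, t in zip(preds, targets)) / len(targets)

print(mape_sketch([110.0, 90.0], [100.0, 100.0]))  # 0.1, i.e. 10% error
```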

source

mae_loss


def mae_loss(
    preds, target
):

preds: [bs x num_patch x n_vars x patch_len]
target: [bs x num_patch x n_vars x patch_len]


source

mse_loss


def mse_loss(
    preds, target
):

preds: [bs x num_patch x n_vars x patch_len]
target: [bs x num_patch x n_vars x patch_len]
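Both `mae_loss` and `mse_loss` presumably reduce over all tensor elements; the flat-list sketch below shows the key practical difference between them, which is how hard outliers are penalized.

```python
def mae_sketch(preds, targets):
    """Mean absolute error over flattened values."""
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(targets)

def mse_sketch(preds, targets):
    """Mean squared error over flattened values."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

# MSE blows up on the outlier (error 3.0); MAE treats it linearly.
preds, targets = [0.0, 0.0], [0.5, 3.0]
print(mae_sketch(preds, targets))  # (0.5 + 3.0) / 2 = 1.75
print(mse_sketch(preds, targets))  # (0.25 + 9.0) / 2 = 4.625
```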


source

nll_logistic_hazard


def nll_logistic_hazard(
    phi, events, idx_durations, reduction:str='mean'
):

Adapted from https://github.com/havakv/pycox/blob/3eccdd7fd9844a060f50fdcc315659f33a2d2dc1/pycox/models/loss.py#L18

Negative log-likelihood of the discrete-time hazard-parametrized LogisticHazard model [1].

Arguments:
phi {torch.tensor} – Estimates in (-inf, inf), where hazard = sigmoid(phi).
idx_durations {torch.tensor} – Event times represented as indices.
events {torch.tensor} – Indicator of event (1.) or censoring (0.). Same length as 'idx_durations'.
reduction {string} – How to reduce the loss. 'none': no reduction. 'mean': mean of tensor. 'sum': sum of tensor.

Returns: torch.tensor – The negative log-likelihood.

References: [1] Håvard Kvamme and Ørnulf Borgan. Continuous and Discrete-Time Survival Prediction with Neural Networks. arXiv preprint arXiv:1910.06724, 2019. https://arxiv.org/pdf/1910.06724.pdf
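For a single individual, the likelihood above decomposes into Bernoulli terms: one "survived" term per interval before the event index, plus one term at the event index whose target is the event indicator. A plain-Python sketch (the pycox implementation achieves the same thing vectorized, via a scatter plus `binary_cross_entropy_with_logits` and a cumulative sum):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def nll_logistic_hazard_sketch(phi, event, idx_duration):
    """NLL for one individual under the discrete-time LogisticHazard model.

    phi: list of logits, one per time interval; hazard_j = sigmoid(phi_j).
    event: 1.0 if the event was observed, 0.0 if censored.
    idx_duration: index of the interval of the event/censoring.
    """
    nll = 0.0
    # Survived intervals 0..idx_duration-1: contribute log(1 - hazard_j).
    for j in range(idx_duration):
        nll -= math.log(1.0 - sigmoid(phi[j]))
    # At idx_duration: Bernoulli term with target = event indicator.
    h = sigmoid(phi[idx_duration])
    nll -= event * math.log(h) + (1.0 - event) * math.log(1.0 - h)
    return nll
```

With all logits at 0 (hazard 0.5 everywhere), an observed event at index 2 costs three factors of 1/2, i.e. 3·log 2.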


source

CrossEntropyLoss


def CrossEntropyLoss(
    ignore_index:int=-100, reduction:str='mean', weight=None, label_smoothing:float=0.0, soft_labels:bool=False
):

Cross entropy loss with ignore_index.
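The `ignore_index` behavior means positions labeled `-100` contribute nothing to the loss or to the averaging denominator. A sketch for hard labels (the `soft_labels` path, weighting, and label smoothing are omitted; names here are illustrative):

```python
import math

def masked_cross_entropy(logits, labels, ignore_index=-100):
    """Mean cross entropy over positions whose label != ignore_index.

    logits: list of per-position logit lists; labels: list of class ids.
    Uses the log-sum-exp trick for numerical stability.
    """
    total, n = 0.0, 0
    for row, y in zip(logits, labels):
        if y == ignore_index:
            continue  # masked position: excluded from sum and count
        m = max(row)
        log_z = m + math.log(sum(math.exp(v - m) for v in row))
        total += log_z - row[y]  # -log softmax(row)[y]
        n += 1
    return total / max(n, 1)
```

Note the second (ignored) position below does not drag the mean toward its huge logit gap.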


source

FocalLoss


def FocalLoss(
    weight=None, gamma:float=2.0, reduction:str='mean', ignore_index:int=-100
):

Adapted from tsai (weighted multiclass focal loss): https://github.com/timeseriesAI/tsai/blob/bdff96cc8c4c8ea55bc20d7cffd6a72e402f4cb2/tsai/losses.py#L116C1-L140C20
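The focal loss scales the per-sample cross entropy by `(1 - p_y)^gamma`, so confidently-correct samples (p_y near 1) are down-weighted and training focuses on hard examples; `gamma=0` recovers plain cross entropy. A single-sample sketch (class weighting and `ignore_index` handling omitted):

```python
import math

def focal_loss_term(logits, y, gamma=2.0):
    """Focal loss for one sample: (1 - p_y)^gamma * (-log p_y)."""
    m = max(logits)
    z = sum(math.exp(v - m) for v in logits)
    p_y = math.exp(logits[y] - m) / z  # softmax probability of true class
    return ((1.0 - p_y) ** gamma) * (-math.log(p_y))
```

For an uncertain prediction (`p_y = 0.5`), `gamma=2` shrinks the cross-entropy term `log 2` by a factor of `(1 - 0.5)^2 = 0.25`.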


source

KLDivLoss


def KLDivLoss(
    reduction:str='mean'
):

Kullback-Leibler Divergence Loss with masking for ignore_index. Handles soft labels with ignore_index marked as -100.

Args:
logits: [bs x n_classes x pred_labels] – model predictions
targets: [bs x n_classes x soft_labels] – soft labels, with ignore_index positions marked as 0
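A minimal sketch of the masking idea, assuming (as the Args suggest) that a masked position carries an all-zero target distribution and is excluded from the mean. The real function works on batched tensors, presumably via `torch.nn.functional.kl_div`; the list-based layout and names here are illustrative.

```python
import math

def masked_kl_div(log_preds, targets):
    """Mean KL(target || pred) over positions, skipping masked positions.

    log_preds: per-position log-probability lists (model output).
    targets: per-position soft-label distributions; an all-zero target
    marks a masked position and is excluded from the mean.
    """
    total, n = 0.0, 0
    for lp, t in zip(log_preds, targets):
        if sum(t) == 0:
            continue  # masked position
        total += sum(ti * (math.log(ti) - lpi)
                     for ti, lpi in zip(t, lp) if ti > 0)
        n += 1
    return total / max(n, 1)
```

Matching distributions give zero KL; a masked (all-zero) target position leaves the mean untouched.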