Augmentations

Augmentations for self-supervised time-series training: patching utilities, value augmentations, shuffle augmentations, and transform callbacks.

Patching


source

create_patch


def create_patch(
    xb, patch_len, stride, constant_pad:bool=False, constant_pad_value:int=0
):

xb: [bs x n_vars x seq_len]
out: [bs x num_patch x n_vars x patch_len]
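The shape contract above can be sketched in NumPy. This is a minimal sketch of strided patching under the stated shapes, not the library's implementation; the `constant_pad` handling is omitted:

```python
import numpy as np

def create_patch(xb, patch_len, stride):
    # xb: [bs x n_vars x seq_len] -> [bs x num_patch x n_vars x patch_len]
    num_patch = (xb.shape[-1] - patch_len) // stride + 1
    return np.stack(
        [xb[:, :, i * stride : i * stride + patch_len] for i in range(num_patch)],
        axis=1,
    )

xb = np.arange(2 * 3 * 16).reshape(2, 3, 16).astype(float)
patches = create_patch(xb, patch_len=4, stride=4)
print(patches.shape)  # (2, 4, 3, 4)
```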


source

mask_patches_simple


def mask_patches_simple(
    xb, mask_ratio
):

Masks patches using a fixed-ratio approach, similar to random_masking.

xb: [bs x patch_num x n_vars x patch_len] or a nested tensor
Returns:
x_masked: masked tensor with the same shape as the input
mask: binary mask where 1 indicates masked positions [bs x patch_num x n_vars]
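A minimal NumPy sketch of fixed-ratio patch masking, assuming the same fixed number of patches is zeroed out independently per sample and variable (nested-tensor support omitted):

```python
import numpy as np

def mask_patches_simple(xb, mask_ratio, rng=None):
    # xb: [bs x patch_num x n_vars x patch_len]
    rng = rng or np.random.default_rng()
    bs, patch_num, n_vars, _ = xb.shape
    n_mask = int(patch_num * mask_ratio)          # fixed number of masked patches
    x_masked = xb.copy()
    mask = np.zeros((bs, patch_num, n_vars))
    for b in range(bs):
        for v in range(n_vars):
            idx = rng.choice(patch_num, size=n_mask, replace=False)
            x_masked[b, idx, v, :] = 0.0          # zero out the selected patches
            mask[b, idx, v] = 1.0                 # 1 marks a masked position
    return x_masked, mask
```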


source

unpatch


def unpatch(
    x, seq_len, remove_padding:bool=True
):

x: [bs/None x patch_num x n_vars x patch_len]
returns x: [bs x n_vars x seq_len]
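Assuming non-overlapping patches (stride equal to patch_len), unpatching reduces to a transpose plus reshape. A sketch under that assumption:

```python
import numpy as np

def unpatch(x, seq_len, remove_padding=True):
    # x: [bs x patch_num x n_vars x patch_len] -> [bs x n_vars x seq_len]
    bs, patch_num, n_vars, patch_len = x.shape
    out = x.transpose(0, 2, 1, 3).reshape(bs, n_vars, patch_num * patch_len)
    # drop any trailing padding beyond the original sequence length
    return out[:, :, :seq_len] if remove_padding else out

xb = np.arange(2 * 3 * 16).reshape(2, 3, 16).astype(float)
patches = xb.reshape(2, 3, 4, 4).transpose(0, 2, 1, 3)  # [bs x patch_num x n_vars x patch_len]
rec = unpatch(patches, seq_len=16)  # round-trip recovers the sequence
```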

Value Augmentations


source

jitter_augmentation


def jitter_augmentation(
    x, mask_ratio:float=0.05, jitter_ratio:float=0.05, p:int=1
):
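No docstring is given; judging from the name and signature, this likely adds small Gaussian noise to a random fraction of values, applied with probability `p`. A hypothetical sketch (the parameter semantics are assumptions, not the library's behavior):

```python
import numpy as np

def jitter_augmentation(x, mask_ratio=0.05, jitter_ratio=0.05, p=1.0, rng=None):
    rng = rng or np.random.default_rng()
    if rng.random() > p:                      # apply with probability p
        return x
    sel = rng.random(x.shape) < mask_ratio    # fraction of values to jitter (assumed)
    noise = rng.normal(scale=jitter_ratio, size=x.shape)
    return x + sel * noise
```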

source

remove_values


def remove_values(
    x, mask_ratio
):
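No docstring is given; a plausible reading of the name and signature is that a random `mask_ratio` fraction of values is zeroed out. A hypothetical sketch under that assumption:

```python
import numpy as np

def remove_values(x, mask_ratio, rng=None):
    # assumed behavior: zero out a random mask_ratio fraction of values
    rng = rng or np.random.default_rng()
    out = x.copy()
    out[rng.random(x.shape) < mask_ratio] = 0.0
    return out
```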

Shuffle Augmentations


source

shuffle_dim


def shuffle_dim(
    x, dim:int=1, p:float=0.5
):

Randomly shuffles x along dimension dim.
x: [bs x n_channels x n_patches x patch_len]
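A NumPy sketch of shuffling along one axis, applied with probability `p` (one interpretation of the docstring, assuming a single permutation shared across the batch):

```python
import numpy as np

def shuffle_dim(x, dim=1, p=0.5, rng=None):
    rng = rng or np.random.default_rng()
    if rng.random() > p:                  # apply with probability p
        return x
    perm = rng.permutation(x.shape[dim])  # one random permutation for the whole batch
    return np.take(x, perm, axis=dim)
```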


source

reverse_sequence


def reverse_sequence(
    x, seq_dim:tuple=(-1,), p:float=0.5
):
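No docstring is given; the name and signature suggest flipping x along `seq_dim` with probability `p`. A hypothetical sketch under that assumption:

```python
import numpy as np

def reverse_sequence(x, seq_dim=(-1,), p=0.5, rng=None):
    # assumed behavior: flip x along seq_dim with probability p
    rng = rng or np.random.default_rng()
    if rng.random() > p:
        return x
    return np.flip(x, axis=seq_dim)
```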

source

channel_masking


def channel_masking(
    x, dim:int=1, p:float=0.5, specific_channels:NoneType=None
):

Randomly masks up to n_channels - 1 channels of x, or the given specific_channels if provided.
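A NumPy sketch of channel masking as described, assuming masking means zeroing the selected channels:

```python
import numpy as np

def channel_masking(x, dim=1, p=0.5, specific_channels=None, rng=None):
    rng = rng or np.random.default_rng()
    if rng.random() > p:
        return x
    n_channels = x.shape[dim]
    if specific_channels is None:
        n_mask = int(rng.integers(1, n_channels))  # up to n_channels - 1
        specific_channels = rng.choice(n_channels, size=n_mask, replace=False)
    out = x.copy()
    idx = [slice(None)] * x.ndim
    idx[dim] = specific_channels
    out[tuple(idx)] = 0.0                          # zero the selected channels
    return out
```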


source

random_crop


def random_crop(
    x, c_in, min_len, p:float=0.5
):

Args:
x: either a regular tensor [n_channels x seq_len] or a nested tensor with variable lengths
Returns:
a cropped version of the input with random length
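A NumPy sketch of random cropping for the regular-tensor case (nested-tensor support omitted; the exact roles of `c_in` and `p` are assumptions):

```python
import numpy as np

def random_crop(x, c_in, min_len, p=0.5, rng=None):
    # x: [n_channels x seq_len]; c_in kept for signature parity (unused in this sketch)
    rng = rng or np.random.default_rng()
    seq_len = x.shape[-1]
    if rng.random() > p or seq_len <= min_len:
        return x
    crop_len = int(rng.integers(min_len, seq_len + 1))    # random target length
    start = int(rng.integers(0, seq_len - crop_len + 1))  # random start offset
    return x[..., start:start + crop_len]
```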

Transforms


source

MixupCallbackClassification


def MixupCallbackClassification(
    num_classes, mixup_alpha:float=0.4, # alpha parameter for the beta distribution
    ignore_index:int=-100, # ignore index
):

Mixup for 1D data (e.g., time-series).

This callback applies Mixup to the training data, blending both the input data and the labels.

See tsai implementation here: https://github.com/timeseriesAI/tsai/blob/bdff96cc8c4c8ea55bc20d7cffd6a72e402f4cb2/tsai/data/mixed_augmentation.py#L43

Note that this produces non-integer (soft) labels; the loss function must be able to handle them.
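The core mixup step (blend both inputs and one-hot labels with a Beta-distributed coefficient) can be sketched as follows; `mixup_batch` is a hypothetical standalone helper illustrating the idea, not the callback itself:

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.4, rng=None):
    # x: [bs x ...], y_onehot: [bs x num_classes]
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing coefficient ~ Beta(alpha, alpha)
    lam = max(lam, 1 - lam)             # keep the original sample dominant (as in tsai)
    perm = rng.permutation(x.shape[0])  # partner samples via a batch permutation
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]  # soft labels
    return x_mix, y_mix
```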


source

TransformsCallback


def TransformsCallback(
    transforms
):

Applies a series of transforms to the input data, on train_batch_start.
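The callback's core behavior reduces to sequential application of the transforms; a minimal sketch (`apply_transforms` is a hypothetical helper name, not the callback API):

```python
def apply_transforms(batch, transforms):
    # apply each transform in order to the batch, as on train_batch_start
    for t in transforms:
        batch = t(batch)
    return batch
```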


source

VariableChannelInput


def VariableChannelInput(
    indexes_to_add_channels, n_channels_expected, channel_dim:int=1
):

Randomly inserts zero-filled channels at the given indexes so the input matches the number of channels expected by the self-supervised model.
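A NumPy sketch of the zero-channel insertion step; `add_zero_channels` is a hypothetical helper showing the mechanics, while the callback additionally handles the random choice of indexes:

```python
import numpy as np

def add_zero_channels(x, indexes_to_add_channels, channel_dim=1):
    # insert a zero-filled channel at each index (ascending order keeps
    # the indexes valid in the final, expanded tensor)
    out = x
    for idx in sorted(indexes_to_add_channels):
        out = np.insert(out, idx, 0.0, axis=channel_dim)
    return out
```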