Patch Time Series Transformer

A patch-based time series transformer: the input series is split into patches, each patch is embedded into a d_model-dimensional token, and the resulting sequence is processed by a transformer encoder with masking-based pretraining.
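To make the patching idea concrete, here is a minimal, self-contained sketch in plain PyTorch of how a multivariate series can be split into overlapping patches. The shapes and the unfold call are illustrative only; this library's own tokenizer ('simple', STFT, ...) may differ in details such as end-of-series padding.

import torch

# Toy example of PatchTST-style patching.
batch, c_in, seq_len = 8, 3, 512
patch_len, stride = 16, 8

x = torch.randn(batch, c_in, seq_len)      # (batch, channels, time)
# Unfold the time dimension into overlapping windows:
# result is (batch, channels, num_patches, patch_len).
patches = x.unfold(-1, patch_len, stride)
num_patches = (seq_len - patch_len) // stride + 1   # = 63 here
assert patches.shape == (batch, c_in, num_patches, patch_len)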


MaskedAutogressionFeedForward


def MaskedAutogressionFeedForward(
    c_in, # number of input channels
    patch_len, # length of each patch (STFT window or interval length)
    d_model, # dimension of the linear layers that embed patches for the transformer
    shared_recreation:bool=True, # if True, use one shared projection for all channels; otherwise one projection per channel
):

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and their parameters will also be converted when you call to(), etc.

Note: as per the example above, an __init__() call to the parent class must be made before assignment on the child.

training (bool): whether this module is in training or evaluation mode.
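A minimal usage sketch. The constructor arguments come from the signature above, but the import path and the forward-pass input layout are assumptions, not confirmed by this page: the sketch assumes the head receives patch embeddings of shape (batch, c_in, num_patches, d_model) and projects each embedding back to raw patch values.

import torch
# Hypothetical import path -- adjust to wherever this module lives in
# your install; the package layout is not shown on this page.
from patchtft.layers import MaskedAutogressionFeedForward

head = MaskedAutogressionFeedForward(
    c_in=3,                  # three input channels
    patch_len=16,            # each patch covers 16 time steps
    d_model=128,             # width of the transformer embeddings
    shared_recreation=True,  # one shared projection for all channels
)

# Assumed input layout: (batch, c_in, num_patches, d_model). Under that
# assumption the head maps each embedding back to a raw patch, giving
# (batch, c_in, num_patches, patch_len) = (8, 3, 63, 16).
z = torch.randn(8, 3, 63, 128)
recon = head(z)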



TSTBlock


def TSTBlock(
    d_model, # dimension of patch embeddings
    n_heads, # number of attention heads per layer
    d_ff:int=256, # dimension of the feedforward layer in each transformer layer
    attn_dropout:float=0.0, # dropout applied to attention weights
    dropout:float=0.0, # dropout applied elsewhere in the block
    bias:bool=True, # whether linear layers include a bias term
    activation:str='gelu', # feedforward activation
    pre_norm:bool=False, # apply layer norm before (True) or after (False) each sublayer
    rotary_pes:bool=False # use rotary positional embeddings
):

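A minimal sketch of running one block over a sequence of patch embeddings. The import path is hypothetical, and the sketch assumes the standard transformer-encoder-layer interface: input and output of shape (batch, num_patches, d_model).

import torch
from patchtft.layers import TSTBlock  # hypothetical import path

block = TSTBlock(
    d_model=128,  # must match the patch-embedding width
    n_heads=8,    # attention heads
    d_ff=256,     # feedforward width
    dropout=0.1,
)

# Assumed interface: a sequence of patch embeddings in, same shape out
# (residual connections preserve the shape).
z = torch.randn(8, 63, 128)
out = block(z)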



PatchTFTSimple


def PatchTFTSimple(
    c_in, patch_size, patch_stride, num_patches, # input channels and patching geometry
    d_model, n_heads, d_ff, num_layers, # transformer encoder dimensions
    augmentations:list=['patch_mask', 'jitter_zero_mask', 'channel_masking'], # masking/jitter augmentations applied during pretraining
    mask_ratio:float=0.1, # fraction of patches masked during pretraining
    shared_embedding:bool=False, pretrain_head:bool=True, dropout:float=0.0, attn_dropout:float=0.0,
    act:str='gelu', pre_norm:bool=False, pe_type:str='tAPE', qkv_bias:bool=True, init_std:float=0.02,
    tokenizer_type:str='simple', tokenizer_kwargs:dict={}
):

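A construction-and-forward sketch using values consistent with the patching example at the top of this page. The import path is hypothetical, and the raw input layout (batch, c_in, seq_len) is an assumption; only the constructor arguments are taken from the signature above.

import torch
from patchtft.models import PatchTFTSimple  # hypothetical import path

model = PatchTFTSimple(
    c_in=3, patch_size=16, patch_stride=8, num_patches=63,
    d_model=128, n_heads=8, d_ff=256, num_layers=4,
    mask_ratio=0.4,       # mask 40% of patches during pretraining
    pretrain_head=True,   # reconstruct masked patches as the objective
)

# Assumed raw input: (batch, c_in, seq_len), with seq_len = 512 so that
# (512 - 16) // 8 + 1 = 63 patches, matching num_patches above.
x = torch.randn(8, 3, 512)
out = model(x)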



PatchTFTSimpleLightning


def PatchTFTSimpleLightning(
    learning_rate, train_size, batch_size, n_gpus, # optimization and hardware setup
    metrics:dict={}, loss_func:str='mse', weight_decay:float=0.0, epochs:int=100,
    use_weight_decay_scheduler:bool=False, final_weight_decay:float=0.4,
    optimizer_type:str='AdamW', scheduler_type:str='OneCycle',
    huber_delta=None, # Huber loss delta; unused for other losses
    scheduler_kwargs:dict={}, transforms=None,
    **patchmeup_kwargs # additional keyword arguments
):

LightningModule wrapper around the model: configures the optimizer (optimizer_type), learning-rate schedule (scheduler_type), loss function (loss_func, with huber_delta for Huber loss), and optional weight-decay scheduling for training.
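A sketch of wiring this LightningModule into a pytorch_lightning Trainer. The import path is hypothetical, how the underlying model is constructed (directly or via **patchmeup_kwargs) is not shown on this page, and train_size is presumably used together with batch_size, epochs, and n_gpus to size the OneCycle schedule.

import pytorch_lightning as pl
from patchtft.models import PatchTFTSimpleLightning  # hypothetical import path

lit_model = PatchTFTSimpleLightning(
    learning_rate=1e-3,
    train_size=10_000,  # number of training samples
    batch_size=64,
    n_gpus=1,
    loss_func='mse',
    epochs=100,
)

trainer = pl.Trainer(max_epochs=100, accelerator='auto', devices=1)
# trainer.fit(lit_model, train_dataloader)  # supply your own DataLoader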