Transformers
Contrastive Time Encoder
Torch
PatchTFTSimpleContrastive
def PatchTFTSimpleContrastive(
    c_in, # the number of input channels
    patch_size, # the length of each patch
    patch_stride, # the distance between the start of consecutive patches
    num_patches, # the number of patches in the sequence
    d_model, # the dimension of the input to the transformer encoder
    n_heads, # the number of heads in each layer
    d_ff, # the feedforward layer size in the transformer
    num_layers, # the number of transformer encoder layers to use
    augmentations:list=['patch_mask', 'jitter_zero_mask', 'channel_masking'], # the augmentations to apply
    mask_ratio:float=0.1, # the fraction of the signal to mask
    shared_embedding:bool=False, # whether each channel is projected with its own set of linear weights to the encoder dimension
    dropout:float=0.0, # dropout for linear layers
    attn_dropout:float=0.0, # dropout in attention
    act:str='gelu', # activation function
    pre_norm:bool=False, # whether to apply normalization before (pre-norm) rather than after each sublayer
    pe_type:str='tAPE', # positional encoding type, options include learned or tAPE
    qkv_bias:bool=True, # whether the query/key/value projections use a bias term
    init_std:float=0.02, # standard deviation used for weight initialization
    tokenizer_type:str='simple', # the type of patch tokenizer to use
    tokenizer_kwargs:dict={}, # extra keyword arguments passed to the tokenizer
    pretrain_head:bool=True # whether to attach a pretraining head
):
Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

    import torch.nn as nn
    import torch.nn.functional as F

    class Model(nn.Module):
        def __init__(self) -> None:
            super().__init__()
            self.conv1 = nn.Conv2d(1, 20, 5)
            self.conv2 = nn.Conv2d(20, 20, 5)

        def forward(self, x):
            x = F.relu(self.conv1(x))
            return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and their parameters will also be converted when you call to(), etc.

Note: as in the example above, an __init__() call to the parent class must be made before assignment on the child.

training (bool): whether this module is in training or evaluation mode.
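A minimal usage sketch for PatchTFTSimpleContrastive. The import path, the (batch, channels, sequence_length) input layout, and all hyperparameter values below are assumptions for illustration, not part of this documentation.

    import torch
    # NOTE: placeholder import path; adjust to this package's actual module layout.
    from patch_tft.transformers import PatchTFTSimpleContrastive

    # Illustrative setup: 7 input channels, 256 timesteps split into 16 non-overlapping patches of length 16.
    model = PatchTFTSimpleContrastive(
        c_in=7, patch_size=16, patch_stride=16, num_patches=16,
        d_model=128, n_heads=8, d_ff=256, num_layers=3,
        augmentations=['patch_mask', 'jitter_zero_mask'], mask_ratio=0.1,
    )

    x = torch.randn(32, 7, 256)  # assumed (batch, channels, seq_len) layout
    out = model(x)               # assumed: embeddings of augmented views for contrastive pretraining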
PatchTFTSimpleTimeConstrastive
def PatchTFTSimpleTimeConstrastive(
    c_in:int, # the number of input channels
    patch_size, # the length of each patch in the time domain, or the short-time FT window length (when time_domain=False)
    patch_stride, # the distance between the start of consecutive patches/FFT windows
    num_patches, # the number of patches in the sequence
    max_seq_len, # maximum sequence length
    pos_encoding_type:str='learned', # options include learned or tAPE
    use_revin:bool=True, # if time_domain is true, whether or not to instance normalize time data
    affine:bool=True, # if time_domain is true, whether or not to learn the RevIN normalization parameters
    mask_ratio:tuple=(0.1, 0.5), # the fraction of the signal to mask
    augmentations:list=['patch_mask', 'jitter_zero_mask'], # the type of mask to use, options are patch_mask or jitter_zero_mask
    n_layers:int=2, # the number of transformer encoder layers to use
    d_model:int=512, # the dimension of the input to the transformer encoder
    n_heads:int=2, # the number of heads in each layer
    shared_embedding:bool=False, # whether each channel is projected with its own set of linear weights to the encoder dimension
    d_ff:int=2048, # the feedforward layer size in the transformer
    attn_dropout:float=0.0, # dropout in attention
    dropout:float=0.1, # dropout for linear layers
    act:str='gelu', # activation function
    pre_norm:bool=False, # whether to apply normalization before (pre-norm) rather than after each sublayer
    pretrain:bool=True
):
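A similar construction sketch for PatchTFTSimpleTimeConstrastive; the import path and argument values are illustrative assumptions. Note that mask_ratio is a (low, high) tuple here rather than a single float.

    # NOTE: placeholder import path; adjust to this package's actual module layout.
    from patch_tft.transformers import PatchTFTSimpleTimeConstrastive

    model = PatchTFTSimpleTimeConstrastive(
        c_in=7, patch_size=16, patch_stride=16, num_patches=16, max_seq_len=256,
        pos_encoding_type='tAPE', use_revin=True, affine=True,
        mask_ratio=(0.1, 0.5), augmentations=['patch_mask', 'jitter_zero_mask'],
        n_layers=2, d_model=512, n_heads=2, d_ff=2048,
    )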
Lightning
PatchTFTContrastiveLightning
def PatchTFTContrastiveLightning(
    learning_rate, train_size, batch_size, channels, metrics, loss_func:str='nt_xent', optimizer_type:str='adamw',
    scheduler_type:str='OneCycle', weight_decay:float=0.0, temperature:float=0.2, max_lr:float=0.01, epochs:int=100,
    **patchmeup_kwargs # additional keyword arguments
):
Hooks to be used in LightningModule.
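A sketch of wiring this Lightning wrapper into a standard pytorch_lightning training loop. The import paths, the metrics and datamodule objects, and the assumption that the extra keyword arguments are forwarded to the underlying encoder are all illustrative, not confirmed by this documentation.

    import pytorch_lightning as pl
    # NOTE: placeholder import path; adjust to this package's actual module layout.
    from patch_tft.lightning import PatchTFTContrastiveLightning

    module = PatchTFTContrastiveLightning(
        learning_rate=1e-3, train_size=10_000, batch_size=32, channels=7, metrics=[],
        loss_func='nt_xent', optimizer_type='adamw', scheduler_type='OneCycle',
        temperature=0.2, max_lr=0.01, epochs=100,
        # assumption: remaining keyword arguments configure the torch encoder
        patch_size=16, patch_stride=16, num_patches=16, d_model=128, n_heads=8, d_ff=256, num_layers=3,
    )

    trainer = pl.Trainer(max_epochs=100)
    # trainer.fit(module, datamodule=my_datamodule)  # my_datamodule is user-supplied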
Regress Contrast
Torch
PatchTFTRegressContrast
def PatchTFTRegressContrast(
    c_in:int, # the number of input channels
    win_length, # the length of each patch in the time domain, or the short-time FT window length (when time_domain=False)
    hop_length, # the distance between the start of consecutive patches/FFT windows
    max_seq_len, # maximum sequence length
    use_flash_attn:bool=False, # indicator to use flash attention
    use_revin:bool=True, # if time_domain is true, whether or not to instance normalize time data
    dim1reduce:bool=False, # indicator to normalize by timepoint in RevIN
    affine:bool=True, # if time_domain is true, whether or not to learn the RevIN normalization parameters
    mask_ratio:float=0.1, # the fraction of the signal to mask
    augmentations:list=['patch_mask', 'jitter_zero_mask'], # the type of mask to use, options are patch_mask or jitter_zero_mask
    n_layers:int=2, # the number of transformer encoder layers to use
    d_model:int=512, # the dimension of the input to the transformer encoder
    n_heads:int=2, # the number of heads in each layer
    shared_embedding:bool=False, # whether each channel is projected with its own set of linear weights to the encoder dimension
    d_ff:int=2048, # the feedforward layer size in the transformer
    norm:str='BatchNorm', # BatchNorm or LayerNorm during training
    attn_dropout:float=0.0, # dropout in attention
    dropout:float=0.1, # dropout for linear layers
    act:str='gelu', # activation function
    res_attention:bool=True, # whether to use residual attention
    pre_norm:bool=False, # whether to apply normalization before (pre-norm) rather than after each sublayer
    store_attn:bool=False, # indicator to store attention
):
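A construction sketch for PatchTFTRegressContrast; the import path and argument values are assumptions. Here win_length and hop_length play the roles that patch_size and patch_stride play in the simple encoders above.

    # NOTE: placeholder import path; adjust to this package's actual module layout.
    from patch_tft.transformers import PatchTFTRegressContrast

    model = PatchTFTRegressContrast(
        c_in=7, win_length=64, hop_length=32, max_seq_len=1024,
        use_flash_attn=False, use_revin=True, affine=True,
        mask_ratio=0.1, augmentations=['patch_mask', 'jitter_zero_mask'],
        n_layers=2, d_model=512, n_heads=2, d_ff=2048,
        norm='BatchNorm', dropout=0.1,
    )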
Lightning
PatchTFTRegressContrastLightning
def PatchTFTRegressContrastLightning(
    learning_rate, train_size, batch_size, channels, metrics, use_sequence_padding_mask:bool=False,
    loss_func:str='nt_xent', loss_func_regression:str='MSE', weight_decay:float=0.0, optimizer_type:str='Adam',
    scheduler_type:str='OneCycle', contrast_scaling:float=1.0, temperature:float=0.2, max_lr:float=0.01,
    epochs:int=100, one_cycle_scheduler:bool=True, huber_delta=None, # huber loss delta
    **patchmeup_kwargs # additional keyword arguments
):
Hooks to be used in LightningModule.
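A sketch of the combined regression + contrastive Lightning wrapper. The import path and values are assumptions; in particular, the 'huber' string for loss_func_regression is only inferred from the huber_delta parameter and may differ in the actual implementation.

    # NOTE: placeholder import path; adjust to this package's actual module layout.
    from patch_tft.lightning import PatchTFTRegressContrastLightning

    module = PatchTFTRegressContrastLightning(
        learning_rate=1e-3, train_size=10_000, batch_size=32, channels=7, metrics=[],
        loss_func='nt_xent', loss_func_regression='huber', huber_delta=1.0,
        contrast_scaling=0.5, temperature=0.2, max_lr=0.01, epochs=100,
        # assumption: remaining keyword arguments configure the underlying PatchTFTRegressContrast encoder
        win_length=64, hop_length=32, max_seq_len=1024, d_model=512, n_heads=2,
    )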
Time Frequency Contrast
Torch
PatchTFTContrastive
def PatchTFTContrastive(
    c_in:int, # the number of input channels
    win_length, # the length of each patch in the time domain, or the short-time FT window length (when time_domain=False)
    hop_length, # the distance between the start of consecutive patches/FFT windows
    max_seq_len, # maximum sequence length
    contrast:str='both', # contrast time and frequency (both), just time, or just frequency
    use_revin:bool=True, # if time_domain is true, whether or not to instance normalize time data
    affine:bool=False, # if time_domain is true, whether or not to learn the RevIN normalization parameters
    dim1reduce:bool=False, # indicator to normalize by timepoint in RevIN
    mask_ratio:float=0.1, # the fraction of the signal to mask
    mask_type:str='jitter_zero', # the type of mask to use, options are patch or jitter_zero
    n_layers:int=3, # the number of transformer encoder layers to use
    d_model:int=128, # the dimension of the input to the transformer encoder
    n_heads:int=16, # the number of heads in each layer
    shared_embedding:bool=True, # whether each channel is projected with its own set of linear weights to the encoder dimension
    d_ff:int=256, # the feedforward layer size in the transformer
    norm:str='BatchNorm', # BatchNorm or LayerNorm during training
    attn_dropout:float=0.0, # dropout in attention
    dropout:float=0.0, # dropout for linear layers
    act:str='gelu', # activation function
    res_attention:bool=True, # whether to use residual attention
    pre_norm:bool=False, # whether to apply normalization before (pre-norm) rather than after each sublayer
    store_attn:bool=False, # indicator to store attention
    pre_train:bool=True
):
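A construction sketch for the time/frequency contrastive model; the import path and argument values are illustrative assumptions.

    # NOTE: placeholder import path; adjust to this package's actual module layout.
    from patch_tft.transformers import PatchTFTContrastive

    model = PatchTFTContrastive(
        c_in=7, win_length=64, hop_length=32, max_seq_len=1024,
        contrast='both',             # contrast both the time and frequency views
        use_revin=True, affine=False,
        mask_ratio=0.1, mask_type='jitter_zero',
        n_layers=3, d_model=128, n_heads=16, d_ff=256,
        pre_train=True,
    )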