Baselines

Simple baseline models that often match, and are probably better than, more complex architectures on many time-series tasks.

FCN




def FCN(
    c_in,                         # number of input channels/variables
    layers:list=[128, 256, 128],  # number of filters in each convolutional block
    kernel_sizes:list=[7, 5, 3],  # kernel size of each convolutional block
    n_classes:int=1,              # number of output classes
):

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
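Instantiating the example confirms the registration behavior described above (the class is redefined here so the snippet is self-contained):

```python
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

model = Model()
# both submodules were registered, so their parameters are visible
names = [n for n, _ in model.named_parameters()]
print(names)  # ['conv1.weight', 'conv1.bias', 'conv2.weight', 'conv2.bias']
```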

Submodules assigned in this way will be registered, and will have their parameters converted too when you call .to(), etc.

.. note:: As per the example above, an __init__() call to the parent class must be made before assignment on the child.

:ivar training: Boolean representing whether this module is in training or evaluation mode.
:vartype training: bool
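The FCN implementation itself is not shown on this page. A minimal sketch of a 1D fully convolutional classifier matching the signature above (convolutional blocks with batch norm and ReLU, global average pooling over time, then a linear head; the library's actual implementation may differ) might look like:

```python
import torch
import torch.nn as nn

class FCNSketch(nn.Module):
    # hypothetical re-implementation matching the FCN signature above
    def __init__(self, c_in, layers=[128, 256, 128], kernel_sizes=[7, 5, 3], n_classes=1):
        super().__init__()
        blocks, prev = [], c_in
        for nf, ks in zip(layers, kernel_sizes):
            blocks += [nn.Conv1d(prev, nf, ks, padding=ks // 2),
                       nn.BatchNorm1d(nf), nn.ReLU()]
            prev = nf
        self.encoder = nn.Sequential(*blocks)
        self.head = nn.Linear(prev, n_classes)

    def forward(self, x):             # x: (batch, c_in, seq_len)
        x = self.encoder(x)
        return self.head(x.mean(-1))  # global average pooling over time

model = FCNSketch(c_in=3, n_classes=5)
out = model(torch.randn(8, 3, 50))
print(out.shape)  # torch.Size([8, 5])
```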

General Baseline

Lightning



GeneralTimeSupervised


def GeneralTimeSupervised(
    supervised_model,             # the supervised model to train
    learning_rate,                # desired learning rate; initial learning rate if the one-cycle scheduler is used
    train_size,                   # the training data size (for one_cycle_scheduler=True)
    batch_size,                   # batch size used during training
    n_gpus,                       # number of GPUs used for training
    n_classes:int=1,              # number of output classes
    n_labels:int=1,               # number of labels per sample
    metrics:dict={},              # name:function for metrics to log
    loss_fxn:str='CrossEntropy',  # loss function to use, e.g. 'CrossEntropy'
    gamma:float=2.0,              # focusing parameter for focal loss
    class_weights:NoneType=None,  # class weights to use in the CE loss function
    label_smoothing:int=0,        # label smoothing factor for cross entropy loss
    y_padding_mask:int=-100,      # padding value added to the target; indices with this value are ignored when computing the loss
    epochs:int=100,               # number of epochs (for the one-cycle scheduler)
    optimizer_type:str='AdamW',   # optimizer to use
    scheduler_type:str='OneCycle', # learning rate scheduler to use
    weight_decay:float=0.0,       # weight decay for the AdamW optimizer
    final_weight_decay:float=0.4, # final weight decay for the weight decay scheduler
    use_weight_decay_scheduler:bool=False, # whether to use a weight decay scheduler
    transforms:NoneType=None,     # transforms to apply to the data
    mixup_callback:NoneType=None, # mixup callback to apply to the data
    scheduler_kwargs:dict={},     # extra kwargs passed to the scheduler
):
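When scheduler_type='OneCycle', the schedule presumably relies on torch's OneCycleLR, whose step count is derived from train_size, batch_size, and epochs. A hedged sketch of that wiring (the stand-in model and hyperparameter values here are illustrative):

```python
import torch

model = torch.nn.Linear(10, 2)              # stand-in for supervised_model
learning_rate, train_size, batch_size, epochs = 1e-3, 1000, 32, 100

optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate, weight_decay=0.0)
steps_per_epoch = train_size // batch_size  # optimizer steps per epoch
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=learning_rate,
    steps_per_epoch=steps_per_epoch, epochs=epochs)

# the learning rate starts below max_lr and warms up toward it
lr_start = scheduler.get_last_lr()[0]
optimizer.step()
scheduler.step()
lr_next = scheduler.get_last_lr()[0]
```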

Hooks to be used in a LightningModule.
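The gamma parameter above suggests a focal loss option, which down-weights easy examples by scaling the per-sample cross entropy by (1 - p_t)^gamma. A minimal sketch of that loss (the library's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0):
    # per-sample cross entropy; the true-class probability p_t is exp(-CE)
    ce = F.cross_entropy(logits, target, reduction='none')
    pt = torch.exp(-ce)
    return ((1 - pt) ** gamma * ce).mean()  # down-weight confident predictions

logits = torch.randn(16, 4)
target = torch.randint(0, 4, (16,))
# with gamma=0 the focal loss reduces to plain cross entropy
print(torch.allclose(focal_loss(logits, target, gamma=0.0),
                     F.cross_entropy(logits, target)))  # True
```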