SSL, Fine-Tuning, and Linear Probing Heads
AttentiveClassifier
def AttentiveClassifier(
embed_dim:int=768, num_heads:int=12, mlp_ratio:float=4.0, depth:int=1, norm_layer:type=LayerNorm,
init_std:float=0.02, qkv_bias:bool=True, num_classes:int=1000, complete_block:bool=True, num_queries:int=1,
affine:bool=False, c_in:int=7
):
Attentive classifier: an AttentivePooler followed by a linear head that maps the pooled representation to num_classes logits.
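A minimal usage sketch. The forward signature is an assumption (token embeddings of shape (batch, seq_len, embed_dim) in, logits out), and the import path depends on the package layout:

import torch

# Assumed: AttentiveClassifier is importable from this module.
head = AttentiveClassifier(embed_dim=768, num_heads=12, num_classes=1000)

tokens = torch.randn(8, 196, 768)  # e.g. patch tokens from a frozen encoder
logits = head(tokens)              # expected shape: (8, 1000)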
AttentivePooler
def AttentivePooler(
num_queries:int=1, embed_dim:int=768, num_heads:int=12, mlp_ratio:float=4.0, depth:int=1,
norm_layer:type=LayerNorm, init_std:float=0.02, qkv_bias:bool=True, complete_block:bool=True
):
Attentive pooler: num_queries learnable query tokens cross-attend to the input token sequence, producing a fixed-size pooled representation regardless of sequence length; depth stacks additional blocks.
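To make the pooling contract concrete, a shape sketch; the shapes are inferred from the parameters above, not taken from the source:

import torch

pooler = AttentivePooler(num_queries=1, embed_dim=768, num_heads=12, depth=1)

x = torch.randn(4, 196, 768)  # (batch, seq_len, embed_dim)
pooled = pooler(x)            # expected: (4, 1, 768), one vector per query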
CrossAttentionBlock
def CrossAttentionBlock(
dim:int, num_heads:int, mlp_ratio:float=4.0, qkv_bias:bool=False, act_layer:type=GELU, norm_layer:type=LayerNorm
):
Cross-attention transformer block: a CrossAttention layer followed by an MLP with hidden size dim * mlp_ratio, each applied with normalization and a residual connection on the query path.
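For intuition, an illustrative re-implementation of such a block built on torch.nn.MultiheadAttention. This is a sketch of the standard pre-norm pattern under the parameters above, not the package's code:

import torch
import torch.nn as nn

class CrossAttentionBlockSketch(nn.Module):
    # Illustrative sketch only: pre-norm cross-attention followed by a
    # pre-norm MLP, each with a residual add on the query path.
    def __init__(self, dim, num_heads, mlp_ratio=4.0, qkv_bias=False,
                 act_layer=nn.GELU, norm_layer=nn.LayerNorm):
        super().__init__()
        self.norm_q = norm_layer(dim)
        self.norm_kv = norm_layer(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, bias=qkv_bias,
                                          batch_first=True)
        self.norm_mlp = norm_layer(dim)
        hidden = int(dim * mlp_ratio)
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), act_layer(),
                                 nn.Linear(hidden, dim))

    def forward(self, q, kv):
        # queries attend to the key/value sequence
        kv_n = self.norm_kv(kv)
        attn_out, _ = self.attn(self.norm_q(q), kv_n, kv_n, need_weights=False)
        q = q + attn_out
        return q + self.mlp(self.norm_mlp(q))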
CrossAttention
def CrossAttention(
dim:int, num_heads:int=12, qkv_bias:bool=False
):
Multi-head cross-attention layer: the query sequence attends to a separate key/value sequence, split across num_heads heads.
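A sketch of what a layer with this signature typically computes, written with explicit q/kv projections and scaled dot-product attention; again an illustration under assumed shapes, not the package's implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossAttentionSketch(nn.Module):
    # Illustrative sketch only: queries come from one sequence,
    # keys/values from another.
    def __init__(self, dim, num_heads=12, qkv_bias=False):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.q = nn.Linear(dim, dim, bias=qkv_bias)
        self.kv = nn.Linear(dim, dim * 2, bias=qkv_bias)
        self.proj = nn.Linear(dim, dim)

    def forward(self, q, x):
        B, Nq, D = q.shape
        Nx = x.shape[1]
        # project, then split into heads: (B, num_heads, N, head_dim)
        q = self.q(q).view(B, Nq, self.num_heads, self.head_dim).transpose(1, 2)
        kv = self.kv(x).view(B, Nx, 2, self.num_heads, self.head_dim)
        k, v = kv.permute(2, 0, 3, 1, 4)
        # softmax(q k^T / sqrt(head_dim)) v, per head
        out = F.scaled_dot_product_attention(q, k, v)
        return self.proj(out.transpose(1, 2).reshape(B, Nq, D))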
AttentiveClassifierNoMelt
def AttentiveClassifierNoMelt(
embed_dim:int=768, num_heads:int=12, mlp_ratio:float=4.0, depth:int=1, norm_layer:type=LayerNorm,
init_std:float=0.02, qkv_bias:bool=True, num_classes:int=1000, complete_block:bool=True, num_queries:int=1,
affine:bool=False, c_in:int=7
):
Attentive classifier (NoMelt variant); takes the same arguments as AttentiveClassifier above.
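Since these heads are typically trained on top of a frozen SSL encoder, here is a hedged probing-loop sketch; `encoder` and `train_loader` are placeholders for a frozen backbone returning (batch, seq_len, embed_dim) token embeddings and a DataLoader of (inputs, labels):

import torch
import torch.nn.functional as F

# Placeholder backbone: freeze all encoder parameters.
encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False

head = AttentiveClassifierNoMelt(embed_dim=768, num_classes=1000, c_in=7)
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)

for xb, yb in train_loader:
    with torch.no_grad():
        feats = encoder(xb)                  # frozen features
    loss = F.cross_entropy(head(feats), yb)  # head(feats) -> logits
    opt.zero_grad()
    loss.backward()
    opt.step()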