When fine-tuning in transfer learning, we usually need to freeze the parameters of the first few layers so that they do not participate in training. In PyTorch this can be implemented as follows:
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear1 = nn.Linear(20, 50)
        self.linear2 = nn.Linear(50, 20)
        self.linear3 = nn.Linear(20, 2)

    def forward(self, x):
        # chain the three linear layers (20 -> 50 -> 20 -> 2)
        x = self.linear1(x)
        x = self.linear2(x)
        return self.linear3(x)
Suppose we want to freeze the linear1 layer. We need to do the following:
model = Model()
# set requires_grad to False for every parameter of linear1,
# so no gradients are computed or applied for this layer
for param in model.linear1.parameters():
    param.requires_grad = False
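As a complementary note (a minimal sketch, not part of the original snippet): when building the optimizer, it is common practice to pass in only the parameters that still require gradients, so the frozen ones are excluded entirely. The optimizer choice and learning rate below are placeholder assumptions.

import torch.optim as optim

# hand the optimizer only the still-trainable parameters;
# SGD and lr=0.01 are placeholder choices
optimizer = optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()),
    lr=0.01,
)

# sanity check: linear1's parameters should report requires_grad=False
for name, param in model.named_parameters():
    print(name, param.requires_grad)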