PyTorch Logistic Regression

Logistic Regression

Regression models are among the basic models in machine learning. Last time I practiced linear regression, so this time let's talk about Logistic Regression.
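Since MNIST has ten classes, what this post actually implements is the multi-class generalization of logistic regression, often called softmax regression. As a quick sketch of the model the code below builds (the single `Linear` layer provides the weights $W$ and bias $b$):

$$\hat{y} = \mathrm{softmax}(Wx + b), \qquad \mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_j e^{z_j}}$$

and the cross-entropy loss used below is $\mathcal{L} = -\log \hat{y}_c$, where $c$ is the true class.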

PyTorch Implementation

```python
import torch
import numpy as np
import torch.nn.functional as F
from torchvision import datasets, transforms
import matplotlib.pyplot as plt


in_dim = 28 * 28   # each MNIST image is 28x28 pixels, flattened to 784 features
out_class = 10     # ten digit classes
batch_size = 64
# number of training epochs
epochs_num = 100


class LogisticRegression(torch.nn.Module):
    def __init__(self):
        super(LogisticRegression, self).__init__()
        self.logistic = torch.nn.Linear(in_dim, out_class)

    def forward(self, x):
        # return raw logits; CrossEntropyLoss applies log-softmax internally
        return self.logistic(x)

model = LogisticRegression()

# cross-entropy loss
loss_function = torch.nn.CrossEntropyLoss()

# stochastic gradient descent, learning rate 1e-3
#optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# an alternative optimizer: Adam
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)


def train(model, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(data.size(0), -1)  # flatten images to (batch, 784)
        output = model(data)
        loss = loss_function(output, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if batch_idx % 100 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))

def test(model, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data = data.view(data.size(0), -1)
            output = model(data)
            # the criterion returns the batch mean, so scale by the batch size
            # to accumulate a sum before averaging over the whole dataset
            test_loss += loss_function(output, target).item() * data.size(0)
            pred = output.argmax(dim=1, keepdim=True)  # index of the max logit
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)

    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))


train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('./data', train=True, download=True,
                   transform=transforms.ToTensor()),
    batch_size=batch_size, shuffle=True)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('./data', train=False, transform=transforms.ToTensor()),
    batch_size=batch_size, shuffle=False)  # no need to shuffle the test set


for epoch in range(epochs_num):
    train(model, train_loader, optimizer, epoch)
    test(model, test_loader)
```

You can tweak the learning rate, the number of epochs, and the optimization method, and see how each change affects the results (a sketch of such a sweep follows the list below):

  • With SGD at a learning rate of 1e-3, convergence is somewhat slow; it took 73 epochs before the accuracy finally stabilized at $91\%$;
  • With Adam at the same learning rate of 1e-3, convergence is very fast; a single epoch reaches that level, and afterwards the accuracy just hovers around it;
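A minimal sketch of such a sweep, reusing the model, loaders, and training functions defined above (the learning-rate combinations listed are just illustrative choices, not the exact settings from my notes):

```python
# Hypothetical sweep over optimizer/learning-rate combinations;
# the values below are illustrative examples.
for opt_name, lr in [('sgd', 1e-3), ('sgd', 1e-2), ('adam', 1e-3)]:
    model = LogisticRegression()  # fresh parameters for each run
    if opt_name == 'sgd':
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    else:
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    print('--- {} with lr={} ---'.format(opt_name, lr))
    for epoch in range(epochs_num):
        train(model, train_loader, optimizer, epoch)
        test(model, test_loader)
```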

A simple regression model is somewhat underpowered for multi-class classification. Later I will practice logistic regression again on a binary classification task, and will also train on the MNIST dataset with a neural network.
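As a preview of the binary case, here is a minimal sketch on synthetic data (a toy example of my own, not from this post's repository): the model outputs a single logit, and `torch.nn.BCEWithLogitsLoss` combines the sigmoid with the binary cross-entropy.

```python
# Minimal binary logistic regression on synthetic data (illustrative only;
# reuses the `torch` import from above).
x = torch.randn(100, 2)                           # 100 samples, 2 features
y = (x[:, 0] + x[:, 1] > 0).float().view(-1, 1)   # synthetic binary labels

binary_model = torch.nn.Linear(2, 1)              # single-logit output
criterion = torch.nn.BCEWithLogitsLoss()          # sigmoid + binary cross-entropy
opt = torch.optim.Adam(binary_model.parameters(), lr=1e-2)

for step in range(200):
    loss = criterion(binary_model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

pred = (torch.sigmoid(binary_model(x)) > 0.5).float()  # hard class predictions
```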

The results after training with SGD for 100 epochs are shown in the figure below:

[Figure: test results after 100 epochs of SGD training]

The code is also available in my GitHub repository LogisticRegression. If you find any mistakes, feel free to point them out.