linear_regression_rainfall_builtins.Rmd
Source: https://medium.com/dsnet/linear-regression-with-pytorch-3dde91d60b50
Original title: Linear Regression and Gradient Descent from scratch in PyTorch
Let’s re-implement the same model using some built-in functions and classes from PyTorch.
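The chunks below assume that the Python modules have already been bound through reticulate; a minimal setup sketch (not shown in the original, names assumed) might look like this:
# Assumed setup: bind the Python modules used below via reticulate
library(reticulate)                       # also provides iterate(), iter_next(), import_builtins()
torch = import("torch")
np    = import("numpy")
# Convenience handles for the PyTorch data utilities (assumed aliases)
torch_data    = import("torch.utils.data")
TensorDataset = torch_data$TensorDataset
DataLoader    = torch_data$DataLoader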
# Input (temp, rainfall, humidity)
inputs = np$array(list(
list(73, 67, 43),
list(91, 88, 64),
list(87, 134, 58),
list(102, 43, 37),
list(69, 96, 70),
list(73, 67, 43),
list(91, 88, 64),
list(87, 134, 58),
list(102, 43, 37),
list(69, 96, 70),
list(73, 67, 43),
list(91, 88, 64),
list(87, 134, 58),
list(102, 43, 37),
list(69, 96, 70)
), dtype='float32')
# Targets (apples, oranges)
targets = np$array(list(
list(56, 70),
list(81, 101),
list(119, 133),
list(22, 37),
list(103, 119),
list(56, 70),
list(81, 101),
list(119, 133),
list(22, 37),
list(103, 119),
list(56, 70),
list(81, 101),
list(119, 133),
list(22, 37),
list(103, 119)
), dtype='float32')
We’ll create a TensorDataset, which allows access to rows from inputs and targets as tuples. We’ll also create a DataLoader to split the data into batches while training; it also provides other utilities such as shuffling and sampling.
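TensorDataset expects torch tensors rather than NumPy arrays, so the arrays created above are converted first (an assumed step, implied by the tensor output shown below):
# Convert the NumPy arrays to torch tensors (assumed step)
inputs  = torch$from_numpy(inputs)
targets = torch$from_numpy(targets)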
# Define dataset
train_ds = TensorDataset(inputs, targets)
train_ds$tensors[1:2]
#> [[1]]
#> tensor([[ 73., 67., 43.],
#> [ 91., 88., 64.],
#> [ 87., 134., 58.],
#> [102., 43., 37.],
#> [ 69., 96., 70.],
#> [ 73., 67., 43.],
#> [ 91., 88., 64.],
#> [ 87., 134., 58.],
#> [102., 43., 37.],
#> [ 69., 96., 70.],
#> [ 73., 67., 43.],
#> [ 91., 88., 64.],
#> [ 87., 134., 58.],
#> [102., 43., 37.],
#> [ 69., 96., 70.]])
#>
#> [[2]]
#> tensor([[ 56., 70.],
#> [ 81., 101.],
#> [119., 133.],
#> [ 22., 37.],
#> [103., 119.],
#> [ 56., 70.],
#> [ 81., 101.],
#> [119., 133.],
#> [ 22., 37.],
#> [103., 119.],
#> [ 56., 70.],
#> [ 81., 101.],
#> [119., 133.],
#> [ 22., 37.],
#> [103., 119.]])
# Define data loader
batch_size = 5L
train_dl = DataLoader(train_ds, batch_size, shuffle = TRUE)
iter_next(import_builtins()$iter(train_dl))
#> [[1]]
#> tensor([[102., 43., 37.],
#> [102., 43., 37.],
#> [ 69., 96., 70.],
#> [ 91., 88., 64.],
#> [ 73., 67., 43.]])
#>
#> [[2]]
#> tensor([[ 22., 37.],
#> [ 22., 37.],
#> [103., 119.],
#> [ 81., 101.],
#> [ 56., 70.]])
nn.Linear
Instead of initializing the weights and biases manually, we can define the model using nn.Linear. Instead of manually manipulating the weights and biases with gradients, we can use the optimizer optim.SGD. And instead of defining a loss function by hand, we can use the built-in loss function mse_loss.
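The three objects used by the training code below (model, opt, and loss_fn) are not defined in this excerpt; a minimal sketch, following the same reticulate-style calls as the rest of the document, could be:
# Define the model: a single linear layer mapping 3 inputs to 2 outputs (assumed definition)
model = torch$nn$Linear(3L, 2L)
# Define the optimizer: stochastic gradient descent over the model parameters (assumed)
opt = torch$optim$SGD(model$parameters(), lr = 1e-5)
# Define the loss function: the built-in mean squared error (assumed)
loss_fn = torch$nn$functional$mse_loss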
We are ready to train the model now. We can define a utility function fit which trains the model for a given number of epochs.
fit <- function(num_epochs, model, loss_fn, opt) {
  for (epoch in 1:num_epochs) {
    for (xy in iterate(train_dl)) {
      # Generate predictions
      xb <- xy[[1]]; yb <- xy[[2]]
      pred <- model(xb)
      loss <- loss_fn(pred, yb)
      # Perform gradient descent
      loss$backward()
      opt$step()
      opt$zero_grad()
    }
  }
  cat('Training loss: ')
  print(loss_fn(model(inputs), targets))
}
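For comparison, the equivalent Python version of fit from the original article reads: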
# Define a utility function to train the model
def fit(num_epochs, model, loss_fn, opt):
    for epoch in range(num_epochs):
        for xb, yb in train_dl:
            # Generate predictions
            pred = model(xb)
            loss = loss_fn(pred, yb)
            # Perform gradient descent
            loss.backward()
            opt.step()
            opt.zero_grad()
    print('Training loss: ', loss_fn(model(inputs), targets))
# Train the model for 100 epochs
fit(100, model, loss_fn, opt)
#> Training loss: tensor(19.0913, grad_fn=<MseLossBackward>)
# Generate predictions
preds = model(inputs)
preds
#> tensor([[ 58.7359, 71.2371],
#> [ 81.2380, 99.4881],
#> [118.5270, 134.1934],
#> [ 29.9031, 42.2584],
#> [ 95.0715, 114.0064],
#> [ 58.7359, 71.2371],
#> [ 81.2380, 99.4881],
#> [118.5270, 134.1934],
#> [ 29.9031, 42.2584],
#> [ 95.0715, 114.0064],
#> [ 58.7359, 71.2371],
#> [ 81.2380, 99.4881],
#> [118.5270, 134.1934],
#> [ 29.9031, 42.2584],
#> [ 95.0715, 114.0064]], grad_fn=<AddmmBackward>)
# Compare with targets
targets
#> tensor([[ 56., 70.],
#> [ 81., 101.],
#> [119., 133.],
#> [ 22., 37.],
#> [103., 119.],
#> [ 56., 70.],
#> [ 81., 101.],
#> [119., 133.],
#> [ 22., 37.],
#> [103., 119.],
#> [ 56., 70.],
#> [ 81., 101.],
#> [119., 133.],
#> [ 22., 37.],
#> [103., 119.]])
The remainder of this document revisits the same model built from scratch, without the nn.Linear, optim.SGD, and mse_loss helpers. The weights and biases can be represented as matrices, initialized with random values. The first row of \(w\) and the first element of \(b\) are used to predict the first target variable, i.e. the yield of apples; similarly, the second row and element are used for oranges.
# Random numbers for weights and biases; set the default dtype to double
torch$set_default_dtype(torch$double)
w = torch$randn(2L, 3L, requires_grad=TRUE)
b = torch$randn(2L, requires_grad=TRUE)
print(w)
#> tensor([[ 0.6026, -0.8059, 0.0384],
#> [ 0.4729, 0.0350, -0.5002]], requires_grad=True)
print(b)
#> tensor([0.1000, 0.5782], requires_grad=True)
The model is simply a function that performs a matrix multiplication of the input \(x\) and the weights \(w\) (transposed), and adds the bias \(b\) (replicated for each observation).
The matrix obtained by passing the input data to the model is a set of predictions for the target variables.
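The manual model function itself does not appear in this excerpt; a minimal sketch matching the description above would be:
# Manual model (assumed definition): multiply the inputs by the transposed weights and add the bias
model = function(x) {
  torch$add(torch$mm(x, w$t()), b)
}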
# Generate predictions
preds = model(inputs)
print(preds)
#> tensor([[ -8.2532, 15.9353],
#> [-13.5234, 14.6778],
#> [-53.2340, 17.3983],
#> [ 28.3319, 31.8103],
#> [-32.9966, 1.5528],
#> [ -8.2532, 15.9353],
#> [-13.5234, 14.6778],
#> [-53.2340, 17.3983],
#> [ 28.3319, 31.8103],
#> [-32.9966, 1.5528],
#> [ -8.2532, 15.9353],
#> [-13.5234, 14.6778],
#> [-53.2340, 17.3983],
#> [ 28.3319, 31.8103],
#> [-32.9966, 1.5528]], grad_fn=<AddBackward0>)
# Compare with targets
print(targets)
#> tensor([[ 56., 70.],
#> [ 81., 101.],
#> [119., 133.],
#> [ 22., 37.],
#> [103., 119.],
#> [ 56., 70.],
#> [ 81., 101.],
#> [119., 133.],
#> [ 22., 37.],
#> [103., 119.],
#> [ 56., 70.],
#> [ 81., 101.],
#> [119., 133.],
#> [ 22., 37.],
#> [103., 119.]])
Because we’ve started with random weights and biases, the model does not do a very good job of predicting the target variables.
We can compare the predictions with the actual targets using the following method: calculate the difference between the two matrices (preds and targets), square all elements of the difference matrix to remove negative values, and take the average of the elements in the resulting matrix. The result is a single number, known as the mean squared error (MSE).
# MSE loss
mse = function(t1, t2) {
  diff <- torch$sub(t1, t2)                  # element-wise difference
  mul <- torch$sum(torch$mul(diff, diff))    # sum of squared differences
  return(torch$div(mul, diff$numel()))       # mean: divide by the number of elements
}
# Compute loss
loss = mse(preds, targets)
print(loss)
#> tensor(9882.1883, grad_fn=<DivBackward0>)
The resulting number is called the loss, because it indicates how bad the model is at predicting the target variables. The lower the loss, the better the model.
With PyTorch, we can automatically compute the gradient, or derivative, of the loss w.r.t. the weights and biases, because they have requires_grad set to True.
The gradients are stored in the .grad property of the respective tensors.
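For the .grad fields shown below to be populated, backward() must first have been called on the loss; the same call appears again in the full sequence further down.
# Compute gradients of the loss w.r.t. every tensor that has requires_grad=TRUE
loss$backward()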
# Gradients for weights
print(w)
#> tensor([[ 0.6026, -0.8059, 0.0384],
#> [ 0.4729, 0.0350, -0.5002]], requires_grad=True)
print(w$grad)
#> tensor([[-7402.8766, -9697.1557, -5617.4876],
#> [-6098.5171, -7641.4787, -4593.5236]])
# Gradients for bias
print(b)
#> tensor([0.1000, 0.5782], requires_grad=True)
print(b$grad)
#> tensor([-92.1351, -75.7251])
A key insight from calculus is that the gradient indicates the rate of change of the loss, i.e. the slope of the loss function w.r.t. the weights and biases. If a gradient element is positive, increasing that weight slightly increases the loss and decreasing it lowers the loss; if the gradient element is negative, the opposite holds. The increase or decrease is proportional to the value of the gradient.
Finally, we’ll reset the gradients to zero before moving forward, because PyTorch accumulates gradients.
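The reset is a pair of in-place calls, the same ones used inside the training loop below:
# Reset the accumulated gradients to zero in place
w$grad$zero_()
b$grad$zero_()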
We’ll reduce the loss and improve our model using the gradient descent algorithm, which has the following steps:

1. Generate predictions
2. Calculate the loss
3. Compute gradients w.r.t. the weights and biases
4. Adjust the weights by subtracting a small quantity proportional to the gradient
5. Reset the gradients to zero
# Generate predictions
preds = model(inputs)
print(preds)
#> tensor([[ -8.2532, 15.9353],
#> [-13.5234, 14.6778],
#> [-53.2340, 17.3983],
#> [ 28.3319, 31.8103],
#> [-32.9966, 1.5528],
#> [ -8.2532, 15.9353],
#> [-13.5234, 14.6778],
#> [-53.2340, 17.3983],
#> [ 28.3319, 31.8103],
#> [-32.9966, 1.5528],
#> [ -8.2532, 15.9353],
#> [-13.5234, 14.6778],
#> [-53.2340, 17.3983],
#> [ 28.3319, 31.8103],
#> [-32.9966, 1.5528]], grad_fn=<AddBackward0>)
# Calculate the loss
loss = mse(preds, targets)
print(loss)
#> tensor(9882.1883, grad_fn=<DivBackward0>)
# Compute gradients
loss$backward()
print(w$grad)
#> tensor([[-7402.8766, -9697.1557, -5617.4876],
#> [-6098.5171, -7641.4787, -4593.5236]])
print(b$grad)
#> tensor([-92.1351, -75.7251])
# Adjust weights and reset gradients
with(torch$no_grad(), {
  print(w); print(b)  # the requires_grad attribute remains
  w$data <- torch$sub(w$data, torch$mul(w$grad$data, torch$scalar_tensor(1e-5)))
  b$data <- torch$sub(b$data, torch$mul(b$grad$data, torch$scalar_tensor(1e-5)))
  print(w$grad$data$zero_())
  print(b$grad$data$zero_())
})
#> tensor([[ 0.6026, -0.8059, 0.0384],
#> [ 0.4729, 0.0350, -0.5002]], requires_grad=True)
#> tensor([0.1000, 0.5782], requires_grad=True)
#> tensor([[0., 0., 0.],
#> [0., 0., 0.]])
#> tensor([0., 0.])
print(w)
#> tensor([[ 0.6766, -0.7089, 0.0946],
#> [ 0.5339, 0.1114, -0.4543]], requires_grad=True)
print(b)
#> tensor([0.1009, 0.5789], requires_grad=True)
With the new weights and biases, the model should have a lower loss.
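We can verify this by recomputing the loss with the updated parameters (a quick check; output not reproduced here):
# Recompute predictions and loss after one gradient-descent step
preds = model(inputs)
loss = mse(preds, targets)
print(loss)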
To reduce the loss further, we repeat the process of adjusting the weights and biases using the gradients multiple times. Each iteration is called an epoch.
# Running all together
# Adjust weights and reset gradients
for (i in 1:100) {
  preds = model(inputs)
  loss = mse(preds, targets)
  loss$backward()
  with(torch$no_grad(), {
    w$data <- torch$sub(w$data, torch$mul(w$grad, torch$scalar_tensor(1e-5)))
    b$data <- torch$sub(b$data, torch$mul(b$grad, torch$scalar_tensor(1e-5)))
    w$grad$zero_()
    b$grad$zero_()
  })
}
# Calculate loss
preds = model(inputs)
loss = mse(preds, targets)
print(loss)
#> tensor(492.6063, grad_fn=<DivBackward0>)
# predictions
preds
#> tensor([[ 65.1631, 76.2174],
#> [ 86.9079, 97.1305],
#> [ 95.1439, 131.5121],
#> [ 67.3979, 70.0327],
#> [ 83.0900, 93.8266],
#> [ 65.1631, 76.2174],
#> [ 86.9079, 97.1305],
#> [ 95.1439, 131.5121],
#> [ 67.3979, 70.0327],
#> [ 83.0900, 93.8266],
#> [ 65.1631, 76.2174],
#> [ 86.9079, 97.1305],
#> [ 95.1439, 131.5121],
#> [ 67.3979, 70.0327],
#> [ 83.0900, 93.8266]], grad_fn=<AddBackward0>)
# Targets
targets
#> tensor([[ 56., 70.],
#> [ 81., 101.],
#> [119., 133.],
#> [ 22., 37.],
#> [103., 119.],
#> [ 56., 70.],
#> [ 81., 101.],
#> [119., 133.],
#> [ 22., 37.],
#> [103., 119.],
#> [ 56., 70.],
#> [ 81., 101.],
#> [119., 133.],
#> [ 22., 37.],
#> [103., 119.]])