Chapter 11 Linear Regression

Last update: Thu Oct 22 16:46:28 2020 -0500 (54a46ea04)

11.1 Rainfall prediction

11.2 Select the device: CPU or GPU

11.3 Convert arrays to tensors

Before we build a model, we need to convert inputs and targets to PyTorch tensors.

#> tensor([[ 73.,  67.,  43.],
#>         [ 91.,  88.,  64.],
#>         [ 87., 134.,  58.],
#>         [102.,  43.,  37.],
#>         [ 69.,  96.,  70.]], dtype=torch.float64)
#> tensor([[ 56.,  70.],
#>         [ 81., 101.],
#>         [119., 133.],
#>         [ 22.,  37.],
#>         [103., 119.]], dtype=torch.float64)

The weights and biases can also be represented as matrices, initialized with random values. The first row of \(w\) and the first element of \(b\) are used to predict the first target variable, i.e. the yield of apples; the second row and element, similarly, predict the yield of oranges.

#> tensor([[ 1.5410, -0.2934, -2.1788],
#>         [ 0.5684, -1.0845, -1.3986]], requires_grad=True)
#> tensor([0.4033, 0.8380], requires_grad=True)

11.4 Build the model

The model is simply a function that performs a matrix multiplication of the input \(x\) and the weights \(w\) (transposed), and adds the bias \(b\) (replicated for each observation).
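As a sketch, the same computation in plain Python (the chapter itself uses rTorch; the function and variable names here are illustrative):

```python
def model(x, w, b):
    # x: n x 3 inputs, w: 2 x 3 weights, b: length-2 bias
    # returns x @ t(w) + b, i.e. one row of 2 predictions per observation
    return [[sum(xi * wi for xi, wi in zip(row, w_row)) + b_j
             for w_row, b_j in zip(w, b)]
            for row in x]

# One observation: temperature 73, rainfall 67, humidity 43
x = [[73.0, 67.0, 43.0]]
w = [[1.0, 0.0, 0.0],   # row 1: weights for the first target (apples)
     [0.0, 1.0, 0.0]]   # row 2: weights for the second target (oranges)
b = [0.0, 0.0]
print(model(x, w, b))   # [[73.0, 67.0]]
```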

11.5 Generate predictions

The matrix obtained by passing the input data to the model is a set of predictions for the target variables.

#> tensor([[  -0.4516,  -90.4691],
#>         [ -24.6303, -132.3828],
#>         [ -31.2192, -176.1530],
#>         [  64.3523,  -39.5645],
#>         [ -73.9524, -161.9560]], grad_fn=<AddBackward0>)
#> tensor([[ 56.,  70.],
#>         [ 81., 101.],
#>         [119., 133.],
#>         [ 22.,  37.],
#>         [103., 119.]])

Because we’ve started with random weights and biases, the model does not do a very good job of predicting the target variables.

11.6 Loss Function

We can compare the predictions with the actual targets using the following method:

  • Calculate the difference between the two matrices (preds and targets).
  • Square all elements of the difference matrix to remove negative values.
  • Calculate the average of the elements in the resulting matrix.

The result is a single number, known as the mean squared error (MSE).

#> function(t1, t2) {
#>   diff <- torch$sub(t1, t2)
#>   mul <- torch$sum(torch$mul(diff, diff))
#>   return(torch$div(mul, diff$numel()))
#> }
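For comparison, the same three steps can be sketched in plain Python (illustrative only; the chapter's own version above uses rTorch tensor operations):

```python
def mse(preds, targets):
    # 1. difference, 2. square, 3. average over all elements
    sq = [(p - t) ** 2
          for p_row, t_row in zip(preds, targets)
          for p, t in zip(p_row, t_row)]
    return sum(sq) / len(sq)

print(mse([[1.0, 2.0]], [[3.0, 2.0]]))  # 2.0: ((1-3)^2 + (2-2)^2) / 2
```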

11.7 Step by step process

11.7.1 Compute the losses

#> tensor(33060.8053, grad_fn=<DivBackward0>)

The resulting number is called the loss, because it indicates how bad the model is at predicting the target variables. The lower the loss, the better the model.

11.7.2 Compute Gradients

With PyTorch, we can automatically compute the gradient, or derivative, of the loss w.r.t. the weights and biases, because they have requires_grad set to True.

The gradients are stored in the .grad property of the respective tensors.

#> tensor([[ 1.5410, -0.2934, -2.1788],
#>         [ 0.5684, -1.0845, -1.3986]], requires_grad=True)
#> tensor([[ -6938.4351,  -9674.6757,  -5744.0206],
#>         [-17408.7861, -20595.9333, -12453.4702]])
#> tensor([0.4033, 0.8380], requires_grad=True)
#> tensor([ -89.3802, -212.1051])
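For MSE these gradients also have a closed form, which is what autograd computed above: with diff = preds - targets and N the total number of target elements, dL/dw = (2/N) t(diff) x and dL/db = (2/N) times the column sums of diff. A plain-Python sketch with illustrative names:

```python
def mse_grads(x, diff, n_elems):
    # dL/dw[j][k] = (2/N) * sum_i diff[i][j] * x[i][k]
    # dL/db[j]    = (2/N) * sum_i diff[i][j]
    n_out, n_feat = len(diff[0]), len(x[0])
    dw = [[2.0 / n_elems * sum(diff[i][j] * x[i][k] for i in range(len(x)))
           for k in range(n_feat)]
          for j in range(n_out)]
    db = [2.0 / n_elems * sum(diff[i][j] for i in range(len(x)))
          for j in range(n_out)]
    return dw, db

# tiny check: one observation, diff = preds - targets = [2, -1], N = 2
dw, db = mse_grads([[1.0, 2.0, 3.0]], [[2.0, -1.0]], 2)
print(dw)  # [[2.0, 4.0, 6.0], [-1.0, -2.0, -3.0]]
print(db)  # [2.0, -1.0]
```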

A key insight from calculus is that the gradient indicates the rate of change of the loss, or the slope of the loss function w.r.t. the weights and biases.

  • If a gradient element is positive:
    • increasing the element’s value slightly will increase the loss.
    • decreasing the element’s value slightly will decrease the loss.
  • If a gradient element is negative:
    • increasing the element’s value slightly will decrease the loss.
    • decreasing the element’s value slightly will increase the loss.

The increase or decrease is proportional to the value of the gradient.
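This sign behaviour is easy to verify on a one-parameter toy loss, f(w) = (wx - t)^2 (a hypothetical example, not the chapter's model):

```python
def loss(w, x=2.0, t=10.0):
    # squared error of a one-weight model
    return (w * x - t) ** 2

w = 3.0
grad = 2 * (w * 2.0 - 10.0) * 2.0    # analytic dL/dw = -16.0, negative
print(loss(w + 0.01) < loss(w))      # True: increasing w decreases the loss
print(loss(w - 0.01) > loss(w))      # True: decreasing w increases the loss
```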

11.7.3 Reset the gradients

Finally, we’ll reset the gradients to zero before moving forward, because PyTorch accumulates gradients.

#> tensor([[0., 0., 0.],
#>         [0., 0., 0.]])
#> tensor([0., 0.])
#> tensor([[0., 0., 0.],
#>         [0., 0., 0.]])
#> tensor([0., 0.])

11.7.3.1 Adjust weights and biases

We’ll reduce the loss and improve our model using the gradient descent algorithm, which has the following steps:

  1. Generate predictions
  2. Calculate the loss
  3. Compute gradients w.r.t the weights and biases
  4. Adjust the weights by subtracting a small quantity proportional to the gradient
  5. Reset the gradients to zero
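Step 4 is a plain elementwise update, w <- w - lr * grad. With a learning rate of 1e-5 it reproduces the move of the first weight from 1.5410 to about 1.6104 seen in the output below; a sketch with illustrative names:

```python
def sgd_step(params, grads, lr=1e-5):
    # subtract a small quantity proportional to the gradient
    return [[p - lr * g for p, g in zip(p_row, g_row)]
            for p_row, g_row in zip(params, grads)]

w_row = [[1.5410, -0.2934, -2.1788]]
grad_row = [[-6938.4351, -9674.6757, -5744.0206]]
print(sgd_step(w_row, grad_row))  # approx [[1.6104, -0.1967, -2.1214]]
```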
#> tensor([[  -0.4516,  -90.4691],
#>         [ -24.6303, -132.3828],
#>         [ -31.2192, -176.1530],
#>         [  64.3523,  -39.5645],
#>         [ -73.9524, -161.9560]], grad_fn=<AddBackward0>)
#> tensor(33060.8053, grad_fn=<DivBackward0>)
#> tensor([[ -6938.4351,  -9674.6757,  -5744.0206],
#>         [-17408.7861, -20595.9333, -12453.4702]])
#> tensor([ -89.3802, -212.1051])
#> tensor([[ 1.5410, -0.2934, -2.1788],
#>         [ 0.5684, -1.0845, -1.3986]], requires_grad=True)
#> tensor([0.4033, 0.8380], requires_grad=True)
#> tensor([[0., 0., 0.],
#>         [0., 0., 0.]])
#> tensor([0., 0.])
#> tensor([[ 1.6104, -0.1967, -2.1213],
#>         [ 0.7425, -0.8786, -1.2741]], requires_grad=True)
#> tensor([0.4042, 0.8401], requires_grad=True)

With the new weights and biases, the model should have a lower loss.

#> tensor(23432.4894, grad_fn=<DivBackward0>)

11.8 All together

Training for multiple epochs

To reduce the loss further, we repeat the process of adjusting the weights and biases using the gradients multiple times. Each iteration is called an epoch.
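The whole procedure can be sketched end-to-end in plain Python (an illustrative re-implementation of what the rTorch code computes; the 100-epoch count and the 1e-5 learning rate mirror this chapter's setup, while the helper names and random seed are hypothetical):

```python
import random

inputs  = [[73., 67., 43.], [91., 88., 64.], [87., 134., 58.],
           [102., 43., 37.], [69., 96., 70.]]
targets = [[56., 70.], [81., 101.], [119., 133.], [22., 37.], [103., 119.]]

def model(x, w, b):
    # x @ t(w) + b for list-of-lists matrices
    return [[sum(xi * wi for xi, wi in zip(row, wr)) + bj
             for wr, bj in zip(w, b)] for row in x]

def mse(preds, targets):
    sq = [(p - t) ** 2 for pr, tr in zip(preds, targets)
          for p, t in zip(pr, tr)]
    return sum(sq) / len(sq)

random.seed(42)                       # random initialization, as in the chapter
w = [[random.gauss(0, 1) for _ in range(3)] for _ in range(2)]
b = [random.gauss(0, 1) for _ in range(2)]

lr, n_elems = 1e-5, 10                # N = 5 observations x 2 targets
loss_before = mse(model(inputs, w, b), targets)
for epoch in range(100):
    preds = model(inputs, w, b)
    diff = [[p - t for p, t in zip(pr, tr)]
            for pr, tr in zip(preds, targets)]
    # analytic MSE gradients: dL/dw = (2/N) t(diff) x, dL/db = (2/N) sum(diff)
    for j in range(2):
        for k in range(3):
            w[j][k] -= lr * 2 / n_elems * sum(diff[i][j] * inputs[i][k]
                                              for i in range(5))
        b[j] -= lr * 2 / n_elems * sum(diff[i][j] for i in range(5))
loss_after = mse(model(inputs, w, b), targets)
print(loss_before, "->", loss_after)  # the loss drops sharply over the epochs
```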

#> tensor(1258.0216, grad_fn=<DivBackward0>)
#> tensor([[ 69.2462,  80.2082],
#>         [ 73.7183,  97.2052],
#>         [118.5780, 124.9272],
#>         [ 89.2282,  92.7052],
#>         [ 47.4648,  80.7782]], grad_fn=<AddBackward0>)
#> tensor([[ 56.,  70.],
#>         [ 81., 101.],
#>         [119., 133.],
#>         [ 22.,  37.],
#>         [103., 119.]])