
Commit 2fc4b07

typos + text format
1 parent 1ac5a71 commit 2fc4b07

File tree

1 file changed: +7 -7 lines changed


03_pytorch_computer_vision.ipynb

Lines changed: 7 additions & 7 deletions
@@ -532,7 +532,7 @@
 "![example input and output shapes of the fashionMNIST problem](https://raw.githubusercontent.com/mrdbourke/pytorch-deep-learning/main/images/03-computer-vision-input-and-output-shapes.png)\n",
 "*Various problems will have various input and output shapes. But the premise remains: encode data into numbers, build a model to find patterns in those numbers, convert those patterns into something meaningful.*\n",
 "\n",
-"If `color_channels=3`, the image comes in pixel values for red, green and blue (this is also known a the [RGB color model](https://en.wikipedia.org/wiki/RGB_color_model)).\n",
+"If `color_channels=3`, the image comes in pixel values for red, green and blue (this is also known as the [RGB color model](https://en.wikipedia.org/wiki/RGB_color_model)).\n",
 "\n",
 "The order of our current tensor is often referred to as `CHW` (Color Channels, Height, Width).\n",
 "\n",
@@ -802,7 +802,7 @@
 "\n",
 "But I think coding a model in PyTorch would be faster.\n",
 "\n",
-"> **Question:** Do you think the above data can be model with only straight (linear) lines? Or do you think you'd also need non-straight (non-linear) lines?"
+"> **Question:** Do you think the above data can be modeled with only straight (linear) lines? Or do you think you'd also need non-straight (non-linear) lines?"
 ]
 },
 {
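For context on the question above: in PyTorch, a model built only from stacked `nn.Linear()` layers can only fit straight (linear) patterns, while adding an activation such as `nn.ReLU()` between them introduces non-linearity. A minimal sketch of the difference (layer sizes are illustrative, not the notebook's):

```python
from torch import nn

# Linear-only: stacked nn.Linear() layers can only draw straight (linear) decision boundaries.
linear_only = nn.Sequential(
    nn.Linear(in_features=784, out_features=10),
    nn.Linear(in_features=10, out_features=10),
)

# Adding a non-linear activation such as nn.ReLU() between the layers
# lets the model learn non-straight (non-linear) patterns too.
non_linear = nn.Sequential(
    nn.Linear(in_features=784, out_features=10),
    nn.ReLU(),
    nn.Linear(in_features=10, out_features=10),
)
```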
@@ -999,7 +999,7 @@
 "\n",
 "Our baseline will consist of two [`nn.Linear()`](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) layers.\n",
 "\n",
-"We've done this in a previous section but there's going to one slight difference.\n",
+"We've done this in a previous section but there's going to be one slight difference.\n",
 "\n",
 "Because we're working with image data, we're going to use a different layer to start things off.\n",
 "\n",
@@ -1430,7 +1430,7 @@
 " # 1. Forward pass\n",
 " test_pred = model_0(X)\n",
 " \n",
-" # 2. Calculate loss (accumatively)\n",
+" # 2. Calculate loss (accumulatively)\n",
 " test_loss += loss_fn(test_pred, y) # accumulatively add up the loss per epoch\n",
 "\n",
 " # 3. Calculate accuracy (preds need to be same as y_true)\n",
@@ -1578,7 +1578,7 @@
 "\n",
 "Now let's setup some [device-agnostic code](https://pytorch.org/docs/stable/notes/cuda.html#best-practices) for our models and data to run on GPU if it's available.\n",
 "\n",
-"If you're running this notebook on Google Colab, and you don't a GPU turned on yet, it's now time to turn one on via `Runtime -> Change runtime type -> Hardware accelerator -> GPU`. If you do this, your runtime will likely reset and you'll have to run all of the cells above by going `Runtime -> Run before`."
+"If you're running this notebook on Google Colab, and you don't have a GPU turned on yet, it's now time to turn one on via `Runtime -> Change runtime type -> Hardware accelerator -> GPU`. If you do this, your runtime will likely reset and you'll have to run all of the cells above by going `Runtime -> Run before`."
 ]
 },
 {
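The device-agnostic setup mentioned above follows the standard pattern from the linked PyTorch best-practices page: pick `"cuda"` when a GPU is available, otherwise fall back to `"cpu"`, then move models and data with `.to(device)`. A minimal sketch:

```python
import torch

# Use the GPU if one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Models and data are then moved onto that device, e.g.
# model_0.to(device) and X, y = X.to(device), y.to(device)
```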
@@ -1855,7 +1855,7 @@
 "\n",
 "We'll do so inside another loop for each epoch.\n",
 "\n",
-"That way for each epoch we're going a training and a testing step.\n",
+"That way, for each epoch, we're going through a training step and a testing step.\n",
 "\n",
 "> **Note:** You can customize how often you do a testing step. Sometimes people do them every five epochs or 10 epochs or in our case, every epoch.\n",
 "\n",
@@ -1966,7 +1966,7 @@
 "\n",
 "> **Note:** The training time on CUDA vs CPU will depend largely on the quality of the CPU/GPU you're using. Read on for a more explained answer.\n",
 "\n",
-"> **Question:** \"I used a a GPU but my model didn't train faster, why might that be?\"\n",
+"> **Question:** \"I used a GPU but my model didn't train faster, why might that be?\"\n",
 ">\n",
 "> **Answer:** Well, one reason could be because your dataset and model are both so small (like the dataset and model we're working with) the benefits of using a GPU are outweighed by the time it actually takes to transfer the data there.\n",
 "> \n",

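To make the transfer-cost point from the last hunk concrete, here is a rough sketch (not the notebook's code) that times a tiny matrix multiply on the CPU and on the GPU, counting the time spent moving the data across; for small workloads like this, the transfer can outweigh the GPU's speedup:

```python
import time
import torch

x = torch.rand(32, 784)
w = torch.rand(784, 10)

# Time 1000 small matrix multiplies on the CPU.
start = time.perf_counter()
for _ in range(1000):
    _ = x @ w
cpu_time = time.perf_counter() - start

if torch.cuda.is_available():
    # Time the same work on the GPU, including the data transfer.
    start = time.perf_counter()
    x_gpu, w_gpu = x.to("cuda"), w.to("cuda")  # data transfer is part of the cost
    for _ in range(1000):
        _ = x_gpu @ w_gpu
    torch.cuda.synchronize()                   # wait for queued GPU work to finish
    gpu_time = time.perf_counter() - start
    print(f"CPU: {cpu_time:.4f}s | GPU (incl. transfer): {gpu_time:.4f}s")
```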