
Commit 21455a2

Merge pull request #806 from lombardo-luca/patch-1
Typos in notebooks 01 and 02
2 parents: d9978ec + 4eb28cc

File tree

2 files changed: 4 additions, 4 deletions


01_pytorch_workflow.ipynb

Lines changed: 1 addition & 1 deletion
@@ -899,7 +899,7 @@
  "\n",
  "| Number | Step name | What does it do? | Code example |\n",
  "| ----- | ----- | ----- | ----- |\n",
- "| 1 | Forward pass | The model goes through all of the training data once, performing its `forward()` function calculations. | `model(x_test)` |\n",
+ "| 1 | Forward pass | The model goes through all of the testing data once, performing its `forward()` function calculations. | `model(x_test)` |\n",
  "| 2 | Calculate the loss | The model's outputs (predictions) are compared to the ground truth and evaluated to see how wrong they are. | `loss = loss_fn(y_pred, y_test)` | \n",
  "| 3 | Calulate evaluation metrics (optional) | Alongisde the loss value you may want to calculate other evaluation metrics such as accuracy on the test set. | Custom functions |\n",
  "\n",

02_pytorch_classification.ipynb

Lines changed: 3 additions & 3 deletions
@@ -2318,7 +2318,7 @@
  "\n",
  "But the data we've been working with is non-linear (circles).\n",
  "\n",
- "What do you think will happen when we introduce the capability for our model to use **non-linear actviation functions**?\n",
+ "What do you think will happen when we introduce the capability for our model to use **non-linear activation functions**?\n",
  "\n",
  "Well let's see.\n",
  "\n",
@@ -2487,7 +2487,7 @@
  " # 1. Forward pass\n",
  " test_logits = model_3(X_test).squeeze()\n",
  " test_pred = torch.round(torch.sigmoid(test_logits)) # logits -> prediction probabilities -> prediction labels\n",
- " # 2. Calcuate loss and accuracy\n",
+ " # 2. Calculate loss and accuracy\n",
  " test_loss = loss_fn(test_logits, y_test)\n",
  " test_acc = accuracy_fn(y_true=y_test,\n",
  " y_pred=test_pred)\n",
@@ -3740,7 +3740,7 @@
  " * Feel free to use any combination of PyTorch layers (linear and non-linear) you want.\n",
  "3. Setup a binary classification compatible loss function and optimizer to use when training the model.\n",
  "4. Create a training and testing loop to fit the model you created in 2 to the data you created in 1.\n",
- " * To measure model accuray, you can create your own accuracy function or use the accuracy function in [TorchMetrics](https://torchmetrics.readthedocs.io/en/latest/).\n",
+ " * To measure model accuracy, you can create your own accuracy function or use the accuracy function in [TorchMetrics](https://torchmetrics.readthedocs.io/en/latest/).\n",
  " * Train the model for long enough for it to reach over 96% accuracy.\n",
  " * The training loop should output progress every 10 epochs of the model's training and test set loss and accuracy.\n",
  "5. Make predictions with your trained model and plot them using the `plot_decision_boundary()` function created in this notebook.\n",
