|
956 | 956 | "| Stochastic Gradient Descent (SGD) optimizer | Classification, regression, many others. | [`torch.optim.SGD()`](https://pytorch.org/docs/stable/generated/torch.optim.SGD.html) |\n",
|
957 | 957 | "| Adam Optimizer | Classification, regression, many others. | [`torch.optim.Adam()`](https://pytorch.org/docs/stable/generated/torch.optim.Adam.html) |\n",
|
958 | 958 | "| Binary cross entropy loss | Binary classification | [`torch.nn.BCEWithLogitsLoss`](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html) or [`torch.nn.BCELoss`](https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html) |\n",
|
959 |
| - "| Cross entropy loss | Mutli-class classification | [`torch.nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html) |\n", |
| 959 | + "| Cross entropy loss | Multi-class classification | [`torch.nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html) |\n", |
960 | 960 | "| Mean absolute error (MAE) or L1 Loss | Regression | [`torch.nn.L1Loss`](https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html) | \n",
|
961 | 961 | "| Mean squared error (MSE) or L2 Loss | Regression | [`torch.nn.MSELoss`](https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html#torch.nn.MSELoss) | \n",
|
962 | 962 | "\n",
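The table above maps each loss function and optimizer to its PyTorch class. A minimal sketch of instantiating them (the `nn.Linear` model and the learning rates here are placeholders for illustration, not values from the notebook):

```python
import torch
from torch import nn

# A tiny stand-in model so the optimizers have parameters to track (hypothetical)
model = nn.Linear(in_features=2, out_features=1)

# Binary classification: BCEWithLogitsLoss combines a sigmoid layer with BCELoss
loss_fn_binary = nn.BCEWithLogitsLoss()

# Multi-class classification: CrossEntropyLoss expects raw logits, not probabilities
loss_fn_multi = nn.CrossEntropyLoss()

# Regression: L1Loss is MAE, MSELoss is MSE
loss_fn_mae = nn.L1Loss()
loss_fn_mse = nn.MSELoss()

# Optimizers take the model's parameters plus a learning rate
optimizer_sgd = torch.optim.SGD(params=model.parameters(), lr=0.1)
optimizer_adam = torch.optim.Adam(params=model.parameters(), lr=0.001)
```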
|
|
2913 | 2913 | "id": "f5Ephtx6f1jB"
|
2914 | 2914 | },
|
2915 | 2915 | "source": [
|
2916 |
| - "### 8.1 Creating mutli-class classification data\n", |
| 2916 | + "### 8.1 Creating multi-class classification data\n", |
2917 | 2917 | "\n",
|
2918 | 2918 | "To begin a multi-class classification problem, let's create some multi-class data.\n",
|
2919 | 2919 | "\n",
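One common way to create multi-class toy data is scikit-learn's `make_blobs()`; the cluster settings and variable names below are illustrative assumptions, not fixed by the text above:

```python
import torch
from sklearn.datasets import make_blobs

# Generate 4 clusters of 2D points (hypothetical settings; adjust as needed)
X_blob, y_blob = make_blobs(n_samples=1000,
                            n_features=2,     # number of X features
                            centers=4,        # number of classes
                            cluster_std=1.5,  # spread the clusters out a little
                            random_state=42)

# Convert NumPy arrays to tensors; CrossEntropyLoss expects integer class labels
X_blob = torch.from_numpy(X_blob).type(torch.float)
y_blob = torch.from_numpy(y_blob).type(torch.LongTensor)
```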
|
|
3027 | 3027 | "\n",
|
3028 | 3028 | "You might also be starting to get an idea of how flexible neural networks are.\n",
|
3029 | 3029 | "\n",
|
3030 |
| - "How about we build one similar to `model_3` but this still capable of handling multi-class data?\n", |
| 3030 | + "How about we build one similar to `model_3` but one that's still capable of handling multi-class data?\n", |
3031 | 3031 | "\n",
|
3032 | 3032 | "To do so, let's create a subclass of `nn.Module` that takes in three hyperparameters:\n",
|
3033 | 3033 | "* `input_features` - the number of `X` features coming into the model.\n",
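The hunk above cuts off after the first hyperparameter. Assuming the other two cover the output size and the hidden-layer width (the names `output_features` and `hidden_units`, and the class name, are hypothetical here), such a subclass might look like:

```python
import torch
from torch import nn

class MultiClassModel(nn.Module):
    """Sketch of an nn.Module subclass taking three hyperparameters
    (names beyond input_features are assumptions)."""
    def __init__(self, input_features: int, output_features: int, hidden_units: int = 8):
        super().__init__()
        self.linear_layer_stack = nn.Sequential(
            nn.Linear(input_features, hidden_units),
            nn.Linear(hidden_units, hidden_units),
            nn.Linear(hidden_units, output_features),  # one logit per class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear_layer_stack(x)

model_4 = MultiClassModel(input_features=2, output_features=4)
y_logits = model_4(torch.randn(5, 2))  # 5 samples of 2 features in -> 5 x 4 logits out
```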
|
|
3354 | 3354 | "id": "yhwu9ln1sbl7"
|
3355 | 3355 | },
|
3356 | 3356 | "source": [
|
3357 |
| - "These prediction probablities are essentially saying how much the model *thinks* the target `X` sample (the input) maps to each class.\n", |
| 3357 | + "These prediction probabilities are essentially saying how much the model *thinks* the target `X` sample (the input) maps to each class.\n", |
3358 | 3358 | "\n",
|
3359 | 3359 | "Since there's one value for each class in `y_pred_probs`, the index of the *highest* value is the class the model thinks the specific data sample *most* belongs to.\n",
|
3360 | 3360 | "\n",
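The logits-to-probabilities-to-labels step described above can be sketched as follows (the logits here are random stand-ins for real model outputs):

```python
import torch

torch.manual_seed(42)
y_logits = torch.randn(5, 4)  # stand-in raw model outputs: 5 samples, 4 classes

# Softmax turns logits into prediction probabilities; each row sums to 1
y_pred_probs = torch.softmax(y_logits, dim=1)

# The index of the highest probability is the predicted class
y_preds = y_pred_probs.argmax(dim=1)
```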
|
|
3507 | 3507 | "source": [
|
3508 | 3508 | "### 8.6 Making and evaluating predictions with a PyTorch multi-class model\n",
|
3509 | 3509 | "\n",
|
3510 |
| - "It looks like our trained model is performaning pretty well.\n", |
| 3510 | + "It looks like our trained model is performing pretty well.\n", |
3511 | 3511 | "\n",
|
3512 | 3512 | "But to make sure of this, let's make some predictions and visualize them."
|
3513 | 3513 | ]
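A minimal sketch of the prediction step, assuming an already-trained multi-class model (the `nn.Linear` below is only a stand-in for one):

```python
import torch
from torch import nn

model = nn.Linear(in_features=2, out_features=4)  # stand-in for a trained model (hypothetical)
X_test = torch.randn(10, 2)                        # stand-in test samples

model.eval()                   # put layers like dropout into evaluation behavior
with torch.inference_mode():   # disable gradient tracking for faster predictions
    y_logits = model(X_test)

y_pred_probs = torch.softmax(y_logits, dim=1)
y_preds = y_pred_probs.argmax(dim=1)  # predicted class per test sample
```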
|
|
3776 | 3776 | "* Write down 3 problems where you think machine classification could be useful (these can be anything, get as creative as you like, for example, classifying credit card transactions as fraud or not fraud based on the purchase amount and purchase location features). \n",
|
3777 | 3777 | "* Research the concept of \"momentum\" in gradient-based optimizers (like SGD or Adam), what does it mean?\n",
|
3778 | 3778 | "* Spend 10-minutes reading the [Wikipedia page for different activation functions](https://en.wikipedia.org/wiki/Activation_function#Table_of_activation_functions), how many of these can you line up with [PyTorch's activation functions](https://pytorch.org/docs/stable/nn.html#non-linear-activations-weighted-sum-nonlinearity)?\n",
|
3779 |
| - "* Research when accuracy might be a poor metric to use (hint: read [\"Beyond Accuracy\" by by Will Koehrsen](https://willkoehrsen.github.io/statistics/learning/beyond-accuracy-precision-and-recall/) for ideas).\n", |
| 3779 | + "* Research when accuracy might be a poor metric to use (hint: read [\"Beyond Accuracy\" by Will Koehrsen](https://willkoehrsen.github.io/statistics/learning/beyond-accuracy-precision-and-recall/) for ideas).\n", |
3780 | 3780 | "* **Watch:** For an idea of what's happening within our neural networks and what they're doing to learn, watch [MIT's Introduction to Deep Learning video](https://youtu.be/7sB052Pz0sQ)."
|
3781 | 3781 | ]
|
3782 | 3782 | }
|
|