
Commit 904e329

some more sentence changed
1 parent fb6a830 commit 904e329

1 file changed, +3 -3 lines changed


02_pytorch_classification.ipynb

Lines changed: 3 additions & 3 deletions
@@ -2170,7 +2170,7 @@
 "\n",
 "> **Note:** A helpful troubleshooting step when building deep learning models is to start as small as possible to see if the model works before scaling it up. \n",
 ">\n",
-"> This could mean starting with a simple neural network (not many layers, not many hidden neurons) and a small dataset (like the one we've made) and then **overfitting** (making the model perform too well) on that small example before increasing the amount data or the model size/design to *reduce* overfitting.\n",
+"> This could mean starting with a simple neural network (not many layers, not many hidden neurons) and a small dataset (like the one we've made) and then **overfitting** (making the model perform too well) on that small example before increasing the amount of data or the model size/design to *reduce* overfitting.\n",
 "\n",
 "So what could it be?\n",
 "\n",
@@ -2322,7 +2322,7 @@
 "\n",
 "Well let's see.\n",
 "\n",
-"PyTorch has a bunch of [ready-made non-linear activation functions](https://pytorch.org/docs/stable/nn.html#non-linear-activations-weighted-sum-nonlinearity) that do similiar but different things. \n",
+"PyTorch has a bunch of [ready-made non-linear activation functions](https://pytorch.org/docs/stable/nn.html#non-linear-activations-weighted-sum-nonlinearity) that do similar but different things. \n",
 "\n",
 "One of the most common and best performing is [ReLU](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) (rectified linear-unit, [`torch.nn.ReLU()`](https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html)).\n",
 "\n",
@@ -2386,7 +2386,7 @@
 "\n",
 "> **Question:** *Where should I put the non-linear activation functions when constructing a neural network?*\n",
 ">\n",
-"> A rule of thumb is to put them in between hidden layers and just after the output layer, however, there is no set in stone option. As you learn more about neural networks and deep learning you'll find a bunch of different ways of putting things together. In the meantine, best to experiment, experiment, experiment.\n",
+"> A rule of thumb is to put them in between hidden layers and just after the output layer, however, there is no set in stone option. As you learn more about neural networks and deep learning you'll find a bunch of different ways of putting things together. In the meantime, best to experiment, experiment, experiment.\n",
 "\n",
 "Now we've got a model ready to go, let's create a binary classification loss function as well as an optimizer."
 ]
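To make the placement rule of thumb and the closing sentence of this hunk concrete, here is an illustrative sketch: the class name, layer sizes, BCEWithLogitsLoss and SGD are common choices for a small binary classifier and are assumptions, not necessarily the exact ones the notebook uses.

import torch
from torch import nn

class TinyBinaryClassifier(nn.Module):
    """Illustrative model: ReLU non-linearities sit between the hidden layers."""
    def __init__(self):
        super().__init__()
        self.layer_1 = nn.Linear(2, 10)
        self.layer_2 = nn.Linear(10, 10)
        self.layer_3 = nn.Linear(10, 1)
        self.relu = nn.ReLU()

    def forward(self, x):
        # ReLU after each hidden layer; the output layer returns raw logits.
        return self.layer_3(self.relu(self.layer_2(self.relu(self.layer_1(x)))))

model = TinyBinaryClassifier()
loss_fn = nn.BCEWithLogitsLoss()                          # binary classification loss that works on raw logits
optimizer = torch.optim.SGD(params=model.parameters(), lr=0.1)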
