
Commit 8d98438

all typos done
1 parent df8b596 commit 8d98438

File tree: 1 file changed (+4, -4 lines)

1 file changed

+4
-4
lines changed

08_pytorch_paper_replicating.ipynb

Lines changed: 4 additions & 4 deletions
@@ -4103,7 +4103,7 @@
     "id": "45a65cda-db08-441c-9f60-cf79138e029d"
    },
    "source": [
-    "Then we'll setup device-agonistc code."
+    "Then we'll setup device-agnostic code."
    ]
   },
   {
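For context on the corrected line: device-agnostic code in these notebooks usually boils down to picking "cuda" when a GPU is available and falling back to the CPU otherwise. A minimal sketch (the exact cell isn't part of this diff, so treat it as an assumption):

```python
import torch

# Use a GPU ("cuda") if one is available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
print(device)
```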
@@ -4327,7 +4327,7 @@
    "source": [
     "Finally, we'll transform our images into tensors and turn the tensors into DataLoaders.\n",
     "\n",
-    "Since we're using a pretrained model form `torchvision.models` we can call the `transforms()` method on it to get its required transforms.\n",
+    "Since we're using a pretrained model from `torchvision.models` we can call the `transforms()` method on it to get its required transforms.\n",
     "\n",
     "Remember, if you're going to use a pretrained model, it's generally important to **ensure your own custom data is transformed/formatted in the same way the data the original model was trained on**.\n",
     "\n",
@@ -4372,7 +4372,7 @@
    "source": [
     "And now we've got transforms ready, we can turn our images into DataLoaders using the `data_setup.create_dataloaders()` method we created in [05. PyTorch Going Modular section 2](https://www.learnpytorch.io/05_pytorch_going_modular/#2-create-datasets-and-dataloaders-data_setuppy).\n",
     "\n",
-    "Since we're using a feature extractor model (less trainable parameters), we could increase the batch size to a higher value (if we set it to 1024, we'd be mimicing an improvement found in [*Better plain ViT baselines for ImageNet-1k*](https://arxiv.org/abs/2205.01580), a paper which improves upon the original ViT paper and suggested extra reading). But since we only have ~200 training samples total, we'll stick with 32."
+    "Since we're using a feature extractor model (less trainable parameters), we could increase the batch size to a higher value (if we set it to 1024, we'd be mimicking an improvement found in [*Better plain ViT baselines for ImageNet-1k*](https://arxiv.org/abs/2205.01580), a paper which improves upon the original ViT paper and suggested extra reading). But since we only have ~200 training samples total, we'll stick with 32."
     ]
    },
    {
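And a rough sketch of the `data_setup.create_dataloaders()` call the cell describes; the module path, data paths, and argument names here are assumptions based on the linked 05. PyTorch Going Modular section and may differ slightly from the actual helper:

```python
import torchvision
from going_modular.going_modular import data_setup  # course helper module (path assumed)

# Reuse the pretrained ViT transforms as the DataLoader transform
pretrained_vit_transforms = torchvision.models.ViT_B_16_Weights.DEFAULT.transforms()

# Hypothetical data paths following the notebook's pizza/steak/sushi layout
train_dir = "data/pizza_steak_sushi/train"
test_dir = "data/pizza_steak_sushi/test"

# Turn the image folders into DataLoaders (argument names assumed from section 05)
train_dataloader, test_dataloader, class_names = data_setup.create_dataloaders(
    train_dir=train_dir,
    test_dir=test_dir,
    transform=pretrained_vit_transforms,
    batch_size=32,  # small batch size since there are only ~200 training samples
)
```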
@@ -4649,7 +4649,7 @@
     "\n",
     "> **Note:** ^ the EffNetB2 model in reference was trained with 20% of pizza, steak and sushi data (double the amount of images) rather than the ViT feature extractor which was trained with 10% of pizza, steak and sushi data. An exercise would be to train the ViT feature extractor model on the same amount of data and see how much the results improve.\n",
     "\n",
-    "The EffNetB2 model is ~11x smaller than the ViT model with similiar results for test loss and accuracy.\n",
+    "The EffNetB2 model is ~11x smaller than the ViT model with similar results for test loss and accuracy.\n",
     "\n",
     "However, the ViT model's results may improve more when trained with the same data (20% pizza, steak and sushi data).\n",
     "\n",
