
Commit 085d3e3

Merge pull request #1045 from pritesh2000/main
Update 06_pytorch_transfer_learning.ipynb and 07_pytorch_experiment_tracking.ipynb: fix some typos and update some information.
2 parents ed35077 + 8cccd2b commit 085d3e3

File tree

2 files changed (+15, -44 lines)


06_pytorch_transfer_learning.ipynb

Lines changed: 10 additions & 10 deletions
@@ -400,7 +400,7 @@
 "| 3 | A mean of `[0.485, 0.456, 0.406]` (values across each colour channel). | `torchvision.transforms.Normalize(mean=...)` to adjust the mean of our images. |\n",
 "| 4 | A standard deviation of `[0.229, 0.224, 0.225]` (values across each colour channel). | `torchvision.transforms.Normalize(std=...)` to adjust the standard deviation of our images. | \n",
 "\n",
-"> **Note:** ^some pretrained models from `torchvision.models` in different sizes to `[3, 224, 224]`, for example, some might take them in `[3, 240, 240]`. For specific input image sizes, see the documentation.\n",
+"> **Note:** Some pretrained models from `torchvision.models` take images in sizes different to `[3, 224, 224]`; for example, some might take them in `[3, 240, 240]`. For specific input image sizes, see the documentation.\n",
 "\n",
 "> **Question:** *Where did the mean and standard deviation values come from? Why do we need to do this?*\n",
 ">\n",
@@ -495,7 +495,7 @@
 "```\n",
 "\n",
 "Where,\n",
-"* `EfficientNet_B0_Weights` is the model architecture weights we'd like to use (there are many differnt model architecture options in `torchvision.models`).\n",
+"* `EfficientNet_B0_Weights` is the model architecture weights we'd like to use (there are many different model architecture options in `torchvision.models`).\n",
 "* `DEFAULT` means the *best available* weights (the best performance on ImageNet).\n",
 " * **Note:** Depending on the model architecture you choose, you may also see other options such as `IMAGENET_V1` and `IMAGENET_V2` where generally the higher version number the better. Though if you want the best available, `DEFAULT` is the easiest option. See the [`torchvision.models` documentation](https://pytorch.org/vision/main/models.html) for more.\n",
 " \n",
@@ -530,7 +530,7 @@
 "id": "cebcdf20-4ab7-40ba-8691-9d9af8962dab",
 "metadata": {},
 "source": [
-"And now to access the transforms assosciated with our `weights`, we can use the `transforms()` method.\n",
+"And now to access the transforms associated with our `weights`, we can use the `transforms()` method.\n",
 "\n",
 "This is essentially saying \"get the data transforms that were used to train the `EfficientNet_B0_Weights` on ImageNet\"."
 ]
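Continuing the sketch above, accessing those training-time transforms might look like this (again assuming torchvision v0.13+):

```python
# Get the preprocessing transforms the EfficientNet_B0 weights were trained with
auto_transforms = weights.transforms()
print(auto_transforms)  # shows the resize/crop sizes, mean and std used on ImageNet
```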
@@ -657,7 +657,7 @@
 "\n",
 "But if you've got unlimited compute power, as [*The Bitter Lesson*](http://www.incompleteideas.net/IncIdeas/BitterLesson.html) states, you'd likely take the biggest, most compute hungry model you can.\n",
 "\n",
-"Understanding this **performance vs. speed vs. size tradeoff** will come with time and practice.\n",
+"Understanding this **performance vs. speed vs. size** tradeoff will come with time and practice.\n",
 "\n",
 "For me, I've found a nice balance in the `efficientnet_bX` models. \n",
 "\n",
@@ -1267,7 +1267,7 @@
 "* **Same shape** - If our images are different shapes to what our model was trained on, we'll get shape errors.\n",
 "* **Same datatype** - If our images are a different datatype (e.g. `torch.int8` vs. `torch.float32`) we'll get datatype errors.\n",
 "* **Same device** - If our images are on a different device to our model, we'll get device errors.\n",
-"* **Same transformations** - If our model is trained on images that have been transformed in certain way (e.g. normalized with a specific mean and standard deviation) and we try and make preidctions on images transformed in a different way, these predictions may be off.\n",
+"* **Same transformations** - If our model is trained on images that have been transformed in a certain way (e.g. normalized with a specific mean and standard deviation) and we try and make predictions on images transformed in a different way, these predictions may be off.\n",
 "\n",
 "> **Note:** These requirements go for all kinds of data if you're trying to make predictions with a trained model. Data you'd like to predict on should be in the same format as your model was trained on.\n",
 "\n",
@@ -1359,7 +1359,7 @@
 "\n",
 "We can get a list of all the test image paths using `list(Path(test_dir).glob(\"*/*.jpg\"))`, the stars in the `glob()` method say \"any file matching this pattern\", in other words, any file ending in `.jpg` (all of our images).\n",
 "\n",
-"And then we can randomly sample a number of these using Python's [`random.sample(populuation, k)`](https://docs.python.org/3/library/random.html#random.sample) where `population` is the sequence to sample and `k` is the number of samples to retrieve."
+"And then we can randomly sample a number of these using Python's [`random.sample(population, k)`](https://docs.python.org/3/library/random.html#random.sample) where `population` is the sequence to sample and `k` is the number of samples to retrieve."
 ]
 },
 {
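A quick sketch of that sampling pattern (the `test_dir` value is assumed from earlier in the notebook):

```python
import random
from pathlib import Path

test_dir = "data/pizza_steak_sushi/test"                 # assumed dataset location
test_image_paths = list(Path(test_dir).glob("*/*.jpg"))  # any .jpg one class-folder deep

# Randomly pick k paths without replacement
sampled_paths = random.sample(population=test_image_paths, k=3)
```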
@@ -1445,7 +1445,7 @@
 "\n",
 "That's where the real fun of machine learning is!\n",
 "\n",
-"Predicting on your own custom data, outisde of any training or test set.\n",
+"Predicting on your own custom data, outside of any training or test set.\n",
 "\n",
 "To test our model on a custom image, let's import the old faithful `pizza-dad.jpeg` image (an image of my dad eating pizza).\n",
 "\n",
@@ -1521,7 +1521,7 @@
 "metadata": {},
 "source": [
 "## Main takeaways\n",
-"* **Transfer learning** often allows to you get good results with a relatively small amount of custom data.\n",
+"* **Transfer learning** often allows you to get good results with a relatively small amount of custom data.\n",
 "* Knowing the power of transfer learning, it's a good idea to ask at the start of every problem, \"does an existing well-performing model exist for my problem?\"\n",
 "* When using a pretrained model, it's important that your custom data be formatted/preprocessed in the same way that the original model was trained on, otherwise you may get degraded performance.\n",
 "* The same goes for predicting on custom data, ensure your custom data is in the same format as the data your model was trained on.\n",
@@ -1560,8 +1560,8 @@
 " * You may want to try an EfficientNet with a higher number than our B0, perhaps `torchvision.models.efficientnet_b2()`?\n",
 " \n",
 "## Extra-curriculum\n",
-"* Look up what \"model fine-tuning\" is and spend 30-minutes researching different methods to perform it with PyTorch. How would we change our code to fine-tine? Tip: fine-tuning usually works best if you have *lots* of custom data, where as, feature extraction is typically better if you have less custom data.\n",
-"* Check out the new/upcoming [PyTorch multi-weights API](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/) (still in beta at time of writing, May 2022), it's a new way to perform transfer learning in PyTorch. What changes to our code would need to made to use the new API?\n",
+"* Look up what \"model fine-tuning\" is and spend 30 minutes researching different methods to perform it with PyTorch. How would we change our code to fine-tune? Tip: fine-tuning usually works best if you have *lots* of custom data, whereas feature extraction is typically better if you have less custom data.\n",
+"* Check out the new/upcoming [PyTorch multi-weights API](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/) (still in beta at time of writing, May 2022); it's a new way to perform transfer learning in PyTorch. What changes to our code would need to be made to use the new API?\n",
 "* Try to create your own classifier on two classes of images, for example, you could collect 10 photos of your dog and your friend's dog and train a model to classify the two dogs. This would be a good way to practice creating a dataset as well as building a model on that dataset."
 ]
 }
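If you take up the fine-tuning exercise above, one common starting point is a sketch like this (assuming the frozen `efficientnet_b0` feature extractor built in this notebook; the layer indexing is specific to that architecture):

```python
import torch

# Unfreeze the last feature block so it trains alongside the classifier head
for param in model.features[-1].parameters():
    param.requires_grad = True

# Fine-tuning often uses a lower learning rate than training a fresh head
optimizer = torch.optim.Adam(
    params=filter(lambda p: p.requires_grad, model.parameters()),
    lr=1e-4,
)
```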

07_pytorch_experiment_tracking.ipynb

Lines changed: 5 additions & 34 deletions
@@ -118,7 +118,7 @@
 "| **1. Get data** | Let's get the pizza, steak and sushi image classification dataset we've been using to try and improve our FoodVision Mini model's results. |\n",
 "| **2. Create Datasets and DataLoaders** | We'll use the `data_setup.py` script we wrote in chapter 05. PyTorch Going Modular to set up our DataLoaders. |\n",
 "| **3. Get and customise a pretrained model** | Just like the last section, 06. PyTorch Transfer Learning, we'll download a pretrained model from `torchvision.models` and customise it to our own problem. | \n",
-"| **4. Train model amd track results** | Let's see what it's like to train and track the training results of a single model using TensorBoard. |\n",
+"| **4. Train model and track results** | Let's see what it's like to train and track the training results of a single model using TensorBoard. |\n",
 "| **5. View our model's results in TensorBoard** | Previously we visualized our model's loss curves with a helper function, now let's see what they look like in TensorBoard. |\n",
 "| **6. Creating a helper function to track experiments** | If we're going to be adhering to the machine learning practitioner's motto of *experiment, experiment, experiment!*, we'd best create a function that will help us save our modelling experiment results. |\n",
 "| **7. Setting up a series of modelling experiments** | Instead of running experiments one by one, how about we write some code to run several experiments at once, with different models, different amounts of data and different training times. | \n",
@@ -613,7 +613,7 @@
 "source": [
 "Wonderful!\n",
 "\n",
-"Now we've got a pretrained model let's turn into a feature extractor model.\n",
+"Now we've got a pretrained model, let's turn it into a feature extractor model.\n",
 "\n",
 "In essence, we'll freeze the base layers of the model (we'll use these to extract features from our input images) and we'll change the classifier head (output layer) to suit the number of classes we're working with (we've got 3 classes: pizza, steak, sushi).\n",
 "\n",
@@ -1034,8 +1034,6 @@
 "| VS Code (notebooks or Python scripts) | Press `SHIFT + CMD + P` to open the Command Palette and search for the command \"Python: Launch TensorBoard\". | [VS Code Guide on TensorBoard and PyTorch](https://code.visualstudio.com/docs/datascience/pytorch-support#_tensorboard-integration) |\n",
 "| Jupyter and Colab Notebooks | Make sure [TensorBoard is installed](https://pypi.org/project/tensorboard/), load it with `%load_ext tensorboard` and then view your results with `%tensorboard --logdir DIR_WITH_LOGS`. | [`torch.utils.tensorboard`](https://pytorch.org/docs/stable/tensorboard.html) and [Get started with TensorBoard](https://www.tensorflow.org/tensorboard/get_started) |\n",
 "\n",
-"You can also upload your experiments to [tensorboard.dev](https://tensorboard.dev/) to share them publicly with others.\n",
-"\n",
 "Running the following code in a Google Colab or Jupyter Notebook will start an interactive TensorBoard session to view TensorBoard files in the `runs/` directory.\n",
 "\n",
 "```python\n",
@@ -1067,8 +1065,7 @@
 "*Viewing a single modelling experiment's results for accuracy and loss in TensorBoard.*\n",
 "\n",
 "> **Note:** For more information on running TensorBoard in notebooks or in other locations, see the following:\n",
-"> * [Using TensorBoard in Notebooks guide by TensorFlow](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks)\n",
-"> * [Get started with TensorBoard.dev](https://tensorboard.dev/#get-started) (helpful for uploading your TensorBoard logs to a shareable link)"
+"> * [Using TensorBoard in Notebooks guide by TensorFlow](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks)"
 ]
 },
 {
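For context, a minimal sketch of writing the scalar logs TensorBoard reads (the `log_dir` and dummy loss values are illustrative; assumes the `tensorboard` package is installed):

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/example_experiment")
for epoch in range(5):
    dummy_train_loss = 1.0 / (epoch + 1)  # stand-in for a real training loss
    writer.add_scalar(tag="Loss/train", scalar_value=dummy_train_loss, global_step=epoch)
writer.close()  # flush events to disk so TensorBoard can display them
```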
@@ -1585,7 +1582,7 @@
 "# Find the number of samples/batches per dataloader (using the same test_dataloader for both experiments)\n",
 "print(f\"Number of batches of size {BATCH_SIZE} in 10 percent training data: {len(train_dataloader_10_percent)}\")\n",
 "print(f\"Number of batches of size {BATCH_SIZE} in 20 percent training data: {len(train_dataloader_20_percent)}\")\n",
-"print(f\"Number of batches of size {BATCH_SIZE} in testing data: {len(train_dataloader_10_percent)} (all experiments will use the same test set)\")\n",
+"print(f\"Number of batches of size {BATCH_SIZE} in testing data: {len(test_dataloader)} (all experiments will use the same test set)\")\n",
 "print(f\"Number of classes: {len(class_names)}, class names: {class_names}\")"
 ]
 },
@@ -2305,33 +2302,7 @@
 "\n",
 "<img src=\"https://raw.githubusercontent.com/mrdbourke/pytorch-deep-learning/main/images/07-tensorboard-lowest-test-loss.png\" alt=\"various modelling experiments visualized on tensorboard with model that has the lowest test loss highlighted\" width=900/>\n",
 "\n",
-"*Visualizing the test loss values for the different modelling experiments in TensorBoard, you can see that the EffNetB0 model trained for 10 epochs and with 20% of the data achieves the lowest loss. This sticks with the overall trend of the experiments that: more data, larger model and longer training time is generally better.*\n",
-"\n",
-"You can also upload your TensorBoard experiment results to [tensorboard.dev](https://tensorboard.dev) to host them publically for free.\n",
-"\n",
-"For example, running code similiar to the following: "
-]
-},
-{
-"cell_type": "code",
-"execution_count": 31,
-"metadata": {},
-"outputs": [],
-"source": [
-"# # Upload the results to TensorBoard.dev (uncomment to try it out)\n",
-"# !tensorboard dev upload --logdir runs \\\n",
-"# --name \"07. PyTorch Experiment Tracking: FoodVision Mini model results\" \\\n",
-"# --description \"Comparing results of different model size, training data amount and training time.\""
-]
-},
-{
-"attachments": {},
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Running the cell above results in the experiments from this notebook being publically viewable at: https://tensorboard.dev/experiment/VySxUYY7Rje0xREYvCvZXA/\n",
-"\n",
-"> **Note:** Beware that anything you upload to tensorboard.dev is publically available for anyone to see. So if you do upload your experiments, be careful they don't contain sensitive information. "
+"*Visualizing the test loss values for the different modelling experiments in TensorBoard, you can see that the EffNetB0 model trained for 10 epochs and with 20% of the data achieves the lowest loss. This sticks with the overall trend of the experiments: more data, a larger model and longer training time are generally better.*"
 ]
 },
 {
