Commit a2273e4

Merge pull request #1065 from pritesh2000/gram-1/07
07_pytorch_experiment_tracking.ipynb
2 parents 344c834 + 4f7e678 commit a2273e4

File tree: 1 file changed

07_pytorch_experiment_tracking.ipynb (12 additions, 12 deletions)

@@ -21,7 +21,7 @@
 "\n",
 "We've trained a fair few models now on the journey to making FoodVision Mini (an image classification model to classify images of pizza, steak or sushi).\n",
 "\n",
-"And so far we've keep track of them via Python dictionaries.\n",
+"And so far we've kept track of them via Python dictionaries.\n",
 "\n",
 "Or just comparing them by the metric print outs during training.\n",
 "\n",
@@ -83,7 +83,7 @@
 "source": [
 "## Different ways to track machine learning experiments \n",
 "\n",
-"There are as many different ways to track machine learning experiments as there is experiments to run.\n",
+"There are as many different ways to track machine learning experiments as there are experiments to run.\n",
 "\n",
 "This table covers a few.\n",
 "\n",
@@ -92,7 +92,7 @@
 "| Python dictionaries, CSV files, print outs | None | Easy to setup, runs in pure Python | Hard to keep track of large numbers of experiments | Free |\n",
 "| [TensorBoard](https://www.tensorflow.org/tensorboard/get_started) | Minimal, install [`tensorboard`](https://pypi.org/project/tensorboard/) | Extensions built into PyTorch, widely recognized and used, easily scales. | User-experience not as nice as other options. | Free |\n",
 "| [Weights & Biases Experiment Tracking](https://wandb.ai/site/experiment-tracking) | Minimal, install [`wandb`](https://docs.wandb.ai/quickstart), make an account | Incredible user experience, make experiments public, tracks almost anything. | Requires external resource outside of PyTorch. | Free for personal use | \n",
-"| [MLFlow](https://mlflow.org/) | Minimal, install `mlflow` and starting tracking | Fully open-source MLOps lifecycle management, many integrations. | Little bit harder to setup a remote tracking server than other services. | Free | \n",
+"| [MLFlow](https://mlflow.org/) | Minimal, install `mlflow` and start tracking | Fully open-source MLOps lifecycle management, many integrations. | Little bit harder to setup a remote tracking server than other services. | Free | \n",
 "\n",
 "<img src=\"https://raw.githubusercontent.com/mrdbourke/pytorch-deep-learning/main/images/07-different-places-to-track-experiments.png\" alt=\"various places to track machine learning experiments\" width=900/>\n",
 "\n",
@@ -276,7 +276,7 @@
 "\n",
 "Let's create a function to \"set the seeds\" called `set_seeds()`.\n",
 "\n",
-"> **Note:** Recall a [random seed](https://en.wikipedia.org/wiki/Random_seed) is a way of flavouring the randomness generated by a computer. They aren't necessary to always set when running machine learning code, however, they help ensure there's an element of reproducibility (the numbers I get with my code are similar to the numbers you get with your code). Outside of an education or experimental setting, random seeds generally aren't required."
+"> **Note:** Recalling a [random seed](https://en.wikipedia.org/wiki/Random_seed) is a way of flavouring the randomness generated by a computer. They aren't necessary to always set when running machine learning code, however, they help ensure there's an element of reproducibility (the numbers I get with my code are similar to the numbers you get with your code). Outside of an educational or experimental setting, random seeds generally aren't required."
 ]
 },
 {
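
For reference, a minimal sketch of what a `set_seeds()` helper like the one mentioned above can look like (the exact body in the notebook may differ):

```python
import torch

def set_seeds(seed: int = 42):
    """Sets random seeds for torch operations on CPU and GPU."""
    # Seed general torch operations
    torch.manual_seed(seed)
    # Seed CUDA operations (those that happen on the GPU)
    torch.cuda.manual_seed(seed)
```
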
@@ -313,7 +313,7 @@
 "\n",
 "So how about we run some experiments and try to further improve our results?\n",
 "\n",
-"To do so, we'll use similar code to the previous section to download the [`pizza_steak_sushi.zip`](https://github.com/mrdbourke/pytorch-deep-learning/blob/main/data/pizza_steak_sushi.zip) (if the data doesn't already exist) except this time its been functionised.\n",
+"To do so, we'll use similar code to the previous section to download the [`pizza_steak_sushi.zip`](https://github.com/mrdbourke/pytorch-deep-learning/blob/main/data/pizza_steak_sushi.zip) (if the data doesn't already exist) except this time it's been functionalised.\n",
 "\n",
 "This will allow us to use it again later. "
 ]
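
As a rough idea of what such functionalised download code can look like, here is a hedged sketch (the helper name `download_data` and its exact behaviour are assumptions for illustration, not the notebook's verbatim code):

```python
import zipfile
from pathlib import Path

import requests

def download_data(source: str, destination: str) -> Path:
    """Downloads a zipped dataset from source and unzips it to data/destination (sketch)."""
    data_path = Path("data/")
    image_path = data_path / destination

    if image_path.is_dir():
        # Skip the download if the data already exists
        print(f"{image_path} already exists, skipping download.")
    else:
        image_path.mkdir(parents=True, exist_ok=True)
        target_file = Path(source).name
        # Download the zip file
        with open(data_path / target_file, "wb") as f:
            f.write(requests.get(source).content)
        # Unzip it into the destination directory
        with zipfile.ZipFile(data_path / target_file, "r") as zip_ref:
            zip_ref.extractall(image_path)
    return image_path
```
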
@@ -421,7 +421,7 @@
 "\n",
 "And since we'll be using transfer learning and specifically pretrained models from [`torchvision.models`](https://pytorch.org/vision/stable/models.html), we'll create a transform to prepare our images correctly.\n",
 "\n",
-"To transform our images in tensors, we can use:\n",
+"To transform our images into tensors, we can use:\n",
 "1. Manually created transforms using `torchvision.transforms`.\n",
 "2. Automatically created transforms using `torchvision.models.MODEL_NAME.MODEL_WEIGHTS.DEFAULT.transforms()`.\n",
 " * Where `MODEL_NAME` is a specific `torchvision.models` architecture, `MODEL_WEIGHTS` is a specific set of pretrained weights and `DEFAULT` means the \"best available weights\".\n",
@@ -959,7 +959,7 @@
 "source": [
 "> **Note:** You might notice the results here are slightly different to what our model got in 06. PyTorch Transfer Learning. The difference comes from using the `engine.train()` and our modified `train()` function. Can you guess why? The [PyTorch documentation on randomness](https://pytorch.org/docs/stable/notes/randomness.html) may help more.\n",
 "\n",
-"Running the cell above we get similar outputs we got in [06. PyTorch Transfer Learning section 4: Train model](https://www.learnpytorch.io/06_pytorch_transfer_learning/#4-train-model) but the difference is behind the scenes our `writer` instance has created a `runs/` directory storing our model's results.\n",
+"Running the cell above we get similar outputs we got in [06. PyTorch Transfer Learning section 4: Train model](https://www.learnpytorch.io/06_pytorch_transfer_learning/#4-train-model) but the difference is that behind the scenes our `writer` instance has created a `runs/` directory storing our model's results.\n",
 "\n",
 "For example, the save location might look like:\n",
 "\n",
@@ -1361,7 +1361,7 @@
 "\n",
 "With practice and running many different experiments, you'll start to build an intuition of what *might* help your model.\n",
 "\n",
-"I say *might* on purpose because there's no guarantees.\n",
+"I say *might* on purpose because there's no guarantee.\n",
 "\n",
 "But generally, in light of [*The Bitter Lesson*](http://www.incompleteideas.net/IncIdeas/BitterLesson.html) (I've mentioned this twice now because it's an important essay in the world of AI), generally the bigger your model (more learnable parameters) and the more data you have (more opportunities to learn), the better the performance.\n",
 "\n",
@@ -1692,7 +1692,7 @@
 "\n",
 "# Create an EffNetB0 feature extractor\n",
 "def create_effnetb0():\n",
-"    # 1. Get the base mdoel with pretrained weights and send to target device\n",
+"    # 1. Get the base model with pretrained weights and send to target device\n",
 "    weights = torchvision.models.EfficientNet_B0_Weights.DEFAULT\n",
 "    model = torchvision.models.efficientnet_b0(weights=weights).to(device)\n",
 "\n",
@@ -2417,7 +2417,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Looks like our best model so far is 29 MB in size. We'll keep this in mind if we wanted to deploy it later on.\n",
+"Looks like our best model so far is 29 MB in size. We'll keep this in mind if we want to deploy it later on.\n",
 "\n",
 "Time to make and visualize some predictions.\n",
 "\n",
@@ -2595,7 +2595,7 @@
 "\n",
 "The main ideas you should take away from this Milestone Project 1 are:\n",
 "\n",
-"* The machine learning practioner's motto: *experiment, experiment, experiment!* (though we've been doing plenty of this already).\n",
+"* The machine learning practitioner's motto: *experiment, experiment, experiment!* (though we've been doing plenty of this already).\n",
 "* In the beginning, keep your experiments small so you can work fast, your first few experiments shouldn't take more than a few seconds to a few minutes to run.\n",
 "* The more experiments you do, the quicker you can figure out what *doesn't* work.\n",
 "* Scale up when you find something that works. For example, since we've found a pretty good performing model with EffNetB2 as a feature extractor, perhaps you'd now like to see what happens when you scale it up to the whole [Food101 dataset](https://pytorch.org/vision/main/generated/torchvision.datasets.Food101.html) from `torchvision.datasets`.\n",
@@ -2666,7 +2666,7 @@
 "NUM_WORKERS = os.cpu_count() # use maximum number of CPUs for workers to load data \n",
 "\n",
 "# Note: this is an update version of data_setup.create_dataloaders to handle\n",
-"# differnt train and test transforms.\n",
+"# different train and test transforms.\n",
 "def create_dataloaders(\n",
 "    train_dir, \n",
 "    test_dir, \n",
