
Commit 72a625f

Merge pull request #1063 from pritesh2000/gram-1/09
09_pytorch_model_deployment.ipynb
2 parents 85a4644 + efe3af4 commit 72a625f

File tree

1 file changed: +23 -23 lines changed


09_pytorch_model_deployment.ipynb

Lines changed: 23 additions & 23 deletions
@@ -205,10 +205,10 @@
 "\n",
 "These two scenarios are generally referred to as:\n",
 "\n",
-"* **Online (real-time)** - Predicitions/inference happen **immediately**. For example, someone uploads an image, the image gets transformed and predictions are returned or someone makes a purchase and the transaction is verified to be non-fradulent by a model so the purchase can go through.\n",
+"* **Online (real-time)** - Predictions/inference happen **immediately**. For example, someone uploads an image, the image gets transformed and predictions are returned or someone makes a purchase and the transaction is verified to be non-fraudulent by a model so the purchase can go through.\n",
 "* **Offline (batch)** - Predictions/inference happen **periodically**. For example, a photos application sorts your images into different categories (such as beach, mealtime, family, friends) whilst your mobile device is plugged into charge.\n",
 "\n",
-"> **Note:** \"Batch\" refers to inference being performed on multiple samples at a time. However, to add a little confusion, batch processing can happen immediately/online (multiple images being classified at once) and/or offline (mutliple images being predicted/trained on at once). \n",
+"> **Note:** \"Batch\" refers to inference being performed on multiple samples at a time. However, to add a little confusion, batch processing can happen immediately/online (multiple images being classified at once) and/or offline (multiple images being predicted/trained on at once). \n",
 "\n",
 "The main difference between each being: predictions being made immediately or periodically.\n",
 "\n",
@@ -631,7 +631,7 @@
 "source": [
 "Excellent! To change the classifier head to suit our own problem, let's replace the `out_features` variable with the same number of classes we have (in our case, `out_features=3`, one for pizza, steak, sushi).\n",
 "\n",
-"> **Note:** This process of changing the output layers/classifier head will be dependent on the problem you're working on. For example, if you wanted a different *number* of outputs or a different *kind* of ouput, you would have to change the output layers accordingly. "
+"> **Note:** This process of changing the output layers/classifier head will be dependent on the problem you're working on. For example, if you wanted a different *number* of outputs or a different *kind* of output, you would have to change the output layers accordingly. "
 ]
 },
 {
@@ -663,7 +663,7 @@
 "\n",
 "We'll call it `create_effnetb2_model()` and it'll take a customizable number of classes and a random seed parameter for reproducibility.\n",
 "\n",
-"Ideally, it will return an EffNetB2 feature extractor along with its assosciated transforms."
+"Ideally, it will return an EffNetB2 feature extractor along with its associated transforms."
 ]
 },
 {
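
For reference, the cell changed above describes what `create_effnetb2_model()` should do. A minimal sketch of such a function (assuming the torchvision v0.13+ weights API; the notebook's exact implementation may differ slightly) could look like:

import torch
import torchvision
from torch import nn

def create_effnetb2_model(num_classes: int = 3, seed: int = 42):
    # 1. Get pretrained weights and the transforms used to create them
    weights = torchvision.models.EfficientNet_B2_Weights.DEFAULT
    transforms = weights.transforms()

    # 2. Create the model and freeze all of the base layers
    model = torchvision.models.efficientnet_b2(weights=weights)
    for param in model.parameters():
        param.requires_grad = False

    # 3. Seed for reproducibility, then swap the classifier head for our number of classes
    torch.manual_seed(seed)
    model.classifier = nn.Sequential(
        nn.Dropout(p=0.3, inplace=True),
        nn.Linear(in_features=1408, out_features=num_classes),  # 1408 = EffNetB2 feature dimension
    )
    return model, transforms

effnetb2, effnetb2_transforms = create_effnetb2_model(num_classes=3, seed=42)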
@@ -1573,7 +1573,7 @@
 "2. Create an empty list to store prediction dictionaries (we want the function to return a list of dictionaries, one for each prediction).\n",
 "3. Loop through the target input paths (steps 4-14 will happen inside the loop).\n",
 "4. Create an empty dictionary for each iteration in the loop to store prediction values per sample.\n",
-"5. Get the sample path and ground truth class name (we can do this by infering the class from the path).\n",
+"5. Get the sample path and ground truth class name (we can do this by inferring the class from the path).\n",
 "6. Start the prediction timer using Python's [`timeit.default_timer()`](https://docs.python.org/3/library/timeit.html#timeit.default_timer).\n",
 "7. Open the image using [`PIL.Image.open(path)`](https://pillow.readthedocs.io/en/stable/reference/Image.html#functions).\n",
 "8. Transform the image so it's capable of being used with the target model as well as add a batch dimension and send the image to the target device.\n",
@@ -1611,7 +1611,7 @@
 " class_names: List[str], \n",
 " device: str = \"cuda\" if torch.cuda.is_available() else \"cpu\") -> List[Dict]:\n",
 " \n",
-" # 2. Create an empty list to store prediction dictionaires\n",
+" # 2. Create an empty list to store prediction dictionaries\n",
 " pred_list = []\n",
 " \n",
 " # 3. Loop through target paths\n",
@@ -2298,7 +2298,7 @@
 "\n",
 "To do so, let's turn our `effnetb2_stats` and `vit_stats` dictionaries into a pandas DataFrame.\n",
 "\n",
-"We'll add a column to view the model names as well as the convert the test accuracy to a whole percentage rather than decimal."
+"We'll add a column to view the model names as well as convert the test accuracy to a whole percentage rather than decimal."
 ]
 },
 {
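
The changed cell above builds the comparison DataFrame. As a rough sketch (the dictionary values below are dummy placeholders for illustration only, not the notebook's actual results):

import pandas as pd

# Placeholder stats dictionaries standing in for the ones computed in the notebook
effnetb2_stats = {"test_loss": 0.30, "test_acc": 0.95, "number_of_parameters": 7_700_000, "model_size (MB)": 30}
vit_stats = {"test_loss": 0.10, "test_acc": 0.98, "number_of_parameters": 85_800_000, "model_size (MB)": 330}

# Turn the dictionaries into a DataFrame, one row per model
df = pd.DataFrame([effnetb2_stats, vit_stats])

# Add a column for the model names and convert test accuracy to a whole percentage
df["model"] = ["EffNetB2", "ViT"]
df["test_acc"] = round(df["test_acc"] * 100, 2)

print(df)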
@@ -2598,7 +2598,7 @@
 "\n",
 "Why create a demo of your models?\n",
 "\n",
-"Because metrics on the test set look nice but you never really know how you're model performs until you use it in the wild.\n",
+"Because metrics on the test set look nice but you never really know how your model performs until you use it in the wild.\n",
 "\n",
 "So let's get deploying!\n",
 "\n",
@@ -2700,7 +2700,7 @@
 "\n",
 "We created a function earlier called `pred_and_store()` to make predictions with a given model across a list of target files and store them in a list of dictionaries.\n",
 "\n",
-"How about we create a similar function but this time focusing on a making a prediction on a single image with our EffNetB2 model?\n",
+"How about we create a similar function but this time focusing on making a prediction on a single image with our EffNetB2 model?\n",
 "\n",
 "More specifically, we want a function that takes an image as input, preprocesses (transforms) it, makes a prediction with EffNetB2 and then returns the prediction (pred or pred label for short) as well as the prediction probability (pred prob).\n",
 "\n",
@@ -3067,9 +3067,9 @@
 "Where:\n",
 "* `09_pretrained_effnetb2_feature_extractor_pizza_steak_sushi_20_percent.pth` is our trained PyTorch model file.\n",
 "* `app.py` contains our Gradio app (similar to the code that launched the app).\n",
-" * **Note:** `app.py` is the default filename used for Hugging Face Spaces, if you deploy your app there, Spaces will by default look for a file called `app.py` to run. This is changable in settings.\n",
+" * **Note:** `app.py` is the default filename used for Hugging Face Spaces, if you deploy your app there, Spaces will by default look for a file called `app.py` to run. This is changeable in settings.\n",
 "* `examples/` contains example images to use with our Gradio app.\n",
-"* `model.py` contains the model defintion as well as any transforms assosciated with the model.\n",
+"* `model.py` contains the model definition as well as any transforms associated with the model.\n",
 "* `requirements.txt` contains the dependencies to run our app such as `torch`, `torchvision` and `gradio`.\n",
 "\n",
 "Why this way?\n",
@@ -3511,7 +3511,7 @@
 "\n",
 "Feel free to read the documentation on both options but we're going to go with option two.\n",
 "\n",
-"> **Note:** To host anything on Hugging Face, you will to [sign up for a free Hugging Face account](https://huggingface.co/join). "
+"> **Note:** To host anything on Hugging Face, you will need to [sign up for a free Hugging Face account](https://huggingface.co/join). "
 ]
 },
 {
@@ -3657,7 +3657,7 @@
 "source": [
 "### 9.3 Uploading to Hugging Face\n",
 "\n",
-"We've verfied our FoodVision Mini app works locally, however, the fun of creating a machine learning demo is to show it to other people and allow them to use it.\n",
+"We've verified our FoodVision Mini app works locally, however, the fun of creating a machine learning demo is to show it to other people and allow them to use it.\n",
 "\n",
 "To do so, we're going to upload our FoodVision Mini demo to Hugging Face. \n",
 "\n",
@@ -3747,7 +3747,7 @@
 "\n",
 "We'll go from three classes to 101!\n",
 "\n",
-"From pizza, steak, sushi to pizza, steak, sushi, hot dog, apple pie, carrot cake, chocolate cake, french fires, garlic bread, ramen, nachos, tacos and more!\n",
+"From pizza, steak, sushi to pizza, steak, sushi, hot dog, apple pie, carrot cake, chocolate cake, french fries, garlic bread, ramen, nachos, tacos and more!\n",
 "\n",
 "How?\n",
 "\n",
@@ -3816,7 +3816,7 @@
 
 "Nice!\n",
 "\n",
-"See how just like our EffNetB2 model for FoodVision Mini the base layers are frozen (these are pretrained on ImageNet) and the outer layers (the `classifier` layers) are trainble with an ouput shape of `[batch_size, 101]` (`101` for 101 classes in Food101). \n",
+"See how just like our EffNetB2 model for FoodVision Mini the base layers are frozen (these are pretrained on ImageNet) and the outer layers (the `classifier` layers) are trainable with an output shape of `[batch_size, 101]` (`101` for 101 classes in Food101). \n",
 "\n",
 "Now since we're going to be dealing with a fair bit more data than usual, how about we add a little data augmentation to our transforms (`effnetb2_transforms`) to augment the training data.\n",
 "\n",
@@ -4371,7 +4371,7 @@
 "\n",
 "Our FoodVision Big model is capable of classifying 101 classes versus FoodVision Mini's 3 classes, a 33.6x increase!\n",
 "\n",
-"How does this effect the model size?\n",
+"How does this affect the model size?\n",
 "\n",
 "Let's find out."
 ]
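
Checking the effect on model size can be done with `pathlib` once the state dict has been saved, for example (the filename is the one mentioned elsewhere in this diff; the `models/` directory is an assumption):

from pathlib import Path

# Path to the saved FoodVision Big model (filename from the notebook, folder assumed)
model_path = Path("models/09_pretrained_effnetb2_feature_extractor_food101_20_percent.pth")

# st_size is in bytes, so convert to megabytes
model_size_mb = model_path.stat().st_size // (1024 * 1024)
print(f"FoodVision Big model size: {model_size_mb} MB")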
@@ -4448,7 +4448,7 @@
 "* `app.py` contains our FoodVision Big Gradio app.\n",
 "* `class_names.txt` contains all of the class names for FoodVision Big.\n",
 "* `examples/` contains example images to use with our Gradio app.\n",
-"* `model.py` contains the model defintion as well as any transforms assosciated with the model.\n",
+"* `model.py` contains the model definition as well as any transforms associated with the model.\n",
 "* `requirements.txt` contains the dependencies to run our app such as `torch`, `torchvision` and `gradio`."
 ]
 },
@@ -4521,7 +4521,7 @@
 "source": [
 "### 11.2 Saving Food101 class names to file (`class_names.txt`)\n",
 "\n",
-"Because there are so many classes in the Food101 dataset, instead of storing them as a list in our `app.py` file, let's saved them to a `.txt` file and read them in when necessary instead.\n",
+"Because there are so many classes in the Food101 dataset, instead of storing them as a list in our `app.py` file, let's save them to a `.txt` file and read them in when necessary instead.\n",
 "\n",
 "We'll just remind ourselves what they look like first by checking out `food101_class_names`."
 ]
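
The section above saves the 101 class names to a text file and reads them back later. A minimal sketch, assuming `food101_class_names` is a list of 101 strings and the `demos/foodvision_big/` folder already exists:

from pathlib import Path

class_names_path = Path("demos/foodvision_big/class_names.txt")

# Write one class name per line
with open(class_names_path, "w") as f:
    f.write("\n".join(food101_class_names))

# Later (e.g. inside app.py), read them back in
with open(class_names_path, "r") as f:
    class_names = [name.strip() for name in f.readlines()]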
@@ -4708,7 +4708,7 @@
 "1. **Imports and class names setup** - The `class_names` variable will be a list for all of the Food101 classes rather than pizza, steak, sushi. We can access these via `demos/foodvision_big/class_names.txt`.\n",
 "2. **Model and transforms preparation** - The `model` will have `num_classes=101` rather than `num_classes=3`. We'll also be sure to load the weights from `\"09_pretrained_effnetb2_feature_extractor_food101_20_percent.pth\"` (our FoodVision Big model path).\n",
 "3. **Predict function** - This will stay the same as FoodVision Mini's `app.py`.\n",
-"4. **Gradio app** - The Gradio interace will have different `title`, `description` and `article` parameters to reflect the details of FoodVision Big.\n",
+"4. **Gradio app** - The Gradio interface will have different `title`, `description` and `article` parameters to reflect the details of FoodVision Big.\n",
 "\n",
 "We'll also make sure to save it to `demos/foodvision_big/app.py` using the `%%writefile` magic command."
 ]
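
Point 4 above covers the Gradio interface itself. A sketch of what the Gradio portion of `app.py` might look like (the `title`, `description` and `article` strings are placeholders, `predict` is the single-image function sketched earlier, and an `examples/` folder is assumed to sit next to `app.py`):

import os

import gradio as gr

# Placeholder text, the real app.py would describe FoodVision Big properly
title = "FoodVision Big"
description = "An EfficientNetB2 feature extractor to classify images of food into 101 classes."
article = "Created as part of the PyTorch model deployment notebook (09)."

# Build the example list from the examples/ directory
example_list = [["examples/" + example] for example in os.listdir("examples")]

demo = gr.Interface(
    fn=predict,  # the single-image prediction function sketched earlier
    inputs=gr.Image(type="pil"),
    outputs=[gr.Label(num_top_classes=5, label="Predictions"),
             gr.Number(label="Prediction time (s)")],
    examples=example_list,
    title=title,
    description=description,
    article=article,
)

demo.launch()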
@@ -4907,7 +4907,7 @@
 "4. Select a license (I used [MIT](https://opensource.org/licenses/MIT)).\n",
 "5. Select Gradio as the Space SDK (software development kit). \n",
 " * **Note:** You can use other options such as Streamlit but since our app is built with Gradio, we'll stick with that.\n",
-"6. Choose whether your Space is it's public or private (I selected public since I'd like my Space to be available to others).\n",
+"6. Choose whether your Space is public or private (I selected public since I'd like my Space to be available to others).\n",
 "7. Click \"Create Space\".\n",
 "8. Clone the repo locally by running: `git clone https://huggingface.co/spaces/[YOUR_USERNAME]/[YOUR_SPACE_NAME]` in terminal or command prompt.\n",
 " * **Note:** You can also add files via uploading them under the \"Files and versions\" tab.\n",
@@ -4962,7 +4962,7 @@
 }
 ],
 "source": [
-"# IPython is a library to help work with Python iteractively \n",
+"# IPython is a library to help work with Python interactively\n",
 "from IPython.display import IFrame\n",
 "\n",
 "# Embed FoodVision Big Gradio demo as an iFrame\n",
@@ -5024,7 +5024,7 @@
 " * What model architecture does it use?\n",
 "6. Write down 1-3 potential failure points of our deployed FoodVision models and what some potential solutions might be.\n",
 " * For example, what happens if someone was to upload a photo that wasn't of food to our FoodVision Mini model?\n",
-"7. Pick any dataset from [`torchvision.datasets`](https://pytorch.org/vision/stable/datasets.html) and train a feature extractor model on it using a model from [`torchvision.models`](https://pytorch.org/vision/stable/models.html) (you could use one of the model's we've already created, e.g. EffNetB2 or ViT) for 5 epochs and then deploy your model as a Gradio app to Hugging Face Spaces. \n",
+"7. Pick any dataset from [`torchvision.datasets`](https://pytorch.org/vision/stable/datasets.html) and train a feature extractor model on it using a model from [`torchvision.models`](https://pytorch.org/vision/stable/models.html) (you could use one of the models we've already created, e.g. EffNetB2 or ViT) for 5 epochs and then deploy your model as a Gradio app to Hugging Face Spaces. \n",
 " * You may want to pick smaller dataset/make a smaller split of it so training doesn't take too long.\n",
 " * I'd love to see your deployed models! So be sure to share them in Discord or on the [course GitHub Discussions page](https://github.com/mrdbourke/pytorch-deep-learning/discussions)."
 ]
@@ -5043,7 +5043,7 @@
 " * The [Gradio Blocks API](https://gradio.app/docs/#blocks) for more advanced workflows.\n",
 " * The Hugging Face Course chapter on [how to use Gradio with Hugging Face](https://huggingface.co/course/chapter9/1).\n",
 "* Edge devices aren't limited to mobile phones, they include small computers like the Raspberry Pi and the PyTorch team have a [fantastic blog post tutorial](https://pytorch.org/tutorials/intermediate/realtime_rpi.html) on deploying a PyTorch model to one.\n",
-"* For a fanstastic guide on developing AI and ML-powered applications, see [Google's People + AI Guidebook](https://pair.withgoogle.com/guidebook). One of my favourites is the section on [setting the right expectations](https://pair.withgoogle.com/guidebook/patterns#set-the-right-expectations).\n",
+"* For a fantastic guide on developing AI and ML-powered applications, see [Google's People + AI Guidebook](https://pair.withgoogle.com/guidebook). One of my favourites is the section on [setting the right expectations](https://pair.withgoogle.com/guidebook/patterns#set-the-right-expectations).\n",
 " * I covered more of these kinds of resources, including guides from Apple, Microsoft and more in the [April 2021 edition of Machine Learning Monthly](https://zerotomastery.io/blog/machine-learning-monthly-april-2021/) (a monthly newsletter I send out with the latest and greatest of the ML field).\n",
 "* If you'd like to speed up your model's runtime on CPU, you should be aware of [TorchScript](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html), [ONNX](https://pytorch.org/docs/stable/onnx.html) (Open Neural Network Exchange) and [OpenVINO](https://docs.openvino.ai/latest/notebooks/102-pytorch-onnx-to-openvino-with-output.html). Going from pure PyTorch to ONNX/OpenVINO models I've seen a ~2x+ increase in performance.\n",
 "* For turning models into a deployable and scalable API, see the [TorchServe library](https://pytorch.org/serve/).\n",
