3747 | 3747 |   "\n",
3748 | 3748 |   "We'll go from three classes to 101!\n",
3749 | 3749 |   "\n",
3750 |      | - "From pizza, steak, sushi to pizza, steak, sushi, hot dog, apple pie, carrot cake, chocolate cake, french fires, garlic bread, ramen, nachos, tacos and more!\n",
     | 3750 | + "From pizza, steak, sushi to pizza, steak, sushi, hot dog, apple pie, carrot cake, chocolate cake, french fries, garlic bread, ramen, nachos, tacos and more!\n",
3751 | 3751 |   "\n",
3752 | 3752 |   "How?\n",
3753 | 3753 |   "\n",

3816 | 3816 |   " \n",
3817 | 3817 |   "Nice!\n",
3818 | 3818 |   "\n",
3819 |      | - "See how just like our EffNetB2 model for FoodVision Mini the base layers are frozen (these are pretrained on ImageNet) and the outer layers (the `classifier` layers) are trainble with an output shape of `[batch_size, 101]` (`101` for 101 classes in Food101). \n",
     | 3819 | + "See how just like our EffNetB2 model for FoodVision Mini the base layers are frozen (these are pretrained on ImageNet) and the outer layers (the `classifier` layers) are trainable with an output shape of `[batch_size, 101]` (`101` for 101 classes in Food101). \n",
3820 | 3820 |   "\n",
3821 | 3821 |   "Now since we're going to be dealing with a fair bit more data than usual, how about we add a little data augmentation to our transforms (`effnetb2_transforms`) to augment the training data.\n",
3822 | 3822 |   "\n",

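The cell above describes the frozen-base/trainable-head pattern plus augmented training transforms. As a minimal sketch (assuming torchvision's multi-weights API is available; `1408` is EffNetB2's classifier in-features, and variable names beyond `effnetb2_transforms` are illustrative):

```python
import torchvision
from torch import nn

# Pretrained EffNetB2 weights and their matching inference transforms
weights = torchvision.models.EfficientNet_B2_Weights.DEFAULT
effnetb2_transforms = weights.transforms()
effnetb2 = torchvision.models.efficientnet_b2(weights=weights)

# Freeze the base layers (pretrained on ImageNet)
for param in effnetb2.features.parameters():
    param.requires_grad = False

# Swap the classifier head for a trainable 101-class output -> [batch_size, 101]
effnetb2.classifier = nn.Sequential(
    nn.Dropout(p=0.3, inplace=True),
    nn.Linear(in_features=1408, out_features=101),  # 1408 = EffNetB2 feature dim
)

# Add a little data augmentation in front of the base transforms for training
food101_train_transforms = torchvision.transforms.Compose([
    torchvision.transforms.TrivialAugmentWide(),
    effnetb2_transforms,
])
```
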
4371 | 4371 |   "\n",
4372 | 4372 |   "Our FoodVision Big model is capable of classifying 101 classes versus FoodVision Mini's 3 classes, a 33.6x increase!\n",
4373 | 4373 |   "\n",
4374 |      | - "How does this effect the model size?\n",
     | 4374 | + "How does this affect the model size?\n",
4375 | 4375 |   "\n",
4376 | 4376 |   "Let's find out."
4377 | 4377 |   ]

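One way to answer the model size question is to check the saved state dict's footprint on disk, a quick sketch using the model path named later in the notebook:

```python
from pathlib import Path

# Check the saved FoodVision Big model's size on disk
model_path = Path("09_pretrained_effnetb2_feature_extractor_food101_20_percent.pth")
model_size_mb = model_path.stat().st_size // (1024 * 1024)  # bytes -> megabytes
print(f"FoodVision Big model size: {model_size_mb} MB")
```
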
4448 | 4448 |   "* `app.py` contains our FoodVision Big Gradio app.\n",
4449 | 4449 |   "* `class_names.txt` contains all of the class names for FoodVision Big.\n",
4450 | 4450 |   "* `examples/` contains example images to use with our Gradio app.\n",
4451 |      | - "* `model.py` contains the model defintion as well as any transforms associated with the model.\n",
     | 4451 | + "* `model.py` contains the model definition as well as any transforms associated with the model.\n",
4452 | 4452 |   "* `requirements.txt` contains the dependencies to run our app such as `torch`, `torchvision` and `gradio`."
4453 | 4453 |   ]
4454 | 4454 |   },

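To scaffold that file structure, a small `pathlib` sketch (directory names taken from the list above) could look like:

```python
from pathlib import Path

# Create the FoodVision Big demo folder and its examples/ subdirectory
foodvision_big_demo_path = Path("demos/foodvision_big/")
(foodvision_big_demo_path / "examples").mkdir(parents=True, exist_ok=True)

# Target layout:
# demos/foodvision_big/
# ├── app.py
# ├── class_names.txt
# ├── examples/
# ├── model.py
# └── requirements.txt
```
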
4521 | 4521 |   "source": [
4522 | 4522 |   "### 11.2 Saving Food101 class names to file (`class_names.txt`)\n",
4523 | 4523 |   "\n",
4524 |      | - "Because there are so many classes in the Food101 dataset, instead of storing them as a list in our `app.py` file, let's saved them to a `.txt` file and read them in when necessary instead.\n",
     | 4524 | + "Because there are so many classes in the Food101 dataset, instead of storing them as a list in our `app.py` file, let's save them to a `.txt` file and read them in when necessary instead.\n",
4525 | 4525 |   "\n",
4526 | 4526 |   "We'll just remind ourselves what they look like first by checking out `food101_class_names`."
4527 | 4527 |   ]

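The save-and-reload pattern this cell describes might look like the following sketch (assuming `food101_class_names` is the list of 101 class name strings from earlier in the notebook):

```python
from pathlib import Path

class_names_path = Path("demos/foodvision_big/class_names.txt")

# Write one class name per line
with open(class_names_path, "w") as f:
    f.write("\n".join(food101_class_names))

# Later (e.g. inside app.py), read them back in
with open(class_names_path, "r") as f:
    class_names = [name.strip() for name in f.readlines()]
```
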
4708 | 4708 |   "1. **Imports and class names setup** - The `class_names` variable will be a list for all of the Food101 classes rather than pizza, steak, sushi. We can access these via `demos/foodvision_big/class_names.txt`.\n",
4709 | 4709 |   "2. **Model and transforms preparation** - The `model` will have `num_classes=101` rather than `num_classes=3`. We'll also be sure to load the weights from `\"09_pretrained_effnetb2_feature_extractor_food101_20_percent.pth\"` (our FoodVision Big model path).\n",
4710 | 4710 |   "3. **Predict function** - This will stay the same as FoodVision Mini's `app.py`.\n",
4711 |      | - "4. **Gradio app** - The Gradio interace will have different `title`, `description` and `article` parameters to reflect the details of FoodVision Big.\n",
     | 4711 | + "4. **Gradio app** - The Gradio interface will have different `title`, `description` and `article` parameters to reflect the details of FoodVision Big.\n",
4712 | 4712 |   "\n",
4713 | 4713 |   "We'll also make sure to save it to `demos/foodvision_big/app.py` using the `%%writefile` magic command."
4714 | 4714 |   ]

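As a sketch of step 4, the Gradio `Interface` could be wired up like this (component classes may differ slightly between Gradio versions; `predict` and `example_list` are assumed to be defined earlier in `app.py`, and the `title`/`description`/`article` strings are placeholders):

```python
import gradio as gr

demo = gr.Interface(
    fn=predict,  # prediction function from step 3
    inputs=gr.Image(type="pil"),
    outputs=[gr.Label(num_top_classes=5, label="Predictions"),
             gr.Number(label="Prediction time (s)")],
    examples=example_list,
    title="FoodVision Big",
    description="An EffNetB2 feature extractor that classifies food images into 101 classes.",
    article="Created as part of a PyTorch model deployment walkthrough.",
)

demo.launch()
```
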
4962 | 4962 |   }
4963 | 4963 |   ],
4964 | 4964 |   "source": [
4965 |      | - "# IPython is a library to help work with Python iteractively \n",
     | 4965 | + "# IPython is a library to help work with Python interactively\n",
4966 | 4966 |   "from IPython.display import IFrame\n",
4967 | 4967 |   "\n",
4968 | 4968 |   "# Embed FoodVision Big Gradio demo as an iFrame\n",

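The hunk cuts off before the embed call itself; a minimal sketch of the `IFrame` usage (the Space URL below is a placeholder, swap in your own username and Space name):

```python
from IPython.display import IFrame

# Embed a hosted Gradio demo from Hugging Face Spaces in the notebook
IFrame(src="https://hf.space/embed/YOUR_USERNAME/foodvision_big/+",
       width=900,
       height=750)
```
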
5024 | 5024 |   " * What model architecture does it use?\n",
5025 | 5025 |   "6. Write down 1-3 potential failure points of our deployed FoodVision models and what some potential solutions might be.\n",
5026 | 5026 |   " * For example, what happens if someone was to upload a photo that wasn't of food to our FoodVision Mini model?\n",
5027 |      | - "7. Pick any dataset from [`torchvision.datasets`](https://pytorch.org/vision/stable/datasets.html) and train a feature extractor model on it using a model from [`torchvision.models`](https://pytorch.org/vision/stable/models.html) (you could use one of the model's we've already created, e.g. EffNetB2 or ViT) for 5 epochs and then deploy your model as a Gradio app to Hugging Face Spaces. \n",
     | 5027 | + "7. Pick any dataset from [`torchvision.datasets`](https://pytorch.org/vision/stable/datasets.html) and train a feature extractor model on it using a model from [`torchvision.models`](https://pytorch.org/vision/stable/models.html) (you could use one of the models we've already created, e.g. EffNetB2 or ViT) for 5 epochs and then deploy your model as a Gradio app to Hugging Face Spaces. \n",
5028 | 5028 |   " * You may want to pick smaller dataset/make a smaller split of it so training doesn't take too long.\n",
5029 | 5029 |   " * I'd love to see your deployed models! So be sure to share them in Discord or on the [course GitHub Discussions page](https://github.com/mrdbourke/pytorch-deep-learning/discussions)."

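For exercise 7's smaller-split hint, one option is `torch.utils.data.random_split`, sketched here with CIFAR10 purely as an example dataset choice:

```python
import torch
import torchvision

# Download an example dataset (any torchvision.datasets entry works)
transform = torchvision.transforms.ToTensor()
full_train = torchvision.datasets.CIFAR10(root="data", train=True,
                                          download=True, transform=transform)

# Keep a random 20% so feature extractor training stays quick
small_len = int(0.2 * len(full_train))
small_train, _ = torch.utils.data.random_split(
    full_train,
    lengths=[small_len, len(full_train) - small_len],
    generator=torch.Generator().manual_seed(42),
)
```
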
5043 | 5043 |   " * The [Gradio Blocks API](https://gradio.app/docs/#blocks) for more advanced workflows.\n",
5044 | 5044 |   " * The Hugging Face Course chapter on [how to use Gradio with Hugging Face](https://huggingface.co/course/chapter9/1).\n",
5045 | 5045 |   "* Edge devices aren't limited to mobile phones, they include small computers like the Raspberry Pi and the PyTorch team have a [fantastic blog post tutorial](https://pytorch.org/tutorials/intermediate/realtime_rpi.html) on deploying a PyTorch model to one.\n",
5046 |      | - "* For a fanstastic guide on developing AI and ML-powered applications, see [Google's People + AI Guidebook](https://pair.withgoogle.com/guidebook). One of my favourites is the section on [setting the right expectations](https://pair.withgoogle.com/guidebook/patterns#set-the-right-expectations).\n",
     | 5046 | + "* For a fantastic guide on developing AI and ML-powered applications, see [Google's People + AI Guidebook](https://pair.withgoogle.com/guidebook). One of my favourites is the section on [setting the right expectations](https://pair.withgoogle.com/guidebook/patterns#set-the-right-expectations).\n",
5047 | 5047 |   " * I covered more of these kinds of resources, including guides from Apple, Microsoft and more in the [April 2021 edition of Machine Learning Monthly](https://zerotomastery.io/blog/machine-learning-monthly-april-2021/) (a monthly newsletter I send out with the latest and greatest of the ML field).\n",
5048 | 5048 |   "* If you'd like to speed up your model's runtime on CPU, you should be aware of [TorchScript](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html), [ONNX](https://pytorch.org/docs/stable/onnx.html) (Open Neural Network Exchange) and [OpenVINO](https://docs.openvino.ai/latest/notebooks/102-pytorch-onnx-to-openvino-with-output.html). Going from pure PyTorch to ONNX/OpenVINO models I've seen a ~2x+ increase in performance.\n",
5049 | 5049 |   "* For turning models into a deployable and scalable API, see the [TorchServe library](https://pytorch.org/serve/).\n",

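The ONNX route mentioned above starts with `torch.onnx.export`; a minimal sketch (assuming `effnetb2` is the trained model and the output filename is arbitrary):

```python
import torch

effnetb2.eval()
dummy_input = torch.randn(1, 3, 288, 288)  # EffNetB2's default 288x288 input

# Export to ONNX, then run with ONNX Runtime or convert to OpenVINO for CPU speedups
torch.onnx.export(effnetb2,
                  dummy_input,
                  "foodvision_big_effnetb2.onnx",
                  input_names=["input"],
                  output_names=["output"])
```
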