
Commit 53af15b

updating docs for object_detection for v1 (#284)
Signed-off-by: greg pereira <grpereir@redhat.com>
1 parent dc885a6 commit 53af15b

2 files changed, +63 -29 lines changed

recipes/audio/audio_to_text/README.md

Lines changed: 2 additions & 2 deletions
@@ -15,7 +15,7 @@ The [Podman Desktop](https://podman-desktop.io) [AI Lab Extension](https://githu

# Build the Application

-The rest of this document will explain how to build and run the application from the terminal, and will go into greater detail on how each container in the Pod above is built, run, and what purpose it serves in the overall application. All the recipes use a central [Makefile](../../common/Makefile.common) that includes variables populated with default values to simplify getting started. Please review the [Makefile docs](../../common/README.md), to learn about further customizing your application.
+The rest of this document will explain how to build and run the application from the terminal, and will go into greater detail on how each container in the application above is built, run, and what purpose it serves in the overall application. All the recipes use a central [Makefile](../../common/Makefile.common) that includes variables populated with default values to simplify getting started. Please review the [Makefile docs](../../common/README.md) to learn about further customizing your application.

* [Download a model](#download-a-model)
* [Build the Model Service](#build-the-model-service)
@@ -88,7 +88,7 @@ Once the streamlit application is up and running, you should be able to access i
From here, you can upload audio files from your local machine and translate the audio files as shown below.

By using this recipe and getting this starting point established,
-users should now have an easier time customizing and building their own LLM enabled applications.
+users should now have an easier time customizing and building their own AI enabled applications.

#### Input audio files
Lines changed: 61 additions & 27 deletions
@@ -1,58 +1,92 @@
# Object Detection

-This recipe provides an example for running an object detection model service and its associated client locally.
+This recipe helps developers start building their own custom AI enabled object detection applications. It consists of two main components: the Model Service and the AI Application.

-## Build and run the model service
+There are a few options today for local Model Serving, but this recipe will use our FastAPI [`object_detection_python`](../../../model_servers/object_detection_python/src/object_detection_server.py) model server. There is a Containerfile provided that can be used to build this Model Service within the repo, [`model_servers/object_detection_python/base/Containerfile`](/model_servers/object_detection_python/base/Containerfile).

-```bash
-cd object_detection/model_server
-podman build -t object_detection_service .
-```
+The AI Application will connect to the Model Service via an API. The recipe relies on [Streamlit](https://streamlit.io/) for the UI layer. You can find an example of the object detection application below.
+
+![](/assets/object_detection.png)
+
+## Try the Object Detection Application
+
+The [Podman Desktop](https://podman-desktop.io) [AI Lab Extension](https://github.com/containers/podman-desktop-extension-ai-lab) includes this recipe among others. To try it out, open `Recipes Catalog` -> `Object Detection` and follow the instructions to start the application.
+
+# Build the Application
+
+The rest of this document will explain how to build and run the application from the terminal, and will go into greater detail on how each container in the application above is built, run, and what purpose it serves in the overall application. All the Model Server elements of the recipe use a central Model Server [Makefile](../../../model_servers/common/Makefile.common) that includes variables populated with default values to simplify getting started. Currently we do not have a Makefile for the Application elements of the recipe, but one is coming soon; it will leverage the recipes' common [Makefile](../../common/Makefile.common) to provide variable configuration and reasonable defaults for this recipe's application.
+
+* [Download a model](#download-a-model)
+* [Build the Model Service](#build-the-model-service)
+* [Deploy the Model Service](#deploy-the-model-service)
+* [Build the AI Application](#build-the-ai-application)
+* [Deploy the AI Application](#deploy-the-ai-application)
+* [Interact with the AI Application](#interact-with-the-ai-application)
+
+## Download a model
+
+If you are just getting started, we recommend using [facebook/detr-resnet-101](https://huggingface.co/facebook/detr-resnet-101).
+This is a well-performing model with an Apache-2.0 license.
+It's simple to download a copy of the model from [huggingface.co](https://huggingface.co).
+
+You can use the `download-model-facebook-detr-resnet-101` make target in the `model_servers/object_detection_python` directory to download and move the model into the models directory for you:

```bash
-podman run -it --rm -p 8000:8000 object_detection_service
+# from path model_servers/object_detection_python from repo containers/ai-lab-recipes
+make download-model-facebook-detr-resnet-101
```

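If you would rather fetch the model without `make`, a small `huggingface_hub` snippet along these lines should also work; the `local_dir` below is only an example, so point it at this repo's `models/` directory:

```python
# Downloads facebook/detr-resnet-101 to a local directory so the model server
# does not have to fetch it on every start. The local_dir value is an example;
# adjust it to wherever your models directory lives.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="facebook/detr-resnet-101",
    revision="no_timm",
    local_dir="models/facebook/detr-resnet-101",
)
```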
-By default the model service will use [`facebook/detr-resnet-101`](https://huggingface.co/facebook/detr-resnet-101), which has an apache-2.0 license. The model is relatively small, but it will be downloaded fresh each time the model server is started unless a local model is provided (see additional instructions below).
+## Build the Model Service

+You can build the Model Service from the [object_detection_python model-service directory](../../../model_servers/object_detection_python).

-## Use a different or local model
-
-If you'd like to use a different model hosted on huggingface, simply use the environment variable `MODEL_PATH` and set it to the correct `org/model` path on [huggingface.co](https://huggingface.co/) when starting your container.
+```bash
+# from path model_servers/object_detection_python from repo containers/ai-lab-recipes
+make build
+```

-If you'd like to download models locally so that they are not pulled each time the container restarts, you can use the following python snippet to a model to your `models/` directory.
+Check out the [Makefile](../../../model_servers/object_detection_python/Makefile) to get more details on different options for how to build.
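If you prefer to build the image with podman directly instead of the make target, an invocation along these lines should work; the image tag here is just an example, and the Containerfile path is the one referenced above:

```bash
# Build the Model Service image straight from the provided Containerfile.
# The tag "object_detection_python" is an example; pick any name you like.
cd model_servers/object_detection_python
podman build -t object_detection_python -f base/Containerfile .
```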

-```python
-from huggingface_hub import snapshot_download
+## Deploy the Model Service

-snapshot_download(repo_id="facebook/detr-resnet-101",
-                  revision="no_timm",
-                  local_dir="<PATH_TO>/locallm/models/vision/object_detection/facebook/detr-resnet-101",
-                  local_dir_use_symlinks=False)
+The local Model Service relies on a volume mount to the localhost to access the model files. It also employs environment variables to dictate the model used and where it's served. You can start your local Model Service using the following `make` command from the [`model_servers/object_detection_python`](../../../model_servers/object_detection_python) directory, which is set up with reasonable defaults:

+```bash
+# from path model_servers/object_detection_python from repo containers/ai-lab-recipes
+make run
```

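If you would rather start the Model Service with podman directly, the run needs to publish port 8000, mount your local models directory into the container, and point `MODEL_PATH` at the model inside that mount. The image tag and host path below are examples; adjust them to match your build and your checkout:

```bash
# Example only: assumes the Model Service image was tagged object_detection_python
# and the model was downloaded into ./models/facebook/detr-resnet-101.
podman run -it --rm -p 8000:8000 \
    -v "$(pwd)/models:/models" \
    -e MODEL_PATH=/models/facebook/detr-resnet-101 \
    object_detection_python
```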
-When using a model other than the default, you will need to set the `MODEL_PATH` environment variable. Here is an example of running the model service with a local model:
+As stated above, by default the model service will use [`facebook/detr-resnet-101`](https://huggingface.co/facebook/detr-resnet-101). However, you can use other compatible models. Simply pass the new `MODEL_NAME` and `MODEL_PATH` to the make command, and make sure the model is downloaded and exists in the [models directory](../../../models/):

```bash
-podman run -it --rm -p 8000:8000 -v <PATH/TO>/locallm/models/vision/:/locallm/models -e MODEL_PATH=models/object_detection/facebook/detr-resnet-50/ object_detection_service
+# from path model_servers/object_detection_python from repo containers/ai-lab-recipes
+make MODEL_NAME=facebook/detr-resnet-50 MODEL_PATH=/models/facebook/detr-resnet-50 run
```

-## Build and run the client application
+## Build the AI Application
+
+Now that the Model Service is running, we want to build and deploy our AI Application. Use the provided Containerfile to build the AI Application
+image from the [`object_detection/`](./) recipe directory.

```bash
-cd object_detection/client
+# from path recipes/computer_vision/object_detection from repo containers/ai-lab-recipes
podman build -t object_detection_client .
```

+### Deploy the AI Application
+
+Make sure the Model Service is up and running before starting this container image.
+When starting the AI Application container image, we need to direct it to the correct `MODEL_ENDPOINT`.
+This could be any appropriately hosted Model Service (running locally or in the cloud) using a compatible API.
+The following Podman command can be used to run your AI Application:
+
```bash
podman run -p 8501:8501 -e MODEL_ENDPOINT=http://10.88.0.1:8000/detection object_detection_client
```

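In the example above, `10.88.0.1` is typically the gateway address of Podman's default (rootful) bridge network, which lets the client container reach a Model Service published on the host; the exact address can differ on your setup. As an alternative sketch, you can run both containers in a single Podman pod so the client reaches the Model Service over `localhost` (image names and the models path are assumptions based on the build steps above):

```bash
# Sketch: run the Model Service and the AI Application in one pod so they share
# a network namespace. Image names and the models path are examples.
podman pod create --name object-detection -p 8501:8501
podman run -d --pod object-detection \
    -v "$(pwd)/models:/models" \
    -e MODEL_PATH=/models/facebook/detr-resnet-101 \
    object_detection_python
podman run -d --pod object-detection \
    -e MODEL_ENDPOINT=http://localhost:8000/detection \
    object_detection_client
```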
-Once the client is up a running, you should be able to access it at `http://localhost:8501`. From here you can upload images from your local machine and detect objects in the image as shown below.
-
-<p align="center">
-<img src="../../../assets/object_detection.png" width="70%">
-</p>
+### Interact with the AI Application

+Once the client is up and running, you should be able to access it at `http://localhost:8501`. From here you can upload images from your local machine and detect objects in the image as shown below.

+By using this recipe and getting this starting point established,
+users should now have an easier time customizing and building their own AI enabled applications.
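For a feel of what the UI is doing under the hood, a minimal client like the sketch below can exercise the `/detection` endpoint directly. It assumes the Model Service accepts a JSON body with a base64-encoded image, so check the request schema in [`object_detection_server.py`](../../../model_servers/object_detection_python/src/object_detection_server.py) before relying on it:

```python
# Minimal, hypothetical client for the Model Service's /detection endpoint.
# Assumption: the server expects JSON with a base64-encoded "image" field;
# verify the real schema in object_detection_server.py.
import base64
import os

import requests

MODEL_ENDPOINT = os.getenv("MODEL_ENDPOINT", "http://localhost:8000/detection")

with open("example.jpg", "rb") as f:
    encoded_image = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(MODEL_ENDPOINT, json={"image": encoded_image}, timeout=30)
response.raise_for_status()
print(response.json())
```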
