Commit 2390920

Add VisualQnA docker for both Gaudi and Xeon using TGI serving (opea-project#547)
* Add VisualQnA docker for both Gaudi and Xeon

  Signed-off-by: lvliang-intel <liang1.lv@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

  For more information, see https://pre-commit.ci

* update token length

  Signed-off-by: lvliang-intel <liang1.lv@intel.com>

---------

Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
1 parent 02a1536 commit 2390920

9 files changed: +595 −39 lines changed

VisualQnA/README.md

Lines changed: 41 additions & 39 deletions
````diff
@@ -18,61 +18,63 @@ This example guides you through how to deploy a [LLaVA](https://llava-vl.github.
 ![llava screenshot](./assets/img/llava_screenshot1.png)
 ![llava-screenshot](./assets/img/llava_screenshot2.png)
 
-## Start the LLaVA service
+# Deploy VisualQnA Service
 
-1. Build the Docker image needed for starting the service
+The VisualQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
 
-   ```
-   cd serving/
-   docker build . --build-arg http_proxy=${http_proxy} --build-arg https_proxy=${http_proxy} -t intel/gen-ai-examples:llava-gaudi
-   ```
+Currently we support deploying VisualQnA services with docker compose.
 
-2. Start the LLaVA service on Intel Gaudi2
+## Setup Environment Variables
 
-   ```
-   docker run -d -p 8085:8000 -v ./data:/root/.cache/huggingface/hub/ -e http_proxy=$http_proxy -e https_proxy=$http_proxy --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --ipc=host intel/gen-ai-examples:llava-gaudi
-   ```
+To set up environment variables for deploying VisualQnA services, follow these steps:
 
-Here are some explanation about the above parameters:
+1. Set the required environment variables:
 
-- `-p 8085:8000`: This will map the 8000 port of the LLaVA service inside the container to the 8085 port on the host
-- `-v ./data:/root/.cache/huggingface/hub/`: This is to prevent from re-downloading model files
-- `http_proxy` and `https_proxy` are used if you have some proxy setting
-- `--runtime=habana ...` is required for running this service on Intel Gaudi2
+   ```bash
+   # Example: host_ip="192.168.1.1"
+   export host_ip="External_Public_IP"
+   # Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
+   export no_proxy="Your_No_Proxy"
+   ```
 
-Now you have a LLaVa service with the exposed port `8085` and you can check whether this service is up by:
+2. If you are in a proxy environment, also set the proxy-related environment variables:
 
-   ```
-   curl localhost:8085/health -v
-   ```
+   ```bash
+   export http_proxy="Your_HTTP_Proxy"
+   export https_proxy="Your_HTTPs_Proxy"
+   ```
 
-If the reply has a `200 OK`, then the service is up.
+3. Set up other environment variables:
 
-## Start the Gradio app
+   > Notice that you can choose only **one** of the commands below, according to your hardware. Otherwise, the port numbers may be set incorrectly.
 
-Now you have two options to start the frontend UI by following commands:
+   ```bash
+   # on Gaudi
+   source ./docker/gaudi/set_env.sh
+   # on Xeon
+   source ./docker/xeon/set_env.sh
+   ```
 
-### English Interface (Default)
+## Deploy VisualQnA on Gaudi
 
-```
-cd ui/
-pip install -r requirements.txt
-http_proxy= python app.py --host 0.0.0.0 --port 7860 --worker-addr http://localhost:8085 --share
-```
+Refer to the [Gaudi Guide](./docker/gaudi/README.md) to build docker images from source.
 
-### Chinese Interface
+Find the corresponding [compose.yaml](./docker/gaudi/compose.yaml).
 
-```
-cd ui/
-pip install -r requirements.txt
-http_proxy= python app.py --host 0.0.0.0 --port 7860 --worker-addr http://localhost:8085 --lang CN --share
+```bash
+cd GenAIExamples/VisualQnA/docker/gaudi/
+docker compose up -d
 ```
 
-Here are some explanation about the above parameters:
+> Notice: Currently only the **Habana Driver 1.16.x** is supported for Gaudi.
 
-- `--host`: the host of the gradio app
-- `--port`: the port of the gradio app, by default 7860
-- `--worker-addr`: the LLaVA service IP address. If you setup the service on a different machine, please replace `localhost` to the IP address of your Gaudi2 host machine
-- `--lang`: Specify this parameter to use the Chinese interface. The default UI language is English and can be used without any additional parameter.
+## Deploy VisualQnA on Xeon
 
-SCRIPT USAGE NOTICE:  By downloading and using any script file included with the associated software package (such as files with .bat, .cmd, or .JS extensions, Docker files, or any other type of file that, when executed, automatically downloads and/or installs files onto your system) (the “Script File”), it is your obligation to review the Script File to understand what files (e.g., other software, AI models, AI Datasets) the Script File will download to your system (“Downloaded Files”). Furthermore, by downloading and using the Downloaded Files, even if they are installed through a silent install, you agree to any and all terms and conditions associated with such files, including but not limited to, license terms, notices, or disclaimers.
+Refer to the [Xeon Guide](./docker/xeon/README.md) for more instructions on building docker images from source.
+
+Find the corresponding [compose.yaml](./docker/xeon/compose.yaml).
+
+```bash
+cd GenAIExamples/VisualQnA/docker/xeon/
+docker compose up -d
+```
````
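After bringing the stack up with `docker compose up -d`, it helps to confirm the containers are healthy before opening the UI. A minimal check, assuming the default ports from the bundled `set_env.sh` and the container name declared in the Gaudi `compose.yaml` later in this commit (the first TGI model download can take several minutes):

```bash
cd GenAIExamples/VisualQnA/docker/gaudi/

# Every service should report "Up"
docker compose ps

# Watch the TGI serving container until the model finishes loading
docker logs tgi-llava-gaudi-server 2>&1 | tail -n 20

# The UI is then reachable on port 5173
echo "Open http://${host_ip}:5173 in your browser"
```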

VisualQnA/docker/Dockerfile

Lines changed: 33 additions & 0 deletions
```dockerfile
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

FROM python:3.11-slim

RUN apt-get update -y && apt-get install -y --no-install-recommends --fix-missing \
    libgl1-mesa-glx \
    libjemalloc-dev \
    vim \
    git

RUN useradd -m -s /bin/bash user && \
    mkdir -p /home/user && \
    chown -R user /home/user/

WORKDIR /home/user/
RUN git clone https://github.com/opea-project/GenAIComps.git

WORKDIR /home/user/GenAIComps
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir -r /home/user/GenAIComps/requirements.txt

COPY ./visualqna.py /home/user/visualqna.py

ENV PYTHONPATH=$PYTHONPATH:/home/user/GenAIComps

USER user

WORKDIR /home/user

ENTRYPOINT ["python", "visualqna.py"]
```
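This image is normally built and launched through the compose files in this commit, but it can also be exercised standalone. A hedged sketch, reusing the image tag and environment variable names that appear elsewhere in this commit:

```bash
# Build the MegaService image from VisualQnA/docker/ (the directory holding this Dockerfile)
docker build -t opea/visualqna:latest \
  --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy .

# Run it against an already-running LVM microservice on ${host_ip}:9399 (the default LVM port)
docker run --rm -p 8888:8888 \
  -e MEGA_SERVICE_HOST_IP=${host_ip} \
  -e LVM_SERVICE_HOST_IP=${host_ip} \
  opea/visualqna:latest
```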

VisualQnA/docker/gaudi/README.md

Lines changed: 139 additions & 0 deletions
# Build MegaService of VisualQnA on Gaudi

This document outlines the deployment process for a VisualQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on an Intel Gaudi server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as `lvm`. We will publish the Docker images to Docker Hub, which will simplify the deployment process for this service.

## 🚀 Build Docker Images

First of all, you need to build the Docker images locally. This step can be skipped once the Docker images are published to Docker Hub.

### 1. Install GenAIComps from Source Code

```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
```

### 2. Build LVM Image

```bash
docker build --no-cache -t opea/lvm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/Dockerfile_tgi .
```

### 3. Build TGI Gaudi Image

Since TGI Gaudi does not yet support llava-next on its main branch, we need to build it from a fork for now.

```bash
git clone https://github.com/yuanwu2017/tgi-gaudi.git
cd tgi-gaudi/
git checkout v2.0.4
docker build -t opea/llava-tgi:latest .
cd ../
```

### 4. Build MegaService Docker Image

To construct the MegaService, we utilize the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline within the `visualqna.py` Python script. Build the MegaService Docker image using the command below:

```bash
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/VisualQnA/docker
docker build --no-cache -t opea/visualqna:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
cd ../../..
```

### 5. Build UI Docker Image

Build the frontend Docker image via the command below:

```bash
cd GenAIExamples/VisualQnA/docker/ui/
docker build --no-cache -t opea/visualqna-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
cd ../../../..
```

Then run the command `docker images`; you should see the following four Docker images:

1. `opea/llava-tgi:latest`
2. `opea/lvm-tgi:latest`
3. `opea/visualqna:latest`
4. `opea/visualqna-ui:latest`
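To confirm all four images are present before moving on, a quick check (the grep pattern is just a convenience):

```bash
docker images --format '{{.Repository}}:{{.Tag}}' \
  | grep -E '^opea/(llava-tgi|lvm-tgi|visualqna|visualqna-ui):latest$'
```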
## 🚀 Start MicroServices and MegaService

### Setup Environment Variables

Since the `compose.yaml` consumes several environment variables, you need to set them up in advance as below.

```bash
export no_proxy=${your_no_proxy}
export http_proxy=${your_http_proxy}
export https_proxy=${your_https_proxy}
export LVM_MODEL_ID="llava-hf/llava-v1.6-mistral-7b-hf"
export LVM_ENDPOINT="http://${host_ip}:8399"
export LVM_SERVICE_PORT=9399
export MEGA_SERVICE_HOST_IP=${host_ip}
export LVM_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/visualqna"
```

Note: Please replace `host_ip` with your external IP address; do **NOT** use localhost.
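If you are unsure of the machine's external IP, one common way to detect it is shown below; this is a convenience assumption, so verify the result matches the interface your clients will actually reach:

```bash
# Pick the first non-loopback address reported by the host
export host_ip=$(hostname -I | awk '{print $1}')
echo ${host_ip}
```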
### Start All the Service Docker Containers

```bash
cd GenAIExamples/VisualQnA/docker/gaudi/
docker compose -f compose.yaml up -d
```

> **_NOTE:_** Users need at least one Gaudi card to run VisualQnA successfully.

### Validate MicroServices and MegaService

Follow the instructions below to validate the microservices. (A sketch for testing the LVM endpoint with your own image follows this list.)

1. LVM Microservice

   ```bash
   http_proxy="" curl http://${host_ip}:9399/v1/lvm -XPOST -d '{"image": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "prompt":"What is this?"}' -H 'Content-Type: application/json'
   ```
2. MegaService

   ```bash
   curl http://${host_ip}:8888/v1/visualqna -H "Content-Type: application/json" -d '{
     "messages": [
       {
         "role": "user",
         "content": [
           {
             "type": "text",
             "text": "What'\''s in this image?"
           },
           {
             "type": "image_url",
             "image_url": {
               "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
             }
           }
         ]
       }
     ],
     "max_tokens": 300
   }'
   ```
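As noted above, the `image` field of the LVM request carries a base64-encoded image; the string in item 1 is a tiny 10x10 PNG. To validate with your own picture, a sketch assuming a local file named `my_image.png`:

```bash
# -w 0 disables line wrapping (GNU coreutils; on macOS use: base64 < my_image.png | tr -d '\n')
IMG_B64=$(base64 -w 0 my_image.png)
http_proxy="" curl http://${host_ip}:9399/v1/lvm -XPOST \
  -H 'Content-Type: application/json' \
  -d "{\"image\": \"${IMG_B64}\", \"prompt\": \"What is in this picture?\"}"
```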
## 🚀 Launch the UI

To access the frontend, open the following URL in your browser: `http://{host_ip}:5173`. By default, the UI runs on port 5173 internally. If you prefer to use a different host port to access the frontend, you can modify the port mapping in the `compose.yaml` file as shown below:

```yaml
  visualqna-gaudi-ui-server:
    image: opea/visualqna-ui:latest
    ...
    ports:
      - "80:5173"
```
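After editing the mapping, recreating just the UI container applies the change; a sketch assuming the service name shown above:

```bash
# Recreate only the UI service with the new port mapping
docker compose up -d visualqna-gaudi-ui-server
# Then browse to http://{host_ip}:80 (or whichever host port you chose)
```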

VisualQnA/docker/gaudi/compose.yaml

Lines changed: 77 additions & 0 deletions
```yaml
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

version: "3.8"

services:
  llava-tgi-service:
    image: opea/llava-tgi:latest
    container_name: tgi-llava-gaudi-server
    ports:
      - "8399:80"
    volumes:
      - "./data:/data"
    environment:
      no_proxy: ${no_proxy}
      http_proxy: ${http_proxy}
      https_proxy: ${https_proxy}
      HF_HUB_DISABLE_PROGRESS_BARS: 1
      HF_HUB_ENABLE_HF_TRANSFER: 0
      HABANA_VISIBLE_DEVICES: all
      OMPI_MCA_btl_vader_single_copy_mechanism: none
    runtime: habana
    cap_add:
      - SYS_NICE
    ipc: host
    command: --model-id ${LVM_MODEL_ID} --max-input-length 4096 --max-total-tokens 8192
  lvm-tgi:
    image: opea/lvm-tgi:latest
    container_name: lvm-tgi-gaudi-server
    depends_on:
      - llava-tgi-service
    ports:
      - "9399:9399"
    ipc: host
    environment:
      no_proxy: ${no_proxy}
      http_proxy: ${http_proxy}
      https_proxy: ${https_proxy}
      LVM_ENDPOINT: ${LVM_ENDPOINT}
      HF_HUB_DISABLE_PROGRESS_BARS: 1
      HF_HUB_ENABLE_HF_TRANSFER: 0
    restart: unless-stopped
  visualqna-gaudi-backend-server:
    image: opea/visualqna:latest
    container_name: visualqna-gaudi-backend-server
    depends_on:
      - llava-tgi-service
      - lvm-tgi
    ports:
      - "8888:8888"
    environment:
      - no_proxy=${no_proxy}
      - https_proxy=${https_proxy}
      - http_proxy=${http_proxy}
      - MEGA_SERVICE_HOST_IP=${MEGA_SERVICE_HOST_IP}
      - LVM_SERVICE_HOST_IP=${LVM_SERVICE_HOST_IP}
    ipc: host
    restart: always
  visualqna-gaudi-ui-server:
    image: opea/visualqna-ui:latest
    container_name: visualqna-gaudi-ui-server
    depends_on:
      - visualqna-gaudi-backend-server
    ports:
      - "5173:5173"
    environment:
      - no_proxy=${no_proxy}
      - https_proxy=${https_proxy}
      - http_proxy=${http_proxy}
      - CHAT_BASE_URL=${BACKEND_SERVICE_ENDPOINT}
    ipc: host
    restart: always

networks:
  default:
    driver: bridge
```
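A short round-trip for this file, using the container names declared above (the "Connected" readiness line is an assumption about TGI's startup logs):

```bash
cd GenAIExamples/VisualQnA/docker/gaudi/
docker compose up -d

# TGI typically logs a "Connected" line once the model server is ready
docker logs tgi-llava-gaudi-server 2>&1 | grep -i connected

# Tear everything down; the downloaded model cache persists in ./data
docker compose down
```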

VisualQnA/docker/gaudi/set_env.sh

Lines changed: 11 additions & 0 deletions
```bash
#!/usr/bin/env bash

# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

export LVM_MODEL_ID="llava-hf/llava-v1.6-mistral-7b-hf"
export LVM_ENDPOINT="http://${host_ip}:8399"
export LVM_SERVICE_PORT=9399
export MEGA_SERVICE_HOST_IP=${host_ip}
export LVM_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/visualqna"
```
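Note that every endpoint in this script interpolates `${host_ip}`, so that variable must be exported before the script is sourced. A usage sketch (the `hostname -I` detection is a convenience assumption):

```bash
export host_ip=$(hostname -I | awk '{print $1}')  # or set your external IP by hand
source ./docker/gaudi/set_env.sh
echo ${LVM_ENDPOINT}   # -> http://<host_ip>:8399
```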
