Commit dd12799 (1 parent: 90a9931)

Add http request example

Signed-off-by: Sherlock113 <sherlockxu07@gmail.com>

1 file changed: 15 additions, 0 deletions

docs/llm-inference-basics/openai-compatible-api.md

@@ -58,6 +58,21 @@ response = client.chat.completions.create(
 print(response.choices[0].message)
 ```
 
+You can also call the API directly using a simple HTTP request. Here's an example using `curl`:
+
+```bash
+curl https://your-custom-endpoint.com/v1/chat/completions \
+  -H "Authorization: Bearer your-api-key" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "your-model-name",
+    "messages": [
+      {"role": "system", "content": "You are a helpful assistant."},
+      {"role": "user", "content": "How can I integrate OpenAI-compatible APIs?"}
+    ]
+  }'
+```
+
 If you’re already using OpenAI’s SDKs or REST interface, you can simply redirect them to your own API endpoint. This allows you to keep control over your LLM deployment, reduce vendor lock-in, and ensure your application remains future-proof.
 
 <LinkList>
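The `curl` request added by this commit can also be issued from Python using only the standard library. Below is a minimal sketch of the same call; the endpoint URL, API key, and model name are the placeholders from the doc example, and the actual network call is left commented out since it requires a live endpoint:

```python
# Build the same chat-completions request as the curl example,
# using only Python's standard library (no OpenAI SDK needed).
import json
import urllib.request

API_URL = "https://your-custom-endpoint.com/v1/chat/completions"  # placeholder
API_KEY = "your-api-key"  # placeholder

payload = {
    "model": "your-model-name",  # placeholder
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "How can I integrate OpenAI-compatible APIs?"},
    ],
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment to send the request against a real endpoint:
# with urllib.request.urlopen(request) as resp:
#     print(json.load(resp)["choices"][0]["message"])
```

Because the request body and headers follow the OpenAI wire format, the same sketch works against any OpenAI-compatible server once the placeholders are filled in.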
