Description
Describe the bug
The litellm provider uses crates/goose/src/providers/formats/openai.rs as its create_request (https://github.com/block/goose/blob/main/crates/goose/src/providers/litellm.rs#L170), but this create_request function does very OpenAI-specific processing, e.g. here: https://github.com/block/goose/blob/main/crates/goose/src/providers/formats/openai.rs#L580
This means that if my litellm proxy has a model name that starts with "o", like "open-mistral-small-3.1", then it will send a reasoning_effort parameter regardless of whether or not it's a reasoning model.
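To illustrate the failure mode, here is a minimal sketch (not the actual goose code, which may differ) of how a bare prefix check misclassifies non-OpenAI models, versus an allowlist of known reasoning-model families. The family names `o1`/`o3`/`o4` are assumptions for illustration:

```rust
// Sketch only: a naive prefix check treats ANY model whose name
// starts with "o" as an OpenAI reasoning model.
fn is_reasoning_model_by_prefix(model: &str) -> bool {
    model.starts_with("o")
}

// A safer alternative: match known reasoning-model families explicitly
// (family list here is illustrative, not goose's actual list).
fn is_reasoning_model_allowlist(model: &str) -> bool {
    ["o1", "o3", "o4"]
        .iter()
        .any(|p| model == *p || model.starts_with(&format!("{p}-")))
}

fn main() {
    // "open-mistral-small-3.1" is wrongly classified by the prefix check...
    assert!(is_reasoning_model_by_prefix("open-mistral-small-3.1"));
    // ...but correctly rejected by the allowlist check.
    assert!(!is_reasoning_model_allowlist("open-mistral-small-3.1"));
    assert!(is_reasoning_model_allowlist("o1-mini"));
    println!("ok");
}
```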
To Reproduce
Steps to reproduce the behavior:
- Stand up litellm with a model that starts with "o"
- Send a request through goose
- See that it sends a reasoning_effort parameter regardless of model
I've written a quick test for this behavior that fails on the current main branch: https://github.com/myaple/goose/blob/myaple/litellm-reasoning-bug/crates/goose/src/providers/litellm.rs#L339
Expected behavior
Reasoning parameters should only be sent to models that support reasoning. Additionally, there should be an environment variable to override the reasoning parameter for the OpenAI models or disable it completely.
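One possible shape for the requested override, sketched with hypothetical environment variable names (`GOOSE_DISABLE_REASONING`, `GOOSE_REASONING_EFFORT` are assumptions; the issue only asks that some override mechanism exist):

```rust
// Hypothetical override logic. Callers would pass in the env values, e.g.
//   resolve_reasoning_effort(
//       std::env::var("GOOSE_DISABLE_REASONING").ok().as_deref(),
//       std::env::var("GOOSE_REASONING_EFFORT").ok().as_deref(),
//       model_default,
//   )
fn resolve_reasoning_effort(
    disable_flag: Option<&str>,   // set => never send reasoning_effort
    override_value: Option<&str>, // e.g. "low" | "medium" | "high"
    model_default: Option<&str>,  // what the model-detection logic chose
) -> Option<String> {
    if disable_flag.is_some() {
        return None; // disabled completely
    }
    override_value.or(model_default).map(str::to_string)
}

fn main() {
    // Env override wins over the detected default.
    assert_eq!(
        resolve_reasoning_effort(None, Some("low"), Some("medium")),
        Some("low".to_string())
    );
    // Disable flag suppresses the parameter entirely.
    assert_eq!(resolve_reasoning_effort(Some("1"), Some("low"), Some("medium")), None);
    // No overrides: fall back to the detected default.
    assert_eq!(
        resolve_reasoning_effort(None, None, Some("medium")),
        Some("medium".to_string())
    );
    println!("ok");
}
```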
Please provide the following information:
- OS & Arch: ubuntu 24
- Interface: cli
- Version: 1.5.0
- Extensions enabled: n/a
- Provider & Model: litellm, open-mistral-small-3.1