Has anyone considered OpenManus a candidate for prompt caching? I've been using it fairly successfully with a local LLM (deepseek-r1-aqwen-14b has performed most reliably so far) on not-very-complex tasks. I'm reticent to start testing against a usage-billed API for fear of the doom loop. While looking into this I came across prompt caching in Anthropic's documentation; setting aside any context-window performance benefits, a quick look at how the prompt builds up during step execution suggests this could cut API costs by perhaps 20-50%. Anyhow, I'm going to try to implement it, but if someone else has already looked into this I'd like to hear about it before spending too much time on it. A quick search of this discussion board yielded no results.
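For reference, here is a minimal sketch of what I have in mind, assuming Anthropic's Messages API with `cache_control` breakpoints: since an agent's system prompt (and tool definitions) stay byte-identical across steps while the message history grows, marking the static prefix as cacheable should let each subsequent step reuse it. The function name `build_request` and the prompt strings are my own illustration, not OpenManus internals; the model name is a placeholder.

```python
def build_request(
    system_prompt: str,
    messages: list[dict],
    model: str = "claude-3-5-sonnet-latest",
) -> dict:
    """Return kwargs for anthropic.Anthropic().messages.create().

    The system prompt is identical on every agent step, so marking it
    with an ephemeral cache_control block lets the API serve it from
    cache on subsequent calls instead of reprocessing it.
    """
    return {
        "model": model,
        "max_tokens": 1024,
        # System prompt as a content block carrying the cache marker.
        "system": [
            {
                "type": "text",
                "text": system_prompt,
                "cache_control": {"type": "ephemeral"},
            }
        ],
        # The growing step-by-step history goes after the cached prefix.
        "messages": messages,
    }


# Each agent step appends to `messages`; the cached system prefix is
# reused as long as it is unchanged and re-read within the cache TTL.
params = build_request(
    "You are an autonomous agent with browser, editor, and shell tools.",
    [{"role": "user", "content": "Summarize the repo README."}],
)
```

Note the caveats from the docs as I understand them: there is a minimum cacheable prefix length (on the order of 1k tokens for most models), cache writes are billed at a premium while cache reads are billed at a fraction of base input cost, and the cache expires after a short TTL, so the savings depend on how quickly steps follow one another.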