A new post-processing type for verifying hit questions using an LLM has been added (#669)
I think using an LLM for verification can yield better results in cases with higher quality requirements. In customer-service scenarios in particular, returning another user's cached question could leak private information or mention other brands, causing interference.
On the interface side, I kept support for openai==0.28.0 while also remaining compatible with openai>=1.0.0.
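For reference, here is a minimal sketch of how the two client styles can be dispatched on; the helper name and exact call sites are illustrative and not the PR's actual code.

```python
import openai
from packaging import version

# Illustrative version check; the PR's compatibility shim may differ.
_OPENAI_V1 = version.parse(openai.__version__) >= version.parse("1.0.0")

def chat_complete(messages, model="gpt-3.5-turbo"):
    """Call the chat completion API with whichever client style is installed."""
    if _OPENAI_V1:
        # openai>=1.0.0 exposes a client object.
        client = openai.OpenAI()
        resp = client.chat.completions.create(model=model, messages=messages)
        return resp.choices[0].message.content
    # openai==0.28.0 keeps the module-level ChatCompletion interface.
    resp = openai.ChatCompletion.create(model=model, messages=messages)
    return resp["choices"][0]["message"]["content"]
```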
I have added a test case and an example test file, and updated example/readme.md.
I upgraded onnxruntime from 1.14.0 to 1.21.1, since the previous version is no longer in use.
- [Suitable for embedding methods consisting of a cached storage and vector store](#suitable-for-embedding-methods-consisting-of-a-cached-storage-and-vector-store)
- [Custom embedding](#custom-embedding)
- [How to set the `data manager` class](#how-to-set-the-data-manager-class)
- [How to set the `similarity evaluation` interface](#how-to-set-the-similarity-evaluation-interface)
You can use the `LlmVerifier()` function to process the cached answer list after recall. It works like `first` or `random_one`, but it calls an LLM to verify whether the recalled question is truly similar to the user's question. You can define your own system prompt to decide under what circumstances the LLM should reject a hit, and you can choose a small model for the verification step, so only a small additional cost is required.
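A minimal sketch of wiring the verifier in, assuming `LlmVerifier` is exposed next to the other post-processing helpers and accepts a model name and a system prompt; the import path and parameter names are assumptions, so check the PR for the exact API.

```python
from gptcache import cache
from gptcache.processor.post import LlmVerifier  # import path assumed

# Hypothetical constructor arguments: a small verification model and a custom
# system prompt deciding when the LLM should reject a recalled question.
verifier = LlmVerifier(
    model="gpt-3.5-turbo",
    system_prompt=(
        "Accept the cached question only if it asks for exactly the same "
        "information as the user's question; otherwise reject it."
    ),
)

# Used in place of `first` or `random_one` when initializing the cache.
cache.init(post_process_messages_func=verifier)
```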