Ollama Support (#149) #156

Merged
merged 4 commits on Jan 6, 2025
README.md: 16 additions & 0 deletions

@@ -28,6 +28,22 @@
4. Build the server (`npm run build`)
5. Run (`npm start`)

#### Ollama Configuration Guide

- It is recommended to use a model larger than 14B parameters if your hardware can run one.
- You do not need to provide an API key.
- Set `LLM_PROVIDER` to `ollama` (it will connect to the default Ollama endpoint).
- Set `LLM_MODELNAME` to one of the model names listed by `ollama ls`.
- If you are using a low-parameter model (e.g. 8B or 14B), it is recommended to set `TOKEN_PROCESSING_CHARACTER_LIMIT` between 10000 and 20000 (approximately 300-600 lines of code).

**Example:**

```
LLM_PROVIDER=ollama
LLM_APIKEY=
LLM_MODELNAME=qwen2.5:14b
```
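
For reference, the sketch below shows roughly how these settings translate into a LangChain `ChatOllama` connection. It is a minimal illustration, not the project's actual wiring: the `OLLAMA_BASE_URL` variable and the fallback values are assumptions made for the example, and a custom base URL is only needed if your Ollama server is not on the default `http://localhost:11434`.

```ts
import { ChatOllama } from '@langchain/ollama';

// Minimal illustration of the settings above.
// OLLAMA_BASE_URL is a hypothetical extra variable for non-default endpoints;
// it is not required by this project and falls back to Ollama's standard port.
const llm = new ChatOllama({
  model: process.env.LLM_MODELNAME ?? 'qwen2.5:14b',
  baseUrl: process.env.OLLAMA_BASE_URL ?? 'http://localhost:11434',
  temperature: 0,
});

async function smokeTest(): Promise<void> {
  // Quick connectivity check against the local Ollama server.
  const reply = await llm.invoke('Reply with the single word "ready".');
  console.log(reply.content.toString());
}

smokeTest().catch(console.error);
```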

### Additional Information

> [!CAUTION]
src/llm/provider/ollama.ts: 26 additions & 0 deletions

@@ -0,0 +1,26 @@
import { ChatOllama } from '@langchain/ollama';
import LLMConfig from '../llm-config';
import { HistoryItem, LLMProvider } from '../llm-provider';

export class OllamaProvider extends LLMProvider {
  private llm: ChatOllama;

  constructor(modelName: string, llmconfig: LLMConfig) {
    // A local Ollama server needs no API key, so the string arguments are left empty.
    super('', modelName, llmconfig, '');
    this.llm = new ChatOllama({
      model: modelName,
      temperature: llmconfig.temperature,
      // format: 'json', // Forcing JSON output degraded the quality of the responses
      topP: llmconfig.topP,
      topK: llmconfig.topK,
    });
  }

  // Note: the history parameter is currently ignored; only the latest prompt is sent.
  async run(userPrompt: string, history: HistoryItem[]): Promise<string> {
    const response = await this.llm.invoke(userPrompt);
    console.log(response.content.toString());
    return response.content.toString();
  }
}
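
As a possible extension (not part of this PR), the chat history could be folded into the call by converting it to LangChain messages before invoking the model. The sketch below assumes a history entry with `role` and `content` fields; that shape is an assumption about this codebase, not something shown in the diff.

```ts
import { ChatOllama } from '@langchain/ollama';
import { AIMessage, HumanMessage } from '@langchain/core/messages';

// Assumed shape of a history entry; the real HistoryItem type may differ.
interface AssumedHistoryItem {
  role: 'user' | 'assistant';
  content: string;
}

// Sketch: build a message list from prior turns plus the new prompt, then invoke once.
async function runWithHistory(
  llm: ChatOllama,
  userPrompt: string,
  history: AssumedHistoryItem[],
): Promise<string> {
  const messages = history.map((item) =>
    item.role === 'user' ? new HumanMessage(item.content) : new AIMessage(item.content),
  );
  messages.push(new HumanMessage(userPrompt));

  const response = await llm.invoke(messages);
  return response.content.toString();
}
```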
src/service/llm-factory.ts: 11 additions & 0 deletions

@@ -3,6 +3,7 @@ import DeepSeekProvider from '@/llm/provider/deepseek'
import GoogleProvider from '@/llm/provider/google'
import LLMConfig from '@/llm/llm-config'
import dotenv from 'dotenv'
import { OllamaProvider } from '@/llm/provider/ollama'
dotenv.config()

/**
@@ -19,11 +20,21 @@ export default class LLMFactory
    const apiKey = process.env.LLM_APIKEY
    const modelName = process.env.LLM_MODELNAME

    if (!provider) {
      throw new Error('LLM Provider is not specified. Please set LLM_PROVIDER in the environment\nExample: LLM_PROVIDER=google, LLM_PROVIDER=deepseek, LLM_PROVIDER=ollama')
    }

    if (!modelName) {
      throw new Error('LLM Model name is not specified. Example: LLM_MODELNAME=llama3.3')
    }

    switch (provider) {
      case 'google':
        return new GoogleProvider(apiKey!, modelName!, llmConfig)
      case 'deepseek':
        return new DeepSeekProvider(apiKey!, modelName!, llmConfig)
      case 'ollama':
        return new OllamaProvider(modelName!, llmConfig)
      default:
        throw new Error(`Unsupported LLM provider: ${provider}`)
    }
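
For contributors, the selection logic above boils down to: validate the environment variables, then dispatch on `LLM_PROVIDER`. The standalone sketch below restates that pattern with a simplified `Provider` interface and a stand-in implementation; it illustrates the pattern only and does not use the project's real classes or factory method.

```ts
// Standalone restatement of the env-driven selection pattern above.
// Provider and EchoProvider are simplified stand-ins, not this project's classes.
interface Provider {
  run(prompt: string): Promise<string>;
}

class EchoProvider implements Provider {
  constructor(private modelName: string) {}

  async run(prompt: string): Promise<string> {
    // A real provider would call the model here; this one just echoes.
    return `[${this.modelName}] ${prompt}`;
  }
}

function createProvider(): Provider {
  const provider = process.env.LLM_PROVIDER;
  const modelName = process.env.LLM_MODELNAME;

  if (!provider) {
    throw new Error('LLM_PROVIDER is not set');
  }
  if (!modelName) {
    throw new Error('LLM_MODELNAME is not set');
  }

  switch (provider) {
    case 'ollama':
      // The real factory returns new OllamaProvider(modelName, llmConfig) here.
      return new EchoProvider(modelName);
    default:
      throw new Error(`Unsupported LLM provider: ${provider}`);
  }
}
```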