
Commit f47381d

Merge pull request #73 from tak-bro/feature/add-deep-seek
Feature/add deep seek

2 parents: 64b8301 + fa4cead

File tree: 6 files changed (+301, −85 lines)

README.md

Lines changed: 110 additions & 84 deletions
@@ -42,6 +42,7 @@ _aicommit2_ is a reactive CLI tool that automatically generates Git commit messa
 - [Cohere](https://cohere.com/)
 - [Groq](https://groq.com/)
 - [Perplexity](https://docs.perplexity.ai/)
+- [DeepSeek](https://www.deepseek.com/)
 - [Huggingface **(Unofficial)**](https://huggingface.co/chat/)
 
 ### Local

@@ -216,20 +217,21 @@ model[]=codestral
 The following settings can be applied to most models, but support may vary.
 Please check the documentation for each specific model to confirm which settings are supported.
 
-| Setting            | Description                                                         | Default      |
-|--------------------|---------------------------------------------------------------------|--------------|
-| `systemPrompt`     | System Prompt text                                                  | -            |
-| `systemPromptPath` | Path to system prompt file                                          | -            |
-| `exclude`          | Files to exclude from AI analysis                                   | -            |
-| `timeout`          | Request timeout (milliseconds)                                      | 10000        |
-| `temperature`      | Model's creativity (0.0 - 2.0)                                      | 0.7          |
-| `maxTokens`        | Maximum number of tokens to generate                                | 1024         |
-| `locale`           | Locale for the generated commit messages                            | en           |
-| `generate`         | Number of commit messages to generate                               | 1            |
-| `type`             | Type of commit message to generate                                  | conventional |
-| `maxLength`        | Maximum character length of the Subject of generated commit message | 50           |
-| `logging`          | Enable logging                                                      | true         |
-| `ignoreBody`       | Whether the commit message includes body                            | true         |
+| Setting            | Description                                                         | Default       |
+|--------------------|---------------------------------------------------------------------|---------------|
+| `systemPrompt`     | System Prompt text                                                  | -             |
+| `systemPromptPath` | Path to system prompt file                                          | -             |
+| `exclude`          | Files to exclude from AI analysis                                   | -             |
+| `type`             | Type of commit message to generate                                  | conventional  |
+| `locale`           | Locale for the generated commit messages                            | en            |
+| `generate`         | Number of commit messages to generate                               | 1             |
+| `logging`          | Enable logging                                                      | true          |
+| `ignoreBody`       | Whether the commit message includes body                            | true          |
+| `maxLength`        | Maximum character length of the Subject of generated commit message | 50            |
+| `timeout`          | Request timeout (milliseconds)                                      | 10000         |
+| `temperature`      | Model's creativity (0.0 - 2.0)                                      | 0.7           |
+| `maxTokens`        | Maximum number of tokens to generate                                | 1024          |
+| `topP`             | Nucleus sampling                                                    | 1             |
 
 > 👉 **Tip:** To set the General Settings for each model, use the following command.
 > ```shell

@@ -266,35 +268,17 @@ aicommit2 config set exclude="*.ts,*.json"
 ```
 
 > NOTE: `exclude` option does not support per model. It is **only** supported by General Settings.
-
-##### timeout
-
-The timeout for network requests in milliseconds.
-
-Default: `10_000` (10 seconds)
-
-```sh
-aicommit2 config set timeout=20000 # 20s
-```
-
-##### temperature
-
-The temperature (0.0-2.0) is used to control the randomness of the output
-
-Default: `0.7`
 
-```sh
-aicommit2 config set temperature=0.3
-```
+##### type
 
-##### maxTokens
+Default: `conventional`
 
-The maximum number of tokens that the AI models can generate.
+Supported: `conventional`, `gitmoji`
 
-Default: `1024`
+The type of commit message to generate. Set this to "conventional" to generate commit messages that follow the Conventional Commits specification:
 
 ```sh
-aicommit2 config set maxTokens=3000
+aicommit2 config set type="conventional"
 ```
 
 ##### locale

@@ -319,18 +303,40 @@ Note, this will use more tokens as it generates more results.
 aicommit2 config set generate=2
 ```
 
-##### type
+##### logging
 
-Default: `conventional`
+Default: `true`
 
-Supported: `conventional`, `gitmoji`
+Option that allows users to decide whether to generate a log file capturing the responses.
+The log files will be stored in the `~/.aicommit2_log` directory (user's home).
 
-The type of commit message to generate. Set this to "conventional" to generate commit messages that follow the Conventional Commits specification:
+![log-path](https://github.com/tak-bro/aicommit2/blob/main/img/log_path.png?raw=true)
+
+- You can remove all logs with the command below.
 
 ```sh
-aicommit2 config set type="conventional"
+aicommit2 log removeAll
+```
+
+##### ignoreBody
+
+Default: `true`
+
+This option determines whether the commit message includes a body. If you want to include a body in the message, set it to `false`.
+
+```sh
+aicommit2 config set ignoreBody="false"
 ```
 
+![ignore_body_false](https://github.com/tak-bro/aicommit2/blob/main/img/demo_body_min.gif?raw=true)
+
+```sh
+aicommit2 config set ignoreBody="true"
+```
+
+![ignore_body_true](https://github.com/tak-bro/aicommit2/blob/main/img/ignore_body_true.png?raw=true)
+
 ##### maxLength
 
 The maximum character length of the Subject of generated commit message

@@ -341,39 +347,63 @@ Default: `50`
 aicommit2 config set maxLength=100
 ```
 
-##### logging
+##### timeout
 
-Default: `true`
+The timeout for network requests in milliseconds.
 
-Option that allows users to decide whether to generate a log file capturing the responses.
-The log files will be stored in the `~/.aicommit2_log` directory (user's home).
+Default: `10_000` (10 seconds)
 
-![log-path](https://github.com/tak-bro/aicommit2/blob/main/img/log_path.png?raw=true)
+```sh
+aicommit2 config set timeout=20000 # 20s
+```
 
-- You can remove all logs with the command below.
+##### temperature
+
+The temperature (0.0-2.0) is used to control the randomness of the output.
+
+Default: `0.7`
 
 ```sh
-aicommit2 log removeAll
+aicommit2 config set temperature=0.3
 ```
 
-##### ignoreBody
+##### maxTokens
 
-Default: `true`
+The maximum number of tokens that the AI models can generate.
 
-This option determines whether the commit message includes a body. If you want to include a body in the message, set it to `false`.
+Default: `1024`
 
 ```sh
-aicommit2 config set ignoreBody="false"
+aicommit2 config set maxTokens=3000
 ```
 
-![ignore_body_false](https://github.com/tak-bro/aicommit2/blob/main/img/demo_body_min.gif?raw=true)
+##### topP
+
+Default: `1`
+
+Nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
 
 ```sh
-aicommit2 config set ignoreBody="true"
+aicommit2 config set topP=0.2
 ```
 
-![ignore_body_true](https://github.com/tak-bro/aicommit2/blob/main/img/ignore_body_true.png?raw=true)
+## Available General Settings by Model
+
+|                      | timeout | temperature | maxTokens | topP |
+|:--------------------:|:-------:|:-----------:|:---------:|:----:|
+| **OpenAI**           |    ✓    |      ✓      |     ✓     |  ✓   |
+| **Anthropic Claude** |         |      ✓      |     ✓     |      |
+| **Gemini**           |         |      ✓      |     ✓     |      |
+| **Mistral AI**       |    ✓    |      ✓      |     ✓     |  ✓   |
+| **Codestral**        |    ✓    |      ✓      |     ✓     |  ✓   |
+| **Cohere**           |         |      ✓      |     ✓     |      |
+| **Groq**             |    ✓    |      ✓      |     ✓     |  ✓   |
+| **Perplexity**       |    ✓    |      ✓      |     ✓     |  ✓   |
+| **DeepSeek**         |    ✓    |      ✓      |     ✓     |  ✓   |
+| **Huggingface**      |         |             |           |      |
+| **Ollama**           |    ✓    |      ✓      |           |      |
+
+> All AIs support the following options in General Settings.
+> - systemPrompt, systemPromptPath, exclude, type, locale, generate, logging, ignoreBody, maxLength
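
The `topP` setting introduced in the hunk above can be pictured with a short sketch. This is an illustration of the nucleus-sampling concept only, not code from aicommit2; the `nucleus` helper and the token probabilities are invented for the example.

```typescript
// Illustrative only (not aicommit2 code): topP keeps the smallest set of
// highest-probability tokens whose cumulative probability reaches topP,
// and the model samples the next token only from that set.
function nucleus(probs: Record<string, number>, topP: number): string[] {
    // Sort candidate tokens by descending probability.
    const sorted = Object.entries(probs).sort((a, b) => b[1] - a[1]);
    const kept: string[] = [];
    let cumulative = 0;
    for (const [token, p] of sorted) {
        kept.push(token);
        cumulative += p;
        if (cumulative >= topP) break; // nucleus reached; drop the rest
    }
    return kept;
}

// With topP=0.8, the low-probability "chore" token is cut from consideration.
console.log(nucleus({ fix: 0.5, feat: 0.4, chore: 0.1 }, 0.8));
```

A lower `topP` therefore narrows the candidate set and makes output more deterministic, which is why the README groups it with `temperature` among the sampling-related settings.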
 
 ## Model-Specific Settings
 
@@ -388,7 +418,6 @@ aicommit2 config set ignoreBody="true"
 | `url`   | API endpoint URL | https://api.openai.com |
 | `path`  | API path         | /v1/chat/completions   |
 | `proxy` | Proxy settings   | -                      |
-| `topP`  | Nucleus sampling | 1                      |
 
 ##### OPENAI.key

@@ -482,6 +511,7 @@ aicommit2 config set OLLAMA.timeout=<timeout>
 Ollama does not support the following options in General Settings.
 
 - maxTokens
+- topP
 
 ### HuggingFace

@@ -524,6 +554,7 @@ Huggingface does not support the following options in General Settings.
 - maxTokens
 - timeout
 - temperature
+- topP
 
 ### Gemini

@@ -558,6 +589,7 @@ aicommit2 config set GEMINI.model="gemini-1.5-pro-exp-0801"
 Gemini does not support the following options in General Settings.
 
 - timeout
+- topP
 
 ### Anthropic

@@ -589,14 +621,14 @@ aicommit2 config set ANTHROPIC.model="claude-3-5-sonnet-20240620"
 Anthropic does not support the following options in General Settings.
 
 - timeout
+- topP
 
 ### Mistral
 
 | Setting | Description      | Default        |
 |---------|------------------|----------------|
 | `key`   | API key          | -              |
 | `model` | Model to use     | `mistral-tiny` |
-| `topP`  | Nucleus sampling | 1              |
 
 ##### MISTRAL.key

@@ -622,23 +654,12 @@ Supported:
 - `mistral-large-2402`
 - `mistral-embed`
 
-##### MISTRAL.topP
-
-Default: `1`
-
-Nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
-
-```sh
-aicommit2 config set MISTRAL.topP=0.2
-```
-
 ### Codestral
 
 | Setting | Description      | Default            |
 |---------|------------------|--------------------|
 | `key`   | API key          | -                  |
 | `model` | Model to use     | `codestral-latest` |
-| `topP`  | Nucleus sampling | 1                  |
 
 ##### CODESTRAL.key

@@ -656,16 +677,6 @@ Supported:
 aicommit2 config set CODESTRAL.model="codestral-2405"
 ```
 
-##### CODESTRAL.topP
-
-Default: `1`
-
-Nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
-
-```sh
-aicommit2 config set CODESTRAL.topP=0.1
-```
-
 #### Cohere
 
 | Setting | Description | Default |
@@ -696,6 +707,7 @@ aicommit2 config set COHERE.model="command-nightly"
 Cohere does not support the following options in General Settings.
 
 - timeout
+- topP
 
 ### Groq

@@ -721,6 +733,8 @@ Supported:
 - `llama3-8b-8192`
 - `llama3-groq-70b-8192-tool-use-preview`
 - `llama3-groq-8b-8192-tool-use-preview`
+- `llama-guard-3-8b`
+- `mixtral-8x7b-32768`
 
 ```sh
 aicommit2 config set GROQ.model="llama3-8b-8192"

@@ -732,7 +746,6 @@ aicommit2 config set GROQ.model="llama3-8b-8192"
 |----------|------------------|-----------------------------------|
 | `key`    | API key          | -                                 |
 | `model`  | Model to use     | `llama-3.1-sonar-small-128k-chat` |
-| `topP`   | Nucleus sampling | 1                                 |
 
 ##### PERPLEXITY.key

@@ -758,14 +771,27 @@ Supported:
 aicommit2 config set PERPLEXITY.model="llama-3.1-70b"
 ```
 
-##### PERPLEXITY.topP
+### DeepSeek
 
-Default: `1`
+| Setting | Description  | Default          |
+|---------|--------------|------------------|
+| `key`   | API key      | -                |
+| `model` | Model to use | `deepseek-coder` |
 
-Nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
+##### DEEPSEEK.key
+
+The DeepSeek API key. If you don't have one, please sign up and subscribe on the [DeepSeek Platform](https://platform.deepseek.com/).
+
+##### DEEPSEEK.model
+
+Default: `deepseek-coder`
+
+Supported:
+- `deepseek-coder`
+- `deepseek-chat`
 
 ```sh
-aicommit2 config set PERPLEXITY.topP=0.3
+aicommit2 config set DEEPSEEK.model="deepseek-chat"
 ```
 
 ## Upgrading

package.json

Lines changed: 2 additions & 1 deletion
@@ -33,7 +33,8 @@
         "cohere",
         "groq",
         "codestral",
-        "perplexity"
+        "perplexity",
+        "deepseek"
     ],
     "license": "MIT",
     "repository": "tak-bro/aicommit2",

src/managers/ai-request.manager.ts

Lines changed: 7 additions & 0 deletions
@@ -6,6 +6,7 @@ import { AIServiceFactory } from '../services/ai/ai-service.factory.js';
 import { AnthropicService } from '../services/ai/anthropic.service.js';
 import { CodestralService } from '../services/ai/codestral.service.js';
 import { CohereService } from '../services/ai/cohere.service.js';
+import { DeepSeekService } from '../services/ai/deep-seek.service.js';
 import { GeminiService } from '../services/ai/gemini.service.js';
 import { GroqService } from '../services/ai/groq.service.js';
 import { HuggingFaceService } from '../services/ai/hugging-face.service.js';
@@ -90,6 +91,12 @@ export class AIRequestManager {
                     stagedDiff: this.stagedDiff,
                     keyName: ai,
                 }).generateCommitMessage$();
+            case 'DEEPSEEK':
+                return AIServiceFactory.create(DeepSeekService, {
+                    config: this.config.DEEPSEEK,
+                    stagedDiff: this.stagedDiff,
+                    keyName: ai,
+                }).generateCommitMessage$();
             default:
                 const prefixError = chalk.red.bold(`[${ai}]`);
                 return of({
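
The new `DEEPSEEK` case follows the manager's existing switch-on-provider pattern: one branch per provider key, each constructing a service through the factory. The sketch below reduces that pattern to a self-contained illustration; the `CommitService` interface and `DeepSeekStub`/`createService` names are invented for the example and are not the project's actual API.

```typescript
// Self-contained sketch of the dispatch pattern used in AIRequestManager.
// All names here are illustrative, not aicommit2's real types.
interface CommitService {
    generateCommitMessage(): string;
}

class DeepSeekStub implements CommitService {
    constructor(private readonly model: string) {}

    generateCommitMessage(): string {
        // A real service would call the provider's chat-completions API here.
        return `feat: message from ${this.model}`;
    }
}

function createService(provider: string): CommitService {
    switch (provider) {
        case 'DEEPSEEK':
            // Mirrors the new case in the diff: construct the service
            // with its provider-specific configuration.
            return new DeepSeekStub('deepseek-coder');
        default:
            throw new Error(`Unsupported provider: ${provider}`);
    }
}

console.log(createService('DEEPSEEK').generateCommitMessage());
```

Because every provider resolves through the same switch, adding a backend like DeepSeek only requires a new import, a new case, and a config entry — which is exactly the shape of this diff.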

src/services/ai/codestral.service.ts

Lines changed: 3 additions & 0 deletions
@@ -115,6 +115,9 @@ export class CodestralService extends AIService {
             stream: false,
             safe_prompt: false,
             random_seed: getRandomNumber(10, 1000),
+            response_format: {
+                type: 'json_object',
+            },
         })
         .execute();
     const result: CreateChatCompletionsResponse = response.data;
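
Adding `response_format: { type: 'json_object' }` asks the Codestral (Mistral-style) API to return the completion as valid JSON, so the service can parse suggestions instead of scraping free-form text. A minimal sketch of that parsing step, assuming a hypothetical `{"commits": [...]}` payload shape — the real schema is whatever aicommit2's prompt requests:

```typescript
// Sketch only: parsing a JSON-mode chat completion. The `commits` key is a
// hypothetical payload shape, not the actual schema aicommit2 uses.
interface ChatChoice {
    message: { content: string };
}

function extractCommitMessages(choices: ChatChoice[]): string[] {
    return choices.flatMap(choice => {
        // With json_object enabled, content should already be valid JSON,
        // so no regex scraping or fallback repair is needed.
        const parsed = JSON.parse(choice.message.content) as { commits: string[] };
        return parsed.commits;
    });
}

const sample: ChatChoice[] = [
    { message: { content: '{"commits": ["fix: handle empty diff", "chore: bump deps"]}' } },
];
console.log(extractCommitMessages(sample));
```

This is likely why the change is paired with no other parsing edits: forcing JSON output at the API level keeps the downstream result handling unchanged but more reliable.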
