word ботяра removed from the LLM prompt #97
Conversation
Walkthrough

The update introduces preprocessing of incoming message text: the bot mention "ботяра" and any trailing punctuation are removed from the message before the prompt is sent to the LLM API.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Bot
    participant LLM_API
    User->>Bot: Send message (may include "ботяра")
    Bot->>Bot: Preprocess message (remove "ботяра" and punctuation)
    Bot->>LLM_API: Send cleaned prompt
    LLM_API-->>Bot: Return response
    Bot-->>User: Send LLM response
```
Actionable comments posted: 0
🧹 Nitpick comments (1)
src/main.py (1)
456-465: Consider using a single `with` statement for better readability

The nested `with` statements can be combined into a single statement with multiple contexts.

```diff
- async with aiohttp.ClientSession() as session:
-     async with session.post(
-         f"{LLM_API_ADDR}/api/generate",
-         json={
-             "model": LLM_MODEL,
-             "prompt": prompt,
-             "stream": False,
-             "num_predict": 200
-         },
-     ) as response:
+ async with aiohttp.ClientSession() as session, session.post(
+     f"{LLM_API_ADDR}/api/generate",
+     json={
+         "model": LLM_MODEL,
+         "prompt": prompt,
+         "stream": False,
+         "num_predict": 200
+     },
+ ) as response:
```

🧰 Tools
🪛 Ruff (0.8.2)
456-465: Use a single `with` statement with multiple contexts instead of nested `with` statements (SIM117)
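The SIM117 rewrite above concerns the project's aiohttp code, which can't be run standalone; as a minimal, self-contained sketch, the same transformation with dummy context managers shows that the nested and combined styles are behaviorally identical:

```python
from contextlib import contextmanager

@contextmanager
def resource(name, log):
    # Record entry/exit order so the two styles can be compared.
    log.append(f"enter {name}")
    try:
        yield name
    finally:
        log.append(f"exit {name}")

log_nested, log_single = [], []

# Nested style, flagged by Ruff SIM117:
with resource("session", log_nested):
    with resource("response", log_nested):
        pass

# Combined style with multiple contexts -- one statement, same behavior:
with resource("session", log_single), resource("response", log_single):
    pass

assert log_nested == log_single  # identical enter/exit ordering
```

The outer context is entered first and exited last in both versions, which is why Ruff can suggest the flattening without changing semantics.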
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/main.py (8 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
src/main.py
456-465: Use a single with statement with multiple contexts instead of nested with statements
(SIM117)
🪛 GitHub Actions: 📝 Linters
src/main.py
[error] 1-1: Black formatting check failed. The file would be reformatted. Run 'black --write' to fix code style issues.
🔇 Additional comments (12)
src/main.py (12)
24-24: Addition of regex module for text preprocessing

The addition of the `re` module is necessary for the implementation of the bot mention removal functionality in the `respond_with_llm_message` function.
35-36: Improved readability with proper line breaks

The environment variable assignment has been properly formatted with line breaks for better readability.

107-108: Improved formatting for list comprehension

The list comprehension has been properly formatted with line breaks for better readability.

236-237: Improved formatting for debug statement

The debug statement has been properly formatted with line breaks for better readability.

267-268: Improved formatting for video grouping logic

The list comprehension for grouping videos has been properly formatted with line breaks for better readability.

274-275: Improved formatting for picture grouping logic

The list comprehension for grouping pictures has been properly formatted with line breaks for better readability.

314-317: Improved formatting for info logging

The random choice selection and info logging have been properly formatted with line breaks for better readability.
452-453: Effectively removes "ботяра" from LLM prompts

This is the core change that implements the PR objective. The regex pattern `r'ботяра[^\w\s]*'` effectively removes the bot mention "ботяра" and any trailing punctuation from the message before sending it to the LLM API. This results in cleaner, more focused prompts.
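The surrounding code isn't quoted in this review, so the helper below is hypothetical, but the described preprocessing can be reproduced with the exact pattern cited above:

```python
import re

# Pattern quoted in the review: the mention plus any punctuation
# glued to it (commas, exclamation marks, etc.).
BOT_MENTION = re.compile(r"ботяра[^\w\s]*")

def clean_prompt(text: str) -> str:
    """Strip the bot mention and normalize leftover whitespace."""
    cleaned = BOT_MENTION.sub("", text)
    return re.sub(r"\s+", " ", cleaned).strip()

print(clean_prompt("ботяра, расскажи анекдот"))  # -> расскажи анекдот
```

Note that `[^\w\s]*` only consumes punctuation *after* the mention, so a leading comma as in "привет, ботяра" would survive; whether that matters depends on how users actually address the bot.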
459-464: Improved JSON formatting in API request

The JSON payload for the LLM API request has been properly formatted with line breaks for better readability.

468-469: Improved formatting for response data extraction

The response data extraction has been properly formatted with line breaks for better readability.

513-514: Improved formatting for message handler registration

The message handler registration has been properly formatted with line breaks for better readability.
1-523: ❓ Verification inconclusive
Fix Black formatting issues
The pipeline is reporting that Black formatting check failed. Consider running Black to automatically fix code style issues.
🏁 Script executed:
```bash
#!/bin/bash
# Check what changes Black would make
black --diff src/main.py
```

Length of output: 122
Ensure Black is available and format the code
The CI check failed because `black` is not installed in the environment. Please:

- Add Black to your development dependencies (e.g., `pip install black`, or list it in your dev requirements)
- Run `black src/main.py` locally to apply the formatter
- Commit and push the changes so the pipeline passes
🧰 Tools
🪛 Ruff (0.8.2)
368-368: Use a context manager for opening files
(SIM115)
425-425: Use a context manager for opening files
(SIM115)
456-465: Use a single `with` statement with multiple contexts instead of nested `with` statements (SIM117)
🪛 GitHub Actions: 📝 Linters
[error] 1-1: Black formatting check failed. The file would be reformatted. Run 'black --write' to fix code style issues.
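The SIM115 findings above (lines 368 and 425) flag file handles opened without a context manager; that code isn't shown in this review, so the following is a generic before/after sketch of the fix rather than the project's actual code:

```python
import os
import tempfile

# Hypothetical file path, just for the demo.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# Flagged pattern (SIM115): the handle is closed manually, so an
# exception raised between open() and close() would leak it.
f = open(path, "w")
f.write("hello")
f.close()

# Fixed pattern: the context manager closes the file even on error.
with open(path) as f:
    content = f.read()

print(content)  # -> hello
```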