
fix: prevent memory leaks in streaming chat responses #676


Open · wants to merge 3 commits into base: main

Conversation

Vijaykv5 (Contributor)
@Vijaykv5 Vijaykv5 commented Aug 1, 2025

Fixes: #466

What this PR does:

Adds buffer size limits and memory management to prevent unbounded growth during long-running streaming sessions. Implements a 1MB buffer limit, 16KB chunk truncation, and a 10MB total message size cap to maintain browser performance during code analysis.

Signed-off-by: vijaykv5 <vijaykv2228@gmail.com>
@Copilot Copilot AI review requested due to automatic review settings August 1, 2025 07:40
@Vijaykv5 Vijaykv5 requested a review from peterj as a code owner August 1, 2025 07:40

@Copilot Copilot AI left a comment


Pull Request Overview

This PR addresses memory leak issues in streaming chat responses by implementing buffer size limits and memory management controls. The changes prevent unbounded memory growth during long-running streaming sessions by enforcing strict limits on buffer sizes and message processing.

  • Adds configurable memory limits (1MB buffer, 16KB chunk truncation, 10MB total message size)
  • Implements buffer truncation logic to prevent excessive memory usage
  • Adds error handling for messages exceeding size limits
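The limits listed above could be sketched roughly as follows. `MAX_BUFFER_SIZE` and `CHUNK_SIZE` appear in the PR's diff; the total-size constant name and the helper function are illustrative assumptions, not the PR's actual code:

```typescript
// Limits described in the PR; MAX_TOTAL_MESSAGE_SIZE is an assumed name.
const MAX_BUFFER_SIZE = 1024 * 1024;           // 1MB line buffer limit
const CHUNK_SIZE = 16 * 1024;                  // 16KB kept after truncation
const MAX_TOTAL_MESSAGE_SIZE = 10 * 1024 * 1024; // 10MB total message cap

// Hypothetical helper: once the buffer exceeds the limit, keep only the
// most recent CHUNK_SIZE characters so memory stays bounded.
function enforceBufferLimit(buffer: string): string {
  return buffer.length > MAX_BUFFER_SIZE ? buffer.slice(-CHUNK_SIZE) : buffer;
}
```

The key design point is that truncation keeps the tail of the buffer, since the newest streamed data is the part still being parsed.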

@@ -119,9 +124,26 @@ export class KagentA2AClient {
      if (done) {
        break;
      }
      processedSize += value.length;

Copilot AI Aug 1, 2025


The processedSize is tracking the raw byte length of chunks, but buffer length is measured in characters after decoding. This mismatch could cause incorrect size calculations, especially with multi-byte UTF-8 characters. Consider tracking decoded character length instead or use value.byteLength for consistent byte-based measurement.

Suggested change
processedSize += value.length;
processedSize += value.byteLength;

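The mismatch the reviewer describes can be seen with a multi-byte character. This is an illustrative sketch, not code from the PR: the encoded chunk's byte count and the decoded string's character count diverge as soon as non-ASCII text appears:

```typescript
// "é" (U+00E9) encodes to 2 bytes in UTF-8, so the chunk is 6 bytes
// on the wire but only 5 characters after decoding.
const encoder = new TextEncoder();
const decoder = new TextDecoder();

const chunk: Uint8Array = encoder.encode("héllo");
const decoded: string = decoder.decode(chunk);

console.log(chunk.byteLength, decoded.length); // 6 5
```

Tracking `value.byteLength` (or, equivalently for `Uint8Array`, `value.length`) measures bytes, while `buffer.length` after decoding measures UTF-16 code units; comparing the two against a single limit silently under- or over-counts for non-ASCII streams.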


      // If the last line is still too large, truncate it
      if (buffer.length > MAX_BUFFER_SIZE) {
        buffer = buffer.slice(-CHUNK_SIZE);

Copilot AI Aug 1, 2025


Using slice(-CHUNK_SIZE) could truncate in the middle of a multi-byte UTF-8 character, potentially corrupting the data. This could lead to decoding errors or malformed SSE events. Consider implementing UTF-8 safe truncation or truncating at character boundaries.

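Since `buffer` here is an already-decoded JavaScript string, the practical hazard of a blind `slice(-CHUNK_SIZE)` is splitting a UTF-16 surrogate pair (e.g. an emoji) rather than a raw UTF-8 sequence. One possible boundary-safe truncation, an assumption for illustration rather than the PR's implementation:

```typescript
// Drop a leading low surrogate after slicing so the truncated buffer
// never starts in the middle of a surrogate pair.
function safeTruncate(buffer: string, keep: number): string {
  let out = buffer.slice(-keep);
  const first = out.charCodeAt(0);
  // 0xDC00-0xDFFF is the low-surrogate range; starting there means the
  // matching high surrogate was cut off by the slice.
  if (first >= 0xdc00 && first <= 0xdfff) {
    out = out.slice(1);
  }
  return out;
}
```

An alternative is to decode with `new TextDecoder("utf-8", { fatal: false })` in streaming mode before any slicing, which keeps multi-byte sequences intact at the byte level.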

      // If the last line is still too large, truncate it
      if (buffer.length > MAX_BUFFER_SIZE) {
        buffer = buffer.slice(-CHUNK_SIZE);
        console.warn("Buffer truncated due to size limit");

Copilot AI Aug 1, 2025


Using console.warn for production logging may not be appropriate. Consider using a proper logging framework or callback mechanism to handle this warning, as it could indicate data loss that the application should handle gracefully.

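A callback hook along the lines the reviewer suggests might look like this. The function and parameter names are hypothetical, chosen only to illustrate surfacing the data loss to the caller instead of logging it:

```typescript
// Hypothetical notification hook: the caller decides how to react to
// truncation (metrics, UI warning, aborting the stream) rather than
// the client hard-coding console.warn.
type TruncationHandler = (droppedChars: number) => void;

function truncateWithNotify(
  buffer: string,
  maxSize: number,
  keep: number,
  onBufferTruncated: TruncationHandler = () => {}
): string {
  if (buffer.length <= maxSize) return buffer;
  const truncated = buffer.slice(-keep);
  onBufferTruncated(buffer.length - truncated.length); // report data loss
  return truncated;
}
```

This keeps the memory-limiting policy in the client while letting the application treat truncation as a recoverable event rather than a silent log line.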

Development

Successfully merging this pull request may close these issues.

[BUG] Add buffer size limits to prevent memory leaks in streaming