support multi turn questions #420

Merged: 1 commit into main from devv, Jul 20, 2025

Conversation

helloyongyang (Collaborator)

No description provided.

@gemini-code-assist (bot) left a comment

Summary of Changes

Hello @helloyongyang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the LLaVA model's capabilities by adding robust support for multi-turn question answering. It ensures that the model can maintain conversational context and efficiently process subsequent questions by reusing image token pruning decisions from the initial turn. A new evaluation mode is also introduced to facilitate testing of these multi-turn interactions.

Highlights

  • Multi-Turn Question Answering Support: Introduced core functionality to enable the LLaVA model to handle multi-turn conversations, allowing it to maintain context across successive questions related to the same image(s). This is crucial for interactive AI applications.
  • Context-Aware Token Reduction: Modified the random token pruning mechanism to be aware of conversational turns. Image token pruning is now applied only during the first turn of a multi-turn conversation, and the resulting pruned indices are reused for subsequent turns, ensuring consistency and efficiency.
  • New Multi-Turn Evaluation Pipeline: Added a new evaluation class (CustomGenerateJustInfer) and integrated it into the evaluation utility. This pipeline supports inferring responses for custom multi-turn question datasets, making it easier to test and validate the new conversational capabilities. A rough sketch of the multi-turn loop follows this list.
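
To make the flow concrete, here is a rough sketch of the loop. It assumes Hugging Face-style generate semantics and a hypothetical prompt-building helper (build_prompt_ids); the images= keyword is likewise an assumption, and only the first_turn_question flag comes from the diff reviewed below. Treat this as an illustration, not the PR's code.

import torch

def infer_multi_turn(model, tokenizer, image, questions, max_new_tokens=128):
    """Ask several questions about one image, carrying history forward."""
    history_ids = None
    answers = []
    for turn, question in enumerate(questions):
        # The pruning hook keys off this flag: prune image tokens on the
        # first turn, then reuse the cached indices on later turns.
        model.first_turn_question = (turn == 0)
        input_ids = build_prompt_ids(tokenizer, question, history_ids)  # hypothetical helper
        with torch.no_grad():
            output_ids = model.generate(input_ids, images=image,
                                        max_new_tokens=max_new_tokens)
        # Decode only the newly generated tokens as this turn's answer.
        answers.append(tokenizer.decode(output_ids[0, input_ids.shape[1]:],
                                        skip_special_tokens=True))
        history_ids = output_ids  # the output already echoes the prompt
    return answers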

@helloyongyang merged commit 8edf37b into main on Jul 20, 2025
2 checks passed
@helloyongyang deleted the devv branch on July 20, 2025 at 09:45
@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces support for multi-turn conversations in the LLaVA model and adds a corresponding evaluation utility. There are opportunities for code cleanup, such as removing debug statements and improving inheritance. Addressing these issues will improve the correctness and maintainability of the new functionality.


custom_samples_ans[data_idx]['answer'].append(outputs[0])

input_ids_old = torch.cat((input_ids, output_ids), dim=1)

critical

The history input_ids_old is updated by concatenating input_ids with output_ids. Since output_ids already includes input_ids, this leads to duplicated context. Update the history by assigning output_ids to input_ids_old.

Suggested change
input_ids_old = torch.cat((input_ids, output_ids), dim=1)
input_ids_old = output_ids
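
For context on why the concatenation double-counts: with Hugging Face-style decoder-only generation, the returned sequence already begins with the prompt tokens, so output_ids is by itself the full history. A minimal sketch under that assumption:

import torch

input_ids = torch.tensor([[101, 102, 103]])    # prompt tokens
new_tokens = torch.tensor([[201, 202]])        # freshly generated answer tokens
output_ids = torch.cat((input_ids, new_tokens), dim=1)  # what generate returns

# Buggy update: the prompt now appears twice in the history.
duplicated = torch.cat((input_ids, output_ids), dim=1)
assert duplicated.shape[1] == 2 * input_ids.shape[1] + new_tokens.shape[1]

# Correct update: output_ids alone is prompt + answer.
input_ids_old = output_ids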

from .eval_base import BaseEval


class CustomGenerateJustInfer:

high

The class CustomGenerateJustInfer should inherit from BaseEval to maintain consistency with other evaluation classes and ensure compatibility with the evaluation framework.

Suggested change
class CustomGenerateJustInfer:
class CustomGenerateJustInfer(BaseEval):
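
A minimal sketch of what the suggested change might look like; BaseEval's constructor signature is an assumption here, since only the class names and the import appear in the diff:

from .eval_base import BaseEval


class CustomGenerateJustInfer(BaseEval):
    def __init__(self, config):
        # Reuse the shared eval setup (config parsing, dataset loading)
        # rather than re-implementing it; signature assumed.
        super().__init__(config)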

@@ -62,22 +63,37 @@ def input_hook(module, input_args, pruning_paras):
@prefill_wrapper
def random_pruning_hook(module, args, kwargs, pruning_paras):

logger.info(' ========random_pruning_hook======== ')

medium

Consider removing this logger.info call as it seems to be for debugging purposes. Leaving it in could clutter the logs.

rate = pruning_paras['rate']
image_token_start_index = pruning_paras['image_token_start_index']
image_token_length = pruning_paras['image_token_length']

hidden_states = args[0]
causal_mask = kwargs['attention_mask']

logger.info(f'before hidden_states : {hidden_states.shape}')

medium

This logger.info call appears to be for debugging. It should be removed to avoid unnecessary logging in production.

vision_indexes = vision_indexes[rand_idx]

if self.model.first_turn_question:
logger.info(' -----first_turn_question-----')

medium

This logger.info call seems to be for debugging. It should be removed to keep production logs clean.

# save vision_indexes to module
module.register_buffer('vision_indexes', vision_indexes)
else:
logger.info(' -----not first_turn_question-----')

medium

This logger.info call appears to be for debugging. It should be removed to avoid unnecessary logging in production.
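
Putting the two branches together, the pattern is: choose random image-token indices once on the first turn, cache them on the module, and read them back on later turns so the same tokens survive across the conversation. A runnable sketch (vision_indexes, first_turn_question, and the register_buffer call come from the diff; everything else is assumed):

import torch

def select_vision_indexes(module, first_turn, start, length, rate):
    """Return the image-token positions kept for the current turn."""
    if first_turn:
        keep = max(1, int(length * (1.0 - rate)))               # tokens surviving pruning
        rand_idx = torch.randperm(length)[:keep].sort().values  # random, order-preserving
        vision_indexes = torch.arange(start, start + length)[rand_idx]
        module.register_buffer('vision_indexes', vision_indexes)  # cache for later turns
    return module.vision_indexes

layer = torch.nn.Identity()  # stand-in for the hooked decoder layer
first = select_vision_indexes(layer, first_turn=True, start=35, length=576, rate=0.5)
later = select_vision_indexes(layer, first_turn=False, start=35, length=576, rate=0.5)
assert torch.equal(first, later)  # the same image tokens survive every turn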

@@ -115,6 +131,7 @@ def random_pruning_hook(module, args, kwargs, pruning_paras):
position_embeddings[0].resize_as_(new_pe0).copy_(new_pe0)
position_embeddings[1].resize_as_(new_pe0).copy_(new_pe1)

logger.info(f'after hidden_states : {hidden_states.shape}')

medium

Consider removing this logger.info call as it seems to be for debugging purposes. Leaving it in could clutter the logs.

self.eval_cfg = config.eval

@torch.no_grad()
def eval(self, model, eval_pos=None):

medium

The model parameter in the eval method is unused; the model is already available as self.model. Remove the parameter to avoid confusion.

Suggested change
def eval(self, model, eval_pos=None):
def eval(self, eval_pos=None):

self.eval_cfg
)

with open(os.path.join('custom_samples_ans.json'), 'w') as f:

medium

The output filename custom_samples_ans.json is hardcoded. Consider saving the output in the same directory as the input data, using self.eval_cfg.path for better flexibility.

Suggested change
with open(os.path.join('custom_samples_ans.json'), 'w') as f:
with open(os.path.join(self.eval_cfg.path, 'custom_samples_ans.json'), 'w') as f:

Comment on lines +222 to +227
print('--------------------------------')
print(f'data_idx: {data_idx}')
print(f'question_idx: {question_idx}')
print(f'question: {question}')
print(f'outputs: {outputs}')
print('--------------------------------')

medium

These print statements appear to be for debugging. Remove them, or replace them with structured logging (e.g., logger.debug) if the information is worth keeping, so they do not clutter standard output. A sketch of the logging variant follows.
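
A sketch with stand-in values; the record is emitted only when DEBUG logging is enabled, so normal runs stay quiet:

import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

data_idx, question_idx = 0, 1                            # stand-in values
question, outputs = 'What is in the image?', ['a cat']   # stand-in values

# One structured record replaces the six prints.
logger.debug('data_idx=%s question_idx=%s question=%r outputs=%r',
             data_idx, question_idx, question, outputs)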
