A lightweight, multi-agent system built with LangGraph that customizes the research process using AI analysts and experts to generate comprehensive reports.
The Research Assistant leverages multiple AI agents to conduct in-depth research on complex topics. The system generates specialized AI analysts, conducts parallel interviews with expert agents, and synthesizes findings into comprehensive reports.
- Dynamic Analyst Generation: Creates specialized AI analysts based on your research topic
- Human-in-the-Loop: Allows refinement of analysts before research begins
- Parallel Processing: Conducts multiple expert interviews simultaneously using map-reduce patterns
- Multi-Source Research: Integrates web search (Tavily) and Wikipedia for comprehensive coverage
- Structured Output: Generates well-formatted reports with introduction, content, and conclusion
- LangGraph Studio Compatible: Includes multiple graph implementations for different use cases
The system consists of several key components:
- Analyst Creation: Generates AI personas focused on different aspects of your topic
- Interview Orchestration: Conducts structured interviews between analysts and experts
- Parallel Research: Multiple interviews run simultaneously for efficiency
- Report Synthesis: Combines all insights into a coherent final report
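The map-reduce flow behind parallel research can be illustrated with plain Python. This is a dependency-free sketch, not the project's actual LangGraph implementation: `run_interview` and `synthesize` are hypothetical stand-ins for the real interview and report-synthesis nodes.

```python
from concurrent.futures import ThreadPoolExecutor

def run_interview(analyst: str) -> str:
    # In the real system this is a multi-turn LLM dialogue between an
    # analyst persona and an expert agent; here it is a placeholder.
    return f"Findings from {analyst}"

def synthesize(findings: list[str]) -> str:
    # Reduce step: merge every interview's output into one report body.
    return "\n".join(findings)

analysts = ["Clinical Analyst", "Policy Analyst", "Ethics Analyst"]

# Map step: run all interviews concurrently.
with ThreadPoolExecutor() as pool:
    findings = list(pool.map(run_interview, analysts))

report = synthesize(findings)
print(report)
```

The same shape — fan out over analysts, then combine — is what the LangGraph graphs express with sub-graphs and map-reduce edges.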
- `research_assistant.py`: Main research workflow
- `parallelization.py`: Demonstrates parallel execution patterns
- `sub_graphs.py`: Shows sub-graph implementations
- `map_reduce.py`: Implements map-reduce patterns for scaling
- Python 3.11+
- OpenAI API key
- Tavily API key (for web search)
- LangSmith API key (optional, for tracing)
- Clone the repository:
git clone https://github.com/hadeelbkh/research-assistant
cd research-assistant
- Create a virtual environment:
python -m venv ra-env
# On Windows:
ra-env\Scripts\activate
# On macOS/Linux:
source ra-env/bin/activate
- Install dependencies:
pip install -r requirements.txt
Set up your API keys:
export OPENAI_API_KEY="your-openai-key"
export TAVILY_API_KEY="your-tavily-key"
export LANGSMITH_API_KEY="your-langsmith-key" # Optional
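A quick way to verify the keys are visible to Python before starting a run (a small helper sketch; `check_api_keys` is not part of the project's code):

```python
import os

def check_api_keys(env) -> list[str]:
    """Return the list of required API keys missing from the environment."""
    required = ["OPENAI_API_KEY", "TAVILY_API_KEY"]
    return [k for k in required if not env.get(k)]

missing = check_api_keys(os.environ)
if missing:
    print(f"Missing required API keys: {', '.join(missing)}")
```

`LANGSMITH_API_KEY` is deliberately excluded from the required list since tracing is optional.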
Open and run `research-assistant.ipynb` for an interactive experience with detailed explanations.
- Navigate to the studio directory:
cd studio
- Deploy with LangGraph Studio:
langgraph dev
- Access the web interface to interact with different graph implementations.
from studio.research_assistant import graph
from langgraph.checkpoint.memory import MemorySaver

# Initialize
memory = MemorySaver()
thread = {"configurable": {"thread_id": "1"}}

# Define research parameters
topic = "The impact of AI on healthcare"
max_analysts = 3

# Run research
for event in graph.stream({
    "topic": topic,
    "max_analysts": max_analysts
}, thread, stream_mode="values"):
    # Process results
    if "final_report" in event:
        print(event["final_report"])
Modify the `analyst_instructions` prompt in `research_assistant.py` to customize how analysts are generated:

analyst_instructions = """You are tasked with creating a set of AI analyst personas..."""
Adjust interview depth by modifying `max_num_turns` in the interview state:

interview_state = {
    "max_num_turns": 5,  # Adjust conversation depth
    # ... other parameters
}
The system supports multiple research sources:
- Web Search: Via Tavily API
- Wikipedia: For encyclopedic information
- Custom Sources: Extend by adding new search functions
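One way to make new sources pluggable is a small registry that fans each query out to every registered search function. This is an illustrative pattern only — `register_source`, `search_all`, and the `local_notes` source are hypothetical names, not the project's actual API:

```python
# Registry mapping source names to search functions.
SEARCH_SOURCES = {}

def register_source(name):
    """Decorator that registers a search function under a source name."""
    def decorator(fn):
        SEARCH_SOURCES[name] = fn
        return fn
    return decorator

@register_source("local_notes")
def search_local_notes(query: str) -> list[str]:
    # Stub: a real implementation might query files or a vector store.
    return [f"note matching '{query}'"]

def search_all(query: str) -> dict:
    # Fan the query out to every registered source.
    return {name: fn(query) for name, fn in SEARCH_SOURCES.items()}

results = search_all("AI in healthcare")
```

Registering a Tavily- or Wikipedia-backed function the same way would slot it into the fan-out without touching the orchestration code.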
research-assistant/
├── research-assistant.ipynb # Interactive tutorial notebook
├── studio/ # LangGraph Studio files
│ ├── research_assistant.py # Main research graph
│ ├── parallelization.py # Parallel execution examples
│ ├── sub_graphs.py # Sub-graph patterns
│ ├── map_reduce.py # Map-reduce implementation
│ ├── langgraph.json # Studio configuration
│ └── requirements.txt # Studio-specific dependencies
├── requirements.txt # Main project dependencies
├── research_assistant.jpg # Architecture diagram
└── README.md # This file
- Creates specialized AI personas based on research topics
- Supports human feedback for refinement
- Generates diverse perspectives for comprehensive coverage
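The shape of an analyst persona can be sketched as a small structured record. The project uses Pydantic models; a dataclass is shown here to keep the sketch dependency-free, and the field names are illustrative rather than the project's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Analyst:
    name: str
    role: str
    focus: str
    questions: list[str] = field(default_factory=list)

    def persona(self) -> str:
        # Rendered description handed to the interview prompt.
        return f"{self.name}, {self.role}, focusing on {self.focus}"

analyst = Analyst(
    name="Dr. Rivera",
    role="Clinical Outcomes Analyst",
    focus="patient safety impacts of AI diagnostics",
)
print(analyst.persona())
```

Human-in-the-loop refinement then amounts to editing these records before the interviews start.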
- Structured conversations between analysts and experts
- Multi-turn dialogues for deep insights
- Automatic question generation and follow-up
- Multiple interviews run simultaneously
- Map-reduce patterns for scalability
- Efficient resource utilization
- Structured output with clear sections
- Source attribution and citations
- Professional formatting
- LangGraph: Multi-agent orchestration
- LangChain: LLM integration and tooling
- OpenAI: Language model provider
- Tavily: Web search API
- Wikipedia: Knowledge base access
- Jupyter: Interactive development
- LangSmith: Observability and tracing
- Pydantic: Data validation and models
- Fork the repository
- Create a feature branch:
git checkout -b feature-name
- Make your changes
- Add tests if applicable
- Submit a pull request
- API Key Errors: Ensure all required API keys are set in environment variables
- Memory Issues: Reduce `max_analysts` for complex topics
- Rate Limiting: Add delays between API calls if hitting rate limits
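For rate limiting, a generic retry-with-exponential-backoff wrapper is often enough. This is a standalone sketch, not code from the project; substitute the actual exception your provider's client raises for `RuntimeError`:

```python
import random
import time

def with_backoff(fn, retries=3, base_delay=1.0):
    """Call fn, retrying on RuntimeError with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except RuntimeError:  # stand-in for the API's rate-limit error
            if attempt == retries - 1:
                raise
            # Exponential backoff with a little jitter: 1s, 2s, 4s, ...
            time.sleep(base_delay * (2 ** attempt) + random.random() * 0.1)
```

Wrapping each search or LLM call in `with_backoff` smooths over transient 429-style failures without changing the surrounding graph logic.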
- Check the Jupyter notebook for detailed examples
- Review LangGraph documentation for advanced patterns
- Open an issue for bug reports or feature requests