A comprehensive AI-powered code review platform that leverages multiple Large Language Models (LLMs) to provide thorough, multi-dimensional analysis of Python codebases. The system combines the power of Groq (Llama) and Google Gemini models to deliver detailed feedback across eight critical code quality dimensions.
- Dual LLM Architecture: Utilizes both Groq (Llama 3.3 70B) and Google Gemini for comprehensive reviews
- Specialized Agent System: Eight dedicated AI agents, each focused on specific code quality aspects
- Comparative Analysis: Side-by-side reviews from different models for balanced insights
- Correctness: Logic validation, bug detection, and edge case identification
- Readability: Code clarity, naming conventions, and maintainability
- Documentation: Docstring quality and code documentation standards
- Security: Vulnerability detection and secure coding practices
- Performance: Optimization opportunities and efficiency analysis
- Structure: Code organization and architectural patterns
- Error Handling: Exception management and robustness evaluation
- Test Coverage: Testing strategy and coverage assessment
- React + TypeScript Frontend: Modern, responsive dashboard
- Dark/Light Mode: User preference support
- Real-time Progress: Live analysis progress tracking
- File Navigation: Tabbed interface for multi-file repositories
- Expandable Sections: Organized review presentation
- Direct Repository Analysis: Input any public GitHub repository URL
- Automated File Discovery: Scans and analyzes all Python files
- RESTful API: Clean backend API for extensibility
```
Agentic-AI/
├── Backend/                  # Django REST API
│   ├── manage.py             # Django management
│   ├── codeagent/            # Core Django project
│   │   ├── settings.py       # Configuration
│   │   ├── urls.py           # URL routing
│   │   └── wsgi.py           # WSGI application
│   ├── review/               # Review application
│   │   ├── views.py          # API endpoints
│   │   ├── logic.py          # Core review logic
│   │   ├── models.py         # Data models
│   │   └── urls.py           # App URL patterns
│   └── prompts/              # AI agent prompts
│       ├── correctness.txt   # Logic validation prompts
│       ├── security.txt      # Security analysis prompts
│       ├── performance.txt   # Performance review prompts
│       └── ...               # Other specialized prompts
├── Frontend/                 # React TypeScript app
│   ├── src/
│   │   ├── App.tsx           # Main application component
│   │   ├── main.tsx          # Application entry point
│   │   └── index.css         # Styling
│   ├── package.json          # Dependencies
│   └── vite.config.ts        # Build configuration
├── agent.py                  # Simple AI agent example
├── codereview.py             # Single-model reviewer
└── modelfusion.py            # Multi-model fusion system
```
- Python 3.10+
- Node.js 18+
- npm or yarn
- API keys for:
  - Groq (Llama models)
  - Google Gemini
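Both keys are read from a `.env` file via python-dotenv (installed during backend setup below). A minimal loading sketch, assuming `.env` sits next to `manage.py`:

```python
# Minimal sketch of loading the API keys; assumes a .env file alongside manage.py.
import os
from dotenv import load_dotenv

load_dotenv()  # populates os.environ from the .env file

GROQ_API_KEY = os.getenv("GROQ_API_KEY")
GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
```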
1. Clone the repository

   ```bash
   git clone https://github.com/Hariharanpugazh/Agentic-AI.git
   cd Agentic-AI
   ```

2. Backend Setup

   ```bash
   cd Backend

   # Create virtual environment
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate

   # Install dependencies
   pip install django djangorestframework django-cors-headers
   pip install groq google-generativeai python-dotenv requests tqdm

   # Set up environment variables
   echo "GROQ_API_KEY=your_groq_api_key_here" > .env
   echo "GEMINI_API_KEY=your_gemini_api_key_here" >> .env

   # Run migrations
   python manage.py migrate

   # Start Django server
   python manage.py runserver
   ```

3. Frontend Setup

   ```bash
   cd Frontend

   # Install dependencies
   npm install

   # Start development server
   npm run dev
   ```

4. Access the Application
   - Frontend: http://localhost:5173
   - Backend API: http://localhost:8000
- Open the dashboard at http://localhost:5173
- Enter a GitHub repository URL (e.g., `https://github.com/user/repo`)
- Press Enter or click the analyze button
- View comprehensive AI-powered reviews across all code quality dimensions
```bash
python agent.py
```

Basic example of AI-powered code review for a single function.

```bash
python codereview.py
# Enter GitHub repo URL when prompted
```

Command-line tool using Groq/Llama for repository analysis.

```bash
python modelfusion.py
# Enter GitHub repo URL when prompted
```

Advanced CLI tool combining Groq and Gemini for comprehensive analysis.
```http
POST /review/review_repo/
Content-Type: application/json

{
  "repo_url": "https://github.com/username/repository"
}
```
Response Structure:

```json
{
  "review": {
    "file_path.py": {
      "correctness": {
        "groq": "Analysis from Groq model...",
        "gemini": "Analysis from Gemini model..."
      },
      "security": {
        "groq": "Security review from Groq...",
        "gemini": "Security review from Gemini..."
      },
      // ... other dimensions
    }
  }
}
```
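The endpoint can also be exercised from a short script. A minimal sketch using `requests`, assuming the Django server is running locally on port 8000:

```python
# Minimal client sketch for the review endpoint; assumes the backend
# is running at http://localhost:8000 (see the setup steps above).
import requests

resp = requests.post(
    "http://localhost:8000/review/review_repo/",
    json={"repo_url": "https://github.com/username/repository"},
    timeout=600,  # full-repository reviews can take several minutes
)
resp.raise_for_status()

# Print the correctness feedback from both models for each analyzed file.
for file_path, dimensions in resp.json()["review"].items():
    print(f"== {file_path} ==")
    print("Groq:  ", dimensions["correctness"]["groq"])
    print("Gemini:", dimensions["correctness"]["gemini"])
```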
Each agent is powered by specialized prompts designed for a specific type of analysis (a dispatch sketch follows the table):
| Agent | Focus Area | Key Capabilities |
|---|---|---|
| Correctness | Logic & Bugs | Edge cases, logic errors, undefined variables |
| Readability | Code Clarity | Naming conventions, code structure, maintainability |
| Documentation | Docstrings | API documentation, comment quality |
| Security | Vulnerabilities | Hardcoded secrets, injection risks, insecure functions |
| Performance | Optimization | Algorithm efficiency, resource usage, bottlenecks |
| Structure | Architecture | Code organization, design patterns |
| Error Handling | Robustness | Exception management, error recovery |
| Test Coverage | Quality Assurance | Testing strategies, coverage analysis |
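A hedged sketch of that dispatch, assuming agent identifiers match the prompt file names (the actual implementation lives in `Backend/review/logic.py` and may differ). `ask_groq` and `ask_gemini` are hypothetical wrappers around the two model clients; one possible shape for them appears in a later sketch.

```python
# Illustrative sketch only: the real dispatch lives in Backend/review/logic.py.
# Agent identifiers are assumed to match the prompt file names.
from pathlib import Path

AGENT_NAMES = [
    "correctness", "readability", "documentation", "security",
    "performance", "structure", "error_handling", "test_coverage",
]

def load_prompt(agent: str) -> str:
    """Read an agent's specialized prompt from Backend/prompts/<agent>.txt."""
    return (Path("Backend/prompts") / f"{agent}.txt").read_text()

def review_file(code: str) -> dict:
    """Run every agent over one file and collect both models' feedback."""
    return {
        agent: {
            "groq": ask_groq(load_prompt(agent), code),      # hypothetical wrapper
            "gemini": ask_gemini(load_prompt(agent), code),  # hypothetical wrapper
        }
        for agent in AGENT_NAMES
    }
```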
- API Key Management: Store sensitive keys in environment variables
- CORS Configuration: Properly configured for frontend-backend communication
- Input Validation: GitHub URL validation and sanitization (a validation sketch follows this list)
- Rate Limiting: Consider implementing for production use
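As a rough illustration of the validation step, a regex check of this shape could be used; the backend's actual check may differ:

```python
# Hedged sketch of GitHub repository URL validation; the shape of the
# actual check in the backend may differ.
import re

GITHUB_REPO_RE = re.compile(r"^https://github.com/[\w.-]+/[\w.-]+/?$")

def is_valid_repo_url(url: str) -> bool:
    """Accept only plain https://github.com/<owner>/<repo> URLs."""
    return bool(GITHUB_REPO_RE.match(url.strip()))
```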
- Responsive Design: Tailwind CSS for modern, mobile-friendly interface
- Dark/Light Mode: System preference detection and manual toggle
- Progress Tracking: Real-time analysis progress indicators
- Code Formatting: Syntax highlighting and formatted output
- Expandable Sections: Organized review presentation with collapsible panels
- Multi-tab Navigation: Easy switching between analyzed files
- Create a new prompt file in `Backend/prompts/`
- Add the agent name to `AGENT_NAMES` in `logic.py` (see the sketch after this list)
- Update the frontend interfaces in `App.tsx`
- Test with sample repositories
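For instance, registering a hypothetical `type_hints` agent would mean adding `Backend/prompts/type_hints.txt` and extending the list (identifiers here are assumed; check `logic.py` for the exact names):

```python
# Hypothetical example: registering a new "type_hints" agent in logic.py.
# The prompt text itself goes in Backend/prompts/type_hints.txt.
AGENT_NAMES = [
    "correctness", "readability", "documentation", "security",
    "performance", "structure", "error_handling", "test_coverage",
    "type_hints",  # new agent; name must match the prompt file name
]
```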
- Add a new client initialization in `logic.py` (a sketch of the existing pattern follows this list)
- Implement model-specific functions
- Update the review combination logic
- Add configuration options
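For reference, the existing pair of clients can be initialized along these lines, and a new provider would follow the same pattern. This is a sketch, not the project's exact code; the Gemini model identifier in particular is an assumption:

```python
# Sketch of the dual-client setup; check Backend/review/logic.py for the
# project's actual code. The Gemini model name below is an assumption.
import os
from dotenv import load_dotenv
from groq import Groq
import google.generativeai as genai

load_dotenv()

groq_client = Groq(api_key=os.getenv("GROQ_API_KEY"))
genai.configure(api_key=os.getenv("GEMINI_API_KEY"))
gemini_model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model id

def ask_groq(prompt: str, code: str) -> str:
    """One possible shape for the Groq wrapper used in the sketches above."""
    resp = groq_client.chat.completions.create(
        model="llama-3.3-70b-versatile",  # Groq's Llama 3.3 70B model id
        messages=[{"role": "user", "content": f"{prompt}\n\n{code}"}],
    )
    return resp.choices[0].message.content

def ask_gemini(prompt: str, code: str) -> str:
    """One possible shape for the Gemini wrapper."""
    return gemini_model.generate_content(f"{prompt}\n\n{code}").text
```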
- Concurrent Processing: Parallel API calls to different LLM providers (see the sketch after this list)
- Caching Strategy: Consider implementing Redis for repeated analyses
- Rate Limiting: Built-in handling for API rate limits
- Progress Tracking: Real-time feedback for long-running analyses
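A minimal sketch of that fan-out using only the standard library, reusing the hypothetical `ask_groq`/`ask_gemini` wrappers from the earlier sketch:

```python
# Hedged sketch: query both providers in parallel for one review dimension.
from concurrent.futures import ThreadPoolExecutor

def review_dimension(prompt: str, code: str) -> dict:
    """Call Groq and Gemini concurrently and pair their answers."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        groq_future = pool.submit(ask_groq, prompt, code)
        gemini_future = pool.submit(ask_gemini, prompt, code)
        return {"groq": groq_future.result(), "gemini": gemini_future.result()}
```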
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
- Multi-language Support: Extend beyond Python to JavaScript, Java, C++
- Advanced Metrics: Code complexity scores and quality ratings
- Integration APIs: GitHub App, VS Code extension
- Custom Agents: User-defined review criteria
- Report Generation: PDF/HTML export functionality
- Historical Tracking: Repository improvement over time
- Team Collaboration: Shared reviews and comments
- Large repositories may require extended processing time
- API rate limits may affect analysis speed
- Some edge cases in file parsing for complex repository structures remain unhandled
This project is licensed under the MIT License - see the LICENSE file for details.
- Groq for providing fast Llama model inference
- Google for Gemini API access
- Django and React communities for excellent frameworks
- GitHub for repository hosting and API access
For questions, issues, or contributions:
- Create an issue on GitHub
- Bug reports are always welcome
- Feature requests and suggestions appreciated
Built with care by Hariharanpugazh
Empowering developers with AI-driven code quality insights