A modern web-based code editor that combines Monaco Editor with TypeScript Language Server Protocol (LSP) support, AI-powered error analysis, and inline code completions. The system provides real-time TypeScript diagnostics with intelligent fix suggestions and GitHub Copilot-style completions powered by Claude AI.
The project consists of three main components:
- Client - React-based Monaco editor with LSP integration
- Bridge Server - WebSocket bridge to TypeScript Language Server
- AI Server - Express server providing AI-powered code analysis
Monaco Editor (Browser) <--WebSocket--> Bridge Server <--stdio--> TypeScript LSP
        |
        v
AI Server (Port 3002) <--HTTP--> Anthropic Claude API
- 📝 Monaco Editor with full TypeScript support
- 🔌 Language Server Protocol integration for real-time diagnostics
- 🤖 AI-Powered Fixes using Claude 4 Sonnet for intelligent code suggestions
- ✨ Inline AI Completions - GitHub Copilot-style code completions as you type
- 📊 Activity Logging with tabbed interface for LSP and AI activity
- 🔧 Auto-fix Integration - Apply AI suggestions directly in the editor
- 💾 Smart Caching for improved performance
- 🔒 Rate Limiting to prevent API abuse
- 🎨 Clean UI with tabbed panels for AI fixes and logs
- ⚡ Fast Completions - Optimized for < 500ms response time
- Node.js 18+
- npm or yarn
- Anthropic API key (for AI features)
- TypeScript Language Server:

      npm install -g typescript-language-server typescript

- Clone the repository:

      git clone <repository-url>
      cd monaco-lsp

- Install dependencies for all components:

      # Install root dependencies
      npm install

      # Install client dependencies
      cd client && npm install

      # Install bridge server dependencies
      cd ../bridge-server && npm install

      # Install AI server dependencies
      cd ../ai-server && npm install

- Configure the AI server:

      cd ai-server
      cp .env.example .env

  Edit `.env` and add your Anthropic API key:

      ANTHROPIC_API_KEY=your-anthropic-api-key
      DEFAULT_MODEL=claude-4-sonnet-20250514

- Start all services (in separate terminals):

      # Terminal 1: Start Bridge server (port 3001)
      cd bridge-server
      npm start

      # Terminal 2: Start AI server (port 3002)
      cd ai-server
      npm run dev

      # Terminal 3: Start client (port 5173)
      cd client
      npm run dev

- Open the application: Navigate to `http://localhost:5173`
monaco-lsp/
├── client/ # React-based Monaco editor
│ ├── src/
│ │ ├── components/ # UI components
│ │ │ ├── AIFixPanel.tsx # AI-powered fix suggestions
│ │ │ ├── LogPanel.tsx # Real-time activity logs
│ │ │ ├── MonacoVSCodeEditor.tsx # Monaco editor wrapper
│ │ │ ├── TabbedPanel.tsx # Tab container
│ │ │ └── index.ts # Barrel exports
│ │ ├── services/ # Business logic
│ │ │ ├── aiAgent.ts # AI agent (functional)
│ │ │ ├── aiCompletions.ts # Inline completions
│ │ │ ├── lspMonitor.ts # LSP health monitoring
│ │ │ └── index.ts # Service exports
│ │ ├── utils/ # Utilities
│ │ │ ├── logger.ts # Event-driven logger
│ │ │ └── index.ts # Utility exports
│ │ ├── lsp/ # LSP integration
│ │ │ └── directLSPSetup.ts # Manual LSP implementation
│ │ ├── types/ # Shared TypeScript types
│ │ │ └── index.ts # Type definitions
│ │ ├── constants/ # Configuration
│ │ │ └── index.ts # API endpoints, config
│ │ ├── hooks/ # Custom React hooks (reserved)
│ │ ├── App.tsx # Main application
│ │ └── main.tsx # Entry point
│ └── package.json
│
├── bridge-server/ # WebSocket LSP bridge
│ ├── src/
│ │ └── index.ts # WebSocket to LSP translation
│ └── package.json
│
└── ai-server/ # AI analysis server
├── src/
│ ├── routes/ # API endpoints
│ ├── services/ # AI and code analysis
│ ├── types/ # TypeScript definitions
│ └── prompts/ # AI prompt templates
└── package.json
- User types code in Monaco Editor
- Editor sends LSP requests via WebSocket to Bridge server (port 3001)
- Bridge server forwards messages to TypeScript Language Server
- LSP sends diagnostics back through Bridge to Monaco
- directLSPSetup.ts converts diagnostics to Monaco markers
- AI Agent processes diagnostics automatically:
- Sends errors to AI server (port 3002)
- Falls back to local patterns if AI unavailable
- Caches suggestions and notifies subscribers
- AIFixPanel updates via subscription pattern
- User applies fixes with one click, updating editor directly
- Inline completions trigger as you type:
- Debounced requests to AI server
- Smart context extraction
- Ghost text appears with Tab to accept
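The diagnostics-to-markers step in the flow above might look roughly like the sketch below. The helper name and the marker owner string are illustrative, not the actual identifiers used in `client/src/lsp/directLSPSetup.ts`; the key detail is the 0-based (LSP) to 1-based (Monaco) position shift.

```typescript
import * as monaco from 'monaco-editor';

// Hypothetical helper illustrating the diagnostics-to-markers step;
// the real implementation lives in client/src/lsp/directLSPSetup.ts.
interface LspDiagnostic {
  range: {
    start: { line: number; character: number };
    end: { line: number; character: number };
  };
  severity: number; // 1 = Error, 2 = Warning, 3 = Info, 4 = Hint
  message: string;
}

function applyDiagnostics(model: monaco.editor.ITextModel, diagnostics: LspDiagnostic[]): void {
  const severityMap: Record<number, monaco.MarkerSeverity> = {
    1: monaco.MarkerSeverity.Error,
    2: monaco.MarkerSeverity.Warning,
    3: monaco.MarkerSeverity.Info,
    4: monaco.MarkerSeverity.Hint,
  };

  const markers = diagnostics.map((d) => ({
    // LSP positions are 0-based; Monaco positions are 1-based.
    startLineNumber: d.range.start.line + 1,
    startColumn: d.range.start.character + 1,
    endLineNumber: d.range.end.line + 1,
    endColumn: d.range.end.character + 1,
    message: d.message,
    severity: severityMap[d.severity] ?? monaco.MarkerSeverity.Error,
  }));

  monaco.editor.setModelMarkers(model, 'typescript-lsp', markers);
}
```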
Analyzes TypeScript code and returns AI-generated fix suggestions.
Request Body:
{
  "code": "const x: string = 123;",
  "diagnostics": [{
    "range": {
      "start": { "line": 0, "character": 18 },
      "end": { "line": 0, "character": 21 }
    },
    "severity": 1,
    "message": "Type 'number' is not assignable to type 'string'."
  }],
  "language": "typescript"
}
Response:
{
  "suggestions": [{
    "id": "1234567890-0",
    "title": "Convert to string",
    "description": "Convert the number to a string using toString()",
    "fix": {
      "range": {
        "startLine": 0,
        "startColumn": 18,
        "endLine": 0,
        "endColumn": 21
      },
      "text": "(123).toString()"
    },
    "confidence": 0.9,
    "explanation": "This converts the number to a string to match the expected type"
  }],
  "model": "claude-4-sonnet-20250514",
  "processingTime": 1234
}
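For illustration, a client could call this endpoint roughly as sketched below. The `/api/analyze` path is an assumption (the documentation above lists only the request and response shapes); check `ai-server/src/routes` for the real route names.

```typescript
// Hedged sketch: the /api/analyze path is an assumption; check
// ai-server/src/routes for the actual endpoint names.
interface FixSuggestion {
  id: string;
  title: string;
  description: string;
  fix: {
    range: { startLine: number; startColumn: number; endLine: number; endColumn: number };
    text: string;
  };
  confidence: number;
  explanation: string;
}

async function requestFixes(code: string, diagnostics: unknown[]): Promise<FixSuggestion[]> {
  const response = await fetch('http://localhost:3002/api/analyze', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ code, diagnostics, language: 'typescript' }),
  });
  if (!response.ok) {
    // The AI agent falls back to local patterns when the server is unavailable.
    return [];
  }
  const data = await response.json();
  return data.suggestions ?? [];
}
```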
Generates inline code completions.
Request Body:
{
  "context": {
    "before": "function calculateTotal(",
    "after": "\n // implementation\n}",
    "language": "typescript"
  },
  "prefix": "function calculateTotal(",
  "language": "typescript"
}
Response:
{
  "completion": "items: Item[]): number {",
  "model": "claude-4-sonnet-20250514",
  "processingTime": 234
}
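Wiring this endpoint into Monaco's ghost-text UI could look roughly like the sketch below. The `/api/complete` path is assumed, and the real provider in `services/aiCompletions.ts` adds debouncing, caching, and richer context extraction.

```typescript
import * as monaco from 'monaco-editor';

// Sketch only: the /api/complete path is an assumption; the real provider
// lives in client/src/services/aiCompletions.ts.
monaco.languages.registerInlineCompletionsProvider('typescript', {
  async provideInlineCompletions(model, position) {
    // Take up to ~20 lines of context before the cursor.
    const before = model.getValueInRange({
      startLineNumber: Math.max(1, position.lineNumber - 20),
      startColumn: 1,
      endLineNumber: position.lineNumber,
      endColumn: position.column,
    });

    const response = await fetch('http://localhost:3002/api/complete', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        context: { before, after: '', language: 'typescript' },
        prefix: before,
        language: 'typescript',
      }),
    });
    const { completion } = await response.json();

    // Returned text is shown as ghost text; Tab accepts it.
    return { items: completion ? [{ insertText: completion }] : [] };
  },
  freeInlineCompletions() {
    // Nothing to release in this simplified sketch.
  },
});
```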
Health check endpoint.
Response:
{
  "status": "ok",
  "timestamp": "2024-01-01T12:00:00.000Z",
  "aiService": "connected",
  "model": "claude-4-sonnet-20250514"
}
| Variable | Description | Default |
|---|---|---|
| `ANTHROPIC_API_KEY` | Anthropic API key for Claude | Required |
| `OPENAI_API_KEY` | OpenAI API key (optional) | Optional |
| `PORT` | AI server port | `3002` |
| `CORS_ORIGIN` | Allowed CORS origin | `http://localhost:5173` |
| `DEFAULT_MODEL` | Default AI model | `claude-4-sonnet-20250514` |
| `MAX_TOKENS` | Max tokens for AI response | `2000` |
| `TEMPERATURE` | AI creativity (0-1) | `0.3` |
| `MAX_REQUESTS_PER_MINUTE` | Rate limit per IP | `20` |
- Runs on port `3001`
- WebSocket endpoint: `ws://localhost:3001`
- Spawns TypeScript Language Server process
- Development port: `5173`
- Configuration centralized in `constants/index.ts` (see the sketch after this list):
  - LSP WebSocket URL: `ws://localhost:3001`
  - AI Server URL: `http://localhost:3002`
  - Editor options and default content
  - AI confidence thresholds
  - Completion debounce delays
- Clean architecture with:
  - Functional AI agent service
  - AI completions provider
  - Shared types in `types/`
  - Barrel exports for cleaner imports
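As a rough illustration of the centralized configuration, `constants/index.ts` could export values like the sketch below; the actual names, thresholds, and editor options in the repository may differ.

```typescript
// Illustrative shape of client/src/constants/index.ts; real names may differ.
export const LSP_WEBSOCKET_URL = 'ws://localhost:3001';
export const AI_SERVER_URL = 'http://localhost:3002';

export const EDITOR_OPTIONS = {
  language: 'typescript',
  automaticLayout: true,
  minimap: { enabled: false },
};

export const AI_CONFIDENCE_THRESHOLD = 0.5;        // Hide suggestions below this score
export const COMPLETION_DEBOUNCE_MS = 300;         // Debounce delay documented elsewhere in this README
export const SUGGESTION_CACHE_TTL_MS = 5 * 60 * 1000;
```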
- Anthropic (Recommended):
  - `claude-4-sonnet-20250514` - Latest and most capable
  - `claude-3-opus`, `claude-3-sonnet` - Previous versions
- OpenAI (Optional):
  - `gpt-4o`, `gpt-4`, `gpt-3.5-turbo`
- Context Extraction: Gathers code around errors with 10 lines of context
- Smart Caching: Caches suggestions for 5 minutes to reduce API calls
- Structured Output: Uses Zod schemas for reliable suggestion format
- Confidence Scoring: Each suggestion includes a confidence score (0-1)
- Fallback Handling: Returns empty array if AI fails (no breaking)
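As a sketch, the structured output mentioned above could be validated with Zod schemas shaped like the documented analyze response; the server's actual schema definitions may differ in naming and strictness.

```typescript
import { z } from 'zod';

// Sketch of a schema matching the analyze response documented earlier;
// the ai-server's real Zod schemas may be stricter or named differently.
const FixSuggestionSchema = z.object({
  id: z.string(),
  title: z.string(),
  description: z.string(),
  fix: z.object({
    range: z.object({
      startLine: z.number(),
      startColumn: z.number(),
      endLine: z.number(),
      endColumn: z.number(),
    }),
    text: z.string(),
  }),
  confidence: z.number().min(0).max(1), // Confidence score between 0 and 1
  explanation: z.string(),
});

const AnalyzeResponseSchema = z.object({
  suggestions: z.array(FixSuggestionSchema),
  model: z.string(),
  processingTime: z.number(),
});

export type AnalyzeResponse = z.infer<typeof AnalyzeResponseSchema>;
```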
- Trigger Detection: Smart patterns detect when to show completions
- Context Building: Extracts ~20 lines before and 5 after cursor
- Debouncing: Waits 300ms after typing stops before requesting
- Fast Response: Optimized prompts for < 500ms latency
- Multi-line Support: Detects functions, classes for longer completions
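A simplified sketch of the trigger-plus-debounce behaviour described above: the 300ms delay matches the documentation, while the trigger pattern and function names are purely illustrative.

```typescript
// Simplified debounce/trigger sketch; aiCompletions.ts implements this with
// richer context extraction and caching.
const COMPLETION_DEBOUNCE_MS = 300;
let debounceTimer: ReturnType<typeof setTimeout> | undefined;

// Illustrative trigger patterns: only request completions after characters
// that typically precede new code.
const TRIGGER_PATTERN = /[({.,:=\s]$/;

function scheduleCompletion(textBeforeCursor: string, request: () => void): void {
  if (debounceTimer !== undefined) clearTimeout(debounceTimer);
  if (!TRIGGER_PATTERN.test(textBeforeCursor)) return;

  debounceTimer = setTimeout(() => {
    request(); // e.g. call the completion endpoint shown earlier
  }, COMPLETION_DEBOUNCE_MS);
}
```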
# Client development
cd client
npm run dev # Start dev server
npm run build # Build for production
npm run preview # Preview production build
# Bridge Server
cd bridge-server
npm run dev # Start with nodemon
npm start # Start production
# AI Server
cd ai-server
npm run dev # Start with hot reload
npm run build # Compile TypeScript
npm run typecheck # Type checking
npm start # Start production
- Open the editor at `http://localhost:5173`
- Type some TypeScript code with errors
- Watch the Activity Log for LSP messages
- Click on "AI Fixes" tab to see suggestions
- Click "Apply" to fix errors automatically
- Start typing to see inline completions (ghost text)
- Press Tab to accept completions
- React 18 - UI framework with functional components
- Monaco Editor - VSCode's code editor
- @codingame/monaco-vscode-api - VSCode service integration
- TypeScript - Full type safety
- Vite - Fast build tool with HMR
- Tailwind CSS - Utility-first styling
- Architecture:
- Functional programming approach
- Event-driven logging system
- Observer pattern for state updates
- Centralized configuration
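The observer pattern mentioned above can be sketched as a small subscription store; this is a generic illustration, not the exact `aiAgent.ts` API.

```typescript
// Generic subscription sketch of how a panel could observe the AI agent;
// the real aiAgent.ts interface may differ.
type Listener<T> = (value: T) => void;

function createStore<T>(initial: T) {
  let state = initial;
  const listeners = new Set<Listener<T>>();

  return {
    get: () => state,
    set(next: T) {
      state = next;
      listeners.forEach((listener) => listener(state)); // Notify subscribers
    },
    subscribe(listener: Listener<T>) {
      listeners.add(listener);
      return () => listeners.delete(listener); // Unsubscribe handle
    },
  };
}

// Usage: the agent pushes suggestions, the panel re-renders on changes.
const suggestionsStore = createStore<string[]>([]);
const unsubscribe = suggestionsStore.subscribe((s) => console.log('suggestions:', s));
suggestionsStore.set(['Convert to string']);
unsubscribe();
```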
- Node.js - Runtime
- ws - WebSocket library
- TypeScript Language Server - LSP implementation
- vscode-languageserver-protocol - Protocol types
- vscode-ws-jsonrpc - JSON-RPC over WebSocket
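For orientation, the bridge's job can be sketched with `ws` and `child_process` alone, assuming standard LSP `Content-Length` framing over stdio. The actual bridge-server relies on `vscode-ws-jsonrpc` and `vscode-languageserver-protocol`, which handle framing, validation, and lifecycle far more robustly.

```typescript
// Minimal sketch of the WebSocket <-> stdio bridge; not the real implementation.
import { WebSocketServer } from 'ws';
import { spawn } from 'child_process';

const wss = new WebSocketServer({ port: 3001 });

wss.on('connection', (socket) => {
  // One language server per editor connection.
  const lsp = spawn('typescript-language-server', ['--stdio']);

  // Browser -> LSP: wrap each JSON message in LSP's Content-Length framing.
  socket.on('message', (data) => {
    const json = data.toString();
    lsp.stdin.write(`Content-Length: ${Buffer.byteLength(json)}\r\n\r\n${json}`);
  });

  // LSP -> browser: strip the framing and forward the JSON payloads.
  let buffer = Buffer.alloc(0);
  lsp.stdout.on('data', (chunk) => {
    buffer = Buffer.concat([buffer, chunk]);
    while (true) {
      const headerEnd = buffer.indexOf('\r\n\r\n');
      if (headerEnd === -1) break;
      const header = buffer.slice(0, headerEnd).toString();
      const length = Number(/Content-Length: (\d+)/.exec(header)?.[1] ?? 0);
      const bodyStart = headerEnd + 4;
      if (buffer.length < bodyStart + length) break;
      socket.send(buffer.slice(bodyStart, bodyStart + length).toString());
      buffer = buffer.slice(bodyStart + length);
    }
  });

  socket.on('close', () => lsp.kill());
});
```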
- Express.js - HTTP framework
- @anthropic-ai/sdk - Official Anthropic SDK
- Zod - Runtime type validation
- TypeScript - Type safety
- In-memory cache - Performance optimization
- express-rate-limit - Rate limiting
- API keys stored in environment variables
- CORS configured for local development
- Rate limiting prevents abuse
- Request size limited to 1MB
- Input validation with Zod schemas
- Caching: 5-minute cache for AI suggestions, 30s for completions
- Debouncing: Editor changes debounced before analysis (300ms for completions)
- Selective Analysis: Only analyzes code with diagnostics
- Context Limiting: Sends only relevant code context
- Connection Pooling: Reuses WebSocket connections
- Completion Optimization: Lower temperature, fewer tokens for speed
- Smart Triggers: Only shows completions after relevant patterns
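The caching strategy above can be sketched as a small in-memory TTL map. The TTL values follow the documentation; the actual server cache may be implemented differently.

```typescript
// Simple in-memory TTL cache sketch; the ai-server's cache may differ.
interface CacheEntry<T> {
  value: T;
  expiresAt: number;
}

function createTtlCache<T>(ttlMs: number) {
  const entries = new Map<string, CacheEntry<T>>();

  return {
    get(key: string): T | undefined {
      const entry = entries.get(key);
      if (!entry) return undefined;
      if (Date.now() > entry.expiresAt) {
        entries.delete(key); // Expired: evict lazily on read
        return undefined;
      }
      return entry.value;
    },
    set(key: string, value: T) {
      entries.set(key, { value, expiresAt: Date.now() + ttlMs });
    },
  };
}

// 5 minutes for fix suggestions, 30 seconds for completions (per the list above).
const suggestionCache = createTtlCache<unknown>(5 * 60 * 1000);
const completionCache = createTtlCache<string>(30 * 1000);
```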
- "Cannot connect to LSP server"
  - Ensure Bridge server is running on port 3001
  - Check WebSocket URL in client config
- "AI analysis failed"
  - Verify API key is set correctly
  - Check AI server logs for errors
  - Ensure model name is correct
- "No fix suggestions appearing"
  - Check browser console for errors
  - Verify AI server is running on port 3002
  - Look at Activity Log for error messages
# Build all components
cd client && npm run build
cd ../bridge-server && npm run build
cd ../ai-server && npm run build
- Set production environment variables
- Configure CORS for production domain
- Set up reverse proxy for WebSocket
- Enable HTTPS for security
- ARCHITECTURE.md - Complete system architecture
- API_REFERENCE.md - Detailed API documentation
- ai-server/ARCHITECTURE.md - AI server specifics
- Inline JSDoc comments throughout the codebase
- AI Agent (`services/aiAgent.ts`) - Functional AI integration for error fixes
- AI Completions (`services/aiCompletions.ts`) - Inline completion provider
- Logger (`utils/logger.ts`) - Event-driven logging system
- LSP Setup (`lsp/directLSPSetup.ts`) - Manual LSP implementation
- LSP Monitor (`services/lspMonitor.ts`) - Connection health tracking
- Types (`types/index.ts`) - Shared TypeScript interfaces
- Constants (`constants/index.ts`) - Centralized configuration
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
MIT License - see LICENSE file for details