Monaco LSP with AI-Powered Code Analysis

A modern web-based code editor that combines Monaco Editor with TypeScript Language Server Protocol (LSP) support, AI-powered error analysis, and inline code completions. The system provides real-time TypeScript diagnostics with intelligent fix suggestions and GitHub Copilot-style completions powered by Claude AI.

🏗️ Architecture Overview

The project consists of three main components:

  1. Client - React-based Monaco editor with LSP integration
  2. Bridge Server - WebSocket bridge to TypeScript Language Server
  3. AI Server - Express server providing AI-powered code analysis
Monaco Editor (Browser) <--WebSocket--> Bridge Server <--stdio--> TypeScript LSP
         |
         v
    AI Server (Port 3002) <--HTTP--> Anthropic Claude API

✨ Features

  • 📝 Monaco Editor with full TypeScript support
  • 🔌 Language Server Protocol integration for real-time diagnostics
  • 🤖 AI-Powered Fixes using Claude 4 Sonnet for intelligent code suggestions
  • Inline AI Completions - GitHub Copilot-style code completions as you type
  • 📊 Activity Logging with tabbed interface for LSP and AI activity
  • 🔧 Auto-fix Integration - Apply AI suggestions directly in the editor
  • 💾 Smart Caching for improved performance
  • 🔒 Rate Limiting to prevent API abuse
  • 🎨 Clean UI with tabbed panels for AI fixes and logs
  • Fast Completions - Optimized for < 500ms response time

🚀 Quick Start

Prerequisites

  • Node.js 18+
  • npm or yarn
  • Anthropic API key (for AI features)
  • TypeScript Language Server:
    npm install -g typescript-language-server typescript

Installation

  1. Clone the repository:

    git clone <repository-url>
    cd monaco-lsp
  2. Install dependencies for all components:

    # Install root dependencies
    npm install
    
    # Install client dependencies
    cd client && npm install
    
    # Install bridge server dependencies
    cd ../bridge-server && npm install
    
    # Install AI server dependencies
    cd ../ai-server && npm install
  3. Configure the AI server:

    cd ai-server
    cp .env.example .env

    Edit .env and add your Anthropic API key:

    ANTHROPIC_API_KEY=your-anthropic-api-key
    DEFAULT_MODEL=claude-4-sonnet-20250514
  4. Start all services:

    In separate terminals:

    # Terminal 1: Start Bridge server (port 3001)
    cd bridge-server
    npm start
    
    # Terminal 2: Start AI server (port 3002)
    cd ai-server
    npm run dev
    
    # Terminal 3: Start client (port 5173)
    cd client
    npm run dev
  5. Open the application: Navigate to http://localhost:5173

📁 Project Structure

monaco-lsp/
├── client/              # React-based Monaco editor
│   ├── src/
│   │   ├── components/  # UI components
│   │   │   ├── AIFixPanel.tsx      # AI-powered fix suggestions
│   │   │   ├── LogPanel.tsx        # Real-time activity logs
│   │   │   ├── MonacoVSCodeEditor.tsx  # Monaco editor wrapper
│   │   │   ├── TabbedPanel.tsx     # Tab container
│   │   │   └── index.ts            # Barrel exports
│   │   ├── services/    # Business logic
│   │   │   ├── aiAgent.ts          # AI agent (functional)
│   │   │   ├── aiCompletions.ts    # Inline completions
│   │   │   ├── lspMonitor.ts       # LSP health monitoring
│   │   │   └── index.ts            # Service exports
│   │   ├── utils/       # Utilities
│   │   │   ├── logger.ts           # Event-driven logger
│   │   │   └── index.ts            # Utility exports
│   │   ├── lsp/         # LSP integration
│   │   │   └── directLSPSetup.ts   # Manual LSP implementation
│   │   ├── types/       # Shared TypeScript types
│   │   │   └── index.ts            # Type definitions
│   │   ├── constants/   # Configuration
│   │   │   └── index.ts            # API endpoints, config
│   │   ├── hooks/       # Custom React hooks (reserved)
│   │   ├── App.tsx      # Main application
│   │   └── main.tsx     # Entry point
│   └── package.json
│
├── bridge-server/       # WebSocket LSP bridge
│   ├── src/
│   │   └── index.ts     # WebSocket to LSP translation
│   └── package.json
│
└── ai-server/           # AI analysis server
    ├── src/
    │   ├── routes/      # API endpoints
    │   ├── services/    # AI and code analysis
    │   ├── types/       # TypeScript definitions
    │   └── prompts/     # AI prompt templates
    └── package.json

🔄 Data Flow

  1. User types code in Monaco Editor
  2. Editor sends LSP requests via WebSocket to Bridge server (port 3001)
  3. Bridge server forwards messages to TypeScript Language Server
  4. LSP sends diagnostics back through Bridge to Monaco
  5. directLSPSetup.ts converts diagnostics to Monaco markers
  6. AI Agent processes diagnostics automatically:
    • Sends errors to AI server (port 3002)
    • Falls back to local fix patterns if the AI server is unavailable
    • Caches suggestions and notifies subscribers
  7. AIFixPanel updates via subscription pattern
  8. User applies fixes with one click, updating the editor directly
  9. Inline completions trigger as you type:
    • Debounced requests to AI server
    • Smart context extraction
    • Ghost text appears with Tab to accept
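
The subscription flow in steps 6-7 can be pictured with a small observer helper. Below is a minimal sketch, assuming hypothetical names (FixSuggestion, subscribe, and publishSuggestions are illustrative, not the repo's actual exports):

// Hypothetical sketch of the subscription pattern used to push AI fixes to the UI.
type FixSuggestion = { id: string; title: string; confidence: number };
type Listener = (suggestions: FixSuggestion[]) => void;

const listeners = new Set<Listener>();
let cached: FixSuggestion[] = [];

export function subscribe(listener: Listener): () => void {
  listeners.add(listener);
  listener(cached);                            // replay the latest suggestions immediately
  return () => { listeners.delete(listener); } // unsubscribe handle for React cleanup
}

export function publishSuggestions(suggestions: FixSuggestion[]): void {
  cached = suggestions;                        // cache for late subscribers
  listeners.forEach((l) => l(suggestions));
}

In this picture, AIFixPanel would call subscribe from a React useEffect and re-render whenever new suggestions are published.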

📡 AI Server API

POST /api/analyze-errors

Analyzes TypeScript code and returns AI-generated fix suggestions.

Request Body:

{
  "code": "const x: string = 123;",
  "diagnostics": [{
    "range": {
      "start": { "line": 0, "character": 18 },
      "end": { "line": 0, "character": 21 }
    },
    "severity": 1,
    "message": "Type 'number' is not assignable to type 'string'."
  }],
  "language": "typescript"
}

Response:

{
  "suggestions": [{
    "id": "1234567890-0",
    "title": "Convert to string",
    "description": "Convert the number to a string using toString()",
    "fix": {
      "range": {
        "startLine": 0,
        "startColumn": 18,
        "endLine": 0,
        "endColumn": 21
      },
      "text": "123.toString()"
    },
    "confidence": 0.9,
    "explanation": "This converts the number to a string to match the expected type"
  }],
  "model": "claude-4-sonnet-20250514",
  "processingTime": 1234
}
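
For reference, a call to this endpoint from the client could look like the following sketch (using fetch; the URL and field names follow the payloads above):

// Sketch of a client-side call; payload shape matches the request body documented above.
async function analyzeErrors(code: string, diagnostics: unknown[]) {
  const res = await fetch('http://localhost:3002/api/analyze-errors', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ code, diagnostics, language: 'typescript' }),
  });
  if (!res.ok) return [];                   // mirror the server's graceful fallback
  const { suggestions } = await res.json();
  return suggestions;                       // array of { id, title, fix, confidence, ... }
}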

POST /api/complete

Generates inline code completions.

Request Body:

{
  "context": {
    "before": "function calculateTotal(",
    "after": "\n  // implementation\n}",
    "language": "typescript"
  },
  "prefix": "function calculateTotal(",
  "language": "typescript"
}

Response:

{
  "completion": "items: Item[]): number {",
  "model": "claude-4-sonnet-20250514",
  "processingTime": 234
}
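
As with error analysis, the completion endpoint can be exercised with a plain fetch call. A sketch follows; the context fields mirror the request body above, and prefix is assumed here to be the current line:

// Sketch of a completion request; fields mirror the documented request body.
async function fetchCompletion(before: string, after: string): Promise<string> {
  const res = await fetch('http://localhost:3002/api/complete', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      context: { before, after, language: 'typescript' },
      prefix: before.split('\n').pop() ?? '',
      language: 'typescript',
    }),
  });
  const { completion } = await res.json();
  return completion ?? '';   // shown as ghost text by the editor
}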

GET /api/health

Health check endpoint.

Response:

{
  "status": "ok",
  "timestamp": "2024-01-01T12:00:00.000Z",
  "aiService": "connected",
  "model": "claude-4-sonnet-20250514"
}

⚙️ Configuration

AI Server Environment Variables

Variable                 Description                    Default
ANTHROPIC_API_KEY        Anthropic API key for Claude   Required
OPENAI_API_KEY           OpenAI API key (optional)      Optional
PORT                     AI server port                 3002
CORS_ORIGIN              Allowed CORS origin            http://localhost:5173
DEFAULT_MODEL            Default AI model               claude-4-sonnet-20250514
MAX_TOKENS               Max tokens for AI response     2000
TEMPERATURE              AI creativity (0-1)            0.3
MAX_REQUESTS_PER_MINUTE  Rate limit per IP              20
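
Putting those together, a complete .env for local development might look like this (only the API key is required; the other values shown are the defaults from the table above):

ANTHROPIC_API_KEY=your-anthropic-api-key
# OPENAI_API_KEY=your-openai-api-key    # optional, only if using OpenAI models
PORT=3002
CORS_ORIGIN=http://localhost:5173
DEFAULT_MODEL=claude-4-sonnet-20250514
MAX_TOKENS=2000
TEMPERATURE=0.3
MAX_REQUESTS_PER_MINUTE=20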

Bridge Server Configuration

  • Runs on port 3001
  • WebSocket endpoint: ws://localhost:3001
  • Spawns TypeScript Language Server process
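
Conceptually, the bridge accepts a WebSocket connection, spawns typescript-language-server --stdio, and relays JSON-RPC messages between the two. The sketch below illustrates the idea with hand-rolled Content-Length framing; the real bridge-server relies on vscode-ws-jsonrpc (listed in the stack below) to handle framing and byte-accurate lengths robustly:

// Simplified sketch of a WebSocket-to-LSP bridge; not the repo's actual implementation.
import { WebSocketServer } from 'ws';
import { spawn } from 'node:child_process';

const wss = new WebSocketServer({ port: 3001 });

wss.on('connection', (socket) => {
  // One language server process per editor connection (simplification for this sketch).
  const lsp = spawn('typescript-language-server', ['--stdio']);

  // Editor -> LSP: wrap each JSON-RPC message in the LSP Content-Length header.
  socket.on('message', (data) => {
    const body = data.toString();
    lsp.stdin?.write(`Content-Length: ${Buffer.byteLength(body)}\r\n\r\n${body}`);
  });

  // LSP -> editor: strip the header and forward the JSON payload (assumes ASCII payloads).
  let buffered = '';
  lsp.stdout?.on('data', (chunk) => {
    buffered += chunk.toString();
    let header: RegExpMatchArray | null;
    while ((header = buffered.match(/Content-Length: (\d+)\r\n\r\n/))) {
      const start = (header.index ?? 0) + header[0].length;
      const length = Number(header[1]);
      if (buffered.length < start + length) break;   // wait for the rest of the message
      socket.send(buffered.slice(start, start + length));
      buffered = buffered.slice(start + length);
    }
  });

  socket.on('close', () => lsp.kill());
});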

Client Configuration

  • Development port: 5173
  • Configuration centralized in constants/index.ts:
    • LSP WebSocket URL: ws://localhost:3001
    • AI Server URL: http://localhost:3002
    • Editor options and default content
    • AI confidence thresholds
    • Completion debounce delays
  • Clean architecture with:
    • Functional AI agent service
    • AI completions provider
    • Shared types in types/
    • Barrel exports for cleaner imports
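
The shape of constants/index.ts is roughly as follows (an illustrative sketch; the actual constant names and values may differ):

// Illustrative sketch of client/src/constants/index.ts; names and values are assumptions.
export const LSP_WEBSOCKET_URL = 'ws://localhost:3001';
export const AI_SERVER_URL = 'http://localhost:3002';

// Timing and thresholds referenced elsewhere in this README.
export const COMPLETION_DEBOUNCE_MS = 300;  // wait after typing stops before requesting
export const MIN_FIX_CONFIDENCE = 0.5;      // hypothetical cutoff for showing AI fixes

// Editor defaults.
export const DEFAULT_EDITOR_CONTENT = 'const greeting: string = "Hello, world";\n';
export const EDITOR_OPTIONS = {
  language: 'typescript',
  automaticLayout: true,
  minimap: { enabled: false },
};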

🤖 AI Integration

Supported Models

  • Anthropic (Recommended):

    • claude-4-sonnet-20250514 - Latest and most capable
    • claude-3-opus, claude-3-sonnet - Previous versions
  • OpenAI (Optional):

    • gpt-4o, gpt-4, gpt-3.5-turbo

How AI Analysis Works

  1. Context Extraction: Gathers code around errors with 10 lines of context
  2. Smart Caching: Caches suggestions for 5 minutes to reduce API calls
  3. Structured Output: Uses Zod schemas for reliable suggestion format
  4. Confidence Scoring: Each suggestion includes a confidence score (0-1)
  5. Fallback Handling: Returns an empty array if the AI call fails, so the editor keeps working
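
The structured-output step can be pictured as a Zod schema along these lines (illustrative only; the real schema lives in the ai-server source):

import { z } from 'zod';

// Illustrative schema matching the suggestion shape shown in the API section above.
const FixRangeSchema = z.object({
  startLine: z.number(),
  startColumn: z.number(),
  endLine: z.number(),
  endColumn: z.number(),
});

export const SuggestionSchema = z.object({
  id: z.string(),
  title: z.string(),
  description: z.string(),
  fix: z.object({ range: FixRangeSchema, text: z.string() }),
  confidence: z.number().min(0).max(1),
  explanation: z.string(),
});

export const SuggestionsSchema = z.array(SuggestionSchema);

// Validating the model output with .safeParse() gives the graceful fallback described above:
// if validation fails, the server can return an empty suggestions array instead of erroring.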

How AI Completions Work

  1. Trigger Detection: Smart patterns detect when to show completions
  2. Context Building: Extracts ~20 lines before and 5 after cursor
  3. Debouncing: Waits 300ms after typing stops before requesting
  4. Fast Response: Optimized prompts for < 500ms latency
  5. Multi-line Support: Detects functions and classes to allow longer, multi-line completions
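
A rough sketch of the trigger-plus-debounce flow described above (function names and trigger patterns are illustrative, and fetchCompletion stands in for the /api/complete call shown earlier):

// Illustrative trigger detection and 300ms debounce for inline completions.
const TRIGGER_PATTERNS = [/[.(=,{[\s]$/, /=>\s*$/]; // hypothetical "worth completing here" patterns

let timer: ReturnType<typeof setTimeout> | undefined;

export function onUserTyped(
  textBeforeCursor: string,
  fetchCompletion: () => Promise<string>,
  showGhostText: (text: string) => void,
): void {
  // Trigger detection: skip positions where a completion is unlikely to help.
  if (!TRIGGER_PATTERNS.some((p) => p.test(textBeforeCursor))) return;

  // Debounce: wait 300ms after the last keystroke before hitting the AI server.
  if (timer) clearTimeout(timer);
  timer = setTimeout(async () => {
    const completion = await fetchCompletion();
    if (completion) showGhostText(completion); // Tab accepts it in the editor
  }, 300);
}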

🛠️ Development

Available Scripts

# Client development
cd client
npm run dev          # Start dev server
npm run build        # Build for production
npm run preview      # Preview production build

# Bridge Server
cd bridge-server
npm run dev          # Start with nodemon
npm start            # Start production

# AI Server
cd ai-server
npm run dev          # Start with hot reload
npm run build        # Compile TypeScript
npm run typecheck    # Type checking
npm start            # Start production

Testing the System

  1. Open the editor at http://localhost:5173
  2. Type some TypeScript code with errors
  3. Watch the Activity Log for LSP messages
  4. Click on "AI Fixes" tab to see suggestions
  5. Click "Apply" to fix errors automatically
  6. Start typing to see inline completions (ghost text)
  7. Press Tab to accept completions

🏗️ Technical Stack

Client

  • React 18 - UI framework with functional components
  • Monaco Editor - VSCode's code editor
  • @codingame/monaco-vscode-api - VSCode service integration
  • TypeScript - Full type safety
  • Vite - Fast build tool with HMR
  • Tailwind CSS - Utility-first styling
  • Architecture:
    • Functional programming approach
    • Event-driven logging system
    • Observer pattern for state updates
    • Centralized configuration

Bridge Server

  • Node.js - Runtime
  • ws - WebSocket library
  • TypeScript Language Server - LSP implementation
  • vscode-languageserver-protocol - Protocol types
  • vscode-ws-jsonrpc - JSON-RPC over WebSocket

AI Server

  • Express.js - HTTP framework
  • @anthropic-ai/sdk - Official Anthropic SDK
  • Zod - Runtime type validation
  • TypeScript - Type safety
  • In-memory cache - Performance optimization
  • express-rate-limit - Rate limiting

🔒 Security Considerations

  • API keys stored in environment variables
  • CORS configured for local development
  • Rate limiting prevents abuse
  • Request size limited to 1MB
  • Input validation with Zod schemas
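
Those safeguards map to a few lines of Express middleware. A sketch of how they might be wired, assuming a cors middleware package (an assumption; only Express and express-rate-limit are listed in the stack above):

// Sketch of the security middleware; the actual wiring lives in the ai-server source.
import express from 'express';
import cors from 'cors';
import rateLimit from 'express-rate-limit';

const app = express();

// CORS restricted to the configured origin (http://localhost:5173 in development).
app.use(cors({ origin: process.env.CORS_ORIGIN ?? 'http://localhost:5173' }));

// Request bodies capped at 1MB.
app.use(express.json({ limit: '1mb' }));

// Per-IP rate limit: MAX_REQUESTS_PER_MINUTE requests per minute (default 20).
app.use(rateLimit({
  windowMs: 60_000,
  max: Number(process.env.MAX_REQUESTS_PER_MINUTE ?? 20),
}));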

📈 Performance Optimizations

  1. Caching: 5-minute cache for AI suggestions, 30s for completions
  2. Debouncing: Editor changes debounced before analysis (300ms for completions)
  3. Selective Analysis: Only analyzes code with diagnostics
  4. Context Limiting: Sends only relevant code context
  5. Connection Pooling: Reuses WebSocket connections
  6. Completion Optimization: Lower temperature, fewer tokens for speed
  7. Smart Triggers: Only shows completions after relevant patterns
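
The in-memory caching in item 1 is conceptually a Map with a TTL check. A minimal sketch (not the repo's actual implementation):

// Minimal TTL cache sketch; key and value types are illustrative.
type Entry<T> = { value: T; expiresAt: number };

export function createCache<T>(ttlMs: number) {
  const store = new Map<string, Entry<T>>();
  return {
    get(key: string): T | undefined {
      const entry = store.get(key);
      if (!entry) return undefined;
      if (Date.now() > entry.expiresAt) {   // expired: drop the entry and report a miss
        store.delete(key);
        return undefined;
      }
      return entry.value;
    },
    set(key: string, value: T): void {
      store.set(key, { value, expiresAt: Date.now() + ttlMs });
    },
  };
}

// 5 minutes for fix suggestions, 30 seconds for completions (per the list above).
const suggestionCache = createCache<unknown>(5 * 60_000);
const completionCache = createCache<string>(30_000);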

🐛 Troubleshooting

Common Issues

  1. "Cannot connect to LSP server"

    • Ensure Bridge server is running on port 3001
    • Check WebSocket URL in client config
  2. "AI analysis failed"

    • Verify API key is set correctly
    • Check AI server logs for errors
    • Ensure model name is correct
  3. "No fix suggestions appearing"

    • Check browser console for errors
    • Verify AI server is running on port 3002
    • Look at Activity Log for error messages

🚢 Deployment

Production Build

# Build all components
cd client && npm run build
cd ../bridge-server && npm run build
cd ../ai-server && npm run build

Environment Setup

  1. Set production environment variables
  2. Configure CORS for production domain
  3. Set up reverse proxy for WebSocket
  4. Enable HTTPS for security

📚 Documentation

Key Client Modules

  • AI Agent (services/aiAgent.ts) - Functional AI integration for error fixes
  • AI Completions (services/aiCompletions.ts) - Inline completion provider
  • Logger (utils/logger.ts) - Event-driven logging system
  • LSP Setup (lsp/directLSPSetup.ts) - Manual LSP implementation
  • LSP Monitor (services/lspMonitor.ts) - Connection health tracking
  • Types (types/index.ts) - Shared TypeScript interfaces
  • Constants (constants/index.ts) - Centralized configuration

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

📄 License

MIT License - see LICENSE file for details
