🧠 NeuroLink


Enterprise AI Development Platform with universal provider support, factory pattern architecture, and access to 100+ AI models through LiteLLM integration. Production-ready with TypeScript support.

NeuroLink is an Enterprise AI Development Platform that unifies 12 major AI providers with intelligent fallback and built-in tool support. Available as both a programmatic SDK and professional CLI tool. Features LiteLLM integration for 100+ models, plus 6 core tools working across all providers. Extracted from production use at Juspay.

🎉 NEW: LiteLLM Integration - Access 100+ AI Models

NeuroLink now supports LiteLLM, providing unified access to 100+ AI models from all major providers through a single interface:

  • 🔄 Universal Access: OpenAI, Anthropic, Google, Mistral, Meta, and more
  • 🎯 Unified Interface: OpenAI-compatible API for all models
  • 💰 Cost Optimization: Automatic routing to cost-effective models
  • ⚡ Load Balancing: Automatic failover and load distribution
  • 📊 Analytics: Built-in usage tracking and monitoring
# Quick start with LiteLLM
pip install litellm && litellm --port 4000

# Use any of 100+ models through one interface
npx @juspay/neurolink generate "Hello" --provider litellm --model "openai/gpt-4o"
npx @juspay/neurolink generate "Hello" --provider litellm --model "anthropic/claude-3-5-sonnet"
npx @juspay/neurolink generate "Hello" --provider litellm --model "google/gemini-2.0-flash"

📖 Complete LiteLLM Integration Guide - Setup, configuration, and 100+ model access

🎉 NEW: SageMaker Integration - Deploy Your Custom AI Models

NeuroLink now supports Amazon SageMaker, enabling you to deploy and use your own custom trained models through NeuroLink's unified interface:

  • πŸ—οΈ Custom Model Hosting - Deploy your fine-tuned models on AWS infrastructure
  • πŸ’° Cost Control - Pay only for inference usage with auto-scaling capabilities
  • πŸ”’ Enterprise Security - Full control over model infrastructure and data privacy
  • ⚑ Performance - Dedicated compute resources with predictable latency
  • πŸ“Š Monitoring - Built-in CloudWatch metrics and logging
# Quick start with SageMaker
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export SAGEMAKER_DEFAULT_ENDPOINT="your-endpoint-name"

# Use your custom deployed models
npx @juspay/neurolink generate "Analyze this data" --provider sagemaker
npx @juspay/neurolink sagemaker status  # Check endpoint health
npx @juspay/neurolink sagemaker benchmark my-endpoint  # Performance testing

📖 Complete SageMaker Integration Guide - Setup, deployment, and custom model access

🚀 Enterprise Platform Features

  • 🏭 Factory Pattern Architecture - Unified provider management through BaseProvider inheritance
  • 🔧 Tools-First Design - All providers include built-in tool support without additional configuration
  • 🔗 LiteLLM Integration - 100+ models from all major providers through a unified interface
  • 🏢 Enterprise Proxy Support - Comprehensive corporate proxy support with MCP compatibility
  • 🏗️ Enterprise Architecture - Production-ready with clean abstractions
  • 🔄 Configuration Management - Flexible provider configuration with automatic backups
  • ✅ Type Safety - Industry-standard TypeScript interfaces
  • ⚡ Performance - Fast response times with streaming support and 68% faster status checks
  • 🛡️ Error Recovery - Graceful failures with provider fallback and retry logic
  • 📊 Analytics & Evaluation - Built-in usage tracking and AI-powered quality assessment
  • 🎯 Real-time Event Monitoring - EventEmitter integration for progress tracking and debugging (see the sketch after this list)
  • 🔧 External MCP Integration - Model Context Protocol with 6 built-in tools plus full external MCP server support
  • 🚀 Lighthouse Integration - Unified tool registration API supporting both object and array formats for seamless Lighthouse tool import
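
For event monitoring, a minimal sketch is shown below. It assumes NeuroLink exposes Node's standard EventEmitter API as the list above describes; the event name "generation:progress" is a hypothetical placeholder - check the documentation for the events your version actually emits.

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Subscribe before generating so no events are missed
// (the event name below is a hypothetical placeholder)
neurolink.on("generation:progress", (event) => {
  console.log("progress event:", event);
});

const result = await neurolink.generate({
  input: { text: "Summarize the quarterly report" },
});
console.log(result.content);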

🚀 Quick Start

Install & Run (2 minutes)

# Option 1: LiteLLM - Access 100+ models through one interface
pip install litellm && litellm --port 4000
export LITELLM_BASE_URL="http://localhost:4000"
export LITELLM_API_KEY="sk-anything"

# Use any of 100+ models
npx @juspay/neurolink generate "Hello, AI" --provider litellm --model "openai/gpt-4o"
npx @juspay/neurolink generate "Hello, AI" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Option 2: OpenAI Compatible - Use any OpenAI-compatible endpoint with auto-discovery
export OPENAI_COMPATIBLE_BASE_URL="https://api.openrouter.ai/api/v1"
export OPENAI_COMPATIBLE_API_KEY="sk-or-v1-your-api-key"
# Auto-discovers available models via /v1/models endpoint
npx @juspay/neurolink generate "Hello, AI" --provider openai-compatible

# Or specify a model explicitly
export OPENAI_COMPATIBLE_MODEL="claude-3-5-sonnet"
npx @juspay/neurolink generate "Hello, AI" --provider openai-compatible

# Option 3: Direct Provider - Quick setup with Google AI Studio (free tier)
export GOOGLE_AI_API_KEY="AIza-your-google-ai-api-key"
npx @juspay/neurolink generate "Hello, AI" --provider google-ai

# Option 4: Amazon SageMaker - Use your custom deployed models
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export SAGEMAKER_DEFAULT_ENDPOINT="your-endpoint-name"
npx @juspay/neurolink generate "Hello, AI" --provider sagemaker

# CLI Commands - No installation required
npx @juspay/neurolink generate "Explain AI"  # Auto-selects best provider
npx @juspay/neurolink gen "Write code"       # Shortest form
npx @juspay/neurolink stream "Tell a story" # Real-time streaming
npx @juspay/neurolink status                # Check all providers

# SDK installation for use in your TypeScript projects
npm install @juspay/neurolink

# 🆕 NEW: External MCP Server Integration Quick Test
node -e "
const { NeuroLink } = require('@juspay/neurolink');
(async () => {
  const neurolink = new NeuroLink();

  // Add external filesystem MCP server
  await neurolink.addExternalMCPServer('filesystem', {
    command: 'npx',
    args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
    transport: 'stdio'
  });

  // External tools automatically available in generate()
  const result = await neurolink.generate({
    input: { text: 'List files in the current directory' }
  });
  console.log('🎉 External MCP integration working!');
  console.log(result.content);
})();
"

Basic Usage

import { NeuroLink } from "@juspay/neurolink";

// Auto-select best available provider
const neurolink = new NeuroLink();
const autoResult = await neurolink.generate({
  input: { text: "Write a business email" },
  provider: "google-ai", // or let it auto-select
  timeout: "30s",
});

console.log(autoResult.content);
console.log(`Used: ${autoResult.provider}`);

Conversation Memory

NeuroLink supports automatic conversation history management that maintains context across multiple turns within sessions. This enables AI to remember previous interactions and provide contextually aware responses. Session-based memory isolation ensures privacy between different conversations.

// Enable conversation memory with configurable limits
const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    maxSessions: 50, // Keep last 50 sessions
    maxTurnsPerSession: 20, // Keep last 20 turns per session
  },
});
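
With memory enabled, reusing the same session identifier across calls keeps context between turns. A minimal sketch continuing the example above, assuming sessions are keyed through the context object (the sessionId field name is an assumption, not confirmed API):

// Hypothetical multi-turn usage - the sessionId field is illustrative only
const first = await neurolink.generate({
  input: { text: "My name is Priya." },
  context: { sessionId: "support-chat-42" },
});

const second = await neurolink.generate({
  input: { text: "What is my name?" },
  context: { sessionId: "support-chat-42" }, // same session, so prior turns are remembered
});
console.log(second.content);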

🔗 CLI-SDK Consistency (NEW! ✨)

Method aliases that match CLI command names:

import { createBestAIProvider } from "@juspay/neurolink";

// Use whichever style you prefer:
const provider = createBestAIProvider();

// The following methods are equivalent:
const result1 = await provider.generate({ input: { text: "Hello" } }); // Matches CLI 'generate'
const result2 = await provider.gen({ input: { text: "Hello" } }); // Matches CLI 'gen'

// Detailed method name
const story = await provider.generate({
  input: { text: "Write a short story about AI" },
  maxTokens: 200,
});

// CLI-style method names
const poem = await provider.generate({ input: { text: "Write a poem" } });
const joke = await provider.gen({ input: { text: "Tell me a joke" } });

Enhanced Features

CLI with Analytics & Evaluation

# Basic AI generation with auto-provider selection
npx @juspay/neurolink generate "Write a business email"

# LiteLLM with specific model
npx @juspay/neurolink generate "Write code" --provider litellm --model "anthropic/claude-3-5-sonnet"

# With analytics and evaluation
npx @juspay/neurolink generate "Write a proposal" --enable-analytics --enable-evaluation --debug

# Streaming with tools (default behavior)
npx @juspay/neurolink stream "What time is it and write a file with the current date"

SDK and Enhancement Features

import { NeuroLink } from "@juspay/neurolink";

// Enhanced generation with analytics
const neurolink = new NeuroLink();
const result = await neurolink.generate({
  input: { text: "Write a business proposal" },
  enableAnalytics: true, // Get usage & cost data
  enableEvaluation: true, // Get AI quality scores
  context: { project: "Q1-sales" },
});

console.log("📊 Usage:", result.analytics);
console.log("⭐ Quality:", result.evaluation);
console.log("Response:", result.content);

Environment Setup

# Create .env file (automatically loaded by CLI)
echo 'OPENAI_API_KEY="sk-your-openai-key"' > .env
echo 'GOOGLE_AI_API_KEY="AIza-your-google-ai-key"' >> .env
echo 'AWS_ACCESS_KEY_ID="your-aws-access-key"' >> .env

# 🆕 NEW: Google Vertex AI for Websearch Tool
echo 'GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"' >> .env
echo 'GOOGLE_VERTEX_PROJECT="your-gcp-project-id"' >> .env
echo 'GOOGLE_VERTEX_LOCATION="us-central1"' >> .env

# Test configuration
npx @juspay/neurolink status

JSON Format Support (Complete)

NeuroLink provides comprehensive JSON input/output support for both CLI and SDK:

# CLI JSON Output - Structured data for scripts
npx @juspay/neurolink generate "Summary of AI trends" --format json
npx @juspay/neurolink gen "Create a user profile" --format json --provider google-ai

# Example JSON Output:
{
  "content": "AI trends include increased automation...",
  "provider": "google-ai",
  "model": "gemini-2.5-flash",
  "usage": {
    "promptTokens": 15,
    "completionTokens": 127,
    "totalTokens": 142
  },
  "responseTime": 1234
}
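
Because the CLI emits plain JSON, scripts can consume it directly. A minimal Node/TypeScript sketch (assuming the CLI writes only the JSON document to stdout; field names follow the example output above):

import { execSync } from "node:child_process";

// Run the CLI with --format json and parse the structured result
const raw = execSync(
  'npx @juspay/neurolink generate "Summary of AI trends" --format json',
  { encoding: "utf8" },
);

const response = JSON.parse(raw);
console.log(response.provider, response.usage.totalTokens);
console.log(response.content);
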
// SDK JSON Input/Output - Full TypeScript support
import { createBestAIProvider } from "@juspay/neurolink";

const provider = createBestAIProvider();

// Structured input
const result = await provider.generate({
  input: { text: "Create a product specification" },
  schema: {
    type: "object",
    properties: {
      name: { type: "string" },
      price: { type: "number" },
      features: { type: "array", items: { type: "string" } },
    },
  },
});

// Access structured response
const productData = JSON.parse(result.content);
console.log(productData.name, productData.price, productData.features);

📖 Complete Setup Guide - All providers with detailed instructions

πŸ” NEW: Websearch Tool with Google Vertex AI Grounding

NeuroLink now includes a powerful websearch tool that uses Google's native search grounding technology for real-time web information:

  • πŸ” Native Google Search - Uses Google's search grounding via Vertex AI
  • 🎯 Real-time Results - Access current web information during AI conversations
  • πŸ”’ Credential Protection - Only activates when Google Vertex AI credentials are properly configured

Quick Setup & Test

# 1. Build the project first
pnpm run build

# 2. Set up environment variables (see detailed setup below)
cp .env.example .env
# Edit .env with your Google Vertex AI credentials

# 3. Test the websearch tool directly
node test-websearch-grounding.js

Complete Google Vertex AI Setup

Configure Environment Variables

# Add to your .env file
GOOGLE_APPLICATION_CREDENTIALS="/absolute/path/to/neurolink-service-account.json"
GOOGLE_VERTEX_PROJECT="YOUR-PROJECT-ID"
GOOGLE_VERTEX_LOCATION="us-central1"

Test the Setup

# Build the project first
pnpm run build

# Run the dedicated test script
node test-websearch-grounding.js

Using the Websearch Tool

CLI Usage (Works with All Providers)

# With specific providers - websearch works across all providers
npx @juspay/neurolink generate "Weather in Tokyo now" --provider vertex

Note: The websearch tool gracefully handles missing credentials - it only activates when valid Google Vertex AI credentials are configured. Without proper credentials, other tools continue to work normally and AI responses fall back to training data.

✨ Key Features

  • 🔗 LiteLLM Integration - Access 100+ AI models from all major providers through a unified interface
  • 🔍 Smart Model Auto-Discovery - OpenAI Compatible provider automatically detects available models via the /v1/models endpoint
  • 🏭 Factory Pattern Architecture - Unified provider management with BaseProvider inheritance
  • 🔧 Tools-First Design - All providers automatically include 7 direct tools (getCurrentTime, readFile, listDirectory, calculateMath, writeFile, searchFiles, websearchGrounding)
  • 🔄 12 AI Providers - OpenAI, Bedrock, Vertex AI, Google AI Studio, Anthropic, Azure, LiteLLM, OpenAI Compatible, Hugging Face, Ollama, Mistral AI, SageMaker
  • 💰 Cost Optimization - Automatic selection of the cheapest suitable model, plus LiteLLM routing
  • ⚡ Automatic Fallback - Intelligent provider switching so requests don't fail when a provider is down
  • 🖥️ CLI + SDK - Use from the command line or integrate programmatically with TypeScript support
  • 🛡️ Production Ready - Enterprise-grade error handling and performance optimization, extracted from production use
  • 🏢 Enterprise Proxy Support - Comprehensive corporate proxy support with zero configuration
  • ✅ External MCP Integration - Model Context Protocol with built-in tools plus full external MCP server support
  • 🔍 Smart Model Resolution - Fuzzy matching, aliases, and capability-based search across all providers
  • 🏠 Local AI Support - Run completely offline with Ollama or through a LiteLLM proxy
  • 🌍 Universal Model Access - Direct providers, plus 100,000+ models via Hugging Face and 100+ models via LiteLLM
  • 🧠 Automatic Context Summarization - Stateful, long-running conversations with automatic history summarization
  • 📊 Analytics & Evaluation - Built-in usage tracking and AI-powered quality assessment

## πŸ› οΈ External MCP Integration Status βœ… **PRODUCTION READY**

| Component              | Status         | Description                                                      |
| ---------------------- | -------------- | ---------------------------------------------------------------- |
| Built-in Tools         | βœ… **Working** | 6 core tools fully functional across all providers               |
| SDK Custom Tools       | βœ… **Working** | Register custom tools programmatically                           |
| **External MCP Tools** | βœ… **Working** | **Full external MCP server support with dynamic tool discovery** |
| Tool Execution         | βœ… **Working** | Real-time AI tool calling with all tool types                    |
| **Streaming Support**  | βœ… **Working** | **External MCP tools work with streaming generation**            |
| **Multi-Provider**     | βœ… **Working** | **External tools work across all AI providers**                  |
| **CLI Integration**    | βœ… **READY**   | **Production-ready with external MCP support**                   |

✅ External MCP Integration Demo

# Test built-in tools (works immediately)
npx @juspay/neurolink generate "What time is it?" --debug

# Discover available MCP servers
npx @juspay/neurolink mcp discover --format table

// 🆕 NEW: External MCP server integration (SDK)
import { NeuroLink } from '@juspay/neurolink';

const neurolink = new NeuroLink();

// Add external MCP server (e.g., Bitbucket)
await neurolink.addExternalMCPServer('bitbucket', {
  command: 'npx',
  args: ['-y', '@nexus2520/bitbucket-mcp-server'],
  transport: 'stdio',
  env: {
    BITBUCKET_USERNAME: process.env.BITBUCKET_USERNAME,
    BITBUCKET_TOKEN: process.env.BITBUCKET_TOKEN,
    BITBUCKET_BASE_URL: 'https://bitbucket.example.com'
  }
});

// Use external MCP tools in generation
const result = await neurolink.generate({
  input: { text: 'Get pull request #123 details from the main repository' },
  disableTools: false // External MCP tools automatically available
});

🔧 SDK Custom Tool Registration (NEW!)

Register your own tools programmatically with the SDK:

import { NeuroLink } from "@juspay/neurolink";
import { z } from "zod";

const neurolink = new NeuroLink();

// Register a simple tool
neurolink.registerTool("weatherLookup", {
  description: "Get current weather for a city",
  parameters: z.object({
    city: z.string().describe("City name"),
    units: z.enum(["celsius", "fahrenheit"]).optional(),
  }),
  execute: async ({ city, units = "celsius" }) => {
    // Your implementation here
    return {
      city,
      temperature: 22,
      units,
      condition: "sunny",
    };
  },
});

// Use it in generation
const result = await neurolink.generate({
  input: { text: "What's the weather in London?" },
  provider: "google-ai",
});

// Register multiple tools - Object format (existing)
neurolink.registerTools({
  stockPrice: {
    description: "Get stock price",
    execute: async () => ({ price: 150.25 }),
  },
  calculator: {
    description: "Calculate math",
    execute: async () => ({ result: 42 }),
  },
});

// Register multiple tools - Array format (Lighthouse compatible)
neurolink.registerTools([
  {
    name: "lighthouseTool1",
    tool: {
      description: "Lighthouse analytics tool",
      parameters: z.object({
        merchantId: z.string(),
        dateRange: z.string().optional(),
      }),
      execute: async ({ merchantId, dateRange }) => {
        // Lighthouse tool implementation with Zod schema
        return { data: "analytics result" };
      },
    },
  },
  {
    name: "lighthouseTool2",
    tool: {
      description: "Payment processing tool",
      execute: async () => ({ status: "processed" }),
    },
  },
]);

💰 Smart Model Selection

NeuroLink features intelligent model selection and cost optimization:

Cost Optimization Features

  • 💰 Automatic Cost Optimization: Selects the cheapest model for simple tasks
  • 🔄 LiteLLM Model Routing: Access 100+ models with automatic load balancing
  • 🔍 Capability-Based Selection: Find models with specific features (vision, function calling)
  • ⚡ Intelligent Fallback: Seamless switching when providers fail
# Cost optimization - automatically use cheapest model
npx @juspay/neurolink generate "Hello" --optimize-cost

# LiteLLM specific model selection
npx @juspay/neurolink generate "Complex analysis" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Auto-select best available provider
npx @juspay/neurolink generate "Write code" # Automatically chooses optimal provider

💻 Essential Examples

CLI Commands

# Text generation with automatic MCP tool detection (default)
npx @juspay/neurolink generate "What time is it?"

# Alternative short form
npx @juspay/neurolink gen "What time is it?"

# Disable tools for training-data-only responses
npx @juspay/neurolink generate "What time is it?" --disable-tools

# With custom timeout for complex prompts
npx @juspay/neurolink generate "Explain quantum computing in detail" --timeout 1m

# Real-time streaming with agent support (default)
npx @juspay/neurolink stream "What time is it?"

# Streaming without tools (traditional mode)
npx @juspay/neurolink stream "Tell me a story" --disable-tools

# Streaming with extended timeout
npx @juspay/neurolink stream "Write a long story" --timeout 5m

# Provider diagnostics
npx @juspay/neurolink status --verbose

# Batch processing
echo -e "Write a haiku\nExplain gravity" > prompts.txt
npx @juspay/neurolink batch prompts.txt --output results.json

# Batch with custom timeout per request
npx @juspay/neurolink batch prompts.txt --timeout 45s --output results.json

SDK Integration

// SvelteKit API route with timeout handling
import type { RequestHandler } from "@sveltejs/kit";
import { createBestAIProvider } from "@juspay/neurolink";

export const POST: RequestHandler = async ({ request }) => {
  const { message } = await request.json();
  const provider = createBestAIProvider();

  try {
    // NEW: Primary streaming method (recommended)
    const result = await provider.stream({
      input: { text: message },
      timeout: "2m", // 2 minutes for streaming
    });

    // Process stream
    for await (const chunk of result.stream) {
      // Handle streaming content
      console.log(chunk.content);
    }

    // LEGACY: Backward compatibility (still works)
    const legacyResult = await provider.stream({
      prompt: message,
      timeout: "2m", // 2 minutes for streaming
    });
    return new Response(result.toReadableStream());
  } catch (error) {
    if (error.name === "TimeoutError") {
      return new Response("Request timed out", { status: 408 });
    }
    throw error;
  }
};

// Next.js API route with timeout
import { NextRequest, NextResponse } from "next/server";

export async function POST(request: NextRequest) {
  const { prompt } = await request.json();
  const provider = createBestAIProvider();

  const result = await provider.generate({
    prompt,
    timeout: process.env.AI_TIMEOUT || "30s", // Configurable timeout
  });

  return NextResponse.json({ text: result.content });
}

🎬 See It In Action

No installation required! Experience NeuroLink through comprehensive visual documentation:

📱 Interactive Web Demo

cd neurolink-demo && node server.js
# Visit http://localhost:9876 for live demo
  • Real AI Integration: All 9 providers functional with live generation
  • Complete Use Cases: Business, creative, and developer scenarios
  • Performance Metrics: Live provider analytics and response times
  • Privacy Options: Test local AI with Ollama

🖥️ CLI Demonstrations

🌐 Web Interface Videos

📖 Complete Visual Documentation - All screenshots and videos

📚 Documentation

Getting Started

Advanced Features

Reference

πŸ—οΈ Supported Providers & Models

Provider Models Auth Method Free Tier Tool Support Key Benefit
πŸ”— LiteLLM πŸ†• 100+ Models (All Providers) Proxy Server Varies βœ… Full Universal Access
πŸ”— OpenAI Compatible πŸ†• Any OpenAI-compatible endpoint API Key + Base URL Varies βœ… Full Auto-Discovery + Flexibility
Google AI Studio Gemini 2.5 Flash/Pro API Key βœ… βœ… Full Free Tier Available
OpenAI GPT-4o, GPT-4o-mini API Key ❌ βœ… Full Industry Standard
Anthropic Claude 3.5 Sonnet API Key ❌ βœ… Full Advanced Reasoning
Amazon Bedrock Claude 3.5/3.7 Sonnet AWS Credentials ❌ βœ… Full* Enterprise Scale
Google Vertex AI Gemini 2.5 Flash Service Account ❌ βœ… Full Enterprise Google
Azure OpenAI GPT-4, GPT-3.5 API Key + Endpoint ❌ βœ… Full Microsoft Ecosystem
Ollama πŸ†• Llama 3.2, Gemma, Mistral (Local) None (Local) βœ… ⚠️ Partial Complete Privacy
Hugging Face πŸ†• 100,000+ open source models API Key βœ… ⚠️ Partial Open Source
Mistral AI πŸ†• Tiny, Small, Medium, Large API Key βœ… βœ… Full European/GDPR
Amazon SageMaker πŸ†• Custom Models (Your Endpoints) AWS Credentials ❌ βœ… Full Custom Model Hosting

Tool Support Legend:

  • βœ… Full: All tools working correctly
  • ⚠️ Partial: Tools visible but may not execute properly
  • ❌ Limited: Issues with model or configuration
  • * Bedrock requires valid AWS credentials, Ollama requires specific models like gemma3n for tool support

✨ Auto-Selection: NeuroLink automatically chooses the best available provider based on speed, reliability, and configuration.

πŸ” Smart Model Auto-Discovery (OpenAI Compatible)

The OpenAI Compatible provider includes intelligent model discovery that automatically detects available models from any endpoint:

# Setup - no model specified
export OPENAI_COMPATIBLE_BASE_URL="https://api.your-endpoint.ai/v1"
export OPENAI_COMPATIBLE_API_KEY="your-api-key"

# Auto-discovers and uses first available model
npx @juspay/neurolink generate "Hello!" --provider openai-compatible
# → 🔍 Auto-discovered model: claude-sonnet-4 from 3 available models

# Or specify explicitly to skip discovery
export OPENAI_COMPATIBLE_MODEL="gemini-2.5-pro"
npx @juspay/neurolink generate "Hello!" --provider openai-compatible

How it works:

  • Queries /v1/models endpoint to discover available models
  • Automatically selects the first available model when none specified
  • Falls back gracefully if discovery fails
  • Works with any OpenAI-compatible service (OpenRouter, vLLM, LiteLLM, etc.)
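
The discovery step can be reproduced manually: any OpenAI-compatible service lists its models at GET {BASE_URL}/models. A minimal sketch of the idea (simplified; NeuroLink's actual selection logic may differ):

// Query an OpenAI-compatible /v1/models endpoint and pick the first model
const baseUrl = process.env.OPENAI_COMPATIBLE_BASE_URL!;
const apiKey = process.env.OPENAI_COMPATIBLE_API_KEY!;

const res = await fetch(`${baseUrl}/models`, {
  headers: { Authorization: `Bearer ${apiKey}` },
});
if (!res.ok) {
  // Discovery failed - NeuroLink falls back gracefully in this case
  throw new Error(`Model discovery failed: ${res.status}`);
}

const { data } = (await res.json()) as { data: Array<{ id: string }> };
console.log(`Discovered ${data.length} models; using ${data[0]?.id}`);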

🎯 Production Features

Enterprise-Grade Reliability

  • Automatic Failover: Seamless provider switching on failures (see the sketch after this list)
  • Error Recovery: Comprehensive error handling and logging
  • Performance Monitoring: Built-in analytics and metrics
  • Type Safety: Full TypeScript support with IntelliSense
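
Failover can also be made explicit at the application level. A minimal sketch (the provider order below is an example; NeuroLink's auto-selection already performs this kind of switching internally):

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Try providers in preference order and fall through on failure
async function generateWithFallback(text: string) {
  const providers = ["google-ai", "openai", "anthropic"]; // example order
  for (const provider of providers) {
    try {
      return await neurolink.generate({ input: { text }, provider, timeout: "30s" });
    } catch (error) {
      console.warn(`Provider ${provider} failed, trying next...`);
    }
  }
  throw new Error("All providers failed");
}

const result = await generateWithFallback("Draft a status update");
console.log(`Used: ${result.provider}`);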

AI Platform Capabilities

  • MCP Foundation: Universal AI development platform with 10+ specialized tools
  • Analysis Tools: Usage optimization, performance benchmarking, parameter tuning
  • Workflow Tools: Test generation, code refactoring, documentation, debugging
  • Extensibility: Connect external tools and services via MCP protocol
  • 🆕 Dynamic Server Management: Programmatically add MCP servers at runtime

🔧 External MCP Server Management ✅ AVAILABLE NOW

External MCP integration is now production-ready:

  • ✅ 6 built-in tools working across all providers
  • ✅ SDK custom tool registration
  • ✅ External MCP server management (add, remove, list, test servers)
  • ✅ Dynamic tool discovery (automatic tool registration from external servers)
  • ✅ Multi-provider support (external tools work with all AI providers)
  • ✅ Streaming integration (external tools work with real-time streaming)
  • ✅ Enhanced tool tracking (proper parameter extraction and execution logging)
// Complete external MCP server API
const neurolink = new NeuroLink();

// Server management
await neurolink.addExternalMCPServer(serverId, config);
await neurolink.removeExternalMCPServer(serverId);
const servers = neurolink.listExternalMCPServers();
const server = neurolink.getExternalMCPServer(serverId);

// Tool management
const tools = neurolink.getExternalMCPTools();
const serverTools = neurolink.getExternalMCPServerTools(serverId);

// Direct tool execution
const result = await neurolink.executeExternalMCPTool(
  serverId,
  toolName,
  params,
);

// Statistics and monitoring
const stats = neurolink.getExternalMCPStatistics();
await neurolink.shutdownExternalMCPServers();

🤝 Contributing

We welcome contributions! Please see our Contributing Guidelines for details.

Development Setup

git clone https://github.com/juspay/neurolink
cd neurolink
pnpm install
npx husky install          # Setup git hooks for build rule enforcement
pnpm setup:complete        # One-command setup with all automation
pnpm test:adaptive         # Intelligent testing
pnpm build:complete       # Full build pipeline

Enterprise Developer Experience

NeuroLink features enterprise-grade build rule enforcement with comprehensive quality validation:

# Quality & Validation (required for all commits)
pnpm run validate:all      # Run all validation checks
pnpm run validate:security # Security scanning with gitleaks
pnpm run validate:env      # Environment consistency checks
pnpm run quality:metrics   # Generate quality score report

# Development Workflow
pnpm run check:all         # Pre-commit validation simulation
pnpm run format           # Auto-fix code formatting
pnpm run lint             # ESLint validation with zero-error tolerance

# Environment & Setup (2-minute initialization)
pnpm setup:complete        # Complete project setup
pnpm env:setup             # Safe .env configuration
pnpm env:backup            # Environment backup

# Testing (60-80% faster)
pnpm test:adaptive         # Intelligent test selection
pnpm test:providers        # AI provider validation

# Documentation & Content
pnpm docs:sync             # Cross-file documentation sync
pnpm content:generate      # Automated content creation

# Build & Deployment
pnpm build:complete        # 7-phase enterprise pipeline
pnpm dev:health            # System health monitoring

Build Rule Enforcement: All commits automatically validated with pre-commit hooks. See Contributing Guidelines for complete requirements.

📖 Complete Automation Guide - All 72+ commands and automation features

📄 License

MIT © Juspay Technologies

🔗 Related Projects


Built with ❤️ by Juspay Technologies
