Enterprise AI Development Platform with universal provider support, factory pattern architecture, and access to 100+ AI models through LiteLLM integration. Production-ready with TypeScript support.
NeuroLink is an Enterprise AI Development Platform that unifies 12 major AI providers with intelligent fallback and built-in tool support. Available as both a programmatic SDK and professional CLI tool. Features LiteLLM integration for 100+ models, plus 6 core tools working across all providers. Extracted from production use at Juspay.
NeuroLink now supports LiteLLM, providing unified access to 100+ AI models from all major providers through a single interface:
- Universal Access: OpenAI, Anthropic, Google, Mistral, Meta, and more
- Unified Interface: OpenAI-compatible API for all models
- Cost Optimization: Automatic routing to cost-effective models
- Load Balancing: Automatic failover and load distribution
- Analytics: Built-in usage tracking and monitoring
# Quick start with LiteLLM
pip install litellm && litellm --port 4000
# Use any of 100+ models through one interface
npx @juspay/neurolink generate "Hello" --provider litellm --model "openai/gpt-4o"
npx @juspay/neurolink generate "Hello" --provider litellm --model "anthropic/claude-3-5-sonnet"
npx @juspay/neurolink generate "Hello" --provider litellm --model "google/gemini-2.0-flash"
Complete LiteLLM Integration Guide - Setup, configuration, and 100+ model access
NeuroLink now supports Amazon SageMaker, enabling you to deploy and use your own custom trained models through NeuroLink's unified interface:
- Custom Model Hosting - Deploy your fine-tuned models on AWS infrastructure
- Cost Control - Pay only for inference usage with auto-scaling capabilities
- Enterprise Security - Full control over model infrastructure and data privacy
- Performance - Dedicated compute resources with predictable latency
- Monitoring - Built-in CloudWatch metrics and logging
# Quick start with SageMaker
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export SAGEMAKER_DEFAULT_ENDPOINT="your-endpoint-name"
# Use your custom deployed models
npx @juspay/neurolink generate "Analyze this data" --provider sagemaker
npx @juspay/neurolink sagemaker status # Check endpoint health
npx @juspay/neurolink sagemaker benchmark my-endpoint # Performance testing
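For SDK usage, here is a minimal sketch. It assumes the SageMaker provider follows the same `AIProviderFactory` pattern shown for LiteLLM later in this README, and that the endpoint name is picked up from `SAGEMAKER_DEFAULT_ENDPOINT`.

```typescript
import { AIProviderFactory } from "@juspay/neurolink";

// Assumption: with no explicit endpoint argument, the provider falls back to
// the SAGEMAKER_DEFAULT_ENDPOINT environment variable configured above.
const sagemaker = await AIProviderFactory.createProvider("sagemaker");

const result = await sagemaker.generate({
  input: { text: "Analyze this data" },
});
console.log(result.content);
```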
Complete SageMaker Integration Guide - Setup, deployment, and custom model access
- Factory Pattern Architecture - Unified provider management through BaseProvider inheritance
- Tools-First Design - All providers include built-in tool support without additional configuration
- LiteLLM Integration - 100+ models from all major providers through unified interface
- Enterprise Architecture - Production-ready with clean abstractions
- Configuration Management - Flexible provider configuration with automatic backups
- Type Safety - Industry-standard TypeScript interfaces
- Performance - Fast response times with streaming support and 68% improved status checks
- Error Recovery - Graceful failures with provider fallback and retry logic
- Analytics & Evaluation - Built-in usage tracking and AI-powered quality assessment
- Real-time Event Monitoring - EventEmitter integration for progress tracking and debugging
- External MCP Integration - Model Context Protocol with 6 built-in tools + full external MCP server support
- Lighthouse Integration - Unified tool registration API supporting both object and array formats for seamless Lighthouse tool import
# Option 1: LiteLLM - Access 100+ models through one interface
pip install litellm && litellm --port 4000
export LITELLM_BASE_URL="http://localhost:4000"
export LITELLM_API_KEY="sk-anything"
# Use any of 100+ models
npx @juspay/neurolink generate "Hello, AI" --provider litellm --model "openai/gpt-4o"
npx @juspay/neurolink generate "Hello, AI" --provider litellm --model "anthropic/claude-3-5-sonnet"
# Option 2: OpenAI Compatible - Use any OpenAI-compatible endpoint with auto-discovery
export OPENAI_COMPATIBLE_BASE_URL="https://api.openrouter.ai/api/v1"
export OPENAI_COMPATIBLE_API_KEY="sk-or-v1-your-api-key"
# Auto-discovers available models via /v1/models endpoint
npx @juspay/neurolink generate "Hello, AI" --provider openai-compatible
# Or specify a model explicitly
export OPENAI_COMPATIBLE_MODEL="claude-3-5-sonnet"
npx @juspay/neurolink generate "Hello, AI" --provider openai-compatible
# Option 3: Direct Provider - Quick setup with Google AI Studio (free tier)
export GOOGLE_AI_API_KEY="AIza-your-google-ai-api-key"
npx @juspay/neurolink generate "Hello, AI" --provider google-ai
# Option 4: Amazon SageMaker - Use your custom deployed models
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export SAGEMAKER_DEFAULT_ENDPOINT="your-endpoint-name"
npx @juspay/neurolink generate "Hello, AI" --provider sagemaker
# CLI Commands - No installation required
npx @juspay/neurolink generate "Explain AI" # Auto-selects best provider
npx @juspay/neurolink gen "Write code" # Shortest form
npx @juspay/neurolink stream "Tell a story" # Real-time streaming
npx @juspay/neurolink status # Check all providers
# SDK installation for use in your TypeScript projects
npm install @juspay/neurolink
# NEW: External MCP Server Integration Quick Test
node -e "
const { NeuroLink } = require('@juspay/neurolink');
(async () => {
const neurolink = new NeuroLink();
// Add external filesystem MCP server
await neurolink.addExternalMCPServer('filesystem', {
command: 'npx',
args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
transport: 'stdio'
});
// External tools automatically available in generate()
const result = await neurolink.generate({
input: { text: 'List files in the current directory' }
});
console.log('External MCP integration working!');
console.log(result.content);
})();
"
import { NeuroLink, AIProviderFactory } from "@juspay/neurolink";
// LiteLLM - Access 100+ models through unified interface
const litellmProvider = await AIProviderFactory.createProvider(
"litellm",
"openai/gpt-4o",
);
const result = await litellmProvider.generate({
input: { text: "Write a haiku about programming" },
});
// Compare multiple models simultaneously
const models = [
"openai/gpt-4o",
"anthropic/claude-3-5-sonnet",
"google/gemini-2.0-flash",
];
const comparisons = await Promise.all(
models.map(async (model) => {
const provider = await AIProviderFactory.createProvider("litellm", model);
const result = await provider.generate({
input: { text: "Explain quantum computing" },
});
return { model, response: result.content, provider: result.provider };
}),
);
// Auto-select best available provider
const neurolink = new NeuroLink();
const autoResult = await neurolink.generate({
input: { text: "Write a business email" },
provider: "google-ai", // or let it auto-select
timeout: "30s",
});
console.log(autoResult.content);
console.log(`Used: ${autoResult.provider}`);
NeuroLink supports automatic conversation history management that maintains context across multiple turns within sessions. This enables AI to remember previous interactions and provide contextually aware responses. Session-based memory isolation ensures privacy between different conversations.
// Enable conversation memory with configurable limits
const neurolink = new NeuroLink({
conversationMemory: {
enabled: true,
maxSessions: 50, // Keep last 50 sessions
maxTurnsPerSession: 20, // Keep last 20 turns per session
},
});
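A usage sketch follows. How a session is identified is an assumption here: the `sessionId` field inside `context` is hypothetical and only illustrates how two turns in the same session could share history.

```typescript
// Hypothetical sessionId field: illustrative only, not a confirmed NeuroLink API
const first = await neurolink.generate({
  input: { text: "My name is Priya and I maintain the payments service." },
  context: { sessionId: "support-chat-42" },
});

const followUp = await neurolink.generate({
  input: { text: "Remind me, which service did I say I maintain?" },
  context: { sessionId: "support-chat-42" }, // same session, so earlier turns are remembered
});
console.log(followUp.content);
```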
Method aliases that match CLI command names:
// Both method names are equivalent:
const result1 = await provider.generate({ input: { text: "Hello" } }); // Matches CLI 'generate'
const result2 = await provider.gen({ input: { text: "Hello" } }); // Matches CLI 'gen'
// Use whichever style you prefer:
const provider = createBestAIProvider();
// Detailed method name
const story = await provider.generate({
input: { text: "Write a short story about AI" },
maxTokens: 200,
});
// CLI-style method names
const poem = await provider.generate({ input: { text: "Write a poem" } });
const joke = await provider.gen({ input: { text: "Tell me a joke" } });
# Basic AI generation with auto-provider selection
npx @juspay/neurolink generate "Write a business email"
# LiteLLM with specific model
npx @juspay/neurolink generate "Write code" --provider litellm --model "anthropic/claude-3-5-sonnet"
# With analytics and evaluation
npx @juspay/neurolink generate "Write a proposal" --enable-analytics --enable-evaluation --debug
# Streaming with tools (default behavior)
npx @juspay/neurolink stream "What time is it and write a file with the current date"
import { NeuroLink, AIProviderFactory } from "@juspay/neurolink";
// LiteLLM multi-model comparison
const models = [
"openai/gpt-4o",
"anthropic/claude-3-5-sonnet",
"google/gemini-2.0-flash",
];
const comparisons = await Promise.all(
models.map(async (model) => {
const provider = await AIProviderFactory.createProvider("litellm", model);
return await provider.generate({
input: { text: "Explain the benefits of renewable energy" },
enableAnalytics: true,
enableEvaluation: true,
});
}),
);
// Enhanced generation with analytics
const neurolink = new NeuroLink();
const result = await neurolink.generate({
input: { text: "Write a business proposal" },
enableAnalytics: true, // Get usage & cost data
enableEvaluation: true, // Get AI quality scores
context: { project: "Q1-sales" },
});
console.log("π Usage:", result.analytics);
console.log("β Quality:", result.evaluation);
console.log("Response:", result.content);
# Create .env file (automatically loaded by CLI)
echo 'OPENAI_API_KEY="sk-your-openai-key"' > .env
echo 'GOOGLE_AI_API_KEY="AIza-your-google-ai-key"' >> .env
echo 'AWS_ACCESS_KEY_ID="your-aws-access-key"' >> .env
# Test configuration
npx @juspay/neurolink status
NeuroLink provides comprehensive JSON input/output support for both CLI and SDK:
# CLI JSON Output - Structured data for scripts
npx @juspay/neurolink generate "Summary of AI trends" --format json
npx @juspay/neurolink gen "Create a user profile" --format json --provider google-ai
# Example JSON Output:
{
"content": "AI trends include increased automation...",
"provider": "google-ai",
"model": "gemini-2.5-flash",
"usage": {
"promptTokens": 15,
"completionTokens": 127,
"totalTokens": 142
},
"responseTime": 1234
}
// SDK JSON Input/Output - Full TypeScript support
import { createBestAIProvider } from "@juspay/neurolink";
const provider = createBestAIProvider();
// Structured output using a JSON schema
const result = await provider.generate({
input: { text: "Create a product specification" },
schema: {
type: "object",
properties: {
name: { type: "string" },
price: { type: "number" },
features: { type: "array", items: { type: "string" } },
},
},
});
// Access structured response
const productData = JSON.parse(result.content);
console.log(productData.name, productData.price, productData.features);
Complete Setup Guide - All providers with detailed instructions
- LiteLLM Integration - Access 100+ AI models from all major providers through unified interface
- Smart Model Auto-Discovery - OpenAI Compatible provider automatically detects available models via the /v1/models endpoint
- Factory Pattern Architecture - Unified provider management with BaseProvider inheritance
- Tools-First Design - All providers automatically include 6 direct tools (getCurrentTime, readFile, listDirectory, calculateMath, writeFile, searchFiles)
- 12 AI Providers - OpenAI, Bedrock, Vertex AI, Google AI Studio, Anthropic, Azure, LiteLLM, OpenAI Compatible, Hugging Face, Ollama, Mistral AI, SageMaker
- Cost Optimization - Automatic selection of cheapest models and LiteLLM routing
- Automatic Fallback - Never fail when providers are down, intelligent provider switching
- CLI + SDK - Use from command line or integrate programmatically with TypeScript support
- Production Ready - Enterprise-grade error handling, performance optimization, extracted from production
- External MCP Integration - Model Context Protocol with built-in tools + full external MCP server support
- Smart Model Resolution - Fuzzy matching, aliases, and capability-based search across all providers
- Local AI Support - Run completely offline with Ollama or through LiteLLM proxy
- Universal Model Access - Direct providers + 100,000+ models via Hugging Face + 100+ models via LiteLLM
- Automatic Context Summarization - Stateful, long-running conversations with automatic history summarization
- Analytics & Evaluation - Built-in usage tracking and AI-powered quality assessment
| Component | Status | Description |
| --- | --- | --- |
| Built-in Tools | ✅ Working | 6 core tools fully functional across all providers |
| SDK Custom Tools | ✅ Working | Register custom tools programmatically |
| External MCP Tools | ✅ Working | Full external MCP server support with dynamic tool discovery |
| Tool Execution | ✅ Working | Real-time AI tool calling with all tool types |
| Streaming Support | ✅ Working | External MCP tools work with streaming generation |
| Multi-Provider | ✅ Working | External tools work across all AI providers |
| CLI Integration | ✅ Ready | Production-ready with external MCP support |
# Test built-in tools (works immediately)
npx @juspay/neurolink generate "What time is it?" --debug
# NEW: External MCP server integration (SDK)
import { NeuroLink } from '@juspay/neurolink';
const neurolink = new NeuroLink();
// Add external MCP server (e.g., Bitbucket)
await neurolink.addExternalMCPServer('bitbucket', {
command: 'npx',
args: ['-y', '@nexus2520/bitbucket-mcp-server'],
transport: 'stdio',
env: {
BITBUCKET_USERNAME: process.env.BITBUCKET_USERNAME,
BITBUCKET_TOKEN: process.env.BITBUCKET_TOKEN,
BITBUCKET_BASE_URL: 'https://bitbucket.example.com'
}
});
// Use external MCP tools in generation
const result = await neurolink.generate({
input: { text: 'Get pull request #123 details from the main repository' },
disableTools: false // External MCP tools automatically available
});
# Discover available MCP servers
npx @juspay/neurolink mcp discover --format table
Register your own tools programmatically with the SDK:
import { NeuroLink } from "@juspay/neurolink";
import { z } from "zod";
const neurolink = new NeuroLink();
// Register a simple tool
neurolink.registerTool("weatherLookup", {
description: "Get current weather for a city",
parameters: z.object({
city: z.string().describe("City name"),
units: z.enum(["celsius", "fahrenheit"]).optional(),
}),
execute: async ({ city, units = "celsius" }) => {
// Your implementation here
return {
city,
temperature: 22,
units,
condition: "sunny",
};
},
});
// Use it in generation
const result = await neurolink.generate({
input: { text: "What's the weather in London?" },
provider: "google-ai",
});
// Register multiple tools - Object format (existing)
neurolink.registerTools({
stockPrice: {
description: "Get stock price",
execute: async () => ({ price: 150.25 }),
},
calculator: {
description: "Calculate math",
execute: async () => ({ result: 42 }),
},
});
// Register multiple tools - Array format (Lighthouse compatible)
neurolink.registerTools([
{
name: "lighthouseTool1",
tool: {
description: "Lighthouse analytics tool",
parameters: z.object({
merchantId: z.string(),
dateRange: z.string().optional(),
}),
execute: async ({ merchantId, dateRange }) => {
// Lighthouse tool implementation with Zod schema
return { data: "analytics result" };
},
},
},
{
name: "lighthouseTool2",
tool: {
description: "Payment processing tool",
execute: async () => ({ status: "processed" }),
},
},
]);
NeuroLink features intelligent model selection and cost optimization:
- Automatic Cost Optimization: Selects cheapest models for simple tasks
- LiteLLM Model Routing: Access 100+ models with automatic load balancing
- Capability-Based Selection: Find models with specific features (vision, function calling)
- Intelligent Fallback: Seamless switching when providers fail
# Cost optimization - automatically use cheapest model
npx @juspay/neurolink generate "Hello" --optimize-cost
# LiteLLM specific model selection
npx @juspay/neurolink generate "Complex analysis" --provider litellm --model "anthropic/claude-3-5-sonnet"
# Auto-select best available provider
npx @juspay/neurolink generate "Write code" # Automatically chooses optimal provider
# Text generation with automatic MCP tool detection (default)
npx @juspay/neurolink generate "What time is it?"
# Alternative short form
npx @juspay/neurolink gen "What time is it?"
# Disable tools for training-data-only responses
npx @juspay/neurolink generate "What time is it?" --disable-tools
# With custom timeout for complex prompts
npx @juspay/neurolink generate "Explain quantum computing in detail" --timeout 1m
# Real-time streaming with agent support (default)
npx @juspay/neurolink stream "What time is it?"
# Streaming without tools (traditional mode)
npx @juspay/neurolink stream "Tell me a story" --disable-tools
# Streaming with extended timeout
npx @juspay/neurolink stream "Write a long story" --timeout 5m
# Provider diagnostics
npx @juspay/neurolink status --verbose
# Batch processing
echo -e "Write a haiku\nExplain gravity" > prompts.txt
npx @juspay/neurolink batch prompts.txt --output results.json
# Batch with custom timeout per request
npx @juspay/neurolink batch prompts.txt --timeout 45s --output results.json
// SvelteKit API route with timeout handling
import type { RequestHandler } from "@sveltejs/kit";
import { createBestAIProvider } from "@juspay/neurolink";
export const POST: RequestHandler = async ({ request }) => {
const { message } = await request.json();
const provider = createBestAIProvider();
try {
// NEW: Primary streaming method (recommended)
const result = await provider.stream({
input: { text: message },
timeout: "2m", // 2 minutes for streaming
});
// Process stream
for await (const chunk of result.stream) {
// Handle streaming content
console.log(chunk.content);
}
// LEGACY: Backward compatibility (still works)
const legacyResult = await provider.stream({
prompt: message,
timeout: "2m", // 2 minutes for streaming
});
// Alternatively, return the stream to the client instead of consuming it above
return new Response(result.toReadableStream());
} catch (error) {
if (error.name === "TimeoutError") {
return new Response("Request timed out", { status: 408 });
}
throw error;
}
};
// Next.js API route with timeout
import { NextRequest, NextResponse } from "next/server";
import { createBestAIProvider } from "@juspay/neurolink";
export async function POST(request: NextRequest) {
const { prompt } = await request.json();
const provider = createBestAIProvider();
const result = await provider.generate({
prompt,
timeout: process.env.AI_TIMEOUT || "30s", // Configurable timeout
});
return NextResponse.json({ text: result.content });
}
No installation required! Experience NeuroLink through comprehensive visual documentation:
cd neurolink-demo && node server.js
# Visit http://localhost:9876 for live demo
- Real AI Integration: All 9 providers functional with live generation
- Complete Use Cases: Business, creative, and developer scenarios
- Performance Metrics: Live provider analytics and response times
- Privacy Options: Test local AI with Ollama
- CLI Help & Commands - Complete command reference
- Provider Status Check - Connectivity verification (now with authentication and model availability checks)
- Text Generation - Real AI content creation
- Business Use Cases - Professional applications
- Developer Tools - Code generation and APIs
- Creative Tools - Content creation
Complete Visual Documentation - All screenshots and videos
- Provider Setup - Complete environment configuration
- CLI Guide - All commands and options
- SDK Integration - Next.js, SvelteKit, React
- Environment Variables - Full configuration guide
- Factory Pattern Migration - Guide to the new unified provider architecture
- MCP Foundation - Model Context Protocol architecture
- Dynamic Models - Self-updating model configurations and cost optimization
- AI Analysis Tools - Usage optimization and benchmarking
- AI Workflow Tools - Development lifecycle assistance
- Visual Demos - Screenshots and videos
- API Reference - Complete TypeScript API
- Framework Integration - SvelteKit, Next.js, Express.js
| Provider | Models | Auth Method | Free Tier | Tool Support | Key Benefit |
| --- | --- | --- | --- | --- | --- |
| LiteLLM | 100+ Models (All Providers) | Proxy Server | Varies | ✅ Full | Universal Access |
| OpenAI Compatible | Any OpenAI-compatible endpoint | API Key + Base URL | Varies | ✅ Full | Auto-Discovery + Flexibility |
| Google AI Studio | Gemini 2.5 Flash/Pro | API Key | ✅ | ✅ Full | Free Tier Available |
| OpenAI | GPT-4o, GPT-4o-mini | API Key | ❌ | ✅ Full | Industry Standard |
| Anthropic | Claude 3.5 Sonnet | API Key | ❌ | ✅ Full | Advanced Reasoning |
| Amazon Bedrock | Claude 3.5/3.7 Sonnet | AWS Credentials | ❌ | ✅ Full* | Enterprise Scale |
| Google Vertex AI | Gemini 2.5 Flash | Service Account | ❌ | ✅ Full | Enterprise Google |
| Azure OpenAI | GPT-4, GPT-3.5 | API Key + Endpoint | ❌ | ✅ Full | Microsoft Ecosystem |
| Ollama | Llama 3.2, Gemma, Mistral (Local) | None (Local) | ✅ | ✅ Full* | Complete Privacy |
| Hugging Face | 100,000+ open source models | API Key | ✅ | ⚠️ Partial | Open Source |
| Mistral AI | Tiny, Small, Medium, Large | API Key | ❌ | ✅ Full | European/GDPR |
| Amazon SageMaker | Custom Models (Your Endpoints) | AWS Credentials | ❌ | ✅ Full | Custom Model Hosting |
Tool Support Legend:
- ✅ Full: All tools working correctly
- ⚠️ Partial: Tools visible but may not execute properly
- ❌ Limited: Issues with model or configuration
- * Bedrock requires valid AWS credentials; Ollama requires specific models (e.g., gemma3n) for tool support
Auto-Selection: NeuroLink automatically chooses the best available provider based on speed, reliability, and configuration.
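As a minimal illustration of auto-selection from the SDK side, using the `createBestAIProvider` helper that appears elsewhere in this README:

```typescript
import { createBestAIProvider } from "@juspay/neurolink";

// Picks the best configured provider based on available credentials and status
const provider = createBestAIProvider();
const result = await provider.generate({
  input: { text: "Say hello from whichever provider was selected" },
});
console.log(`Selected provider: ${result.provider}`);
```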
The OpenAI Compatible provider includes intelligent model discovery that automatically detects available models from any endpoint:
# Setup - no model specified
export OPENAI_COMPATIBLE_BASE_URL="https://api.your-endpoint.ai/v1"
export OPENAI_COMPATIBLE_API_KEY="your-api-key"
# Auto-discovers and uses first available model
npx @juspay/neurolink generate "Hello!" --provider openai-compatible
# Auto-discovered model: claude-sonnet-4 from 3 available models
# Or specify explicitly to skip discovery
export OPENAI_COMPATIBLE_MODEL="gemini-2.5-pro"
npx @juspay/neurolink generate "Hello!" --provider openai-compatible
How it works:

- Queries the /v1/models endpoint to discover available models (see the sketch below)
- Automatically selects the first available model when none is specified
- Falls back gracefully if discovery fails
- Works with any OpenAI-compatible service (OpenRouter, vLLM, LiteLLM, etc.)
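To make the discovery step concrete, the sketch below shows the kind of request auto-discovery performs. This is not NeuroLink's internal code; it is a plain fetch against the standard OpenAI-compatible models endpoint.

```typescript
// Illustrative only: list models the way auto-discovery does, via GET {base}/models
const baseUrl = process.env.OPENAI_COMPATIBLE_BASE_URL; // e.g. "https://api.your-endpoint.ai/v1"
const apiKey = process.env.OPENAI_COMPATIBLE_API_KEY;

const response = await fetch(`${baseUrl}/models`, {
  headers: { Authorization: `Bearer ${apiKey}` },
});
const { data } = (await response.json()) as { data: { id: string }[] };

console.log(data.map((m) => m.id)); // available model IDs
// When OPENAI_COMPATIBLE_MODEL is not set, NeuroLink uses the first available model
```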
- Automatic Failover: Seamless provider switching on failures
- Error Recovery: Comprehensive error handling and logging
- Performance Monitoring: Built-in analytics and metrics
- Type Safety: Full TypeScript support with IntelliSense
- MCP Foundation: Universal AI development platform with 10+ specialized tools
- Analysis Tools: Usage optimization, performance benchmarking, parameter tuning
- Workflow Tools: Test generation, code refactoring, documentation, debugging
- Extensibility: Connect external tools and services via MCP protocol
- Dynamic Server Management: Programmatically add MCP servers at runtime
External MCP integration is now production-ready:
- ✅ 6 built-in tools working across all providers
- ✅ SDK custom tool registration
- ✅ External MCP server management (add, remove, list, test servers)
- ✅ Dynamic tool discovery (automatic tool registration from external servers)
- ✅ Multi-provider support (external tools work with all AI providers)
- ✅ Streaming integration (external tools work with real-time streaming)
- ✅ Enhanced tool tracking (proper parameter extraction and execution logging)
// Complete external MCP server API
const neurolink = new NeuroLink();
// Server management
await neurolink.addExternalMCPServer(serverId, config);
await neurolink.removeExternalMCPServer(serverId);
const servers = neurolink.listExternalMCPServers();
const server = neurolink.getExternalMCPServer(serverId);
// Tool management
const tools = neurolink.getExternalMCPTools();
const serverTools = neurolink.getExternalMCPServerTools(serverId);
// Direct tool execution
const result = await neurolink.executeExternalMCPTool(
serverId,
toolName,
params,
);
// Statistics and monitoring
const stats = neurolink.getExternalMCPStatistics();
await neurolink.shutdownExternalMCPServers();
We welcome contributions! Please see our Contributing Guidelines for details.
git clone https://github.com/juspay/neurolink
cd neurolink
pnpm install
npx husky install # Setup git hooks for build rule enforcement
pnpm setup:complete # One-command setup with all automation
pnpm test:adaptive # Intelligent testing
pnpm build:complete # Full build pipeline
NeuroLink features enterprise-grade build rule enforcement with comprehensive quality validation:
# Quality & Validation (required for all commits)
pnpm run validate:all # Run all validation checks
pnpm run validate:security # Security scanning with gitleaks
pnpm run validate:env # Environment consistency checks
pnpm run quality:metrics # Generate quality score report
# Development Workflow
pnpm run check:all # Pre-commit validation simulation
pnpm run format # Auto-fix code formatting
pnpm run lint # ESLint validation with zero-error tolerance
# Environment & Setup (2-minute initialization)
pnpm setup:complete # Complete project setup
pnpm env:setup # Safe .env configuration
pnpm env:backup # Environment backup
# Testing (60-80% faster)
pnpm test:adaptive # Intelligent test selection
pnpm test:providers # AI provider validation
# Documentation & Content
pnpm docs:sync # Cross-file documentation sync
pnpm content:generate # Automated content creation
# Build & Deployment
pnpm build:complete # 7-phase enterprise pipeline
pnpm dev:health # System health monitoring
Build Rule Enforcement: All commits automatically validated with pre-commit hooks. See Contributing Guidelines for complete requirements.
Complete Automation Guide - All 72+ commands and automation features
MIT © Juspay Technologies
- Vercel AI SDK - Underlying provider implementations
- SvelteKit - Web framework used in this project
- Model Context Protocol - Tool integration standard
Built with ❤️ by Juspay Technologies