🎨 Pixelle MCP - Omnimodal Agent Framework

English | δΈ­ζ–‡

✨ An AIGC solution based on the MCP protocol, seamlessly converting ComfyUI workflows into MCP tools with zero code, empowering LLM and ComfyUI integration.

(Demo video: video_en.mp4)

📋 Recent Updates

  • ✅ 2025-08-12: Integrated the LiteLLM framework, adding multi-model support for Gemini, DeepSeek, Claude, Qwen, and more

🚀 Features

  • ✅ 🔄 Full-modal Support: Covers TISV (Text, Image, Sound/Speech, Video) conversion and generation across modalities
  • ✅ 🧩 ComfyUI Ecosystem: The server side is built on ComfyUI and inherits the full capabilities of the open ComfyUI ecosystem
  • ✅ 🔧 Zero-code Development: Defines and implements a Workflow-as-MCP-Tool scheme, so new MCP Tools can be added dynamically without writing code
  • ✅ 🗄️ MCP Server: The server exposes its functionality over the MCP protocol and can be integrated with any MCP client (Cursor, Claude Desktop, and more)
  • ✅ 🌐 MCP Client: The client is built on the Chainlit framework, inherits Chainlit's UI controls, and can connect to additional MCP Servers
  • ✅ 🔄 Flexible Deployment: Deploy the server alone as an MCP Server, the client alone as an MCP Client, or both together
  • ✅ ⚙️ Unified Configuration: A single YAML configuration file manages all services
  • ✅ 🤖 Multi-LLM Support: Works with mainstream LLMs including OpenAI, Ollama, Gemini, DeepSeek, Claude, Qwen, and more

πŸ“ Project Structure

  • mcp-base: πŸ”§ Basic service, provides file storage and shared service capabilities
  • mcp-client: 🌐 MCP client, a web interface built on Chainlit
  • mcp-server: πŸ—„οΈ MCP server, provides various AIGC tools and services

πŸƒβ€β™‚οΈ Quick Start

πŸ“₯ 1. Clone the Source Code & Configure Services

πŸ“¦ 1.1 Clone the Source Code

git clone https://github.com/AIDC-AI/Pixelle-MCP.git
cd Pixelle-MCP

βš™οΈ 1.2 Configure Services

The project uses a unified YAML configuration scheme:

# Copy the configuration example file
cp config.yml.example config.yml
# Edit configuration items as needed

📋 Detailed Configuration Instructions:

The configuration file contains three main sections: Basic Service, MCP Server, and MCP Client. Each section has detailed configuration item descriptions in config.yml.example.

πŸ” Configuration Checklist:

  • βœ… Copied config.yml.example to config.yml
  • βœ… Configured ComfyUI service address (ensure ComfyUI is running)
  • βœ… Configured at least one LLM model (OpenAI or Ollama)
  • βœ… Port numbers are not occupied by other services (9001, 9002, 9003)

🔧 2. Add MCP Tool (Optional)

This step is optional and only affects your Agent's capabilities; skip it if you don't need it for now.

The mcp-server/workflows directory contains a set of popular workflows by default. Run the following command to copy them to your mcp-server. When the service starts, they will be automatically converted into MCP Tools for LLM use.

Note: It is strongly recommended to test each workflow in your ComfyUI canvas before copying it, to ensure it runs smoothly later.

cp -r mcp-server/workflows/* mcp-server/data/custom_workflows/

🚀 3. Start the Services

🎯 3.1 Start with Docker (Recommended)

# Start all services
docker compose up -d

πŸ› οΈ 3.2 One-click Script Start

Requires uv environment.

Linux/macOS users:

# Start all services (foreground)
./run.sh

# Or

# Start all services (background)
./run.sh start --daemon

Windows users:

Simply double-click the run.bat script in the root directory

πŸ› οΈ 3.3 Manual Service Start

Requires uv environment.

Start Basic Service (mcp-base):

cd mcp-base
# Install dependencies (only needed on first run or after updates)
uv sync
# Start service
uv run main.py

Start Server (mcp-server):

cd mcp-server
# Install dependencies (only needed on first run or after updates)
uv sync
# Start service
uv run main.py

Start Client (mcp-client):

cd mcp-client
# Install dependencies (only needed on first run or after updates)
uv sync
# Start service (for hot-reload in dev mode: uv run chainlit run main.py -w --port 9003)
uv run main.py

🌐 4. Access the Services

After startup, the service addresses (with the default ports from config.yml.example) are as follows:

  • mcp-base (basic service): http://localhost:9001
  • mcp-server (MCP server): http://localhost:9002
  • mcp-client (web UI): http://localhost:9003

πŸ› οΈ Add Your Own MCP Tool

⚑ One workflow = One MCP Tool

🎯 1. Add the Simplest MCP Tool

  • πŸ“ Build a workflow in ComfyUI for image Gaussian blur (Get it here), then set the LoadImage node's title to $image.image! as shown below:

  • 📤 Export it as an API format file and rename it to i_blur.json. You can export it yourself or use our pre-exported version (Get it here)

  • 📋 Copy the exported workflow file (it must be API format), paste it into the web page, and let the LLM add this Tool

  • ✨ After sending, the LLM will automatically convert this workflow into an MCP Tool

  • 🎨 Now refresh the page and send any image to have the LLM apply Gaussian blur to it

🔌 2. Add a Complex MCP Tool

The steps are the same as above, only the workflow part differs (Download workflow: UI format and API format)

🔧 ComfyUI Workflow Custom Specification

🎨 Workflow Format

The system supports ComfyUI workflows. Just design your workflow in the canvas and export it as API format. Use special syntax in node titles to define parameters and outputs.

πŸ“ Parameter Definition Specification

In the ComfyUI canvas, double-click the node title to edit, and use the following DSL syntax to define parameters:

$<param_name>.[~]<field_name>[!][:<description>]

πŸ” Syntax Explanation:

  • param_name: The parameter name for the generated MCP tool function
  • ~: Optional, indicates URL parameter upload processing, returns relative path
  • field_name: The corresponding input field in the node
  • !: Indicates this parameter is required
  • description: Description of the parameter

💡 Examples (an API-format excerpt follows the list below):

Required parameter example:

  • Set LoadImage node title to: $image.image!:Input image URL
  • Meaning: Creates a required parameter named image, mapped to the node's image field

URL upload processing example:

  • Set any node title to: $image.~image!:Input image URL
  • Meaning: Creates a required parameter named image; the system automatically downloads the URL, uploads the file to ComfyUI, and passes the resulting relative path to the field

📝 Note: LoadImage, VHS_LoadAudioUpload, VHS_LoadVideo, and similar nodes have this upload handling built in, so the ~ marker is not needed for them

Optional parameter example:

  • Set EmptyLatentImage node title to: $width.width:Image width, default 512
  • Meaning: Creates an optional parameter named width, mapped to the node's width field, default value is 512
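
For reference, here is a heavily trimmed sketch of how such titled nodes might appear in an exported API-format file. Everything except the _meta titles (the node IDs, the example.png filename, and the other input values) is illustrative and will differ in your own export; JSON cannot carry comments, so treat those values purely as placeholders:

{
  "1": {
    "class_type": "LoadImage",
    "inputs": { "image": "example.png" },
    "_meta": { "title": "$image.image!:Input image URL" }
  },
  "2": {
    "class_type": "EmptyLatentImage",
    "inputs": { "width": 512, "height": 512, "batch_size": 1 },
    "_meta": { "title": "$width.width:Image width, default 512" }
  }
}

In this sketch the width field currently holds 512, so under the type inference rules below it would become an optional int parameter with a default of 512, while the titled LoadImage node yields a required image parameter.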

🎯 Type Inference Rules

The system automatically infers parameter types based on the current value of the node field:

  • 🔢 int: Integer values (e.g. 512, 1024)
  • 📊 float: Floating-point values (e.g. 1.5, 3.14)
  • ✅ bool: Boolean values (e.g. true, false)
  • 📝 str: String values (default type)

📤 Output Definition Specification

🤖 Method 1: Auto-detect Output Nodes

The system will automatically detect the following common output nodes:

  • 🖼️ SaveImage - Image save node
  • 🎬 SaveVideo - Video save node
  • 🔊 SaveAudio - Audio save node
  • 📹 VHS_SaveVideo - VHS video save node
  • 🎵 VHS_SaveAudio - VHS audio save node

🎯 Method 2: Manual Output Marking

Usually used when a workflow has multiple outputs. Use $output.var_name in any node title to mark an output (a sketch of the exported form follows the list below):

  • Set node title to: $output.result
  • The system will use this node's output as the tool's return value
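
As a sketch (again with an arbitrary node ID, class_type, and trimmed inputs), marking an output in the exported API-format file simply means the node's title carries the $output marker:

{
  "9": {
    "class_type": "ImageScale",
    "inputs": { "width": 512, "height": 512 },
    "_meta": { "title": "$output.result" }
  }
}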

📄 Tool Description Configuration (Optional)

You can add a node titled MCP in the workflow to provide a tool description (an illustrative excerpt follows the steps below):

  1. Add a String (Multiline) or similar text node (it must have a single string field, named one of: value, text, string)
  2. Set the node title to: MCP
  3. Enter a detailed tool description in the value field
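
A possible shape for such a node in the exported API-format file is sketched below; the class_type is only a stand-in for whatever text node you actually use (not a specific node this project requires), and the description string is just an example:

{
  "12": {
    "class_type": "TextNodeOfYourChoice",
    "inputs": { "value": "Apply Gaussian blur to an input image and return the blurred result." },
    "_meta": { "title": "MCP" }
  }
}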

⚠️ Important Notes

  1. 🔒 Parameter Validation: Optional parameters (without !) must have default values set in the node
  2. 🔗 Node Connections: Fields already connected to other nodes will not be parsed as parameters
  3. 🏷️ Tool Naming: The exported file name is used as the tool name, so use a meaningful English name
  4. 📋 Detailed Descriptions: Provide detailed parameter descriptions for a better user experience
  5. 🎯 Export Format: Must export as API format, do not export as UI format

💬 Community

Scan the QR codes below to join our communities for latest updates and technical support:

(QR codes: Discord Community | WeChat Group)

🤝 How to Contribute

We welcome all forms of contribution! Whether you're a developer, designer, or user, you can participate in the project in the following ways:

πŸ› Report Issues

  • πŸ“‹ Submit bug reports on the Issues page
  • πŸ” Please search for similar issues before submitting
  • πŸ“ Describe the reproduction steps and environment in detail

💡 Feature Suggestions

  • 🚀 Submit feature requests in Issues
  • 💭 Describe the feature you want and its use case
  • 🎯 Explain how it improves user experience

🔧 Code Contributions

📋 Contribution Process

  1. 🍴 Fork this repo to your GitHub account
  2. 🌿 Create a feature branch: git checkout -b feature/your-feature-name
  3. 💻 Develop and add corresponding tests
  4. 📝 Commit changes: git commit -m "feat: add your feature"
  5. 📤 Push to your repo: git push origin feature/your-feature-name
  6. 🔄 Create a Pull Request to the main repo

🎨 Code Style

  • 🐍 Python code follows PEP 8 style guide
  • 📖 Add appropriate documentation and comments for new features

🧩 Contribute Workflows

  • 📦 Share your ComfyUI workflows with the community
  • 🛠️ Submit tested workflow files
  • 📚 Add usage instructions and examples for workflows

πŸ™ Acknowledgements

❤️ Sincere thanks to the organizations, projects, and teams that have supported the development and implementation of this project.

License

This project is released under the MIT License (see LICENSE; SPDX-License-Identifier: MIT).

About

An Open-Source Multimodal AIGC Solution based on ComfyUI + MCP + LLM https://pixelle.ai
