Universal multi-engine TTS extension for ComfyUI - evolved from the original ChatterBox Voice project.
A comprehensive ComfyUI extension providing unified Text-to-Speech and Voice Conversion capabilities through multiple engines, including ChatterBox TTS, F5-TTS, Higgs Audio 2, and RVC (Real-time Voice Conversion), with a modular architecture designed for extensibility and future engine integrations.
- Demo Videos
- Features
- What's New in my Project?
  - SRT Timing and TTS Node
  - F5-TTS Integration and Audio Analyzer
  - Silent Speech Analyzer
  - Character & Narrator Switching
  - Language Switching with Bracket Syntax
  - Iterative Voice Conversion
  - RVC Voice Conversion Integration
  - Pause Tags System
  - Multi-language ChatterBox Support
  - Universal Streaming Architecture
  - Higgs Audio 2 Voice Cloning
- Quick Start
- Installation
- Enhanced Features
- Usage
- Example Workflows
- Settings Guide
- Text Processing Capabilities
- License
- Credits
- Links
- Voice Recording: Smart silence detection for voice capture
- Enhanced Chunking: Intelligent text splitting with multiple combination methods
- Unlimited Text Length: No character limits with smart processing
Original creator: ShmuelRonen
- Multi-Engine TTS - ChatterBox TTS, F5-TTS, and Higgs Audio 2 with voice cloning, reference audio synthesis, and production-grade quality
- Higgs Audio 2 Voice Cloning - State-of-the-art voice cloning with 30+ second reference audio and multi-speaker conversation support
- Voice Conversion - ChatterBox VC with iterative refinement + RVC real-time conversion using .pth character models
- Voice Capture & Recording - Smart silence detection and voice input recording
- Character & Language Switching - Multi-character TTS with `[CharacterName]` tags, an alias system, and `[language:character]` syntax for seamless model switching
- Multi-language Support - ChatterBox (English, German, Norwegian) + F5-TTS (English, German, Spanish, French, Japanese, Hindi, and more)
- Emotion Control - Unique exaggeration parameter for expressive speech
- Enhanced Chunking - Intelligent text splitting for long content with multiple combination methods
- Advanced Audio Processing - Optional FFmpeg support for premium audio quality with graceful fallback
- Vocal/Noise Removal - AI-powered vocal separation, noise reduction, and echo removal with GPU acceleration → Complete Guide
- Audio Wave Analyzer - Interactive waveform visualization and precise timing extraction for F5-TTS workflows → Complete Guide
- Silent Speech Analyzer - Video analysis with experimental viseme detection, mouth movement tracking, and base SRT timing generation from silent video using MediaPipe
- Parallel Processing - Configurable worker-based processing via the `batch_size` parameter (note: sequential processing with `batch_size=0` remains optimal for performance)

The "ChatterBox SRT Voice TTS" node allows TTS generation by processing SRT content (SubRip Subtitle) files, ensuring precise timing and synchronization with your audio.
Key SRT Features:
- SRT style Processing: Uses SRT style to generate TTS, aligning audio with subtitle timings
smart_natural
Timing Mode: Intelligent shifting logic that prevents overlaps and ensures natural speech flowAdjusted_SRT
Output: Provides actual timings for generated audio for accurate post-processing- Segment-Level Caching: Only regenerates modified segments, significantly speeding up workflows
For comprehensive technical information, refer to the SRT_IMPLEMENTATION.md file.
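For reference, the node consumes standard SubRip format. A minimal example input (entries invented for illustration):

```
1
00:00:00,500 --> 00:00:03,000
Welcome to the show.

2
00:00:03,500 --> 00:00:07,000
Each subtitle entry becomes a timed TTS segment.
```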

- F5-TTS Voice Synthesis: High-quality voice cloning with reference audio + text
- Audio Wave Analyzer: Interactive waveform visualization for precise timing extraction
- Multi-language Support: English, German, Spanish, French, Japanese models
- Speech Editing Workflows: Advanced F5-TTS editing capabilities
NEW in v4.4.0: Video analysis and mouth movement detection for silent video processing!
- Mouth Movement Analysis: Real-time detection of mouth shapes and movements from video
- Experimental Viseme Classification: Approximate detection of vowels (A, E, I, O, U) and consonants (B, F, M, etc.) - results are experimental approximations, not precise
- 3-Level Analysis System:
- Frame-level mouth movement detection
- Syllable grouping with temporal analysis
- Word prediction using CMU Pronouncing Dictionary (135K+ words)
- Base SRT Generation: Creates timing-focused SRT files with start/end speech timing as foundation for user editing
- MediaPipe Integration: Production-ready analysis using Google's MediaPipe framework
- Visual Feedback: Preview videos with overlaid detection results
- Automatic Phonetic Placeholders: Word predictions provide phonetically-sensible placeholders, but phrases require user editing for meaningful content
- TTS Integration: SRT output designed for use with TTS SRT nodes after manual content editing
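To illustrate the frame-level stage, here is a minimal mouth-openness sketch built on MediaPipe FaceMesh. The landmark indices and threshold are illustrative assumptions, not the suite's actual code:

```python
# Sketch: frame-level mouth-movement detection with MediaPipe FaceMesh.
# Landmark indices and the 0.05 threshold are illustrative assumptions.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)

def mouth_open_ratio(frame_bgr):
    """Rough mouth-openness ratio for the first detected face (0 if none)."""
    results = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return 0.0
    lm = results.multi_face_landmarks[0].landmark
    lip_gap = abs(lm[13].y - lm[14].y)        # 13/14: inner lips
    face_height = abs(lm[10].y - lm[152].y)   # 10/152: forehead/chin, normalizes scale
    return lip_gap / face_height

cap = cv2.VideoCapture("silent_clip.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if mouth_open_ratio(frame) > 0.05:        # speech-like movement
        print(f"movement at {frame_idx / fps:.2f}s")
    frame_idx += 1
cap.release()
```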
Perfect for:
- Creating base timing templates from silent video footage
- Animation and VFX reference timing
- Foundation for manual subtitle creation
Important Notes:
- OpenSeeFace provider is experimental and not recommended for production use - MediaPipe is the stable solution
- Viseme detection is experimental approximation - expect to manually edit both timing and content
- Generated text placeholders are phonetic suggestions, not meaningful sentences
NEW in v4.5.0: State-of-the-art voice cloning technology with advanced neural voice replication!
- High-Quality Voice Cloning: Clone any voice from 30+ second reference audio with exceptional fidelity
- Multi-Speaker Conversations: Native support for character switching within conversations
- Real-Time Processing: Generate speech in cloned voices with minimal latency
- Universal Integration: Works seamlessly with existing TTS Text and TTS SRT nodes
Key Capabilities:
- Voice Cloning from Reference Audio: Upload any 30+ second audio file for voice replication
- Multi-Language Support: English (tested), with potential support for Chinese, Korean, German, and Spanish (based on model training data)
- Character Switching: Use `[CharacterName]` syntax for multi-speaker dialogues
- Advanced Generation Control: Fine-tune temperature, top-p, top-k, and token limits
- Smart Chunking: Automatic handling of unlimited text length with seamless audio combination
- Intelligent Caching: Instant regeneration of previously processed content
Technical Features:
- Modular Architecture: Clean integration with unified TTS system
- Automatic Model Management: Downloads and organizes models into the `ComfyUI/models/TTS/HiggsAudio/` structure
- Progress Tracking: Real-time generation feedback with tqdm progress bars
- Voice Reference Discovery: Flexible voice file management system
Quick Start:
- Add the `Higgs Audio Engine` node to configure voice cloning parameters
- Connect it to a `TTS Text` or `TTS SRT` node for generation
- Specify a reference audio file or use the voice discovery system
- Generate high-quality cloned speech with automatic optimization
Perfect for:
- Voice acting and character dialogue creation
- Audiobook narration with consistent voice characteristics
- Multi-speaker content with distinct voice personalities
- Professional voice replication for content creation
NEW in v3.1.0: Seamless character switching for both F5TTS and ChatterBox engines!
- Multi-Character Support: Use `[CharacterName]` tags to switch between different voices
- Voice Folder Integration: Organized character voice management system
- Character Aliases: User-friendly alias system - use `[Alice]` instead of `[female_01]` via `#character_alias_map.txt`
- Robust Fallback: Graceful handling when a character is not found (no errors!)
- Universal Compatibility: Works with both F5TTS and ChatterBox TTS engines
- SRT Integration: Character switching within subtitle timing
- Backward Compatible: Existing workflows work unchanged
Complete Character Switching Guide
Example usage:
```
Hello! This is the narrator speaking.
[Alice] Hi there! I'm Alice, nice to meet you.
[Bob] And I'm Bob! Great to meet you both.
Back to the narrator for the conclusion.
```
NEW in v3.4.0: Seamless language switching using simple bracket notation!
- Language Code Syntax: Use `[language:character]` tags to switch languages and models automatically
- Smart Model Loading: Automatically loads the correct language models (F5-DE, F5-FR, German, Norwegian, etc.)
- Flexible Aliases (v3.4.3): Support for `[German:Alice]`, `[Brazil:Bob]`, `[USA:]`, `[Portugal:]` - no need to remember language codes!
- Standard Format: Also supports traditional `[fr:Alice]`, `[de:Bob]`, or `[es:]` (language-only) patterns
- Character Integration: Combines perfectly with character switching and the alias system
- Performance Optimized: Language groups are processed efficiently to minimize model switching
- Alias Support: Language defaults work with the character alias system
Supported Languages:
- F5-TTS: English (en), German (de), Spanish (es), French (fr), Italian (it), Japanese (jp), Thai (th), Portuguese (pt), Hindi (hi)
- ChatterBox: English (en), German (de), Norwegian (no/nb/nn)
Example usage:
```
Hello! This is English text with the default model.
[de:Alice] Hallo! Ich spreche Deutsch mit Alice's Stimme.
[fr:] Bonjour! Je parle français avec la voix du narrateur.
[es:Bob] ¡Hola! Soy Bob hablando en español.
Back to English with the original model.
```
Advanced SRT Integration:
```
1
00:00:01,000 --> 00:00:04,000
Hello! Welcome to our multilingual show.

2
00:00:04,500 --> 00:00:08,000
[de:female_01] Willkommen zu unserer mehrsprachigen Show!

3
00:00:08,500 --> 00:00:12,000
[fr:] Bienvenue à notre émission multilingue!
```
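For illustration, bracket tags of this shape can be tokenized with a small parser like the sketch below. This is a toy under stated assumptions, not the suite's actual parser, which also resolves aliases and language defaults:

```python
# Sketch: tokenize [language:character] / [Character] tags into segments.
import re

TAG = re.compile(r"\[(?:(?P<lang>[^:\[\]]*):)?(?P<char>[^\[\]]*)\]")

def split_segments(text, default_lang="en", default_char="narrator"):
    """Yield (language, character, text) runs from a tagged script."""
    lang, char, pos = default_lang, default_char, 0
    for m in TAG.finditer(text):
        if m.start() > pos:
            yield lang, char, text[pos:m.start()].strip()
        if m.group("lang") is not None:          # [de:Alice] or [fr:]
            lang = m.group("lang") or default_lang
        char = m.group("char") or default_char   # empty character -> narrator
        pos = m.end()
    if pos < len(text):
        yield lang, char, text[pos:].strip()

for seg in split_segments("Hello! [de:Alice] Hallo! [fr:] Bonjour!"):
    print(seg)  # ('en', 'narrator', 'Hello!') ... ('fr', 'narrator', 'Bonjour!')
```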
NEW: Progressive voice refinement with intelligent caching for instant experimentation!
- Refinement Passes: Multiple conversion iterations (1-30, recommended 1-5)
- Smart Caching: Results cached up to 5 iterations - change from 5→3→4 passes instantly
- Progressive Quality: Each pass refines output to sound more like target voice
NEW in v4.1.0: Professional-grade Real-time Voice Conversion with .pth character models!
- RVC Character Models: Load .pth voice models with the Load RVC Character Model node
- Unified Voice Changer: Full RVC integration in the Voice Changer node
- Iterative Refinement: 1-30 passes with smart caching (like ChatterBox)
- Enhanced Quality: Automatic .index file loading for improved voice similarity
- Auto-Download: Required models download from official sources automatically
- Cache Intelligence: Skip recomputation - change 5→3→4 passes instantly
- Neural Network Quality: High-quality voice conversion using trained RVC models
See RVC Models Setup for a detailed installation guide
How it works:
- Load your .pth RVC model with the Load RVC Character Model node
- Connect it to the Voice Changer node and select the "RVC" engine
- Process with iterative refinement for progressive quality improvement
- Results cached for instant experimentation with different pass counts
NEW: Intelligent pause insertion for natural speech timing control!
- Smart Pause Syntax: Use pause tags anywhere in your text with multiple aliases
- Flexible Duration Formats:
  - Seconds: `[pause:1.5]`, `[wait:2s]`, `[stop:3]`
  - Milliseconds: `[pause:500ms]`, `[wait:1200ms]`, `[stop:800ms]`
  - Supported aliases: `pause`, `wait`, `stop` (all work identically)
- Character Integration: Pause tags work seamlessly with character switching
- Intelligent Caching: Changing pause durations won't regenerate unchanged text segments
- Universal Support: Works across all TTS nodes (ChatterBox, F5-TTS, SRT)
- Automatic Processing: No additional parameters needed - just add tags to your text
Example usage:
```
Welcome to our show! [pause:1s] Today we'll discuss exciting topics.
[Alice] I'm really excited! [wait:500ms] This will be great.
[stop:2] Let's get started with the main content.
```
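A minimal sketch of how such tags can be interpreted; `pause_seconds` is a hypothetical helper, and the accepted formats follow the list above:

```python
# Sketch: parse pause-tag durations like [pause:1.5], [wait:2s], [stop:800ms].
import re

PAUSE = re.compile(r"\[(?:pause|wait|stop):(\d+(?:\.\d+)?)(ms|s)?\]", re.IGNORECASE)

def pause_seconds(tag: str) -> float:
    """Convert a single pause tag to a duration in seconds."""
    m = PAUSE.fullmatch(tag)
    if not m:
        raise ValueError(f"not a pause tag: {tag}")
    value, unit = float(m.group(1)), (m.group(2) or "").lower()
    return value / 1000.0 if unit == "ms" else value  # bare numbers mean seconds

assert pause_seconds("[pause:1.5]") == 1.5
assert pause_seconds("[wait:2s]") == 2.0
assert pause_seconds("[stop:800ms]") == 0.8
```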
NEW in v3.3.0: ChatterBox TTS and SRT nodes now support multiple languages with automatic model management!
Supported Languages:
- English: Original ResembleAI model (default)
- German: High-quality German ChatterBox model (stlohrey/chatterbox_de)
- Norwegian: Norwegian ChatterBox model (akhbar/chatterbox-tts-norwegian)
Key Features:
- Language Dropdown: Simple language selection in all ChatterBox nodes
- Auto-Download: Models download automatically on first use (~1GB per language)
- Local Priority: Prefers locally installed models over downloads for offline use
- Safetensors Support: Modern format support for newer language models
- Seamless Integration: Works with existing workflows - just select your language
Usage: Select a language from the dropdown → the first generation downloads the model → subsequent generations use the cached model
NEW in v4.3.0: Complete architectural overhaul implementing universal streaming system with parallel processing capabilities!
Key Features:
- Universal Streaming Infrastructure: Unified processing system eliminating engine-specific code complexity
- Parallel Processing: Configurable worker-based processing via the `batch_size` parameter
- Thread-Safe Design: Stateless wrapper architecture eliminates shared state corruption
- Future-Proof: New engines require only adapter implementation
Performance Notes:
- Sequential Recommended: Use `batch_size=0` for optimal performance (sequential processing)
- Parallel Available: `batch_size > 1` enables parallel workers but is typically slower due to GPU inference characteristics
- Memory Efficiency: Improved model sharing prevents memory exhaustion when switching modes
One-click installation with intelligent dependency management:
- Use ComfyUI Manager to install "TTS Audio Suite"
- That's it! ComfyUI Manager automatically runs our install.py script, which handles:
  - ✅ Python 3.13 compatibility (MediaPipe → OpenSeeFace fallback)
  - ✅ Dependency conflicts (NumPy, librosa, etc.)
  - ✅ All bundled engines (ChatterBox, F5-TTS, Higgs Audio)
  - ✅ RVC voice conversion dependencies
  - ✅ Intelligent conflict resolution with --no-deps handling
Python 3.13 Support:
- All TTS engines (ChatterBox, F5-TTS, Higgs Audio): ✅ Working
- RVC voice conversion: ✅ Working
- OpenSeeFace mouth movement: ✅ Working (experimental)
- MediaPipe mouth movement: ❌ Incompatible (use OpenSeeFace)
Same intelligent installer, manual setup:
1. Clone the repository:
```
cd ComfyUI/custom_nodes
git clone https://github.com/diodiogod/TTS-Audio-Suite.git
cd TTS-Audio-Suite
```
2. Run the intelligent installer:
ComfyUI Portable:
```
# Windows:
..\..\..\python_embeded\python.exe install.py
# Linux/Mac:
../../../python_embeded/python.exe install.py
```
ComfyUI with venv/conda:
```
# First activate your ComfyUI environment, then:
python install.py
```
The installer automatically handles all dependency conflicts and Python version compatibility.
3. Download models (required):
   - Download from HuggingFace ChatterBox
   - Place them in `ComfyUI/models/TTS/chatterbox/` (recommended) or `ComfyUI/models/chatterbox/` (legacy)
4. Try a workflow:
   - Download: ChatterBox Integration Workflow
   - Drag it into ComfyUI and start generating!
5. Restart ComfyUI and look for the TTS Audio Suite nodes.
Python 3.13 users: Installation is fully supported! The system automatically uses OpenSeeFace for mouth movement analysis when MediaPipe is unavailable.
Need F5-TTS? Also download F5-TTS models to `ComfyUI/models/F5-TTS/` using the links in the detailed installation section below.
Detailed Installation Guide (click to expand if you're having dependency issues)
This section provides a detailed guide for installing TTS Audio Suite, covering different ComfyUI installation methods.
- ComfyUI installation (Portable, Direct with venv, or through Manager)
- Python 3.12 or higher
- PortAudio library (required for voice recording features):
  - Linux: `sudo apt-get install portaudio19-dev`
  - macOS: `brew install portaudio`
  - Windows: usually bundled with pip packages (no action needed)
For portable installations, follow these steps:
1. Clone the repository into the `ComfyUI/custom_nodes` folder:
```
cd ComfyUI/custom_nodes
git clone https://github.com/diodiogod/TTS-Audio-Suite.git
```
2. Navigate to the cloned directory:
```
cd TTS-Audio-Suite
```
3. Install the required dependencies. Important: use the `python.exe` executable located in your ComfyUI portable installation, with environment isolation flags:
```
../../../python_embeded/python.exe -m pip install -r requirements.txt --no-user
```
Why the `--no-user` flag?
- It prevents installing to your system Python's user directory, which can cause import conflicts
- It ensures packages install only to the portable environment for proper isolation
If you have a direct installation with a virtual environment (venv), follow these steps:
1. Clone the repository into the `ComfyUI/custom_nodes` folder:
```
cd ComfyUI/custom_nodes
git clone https://github.com/diodiogod/TTS-Audio-Suite.git
```
2. Activate your ComfyUI virtual environment. This is crucial to ensure dependencies are installed in the correct environment. The activation method varies by setup; a common example:
```
cd ComfyUI
. ./venv/bin/activate
```
or on Windows:
```
ComfyUI\venv\Scripts\activate
```
3. Navigate to the cloned directory:
```
cd custom_nodes/TTS-Audio-Suite
```
4. Install the required dependencies using pip:
```
pip install -r requirements.txt
```
1. Install the ComfyUI Manager if you haven't already.
2. Use the Manager to install the "TTS Audio Suite" node.
3. The Manager might handle dependencies automatically, but it's still recommended to verify the installation. Navigate to the node's directory:
```
cd ComfyUI/custom_nodes/TTS-Audio-Suite
```
4. Activate your ComfyUI virtual environment (see the instructions in "Direct Installation with venv").
5. If you encounter issues, manually install the dependencies:
```
pip install -r requirements.txt
```
A common problem is installing dependencies in the wrong Python environment. Always ensure you are installing dependencies within your ComfyUI's Python environment.
- Verify your Python environment: after activating your venv (or within your portable ComfyUI installation), check which Python executable is being used:
```
which python
```
  This should point to the Python executable within your ComfyUI installation (e.g., `ComfyUI/python_embeded/python.exe` or `ComfyUI/venv/bin/python`).
- If `s3tokenizer` fails to install: this dependency can be problematic. Try upgrading pip and setuptools:
```
python -m pip install --upgrade pip setuptools wheel
```
  Then try installing the requirements again.
- If you cloned the node manually (without the Manager): make sure you install the requirements.txt file.
To update the node to the latest version:
1. Navigate to the node's directory:
```
cd ComfyUI/custom_nodes/TTS-Audio-Suite
```
2. Pull the latest changes from the repository:
```
git pull
```
3. Reinstall the dependencies (in case they have been updated):
```
pip install -r requirements.txt
```
```
cd ComfyUI/custom_nodes
git clone https://github.com/diodiogod/TTS-Audio-Suite.git
```
Some dependencies, particularly `s3tokenizer`, can occasionally cause installation issues on certain Python setups (e.g., Python 3.10, sometimes used by tools like Stability Matrix).
To minimize potential problems, it's highly recommended to first ensure your core packaging tools are up to date in your ComfyUI virtual environment:
```
python -m pip install --upgrade pip setuptools wheel
```
After running the command above, install the node's specific requirements:
```
pip install -r requirements.txt
```
ChatterBox Voice now supports FFmpeg for high-quality audio stretching. While not required, it's recommended for the best audio quality:
Windows:
```
winget install FFmpeg
# or with Chocolatey
choco install ffmpeg
```
macOS:
```
brew install ffmpeg
```
Linux:
```
# Ubuntu/Debian
sudo apt-get install ffmpeg
# Fedora
sudo dnf install ffmpeg
```
If FFmpeg is not available, ChatterBox will automatically fall back to using the built-in phase vocoder method for audio stretching - your workflows will continue to work without interruption.
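The fallback pattern can be sketched as follows; `stretch_audio` is a hypothetical helper, and the suite's actual stretching code differs:

```python
# Sketch: prefer FFmpeg's atempo filter, fall back to a phase-vocoder stretch.
import shutil
import subprocess

def stretch_audio(in_path: str, out_path: str, rate: float) -> None:
    """Time-stretch audio by `rate` (atempo supports roughly 0.5-2.0)."""
    if shutil.which("ffmpeg"):
        subprocess.run(
            ["ffmpeg", "-y", "-i", in_path, "-filter:a", f"atempo={rate}", out_path],
            check=True,
        )
    else:
        # Built-in fallback, analogous to the phase vocoder mentioned above.
        import librosa
        import soundfile as sf
        y, sr = librosa.load(in_path, sr=None)
        sf.write(out_path, librosa.effects.time_stretch(y, rate=rate), sr)
```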
Download the ChatterboxTTS models and place them in the new organized structure:
```
ComfyUI/models/TTS/chatterbox/   ← Recommended (new structure)
```
Or use the legacy location (still supported):
```
ComfyUI/models/chatterbox/       ← Legacy (still works)
```
Required files:
- `conds.pt` (105 KB)
- `s3gen.pt` (~1 GB)
- `t3_cfg.pt` (~1 GB)
- `tokenizer.json` (25 KB)
- `ve.pt` (5.5 MB)
Download from: https://huggingface.co/ResembleAI/chatterbox/tree/main
NEW in v3.3.0: ChatterBox now supports multiple languages! Models will auto-download on first use, or you can manually install them for offline use.
For manual installation, create language-specific folders in the organized structure:
```
ComfyUI/models/TTS/chatterbox/   ← Recommended structure
├── English/                     # Optional - for explicit English organization
│   ├── conds.pt
│   ├── s3gen.pt
│   ├── t3_cfg.pt
│   ├── tokenizer.json
│   └── ve.pt
├── German/                      # German language models
│   ├── conds.safetensors
│   ├── s3gen.safetensors
│   ├── t3_cfg.safetensors
│   ├── tokenizer.json
│   └── ve.safetensors
└── Norwegian/                   # Norwegian language models
    ├── conds.safetensors
    ├── s3gen.safetensors
    ├── t3_cfg.safetensors
    ├── tokenizer.json
    └── ve.safetensors
```
Note: the legacy location `ComfyUI/models/chatterbox/` still works for backward compatibility.
Available ChatterBox Language Models:
| Language | HuggingFace Repository | Format | Auto-Download |
|---|---|---|---|
| English | ResembleAI/chatterbox | .pt | ✅ |
| German | stlohrey/chatterbox_de | .safetensors | ✅ |
| Norwegian | akhbar/chatterbox-tts-norwegian | .safetensors | ✅ |
Usage: Simply select your desired language from the dropdown in ChatterBox TTS or SRT nodes. First generation will auto-download the model (~1GB per language).
For F5-TTS voice synthesis capabilities, download F5-TTS models and place them in the organized structure:
```
ComfyUI/models/TTS/F5-TTS/   ← Recommended (new structure)
```
Or use the legacy location (still supported):
```
ComfyUI/models/F5-TTS/       ← Legacy (still works)
```
Available F5-TTS Models:
| Model | Language | Download | Size |
|---|---|---|---|
| F5TTS_Base | English | HuggingFace | ~1.2GB |
| F5TTS_v1_Base | English (v1) | HuggingFace | ~1.2GB |
| E2TTS_Base | English (E2-TTS) | HuggingFace | ~1.2GB |
| F5-DE | German | HuggingFace | ~1.2GB |
| F5-ES | Spanish | HuggingFace | ~1.2GB |
| F5-FR | French | HuggingFace | ~1.2GB |
| F5-JP | Japanese | HuggingFace | ~1.2GB |
| F5-Hindi-Small | Hindi | HuggingFace | ~632MB |
Vocoder (Optional but Recommended):
```
ComfyUI/models/TTS/F5-TTS/vocos/   ← Recommended
├── config.yaml
├── pytorch_model.bin
└── vocab.txt
```
Legacy location also supported: `ComfyUI/models/F5-TTS/vocos/`
Download from: Vocos Mel-24kHz
Complete Folder Structure (Recommended):
```
ComfyUI/models/TTS/F5-TTS/
├── F5TTS_Base/
│   ├── model_1200000.safetensors   ← Main model file
│   └── vocab.txt                   ← Vocabulary file
├── vocos/                          ← For offline vocoder
│   ├── config.yaml
│   └── pytorch_model.bin
└── F5TTS_v1_Base/
    ├── model_1250000.safetensors
    └── vocab.txt
```
Required Files for Each Model:
- `model_XXXXXX.safetensors` - the main model weights
- `vocab.txt` - the vocabulary/tokenizer file (download from the same HuggingFace repo)
Note: F5-TTS uses internal config files, no config.yaml needed. Vocos vocoder doesn't need vocab.txt.
Note: F5-TTS models and vocoder will auto-download from HuggingFace if not found locally. The first generation may take longer while downloading (~1.2GB per model).
For easy voice reference management, create a dedicated voices folder:
```
ComfyUI/models/voices/
├── character1.wav
├── character1.reference.txt   ← Contains: "Hello, I am character one speaking clearly."
├── character1.txt             ← Contains: "BBC Radio sample, licensed under CC3..."
├── narrator.wav
├── narrator.txt               ← Contains: "This is the narrator voice for storytelling."
├── my_voice.wav
└── my_voice.txt               ← Contains: "This is my personal voice sample."
```
Voice Reference Requirements:
- Audio files: WAV format, 5-30 seconds, clean speech, 24kHz recommended
- Text files: Exact transcription of what's spoken in the audio file
- Naming: `filename.wav` + `filename.reference.txt` (preferred) or `filename.txt` (fallback)
- Character Names: the character name equals the audio filename (without extension). Subfolders are supported for organization.
F5-TTS Inference Guidelines
To avoid possible inference failures, follow these F5-TTS optimization guidelines:
- Reference Audio Duration: Use reference audio shorter than 12s and leave proper silence space (e.g., 1s) at the end; otherwise there is a risk of truncating mid-word, leading to suboptimal generation.
- Letter Case Handling: Uppercase letters (best in a form like K.F.C.) are uttered letter by letter, while lowercase letters are used for common words.
- Pause Control: Add spaces (" ") or punctuation (e.g., "," ".") to explicitly introduce pauses.
- Punctuation Spacing: If an English punctuation mark ends a sentence, make sure there is a space " " after it; otherwise it is not treated as a sentence chunk.
- Number Processing: Preprocess numbers into Chinese characters if you want them read in Chinese; otherwise they will be read in English.
These guidelines help ensure optimal F5-TTS generation quality and prevent common audio artifacts.
For state-of-the-art voice cloning capabilities, Higgs Audio 2 models are automatically downloaded to the organized structure:
```
ComfyUI/models/TTS/HiggsAudio/   ← Recommended (new structure)
├── higgs-audio-v2-3B/           ← Main model directory
│   ├── generation/              ← Generation model files
│   │   ├── config.json
│   │   ├── model.safetensors.index.json
│   │   ├── model-00001-of-00003.safetensors (~3GB)
│   │   ├── model-00002-of-00003.safetensors (~3GB)
│   │   ├── model-00003-of-00003.safetensors (~3GB)
│   │   ├── generation_config.json
│   │   ├── tokenizer.json
│   │   ├── tokenizer_config.json
│   │   └── special_tokens_map.json
│   └── tokenizer/               ← Audio tokenizer files
│       ├── config.json
│       └── model.pth (~200MB)
└── voices/                      ← Voice reference files
    ├── character1.wav           ← 30+ second reference audio
    ├── character1.txt           ← Exact transcription
    ├── narrator.wav
    └── narrator.txt
```
Available Higgs Audio Models (Auto-Download):
| Model | Type | Source | Size | Auto-Download |
|---|---|---|---|---|
| higgs-audio-v2-3B | Voice Cloning | bosonai/higgs-audio-v2-generation-3B-base | ~9GB | ✅ |
| Audio Tokenizer | Tokenization | bosonai/higgs-audio-v2-tokenizer | ~200MB | ✅ |
Voice Reference Requirements:
- Audio files: WAV format, 30+ seconds, clean speech, single speaker
- Text files: Exact transcription of the reference audio
- Naming: `filename.wav` + `filename.txt` (transcription)
- Quality: Clear, noise-free audio for the best voice cloning results
How Higgs Audio Auto-Download Works:
- Select Model: Choose "higgs-audio-v2-3B" in Higgs Audio Engine node
- Auto-Download: Both generation model (~9GB) and tokenizer (~200MB) download automatically
- Voice References: Place reference audio and transcriptions in voices/ folder
- Local Cache: Once downloaded, models are used from local cache for fast loading
Manual Installation (Optional):
To pre-download models for offline use:
```
# Download generation model files to:
# ComfyUI/models/TTS/HiggsAudio/higgs-audio-v2-3B/generation/
# Download tokenizer files to:
# ComfyUI/models/TTS/HiggsAudio/higgs-audio-v2-3B/tokenizer/
```
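For scripted pre-downloading, a sketch using `huggingface_hub` with the repo IDs from the table above; the suite's own downloader may arrange files differently:

```python
# Sketch: pre-download Higgs Audio 2 models for offline use.
from huggingface_hub import snapshot_download

base = "ComfyUI/models/TTS/HiggsAudio/higgs-audio-v2-3B"

snapshot_download(
    repo_id="bosonai/higgs-audio-v2-generation-3B-base",
    local_dir=f"{base}/generation",   # ~9GB
)
snapshot_download(
    repo_id="bosonai/higgs-audio-v2-tokenizer",
    local_dir=f"{base}/tokenizer",    # ~200MB
)
```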
Usage: Simply use the Higgs Audio 2 Engine node → select the model → all required files download automatically!
For Real-time Voice Conversion capabilities, RVC models are automatically downloaded to the organized structure:
```
ComfyUI/models/TTS/RVC/                ← Recommended (new structure)
├── Claire.pth                         ← Character voice models
├── Sayano.pth
├── Mae_v2.pth
├── Fuji.pth
├── Monika.pth
├── content-vec-best.safetensors       ← Base models (auto-download)
├── rmvpe.pt
├── hubert/                            ← HuBERT models (auto-organized)
│   ├── hubert-base-rvc.safetensors
│   ├── hubert-soft-japanese.safetensors
│   └── hubert-soft-korean.safetensors
└── .index/                            ← Index files for better similarity
    ├── added_IVF1063_Flat_nprobe_1_Sayano_v2.index
    ├── added_IVF985_Flat_nprobe_1_Fuji_v2.index
    ├── Monika_v2_40k.index
    └── Sayano_v2_40k.index
```
Note: the legacy location `ComfyUI/models/RVC/` still works for backward compatibility.
Available RVC Character Models (Auto-Download):
| Model | Type | Source | Auto-Download |
|---|---|---|---|
| Claire.pth | Character | SayanoAI RVC-Studio | ✅ |
| Sayano.pth | Character | SayanoAI RVC-Studio | ✅ |
| Mae_v2.pth | Character | SayanoAI RVC-Studio | ✅ |
| Fuji.pth | Character | SayanoAI RVC-Studio | ✅ |
| Monika.pth | Character | SayanoAI RVC-Studio | ✅ |
Required Base Models (Auto-Download):
| Model | Purpose | Source | Size |
|---|---|---|---|
| content-vec-best.safetensors | Voice features | lengyue233/content-vec-best | ~300MB |
| rmvpe.pt | Pitch extraction | lj1995/VoiceConversionWebUI | ~55MB |
How RVC Auto-Download Works:
- Select Character Model: Choose from the available models in the Load RVC Character Model node
- Auto-Download: Models download automatically when first selected (with auto_download=True)
- Base Models: Required base models download automatically when RVC engine first runs
- Index Files: Optional FAISS index files download for improved voice similarity
- Local Cache: Once downloaded, models are used from local cache for fast loading
UVR Models for Vocal Separation (Auto-Download):
Additional models for the Noise or Vocal Removal node download to `ComfyUI/models/TTS/UVR/` (recommended) or `ComfyUI/models/UVR/` (legacy) as needed.
Usage: Simply use the Load RVC Character Model node → select a character → connect it to the Voice Changer node. All required models download automatically!
Long text support with smart processing:
- Character-based limits (100-1000 chars per chunk)
- Sentence boundary preservation - won't cut mid-sentence
- Multiple combination methods:
  - `auto` - smart selection based on text length
  - `concatenate` - simple joining
  - `silence_padding` - add configurable silence between chunks
  - `crossfade` - smooth audio blending
- Comma-based splitting for very long sentences
- Backward compatible - works with existing workflows
Chunking Controls (all optional):
- `enable_chunking` - enable/disable smart chunking (default: True)
- `max_chars_per_chunk` - chunk size limit (default: 400)
- `chunk_combination_method` - how to join audio (default: auto)
- `silence_between_chunks_ms` - silence duration (default: 100ms)
Auto-selection logic:
- Text > 1000 chars → silence_padding (natural pauses)
- Text > 500 chars → crossfade (smooth blending)
- Text < 500 chars → concatenate (simple joining)
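Expressed as a sketch, with thresholds taken from the list above (first matching rule wins):

```python
# Sketch of the auto-selection rules listed above.
def pick_combination_method(text: str) -> str:
    if len(text) > 1000:
        return "silence_padding"  # natural pauses for very long text
    if len(text) > 500:
        return "crossfade"        # smooth blending for medium text
    return "concatenate"          # simple joining for short text
```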
Priority-based model detection:
- Bundled models in node folder (self-contained)
- ComfyUI models in standard location
- HuggingFace download with authentication
Console output shows the source:
```
Using BUNDLED ChatterBox (self-contained)
Loading from bundled models: ./models/chatterbox
ChatterboxTTS model loaded from bundled!
```
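A sketch of that priority order; the paths are illustrative, and the actual loader knows engine-specific layouts:

```python
# Sketch: priority-based model source resolution as described above.
from pathlib import Path

def resolve_chatterbox_source(node_dir: Path, comfy_models: Path):
    bundled = node_dir / "models" / "chatterbox"
    if bundled.is_dir():
        return "bundled", bundled            # 1. bundled with the node
    standard = comfy_models / "TTS" / "chatterbox"
    if standard.is_dir():
        return "comfyui", standard           # 2. ComfyUI models folder
    return "huggingface", None               # 3. download on demand
```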
- Add "π€ ChatterBox Voice Capture" node
- Select your microphone from the dropdown
- Adjust recording settings:
- Silence Threshold: How quiet to consider "silence" (0.001-0.1)
- Silence Duration: How long to wait before stopping (0.5-5.0 seconds)
- Sample Rate: Audio quality (8000-96000 Hz, default 44100)
- Change the Trigger value to start a new recording
- Connect output to TTS (for voice cloning) or VC nodes
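As an illustration of how these settings interact, a record-until-silence loop with the sounddevice library might look like this. It mirrors the node's parameter names but is not its actual implementation:

```python
# Sketch: record until sustained silence, using the node's parameter names.
import numpy as np
import sounddevice as sd

def record_until_silence(silence_threshold=0.01, silence_duration=2.0,
                         sample_rate=44100, block_sec=0.1):
    chunks, quiet = [], 0.0
    with sd.InputStream(samplerate=sample_rate, channels=1) as stream:
        while True:
            block, _ = stream.read(int(sample_rate * block_sec))
            chunks.append(block)
            rms = float(np.sqrt(np.mean(block ** 2)))
            quiet = quiet + block_sec if rms < silence_threshold else 0.0
            if quiet >= silence_duration:   # stop after sustained silence
                break
    return np.concatenate(chunks), sample_rate
```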
- Add "π€ ChatterBox Voice TTS" node
- Enter your text (any length - automatic chunking)
- Optionally connect reference audio for voice cloning
- Adjust TTS settings:
- Exaggeration: Emotion intensity (0.25-2.0)
- Temperature: Randomness (0.05-5.0)
- CFG Weight: Guidance strength (0.0-1.0)
- Add "π€ F5-TTS Voice Generation" node
- Enter your target text (any length - automatic chunking)
- Required: Connect reference audio for voice cloning
- Required: Enter reference text that matches the reference audio exactly
Voice Reference Setup Options
Two ways to provide voice references:
- Easy Method: Select a voice from the `reference_audio_file` dropdown → the text is auto-detected from the companion `.txt` file
- Manual Method: Set `reference_audio_file` to "none" → connect the `opt_reference_audio` + `opt_reference_text` inputs
- Select F5-TTS model:
- F5TTS_Base: English base model (recommended)
- F5TTS_v1_Base: English v1 model
- E2TTS_Base: E2-TTS model
- F5-DE: German model
- F5-ES: Spanish model
- F5-FR: French model
- F5-JP: Japanese model
- Adjust F5-TTS settings:
- Temperature: Voice variation (0.1-2.0, default: 0.8)
- Speed: Speech speed (0.5-2.0, default: 1.0)
- CFG Strength: Guidance strength (0.0-10.0, default: 2.0)
- NFE Step: Quality vs speed (1-100, default: 32)
- Add "π ChatterBox Voice Conversion" node
- Connect source audio (voice to convert)
- Connect target audio (voice style to copy)
- Configure refinement settings:
- Refinement Passes: Number of conversion iterations (1-30, recommended 1-5)
- Each pass refines the output to sound more like the target
- Smart Caching: Results cached up to 5 iterations for instant experimentation
Intelligent Caching Examples:
- Run 3 passes → caches iterations 1, 2, 3
- Change to 5 passes → resumes from cached 3, runs 4, 5
- Change to 2 passes → returns cached iteration 2 instantly
- Change to 4 passes → resumes from cached 3, runs 4
Pro Tip: Start with 1 pass, then experiment with 2-5 passes to find the sweet spot for your audio. Each iteration can improve voice similarity!
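The resume behavior can be pictured with a toy cache keyed by pass number; `convert_once` stands in for a single conversion pass, and this is illustrative only:

```python
# Toy sketch of iteration caching: reuse the deepest cached pass and
# only compute the missing ones.
cache = {}  # pass number -> converted audio

def convert_with_cache(source_audio, target_audio, passes, convert_once):
    start = max((p for p in cache if p <= passes), default=0)
    audio = cache.get(start, source_audio)
    for p in range(start + 1, passes + 1):
        audio = convert_once(audio, target_audio)  # one refinement pass
        cache[p] = audio
    return audio
```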
Ready-to-use ComfyUI workflows - Download and drag into ComfyUI:
| Workflow | Description | Status | Files |
|---|---|---|---|
| Unified Voice Changer - RVC X ChatterBox | Modern unified voice conversion with RVC and ChatterBox engines | ✅ Updated for v4.3 | JSON |

| Workflow | Description | Status | Files |
|---|---|---|---|
| ChatterBox SRT | SRT subtitle timing and TTS generation | | JSON |
| ChatterBox Integration | General ChatterBox TTS and Voice Conversion | ✅ Updated for v4 | JSON |

| Workflow | Description | Status | Files |
|---|---|---|---|
| Audio Wave Analyzer + F5 Speech Edit | Interactive waveform analysis for F5-TTS speech editing | | JSON |
| F5-TTS SRT and Normal Generation | F5-TTS integration with SRT subtitle processing | | JSON |
Note: To use workflows, download the `.json` files and drag them directly into your ComfyUI interface. The workflows will automatically load with the proper node connections.
⚠️ Workflow Status: After the v4 architecture changes, most workflows need updates, except ChatterBox Integration, which has been verified and updated. The other workflows are available but may need node reconnections.
For Long Articles/Books: `max_chars_per_chunk=600`, `combination_method=silence_padding`, `silence_between_chunks_ms=200`
For Natural Speech: `max_chars_per_chunk=400`, `combination_method=auto` (default - works well)
For Fast Processing: `max_chars_per_chunk=800`, `combination_method=concatenate`
For Smooth Audio: `max_chars_per_chunk=300`, `combination_method=crossfade`
General Recording: `silence_threshold=0.01`, `silence_duration=2.0` (default settings)
Noisy Environment:
- Higher `silence_threshold` (~0.05) to ignore background noise
- Longer `silence_duration` (~3.0) to avoid cutting off speech
Quiet Environment:
- Lower `silence_threshold` (~0.005) for sensitive detection
- Shorter `silence_duration` (~1.0) for quick stopping
General Use: `exaggeration=0.5`, `cfg_weight=0.5` (default settings work well)
Expressive Speech:
- Lower `cfg_weight` (~0.3) + higher `exaggeration` (~0.7)
- Higher exaggeration speeds up speech; lower CFG slows it down
Unlike many TTS systems:
- OpenAI TTS: 4096 character limit
- ElevenLabs: 2500 character limit
- ChatterBox: No documented limits + intelligent chunking
Sentence Boundary Detection:
- Splits on `.!?` with proper spacing
- Preserves sentence integrity
- Handles abbreviations and edge cases
Long Sentence Handling:
- Splits on commas when sentences are too long
- Maintains natural speech patterns
- Falls back to character limits only when necessary
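A simplified sketch of this chunking strategy; the real splitter handles abbreviations and comma fallbacks more thoroughly:

```python
# Sketch: sentence-aware chunking that never cuts mid-sentence.
import re

def chunk_text(text: str, max_chars: int = 400):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```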
MIT License - Same as ChatterboxTTS
- ResembleAI for ChatterboxTTS
- ComfyUI team for the amazing framework
- sounddevice library for audio recording functionality
- ShmuelRonen for the original ChatterBox Voice TTS node
- Diogod for the TTS Audio Suite universal multi-engine implementation
- Resemble AI ChatterBox
- Model Downloads (Hugging Face) - download models here
- ChatterBox Demo
- ComfyUI
- Resemble AI Official Site
Note: The original ChatterBox model includes Resemble AI's Perth watermarking system for responsible AI usage. This ComfyUI integration includes the Perth dependency but has watermarking disabled by default to ensure maximum compatibility. Users can re-enable watermarking by modifying the code if needed, while maintaining the full quality and capabilities of the underlying TTS model.