This repository contains a free n8n workflow template designed to automate the process of converting long-form YouTube videos into multiple YouTube Shorts. It integrates with Swiftia.io for video analysis/rendering and uses a Large Language Model (LLM) for metadata generation, finally uploading and scheduling the shorts to your YouTube channel.
Save time and streamline your content repurposing!
- Form Trigger: Easily start the process by providing a YouTube Video ID, scheduling parameters, and optional styling info via an n8n form.
- External API Integration: Uses HTTP Request nodes to interact with Swiftia.io for:
- Analyzing long videos to identify potential short clips.
- Rendering individual short clips, optionally applying custom caption styling/branding.
- Flexible LLM Integration: Leverages n8n's LangChain nodes to generate optimized titles, descriptions, tags, and YouTube category IDs using your preferred LLM provider (OpenAI, Google Gemini, Groq, etc.).
- Automatic Scheduling: Calculates and sets publication dates for each Short based on your specified start time and interval.
- Direct YouTube Upload: Downloads rendered shorts and uploads them directly to your YouTube channel using the official YouTube Data API v3 (resumable uploads).
- Adaptable Template: Built as a foundation – adapt the API calls to your specific video service.
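The scheduling logic above can be sketched as a small snippet suitable for an n8n Code node. The function name and parameters here are illustrative, not the exact fields used in the workflow:

```javascript
// Sketch: compute a publishAt timestamp for each short, spaced by a fixed
// interval from a starting date. Assumes an ISO 8601 start time and an
// interval in hours, matching the form inputs described below.
function schedulePublishDates(startIso, intervalHours, count) {
  const start = new Date(startIso);
  const dates = [];
  for (let i = 0; i < count; i++) {
    // Offset each short by i * interval (hours -> milliseconds).
    const d = new Date(start.getTime() + i * intervalHours * 3600 * 1000);
    dates.push(d.toISOString());
  }
  return dates;
}
```

Each resulting timestamp can then be attached to the corresponding short's upload metadata.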
- This is a template workflow that works with Swiftia's API. If you prefer another platform (such as Opus Clip, Klap, GetMunch, or Spikes Studio), you will need to adapt the HTTP Request nodes (`generateShorts`, `get_shorts`, `renderShort`, `getRender`) to match that service's specific API endpoints, request formats, and authentication methods.
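Whatever service you use, the status-check nodes (`get_shorts`, `getRender`) follow the same poll-until-done pattern. A minimal sketch, assuming a hypothetical status endpoint, bearer-token auth, and a `status` field of `"done"`/`"failed"` (check your service's API docs for the real endpoints and response shapes):

```javascript
// Sketch: generic poll-until-done loop for an analysis/render job.
// URL, auth header, and status values are placeholders, not Swiftia's actual API.
async function pollJob(statusUrl, apiKey, { intervalMs = 5000, maxTries = 60 } = {}) {
  for (let i = 0; i < maxTries; i++) {
    const res = await fetch(statusUrl, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const job = await res.json();
    if (job.status === "done") return job;   // hypothetical terminal state
    if (job.status === "failed") throw new Error("Job failed");
    await new Promise((r) => setTimeout(r, intervalMs)); // wait before retrying
  }
  throw new Error("Timed out waiting for job");
}
```

In n8n this is typically expressed with an HTTP Request node in a loop with a Wait node, but the logic being replicated is the same.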
Before you start, ensure you have the following:
- n8n Instance: A running instance (self-hosted or Cloud).
- [Self-Hosted Users] Video processing can be memory-intensive. Consider increasing allocated RAM or setting the environment variable `N8N_DEFAULT_BINARY_DATA_MODE=filesystem` (ensure sufficient disk space).
- Video Analysis/Rendering Service: An account and API key/credentials for a service capable of identifying clips in long videos and rendering them via API (this template uses Swiftia.io).
- Google Account & YouTube Channel: The target channel for uploads.
- Google Cloud Platform (GCP) Project:
- YouTube Data API v3 Enabled.
- OAuth 2.0 Credentials (Client ID & Secret).
- LLM Provider Account & API Key: An API key for your chosen provider (e.g., OpenAI, Google AI/Gemini, Groq, Anthropic).
- n8n Credentials: Ready to configure credentials within n8n for the services above.
- n8n LangChain Nodes: (If required by your LLM provider) Ensure the `@n8n/n8n-nodes-langchain` package (or similar) is available in your n8n instance.
- (Optional) Caption Styling Info: Knowledge of the format (e.g., JSON) required by your chosen video service for caption styling.
- Download/Clone: Get the `video_to_shorts_Automation.json` file from this repository.
- Import: Import the workflow file into your n8n instance.
- Create n8n Credentials:
- Video Service Auth: Configure the necessary authentication credential in n8n for your chosen video service (e.g., Header Auth, API Key, OAuth2).
- YouTube: Create a "YouTube OAuth2 API" credential using your GCP OAuth details and authenticate it.
- LLM Provider: Create the appropriate n8n credential for your chosen LLM provider (e.g., "OpenAI API", "Google Gemini API").
- Configure Workflow Nodes:
- IMPORTANT - Adapt HTTP Requests: If you are using a service other than Swiftia, modify the following HTTP Request nodes to match your chosen video service's API documentation:
  - `generateShorts` (initiate analysis)
  - `get_shorts` (check analysis status)
  - `renderShort` (initiate rendering)
  - `getRender` (check rendering status)
- Select Credentials: In the YouTube nodes (`setupMetaData`, `Sendshorttoyoutube`) and the LLM Chat Model node, select the corresponding credentials you created in n8n.
- LLM Node: The template uses "Google Gemini Chat Model". If you use a different provider, delete this node, add the appropriate one (e.g., "OpenAI Chat Model"), connect it correctly within the LangChain steps, and select your LLM credential.
- Review ALL Nodes: Double-check all nodes for any remaining placeholder values (like URLs, keys in headers if not using credentials properly) and replace them.
- Activate: Save and activate the workflow.
- Once active, locate the "Webhook URL" provided by the "n8n Form Trigger" node in the workflow editor.
- Open this URL in your browser.
- Fill out the form:
  - YouTube Video ID (e.g., `dQw4w9WgXcQ`)
  - First publication date/time (ISO 8601 format, e.g., `2025-05-10T08:00:00Z`)
  - Interval between shorts (in hours)
  - (Optional) Caption styling information (as required by your video service)
- Submit the form. n8n will start the process.
- Costs: Be mindful of potential costs associated with your chosen video processing service, the YouTube Data API (beyond free quotas), and your LLM provider.
- Testing: Strongly recommended: initially set the `privacyStatus` in the `setupMetaData` node to `private` for testing before relying on `publishAt` to schedule public/unlisted shorts.
- Error Handling: This template has basic checks but can be enhanced with more robust error handling using n8n's built-in features.
Feel free to modify and enhance this workflow:
- Adjust the prompt in the `generatingMetaData` node for different LLM outputs.
- Change the maximum number of shorts processed in the `maxShortsnumber` node.
- Add notification steps (e.g., Slack, Discord) upon completion or failure.
- Improve error handling logic.
This project is licensed under the MIT License - see the LICENSE file for details.
Contributions, issues, and feature requests are welcome! Feel free to check the issues page.
If you have questions, please open an issue on the GitHub repository issues page.