This n8n workflow simulates advanced chain-of-thought reasoning using multiple prompting techniques and chained API calls. It leverages lightweight non-thinking LLMs like Google Gemini 2.0 Flash to mimic the behavior of more advanced reasoning LLMs such as DeepSeek R1 or OpenAI o3-mini-high.
The workflow constructs multi-step reasoning chains, progressively refining responses through structured processing, auto-parsing, and iterative feedback loops.
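The same pattern can be sketched outside n8n. The snippet below is a minimal Python illustration, not code from the workflow itself: a cheap model is prompted in stages and its own draft is fed back for refinement. The `call_llm` helper is a hypothetical placeholder you would wire to your Gemini or Groq credentials.

```python
# Minimal sketch of the simulated chain-of-thought loop (illustrative only,
# not part of CoT_n8n.json). `call_llm` is a hypothetical placeholder for an
# HTTP request to Gemini or Groq using your own credentials.

def call_llm(prompt: str) -> str:
    # Placeholder: replace with a real API call. Here we just echo the prompt
    # head so the sketch runs end to end without network access.
    return f"[model output for: {prompt[:60]}...]"

def simulate_chain_of_thought(question: str, refinement_steps: int = 3) -> str:
    # Step 1: decompose the question into sub-tasks and draft an answer.
    draft = call_llm(f"Break this question into sub-tasks and answer each:\n{question}")
    # Steps 2..n: feed the draft back in and ask the model to critique and refine it.
    for _ in range(refinement_steps):
        draft = call_llm(
            "Review the reasoning below, correct any mistakes, and refine the answer.\n"
            f"Question: {question}\nCurrent reasoning:\n{draft}"
        )
    # Final step: collapse the reasoning chain into a clean conclusion.
    return call_llm(f"State only the final answer based on this reasoning:\n{draft}")

if __name__ == "__main__":
    print(simulate_chain_of_thought("How many weekdays are there in March 2025?"))
```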
- Simulated Chain of Thought (CoT): Uses step-by-step logic refinement to mimic reasoning LLMs.
- Multi-Prompt Processing: Breaks down complex questions into sub-tasks and reassembles them into final conclusions.
- Automated API Calls: Interfaces with the Google Gemini and Groq APIs to improve processing speed and accuracy.
- Error Handling & Refinement: Uses auto-fixing nodes to correct inconsistencies and refine responses.
- Structured Outputs: Utilizes n8n output parsers to ensure clean and interpretable results (a parse-and-retry sketch follows this list).
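To make the last two points concrete, here is a hedged Python sketch of a parse-then-auto-fix loop in the spirit of the workflow's output parser and auto-fixing nodes. It is illustrative only; `fix_fn` is a hypothetical callable standing in for a follow-up LLM call that receives the broken output plus the parser error and returns a repaired attempt.

```python
import json

def parse_with_autofix(raw: str, fix_fn, max_retries: int = 2) -> dict:
    """Try to parse an LLM response as JSON; on failure, ask the model to repair it.

    `fix_fn(text, error)` is a hypothetical callable that sends the broken text
    and the parser error back to the LLM and returns a corrected attempt,
    mirroring what an auto-fixing output parser does in the workflow.
    """
    attempt = raw
    for _ in range(max_retries + 1):
        try:
            return json.loads(attempt)           # structured output achieved
        except json.JSONDecodeError as err:
            attempt = fix_fn(attempt, str(err))  # feed the error back for repair
    raise ValueError("Response could not be coerced into valid JSON.")

if __name__ == "__main__":
    # Toy fixer that strips a trailing comma; a real fixer would re-prompt the model.
    demo_fixer = lambda text, err: text.replace(",}", "}")
    print(parse_with_autofix('{"answer": 42,}', demo_fixer))
```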
- n8n installed (via Docker, npm, or a cloud instance).
- API credentials for Groq and Google Gemini (PaLM).
- Clone this repository: `git clone https://github.com/genius-harry/n8n-workflow.git`
- Navigate to the project directory: `cd n8n-workflow`
- Import the workflow:
  - In n8n, navigate to "Workflows" > "Import from File".
  - Select `CoT_n8n.json` and upload it.
- Set Up API Credentials:
  - In n8n, go to "Credentials" and add your API keys for Groq and Google Gemini.
  - Ensure that the credential names match those referenced in the workflow.
- Activate the Workflow:
  - Enable the workflow and start testing!
    📁 n8n-workflow
    ├── README.md      # Documentation
    ├── LICENSE        # MIT License
    ├── .gitignore     # Ignore unnecessary files
    └── CoT_n8n.json   # The exported n8n workflow (sanitized)
This project is licensed under the MIT License. See LICENSE for details.
Contributions are welcome! Feel free to submit a pull request or open an issue for suggestions.
This workflow does not include API keys. Ensure you add your own credentials securely in n8n.