# Home
Welcome to the AI-Thinking-Module project wiki. This system continuously collects, stores, and analyzes artificial intelligence news using a hybrid pipeline of RSS scraping, NewsAPI aggregation, and LLM-powered narrative interpretation via GROQ.
- Overview
- Getting Started
- System Architecture
- Core Concepts
- API Integration
- Usage Example
- Roadmap
- License
## Overview
The AI-Thinking-Module simulates the flow of consciousness in an artificial mind by interpreting real-world news about AI. It:
- Aggregates news via RSS, HTML scraping, and NewsAPI.
- Stores data in structured JSON snapshots.
- Sends collected articles to the GROQ API (LLaMA 3) for interpretive, introspective analysis.
- Outputs a creative "thought stream" (not summaries) mimicking human reflection.
## Getting Started
- Python 3.8+
- `pip` (Python package manager)
- API keys for NewsAPI and GROQ
```bash
git clone https://github.com/your-username/AI-Thinking-Module.git
cd AI-Thinking-Module
pip install -r requirements.txt
```
Edit your API keys in the script:

```python
NEWS_API_KEY = "INSERT_NEWS_API_KEY_HERE"
GROQ_API_KEY = "INSERT_GROQ_API_KEY_HERE"
```
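Hard-coding keys risks committing them to version control. A safer pattern (not part of the wiki's script, just a sketch) reads them from environment variables instead:

```python
import os

def load_key(name: str) -> str:
    """Read an API key from the environment, failing loudly if unset.

    This is an illustrative helper, not part of the project's code.
    """
    value = os.environ.get(name)
    if not value:
        raise KeyError(f"Set the {name} environment variable before running the pipeline")
    return value

# Example (assumes the variables are exported in your shell):
# NEWS_API_KEY = load_key("NEWS_API_KEY")
# GROQ_API_KEY = load_key("GROQ_API_KEY")
```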
## System Architecture

```
┌─────────────┐     ┌────────────────┐     ┌────────────────────┐     ┌────────────────┐
│  RSS/HTML   │────▶│ News Collector │────▶│ JSON Article Dump  │────▶│  GROQ LLM API  │
│  + NewsAPI  │     └────────────────┘     └────────────────────┘     └───────┬────────┘
└─────────────┘                                                               ▼
                                                                      ┌────────────────┐
                                                                      │  Thought Log   │
                                                                      └────────────────┘
```
- Scraper Layer: Gathers data from 5+ sources (TechCrunch, Wired, etc.)
- Storage Layer: Saves articles in timestamped JSON files
- Reasoning Layer: GROQ LLM processes articles as introspective "thoughts"
- RSS via `feedparser`
- HTML via `BeautifulSoup`
- NewsAPI JSON via REST
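The project uses `feedparser` for RSS. As a self-contained illustration of what that step extracts (a title and link per entry), here is a dependency-free sketch using only the standard library:

```python
import xml.etree.ElementTree as ET

def parse_rss(xml_text: str) -> list:
    """Extract (title, link) pairs from an RSS 2.0 feed string.

    The real pipeline relies on feedparser, which also handles Atom,
    malformed feeds, and dates; this stdlib sketch shows only the
    core extraction.
    """
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        items.append((title, link))
    return items

# Minimal sample feed for demonstration:
sample = """<rss version="2.0"><channel>
  <title>AI News</title>
  <item><title>New model released</title><link>https://example.com/a</link></item>
</channel></rss>"""
```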
## Core Concepts
Instead of summarizing, the GROQ model is prompted to:
- Contemplate connections between ideas
- Form introspective or speculative thoughts
- Drift like human cognition, avoiding hard conclusions
- Retry logic with exponential backoff for GROQ API
- SSL warnings suppressed for robust scraping
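The retry behavior described above can be sketched as follows. The delay values and the retried exception types are assumptions, not the project's actual settings:

```python
import time

def with_backoff(fn, retries=3, base_delay=1.0, exceptions=(Exception,)):
    """Call fn(), retrying on failure with exponentially growing delays.

    Delays are base_delay * 2**attempt (1s, 2s, 4s by default); these
    specific values are illustrative. The final failure is re-raised.
    """
    for attempt in range(retries):
        try:
            return fn()
        except exceptions:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

In the pipeline, `fn` would wrap the GROQ API call so transient HTTP errors trigger a retry instead of aborting the run.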
## API Integration
### GROQ
- Endpoint: `https://api.groq.com/openai/v1/chat/completions`
- Model: `llama3-8b-8192`
- Max Tokens: 2048
- Temperature: 0.85
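A request to this endpoint follows the OpenAI-compatible chat-completions format. The sketch below only assembles the headers and JSON body; the system prompt wording is a hypothetical stand-in, not the project's actual prompt:

```python
import json

GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_groq_request(articles_text: str, api_key: str):
    """Assemble headers and JSON body for a GROQ chat-completions call.

    model, max_tokens, and temperature mirror the values documented
    above; the system prompt is an illustrative assumption.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": "llama3-8b-8192",
        "max_tokens": 2048,
        "temperature": 0.85,
        "messages": [
            {"role": "system",
             "content": "You are a reflective mind. Drift through these "
                        "AI news items as introspective thoughts, not summaries."},
            {"role": "user", "content": articles_text},
        ],
    }
    return headers, json.dumps(body)
```

The actual call would POST this body to `GROQ_URL` (e.g., with `requests.post`), wrapped in the retry logic described under Core Concepts.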
### NewsAPI
- Endpoint: `https://newsapi.org/v2/everything`
- Filters: artificial intelligence keywords
- Max articles: 5 per run
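Such a query can be assembled with standard URL encoding. `q`, `pageSize`, and `apiKey` are real NewsAPI parameters; the default query string below simply echoes the filter noted above:

```python
from urllib.parse import urlencode

NEWSAPI_ENDPOINT = "https://newsapi.org/v2/everything"

def build_newsapi_url(api_key: str, query: str = "artificial intelligence",
                      page_size: int = 5) -> str:
    """Build the GET URL for NewsAPI's /v2/everything endpoint.

    page_size=5 matches the per-run limit documented above; how the
    project actually composes its request is an assumption here.
    """
    params = {"q": query, "pageSize": page_size, "apiKey": api_key}
    return f"{NEWSAPI_ENDPOINT}?{urlencode(params)}"
```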
## Usage Example

```bash
python ai_thinking_pipeline.py
```
## Roadmap
- Add database or vector store backend (e.g., SQLite, Pinecone)
- Integrate open-source LLMs as fallback (e.g., Ollama, LM Studio)
- Stream to a web dashboard (FastAPI + frontend)
- Weekly digest mode (email output)
## License
MIT License. See LICENSE for details.