
ai-interpretability

Here are 6 public repositories matching this topic...

WFGY 2.0 — Semantic Reasoning Engine for LLMs (MIT). Fixes RAG/OCR drift, collapse, and "ghost matches" via symbolic overlays and logic patches. Autoboot; OneLine & Flagship. ⭐ Star if you explore semantic RAG or hallucination mitigation.

  • Updated Aug 17, 2025
  • Python

Recursive Critique for AI-Generated Imagery — a framework for applying structural pressure, interpretability reasoning, and constraint-based analysis to expose image collapse and machine-vision failure modes, and for rebuilding images outside of engine defaults. No AI training, dataset scraping, or derivative generation permitted.

  • Updated Aug 12, 2025
