Don't run it on LongFast. This is a public channel, and it's common courtesy not to shout over everyone in public places, let alone have an AI do it. Make the bot Direct Message access only. I would have thought running AI bots on LongFast would be frowned upon in most areas.
As the adoption of MESH-AI continues to grow, a critical scalability issue has emerged when multiple nodes are running MESH-AI within the same mesh or MQTT network.
🔁 The Problem: Multi-Node Response Chaos
Right now, any node running MESH-AI that hears an /ai command will respond, whether the command arrives locally, over the long-range LongFast channel, or via MQTT. This creates a major problem in shared or large-scale environments:
Multiple bots reply at once.
Bandwidth is wasted.
Mesh traffic floods.
Users receive overlapping or conflicting responses.
This behavior is fine in isolated test setups, but becomes unmanageable in:
Urban or public meshes.
Emergency deployments.
Global bridges via MQTT.
🤖 AI-to-AI Messaging Loop Warning
A more dangerous issue occurs when a message generated by one AI-enabled node is received by another AI-enabled node.
If the message itself contains something the second node interprets as an /ai command or input, it responds. If that response reaches the original node, the cycle continues, creating an infinite AI loop.
This can result in:
Continuous, self-sustaining chatter between two nodes.
Rapid battery drain or device lockups.
Channel flooding that renders the mesh unusable.
A feedback loop that can persist even after rebooting the devices.
Currently, the only way to stop this behavior is to physically disconnect at least one node.
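One way to break this loop at the software level is to tag every AI-generated reply so that other AI-enabled nodes can recognize bot output and refuse to answer it. The sketch below is illustrative only: the marker string and function names are assumptions, not part of the current MESH-AI codebase.

```python
# Loop-guard sketch: prefix outgoing AI replies with a marker so that
# other MESH-AI nodes never treat bot output as a fresh /ai command.
# AI_MARKER and these function names are hypothetical, not the real API.

AI_MARKER = "\u200b[ai]"  # zero-width space + tag; assumed format


def tag_reply(text: str) -> str:
    """Prefix an outgoing AI reply with the loop-guard marker."""
    return AI_MARKER + text


def is_ai_generated(text: str) -> bool:
    """True if a received message was produced by another MESH-AI node."""
    return text.startswith(AI_MARKER)


def should_respond(text: str) -> bool:
    """Respond only to human-authored /ai commands, never to bot output."""
    return text.lstrip().startswith("/ai") and not is_ai_generated(text)
```

Because the marker travels with the message, the guard works across MQTT bridges too, as long as every AI-enabled node applies it; a node running an older build would still loop.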
🧪 Current Mitigations Being Explored
To help reduce accidental cross-talk and chaos, I am experimenting with:
✅ A config toggle to disable LongFast responses for AI commands.
🛑 Optionally disabling LongFast entirely for MESH-AI to preserve bandwidth and prevent command duplication.
🔧 Allowing users to customize the /ai command string per node.
🎲 Auto-generating unique suffixes or identifiers for the /ai command on first setup (e.g., /ai-TB42).
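The last idea above, auto-generating a unique suffix on first setup, could be as simple as deriving a short stable tag from the node's ID, so the same node always ends up with the same command. A minimal sketch, assuming a string node ID (the function name and suffix length are illustrative):

```python
import hashlib


def make_ai_command(node_id: str, base: str = "/ai") -> str:
    """Derive a stable per-node command like /ai-3f9c from the node ID.

    Hashing the ID (rather than picking randomly) means the suffix
    survives reinstalls without any stored state.
    """
    suffix = hashlib.sha256(node_id.encode()).hexdigest()[:4]
    return f"{base}-{suffix}"
```

With this, two MESH-AI nodes on the same channel listen on different command strings by default, so a single /ai-XXXX request wakes exactly one bot.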
💡 Additional Fixes Under Consideration
To address future scalability and robustness, here are some ideas I’m exploring:
Command UUID Tagging – Assign each /ai request a unique hash so nodes ignore duplicates or foreign requests.
Addressed Commands – Require formatting like /ai TBOT: so only named nodes respond.
Designated AI Nodes – Allow networks to define one AI responder per channel or group.
Cooldown or Rate-Limiting Logic – Prevent repeated /ai responses within a short time window.
MQTT Scoped Routing – Ensure only one MQTT-connected node responds per command, avoiding echo storms.
📍 Summary and Next Steps
These are critical challenges for MESH-AI as it scales, particularly in large, bridged, or emergency-ready networks. While fun in isolated environments, AI-to-AI loops and uncontrolled command floods could cripple real-world deployments.
I am aware of these risks and am actively working on fixes to prevent:
Unintended AI loops,
Multi-node response storms,
And mesh saturation in future builds.
💬 Community Input Welcome
Have suggestions, workarounds, or experience with decentralized bot architectures?
Join the discussion! Let's build MESH-AI into a smarter, safer, and more scalable platform, together.
— TBOT 🛰️
Developer of MESH-AI