Raspberry Pi with Ollama Support Confirmed #25
mr-tbot announced in Announcements
Hey all - just wanted to take a moment to share my personal insights into getting this working on Raspberry Pi.
The exact steps depend a bit on your OS of choice, obviously, but the basic concept is the same regardless of the Linux distro.
Long story short, the hardest part is updating Python to the version required to run this correctly.
Out of the box, Raspberry Pi units still ship with Python 3.11, which is two years old as of this writing. The latest bleeding-edge release is actually 3.14, but to keep things stable I'm staying in 3.13 land for now.
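As a quick sanity check before doing anything, you can see what Python your Pi currently resolves (treating 3.13 as the minimum here is my assumption, based on what worked for me):

```shell
# Report which Python version the Pi currently resolves.
# Stock Raspberry Pi OS still reports 3.11 here.
ver="$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')"
echo "Found Python $ver"
```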
So you need to update the Python version on your Pi to get this going. I won't go into full detail here - I'll save that for the updated readme I'll be pushing soon - but in short: download the latest source tarball from Python's site, compile it on the Pi with make, making sure the SSL modules are included, then symlink the new binary into your bin folder so it's picked up by the python command in the terminal.
It's a pretty lengthy wait to compile on the Pi. ChatGPT can help you figure it out if you really want it working before I finish the write-up.
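Until that readme update lands, here's a rough sketch of those steps. The specific 3.13.1 tarball and the /usr/local paths are assumptions on my part; adjust to whichever release you grab:

```shell
# Build dependencies first -- libssl-dev is what gets you the SSL modules.
sudo apt update
sudo apt install -y build-essential libssl-dev zlib1g-dev libffi-dev \
    libbz2-dev libreadline-dev libsqlite3-dev

# Grab and unpack the source tarball from python.org.
wget https://www.python.org/ftp/python/3.13.1/Python-3.13.1.tgz
tar xzf Python-3.13.1.tgz
cd Python-3.13.1

# Configure and compile -- this is the lengthy wait on a Pi.
./configure --enable-optimizations
make -j"$(nproc)"

# altinstall avoids clobbering the system python3 that apt relies on.
sudo make altinstall

# Symlink so the plain `python` command picks up the new interpreter.
sudo ln -sf /usr/local/bin/python3.13 /usr/local/bin/python

# Verify SSL compiled in -- if this import fails, libssl-dev was missing.
python -c "import ssl; print(ssl.OPENSSL_VERSION)"
```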
But I managed to get this running on the latest Raspberry Pi OS after updating to Python 3.13. Flawless.
I even got Ollama running a small llama3.2 model, and it was actually reasonably quick to respond.
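For anyone wanting to reproduce that part, the Ollama side is just the official install script plus a model pull (the "say hello" prompt is only an example):

```shell
# Install Ollama via the official script, then pull a small llama3.2 model.
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2
# One-shot prompt to confirm it responds.
ollama run llama3.2 "Say hello from a Raspberry Pi."
```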
Mind-boggling how small a form factor a Mesh-AI node can truly be, and how powerful!
Cheers!