The manifests in manifests/ were adapted from the official Vernissage documentation for a Docker deployment: https://docs.joinvernissage.org/documentation/vernissageserver/dockercontainers. The docker-compose file is stored in source_files/ for reference.
This is not a Helm chart. The manifests in manifests/ are intended to be applied directly via kubectl apply -f MANIFEST_NAME.yaml.
Find the ConfigMap in your cluster that contains the CoreDNS Corefile. This is usually named coredns in the kube-system namespace.
You will need to add the following lines to the Corefile to allow Vernissage to resolve its own internal services:
rewrite name vernissage-api.internal vernissage-api.vernissage.svc.cluster.local.
rewrite name vernissage-web.internal vernissage-web.vernissage.svc.cluster.local.
rewrite name vernissage-proxy.internal vernissage-proxy.vernissage.svc.cluster.local.
rewrite name vernissage-push.internal vernissage-push.vernissage.svc.cluster.local.
rewrite name vernissage-redis.internal vernissage-redis.vernissage.svc.cluster.local.
I have included my Corefile in this repository at Corefile, to make it easier to know where to add these lines.
Edit the Corefile ConfigMap in your cluster with the following command:
kubectl edit configmap coredns -n kube-system
Add the lines above to the Corefile, save, and exit.
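For reference, the rewrite lines usually sit near the top of the main server block, before the kubernetes plugin. A rough sketch of what the edited Corefile might look like (the surrounding plugins shown here are typical defaults and will differ in your cluster):
.:53 {
    errors
    health
    ready
    rewrite name vernissage-api.internal vernissage-api.vernissage.svc.cluster.local.
    rewrite name vernissage-web.internal vernissage-web.vernissage.svc.cluster.local.
    rewrite name vernissage-proxy.internal vernissage-proxy.vernissage.svc.cluster.local.
    rewrite name vernissage-push.internal vernissage-push.vernissage.svc.cluster.local.
    rewrite name vernissage-redis.internal vernissage-redis.vernissage.svc.cluster.local.
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}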
The manifests deploy the following resources, in the order in which they are applied below:
- Namespace
- ConfigMaps
- Postgres storage (PersistentVolumeClaim)
- Postgres database
- Redis cache
- NGINX S3 Proxy
- Vernissage API backend
- Vernissage Web frontend
- Vernissage HTTP proxy
- Vernissage WebPush server
All the resources in the manifests/ directory need to live in a namespace. The resources, as written, use the vernissage namespace. You can change this in the manifests if you prefer a different namespace.
To create the namespace, run the following command:
kubectl create namespace vernissage
The following manifests in the manifests/ directory are ConfigMaps:
env.yaml
- The environment variables listed in this file are sourced from source_files/env, from the original Docker deployment. Environment variables that I did not use are prefixed with "z", but are left in env.yaml in case they are needed in the future.
postgres-secret.yaml
- Remember to set your Postgres password in this file.
Look through both files and set values the way that you want them.
Apply them as follows:
kubectl apply -f manifests/env.yaml
kubectl apply -f manifests/postgres-secret.yaml
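Before moving on, you can confirm that both objects exist in the namespace (the exact object names depend on what the two manifests define):
kubectl get configmaps,secrets -n vernissage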
These instructions assume that you are familiar with PersistentVolumes and PersistentVolumeClaims.
The PVC manifest in this repository is just an example. I built it with the Rancher UI. It probably will not work if you directly apply it via kubectl. I left the PVC manifest in this repository so that you can see what values may be relevant to set.
Before deploying the Postgres database, you need to create a PersistentVolumeClaim for the Postgres storage.
The manifest for the PersistentVolumeClaim is in manifests/postgres-pvc-vernissage.yaml.
Look through that file and confirm that it will work in your cluster -- for example, you may need to change the storageClassName to a storageClass that you have in your cluster.
Find what storageClassNames you have available in your cluster with the following command:
kubectl get storageclass
Notes about the Postgres PVC that you should set in your version of the manifest (a minimal sketch follows this list):
- accessModes: should be ReadWriteMany, in case you have multiple replicas of the Postgres database pods.
- resources.requests.storage: should be at least 10Gi, but you can set it to whatever you want. This is the size of the Postgres storage volume.
- name: keep it as postgres-pvc-vernissage -- if you change it, you will need to modify the postgres.yaml Deployment manifest to match the name you set.
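For orientation, a hand-written PVC with those fields might look something like this sketch -- the storageClassName value (longhorn) is only an assumption; substitute one of the classes returned by kubectl get storageclass:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc-vernissage
  namespace: vernissage
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: longhorn   # assumption -- replace with a storageClass available in your cluster
  resources:
    requests:
      storage: 10Gi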
Once you are satisfied with your PVC manifest, apply it with the following command:
kubectl apply -f manifests/postgres-pvc-vernissage.yaml
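Depending on the storage class, the claim may show as Pending until the first pod mounts it (WaitForFirstConsumer binding); either way, you can check its status with:
kubectl get pvc -n vernissage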
Deploy the Postgres database with the following commands:
kubectl apply -f manifests/postgres.yaml
kubectl apply -f manifests/postgres-service.yaml
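Before deploying the services that depend on the database, it can help to wait for Postgres to become ready. The Deployment name below (postgres) is an assumption -- use whatever name postgres.yaml actually defines:
kubectl rollout status deployment/postgres -n vernissage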
Deploy the Redis cache with the following commands:
kubectl apply -f manifests/vernissage-redis.yaml
kubectl apply -f manifests/vernissage-redis-service.yaml
Deploy the NGINX S3 Proxy with the following commands:
kubectl apply -f manifests/nginx-s3-proxy.yaml
kubectl apply -f manifests/nginx-s3-proxy-service.yaml
Deploy the Vernissage API backend with the following commands:
kubectl apply -f manifests/vernissage-api.yaml
kubectl apply -f manifests/vernissage-api-service.yaml
Deploy the Vernissage Web frontend with the following commands:
kubectl apply -f manifests/vernissage-web.yaml
kubectl apply -f manifests/vernissage-web-service.yaml
Deploy the Vernissage HTTP proxy with the following commands:
kubectl apply -f manifests/vernissage-proxy.yaml
kubectl apply -f manifests/vernissage-proxy-service.yaml
Deploy the Vernissage WebPush server with the following commands:
kubectl apply -f manifests/vernissage-push.yaml
kubectl apply -f manifests/vernissage-push-service.yaml
You can verify that all the pods are running with the following command:
kubectl get pods -n vernissage
You should see all the pods in the vernissage namespace in a Running state. If any pods are not running, you can check the logs of the pod with the following command:
kubectl logs POD_NAME -n vernissage
If you need to troubleshoot a specific pod, you can also use the following command to get more information about the pod:
kubectl describe pod POD_NAME -n vernissage
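Two other commands that often help while troubleshooting are watching pod status as it changes and listing recent events in the namespace:
kubectl get pods -n vernissage -w
kubectl get events -n vernissage --sort-by=.lastTimestamp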
Once all the pods are running, you can access Vernissage via the HTTP proxy service.
The HTTP proxy service routes traffic to either the Vernissage Web frontend or the Vernissage API backend, depending on the URL path.
The way I accessed my installation of Vernissage was by running cloudflared in my cluster, and configuring cloudflared to route a public hostname to the following Kubernetes-cluster-internal service:
http://vernissage-proxy.vernissage.svc.cluster.local.:8080
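If you take the cloudflared route, the relevant piece is an ingress rule in the cloudflared config that maps your public hostname to that internal service URL. A minimal sketch, assuming a config-file-based tunnel (the tunnel ID, credentials path, and hostname are placeholders):
tunnel: YOUR_TUNNEL_ID_HERE
credentials-file: /etc/cloudflared/creds/credentials.json
ingress:
  - hostname: YOUR_PUBLIC_HOSTNAME_HERE
    service: http://vernissage-proxy.vernissage.svc.cluster.local.:8080
  - service: http_status:404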
You can also use kubectl port-forward to access the Vernissage HTTP Proxy locally:
kubectl port-forward service/vernissage-proxy -n vernissage 8080:8080
Then you can access Vernissage at http://localhost:8080.
- Optional - Backups: If you want to back up your Vernissage database periodically to Backblaze B2, you can use the postgres-backup-cronjob.yaml manifest in the manifests/ folder.
  - Remember to walk through the manifest and customize values to your environment, such as:
    - YOUR_BUCKET_HERE
    - YOUR_K8S_CLUSTER_NAME_HERE
    - YOUR_S3_KEY_ID_HERE
    - YOUR_S3_KEY_HERE
  - You can restore backups in the postgres container with the following command (make sure the VernissageServer/API pod/container is not running while you restore the database backup):
psql -U vernissage-user -X -f /var/lib/postgresql/vernissage_db_file_here postgres
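  - One way to satisfy the "not running" requirement is to scale the API workload to zero for the duration of the restore and scale it back afterwards. The Deployment name below (vernissage-api) is an assumption -- match whatever name vernissage-api.yaml defines:
kubectl scale deployment/vernissage-api -n vernissage --replicas=0
# ...run the psql restore inside the Postgres container...
kubectl scale deployment/vernissage-api -n vernissage --replicas=1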
- Make sure to make your S3 bucket public (if on Backblaze B2), so that Vernissage can provide URLs to images in it to web clients, as part of the VERNISSAGE_CSP_IMG env var.
  - If you do not want to host a public S3 bucket, you'll need to run some kind of proxy that can serve up the bucket contents, and use that proxy URL for VERNISSAGE_CSP_IMG.
  - Remember to set the following setting in the Vernissage Web UI to match the S3 proxy URL: Vernissage -> Settings -> Images Url
  - An example S3 proxy is the NGINX S3 Gateway -- docs are in the docs folder.
  - Also remember to clear your CloudFlare cache if you are using CloudFlare.
- How to get WebPush notifications to work:
  - These instructions worked for me.
  - Generate a VAPID key pair using the following site: VAPID Key Generator
  - WebPush does NOT work in Incognito mode! WebPush requires a regular browser window. I don't understand why yet, but that's how it works.
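  - If you would rather generate the VAPID key pair locally than through a website, the web-push npm package ships a small generator CLI (assuming Node.js/npm are available): npx web-push generate-vapid-keys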
- Vernissage can only run on a single domain/URL.
- If you are using a metrics scraper like Alloy, you should exclude Vernissage from metrics scraping (until metrics are exposed by Vernissage):
  - Add the following to your Alloy config if you use Kubernetes discovery:
...
discovery.kubernetes "pods" {
  role = "pod"
  selectors {
    role = "pod"
    label = "app!=vernissage"
  }
}
discovery.kubernetes "services" {
  role = "service"
  selectors {
    role = "service"
    label = "app!=vernissage"
  }
}
...