Releases: Napsty/check_rancher2

1.13.0

13 Jun 14:16
1acff23

Version 1.13.0 adds a new check type, api-token. This check monitors the expiry of the API token used by check_rancher2 itself.

Background
In newer Rancher2 versions, API tokens must be created with an expiry; tokens without an expiration date (as they could be in Rancher 2.5) are no longer valid. When the API token expires, the monitoring checks with check_rancher2 run into an authentication error. With the new api-token check type, used in combination with --expiry-warn, the plugin alerts in advance, before the token expires.
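A minimal sketch of how such an advance warning can work. Everything here is illustrative: the threshold value, the variable names and the hard-coded expiry timestamp are assumptions, not the plugin's actual code (in practice the expiry date would come from the Rancher API).

```shell
# Hypothetical sketch of an expiry warning -- NOT check_rancher2's actual code.
# 'expires_at' stands in for the token expiry date returned by the Rancher API.
expiry_warn=20                       # warning threshold in days (illustrative)
expires_at="2030-01-01T00:00:00Z"    # hypothetical token expiry timestamp
now=$(date +%s)
expires=$(date -d "$expires_at" +%s) # GNU date syntax
days_left=$(( (expires - now) / 86400 ))
if [ "$days_left" -lt "$expiry_warn" ]; then
  echo "WARNING: API token expires in ${days_left} days"
else
  echo "OK: API token expires in ${days_left} days"
fi
```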

Furthermore, the --cert-warn parameter, used for the local-certs certificate check, is now DEPRECATED. Please use --expiry-warn for this check as well.

The local-certs check type received a bug fix in the output.

Launching the plugin with --help no longer shows unrecognized option '--help' at the top.

1.12.1

08 Dec 11:46
8f6e1fb

Use 'command -v' instead of 'which' for the required-command check. This makes the plugin more distribution-independent, as 'which' is not installed by default on every Linux distribution.
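A sketch of such a distribution-independent required-command check. 'command -v' is specified by POSIX and built into the shell, while 'which' is a separate binary that may be missing. The command list here (sed, awk) is only illustrative; it is not necessarily the plugin's actual dependency list.

```shell
# Sketch of a required-command check using the POSIX 'command -v' builtin
# instead of the external 'which' binary. Command names are illustrative.
for cmd in sed awk; do
  if ! command -v "$cmd" >/dev/null 2>&1; then
    echo "UNKNOWN: required command '$cmd' not found"
    exit 3
  fi
done
echo "all required commands found"
```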

1.12.0

02 Feb 14:17
3774696

This release adds a new check type to the plugin.

The "local-certs" check type allows checking for valid certificates deployed by Rancher in the "local" cluster, under the "System" project. These certificates are deployed as Kubernetes secrets in the "cattle-system" namespace.

By default, these certificates are created with a one-year validity (Rancher, whyyyy?!). When these local certificates expire, this can cause issues in the background of Rancher, e.g. when RBAC actions are used. It's therefore important to verify that these certificates have not expired.
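As an illustration of how such an expiry check can be done, the following sketch uses plain openssl against a throwaway self-signed certificate. It is not the plugin's implementation; paths and the warning window are arbitrary.

```shell
# Demonstration only: generate a throwaway self-signed certificate valid for
# one year, then use 'openssl x509 -checkend' to test whether it expires
# within a warning window. NOT the plugin's actual code.
warn_days=30
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 365 -subj "/CN=demo" 2>/dev/null
if openssl x509 -checkend $((warn_days * 86400)) -noout -in /tmp/demo.crt >/dev/null; then
  echo "OK: certificate valid for more than ${warn_days} days"
else
  echo "WARNING: certificate expires within ${warn_days} days"
fi
```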

1.11.0

10 Jan 15:13

Enhancement:

  • Allow ignoring specific workloads (#40) with the -i parameter. Previously, only states of nodes and workloads could be ignored.

Fixes:

  • Do not treat a cluster in provisioning state as critical; treat it as a warning instead (#39 and #41)

1.10.0

09 Sep 13:49
a3bcdd3

Allow using the ignore status in workload checks (#29)
Fix problems with ComponentStatus health checks, which were removed in Kubernetes 1.23 (#35)
Show the Kubernetes version in the single cluster check output

1.9.0

29 Jul 10:08
9ae380f

Improvements in plugin output: see #32
Show the namespace of workloads in the output: see #33

1.8.0

10 Jun 05:22

Version 1.8.0 adds a lot of additional performance data to the plugin. The cluster and node performance data now show more information, including percentages of resources used (for example, CPU usage in percent).
The plugin now also supports long parameter names (e.g. --apihost as an alternative to -H).
Additional parameters were added to specifically trigger alerts when certain performance thresholds are reached at the node or cluster level: CPU usage, memory usage or pod usage.
Kudos and credits to @steffeneichler for this large PR!

1.7.1

01 Dec 14:17
4887168

Fix cluster status check (#26)

1.7.0

21 Oct 05:50
c9118fa

Version 1.7.0 adds additional internal checks to the -t node check. Prior to 1.7.0, only the node status was checked, which could be Active, Unavailable, Drained or Cordoned (if I remember correctly). Release 1.7.0 adds checks on pressure conditions, such as disk pressure.

1.6.1

24 Aug 14:18

Fix cluster and project not found error (#24)