External Fleet agent recipe #8788

4 changes: 4 additions & 0 deletions config/recipes/elastic-agent/README.asciidoc
@@ -46,3 +46,7 @@ Deploys single instance Elastic Agent Deployment in Fleet mode with APM integrat
===== Synthetic monitoring - `synthetic-monitoring.yaml`

Deploys a Fleet-enrolled Elastic Agent that can be used for link:https://www.elastic.co/guide/en/observability/current/monitor-uptime-synthetics.html[Synthetic monitoring]. This Elastic Agent uses the `elastic-agent-complete` image. The agent policy still needs to be link:https://www.elastic.co/guide/en/observability/current/synthetics-private-location.html#synthetics-private-location-add[registered as a private location] in Kibana.

===== Fleet Server exposed both internally and externally - `fleet-ingress-setup.yaml`

This example shows how to expose the Fleet Server outside the Kubernetes cluster using an Ingress resource. The Fleet Server is configured with custom TLS certificates, and all communication is secured with TLS. The same Fleet Server remains accessible from within the cluster, so agents can connect to it regardless of where they run. Refer to the comments in `fleet-ingress-setup.yaml` for details on how to set up the Ingress resource and TLS certificates for this configuration.
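
The recipe assumes that TLS Secrets named `fleet-server-acme`, `es-acme`, and `kb-acme` already exist and contain certificates that agents outside the cluster trust. As a minimal sketch (the certificate material is a placeholder), one of these Secrets could look like this:

[source,yaml]
----
# Hypothetical example of the Secret referenced as fleet-server-acme.
# tls.crt and tls.key hold the server certificate and key for fleet.example.com;
# ca.crt holds the issuing CA, which the recipe propagates to agents via FLEET_CA.
apiVersion: v1
kind: Secret
metadata:
  name: fleet-server-acme
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded server certificate for fleet.example.com>
  tls.key: <base64-encoded private key>
  ca.crt: <base64-encoded issuing CA certificate>
----

With a certificate from a publicly trusted CA the `ca.crt` entry may not be strictly necessary, but the Fleet Server spec in the recipe expects it and propagates it to agents via `FLEET_CA`.
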
357 changes: 357 additions & 0 deletions config/recipes/elastic-agent/fleet-ingress-setup.yaml
@@ -0,0 +1,357 @@
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: kibana
spec:
version: 9.1.0
count: 1
elasticsearchRef:
name: elasticsearch
config:
xpack.fleet.agents.elasticsearch.hosts: ["https://es.example.com:443"]
xpack.fleet.agents.fleet_server.hosts: [ "https://fleet.example.com:443"]
xpack.fleet.packages:
- name: system
version: latest
- name: elastic_agent
version: latest
- name: fleet_server
version: latest
- name: kubernetes
version: latest
- name: apm
version: latest
xpack.fleet.agentPolicies:
- name: Fleet Server on ECK policy
id: eck-fleet-server
namespace: elastic
is_managed: true
monitoring_enabled:
- logs
- metrics
unenroll_timeout: 900
package_policies:
- name: fleet_server-1
id: fleet_server-1
package:
name: fleet_server
- name: Elastic Agent on ECK policy
id: eck-agent
namespace: elastic
is_managed: true
monitoring_enabled:
- logs
- metrics
unenroll_timeout: 900
package_policies:
- package:
name: system
name: system-1
- package:
name: kubernetes
name: kubernetes-1

---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: elasticsearch
spec:
version: 9.1.0
nodeSets:
- name: default-3
count: 3
config:
node.store.allow_mmap: false
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 30Gi

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: fleet-ingress
annotations:
# Disable HTTP traffic
kubernetes.io/ingress.allow-http: "false"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/proxy-ssl-verify: "off"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
# Depending on the ingress implementation in your environment you may need to specify the ingress class
# kubernetes.io/ingress.class: "example"
spec:
# or alternatively use the ingressClassName field. Consult the documentation of your ingress controller.
# ingressClassName: example
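  # The example.com hostnames used below are placeholders; they must resolve, via DNS,
  # to the external address of your ingress controller for external agents to reach the Fleet Server.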
tls:
  # The assumption here is that these certificates are trusted both by agents outside the cluster
  # and by those inside. See the comments in the Agent spec below for more details.
- hosts: ["fleet.example.com"]
secretName: fleet-server-acme
- hosts: ["es.example.com"]
secretName: es-acme
- hosts: ["kb.example.com"]
secretName: kb-acme
rules:
- host: "kb.example.com"
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: kibana-kb-http
port:
number: 5601
- host: "es.example.com"
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: elasticsearch-es-http
port:
number: 9200
- host: "fleet.example.com"
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: fleet-server-agent-http
port:
number: 8220
---
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
name: fleet-server
spec:
version: 9.1.0
http:
    # Configuring the same certificate secret used for the Ingress here means that the CA certificate
    # expected in ca.crt inside this secret is propagated to the agents and set in the FLEET_CA
    # environment variable. Without this, the agents would only trust the self-signed certificates
    # generated by ECK.
tls:
certificate:
secretName: fleet-server-acme
kibanaRef:
name: kibana
elasticsearchRefs:
- name: elasticsearch
mode: fleet
fleetServerEnabled: true
policyID: eck-fleet-server
deployment:
replicas: 1
podTemplate:
spec:
containers:
- name: agent
env:
          # Force the Elastic Agent to bootstrap itself through the public Fleet Server URL.
          # We are assuming here that the certificates configured above are only valid for the public URL.
- name: FLEET_URL
value: https://fleet.example.com:443
serviceAccountName: fleet-server
automountServiceAccountToken: true
securityContext:
runAsUser: 0
---
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
name: elastic-agent
spec:
config:
fleet:
enabled: true
providers.kubernetes:
add_resource_metadata:
deployment: true
version: 9.1.0
kibanaRef:
name: kibana
fleetServerRef:
name: fleet-server
mode: fleet
policyID: eck-agent
daemonSet:
podTemplate:
spec:
volumes:
- name: fleet-ca
secret:
secretName: fleet-server-acme
containers:
- name: agent
env:
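          # The fleet-ca volume above mounts the fleet-server-acme secret at /mnt/extra.
          # Uncomment FLEET_CA to point the agent at that CA certificate explicitly instead of
          # relying on the CA propagated from the Fleet Server's http.tls configuration.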
# - name: FLEET_CA
# value: /mnt/extra/ca.crt
- name: FLEET_URL
value: https://fleet.example.com
volumeMounts:
- name: fleet-ca
mountPath: /mnt/extra
serviceAccountName: elastic-agent
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
automountServiceAccountToken: true
securityContext:
runAsUser: 0
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fleet-server
rules:
- apiGroups: [""]
resources:
- pods
- namespaces
- nodes
verbs:
- get
- watch
- list
- apiGroups: ["apps"]
resources:
- replicasets
verbs:
- get
- watch
- list
- apiGroups: ["batch"]
resources:
- jobs
verbs:
- get
- watch
- list
- apiGroups: ["coordination.k8s.io"]
resources:
- leases
verbs:
- get
- create
- update
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: fleet-server
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: fleet-server
subjects:
- kind: ServiceAccount
name: fleet-server
namespace: default
roleRef:
kind: ClusterRole
name: fleet-server
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: elastic-agent
rules:
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
- events
- services
- configmaps
verbs:
- get
- watch
- list
- apiGroups: ["coordination.k8s.io"]
resources:
- leases
verbs:
- get
- create
- update
- nonResourceURLs:
- "/metrics"
verbs:
- get
- apiGroups: ["extensions"]
resources:
- replicasets
verbs:
- "get"
- "list"
- "watch"
- apiGroups:
- "apps"
resources:
- statefulsets
- deployments
- replicasets
- daemonsets
verbs:
- "get"
- "list"
- "watch"
- apiGroups:
- ""
resources:
- nodes/stats
verbs:
- get
- nonResourceURLs:
- "/metrics"
verbs:
- get
- apiGroups:
- "batch"
resources:
- jobs
- cronjobs
verbs:
- "get"
- "list"
- "watch"
- apiGroups:
- "storage.k8s.io"
resources:
- storageclasses
verbs:
- "get"
- "list"
- "watch"
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: elastic-agent
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: elastic-agent
subjects:
- kind: ServiceAccount
name: elastic-agent
namespace: default
roleRef:
kind: ClusterRole
name: elastic-agent
apiGroup: rbac.authorization.k8s.io