modules/administration-guide/pages/running-at-scale.adoc: 6 additions, 10 deletions
@@ -13,17 +13,18 @@ Such a scale imposes high infrastructure demands and introduces potential bottle
 
 [NOTE]
 ====
-CDE workloads are complex to scale mainly because of the fact that underlying IDE solutions, such as link:https://github.com/microsoft/vscode[Visual Studio Code - Open Source ("Code - OSS")] or link:https://www.jetbrains.com/remote-development/gateway/[JetBrains Gateway], are designed as single-user applications rather than multitenant services.
+CDE workloads are complex to scale mainly because the underlying IDE solutions, such as link:https://github.com/microsoft/vscode[Visual Studio Code - Open Source ("Code - OSS")] or link:https://www.jetbrains.com/remote-development/gateway/[JetBrains Gateway], are designed as single-user applications rather than multitenant services.
 ====
 
 .Resource quantity and object maximums
 
 While there is no strict limit on the number of resources in a {kubernetes} cluster,
-there are certain link:https://kubernetes.io/docs/setup/best-practices/cluster-large/[considerations for large clusters] to keep in mind.
+there are certain link:https://kubernetes.io/docs/setup/best-practices/cluster-large/[considerations for large clusters] to remember.
 
 [NOTE]
 ====
-Learn more about the {kubernetes} scalability in the link:https://kubernetespodcast.com/episode/111-scalability/["Scalability, with Wojciech Tyczynski"] {kubernetes} Podcast.
+Learn more about {kubernetes} scalability in the link:https://kubernetespodcast.com/episode/111-scalability/["Scalability,
+with Wojciech Tyczynski" episode of the Kubernetes Podcast].
 ====
 
 link:https://www.redhat.com/en/technologies/cloud-computing/openshift[OpenShift Container Platform], which is a certified distribution of {kubernetes}, also provides a set of tested maximums for various resources, which can serve as an initial guideline for planning your environment:
@@ -128,11 +129,11 @@ You can find more details about DevWorkspace Operator Configuration in the link:
 .OLMConfig
 
 When an operator is installed by the link:https://olm.operatorframework.io/[Operator Lifecycle Manager (OLM)],
-a stripped-down copy of its CSV is created in every namespace the operator is configured to watch.
+a stripped-down copy of its CSV is created in every {namespace} the operator is configured to watch.
 These stripped-down CSVs are known as “Copied CSVs”
 and communicate to users which controllers are actively reconciling resource events in a given namespace.
 On especially large clusters, with namespaces and installed operators tending in the hundreds or thousands,
-Copied CSVs consume an untenable amount of resources; e.g. OLM’s memory usage, cluster etcd limits,
+Copied CSVs consume an untenable amount of resources; e.g., OLM’s memory usage, cluster etcd limits,
 networking, etc. To eliminate the CSVs copied to every namespace, configure the `OLMConfig` object accordingly:
 
 [source,yaml]
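
The hunk ends at the opening `[source,yaml]` marker, so the manifest itself is not visible here. For reference, OLM exposes this setting on the cluster-scoped `OLMConfig` resource through the `disableCopiedCSVs` feature flag; a minimal manifest along those lines (a sketch, not necessarily the exact block in the file) looks like this:

[source,yaml]
----
apiVersion: operators.coreos.com/v1
kind: OLMConfig
metadata:
  name: cluster
spec:
  features:
    disableCopiedCSVs: true   # stop OLM from copying CSVs into every watched namespace
----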
@@ -181,11 +182,6 @@ This approach can help
 improve performance and reliability by distributing the workload across multiple clusters
 and providing redundancy in case of cluster failures.
 
-[NOTE]
-====
-
-====
-
 From the infrastructure perspective, the Developer Sandbox consists of multiple link:https://www.redhat.com/en/technologies/cloud-computing/openshift/aws[ROSA] clusters. On each cluster, the productized version of {prod} is installed and configured using link:https://argo-cd.readthedocs.io/en/stable/[Argo CD]. Since the user base is spread across multiple clusters, link:https://workspaces.openshift.com/[workspaces.openshift.com] is used as a single entry point to the productized {prod} instances. You can find implementation details about the multicluster redirector in the following link:https://github.com/codeready-toolchain/crw-multicluster-redirector[GitHub repository].
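
As an illustration of the GitOps pattern mentioned in that paragraph, where Argo CD applies each cluster's {prod} configuration, a minimal Argo CD `Application` manifest could look like the sketch below. The repository URL, path, and names are hypothetical placeholders and are not taken from the actual Developer Sandbox configuration:

[source,yaml]
----
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prod-config            # hypothetical name for the per-cluster configuration app
  namespace: argocd            # namespace where Argo CD runs; often openshift-gitops on OpenShift
spec:
  project: default
  source:
    repoURL: https://github.com/example/cluster-config.git   # hypothetical Git repository
    targetRevision: main
    path: prod-config                                         # hypothetical path with the manifests
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD itself runs on
    namespace: openshift-operators
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual drift back to the Git state
----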