This Terraform module provisions an AKS cluster along with its dependencies, such as a Resource Group, Key Vault, and ACR. It is a collection module: it encapsulates primitive modules like tf-azurerm-module_primitive-kubernetes_cluster.
Useful examples can be found in the examples directory, demonstrating how to use this module to provision different flavors of AKS with different configurations and capabilities.
Several add-ons can also be deployed on the AKS cluster to provide additional functionality. They are found in the resources directory. A few of the add-ons are:
- Ingress Controller
- Linkerd
- Emissary
- Azure AD Integration and RBAC
- Cert Manager
- Secrets Controller
- Reloaders
- Observability
- Demo Applications
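As a sketch, a minimal invocation of this module might look like the following (the module source path is hypothetical and follows the registry naming used by the internal modules listed below; input names come from the Inputs section, and all values are illustrative):

```hcl
module "aks" {
  # Hypothetical source/version - pin to the release of this collection module you actually use
  source  = "terraform.registry.launch.nttdata.com/module_collection/kubernetes_cluster/azurerm"
  version = "~> 1.0"

  # Resource-naming inputs
  product_family  = "dso"
  product_service = "kube"
  environment     = "dev"
  region          = "eastus"

  # Cluster configuration
  kubernetes_version      = "1.28"
  private_cluster_enabled = true
  vnet_subnet_id          = azurerm_subnet.aks.id # subnet assumed to exist elsewhere
}
```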
The diagram below shows the architecture of a private AKS cluster with all add-ons and integrations.
The `.pre-commit-config.yaml` file defines pre-commit hooks relevant to terraform, golang and common linting tasks. There are no custom hooks added.
The `commitlint` hook enforces commit messages in a certain format. A commit message contains the following structural elements to communicate intent to the consumers of your commit messages:
- fix: a commit of the type `fix` patches a bug in your codebase (this correlates with PATCH in Semantic Versioning).
- feat: a commit of the type `feat` introduces a new feature to the codebase (this correlates with MINOR in Semantic Versioning).
- BREAKING CHANGE: a commit that has a footer `BREAKING CHANGE:`, or appends a `!` after the type/scope, introduces a breaking API change (correlating with MAJOR in Semantic Versioning). A BREAKING CHANGE can be part of commits of any type. Footers other than `BREAKING CHANGE:` may be provided and follow a convention similar to the git trailer format.
- build: a commit of the type `build` adds changes that affect the build system or external dependencies (example scopes: gulp, broccoli, npm)
- chore: a commit of the type `chore` adds changes that don't modify src or test files
- ci: a commit of the type `ci` adds changes to our CI configuration files and scripts (example scopes: Travis, Circle, BrowserStack, SauceLabs)
- docs: a commit of the type `docs` adds documentation-only changes
- perf: a commit of the type `perf` adds a code change that improves performance
- refactor: a commit of the type `refactor` adds a code change that neither fixes a bug nor adds a feature
- revert: a commit of the type `revert` reverts a previous commit
- style: a commit of the type `style` adds code changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc.)
- test: a commit of the type `test` adds missing tests or corrects existing tests
The base configuration used for this project is commitlint-config-conventional (based on the Angular convention).
If you are a developer using vscode, this plugin may be helpful.
The `detect-secrets-hook` prevents new secrets from being introduced into the baseline. TODO: INSERT DOC LINK ABOUT HOOKS
In order for `pre-commit` hooks to work properly:
- You need to have the pre-commit package manager installed. Here are the installation instructions.

By default `pre-commit` installs all of the hooks when a commit is made, except for the `commitlint` hook. The `commitlint` hook needs to be installed manually using the command below:

```shell
pre-commit install --hook-type commit-msg
```
- For development/enhancements to this module locally, you'll need to install all of its components. This is controlled by the `configure` target in the project's `Makefile`. Before you can run `configure`, familiarize yourself with the variables in the `Makefile` and ensure they're pointing to the right places.

```shell
make configure
```

This adds several files and directories that are ignored by `git`. They expose many new Make targets.
- THIS STEP APPLIES ONLY TO MICROSOFT AZURE. IF YOU ARE USING A DIFFERENT PLATFORM, PLEASE SKIP THIS STEP. The first target you care about is `env`. This is the common interface for setting up environment variables. The values of these environment variables are used to authenticate with the cloud provider from the local development workstation.
The `make configure` command brings down an `azure_env.sh` file onto the local workstation. The developer needs to modify this file, replacing the environment variable values with relevant values.
These environment variables are used by the `terratest` integration suite.
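As a sketch, `azure_env.sh` typically exports the standard ARM_* variables consumed by the azurerm provider and the test suite (the exact variable set in the generated file may differ; all values below are placeholders to replace):

```shell
# Placeholder values - replace with your own tenant/subscription/service-principal details.
export ARM_ENVIRONMENT="public"
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="replace-me"
```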
The service principal used for authentication (the value of ARM_CLIENT_ID) should have the below privileges on the resource group within the subscription:
- "Microsoft.Resources/subscriptions/resourceGroups/write"
- "Microsoft.Resources/subscriptions/resourceGroups/read"
- "Microsoft.Resources/subscriptions/resourceGroups/delete"
Then run this make target to set the environment variables on the developer workstation:

```shell
make env
```
- The first target you care about is `check`.

Pre-requisites

Before running this target, ensure that the files mentioned below have been created on the local workstation, under the root directory of the git repository that contains the code for the primitives/segments. Note that these files are azure specific. If the primitive/segment under development uses a cloud provider other than azure, this section may not be relevant.
- Login to the Azure CLI
- Export the below environment variables

```shell
export ARM_ENVIRONMENT=<public|usgovernment>
export ARM_SUBSCRIPTION_ID=<subscription_id>
```
- A file named `provider.tf` with the contents below

```hcl
provider "azurerm" {
  features {
    resource_group {
      prevent_deletion_if_contains_resources = false
    }
  }
  skip_provider_registration = true
}

provider "azapi" {
  use_cli = true
  use_msi = false
}
```
- A file named `terraform.tfvars` which contains the key/value pairs of the variables used.

Note that since these files are listed in `.gitignore`, they will not be checked in to the primitive/segment's git repo.
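For example, a minimal `terraform.tfvars` might look like the sketch below (variable names are taken from the Inputs section; the values are illustrative, not recommendations):

```hcl
product_family  = "dso"
product_service = "kube"
environment     = "dev"
region          = "eastus"

kubernetes_version      = "1.28"
private_cluster_enabled = true
```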
After creating these files, run the following to execute the tests associated with the primitive/segment:

```shell
make check
```

If the `make check` target is successful, the developer is good to commit the code to the primitive/segment's git repo.

The `make check` target:
- runs `terraform` commands to `lint`, `validate` and `plan` the terraform code
- runs `conftest`. `conftest` makes sure the `policy` checks are successful
- runs `terratest`. This is the integration test suite
- runs `opa` tests
Name | Version |
---|---|
terraform | ~> 1.0 |
azapi | >= 1.4.0, < 2.0 |
azurerm | ~> 3.117 |
No providers.
Name | Source | Version |
---|---|---|
resource_names | terraform.registry.launch.nttdata.com/module_library/resource_name/launch | ~> 2.0 |
resource_group | terraform.registry.launch.nttdata.com/module_primitive/resource_group/azurerm | ~> 1.0 |
key_vault | terraform.registry.launch.nttdata.com/module_primitive/key_vault/azurerm | ~> 1.0 |
key_vault_role_assignment | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
additional_key_vaults_role_assignment | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
cluster_identity | terraform.registry.launch.nttdata.com/module_primitive/user_managed_identity/azurerm | ~> 1.0 |
cluster_identity_roles | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
private_cluster_dns_zone | terraform.registry.launch.nttdata.com/module_primitive/private_dns_zone/azurerm | ~> 1.0 |
cluster_identity_private_dns_contributor | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
vnet_links | terraform.registry.launch.nttdata.com/module_primitive/private_dns_vnet_link/azurerm | ~> 1.0 |
route_table | terraform.registry.launch.nttdata.com/module_primitive/route_table/azurerm | ~> 1.0 |
udr_route_table_role_assignment | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
routes | terraform.registry.launch.nttdata.com/module_primitive/route/azurerm | ~> 1.0 |
subnet_route_table_assoc | terraform.registry.launch.nttdata.com/module_primitive/routetable_subnet_association/azurerm | ~> 1.0 |
aks | terraform.registry.launch.nttdata.com/module_primitive/kubernetes_cluster/azurerm | ~> 2.0 |
node_pool_identity_roles | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
additional_acr_role_assignments | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
public_dns_zone | terraform.registry.launch.nttdata.com/module_primitive/dns_zone/azurerm | ~> 1.0 |
kubelet_public_dns_contributor | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
kubelet_resource_group_reader | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
application_insights | terraform.registry.launch.nttdata.com/module_primitive/application_insights/azurerm | ~> 1.0 |
prometheus_monitor_workspace | terraform.registry.launch.nttdata.com/module_primitive/monitor_workspace/azurerm | ~> 1.0 |
prometheus_monitor_data_collection | terraform.registry.launch.nttdata.com/module_primitive/monitor_prometheus/azurerm | ~> 1.0 |
prometheus_monitor_workspace_private_dns_zone | terraform.registry.launch.nttdata.com/module_primitive/private_dns_zone/azurerm | ~> 1.0 |
prometheus_monitor_workspace_vnet_link | terraform.registry.launch.nttdata.com/module_primitive/private_dns_vnet_link/azurerm | ~> 1.0 |
prometheus_monitor_workspace_private_endpoint | terraform.registry.launch.nttdata.com/module_primitive/private_endpoint/azurerm | ~> 1.0 |
monitor_private_link_scope | terraform.registry.launch.nttdata.com/module_primitive/azure_monitor_private_link_scope/azurerm | ~> 1.0 |
monitor_private_link_scope_dns_zone | terraform.registry.launch.nttdata.com/module_primitive/private_dns_zone/azurerm | ~> 1.0 |
monitor_private_link_scope_vnet_link | terraform.registry.launch.nttdata.com/module_primitive/private_dns_vnet_link/azurerm | ~> 1.0 |
monitor_private_link_scope_private_endpoint | terraform.registry.launch.nttdata.com/module_primitive/private_endpoint/azurerm | ~> 1.0 |
monitor_private_link_scoped_service | terraform.registry.launch.nttdata.com/module_primitive/monitor_private_link_scoped_service/azurerm | ~> 1.0 |
No resources.
Name | Description | Type | Default | Required |
---|---|---|---|---|
product_family | (Required) Name of the product family for which the resource is created. Example: org_name, department_name. | string | "dso" | no |
product_service | (Required) Name of the product service for which the resource is created. For example, backend, frontend, middleware etc. | string | "kube" | no |
environment | Environment in which the resource should be provisioned like dev, qa, prod etc. | string | "dev" | no |
environment_number | The environment count for the respective environment. Defaults to 000. Increments in value of 1 | string | "000" | no |
resource_number | The resource count for the respective resource. Defaults to 000. Increments in value of 1 | string | "000" | no |
region | Azure region in which the infra needs to be provisioned | string | "eastus" | no |
resource_names_map | A map of key to resource_name that will be used by tf-launch-module_library-resource_name to generate resource names | map(object( | { | no |
resource_group_name | Name of the resource group in which the AKS cluster will be created. If not provided, this module will create one | string | null | no |
kubernetes_version | Specify which Kubernetes release to use. The default used is the latest Kubernetes version available in the region. Use az aks get-versions --location <region> to find the available versions in the region | string | "1.28" | no |
network_plugin | Network plugin to use for networking. Default is azure. | string | "azure" | no |
network_plugin_mode | (Optional) Specifies the network plugin mode used for building the Kubernetes network. Possible value is Overlay. Changing this forces a new resource to be created. | string | null | no |
network_policy | (Optional) Sets up network policy to be used with Azure CNI. Network policy allows us to control the traffic flow between pods. Currently supported values are calico and azure. Changing this forces a new resource to be created. | string | null | no |
private_cluster_enabled | If true, the cluster API server will be exposed only on an internal IP address and available only in the cluster vnet. | bool | false | no |
private_cluster_public_fqdn_enabled | (Optional) Specifies whether a Public FQDN for this Private Cluster should be added. Defaults to false. | bool | false | no |
azure_policy_enabled | Enable the Azure Policy add-on for the AKS cluster? Defaults to false. | bool | false | no |
additional_vnet_links | A list of VNET IDs for which vnet links are to be created with the private AKS cluster DNS Zone. Applicable only when private_cluster_enabled is true. | map(string) | {} | no |
cluster_identity_role_assignments | A map of role assignments to be associated with the cluster identity. Should be of the format { private-dns = ["Private DNS Zone Contributor", ""] dns = ["DNS Zone Contributor", ""] } | map(list(string)) | {} | no |
node_pool_identity_role_assignments | A map of role assignments to be associated with the node-pool identity. Should be of the format { private-dns = ["Private DNS Zone Contributor", ""] dns = ["DNS Zone Contributor", ""] } | map(list(string)) | {} | no |
dns_zone_suffix | The DNS Zone suffix for the AKS Cluster private DNS Zone. Default is azmk8s.io for Public Cloud. For Gov Cloud it is cx.aks.containerservice.azure.us | string | "azmk8s.io" | no |
vnet_subnet_id | (Optional) The ID of a Subnet where the Kubernetes Node Pool should exist. Changing this forces a new resource to be created. | string | null | no |
net_profile_dns_service_ip | (Optional) IP address within the Kubernetes service address range that will be used by cluster service discovery (kube-dns). Changing this forces a new resource to be created. | string | null | no |
net_profile_outbound_type | (Optional) The outbound (egress) routing method which should be used for this Kubernetes Cluster. Possible values are loadBalancer and userDefinedRouting. Defaults to loadBalancer. If userDefinedRouting is selected, the user_defined_routing variable is required. | string | "loadBalancer" | no |
user_defined_routing | This variable is required only when net_profile_outbound_type is set to userDefinedRouting. The private IP address of the Azure Firewall instance is needed to create the route in the custom Route table | object({ | null | no |
net_profile_pod_cidr | (Optional) The CIDR to use for pod IP addresses. This field can only be set when network_plugin is set to kubenet. Changing this forces a new resource to be created. | string | null | no |
net_profile_service_cidr | (Optional) The Network Range used by the Kubernetes service. Changing this forces a new resource to be created. | string | null | no |
agents_availability_zones | (Optional) A list of Availability Zones across which the Node Pool should be spread. Changing this forces a new resource to be created. | list(string) | null | no |
agents_count | The number of Agents that should exist in the Agent Pool. Please set agents_count to null while enable_auto_scaling is true to avoid possible agents_count changes. | number | 2 | no |
agents_labels | (Optional) A map of Kubernetes labels which should be applied to nodes in the Default Node Pool. Changing this forces a new resource to be created. | map(string) | {} | no |
agents_max_count | Maximum number of nodes in a pool | number | null | no |
agents_max_pods | (Optional) The maximum number of pods that can run on each agent. Changing this forces a new resource to be created. | number | null | no |
agents_min_count | Minimum number of nodes in a pool | number | null | no |
agents_pool_max_surge | The maximum number or percentage of nodes which will be added to the Default Node Pool size during an upgrade. | string | null | no |
agents_pool_linux_os_configs | list(object({ sysctl_configs = optional(list(object({ fs_aio_max_nr = (Optional) The sysctl setting fs.aio-max-nr. Must be between 65536 and 6553500. Changing this forces a new resource to be created. fs_file_max = (Optional) The sysctl setting fs.file-max. Must be between 8192 and 12000500. Changing this forces a new resource to be created. fs_inotify_max_user_watches = (Optional) The sysctl setting fs.inotify.max_user_watches. Must be between 781250 and 2097152. Changing this forces a new resource to be created. fs_nr_open = (Optional) The sysctl setting fs.nr_open. Must be between 8192 and 20000500. Changing this forces a new resource to be created. kernel_threads_max = (Optional) The sysctl setting kernel.threads-max. Must be between 20 and 513785. Changing this forces a new resource to be created. net_core_netdev_max_backlog = (Optional) The sysctl setting net.core.netdev_max_backlog. Must be between 1000 and 3240000. Changing this forces a new resource to be created. net_core_optmem_max = (Optional) The sysctl setting net.core.optmem_max. Must be between 20480 and 4194304. Changing this forces a new resource to be created. net_core_rmem_default = (Optional) The sysctl setting net.core.rmem_default. Must be between 212992 and 134217728. Changing this forces a new resource to be created. net_core_rmem_max = (Optional) The sysctl setting net.core.rmem_max. Must be between 212992 and 134217728. Changing this forces a new resource to be created. net_core_somaxconn = (Optional) The sysctl setting net.core.somaxconn. Must be between 4096 and 3240000. Changing this forces a new resource to be created. net_core_wmem_default = (Optional) The sysctl setting net.core.wmem_default. Must be between 212992 and 134217728. Changing this forces a new resource to be created. net_core_wmem_max = (Optional) The sysctl setting net.core.wmem_max. Must be between 212992 and 134217728. Changing this forces a new resource to be created. net_ipv4_ip_local_port_range_min = (Optional) The sysctl setting net.ipv4.ip_local_port_range min value. Must be between 1024 and 60999. Changing this forces a new resource to be created. net_ipv4_ip_local_port_range_max = (Optional) The sysctl setting net.ipv4.ip_local_port_range max value. Must be between 1024 and 60999. Changing this forces a new resource to be created. net_ipv4_neigh_default_gc_thresh1 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh1. Must be between 128 and 80000. Changing this forces a new resource to be created. net_ipv4_neigh_default_gc_thresh2 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh2. Must be between 512 and 90000. Changing this forces a new resource to be created. net_ipv4_neigh_default_gc_thresh3 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh3. Must be between 1024 and 100000. Changing this forces a new resource to be created. net_ipv4_tcp_fin_timeout = (Optional) The sysctl setting net.ipv4.tcp_fin_timeout. Must be between 5 and 120. Changing this forces a new resource to be created. net_ipv4_tcp_keepalive_intvl = (Optional) The sysctl setting net.ipv4.tcp_keepalive_intvl. Must be between 10 and 75. Changing this forces a new resource to be created. net_ipv4_tcp_keepalive_probes = (Optional) The sysctl setting net.ipv4.tcp_keepalive_probes. Must be between 1 and 15. Changing this forces a new resource to be created. net_ipv4_tcp_keepalive_time = (Optional) The sysctl setting net.ipv4.tcp_keepalive_time. Must be between 30 and 432000. Changing this forces a new resource to be created. net_ipv4_tcp_max_syn_backlog = (Optional) The sysctl setting net.ipv4.tcp_max_syn_backlog. Must be between 128 and 3240000. Changing this forces a new resource to be created. net_ipv4_tcp_max_tw_buckets = (Optional) The sysctl setting net.ipv4.tcp_max_tw_buckets. Must be between 8000 and 1440000. Changing this forces a new resource to be created. net_ipv4_tcp_tw_reuse = (Optional) The sysctl setting net.ipv4.tcp_tw_reuse. Changing this forces a new resource to be created. net_netfilter_nf_conntrack_buckets = (Optional) The sysctl setting net.netfilter.nf_conntrack_buckets. Must be between 65536 and 147456. Changing this forces a new resource to be created. net_netfilter_nf_conntrack_max = (Optional) The sysctl setting net.netfilter.nf_conntrack_max. Must be between 131072 and 1048576. Changing this forces a new resource to be created. vm_max_map_count = (Optional) The sysctl setting vm.max_map_count. Must be between 65530 and 262144. Changing this forces a new resource to be created. vm_swappiness = (Optional) The sysctl setting vm.swappiness. Must be between 0 and 100. Changing this forces a new resource to be created. vm_vfs_cache_pressure = (Optional) The sysctl setting vm.vfs_cache_pressure. Must be between 0 and 100. Changing this forces a new resource to be created. })), []) transparent_huge_page_enabled = (Optional) Specifies the Transparent Huge Page enabled configuration. Possible values are always, madvise and never. Changing this forces a new resource to be created. transparent_huge_page_defrag = (Optional) Specifies the defrag configuration for Transparent Huge Page. Possible values are always, defer, defer+madvise, madvise and never. Changing this forces a new resource to be created. swap_file_size_mb = (Optional) Specifies the size of the swap file on each node in MB. Changing this forces a new resource to be created. })) | list(object({ | [] | no |
agents_pool_name | The default Azure AKS agentpool (nodepool) name. | string | "nodepool" | no |
agents_proximity_placement_group_id | (Optional) The ID of the Proximity Placement Group of the default Azure AKS agentpool (nodepool). Changing this forces a new resource to be created. | string | null | no |
agents_size | The default virtual machine size for the Kubernetes agents. Changing this without specifying var.temporary_name_for_rotation forces a new resource to be created. | string | "Standard_D2_v2" | no |
temporary_name_for_rotation | (Optional) Specifies the name of the temporary node pool used to cycle the default node pool for VM resizing. When set, var.agents_size is no longer ForceNew and can be resized by specifying temporary_name_for_rotation | string | null | no |
agents_tags | (Optional) A mapping of tags to assign to the Node Pool. | map(string) | {} | no |
agents_taints | (Optional) A list of the taints added to new nodes during node pool create and scale. Changing this forces a new resource to be created. | list(string) | null | no |
agents_type | (Optional) The type of Node Pool which should be created. Possible values are AvailabilitySet and VirtualMachineScaleSets. Defaults to VirtualMachineScaleSets. | string | "VirtualMachineScaleSets" | no |
api_server_authorized_ip_ranges | (Optional) The IP ranges to allow for incoming traffic to the server nodes. | set(string) | null | no |
api_server_subnet_id | (Optional) The ID of the Subnet where the API server endpoint is delegated to. | string | null | no |
attached_acr_id_map | Azure Container Registry ids that need an authentication mechanism with Azure Kubernetes Service (AKS). The map key must be a static string (the acr's name); the value is the acr's resource id. Changing this forces some new resources to be created. | map(string) | {} | no |
enable_auto_scaling | Enable node pool autoscaling. Please set agents_count to null while enable_auto_scaling is true to avoid possible agents_count changes. | bool | false | no |
auto_scaler_profile_balance_similar_node_groups | Detect similar node groups and balance the number of nodes between them. Defaults to false. | bool | false | no |
auto_scaler_profile_empty_bulk_delete_max | Maximum number of empty nodes that can be deleted at the same time. Defaults to 10. | number | 10 | no |
auto_scaler_profile_enabled | Enable configuring the auto scaler profile | bool | false | no |
auto_scaler_profile_expander | Expander to use. Possible values are least-waste, priority, most-pods and random. Defaults to random. | string | "random" | no |
auto_scaler_profile_max_graceful_termination_sec | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node. Defaults to 600. | string | "600" | no |
auto_scaler_profile_max_node_provisioning_time | Maximum time the autoscaler waits for a node to be provisioned. Defaults to 15m. | string | "15m" | no |
auto_scaler_profile_max_unready_nodes | Maximum number of allowed unready nodes. Defaults to 3. | number | 3 | no |
auto_scaler_profile_max_unready_percentage | Maximum percentage of unready nodes; the cluster autoscaler will stop if the percentage is exceeded. Defaults to 45. | number | 45 | no |
auto_scaler_profile_new_pod_scale_up_delay | For scenarios like burst/batch scale where you don't want CA to act before the kubernetes scheduler could schedule all the pods, you can tell CA to ignore unscheduled pods before they're a certain age. Defaults to 10s. | string | "10s" | no |
auto_scaler_profile_scale_down_delay_after_add | How long after the scale up of AKS nodes the scale down evaluation resumes. Defaults to 10m. | string | "10m" | no |
auto_scaler_profile_scale_down_delay_after_delete | How long after node deletion that scale down evaluation resumes. Defaults to the value used for scan_interval. | string | null | no |
auto_scaler_profile_scale_down_delay_after_failure | How long after scale down failure that scale down evaluation resumes. Defaults to 3m. | string | "3m" | no |
auto_scaler_profile_scale_down_unneeded | How long a node should be unneeded before it is eligible for scale down. Defaults to 10m. | string | "10m" | no |
auto_scaler_profile_scale_down_unready | How long an unready node should be unneeded before it is eligible for scale down. Defaults to 20m. | string | "20m" | no |
auto_scaler_profile_scale_down_utilization_threshold | Node utilization level, defined as sum of requested resources divided by capacity, below which a node can be considered for scale down. Defaults to 0.5. | string | "0.5" | no |
auto_scaler_profile_scan_interval | How often the AKS Cluster should be re-evaluated for scale up/down. Defaults to 10s. | string | "10s" | no |
auto_scaler_profile_skip_nodes_with_local_storage | If true, cluster autoscaler will never delete nodes with pods with local storage, for example, EmptyDir or HostPath. Defaults to true. | bool | true | no |
auto_scaler_profile_skip_nodes_with_system_pods | If true, cluster autoscaler will never delete nodes with pods from kube-system (except for DaemonSet or mirror pods). Defaults to true. | bool | true | no |
os_disk_size_gb | Disk size of nodes in GBs. | number | 50 | no |
os_disk_type | The type of disk which should be used for the Operating System. Possible values are Ephemeral and Managed. Defaults to Managed. Changing this forces a new resource to be created. | string | "Managed" | no |
os_sku | (Optional) Specifies the OS SKU used by the agent pool. Possible values include: Ubuntu, CBLMariner, Mariner, Windows2019, Windows2022. If not specified, the default is Ubuntu if OSType=Linux or Windows2019 if OSType=Windows. The default Windows OSSKU will be changed to Windows2022 after Windows2019 is deprecated. Changing this forces a new resource to be created. | string | null | no |
pod_subnet_id | (Optional) The ID of the Subnet where the pods in the default Node Pool should exist. Changing this forces a new resource to be created. | string | null | no |
rbac_aad | (Optional) Is Azure Active Directory integration enabled? | bool | false | no |
rbac_aad_admin_group_object_ids | Object IDs of groups with admin access. | list(string) | null | no |
rbac_aad_azure_rbac_enabled | (Optional) Is Role Based Access Control based on Azure AD enabled? | bool | null | no |
rbac_aad_client_app_id | The Client ID of an Azure Active Directory Application. | string | null | no |
rbac_aad_managed | Is the Azure Active Directory integration Managed, meaning that Azure will create/manage the Service Principal used for integration. | bool | false | no |
rbac_aad_server_app_id | The Server ID of an Azure Active Directory Application. | string | null | no |
rbac_aad_server_app_secret | The Server Secret of an Azure Active Directory Application. | string | null | no |
rbac_aad_tenant_id | (Optional) The Tenant ID used for the Azure Active Directory Application. If this isn't specified the Tenant ID of the current Subscription is used. | string | null | no |
role_based_access_control_enabled | Enable Role Based Access Control. If this is disabled, then identity_type='SystemAssigned' by default | bool | false | no |
local_account_disabled | (Optional) If true, local accounts will be disabled. Defaults to false. See the documentation for more information. | bool | null | no |
oidc_issuer_enabled | Enable or Disable the OIDC issuer URL. Defaults to false. | bool | false | no |
workload_identity_enabled | Enable or Disable Workload Identity. If enabled, oidc_issuer_enabled must be true. Defaults to false. | bool | false | no |
cluster_log_analytics_workspace_name | (Optional) The name of the Analytics workspace to create | string | null | no |
log_analytics_workspace | (Optional) Existing azurerm_log_analytics_workspace to attach azurerm_log_analytics_solution. Providing the config disables creation of azurerm_log_analytics_workspace. | object({ | null | no |
log_analytics_workspace_allow_resource_only_permissions | (Optional) Specifies if the Log Analytics Workspace allows users to access data associated with resources they have permission to view, without permission to the workspace. Defaults to true. | bool | null | no |
log_analytics_workspace_cmk_for_query_forced | (Optional) Is Customer Managed Storage mandatory for query management? | bool | null | no |
log_analytics_workspace_daily_quota_gb | (Optional) The workspace daily quota for ingestion in GB. Defaults to -1 (unlimited) if omitted. | number | null | no |
log_analytics_workspace_data_collection_rule_id | (Optional) The ID of the Data Collection Rule to use for this workspace. | string | null | no |
log_analytics_workspace_enabled | Enable the integration of azurerm_log_analytics_workspace and azurerm_log_analytics_solution: https://docs.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-onboard | bool | true | no |
log_analytics_workspace_resource_group_name | (Optional) Resource group name to create azurerm_log_analytics_solution. | string | null | no |
log_analytics_workspace_sku | The SKU (pricing level) of the Log Analytics workspace. For new subscriptions the SKU should be set to PerGB2018 | string | "PerGB2018" | no |
log_analytics_workspace_identity | identity_ids - (Optional) Specifies a list of user managed identity ids to be assigned. Required if type is UserAssigned. type - (Required) Specifies the identity type of the Log Analytics Workspace. Possible values are SystemAssigned (where Azure will generate a Service Principal for you) and UserAssigned where you can specify the Service Principal IDs in the identity_ids field. | object({ | null | no |
log_analytics_workspace_immediate_data_purge_on_30_days_enabled | (Optional) Whether to remove the data in the Log Analytics Workspace immediately after 30 days. | bool | null | no |
log_analytics_workspace_internet_ingestion_enabled | (Optional) Should the Log Analytics Workspace support ingestion over the Public Internet? Defaults to true. | bool | null | no |
log_analytics_workspace_internet_query_enabled | (Optional) Should the Log Analytics Workspace support querying over the Public Internet? Defaults to true. | bool | null | no |
log_analytics_workspace_local_authentication_disabled | (Optional) Specifies if the Log Analytics workspace should enforce authentication using Azure AD. Defaults to false. | bool | null | no |
log_analytics_workspace_reservation_capacity_in_gb_per_day | (Optional) The capacity reservation level in GB for this workspace. Possible values are 100, 200, 300, 400, 500, 1000, 2000 and 5000. | number | null | no |
log_retention_in_days | The retention period for the logs in days | number | 30 | no |
node_pools | A map of node pools that need to be created and attached to the Kubernetes cluster. The key of the map can be the name of the node pool, and the key must be a static string. The value of the map is a node_pool block as defined below: map(object({ name = (Required) The name of the Node Pool which should be created within the Kubernetes Cluster. Changing this forces a new resource to be created. A Windows Node Pool cannot have a name longer than 6 characters. A random suffix of 4 characters is always added to the name to avoid clashes during recreates. node_count = (Optional) The initial number of nodes which should exist within this Node Pool. Valid values are between 0 and 1000 (inclusive) for user pools and between 1 and 1000 (inclusive) for system pools and must be a value in the range min_count - max_count. tags = (Optional) A mapping of tags to assign to the resource. At this time there's a bug in the AKS API where Tags for a Node Pool are not stored in the correct case - you may wish to use Terraform's ignore_changes functionality to ignore changes to the casing until this is fixed in the AKS API. vm_size = (Required) The SKU which should be used for the Virtual Machines used in this Node Pool. Changing this forces a new resource to be created. host_group_id = (Optional) The fully qualified resource ID of the Dedicated Host Group to provision virtual machines from. Changing this forces a new resource to be created. capacity_reservation_group_id = (Optional) Specifies the ID of the Capacity Reservation Group where this Node Pool should exist. Changing this forces a new resource to be created. custom_ca_trust_enabled = (Optional) Specifies whether to trust a Custom CA. This requires that the Preview Feature Microsoft.ContainerService/CustomCATrustPreview is enabled and the Resource Provider is re-registered, see the documentation for more information. enable_auto_scaling = (Optional) Whether to enable auto-scaler. enable_host_encryption = (Optional) Should the nodes in this Node Pool have host encryption enabled? Changing this forces a new resource to be created. enable_node_public_ip = (Optional) Should each node have a Public IP Address? Changing this forces a new resource to be created. eviction_policy = (Optional) The Eviction Policy which should be used for Virtual Machines within the Virtual Machine Scale Set powering this Node Pool. Possible values are Deallocate and Delete. Changing this forces a new resource to be created. An Eviction Policy can only be configured when priority is set to Spot and will default to Delete unless otherwise specified. gpu_instance = (Optional) Specifies the GPU MIG instance profile for supported GPU VM SKU. The allowed values are MIG1g, MIG2g, MIG3g, MIG4g and MIG7g. Changing this forces a new resource to be created. kubelet_config = optional(object({ cpu_manager_policy = (Optional) Specifies the CPU Manager policy to use. Possible values are none and static. Changing this forces a new resource to be created. cpu_cfs_quota_enabled = (Optional) Is CPU CFS quota enforcement for containers enabled? Changing this forces a new resource to be created. cpu_cfs_quota_period = (Optional) Specifies the CPU CFS quota period value. Changing this forces a new resource to be created. image_gc_high_threshold = (Optional) Specifies the percent of disk usage above which image garbage collection is always run. Must be between 0 and 100. Changing this forces a new resource to be created. image_gc_low_threshold = (Optional) Specifies the percent of disk usage lower than which image garbage collection is never run. Must be between 0 and 100. Changing this forces a new resource to be created. topology_manager_policy = (Optional) Specifies the Topology Manager policy to use. Possible values are none, best-effort, restricted or single-numa-node. Changing this forces a new resource to be created. allowed_unsafe_sysctls = (Optional) Specifies the allow list of unsafe sysctls command or patterns (ending in *). Changing this forces a new resource to be created. container_log_max_size_mb = (Optional) Specifies the maximum size (e.g. 10MB) of container log file before it is rotated. Changing this forces a new resource to be created. container_log_max_files = (Optional) Specifies the maximum number of container log files that can be present for a container. Must be at least 2. Changing this forces a new resource to be created. pod_max_pid = (Optional) Specifies the maximum number of processes per pod. Changing this forces a new resource to be created. })) linux_os_config = optional(object({ sysctl_config = optional(object({ fs_aio_max_nr = (Optional) The sysctl setting fs.aio-max-nr. Must be between 65536 and 6553500. Changing this forces a new resource to be created. fs_file_max = (Optional) The sysctl setting fs.file-max. Must be between 8192 and 12000500. Changing this forces a new resource to be created. fs_inotify_max_user_watches = (Optional) The sysctl setting fs.inotify.max_user_watches. Must be between 781250 and 2097152. Changing this forces a new resource to be created. fs_nr_open = (Optional) The sysctl setting fs.nr_open. Must be between 8192 and 20000500. Changing this forces a new resource to be created. kernel_threads_max = (Optional) The sysctl setting kernel.threads-max. Must be between 20 and 513785. Changing this forces a new resource to be created. net_core_netdev_max_backlog = (Optional) The sysctl setting net.core.netdev_max_backlog. Must be between 1000 and 3240000. Changing this forces a new resource to be created. net_core_optmem_max = (Optional) The sysctl setting net.core.optmem_max. Must be between 20480 and 4194304. Changing this forces a new resource to be created. net_core_rmem_default = (Optional) The sysctl setting net.core.rmem_default. Must be between 212992 and 134217728. Changing this forces a new resource to be created. net_core_rmem_max = (Optional) The sysctl setting net.core.rmem_max. Must be between 212992 and 134217728. Changing this forces a new resource to be created. net_core_somaxconn = (Optional) The sysctl setting net.core.somaxconn. Must be between 4096 and 3240000. Changing this forces a new resource to be created. net_core_wmem_default = (Optional) The sysctl setting net.core.wmem_default. Must be between 212992 and 134217728. Changing this forces a new resource to be created. net_core_wmem_max = (Optional) The sysctl setting net.core.wmem_max. Must be between 212992 and 134217728. Changing this forces a new resource to be created. net_ipv4_ip_local_port_range_min = (Optional) The sysctl setting net.ipv4.ip_local_port_range min value. Must be between 1024 and 60999. Changing this forces a new resource to be created. net_ipv4_ip_local_port_range_max = (Optional) The sysctl setting net.ipv4.ip_local_port_range max value. Must be between 1024 and 60999. Changing this forces a new resource to be created. net_ipv4_neigh_default_gc_thresh1 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh1. Must be between 128 and 80000. Changing this forces a new resource to be created. net_ipv4_neigh_default_gc_thresh2 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh2. Must be between 512 and 90000. Changing this forces a new resource to be created. net_ipv4_neigh_default_gc_thresh3 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh3. Must be between 1024 and 100000. Changing this forces a new resource to be created. net_ipv4_tcp_fin_timeout = (Optional) The sysctl setting net.ipv4.tcp_fin_timeout. Must be between 5 and 120. Changing this forces a new resource to be created. net_ipv4_tcp_keepalive_intvl = (Optional) The sysctl setting net.ipv4.tcp_keepalive_intvl. Must be between 10 and 75. Changing this forces a new resource to be created. net_ipv4_tcp_keepalive_probes = (Optional) The sysctl setting net.ipv4.tcp_keepalive_probes. Must be between 1 and 15. Changing this forces a new resource to be created. net_ipv4_tcp_keepalive_time = (Optional) The sysctl setting net.ipv4.tcp_keepalive_time. Must be between 30 and 432000. Changing this forces a new resource to be created. net_ipv4_tcp_max_syn_backlog = (Optional) The sysctl setting net.ipv4.tcp_max_syn_backlog. Must be between 128 and 3240000. Changing this forces a new resource to be created. net_ipv4_tcp_max_tw_buckets = (Optional) The sysctl setting net.ipv4.tcp_max_tw_buckets. Must be between 8000 and 1440000. Changing this forces a new resource to be created. net_ipv4_tcp_tw_reuse = (Optional) Is sysctl setting net.ipv4.tcp_tw_reuse enabled? Changing this forces a new resource to be created. net_netfilter_nf_conntrack_buckets = (Optional) The sysctl setting net.netfilter.nf_conntrack_buckets. Must be between 65536 and 147456. Changing this forces a new resource to be created. net_netfilter_nf_conntrack_max = (Optional) The sysctl setting net.netfilter.nf_conntrack_max. Must be between 131072 and 1048576. Changing this forces a new resource to be created. vm_max_map_count = (Optional) The sysctl setting vm.max_map_count. Must be between 65530 and 262144. Changing this forces a new resource to be created. vm_swappiness = (Optional) The sysctl setting vm.swappiness. Must be between 0 and 100. Changing this forces a new resource to be created. vm_vfs_cache_pressure = (Optional) The sysctl setting vm.vfs_cache_pressure. Must be between 0 and 100. Changing this forces a new resource to be created. })) transparent_huge_page_enabled = (Optional) Specifies the Transparent Huge Page enabled configuration. Possible values are always, madvise and never. Changing this forces a new resource to be created. transparent_huge_page_defrag = (Optional) Specifies the defrag configuration for Transparent Huge Page. Possible values are always, defer, defer+madvise, madvise and never. Changing this forces a new resource to be created. swap_file_size_mb = (Optional) Specifies the size of swap file on each node in MB. Changing this forces a new resource to be created. })) fips_enabled = (Optional) Should the nodes in this Node Pool have Federal Information Processing Standard enabled? Changing this forces a new resource to be created. FIPS support is in Public Preview - more information and details on how to opt into the Preview can be found in this article. kubelet_disk_type = (Optional) The type of disk used by kubelet. Possible values are OS and Temporary. max_count = (Optional) The maximum number of nodes which should exist within this Node Pool. Valid values are between 0 and 1000 and must be greater than or equal to min_count. max_pods = (Optional) The maximum number of pods that can run on each agent node. Changing this forces a new resource to be created. message_of_the_day = (Optional) A base64-encoded string which will be written to /etc/motd after decoding. This allows customization of the message of the day for Linux nodes. It cannot be specified for Windows nodes and must be a static string (i.e. will be printed raw and not executed as a script). Changing this forces a new resource to be created. mode = (Optional) Should this Node Pool be used for System or User resources? Possible values are System and User. Defaults to User. min_count = (Optional) The minimum number of nodes which should exist within this Node Pool. Valid values are between 0 and 1000 and must be less than or equal to max_count. node_network_profile = optional(object({ node_public_ip_tags = (Optional) Specifies a mapping of tags to the instance-level public IPs. Changing this forces a new resource to be created. })) node_labels = (Optional) A map of Kubernetes labels which should be applied to nodes in this Node Pool. node_public_ip_prefix_id = (Optional) Resource ID for the Public IP Addresses Prefix for the nodes in this Node Pool. enable_node_public_ip should be true. Changing this forces a new resource to be created. node_taints = (Optional) A list of Kubernetes taints which should be applied to nodes in the agent pool (e.g. key=value:NoSchedule). Changing this forces a new resource to be created. orchestrator_version = (Optional) Version of Kubernetes used for the Agents. If not specified, the latest recommended version will be used at provisioning time (but won't auto-upgrade). AKS does not require an exact patch version to be specified; minor version aliases such as 1.22 are also supported - the minor version's latest GA patch is automatically chosen in that case. More details can be found in the documentation. This version must be supported by the Kubernetes Cluster - as such the version of Kubernetes used on the Cluster/Control Plane may need to be upgraded first. os_disk_size_gb = (Optional) The Agent Operating System disk size in GB. Changing this forces a new resource to be created. os_disk_type = (Optional) The type of disk which should be used for the Operating System. Possible values are Ephemeral and Managed. Defaults to Managed. Changing this forces a new resource to be created. os_sku = (Optional) Specifies the OS SKU used by the agent pool. Possible values include: Ubuntu, CBLMariner, Mariner, Windows2019, Windows2022. If not specified, the default is Ubuntu if OSType=Linux or Windows2019 if OSType=Windows. The default Windows OSSKU will be changed to Windows2022 after Windows2019 is deprecated. Changing this forces a new resource to be created. os_type = (Optional) The Operating System which should be used for this Node Pool. Changing this forces a new resource to be created. Possible values are Linux and Windows. Defaults to Linux. pod_subnet_id = (Optional) The ID of the Subnet where the pods in the Node Pool should exist. Changing this forces a new resource to be created. priority = (Optional) The Priority for Virtual Machines within the Virtual Machine Scale Set that powers this Node Pool. Possible values are Regular and Spot. Defaults to Regular. Changing this forces a new resource to be created. proximity_placement_group_id = (Optional) The ID of the Proximity Placement Group where the Virtual Machine Scale Set that powers this Node Pool will be placed. Changing this forces a new resource to be created. When setting priority to Spot - you must configure an eviction_policy, spot_max_price and add the applicable node_labels and node_taints as per the Azure Documentation. spot_max_price = (Optional) The maximum price you're willing to pay in USD per Virtual Machine. Valid values are -1 (the current on-demand price for a Virtual Machine) or a positive value with up to five decimal places. Changing this forces a new resource to be created. This field can only be configured when priority is set to Spot. scale_down_mode = (Optional) Specifies how the node pool should deal with scaled-down nodes. Allowed values are Delete and Deallocate. Defaults to Delete. snapshot_id = (Optional) The ID of the Snapshot which should be used to create this Node Pool. Changing this forces a new resource to be created. ultra_ssd_enabled = (Optional) Used to specify whether the UltraSSD is enabled in the Node Pool. Defaults to false. See the documentation for more information. Changing this forces a new resource to be created. vnet_subnet_id = (Optional) The ID of the Subnet where this Node Pool should exist. Changing this forces a new resource to be created. A route table must be configured on this Subnet. upgrade_settings = optional(object({ drain_timeout_in_minutes = number node_soak_duration_in_minutes = number max_surge = string })) windows_profile = optional(object({ outbound_nat_enabled = optional(bool, true) })) workload_runtime = (Optional) Used to specify the workload runtime. Allowed values are OCIContainer and WasmWasi. WebAssembly System Interface node pools are in Public Preview - more information and details on how to opt into the preview can be found in this article. zones = (Optional) Specifies a list of Availability Zones in which this Kubernetes Cluster Node Pool should be located. Changing this forces a new Kubernetes Cluster Node Pool to be created. create_before_destroy = (Optional) Create a new node pool before destroying the old one when Terraform must update an argument that cannot be updated in-place. Setting this argument to true will add a random suffix to the pool's name to avoid conflicts. Defaults to true. })) | map(object({...})) | {} | no
open_service_mesh_enabled | Is Open Service Mesh enabled? For more details, please visit Open Service Mesh for AKS. | bool | null | no
key_vault_secrets_provider_enabled | (Optional) Whether to use the Azure Key Vault Provider for Secrets Store CSI Driver in an AKS cluster. If enabled, it creates an MSI for the key vault, assigns it to the VMSS identity and grants it the necessary permissions on the key vault. For more details: https://docs.microsoft.com/en-us/azure/aks/csi-secrets-store-driver | bool | false | no
secret_rotation_enabled | Is secret rotation enabled? This variable is only used when key_vault_secrets_provider_enabled is true and defaults to false | bool | false | no
secret_rotation_interval | The interval to poll for secret rotation. This attribute is only set when secret_rotation is true and defaults to 2m | string | "2m" | no
create_key_vault | Create a new Key Vault to be associated with the AKS cluster | bool | false | no
key_vault_name | The name of the key vault, overriding the naming module | string | null | no
key_vault_role_definition | Permission assigned to the key vault MSI on the key vault. Default is Key Vault Administrator | string | "Key Vault Administrator" | no
additional_key_vault_ids | IDs of the additional key vaults to be associated with the AKS cluster. The key vault MSI will be assigned the role defined in key_vault_role_definition on these key vaults. | list(string) | [] | no
enable_rbac_authorization | Enable Azure Role-Based Access Control on the Key Vault | bool | false | no
sku_tier | The SKU Tier that should be used for this Kubernetes Cluster. Possible values are Free and Standard | string | "Free" | no
node_resource_group | The auto-generated Resource Group which contains the resources for this Managed Kubernetes Cluster. Changing this forces a new resource to be created. | string | null | no
brown_field_application_gateway_for_ingress | Definition of brown_field: id - (Required) The ID of the Application Gateway to be used as cluster ingress. subnet_id - (Required) The ID of the Subnet which the Application Gateway is connected to. Must be set when create_role_assignments is true. | object({...}) | null | no
green_field_application_gateway_for_ingress | Definition of green_field: name - (Optional) The name of the Application Gateway to be used or created in the Nodepool Resource Group, which in turn will be integrated with the ingress controller of this Kubernetes Cluster. subnet_cidr - (Optional) The subnet CIDR to be used to create an Application Gateway, which in turn will be integrated with the ingress controller of this Kubernetes Cluster. subnet_id - (Optional) The ID of the subnet on which to create an Application Gateway, which in turn will be integrated with the ingress controller of this Kubernetes Cluster. | object({...}) | null | no
web_app_routing | object({ dns_zone_id = "(Required) Specifies the ID of the DNS Zone in which DNS entries are created for applications deployed to the cluster when Web App Routing is enabled." }) | object({...}) | null | no
identity_ids | (Optional) Specifies a list of User Assigned Managed Identity IDs to be assigned to this Kubernetes Cluster. | list(string) | [] | no
identity_type | (Optional) The type of identity used for the managed cluster. Conflicts with client_id and client_secret. Possible values are SystemAssigned and UserAssigned. If UserAssigned is set, identity_ids must be set as well. | string | "SystemAssigned" | no
client_id | (Optional) The Client ID (appId) for the Service Principal used for the AKS deployment | string | "" | no
client_secret | (Optional) The Client Secret (password) for the Service Principal used for the AKS deployment | string | "" | no
monitor_metrics | (Optional) Specifies a Prometheus add-on profile for the Kubernetes Cluster. object({ annotations_allowed = "(Optional) Specifies a comma-separated list of Kubernetes annotation keys that will be used in the resource's labels metric." labels_allowed = "(Optional) Specifies a comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric." }) | object({...}) | null | no
container_registry_ids | List of container registry IDs to associate with AKS. This module will assign the AcrPull role to AKS for these registries | list(string) | [] | no
public_dns_zone_name | Name of a public DNS zone to create with the Kubernetes cluster | string | null | no
kv_soft_delete_retention_days | Number of retention days for soft delete for the key vault | number | 7 | no
kv_sku | SKU for the key vault - standard or premium | string | "standard" | no
kv_access_policies | Additional access policies for the vault besides the current user, which is added by default | map(object({...})) | {} | no
certificates | Map of certificates to be imported. The pfx files should be present in the root of the module (path.root) and named as certificate_name | map(object({...})) | {} | no
secrets | Map of secrets (name and value) | map(string) | {} | no
keys | Map of keys to be created in the key vault. The name of the key is the key of the map | map(object({...})) | {} | no
disable_bgp_route_propagation | Disable BGP route propagation on the routing table that AKS manages. | bool | false | no
create_application_insights | If true, create a new Application Insights resource to be associated with the AKS cluster | bool | false | no
application_insights | Details for the Application Insights resource to be associated with the AKS cluster. Required only when create_application_insights=true | object({...}) | {} | no
create_monitor_private_link_scope | If true, create a new Private Link Scope for Azure Monitor. NOTE: This will cause all Azure Monitor / Log Analytics traffic to go through private link. | bool | false | no
brown_field_monitor_private_link_scope_id | An existing monitor private link scope to associate with Azure Monitor resources. Usable only when create_monitor_private_link_scope is false. | string | null | no
monitor_private_link_scope_subnet_id | The ID of the subnet to associate with the Azure Monitor private link scope | string | null | no
monitor_private_link_scope_dns_zone_suffixes | The DNS zone suffixes for the private link scope | set(string) | [...] | no
enable_prometheus_monitoring | Deploy Prometheus monitoring resources with the AKS cluster | bool | false | no
brown_field_prometheus_monitor_workspace_id | The ID of an existing Azure Monitor workspace to use for Prometheus monitoring | string | null | no
prometheus_workspace_public_access_enabled | Enable public access to the Azure Monitor workspace for Prometheus | bool | true | no
enable_prometheus_monitoring_private_endpoint | Enable private endpoint for Prometheus monitoring | bool | false | no
prometheus_monitoring_private_endpoint_subnet_id | The ID of a subnet to create a private endpoint for Prometheus monitoring | string | null | no
prometheus_enable_default_rule_groups | Enable default recording rules for Prometheus | bool | true | no
prometheus_default_rule_group_naming | Resource names for the default recording rules | map(string) | {...} | no
prometheus_default_rule_group_interval | Interval to run default recording rules in ISO 8601 format (between PT1M and PT15M) | string | "PT1M" | no
prometheus_rule_groups | map(object({ enabled = Whether or not the rule group is enabled. description = Description of the rule group. interval = Interval to run the rule group in ISO 8601 format (between PT1M and PT15M). recording_rules = list(object({ name = Name of the recording rule. enabled = Whether or not the recording rule is enabled. expression = PromQL expression for the time series value. labels = Labels to add to the time series. })) alert_rules = list(object({ name = Name of the alerting rule. action = optional(object({ action_group_id = ID of the action group to send alerts to. })) enabled = Whether or not the alert rule is enabled. expression = PromQL expression to evaluate. for = Amount of time the alert must be active before firing, represented in ISO 8601 duration format (i.e. PT5M). labels = Labels to add to the alerts fired by this rule. alert_resolution = optional(object({ auto_resolved = Whether or not to auto-resolve the alert after the condition is no longer true. time_to_resolve = Amount of time to wait before auto-resolving the alert, represented in ISO 8601 duration format (i.e. PT5M). })) severity = Severity of the alert, between 0 and 4. annotations = Annotations to add to the alerts fired by this rule. })) })) | map(object({...})) | {} | no
tags | A map of custom tags to be attached to this module's resources | map(string) | {} | no
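Taken together, the inputs above can be wired up roughly as follows. This is a minimal sketch, not a tested configuration: the module source path, resource names, CIDRs and identity reference are illustrative, and only a small subset of the attributes of each object variable is shown.

```hcl
module "aks" {
  # Illustrative source path; substitute the actual registry path of this collection module.
  source = "terraform.registry.launch.nttdata.com/module_collection/aks/azurerm"

  # Log Analytics / Container Insights integration
  log_analytics_workspace_enabled = true
  log_analytics_workspace_sku     = "PerGB2018"
  log_retention_in_days           = 90

  # Cluster identity: user-assigned MSI instead of the default SystemAssigned
  identity_type = "UserAssigned"
  identity_ids  = [azurerm_user_assigned_identity.aks.id] # hypothetical identity resource

  # Node pools keyed by a static string; the module appends a 4-character suffix to each name
  node_pools = {
    workload = {
      name                = "usrpool"
      vm_size             = "Standard_D4s_v5"
      mode                = "User"
      enable_auto_scaling = true
      min_count           = 1
      max_count           = 5
      node_labels         = { workload = "general" }
    }
  }

  # Azure Key Vault Provider for Secrets Store CSI Driver, with rotation
  key_vault_secrets_provider_enabled = true
  secret_rotation_enabled            = true
  secret_rotation_interval           = "5m"

  # Let the module create an Application Gateway for ingress (green field)
  green_field_application_gateway_for_ingress = {
    name        = "agw-aks-ingress"
    subnet_cidr = "10.1.4.0/24"
  }

  # Prometheus monitoring with one custom alert rule group
  enable_prometheus_monitoring = true
  prometheus_rule_groups = {
    node-alerts = {
      enabled         = true
      description     = "Alert when a node reports NotReady"
      interval        = "PT1M"
      recording_rules = []
      alert_rules = [{
        name        = "KubeNodeNotReady"
        enabled     = true
        expression  = "kube_node_status_condition{condition=\"Ready\",status=\"true\"} == 0"
        for         = "PT5M"
        severity    = 3
        labels      = { team = "platform" }
        annotations = { summary = "Node not ready" }
      }]
    }
  }

  tags = { env = "demo" }
}
```

The examples directory demonstrates complete, working flavors of this configuration.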
Name | Description |
---|---|
kube_config_raw | The azurerm_kubernetes_cluster's kube_config_raw argument. Raw Kubernetes config to be used by kubectl and other compatible tools. |
kube_admin_config_raw | The azurerm_kubernetes_cluster's kube_admin_config_raw argument. Raw Kubernetes config for the admin account to be used by kubectl and other compatible tools. This is only available when Role Based Access Control with Azure Active Directory is enabled and local accounts are enabled. |
admin_host | The host in the azurerm_kubernetes_cluster's kube_admin_config block. The Kubernetes cluster server host. |
admin_password | The password in the azurerm_kubernetes_cluster's kube_admin_config block. A password or token used to authenticate to the Kubernetes cluster. |
admin_username | The username in the azurerm_kubernetes_cluster's kube_admin_config block. A username used to authenticate to the Kubernetes cluster. |
azure_policy_enabled | The azurerm_kubernetes_cluster's azure_policy_enabled argument. Should the Azure Policy Add-On be enabled? For more details please visit Understand Azure Policy for Azure Kubernetes Service |
azurerm_log_analytics_workspace_id | The id of the created Log Analytics workspace |
azurerm_log_analytics_workspace_name | The name of the created Log Analytics workspace |
azurerm_log_analytics_workspace_primary_shared_key | Specifies the workspace key of the log analytics workspace |
client_certificate | The client_certificate in the azurerm_kubernetes_cluster's kube_config block. Base64 encoded public certificate used by clients to authenticate to the Kubernetes cluster. |
client_key | The client_key in the azurerm_kubernetes_cluster's kube_config block. Base64 encoded private key used by clients to authenticate to the Kubernetes cluster. |
cluster_ca_certificate | The cluster_ca_certificate in the azurerm_kubernetes_cluster's kube_config block. Base64 encoded public CA certificate used as the root of trust for the Kubernetes cluster. |
cluster_fqdn | The FQDN of the Azure Kubernetes Managed Cluster. |
cluster_portal_fqdn | The FQDN for the Azure Portal resources when private link has been enabled, which is only resolvable inside the Virtual Network used by the Kubernetes Cluster. |
cluster_private_fqdn | The FQDN for the Kubernetes Cluster when private link has been enabled, which is only resolvable inside the Virtual Network used by the Kubernetes Cluster. |
generated_cluster_private_ssh_key | The cluster will use this generated private key as its SSH key when var.public_ssh_key is empty or null. Private key data in PEM (RFC 1421) format. |
generated_cluster_public_ssh_key | The cluster will use this generated public key as its SSH key when var.public_ssh_key is empty or null. The fingerprint of the public key data in OpenSSH MD5 hash format, e.g. aa:bb:cc:.... Only available if the selected private key format is compatible, similarly to public_key_openssh and the ECDSA P224 limitations. |
host | The host in the azurerm_kubernetes_cluster's kube_config block. The Kubernetes cluster server host. |
cluster_name | Name of the AKS cluster |
cluster_id | ID of the AKS cluster |
resource_group_name | Name of the Resource Group for the AKS Cluster. The node resource group is separate and is created automatically by Azure |
application_gateway_id | Application Gateway ID when an Application Gateway is used for ingress |
key_vault_secrets_provider | The azurerm_kubernetes_cluster's key_vault_secrets_provider block. |
key_vault_secrets_provider_enabled | Whether the azurerm_kubernetes_cluster's key_vault_secrets_provider block is enabled. |
cluster_identity | The azurerm_kubernetes_cluster's identity block. |
kubelet_identity | The azurerm_kubernetes_cluster's kubelet_identity block. |
key_vault_id | Custom Key Vault ID |
node_resource_group | The auto-generated Resource Group which contains the resources for this Managed Kubernetes Cluster. |
location | The azurerm_kubernetes_cluster's location argument. (Required) The location where the Managed Kubernetes Cluster should be created. |
network_profile | The azurerm_kubernetes_cluster's network_profile block |
password | The password in the azurerm_kubernetes_cluster's kube_config block. A password or token used to authenticate to the Kubernetes cluster. |
username | The username in the azurerm_kubernetes_cluster's kube_config block. A username used to authenticate to the Kubernetes cluster. |
oidc_issuer_url | The OIDC issuer URL of the AKS cluster. |
private_cluster_dns_zone_id | ID of the private DNS zone for the private cluster. Created only for private cluster |
private_cluster_dns_zone_name | Name of the private DNS zone for the private cluster. Created only for private cluster |
public_dns_zone_id | ID of the public DNS zone created with the cluster |
public_dns_zone_name_servers | Name servers of the public DNS zone created with the cluster |
user_assigned_msi_object_id | The object ID of the user assigned managed identity. |
user_assigned_msi_client_id | The client ID of the user assigned managed identity. |
additional_vnet_links | The additional VNet links on the DNS zone of private AKS cluster. |
application_insights_id | Resource ID of the app insights instance |
monitor_private_link_scope_id | Resource ID of the monitor private link scope |
monitor_private_link_scope_dns_zone_ids | Map of Resource IDs of the monitor private link scope DNS zones |
monitor_private_link_scope_private_endpoint_id | Resource ID of the monitor private link scope private endpoint |
prometheus_workspace_id | Resource ID of the Prometheus Monitor workspace |
prometheus_workspace_dns_zone_id | Resource ID of the Prometheus Monitor workspace DNS zone |
prometheus_workspace_private_endpoint_id | Resource ID of the Prometheus Monitor workspace private endpoint |
prometheus_data_collection_endpoint_id | Resource ID of the Prometheus Monitor data collection endpoint |
prometheus_data_collection_rule_id | Resource ID of the Prometheus Monitor data collection rule |
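The kube_config outputs above can feed a Kubernetes provider directly. A sketch, assuming the module is instantiated under the label `module.aks`:

```hcl
# The certificate outputs are base64-encoded, so decode them for the provider.
provider "kubernetes" {
  host                   = module.aks.host
  client_certificate     = base64decode(module.aks.client_certificate)
  client_key             = base64decode(module.aks.client_key)
  cluster_ca_certificate = base64decode(module.aks.cluster_ca_certificate)
}
```

Note that certificate-based authentication is only usable when local accounts are enabled; with Azure AD RBAC, an exec-based credential plugin such as kubelogin is the usual alternative.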
Name | Version |
---|---|
terraform | ~> 1.0 |
azapi | >= 1.4.0, < 2.0 |
azurerm | ~> 3.117 |
No providers.
Name | Source | Version |
---|---|---|
resource_names | terraform.registry.launch.nttdata.com/module_library/resource_name/launch | ~> 2.0 |
resource_group | terraform.registry.launch.nttdata.com/module_primitive/resource_group/azurerm | ~> 1.0 |
key_vault | terraform.registry.launch.nttdata.com/module_primitive/key_vault/azurerm | ~> 1.0 |
key_vault_role_assignment | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
additional_key_vaults_role_assignment | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
cluster_identity | terraform.registry.launch.nttdata.com/module_primitive/user_managed_identity/azurerm | ~> 1.0 |
cluster_identity_roles | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
private_cluster_dns_zone | terraform.registry.launch.nttdata.com/module_primitive/private_dns_zone/azurerm | ~> 1.0 |
cluster_identity_private_dns_contributor | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
vnet_links | terraform.registry.launch.nttdata.com/module_primitive/private_dns_vnet_link/azurerm | ~> 1.0 |
route_table | terraform.registry.launch.nttdata.com/module_primitive/route_table/azurerm | ~> 1.0 |
udr_route_table_role_assignment | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
routes | terraform.registry.launch.nttdata.com/module_primitive/route/azurerm | ~> 1.0 |
subnet_route_table_assoc | terraform.registry.launch.nttdata.com/module_primitive/routetable_subnet_association/azurerm | ~> 1.0 |
aks | terraform.registry.launch.nttdata.com/module_primitive/kubernetes_cluster/azurerm | ~> 2.0 |
node_pool_identity_roles | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
additional_acr_role_assignments | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
public_dns_zone | terraform.registry.launch.nttdata.com/module_primitive/dns_zone/azurerm | ~> 1.0 |
kubelet_public_dns_contributor | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
kubelet_resource_group_reader | terraform.registry.launch.nttdata.com/module_primitive/role_assignment/azurerm | ~> 1.0 |
application_insights | terraform.registry.launch.nttdata.com/module_primitive/application_insights/azurerm | ~> 1.0 |
prometheus_monitor_workspace | terraform.registry.launch.nttdata.com/module_primitive/monitor_workspace/azurerm | ~> 1.0 |
prometheus_monitor_data_collection | terraform.registry.launch.nttdata.com/module_primitive/monitor_prometheus/azurerm | ~> 1.0 |
prometheus_monitor_workspace_private_dns_zone | terraform.registry.launch.nttdata.com/module_primitive/private_dns_zone/azurerm | ~> 1.0 |
prometheus_monitor_workspace_vnet_link | terraform.registry.launch.nttdata.com/module_primitive/private_dns_vnet_link/azurerm | ~> 1.0 |
prometheus_monitor_workspace_private_endpoint | terraform.registry.launch.nttdata.com/module_primitive/private_endpoint/azurerm | ~> 1.0 |
monitor_private_link_scope | terraform.registry.launch.nttdata.com/module_primitive/azure_monitor_private_link_scope/azurerm | ~> 1.0 |
monitor_private_link_scope_dns_zone | terraform.registry.launch.nttdata.com/module_primitive/private_dns_zone/azurerm | ~> 1.0 |
monitor_private_link_scope_vnet_link | terraform.registry.launch.nttdata.com/module_primitive/private_dns_vnet_link/azurerm | ~> 1.0 |
monitor_private_link_scope_private_endpoint | terraform.registry.launch.nttdata.com/module_primitive/private_endpoint/azurerm | ~> 1.0 |
monitor_private_link_scoped_service | terraform.registry.launch.nttdata.com/module_primitive/monitor_private_link_scoped_service/azurerm | ~> 1.0 |
No resources.
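The inputs below can be wired up as in the following minimal sketch. The module source path and version here are assumptions for illustration (the real registry path for this collection module may differ); the input names and defaults are taken from the inputs table.

```hcl
module "aks" {
  # Hypothetical source/version -- substitute the actual registry path of this collection module
  source  = "terraform.registry.launch.nttdata.com/module_collection/aks/azurerm"
  version = "~> 1.0"

  # Naming inputs (defaults shown in the inputs table)
  product_family  = "dso"
  product_service = "kube"
  environment     = "dev"
  region          = "eastus"

  # Cluster basics
  kubernetes_version = "1.32"
  network_plugin     = "azure"
}
```

Running `terraform init` and `terraform plan` against such a root module is a quick way to verify the generated resource names before applying.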
Name | Description | Type | Default | Required |
---|---|---|---|---|
product_family | (Required) Name of the product family for which the resource is created. Example: org_name, department_name. | string | "dso" | no |
product_service | (Required) Name of the product service for which the resource is created. For example, backend, frontend, middleware etc. | string | "kube" | no |
environment | Environment in which the resource should be provisioned like dev, qa, prod etc. | string | "dev" | no |
environment_number | The environment count for the respective environment. Defaults to 000. Increments in value of 1 | string | "000" | no |
resource_number | The resource count for the respective resource. Defaults to 000. Increments in value of 1 | string | "000" | no |
region | Azure region in which the infra needs to be provisioned | string | "eastus" | no |
resource_names_map | A map of key to resource_name that will be used by tf-launch-module_library-resource_name to generate resource names | map(object({…})) | {…} | no |
resource_group_name | Name of the resource group in which the AKS cluster will be created. If not provided, this module will create one | string | null | no |
kubernetes_version | Specify which Kubernetes release to use. The default used is the latest Kubernetes version available in the region. Use az aks get-versions --location <region> to find the available versions in the region | string | "1.32" | no |
network_plugin | Network plugin to use for networking. Default is azure. | string | "azure" | no |
network_plugin_mode | (Optional) Specifies the network plugin mode used for building the Kubernetes network. Possible value is Overlay. Changing this forces a new resource to be created. | string | null | no |
network_policy | (Optional) Sets up network policy to be used with Azure CNI. Network policy allows us to control the traffic flow between pods. Currently supported values are calico and azure. Changing this forces a new resource to be created. | string | null | no |
private_cluster_enabled | If true cluster API server will be exposed only on internal IP address and available only in cluster vnet. | bool | false | no |
private_cluster_public_fqdn_enabled | (Optional) Specifies whether a Public FQDN for this Private Cluster should be added. Defaults to false. | bool | false | no |
azure_policy_enabled | Enable Azure Policy add-on for AKS cluster? Defaults to false. | bool | false | no |
additional_vnet_links | A map of VNet IDs for which VNet links are to be created with the private AKS cluster DNS zone. Applicable only when private_cluster_enabled is true. | map(string) | {} | no |
cluster_identity_role_assignments | A map of role assignments to be associated with the cluster identity. Should be of the format { private-dns = ["Private DNS Zone Contributor", ""] dns = ["DNS Zone Contributor", ""] } | map(list(string)) | {} | no |
node_pool_identity_role_assignments | A map of role assignments to be associated with the node-pool identity. Should be of the format { private-dns = ["Private DNS Zone Contributor", ""] dns = ["DNS Zone Contributor", ""] } | map(list(string)) | {} | no |
dns_zone_suffix | The DNS zone suffix for the AKS cluster private DNS zone. Default is azmk8s.io for Public Cloud; for Gov Cloud it is cx.aks.containerservice.azure.us | string | "azmk8s.io" | no |
vnet_subnet_id | (Optional) The ID of a Subnet where the Kubernetes Node Pool should exist. Changing this forces a new resource to be created. | string | null | no |
net_profile_dns_service_ip | (Optional) IP address within the Kubernetes service address range that will be used by cluster service discovery (kube-dns). Changing this forces a new resource to be created. | string | null | no |
net_profile_outbound_type | (Optional) The outbound (egress) routing method which should be used for this Kubernetes Cluster. Possible values are loadBalancer and userDefinedRouting. Defaults to loadBalancer. If userDefinedRouting is selected, the user_defined_routing variable is required. | string | "loadBalancer" | no |
user_defined_routing | Required only when net_profile_outbound_type is set to userDefinedRouting. The private IP address of the Azure Firewall instance is needed to create a route in the custom route table | object({…}) | null | no |
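When egress must flow through a firewall, the outbound-type inputs above combine roughly as follows. The exact attributes of the user_defined_routing object are truncated in this table, so the azure_firewall_private_ip field below is a guess based on the description, not a confirmed schema:

```hcl
  net_profile_outbound_type = "userDefinedRouting"

  user_defined_routing = {
    # Hypothetical attribute name; the object schema is abbreviated in the docs above
    azure_firewall_private_ip = "10.0.1.4"
  }
```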
net_profile_pod_cidr | (Optional) The CIDR to use for pod IP addresses. This field can only be set when network_plugin is set to kubenet. Changing this forces a new resource to be created. | string | null | no |
net_profile_service_cidr | (Optional) The Network Range used by the Kubernetes service. Changing this forces a new resource to be created. | string | null | no |
agents_availability_zones | (Optional) A list of Availability Zones across which the Node Pool should be spread. Changing this forces a new resource to be created. | list(string) | null | no |
agents_count | The number of Agents that should exist in the Agent Pool. Please set agents_count null while enable_auto_scaling is true to avoid possible agents_count changes. | number | 2 | no |
agents_labels | (Optional) A map of Kubernetes labels which should be applied to nodes in the Default Node Pool. Changing this forces a new resource to be created. | map(string) | {} | no |
agents_max_count | Maximum number of nodes in a pool | number | null | no |
agents_max_pods | (Optional) The maximum number of pods that can run on each agent. Changing this forces a new resource to be created. | number | null | no |
agents_min_count | Minimum number of nodes in a pool | number | null | no |
agents_pool_max_surge | The maximum number or percentage of nodes which will be added to the Default Node Pool size during an upgrade. | string | null | no |
agents_pool_linux_os_configs | list(object({ sysctl_configs = optional(list(object({ fs_aio_max_nr = (Optional) The sysctl setting fs.aio-max-nr. Must be between 65536 and 6553500. Changing this forces a new resource to be created. fs_file_max = (Optional) The sysctl setting fs.file-max. Must be between 8192 and 12000500. Changing this forces a new resource to be created. fs_inotify_max_user_watches = (Optional) The sysctl setting fs.inotify.max_user_watches. Must be between 781250 and 2097152. Changing this forces a new resource to be created. fs_nr_open = (Optional) The sysctl setting fs.nr_open. Must be between 8192 and 20000500. Changing this forces a new resource to be created. kernel_threads_max = (Optional) The sysctl setting kernel.threads-max. Must be between 20 and 513785. Changing this forces a new resource to be created. net_core_netdev_max_backlog = (Optional) The sysctl setting net.core.netdev_max_backlog. Must be between 1000 and 3240000. Changing this forces a new resource to be created. net_core_optmem_max = (Optional) The sysctl setting net.core.optmem_max. Must be between 20480 and 4194304. Changing this forces a new resource to be created. net_core_rmem_default = (Optional) The sysctl setting net.core.rmem_default. Must be between 212992 and 134217728. Changing this forces a new resource to be created. net_core_rmem_max = (Optional) The sysctl setting net.core.rmem_max. Must be between 212992 and 134217728. Changing this forces a new resource to be created. net_core_somaxconn = (Optional) The sysctl setting net.core.somaxconn. Must be between 4096 and 3240000. Changing this forces a new resource to be created. net_core_wmem_default = (Optional) The sysctl setting net.core.wmem_default. Must be between 212992 and 134217728. Changing this forces a new resource to be created. net_core_wmem_max = (Optional) The sysctl setting net.core.wmem_max. Must be between 212992 and 134217728. Changing this forces a new resource to be created. net_ipv4_ip_local_port_range_min = (Optional) The sysctl setting net.ipv4.ip_local_port_range min value. Must be between 1024 and 60999. Changing this forces a new resource to be created. net_ipv4_ip_local_port_range_max = (Optional) The sysctl setting net.ipv4.ip_local_port_range max value. Must be between 1024 and 60999. Changing this forces a new resource to be created. net_ipv4_neigh_default_gc_thresh1 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh1. Must be between 128 and 80000. Changing this forces a new resource to be created. net_ipv4_neigh_default_gc_thresh2 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh2. Must be between 512 and 90000. Changing this forces a new resource to be created. net_ipv4_neigh_default_gc_thresh3 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh3. Must be between 1024 and 100000. Changing this forces a new resource to be created. net_ipv4_tcp_fin_timeout = (Optional) The sysctl setting net.ipv4.tcp_fin_timeout. Must be between 5 and 120. Changing this forces a new resource to be created. net_ipv4_tcp_keepalive_intvl = (Optional) The sysctl setting net.ipv4.tcp_keepalive_intvl. Must be between 10 and 75. Changing this forces a new resource to be created. net_ipv4_tcp_keepalive_probes = (Optional) The sysctl setting net.ipv4.tcp_keepalive_probes. Must be between 1 and 15. Changing this forces a new resource to be created. net_ipv4_tcp_keepalive_time = (Optional) The sysctl setting net.ipv4.tcp_keepalive_time. Must be between 30 and 432000. Changing this forces a new resource to be created. net_ipv4_tcp_max_syn_backlog = (Optional) The sysctl setting net.ipv4.tcp_max_syn_backlog. Must be between 128 and 3240000. Changing this forces a new resource to be created. net_ipv4_tcp_max_tw_buckets = (Optional) The sysctl setting net.ipv4.tcp_max_tw_buckets. Must be between 8000 and 1440000. Changing this forces a new resource to be created. net_ipv4_tcp_tw_reuse = (Optional) The sysctl setting net.ipv4.tcp_tw_reuse. Changing this forces a new resource to be created. net_netfilter_nf_conntrack_buckets = (Optional) The sysctl setting net.netfilter.nf_conntrack_buckets. Must be between 65536 and 147456. Changing this forces a new resource to be created. net_netfilter_nf_conntrack_max = (Optional) The sysctl setting net.netfilter.nf_conntrack_max. Must be between 131072 and 1048576. Changing this forces a new resource to be created. vm_max_map_count = (Optional) The sysctl setting vm.max_map_count. Must be between 65530 and 262144. Changing this forces a new resource to be created. vm_swappiness = (Optional) The sysctl setting vm.swappiness. Must be between 0 and 100. Changing this forces a new resource to be created. vm_vfs_cache_pressure = (Optional) The sysctl setting vm.vfs_cache_pressure. Must be between 0 and 100. Changing this forces a new resource to be created. })), []) transparent_huge_page_enabled = (Optional) Specifies the Transparent Huge Page enabled configuration. Possible values are always, madvise and never. Changing this forces a new resource to be created. transparent_huge_page_defrag = (Optional) Specifies the defrag configuration for Transparent Huge Page. Possible values are always, defer, defer+madvise, madvise and never. Changing this forces a new resource to be created. swap_file_size_mb = (Optional) Specifies the size of the swap file on each node in MB. Changing this forces a new resource to be created. })) | list(object({…})) | [] | no |
agents_pool_name | The default Azure AKS agentpool (nodepool) name. | string | "nodepool" | no |
agents_proximity_placement_group_id | (Optional) The ID of the Proximity Placement Group of the default Azure AKS agentpool (nodepool). Changing this forces a new resource to be created. | string | null | no |
agents_size | The default virtual machine size for the Kubernetes agents. Changing this without specifying var.temporary_name_for_rotation forces a new resource to be created. | string | "Standard_D2_v2" | no |
temporary_name_for_rotation | (Optional) Specifies the name of the temporary node pool used to cycle the default node pool for VM resizing. When this is set, var.agents_size is no longer ForceNew and can be resized by specifying temporary_name_for_rotation | string | null | no |
agents_tags | (Optional) A mapping of tags to assign to the Node Pool. | map(string) | {} | no |
agents_taints | (Optional) A list of the taints added to new nodes during node pool create and scale. Changing this forces a new resource to be created. | list(string) | null | no |
agents_type | (Optional) The type of Node Pool which should be created. Possible values are AvailabilitySet and VirtualMachineScaleSets. Defaults to VirtualMachineScaleSets. | string | "VirtualMachineScaleSets" | no |
api_server_authorized_ip_ranges | (Optional) The IP ranges to allow for incoming traffic to the server nodes. | set(string) | null | no |
api_server_subnet_id | (Optional) The ID of the Subnet where the API server endpoint is delegated to. | string | null | no |
attached_acr_id_map | Azure Container Registry IDs that need an authentication mechanism with Azure Kubernetes Service (AKS). The map key must be a static string (the ACR's name); the value is the ACR's resource ID. Changing this forces some new resources to be created. | map(string) | {} | no |
enable_auto_scaling | Enable node pool autoscaling. Please set agents_count null while enable_auto_scaling is true to avoid possible agents_count changes. | bool | false | no |
auto_scaler_profile_balance_similar_node_groups | Detect similar node groups and balance the number of nodes between them. Defaults to false. | bool | false | no |
auto_scaler_profile_empty_bulk_delete_max | Maximum number of empty nodes that can be deleted at the same time. Defaults to 10. | number | 10 | no |
auto_scaler_profile_enabled | Enable configuring the auto scaler profile | bool | false | no |
auto_scaler_profile_expander | Expander to use. Possible values are least-waste, priority, most-pods and random. Defaults to random. | string | "random" | no |
auto_scaler_profile_max_graceful_termination_sec | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node. Defaults to 600. | string | "600" | no |
auto_scaler_profile_max_node_provisioning_time | Maximum time the autoscaler waits for a node to be provisioned. Defaults to 15m. | string | "15m" | no |
auto_scaler_profile_max_unready_nodes | Maximum number of allowed unready nodes. Defaults to 3. | number | 3 | no |
auto_scaler_profile_max_unready_percentage | Maximum percentage of unready nodes; the cluster autoscaler stops operating if this percentage is exceeded. Defaults to 45. | number | 45 | no |
auto_scaler_profile_new_pod_scale_up_delay | For scenarios like burst/batch scale where you don't want CA to act before the kubernetes scheduler could schedule all the pods, you can tell CA to ignore unscheduled pods before they're a certain age. Defaults to 10s. | string | "10s" | no |
auto_scaler_profile_scale_down_delay_after_add | How long after the scale up of AKS nodes the scale down evaluation resumes. Defaults to 10m. | string | "10m" | no |
auto_scaler_profile_scale_down_delay_after_delete | How long after node deletion that scale down evaluation resumes. Defaults to the value used for scan_interval. | string | null | no |
auto_scaler_profile_scale_down_delay_after_failure | How long after scale down failure that scale down evaluation resumes. Defaults to 3m. | string | "3m" | no |
auto_scaler_profile_scale_down_unneeded | How long a node should be unneeded before it is eligible for scale down. Defaults to 10m. | string | "10m" | no |
auto_scaler_profile_scale_down_unready | How long an unready node should be unneeded before it is eligible for scale down. Defaults to 20m. | string | "20m" | no |
auto_scaler_profile_scale_down_utilization_threshold | Node utilization level, defined as sum of requested resources divided by capacity, below which a node can be considered for scale down. Defaults to 0.5. | string | "0.5" | no |
auto_scaler_profile_scan_interval | How often the AKS Cluster should be re-evaluated for scale up/down. Defaults to 10s. | string | "10s" | no |
auto_scaler_profile_skip_nodes_with_local_storage | If true cluster autoscaler will never delete nodes with pods with local storage, for example, EmptyDir or HostPath. Defaults to true. | bool | true | no |
auto_scaler_profile_skip_nodes_with_system_pods | If true cluster autoscaler will never delete nodes with pods from kube-system (except for DaemonSet or mirror pods). Defaults to true. | bool | true | no |
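Putting the autoscaler inputs above together, a sketch of an autoscaled default node pool (to be placed inside the module block; values are illustrative):

```hcl
  enable_auto_scaling = true
  agents_count        = null # keep null while autoscaling is enabled, per the inputs table
  agents_min_count    = 2
  agents_max_count    = 5

  auto_scaler_profile_enabled                          = true
  auto_scaler_profile_expander                         = "least-waste"
  auto_scaler_profile_scale_down_utilization_threshold = "0.5"
```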
os_disk_size_gb | Disk size of nodes in GBs. | number | 50 | no |
os_disk_type | The type of disk which should be used for the Operating System. Possible values are Ephemeral and Managed. Defaults to Managed. Changing this forces a new resource to be created. | string | "Managed" | no |
os_sku | (Optional) Specifies the OS SKU used by the agent pool. Possible values include: Ubuntu, CBLMariner, Mariner, Windows2019, Windows2022. If not specified, the default is Ubuntu if OSType=Linux or Windows2019 if OSType=Windows. And the default Windows OSSKU will be changed to Windows2022 after Windows2019 is deprecated. Changing this forces a new resource to be created. | string | null | no |
pod_subnet_id | (Optional) The ID of the Subnet where the pods in the default Node Pool should exist. Changing this forces a new resource to be created. | string | null | no |
rbac_aad | (Optional) Is Azure Active Directory integration enabled? | bool | false | no |
rbac_aad_admin_group_object_ids | Object IDs of groups with admin access. | list(string) | null | no |
rbac_aad_azure_rbac_enabled | (Optional) Is Role Based Access Control based on Azure AD enabled? | bool | null | no |
rbac_aad_client_app_id | The Client ID of an Azure Active Directory Application. | string | null | no |
rbac_aad_managed | Is the Azure Active Directory integration Managed, meaning that Azure will create/manage the Service Principal used for integration. | bool | false | no |
rbac_aad_server_app_id | The Server ID of an Azure Active Directory Application. | string | null | no |
rbac_aad_server_app_secret | The Server Secret of an Azure Active Directory Application. | string | null | no |
rbac_aad_tenant_id | (Optional) The Tenant ID used for Azure Active Directory Application. If this isn't specified the Tenant ID of the current Subscription is used. | string | null | no |
role_based_access_control_enabled | Enable Role Based Access Control. If this is disabled, then identity_type='SystemAssigned' by default | bool | false | no |
local_account_disabled | (Optional) If true local accounts will be disabled. Defaults to false. See the documentation for more information. | bool | null | no |
oidc_issuer_enabled | Enable or Disable the OIDC issuer URL. Defaults to false. | bool | false | no |
workload_identity_enabled | Enable or Disable Workload Identity. If enabled, oidc_issuer_enabled must be true. Defaults to false. | bool | false | no |
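The identity and RBAC toggles above can be combined as in this sketch (inside the module block); the group object ID is a placeholder:

```hcl
  # Workload identity requires the OIDC issuer to be enabled
  oidc_issuer_enabled       = true
  workload_identity_enabled = true

  # Managed Azure AD integration with Azure RBAC
  rbac_aad                        = true
  rbac_aad_managed                = true
  rbac_aad_azure_rbac_enabled     = true
  rbac_aad_admin_group_object_ids = ["00000000-0000-0000-0000-000000000000"] # placeholder
```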
cluster_log_analytics_workspace_name | (Optional) The name of the Log Analytics workspace to create | string | null | no |
log_analytics_workspace | (Optional) Existing azurerm_log_analytics_workspace to attach azurerm_log_analytics_solution. Providing the config disables creation of azurerm_log_analytics_workspace. | object({…}) | null | no |
log_analytics_workspace_allow_resource_only_permissions | (Optional) Specifies whether the Log Analytics Workspace allows users to access data associated with resources they have permission to view, without requiring workspace-level permission. Defaults to true. | bool | null | no |
log_analytics_workspace_cmk_for_query_forced | (Optional) Is Customer Managed Storage mandatory for query management? | bool | null | no |
log_analytics_workspace_daily_quota_gb | (Optional) The workspace daily quota for ingestion in GB. Defaults to -1 (unlimited) if omitted. | number | null | no |
log_analytics_workspace_data_collection_rule_id | (Optional) The ID of the Data Collection Rule to use for this workspace. | string | null | no |
log_analytics_workspace_enabled | Enable the integration of azurerm_log_analytics_workspace and azurerm_log_analytics_solution: https://docs.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-onboard | bool | true | no |
log_analytics_workspace_resource_group_name | (Optional) Resource group name to create azurerm_log_analytics_solution. | string | null | no |
log_analytics_workspace_sku | The SKU (pricing level) of the Log Analytics workspace. For new subscriptions the SKU should be set to PerGB2018 | string | "PerGB2018" | no |
log_analytics_workspace_identity | identity_ids - (Optional) Specifies a list of user managed identity ids to be assigned. Required if type is UserAssigned. type - (Required) Specifies the identity type of the Log Analytics Workspace. Possible values are SystemAssigned (where Azure will generate a Service Principal for you) and UserAssigned where you can specify the Service Principal IDs in the identity_ids field. | object({…}) | null | no |
log_analytics_workspace_immediate_data_purge_on_30_days_enabled | (Optional) Whether to remove the data in the Log Analytics Workspace immediately after 30 days. | bool | null | no |
log_analytics_workspace_internet_ingestion_enabled | (Optional) Should the Log Analytics Workspace support ingestion over the Public Internet? Defaults to true. | bool | null | no |
log_analytics_workspace_internet_query_enabled | (Optional) Should the Log Analytics Workspace support querying over the Public Internet? Defaults to true. | bool | null | no |
log_analytics_workspace_local_authentication_disabled | (Optional) Specifies if the Log Analytics workspace should enforce authentication using Azure AD. Defaults to false. | bool | null | no |
log_analytics_workspace_reservation_capacity_in_gb_per_day | (Optional) The capacity reservation level in GB for this workspace. Possible values are 100, 200, 300, 400, 500, 1000, 2000 and 5000. | number | null | no |
log_retention_in_days | The retention period for the logs in days | number | 30 | no |
node_pools | A map of node pools that need to be created and attached on the Kubernetes cluster. The key of the map can be the name of the node pool, and the key must be static string. The value of the map is a node_pool block as defined below:map(object({ name = (Required) The name of the Node Pool which should be created within the Kubernetes Cluster. Changing this forces a new resource to be created. A Windows Node Pool cannot have a name longer than 6 characters. A random suffix of 4 characters is always added to the name to avoid clashes during recreates.node_count = (Optional) The initial number of nodes which should exist within this Node Pool. Valid values are between 0 and 1000 (inclusive) for user pools and between 1 and 1000 (inclusive) for system pools and must be a value in the range min_count - max_count .tags = (Optional) A mapping of tags to assign to the resource. At this time there's a bug in the AKS API where Tags for a Node Pool are not stored in the correct case - you may wish to use Terraform's ignore_changes functionality to ignore changes to the casing until this is fixed in the AKS API.vm_size = (Required) The SKU which should be used for the Virtual Machines used in this Node Pool. Changing this forces a new resource to be created. host_group_id = (Optional) The fully qualified resource ID of the Dedicated Host Group to provision virtual machines from. Changing this forces a new resource to be created. capacity_reservation_group_id = (Optional) Specifies the ID of the Capacity Reservation Group where this Node Pool should exist. Changing this forces a new resource to be created. custom_ca_trust_enabled = (Optional) Specifies whether to trust a Custom CA. This requires that the Preview Feature Microsoft.ContainerService/CustomCATrustPreview is enabled and the Resource Provider is re-registered, see the documentation for more information.enable_auto_scaling = (Optional) Whether to enable auto-scaler. 
enable_host_encryption = (Optional) Should the nodes in this Node Pool have host encryption enabled? Changing this forces a new resource to be created. enable_node_public_ip = (Optional) Should each node have a Public IP Address? Changing this forces a new resource to be created. eviction_policy = (Optional) The Eviction Policy which should be used for Virtual Machines within the Virtual Machine Scale Set powering this Node Pool. Possible values are Deallocate and Delete . Changing this forces a new resource to be created. An Eviction Policy can only be configured when priority is set to Spot and will default to Delete unless otherwise specified.gpu_instance = (Optional) Specifies the GPU MIG instance profile for supported GPU VM SKU. The allowed values are MIG1g , MIG2g , MIG3g , MIG4g and MIG7g . Changing this forces a new resource to be created.kubelet_config = optional(object({ cpu_manager_policy = (Optional) Specifies the CPU Manager policy to use. Possible values are none and static , Changing this forces a new resource to be created.cpu_cfs_quota_enabled = (Optional) Is CPU CFS quota enforcement for containers enabled? Changing this forces a new resource to be created. cpu_cfs_quota_period = (Optional) Specifies the CPU CFS quota period value. Changing this forces a new resource to be created. image_gc_high_threshold = (Optional) Specifies the percent of disk usage above which image garbage collection is always run. Must be between 0 and 100 . Changing this forces a new resource to be created.image_gc_low_threshold = (Optional) Specifies the percent of disk usage lower than which image garbage collection is never run. Must be between 0 and 100 . Changing this forces a new resource to be created.topology_manager_policy = (Optional) Specifies the Topology Manager policy to use. Possible values are none , best-effort , restricted or single-numa-node . 
Changing this forces a new resource to be created. allowed_unsafe_sysctls = (Optional) Specifies the allow list of unsafe sysctls command or patterns (ending in *). Changing this forces a new resource to be created. container_log_max_size_mb = (Optional) Specifies the maximum size (e.g. 10MB) of the container log file before it is rotated. Changing this forces a new resource to be created. container_log_max_files = (Optional) Specifies the maximum number of container log files that can be present for a container. Must be at least 2. Changing this forces a new resource to be created. pod_max_pid = (Optional) Specifies the maximum number of processes per pod. Changing this forces a new resource to be created. })) linux_os_config = optional(object({ sysctl_config = optional(object({ fs_aio_max_nr = (Optional) The sysctl setting fs.aio-max-nr. Must be between 65536 and 6553500. Changing this forces a new resource to be created. fs_file_max = (Optional) The sysctl setting fs.file-max. Must be between 8192 and 12000500. Changing this forces a new resource to be created. fs_inotify_max_user_watches = (Optional) The sysctl setting fs.inotify.max_user_watches. Must be between 781250 and 2097152. Changing this forces a new resource to be created. fs_nr_open = (Optional) The sysctl setting fs.nr_open. Must be between 8192 and 20000500. Changing this forces a new resource to be created. kernel_threads_max = (Optional) The sysctl setting kernel.threads-max. Must be between 20 and 513785. Changing this forces a new resource to be created. net_core_netdev_max_backlog = (Optional) The sysctl setting net.core.netdev_max_backlog. Must be between 1000 and 3240000. Changing this forces a new resource to be created. net_core_optmem_max = (Optional) The sysctl setting net.core.optmem_max. Must be between 20480 and 4194304. Changing this forces a new resource to be created. net_core_rmem_default = (Optional) The sysctl setting net.core.rmem_default. Must be between 212992 and 134217728.
Changing this forces a new resource to be created. net_core_rmem_max = (Optional) The sysctl setting net.core.rmem_max. Must be between 212992 and 134217728. Changing this forces a new resource to be created. net_core_somaxconn = (Optional) The sysctl setting net.core.somaxconn. Must be between 4096 and 3240000. Changing this forces a new resource to be created. net_core_wmem_default = (Optional) The sysctl setting net.core.wmem_default. Must be between 212992 and 134217728. Changing this forces a new resource to be created. net_core_wmem_max = (Optional) The sysctl setting net.core.wmem_max. Must be between 212992 and 134217728. Changing this forces a new resource to be created. net_ipv4_ip_local_port_range_min = (Optional) The sysctl setting net.ipv4.ip_local_port_range min value. Must be between 1024 and 60999. Changing this forces a new resource to be created. net_ipv4_ip_local_port_range_max = (Optional) The sysctl setting net.ipv4.ip_local_port_range max value. Must be between 1024 and 60999. Changing this forces a new resource to be created. net_ipv4_neigh_default_gc_thresh1 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh1. Must be between 128 and 80000. Changing this forces a new resource to be created. net_ipv4_neigh_default_gc_thresh2 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh2. Must be between 512 and 90000. Changing this forces a new resource to be created. net_ipv4_neigh_default_gc_thresh3 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh3. Must be between 1024 and 100000. Changing this forces a new resource to be created. net_ipv4_tcp_fin_timeout = (Optional) The sysctl setting net.ipv4.tcp_fin_timeout. Must be between 5 and 120. Changing this forces a new resource to be created. net_ipv4_tcp_keepalive_intvl = (Optional) The sysctl setting net.ipv4.tcp_keepalive_intvl. Must be between 10 and 75.
Changing this forces a new resource to be created. net_ipv4_tcp_keepalive_probes = (Optional) The sysctl setting net.ipv4.tcp_keepalive_probes. Must be between 1 and 15. Changing this forces a new resource to be created. net_ipv4_tcp_keepalive_time = (Optional) The sysctl setting net.ipv4.tcp_keepalive_time. Must be between 30 and 432000. Changing this forces a new resource to be created. net_ipv4_tcp_max_syn_backlog = (Optional) The sysctl setting net.ipv4.tcp_max_syn_backlog. Must be between 128 and 3240000. Changing this forces a new resource to be created. net_ipv4_tcp_max_tw_buckets = (Optional) The sysctl setting net.ipv4.tcp_max_tw_buckets. Must be between 8000 and 1440000. Changing this forces a new resource to be created. net_ipv4_tcp_tw_reuse = (Optional) Is the sysctl setting net.ipv4.tcp_tw_reuse enabled? Changing this forces a new resource to be created. net_netfilter_nf_conntrack_buckets = (Optional) The sysctl setting net.netfilter.nf_conntrack_buckets. Must be between 65536 and 147456. Changing this forces a new resource to be created. net_netfilter_nf_conntrack_max = (Optional) The sysctl setting net.netfilter.nf_conntrack_max. Must be between 131072 and 1048576. Changing this forces a new resource to be created. vm_max_map_count = (Optional) The sysctl setting vm.max_map_count. Must be between 65530 and 262144. Changing this forces a new resource to be created. vm_swappiness = (Optional) The sysctl setting vm.swappiness. Must be between 0 and 100. Changing this forces a new resource to be created. vm_vfs_cache_pressure = (Optional) The sysctl setting vm.vfs_cache_pressure. Must be between 0 and 100. Changing this forces a new resource to be created. })) transparent_huge_page_enabled = (Optional) Specifies the Transparent Huge Page enabled configuration. Possible values are always, madvise and never. Changing this forces a new resource to be created. transparent_huge_page_defrag = (Optional) Specifies the defrag configuration for Transparent Huge Page.
Possible values are always, defer, defer+madvise, madvise and never. Changing this forces a new resource to be created. swap_file_size_mb = (Optional) Specifies the size of the swap file on each node in MB. Changing this forces a new resource to be created. })) fips_enabled = (Optional) Should the nodes in this Node Pool have Federal Information Processing Standard enabled? Changing this forces a new resource to be created. FIPS support is in Public Preview - more information and details on how to opt into the Preview can be found in this article. kubelet_disk_type = (Optional) The type of disk used by kubelet. Possible values are OS and Temporary. max_count = (Optional) The maximum number of nodes which should exist within this Node Pool. Valid values are between 0 and 1000 and must be greater than or equal to min_count. max_pods = (Optional) The maximum number of pods that can run on each agent node. Changing this forces a new resource to be created. message_of_the_day = (Optional) A base64-encoded string which will be written to /etc/motd after decoding. This allows customization of the message of the day for Linux nodes. It cannot be specified for Windows nodes and must be a static string (i.e. will be printed raw and not executed as a script). Changing this forces a new resource to be created. mode = (Optional) Should this Node Pool be used for System or User resources? Possible values are System and User. Defaults to User. min_count = (Optional) The minimum number of nodes which should exist within this Node Pool. Valid values are between 0 and 1000 and must be less than or equal to max_count. node_network_profile = optional(object({ node_public_ip_tags = (Optional) Specifies a mapping of tags to the instance-level public IPs. Changing this forces a new resource to be created. })) node_labels = (Optional) A map of Kubernetes labels which should be applied to nodes in this Node Pool.
node_public_ip_prefix_id = (Optional) Resource ID for the Public IP Addresses Prefix for the nodes in this Node Pool. enable_node_public_ip should be true. Changing this forces a new resource to be created. node_taints = (Optional) A list of Kubernetes taints which should be applied to nodes in the agent pool (e.g. key=value:NoSchedule). Changing this forces a new resource to be created. orchestrator_version = (Optional) Version of Kubernetes used for the Agents. If not specified, the latest recommended version will be used at provisioning time (but won't auto-upgrade). AKS does not require an exact patch version to be specified; minor version aliases such as 1.22 are also supported - the minor version's latest GA patch is automatically chosen in that case. More details can be found in the documentation. This version must be supported by the Kubernetes Cluster - as such the version of Kubernetes used on the Cluster/Control Plane may need to be upgraded first. os_disk_size_gb = (Optional) The Agent Operating System disk size in GB. Changing this forces a new resource to be created. os_disk_type = (Optional) The type of disk which should be used for the Operating System. Possible values are Ephemeral and Managed. Defaults to Managed. Changing this forces a new resource to be created. os_sku = (Optional) Specifies the OS SKU used by the agent pool. Possible values include: Ubuntu, CBLMariner, Mariner, Windows2019, Windows2022. If not specified, the default is Ubuntu if OSType=Linux or Windows2019 if OSType=Windows. The default Windows OSSKU will be changed to Windows2022 after Windows2019 is deprecated. Changing this forces a new resource to be created. os_type = (Optional) The Operating System which should be used for this Node Pool. Changing this forces a new resource to be created. Possible values are Linux and Windows. Defaults to Linux. pod_subnet_id = (Optional) The ID of the Subnet where the pods in the Node Pool should exist.
Changing this forces a new resource to be created. priority = (Optional) The Priority for Virtual Machines within the Virtual Machine Scale Set that powers this Node Pool. Possible values are Regular and Spot. Defaults to Regular. Changing this forces a new resource to be created. proximity_placement_group_id = (Optional) The ID of the Proximity Placement Group where the Virtual Machine Scale Set that powers this Node Pool will be placed. Changing this forces a new resource to be created. When setting priority to Spot you must configure an eviction_policy, spot_max_price and add the applicable node_labels and node_taints as per the Azure Documentation. spot_max_price = (Optional) The maximum price you're willing to pay in USD per Virtual Machine. Valid values are -1 (the current on-demand price for a Virtual Machine) or a positive value with up to five decimal places. Changing this forces a new resource to be created. This field can only be configured when priority is set to Spot. scale_down_mode = (Optional) Specifies how the node pool should deal with scaled-down nodes. Allowed values are Delete and Deallocate. Defaults to Delete. snapshot_id = (Optional) The ID of the Snapshot which should be used to create this Node Pool. Changing this forces a new resource to be created. ultra_ssd_enabled = (Optional) Used to specify whether UltraSSD is enabled in the Node Pool. Defaults to false. See the documentation for more information. Changing this forces a new resource to be created. vnet_subnet_id = (Optional) The ID of the Subnet where this Node Pool should exist. Changing this forces a new resource to be created. A route table must be configured on this Subnet. upgrade_settings = optional(object({ drain_timeout_in_minutes = number node_soak_duration_in_minutes = number max_surge = string })) windows_profile = optional(object({ outbound_nat_enabled = optional(bool, true) })) workload_runtime = (Optional) Used to specify the workload runtime.
Allowed values are OCIContainer and WasmWasi. WebAssembly System Interface node pools are in Public Preview - more information and details on how to opt into the preview can be found in this article. zones = (Optional) Specifies a list of Availability Zones in which this Kubernetes Cluster Node Pool should be located. Changing this forces a new Kubernetes Cluster Node Pool to be created. create_before_destroy = (Optional) Create a new node pool before destroying the old one when Terraform must update an argument that cannot be updated in-place. Setting this argument to true will add a random suffix to the pool's name to avoid conflicts. Defaults to true. })) | map(object({ | {} | no |
open_service_mesh_enabled | Is Open Service Mesh enabled? For more details, please visit Open Service Mesh for AKS. | bool | null | no |
key_vault_secrets_provider_enabled | (Optional) Whether to use the Azure Key Vault Provider for Secrets Store CSI Driver in an AKS cluster. If enabled, it creates an MSI for key vault, assigns it to the VMSS identity for key vault and assigns the necessary permissions to the key vault. For more details: https://docs.microsoft.com/en-us/azure/aks/csi-secrets-store-driver | bool | false | no |
secret_rotation_enabled | Is secret rotation enabled? This variable is only used when key_vault_secrets_provider_enabled is true and defaults to false | bool | false | no |
secret_rotation_interval | The interval to poll for secret rotation. This attribute is only set when secret_rotation_enabled is true and defaults to 2m | string | "2m" | no |
create_key_vault | Create a new Key Vault to be associated with the AKS cluster | bool | false | no |
key_vault_name | The name of the key vault to override the naming module | string | null | no |
key_vault_role_definition | Permission assigned to the key vault MSI on the key vault. Default is Key Vault Administrator | string | "Key Vault Administrator" | no |
additional_key_vault_ids | IDs of the additional key vaults to be associated with the AKS cluster. The key vault MSI will be assigned the role defined in key_vault_role_definition on these key vaults. | list(string) | [] | no |
enable_rbac_authorization | Enable Azure Role-Based Access Control authorization on the Key Vault | bool | false | no |
sku_tier | The SKU Tier that should be used for this Kubernetes Cluster. Possible values are Free and Standard | string | "Free" | no |
node_resource_group | The auto-generated Resource Group which contains the resources for this Managed Kubernetes Cluster. Changing this forces a new resource to be created. | string | null | no |
brown_field_application_gateway_for_ingress | Definition of brown_field. * id - (Required) The ID of the Application Gateway to be used as cluster ingress. * subnet_id - (Required) The ID of the Subnet which the Application Gateway is connected to. Must be set when create_role_assignments is true. | object({ | null | no |
green_field_application_gateway_for_ingress | Definition of green_field. * name - (Optional) The name of the Application Gateway to be used or created in the Nodepool Resource Group, which in turn will be integrated with the ingress controller of this Kubernetes Cluster. * subnet_cidr - (Optional) The subnet CIDR to be used to create an Application Gateway, which in turn will be integrated with the ingress controller of this Kubernetes Cluster. * subnet_id - (Optional) The ID of the subnet on which to create an Application Gateway, which in turn will be integrated with the ingress controller of this Kubernetes Cluster. | object({ | null | no |
web_app_routing | object({ dns_zone_id = "(Required) Specifies the ID of the DNS Zone in which DNS entries are created for applications deployed to the cluster when Web App Routing is enabled." }) | object({ | null | no |
identity_ids | (Optional) Specifies a list of User Assigned Managed Identity IDs to be assigned to this Kubernetes Cluster. | list(string) | [] | no |
identity_type | (Optional) The type of identity used for the managed cluster. Conflicts with client_id and client_secret. Possible values are SystemAssigned and UserAssigned. If UserAssigned is set, identity_ids must be set as well. | string | "SystemAssigned" | no |
client_id | (Optional) The Client ID (appId) for the Service Principal used for the AKS deployment | string | "" | no |
client_secret | (Optional) The Client Secret (password) for the Service Principal used for the AKS deployment | string | "" | no |
monitor_metrics | (Optional) Specifies a Prometheus add-on profile for the Kubernetes Cluster. object({ annotations_allowed = "(Optional) Specifies a comma-separated list of Kubernetes annotation keys that will be used in the resource's labels metric." labels_allowed = "(Optional) Specifies a comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric." }) | object({ | null | no |
container_registry_ids | List of container registry IDs to associate with AKS. This module will assign the AcrPull role to AKS for these registries | list(string) | [] | no |
public_dns_zone_name | Name of a public DNS zone to create with the Kubernetes cluster | string | null | no |
kv_soft_delete_retention_days | Number of soft-delete retention days for the key vault | number | 7 | no |
kv_sku | SKU for the key vault - standard or premium | string | "standard" | no |
kv_access_policies | Additional access policies for the vault besides the current user, which is added by default | map(object({ | {} | no |
certificates | List of certificates to be imported. The pfx files should be present in the root of the module (path.root) and their names denoted as certificate_name | map(object({ | {} | no |
secrets | List of secrets (name and value) | map(string) | {} | no |
keys | List of keys to be created in the key vault. The name of the key is the key of the map | map(object({ | {} | no |
disable_bgp_route_propagation | Disable BGP route propagation on the routing table that AKS manages. | bool | false | no |
create_application_insights | If true, create a new Application Insights resource to be associated with the AKS cluster | bool | false | no |
application_insights | Details for the Application Insights resource to be associated with the AKS cluster. Required only when create_application_insights=true | object({ | {} | no |
create_monitor_private_link_scope | If true, create a new Private Link Scope for Azure Monitor. NOTE: This will cause all Azure Monitor / Log Analytics traffic to go through private link. | bool | false | no |
brown_field_monitor_private_link_scope_id | An existing monitor private link scope to associate with Azure Monitor resources. Usable only when create_monitor_private_link_scope is false. | string | null | no |
monitor_private_link_scope_subnet_id | The ID of the subnet to associate with the Azure Monitor private link scope | string | null | no |
monitor_private_link_scope_dns_zone_suffixes | The DNS zone suffixes for the private link scope | set(string) | [ | no |
enable_prometheus_monitoring | Deploy Prometheus monitoring resources with the AKS cluster | bool | false | no |
brown_field_prometheus_monitor_workspace_id | The ID of an existing Azure Monitor workspace to use for Prometheus monitoring | string | null | no |
prometheus_workspace_public_access_enabled | Enable public access to the Azure Monitor workspace for Prometheus | bool | true | no |
enable_prometheus_monitoring_private_endpoint | Enable private endpoint for Prometheus monitoring | bool | false | no |
prometheus_monitoring_private_endpoint_subnet_id | The ID of a subnet in which to create a private endpoint for Prometheus monitoring | string | null | no |
prometheus_enable_default_rule_groups | Enable default recording rules for Prometheus | bool | true | no |
prometheus_default_rule_group_naming | Resource names for the default recording rules | map(string) | { | no |
prometheus_default_rule_group_interval | Interval to run default recording rules in ISO 8601 format (between PT1M and PT15M) | string | "PT1M" | no |
prometheus_rule_groups | map(object({ enabled = Whether or not the rule group is enabled description = Description of the rule group interval = Interval to run the rule group in ISO 8601 format (between PT1M and PT15M) recording_rules = list(object({ name = Name of the recording rule enabled = Whether or not the recording rule is enabled expression = PromQL expression for the time series value labels = Labels to add to the time series })) alert_rules = list(object({ name = Name of the alerting rule action = optional(object({ action_group_id = ID of the action group to send alerts to })) enabled = Whether or not the alert rule is enabled expression = PromQL expression to evaluate for = Amount of time the alert must be active before firing, represented in ISO 8601 duration format (i.e. PT5M) labels = Labels to add to the alerts fired by this rule alert_resolution = optional(object({ auto_resolved = Whether or not to auto-resolve the alert after the condition is no longer true time_to_resolve = Amount of time to wait before auto-resolving the alert, represented in ISO 8601 duration format (i.e. PT5M) })) severity = Severity of the alert, between 0 and 4 annotations = Annotations to add to the alerts fired by this rule })) | map(object({ | {} | no |
tags | A map of custom tags to be attached to this module's resources | map(string) | {} | no |
Name | Description |
---|---|
kube_config_raw | The azurerm_kubernetes_cluster's kube_config_raw argument. Raw Kubernetes config to be used by kubectl and other compatible tools. |
kube_admin_config_raw | The azurerm_kubernetes_cluster's kube_admin_config_raw argument. Raw Kubernetes config for the admin account to be used by kubectl and other compatible tools. This is only available when Role Based Access Control with Azure Active Directory is enabled and local accounts are enabled. |
admin_host | The host in the azurerm_kubernetes_cluster 's kube_admin_config block. The Kubernetes cluster server host. |
admin_password | The password in the azurerm_kubernetes_cluster 's kube_admin_config block. A password or token used to authenticate to the Kubernetes cluster. |
admin_username | The username in the azurerm_kubernetes_cluster 's kube_admin_config block. A username used to authenticate to the Kubernetes cluster. |
azure_policy_enabled | The azurerm_kubernetes_cluster 's azure_policy_enabled argument. Should the Azure Policy Add-On be enabled? For more details please visit Understand Azure Policy for Azure Kubernetes Service |
azurerm_log_analytics_workspace_id | The id of the created Log Analytics workspace |
azurerm_log_analytics_workspace_name | The name of the created Log Analytics workspace |
azurerm_log_analytics_workspace_primary_shared_key | Specifies the workspace key of the log analytics workspace |
client_certificate | The client_certificate in the azurerm_kubernetes_cluster 's kube_config block. Base64 encoded public certificate used by clients to authenticate to the Kubernetes cluster. |
client_key | The client_key in the azurerm_kubernetes_cluster 's kube_config block. Base64 encoded private key used by clients to authenticate to the Kubernetes cluster. |
cluster_ca_certificate | The cluster_ca_certificate in the azurerm_kubernetes_cluster 's kube_config block. Base64 encoded public CA certificate used as the root of trust for the Kubernetes cluster. |
cluster_fqdn | The FQDN of the Azure Kubernetes Managed Cluster. |
cluster_portal_fqdn | The FQDN for the Azure Portal resources when private link has been enabled, which is only resolvable inside the Virtual Network used by the Kubernetes Cluster. |
cluster_private_fqdn | The FQDN for the Kubernetes Cluster when private link has been enabled, which is only resolvable inside the Virtual Network used by the Kubernetes Cluster. |
generated_cluster_private_ssh_key | The cluster will use this generated private key as its SSH key when var.public_ssh_key is empty or null. Private key data in PEM (RFC 1421) format. |
generated_cluster_public_ssh_key | The cluster will use this generated public key as its SSH key when var.public_ssh_key is empty or null. The fingerprint of the public key data in OpenSSH MD5 hash format, e.g. aa:bb:cc:.... Only available if the selected private key format is compatible, similarly to public_key_openssh and the ECDSA P224 limitations. |
host | The host in the azurerm_kubernetes_cluster 's kube_config block. The Kubernetes cluster server host. |
cluster_name | Name of the AKS cluster |
cluster_id | ID of the AKS cluster |
resource_group_name | Name of the Resource Group for the AKS Cluster. The node resource group is separate and is created automatically by Azure |
application_gateway_id | Application Gateway ID, when an Application Gateway is provisioned for ingress |
key_vault_secrets_provider | The azurerm_kubernetes_cluster 's key_vault_secrets_provider block. |
key_vault_secrets_provider_enabled | Whether the azurerm_kubernetes_cluster has the key_vault_secrets_provider block enabled |
cluster_identity | The azurerm_kubernetes_cluster 's identity block. |
kubelet_identity | The azurerm_kubernetes_cluster 's kubelet_identity block. |
key_vault_id | Custom Key Vault ID |
node_resource_group | The auto-generated Resource Group which contains the resources for this Managed Kubernetes Cluster. |
location | The azurerm_kubernetes_cluster 's location argument. (Required) The location where the Managed Kubernetes Cluster should be created. |
network_profile | The azurerm_kubernetes_cluster 's network_profile block |
password | The password in the azurerm_kubernetes_cluster 's kube_config block. A password or token used to authenticate to the Kubernetes cluster. |
username | The username in the azurerm_kubernetes_cluster 's kube_config block. A username used to authenticate to the Kubernetes cluster. |
oidc_issuer_url | The OIDC issuer URL of the AKS cluster. |
private_cluster_dns_zone_id | ID of the private DNS zone for the private cluster. Created only for private cluster |
private_cluster_dns_zone_name | Name of the private DNS zone for the private cluster. Created only for private cluster |
public_dns_zone_id | ID of the public DNS zone created with the cluster |
public_dns_zone_name_servers | Name servers of the public DNS zone created with the cluster |
user_assigned_msi_object_id | The object ID of the user assigned managed identity. |
user_assigned_msi_client_id | The client ID of the user assigned managed identity. |
additional_vnet_links | The additional VNet links on the DNS zone of private AKS cluster. |
application_insights_id | Resource ID of the app insights instance |
monitor_private_link_scope_id | Resource ID of the monitor private link scope |
monitor_private_link_scope_dns_zone_ids | Map of Resource IDs of the monitor private link scope DNS zones |
monitor_private_link_scope_private_endpoint_id | Resource ID of the monitor private link scope private endpoint |
prometheus_workspace_id | Resource ID of the Prometheus Monitor workspace |
prometheus_workspace_dns_zone_id | Resource ID of the Prometheus Monitor workspace DNS zone |
prometheus_workspace_private_endpoint_id | Resource ID of the Prometheus Monitor workspace private endpoint |
prometheus_data_collection_endpoint_id | Resource ID of the Prometheus Monitor data collection endpoint |
prometheus_data_collection_rule_id | Resource ID of the Prometheus Monitor data collection rule |
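The kube_config outputs above can be fed straight into the Kubernetes provider. A hedged sketch, assuming the module is instantiated as `module "aks"` (only output names from the table above are used):

```hcl
# Configure the Kubernetes provider from this module's outputs.
# The certificate and key outputs are base64-encoded, hence base64decode.
provider "kubernetes" {
  host                   = module.aks.host
  username               = module.aks.username
  password               = module.aks.password
  client_certificate     = base64decode(module.aks.client_certificate)
  client_key             = base64decode(module.aks.client_key)
  cluster_ca_certificate = base64decode(module.aks.cluster_ca_certificate)
}
```

For a private cluster, Terraform must run from a network that can resolve and reach the cluster_private_fqdn.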